Conformal Invariance for Non-Relativistic Field Theory

Thomas Mehen, Iain W. Stewart, and Mark B. Wise
California Institute of Technology, Pasadena, CA 91125
Department of Physics, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA 92099

Momentum space Ward identities are derived for the amputated n-point Green’s functions in dimensional non-relativistic conformal field theory. For and the implications for scattering amplitudes (i.e. on-shell amputated Green’s functions) are considered. Any scale invariant 2-to-2 scattering amplitude is also conformally invariant. However, conformal invariance imposes constraints on off-shell Green’s functions and the three particle scattering amplitude which are not automatically satisfied if they are scale invariant. As an explicit example of a conformally invariant theory we consider non-relativistic particles in the infinite scattering length limit.

preprint: CALT-68-2242, UCSD/PTH 99-14

Poincaré invariant theories that are scale invariant usually have a larger symmetry group called the conformal group. (Exceptions are known to exist; however, these theories suffer from pathologies, such as non-unitarity. A detailed discussion of scale and conformal invariance in relativistic theories can be found in Ref. [1].) A similar phenomenon occurs for 3+1 dimensional non-relativistic systems. These are invariant under the extended Galilean group, which consists of 10 generators: translations (4), rotations (3), and Galilean boosts (3). The largest space-time symmetry group of the free Schrödinger equation is called the Schrödinger or non-relativistic conformal group [2]. This group has two additional generators corresponding to a scale transformation, and a one-dimensional special conformal transformation, sometimes called an “expansion”. The infinitesimal Galilean boost, scale and conformal transformations are where , and are the corresponding infinitesimal parameters. (The finite scale transformation is , , and the finite conformal transformation is , .) In this letter we explore the implications of non-relativistic conformal invariance for dimensional physical systems. In relativistic theories, conformal invariance can be used to constrain the functional form of n-point correlation functions [1]; however, on-shell scattering amplitudes are typically ill-defined because of infrared divergences associated with massless particles. In non-relativistic theories scattering amplitudes are well defined even in the conformal limit. We show how conformal invariance can be used to gain information about scattering amplitudes by deriving Ward identities for the amputated momentum space Green’s functions. While the off-shell Green’s functions can be changed by field redefinitions, the scattering amplitudes (on-shell Green’s functions) are physical quantities and are therefore unchanged. We find that any 2-to-2 (identical particle) scattering amplitude that satisfies the scale Ward identity automatically satisfies the conformal Ward identity. However, this is not the case for the corresponding off-shell Green’s function or for the 3-to-3 scattering amplitude. We construct a field theory that has a four point function which obeys the scale and conformal Ward identities and conjecture that the higher point functions in this theory also obey these Ward identities. On-shell it gives S-wave scattering with an infinite scattering length.
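For orientation, the transformations referred to above can be written out explicitly. In a common convention for the Schrödinger group in 3+1 dimensions (the parameter names $\vec{v}$, $\lambda$, and $c$ below are illustrative choices, not necessarily those of this letter), the infinitesimal transformations and their finite versions read

$$\text{boost:}\ \ \delta\vec{x}=-\vec{v}\,t,\ \ \delta t=0,\qquad \text{scale:}\ \ \delta\vec{x}=\lambda\,\vec{x},\ \ \delta t=2\lambda\,t,\qquad \text{expansion:}\ \ \delta\vec{x}=-c\,t\,\vec{x},\ \ \delta t=-c\,t^{2},$$

$$\text{finite scale:}\ \ \vec{x}\to e^{\lambda}\vec{x},\ \ t\to e^{2\lambda}t,\qquad \text{finite expansion:}\ \ \vec{x}\to \frac{\vec{x}}{1+c\,t},\ \ t\to \frac{t}{1+c\,t}.$$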
For the interaction of two nucleons, the scattering lengths in the and channels are large ( and ) compared to the typical length scales in nuclear physics. In the limit that these scattering lengths go to infinity (and higher terms in the effective range expansion are neglected) we show that the four point Green’s function obeys the scale and conformal Ward identities. Thus, two body nuclear systems at low energies are approximately scale and conformal invariant. It is likely that in some spin-isospin channels the higher point functions will also obey these Ward identities. Whether this conformal invariance can lead to new predictions for many body nuclear physics is presently unclear, but seems worthy of further study. The action for a free non-relativistic field is where is the mass of the particle corresponding to the field . Under an infinitesimal Galilean transformation or equivalently The action in Eq. (2) is invariant under the infinitesimal scale transformation in Eq. (1) with or equivalently and under the infinitesimal conformal transformation provided Now consider adding interactions that preserve these invariances (an explicit example will be considered later). The position space Green’s functions for the interacting theory, , are defined by the following (in non-relativistic theories particle number is conserved, so there must be the same number of fields as conjugate fields), where is the vacuum of the interacting theory and is assumed to be invariant under the Schrödinger group. Under the infinitesimal transformations in Eqs. (3-5) where is the differential operator for coordinates . Invariance under Galilean boosts, scale, and conformal symmetry implies that The momentum space Green’s functions are the Fourier transform of the position space Green’s functions where is for incoming particles (subscripts ) and for outgoing particles (subscripts ). The delta functions in the unnumbered equation above arise due to translational invariance. Using Eq. (8) with and it is straightforward to show that invariance under Galilean boosts, scale transformations, and conformal transformations implies the Ward identities In deriving Eq. (10) we have integrated over the delta functions in that equation so that The S-matrix elements are related to the amputated Green’s functions defined by Eq. (13) (neglecting relativistic corrections, Eq. (13) is exact because adding interactions to Eq. (2) does not affect the two point function, since there is no pair creation in the non-relativistic theory), where and are given by Eq. (12). is the connected part of and also satisfies Eq. (10). Applying the Galilean boost and scale Ward identities in Eq. (10) to Eq. (13) gives where and Applying the conformal Ward identity in Eq. (10) to Eq. (13) gives Therefore, amputated Green’s functions satisfying Eq. (14) also satisfy The leading term in the effective field theory for non-relativistic nucleon-nucleon scattering corresponds to a scale invariant theory in the limit that the S-wave scattering lengths go to infinity (see, e.g., Ref. [3]). As we will see below, this limit corresponds to a fixed point of the renormalization group. Since in nature the S-wave scattering lengths are very large, it is the unusual scaling of operators at this non-trivial fixed point [4] that controls their importance in this effective field theory [5, 6]. Motivated by this we add to Eq. (2) the interaction where is now a two component spin-1/2 fermion field and .
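Since the explicit expressions are not reproduced above, the following sketch records a conventional realization of this setup in the notation common in the pionless effective field theory literature; the symbols $m$, $\psi$, $C_0$, $\mu$, $a$, and $p$ are our illustrative choices rather than the letter's own, and signs and normalizations vary between references:

$$S_{0}=\int dt\,d^{3}x\ \psi^{\dagger}\Big(i\partial_{t}+\frac{\nabla^{2}}{2m}\Big)\psi,\qquad \mathcal{L}_{\rm int}=-\frac{C_{0}}{2}\,\big(\psi^{\dagger}\psi\big)^{2}.$$

Resumming the bubble diagrams with the PDS-type running coupling $C_{0}(\mu)=\frac{4\pi}{m}\,\frac{1}{1/a-\mu}$ gives, for on-shell kinematics in the center of mass frame with momentum $p$,

$$\mathcal{A}(p)=\frac{-C_{0}(\mu)}{1+\frac{m\,C_{0}(\mu)}{4\pi}\,(\mu+ip)}=-\frac{4\pi}{m}\,\frac{1}{1/a+ip}\ \xrightarrow{\ a\to\infty\ }\ \frac{4\pi i}{m\,p},$$

and the rescaled coupling $\hat{C}_{0}\equiv\frac{m\mu}{4\pi}\,C_{0}(\mu)$ obeys $\mu\,\frac{d\hat{C}_{0}}{d\mu}=\hat{C}_{0}\,(1+\hat{C}_{0})$, with fixed points $\hat{C}_{0}=0$ (the free theory) and $\hat{C}_{0}=-1$, the latter reached when $a\to\infty$. This is the renormalization group structure referred to in the following paragraphs.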
Higher body non-derivative interaction terms are forbidden by Fermi statistics. The interaction in Eq. (19) only mediates spin singlet S-wave scattering. The scattering amplitude arises from the sum of bubble Feynman diagrams shown in Fig. 1. (Figure 1: Terms contributing to the scattering amplitude from the interaction in Eq. (19).) The loop integration associated with a bubble has a linear ultraviolet divergence and consequently the values of the coefficients depend on the subtraction scheme adopted. In minimal subtraction, if where is the center of mass momentum and is the scattering length, then successive terms in the perturbative series represented by Fig. 1 get larger and larger. Subtraction schemes have been introduced where each diagram in Fig. 1 is of the same order as the sum. One such scheme is PDS [5], which subtracts not only poles at , but also the poles at (which correspond to linear divergences). Another such scheme is the OS momentum subtraction scheme [4, 7]. In these schemes the coefficients are subtraction point dependent, . Calculating the bubble sum in PDS or OS gives Note that Eq. (20) holds in any frame and we have not imposed the condition that the external particles be on-shell. It is easy to see that the limit corresponds to a nontrivial ultraviolet fixed point in this scheme. If we define a rescaled coupling , then The limit corresponds to the fixed point . At a fixed point one expects the theory to be scale invariant. In fact, it can be easily verified that in the limit satisfies both the scale and conformal Ward identities in Eqs. (14) and (18). In the case of the conformal Ward identity gives non-trivial information about the off-shell amplitude. For instance the amplitude is scale and Galilean invariant but not conformally invariant. The expressions for in Eqs. (23) and (24) agree on-shell, where . The interaction in Eq. (19) also induces non-trivial amputated Green’s functions, , for . (For see Fig. 2.) It is believed that non-perturbatively the higher point functions are finite and we speculate that with at its critical fixed point the action defines a non-relativistic conformal field theory. (Figure 2: Terms contributing to the higher point amputated Green’s functions from the interaction in Eq. (19). The filled circle denotes the sum of diagrams in Fig. 1.) We will now derive scale and conformal Ward identities for the on-shell amplitudes since these are the physical quantities of interest. Consider the four point function for a scalar field. (Eqs. (25) and following are valid for fermions, but when imposing rotational invariance we will assume the particles are scalars. For fermions the amplitude has spin singlet and spin triplet parts, and the expressions below are valid for the spin singlet component.) After imposing translation invariance it is a function of 12 variables The Ward identity is solved by the function where Therefore, using the Galilean boost invariance gives three constraints on leaving 9 variables. For this function the scale and conformal identities are Three more constraints are given by rotation invariance leaving a function of 6 variables, , where In terms of these variables we have On-shell the four point function has an additional four constraints and , where the last condition follows because . The operator can be defined consistently on-shell since all derivatives with respect to and are multiplied by coefficients which vanish in the on-shell limit.
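Before taking that limit, the variable counting used in the passage above can be summarized as a short piece of bookkeeping (the labels $p$ and $\theta$ for the surviving on-shell variables are ours, introduced only for illustration):

$$4\ \text{particles}\times(1\ \text{energy}+3\ \text{momentum components})=16
\ \xrightarrow{-4\ \text{(conservation)}}\ 12
\ \xrightarrow{-3\ \text{(boosts)}}\ 9
\ \xrightarrow{-3\ \text{(rotations)}}\ 6
\ \xrightarrow{-4\ \text{(on-shell)}}\ 2,$$

which in the center of mass frame may be taken to be the momentum $p$ and the scattering angle $\theta$; imposing scale invariance then fixes the dependence on $p$, so that only an arbitrary function of $\theta$ remains, consistent with the general on-shell form discussed below.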
In taking the on-shell limit we are assuming that derivatives of with respect to the off-shell parameters are not singular. This is true of the explicit example in Eq. (23) as long as the momentum of the nucleons in the center of mass frame is nonzero. Finally, from the identities above we see that on-shell a scale invariant is automatically conformally invariant. Solving , the most general scattering amplitude consistent with Schrödinger group invariance is where is an arbitrary function, and is the scattering angle in the center of mass frame. Conformal invariance does not restrict the angular dependence of the scattering amplitude. Additional physical criteria can be used to provide further constraints. The condition that the S-wave scattering length goes to infinity corresponds to a fine tuning that produces a bound state at threshold. Assuming that this is the only fine tuning and that the interactions are short range the threshold behavior of the phase shift in the th partial wave is for . It is easy to see that the only partial wave obtained from Eq. (30) with acceptable threshold behavior is the S-wave, so can be replaced by a constant. In the limit the interaction in Eq. (19) provides an explicit example of a scale invariant theory which has this behavior. In the case of the 3-to-3 scattering amplitude, conformal invariance will provide a new constraint independent from that of scale invariance. We proceed exactly as in the case of the 2-to-2 scattering amplitude. After imposing energy and momentum conservation the 6 point function has 20 coordinates Using the Galilean boost invariance leaves 17 coordinates where and In terms of these variables Next consider imposing rotational invariance. For simplicity we specialize to the case of a scalar field. Rotational invariance implies that should be a function of 14 variables. We have chosen The coordinates and vanish on-shell since . For the function in Eq. (35) the scale and conformal derivatives are The ellipses are terms with factors of or and therefore vanish on-shell, and . It is possible to express in terms of . For scale invariant amputated Green’s functions the conformal operator can be defined on-shell because terms that involve derivatives with respect to the off-shell parameters ( and ) have coefficients which vanish on-shell. Even after demanding scale invariance the conformal Ward identity still imposes a nontrivial constraint on the amplitude. It is easy to find examples of boost and scale invariant functions which do not satisfy . Due to the complexity of this constraint we have not attempted to find its general solution. The effective field theory for the strong interactions of nucleons is more complicated than the toy model given by , because nucleons have isospin degrees of freedom. The inclusion of internal degrees of freedom does not change the Ward identities that correlation functions must satisfy to be Schrödinger invariant. However, isospin allows additional contact interactions to exist. There are two four-nucleon operators ( and ) and one six-nucleon operator that can be formed without using derivatives. With infinite spin singlet and spin triplet scattering lengths the four point functions are identical to Eq. (23) at leading order, and are therefore invariant under the Schrödinger group. For nucleons, the six point functions can involve states with total spin 1/2 and 3/2.
In the spin 1/2 channel a three body contact interaction with no derivatives exists and is needed to renormalize the integral equation for three body scattering [8]. This three body contact operator is expected to introduce a new scale and therefore break scale and conformal invariance. In the spin 3/2 channel [9], no three body operator is needed and this amplitude is expected to respect the constraints from scale and conformal invariance. Explicit verification of this would be interesting. In this letter we derived Ward identities for amputated momentum space Green’s functions that follow from invariance under the Schrödinger group. We also examined implications of these constraints for 2-to-2 and 3-to-3 on-shell scattering amplitudes. Motivated by recent developments in nuclear theory, we considered a non-relativistic theory in the limit of infinite scattering length and found that it gives rise to a four point function satisfying the Ward identities that follow from Schrödinger invariance. We would like to thank John Preskill for a conversation which led to this paper, and Jonathan Engel for a useful comment. This work was supported in part by the Department of Energy under grant numbers DE-FG03-92-ER 40701 and DOE-FG03-97ER40546. T.M. was also supported by a John A. McCone Fellowship.
Many-Worlds Interpretation of Quantum Mechanics The Many-Worlds Interpretation (MWI) is an approach to quantum mechanics according to which, in addition to the world we are aware of directly, there are many other similar worlds which exist in parallel at the same space and time. The existence of the other worlds makes it possible to remove randomness and action at a distance from quantum theory and thus from all physics. 1. Introduction The fundamental idea of the MWI, going back to Everett 1957, is that there are myriads of worlds in the Universe in addition to the world we are aware of. In particular, every time a quantum experiment with different outcomes with non-zero probability is performed, all outcomes are obtained, each in a different world, even if we are aware only of the world with the outcome we have seen. In fact, quantum experiments take place everywhere and very often, not just in physics laboratories: even the irregular blinking of an old fluorescent bulb is a quantum experiment. There are numerous variations and reinterpretations of the original Everett proposal, most of which are briefly discussed in the entry on Everett's relative state formulation of quantum mechanics. Here, a particular approach to the MWI (which differs from the popular "actual splitting worlds" approach in De Witt 1970) will be presented in detail, followed by a discussion relevant for many variants of the MWI. The MWI consists of two parts: (i) a mathematical theory which yields the evolution in time of the quantum state of the (single) Universe, and (ii) a prescription which sets up a correspondence between the quantum state of the Universe and our experiences. Part (i) is essentially summarized by the Schrödinger equation or its relativistic generalization. It is a rigorous mathematical theory and is not problematic philosophically. Part (ii) involves "our experiences" which do not have a rigorous definition. An additional difficulty in setting up (ii) follows from the fact that human languages were developed at a time when people did not suspect the existence of parallel worlds. This, however, is only a semantic problem.[1] 2. Definitions 2.1 What is "A World"? A world is the totality of (macroscopic) objects: stars, cities, people, grains of sand, etc. in a definite classically described state. This definition is based on the common attitude to the concept of world shared by human beings. Another concept (considered in some approaches as the basic one, e.g., in Saunders 1995) is a relative, or perspectival, world defined for every physical system and every one of its states (provided it is a state of non-zero probability): I will call it a centered world. This concept is useful when a world is centered on a perceptual state of a sentient being. In this world, all objects which the sentient being perceives have definite states, but objects that are not under her observation might be in a superposition of different (classical) states. The advantage of a centered world is that it does not split due to a quantum phenomenon in a distant galaxy, while the advantage of our definition is that we can consider a world without specifying a center, and in particular our usual language is just as useful for describing worlds at times when there were no sentient beings. The concept of "world" in the MWI belongs to part (ii) of the theory, i.e., it is not a rigorously defined mathematical entity, but a term defined by us (sentient beings) in describing our experience.
When we refer to the "definite classically described state" of, say, a cat, it means that the position and the state (alive, dead, smiling, etc.) of the cat is maximally specified according to our ability to distinguish between the alternatives and that this specification corresponds to a classical picture, e.g., no superpositions of dead and alive cats are allowed in a single world.[2] The concept of a world in the MWI is based on the layman's conception of a world; however, several features are different: Obviously, the definition of the world as everything that exists does not hold in the MWI. "Everything that exists" is the Universe, and there is only one Universe. The Universe incorporates many worlds similar to the one the layman is familiar with. Nowadays, the layman knows that objects are made of elementary microscopic particles, and he believes that, consequently, a more precise definition of the world is the totality of all these particles. In the MWI this naive step is incorrect. Microscopic particles might be in a superposition, while objects within a world (as defined in the MWI) cannot be in a superposition. The connection between macroscopic objects defined according to our experience, and microscopic objects defined in a physical theory that aims to explain our experience, is more subtle, and will be discussed further below. The definition of a world in the MWI involves only concepts related to our experience. A layman believes that our present world has a unique past and future. According to the MWI, a world defined at some moment of time corresponds to a unique world at a time in the past, but to a multitude of worlds at a time in the future. 2.2 Who am "I"? "I" am an object, such as Earth, cat, etc. "I" is defined at a particular time by a complete (classical) description of the state of my body and of my brain. "I" and "Lev" do not name the same things (even though my name is Lev). At the present moment there are many different "Lev"s in different worlds (not more than one in each world), but it is meaningless to say that now there is another "I". I have a particular, well defined past: I correspond to a particular "Lev" in 2002, but I do not have a well defined future: I correspond to a multitude of "Lev"s in 2010. In the framework of the MWI it is meaningless to ask: Which Lev in 2010 will I be? I will correspond to them all. Every time I perform a quantum experiment (with several possible results) it only seems to me that I obtain a single definite result. Indeed, Lev who obtains this particular result thinks this way. However, this Lev cannot be identified as the only Lev after the experiment. Lev before the experiment corresponds to all "Lev"s obtaining all possible results. Although this approach to the concept of personal identity seems somewhat unusual, it is plausible in the light of the critique of personal identity by Parfit 1986. Parfit considers some artificial situations in which a person splits into several copies, and argues that there is no good answer to the question: Which copy is me? He concludes that personal identity is not what matters when I divide. 3. Correspondence Between the Formalism and Our Experience 3.1 The Quantum State of an Object The basis for the correspondence between the quantum state (the wave function) of the Universe and our experience is the description that physicists give in the framework of standard quantum theory for objects composed of elementary particles. Elementary particles of the same kind are identical. 
Therefore, the essence of an object is the quantum state of its particles and not the particles themselves (see the elaborate discussion in the entry on identity and individuality in quantum theory): one quantum state of a set of elementary particles might be a cat and another state of the same particles might be a small table. Clearly, we cannot now write down an exact wave function of a cat. We know with a reasonable approximation the wave function of some elementary particles that constitute a nucleon. The wave function of the electrons and the nucleons that together make up an atom is known with even better precision. The wave functions of molecules (i.e. the wave functions of the ions and electrons out of which molecules are built) are well studied. A lot is known about biological cells, so physicists can write down a rough form of the quantum state of a cell. This is difficult because there are many molecules in a cell. Out of cells we construct various tissues and then the whole body of a cat or of a table. So, let us denote the quantum state constructed in this way $|\Psi\rangle_{\rm OBJECT}$. In our construction $|\Psi\rangle_{\rm OBJECT}$ is the quantum state of an object in a definite state and position.[3] According to the definition of a world we have adopted, in each world the cat is in a definite state: either alive or dead. Schrödinger's experiment with the cat leads to a splitting of worlds even before opening the box. Only in the alternative approach is Schrödinger's cat, which is in a superposition of being alive and dead, a member of the (single) centered world of the observer before she opened the sealed box with the cat (the observer perceives directly the facts related to the preparation of the experiment and she deduces that the cat is in a superposition). 3.2 The Quantum State that corresponds to a World The wave function of all particles in the Universe corresponding to any particular world will be a product of states of sets of particles corresponding to all objects in the world multiplied by the quantum state $|\Phi\rangle$ of all the particles that do not constitute "objects". Within a world, "objects" have definite macroscopic states by fiat:[4] $|\Psi_{\rm WORLD}\rangle = |\Psi\rangle_{\rm OBJECT\,1}\,|\Psi\rangle_{\rm OBJECT\,2}\cdots|\Psi\rangle_{\rm OBJECT\,N}\;|\Phi\rangle$ (1) The quantum states corresponding to centered worlds of sentient beings have exactly the same form. The only difference is that in the product there are only states of the objects perceived directly, while most of the universe is, in general, entangled; it is described by $|\Phi\rangle$. 3.3 The Quantum State of the Universe The quantum state of the Universe can be decomposed into a superposition of terms corresponding to different worlds: $|\Psi_{\rm UNIVERSE}\rangle = \sum_i \alpha_i\,|\Psi_{{\rm WORLD}\,i}\rangle$ (2) Different worlds correspond to different classically described states of at least one object. Different classically described states correspond to orthogonal quantum states. Therefore, different worlds correspond to orthogonal states: all states $|\Psi_{{\rm WORLD}\,i}\rangle$ are mutually orthogonal and consequently, $\sum_i |\alpha_i|^2 = 1$. 3.4 FAPP The construction of the quantum state of the Universe in terms of the quantum states of objects presented above is only approximate; it is good only for all practical purposes (FAPP). Indeed, the concept of an object itself has no rigorous definition: should a mouse that a cat just swallowed be considered as a part of the cat? The concept of a "definite position" is also only approximately defined: how far should a cat be displaced in order for it to be considered to be in a different position?
If the displacement is much smaller than the quantum uncertainty, it must be considered to be at the same place, because in this case the quantum state of the cat is almost the same and the displacement is undetectable in principle. But this is only an absolute bound, because our ability to distinguish various locations of the cat is far from this quantum limit. Further, the state of an object (e.g. alive or dead) is meaningful only if the object is considered for a period of time. In our construction, however, the quantum state of an object is defined at a particular time. In fact, we have to ensure that the quantum state will have the shape of the object not only at that time, but for some period of time. Splitting of the world during this period of time is another source of ambiguity, in particular, due to the fact that there is no precise definition of when the splitting occurs. The reason that I am only able to propose an approximate prescription for correspondence between the quantum state of the Universe and our experience is essentially the same as the one that led Bell 1990 to claim that "ordinary quantum mechanics is just fine FAPP". The concepts we use: "object", "measurement", etc. are not rigorously defined. Bell was, and many others still are, looking (so far in vain) for a "precise quantum mechanics". On that view, it is not enough for a physical theory to be just fine FAPP; quantum mechanics needs rigorous foundations. However, in the MWI just fine FAPP is enough. Indeed, the MWI has rigorous foundations for (i), the "physics part" of the theory; only part (ii), the correspondence with our experience, is approximate (just fine FAPP). But "just fine FAPP" means that the theory explains our experience for any possible experiment, and this is the goal of (ii). See Butterfield 2001 and Wallace 2001b for more arguments why a FAPP definition of a world ("branch" in their language) is enough. 3.5 The Measure of Existence There are many worlds existing in parallel in the Universe. Although all worlds are of the same physical size (this might not be true if we take quantum gravity into account), and in every world sentient beings feel as "real" as in any other world, in some sense some worlds are larger than others. I describe this property as the measure of existence of a world.[5] The measure of existence of a world quantifies its ability to interfere with other worlds in a gedanken experiment, see Vaidman 1998 (p. 256), and is the basis for introducing probability in the MWI. The measure of existence makes precise what is meant by the probability measure discussed in Everett 1957 and pictorially described in Lockwood 1989 (p. 230). Given the decomposition (2), the measure of existence of the world i is $\mu_i = |\alpha_i|^2$. It also can be expressed as the expectation value of $\mathrm{P}_i$, the projection operator on the space of quantum states corresponding to the actual values of all physical variables describing the world i: $\mu_i \equiv \langle\Psi_{\rm UNIVERSE}|\mathrm{P}_i|\Psi_{\rm UNIVERSE}\rangle$ (3) "I" also have a measure of existence. It is the sum of measures of existence of all different worlds in which I exist; equally, it can be defined as the measure of existence of my perception world. Note that I do not experience directly the measure of my existence. I feel the same weight, see the same brightness, etc. irrespective of how tiny my measure of existence might be. 4. Probability in the MWI There is a serious difficulty with the concept of probability in the context of the MWI.
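Before turning to that difficulty, the decomposition (2) and the measure of existence (3) can be illustrated with a small toy computation. The two-world state space, the amplitudes, and the projectors below are invented purely for illustration (a sketch in Python):

import numpy as np

# Toy "universe" state written in a basis of two orthogonal world states,
# |world_A> = (1, 0) and |world_B> = (0, 1), with amplitudes alpha_A, alpha_B.
alpha_A, alpha_B = np.sqrt(1/3), np.sqrt(2/3)
psi_universe = np.array([alpha_A, alpha_B], dtype=complex)

# Projectors onto the two world states; together they decompose the identity.
P_A = np.diag([1.0, 0.0]).astype(complex)
P_B = np.diag([0.0, 1.0]).astype(complex)

# Measure of existence of each world, Eq. (3): mu_i = <Psi|P_i|Psi> = |alpha_i|^2.
mu_A = np.vdot(psi_universe, P_A @ psi_universe).real
mu_B = np.vdot(psi_universe, P_B @ psi_universe).real

print(mu_A, mu_B)                      # 0.333..., 0.666...
print(np.isclose(mu_A + mu_B, 1.0))    # the measures of the worlds sum to one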
In a deterministic theory, such as the MWI, the only possible meaning for probability is an ignorance probability, but there is no relevant information that an observer who is going to perform a quantum experiment is ignorant about. The quantum state of the Universe at one time specifies the quantum state at all times. If I am going to perform a quantum experiment with two possible outcomes such that standard quantum mechanics predicts probability 1/3 for outcome A and 2/3 for outcome B, then, according to the MWI, both the world with outcome A and the world with outcome B will exist. It is senseless to ask: "What is the probability that I will get A instead of B?" because I will correspond to both "Lev"s: the one who observes A and the other one who observes B.[6] To solve this difficulty, Albert and Loewer 1988 proposed the Many Minds interpretation (in which the different worlds are only in the minds of sentient beings). In addition to the quantum wave of the Universe, Albert and Loewer postulate that every sentient being has a continuum of minds. Whenever the quantum wave of the Universe develops into a superposition containing states of a sentient being corresponding to different perceptions, the minds of this sentient being evolve randomly and independently to mental states corresponding to these different states of perception (with probabilities equal to the quantum probabilities for these states). In particular, whenever a measurement is performed by an observer, the observer's minds develop mental states that correspond to perceptions of the different outcomes, i.e. corresponding to the worlds A or B in our example. Since there is a continuum of minds, there will always be an infinity of minds in any sentient being and the procedure can continue indefinitely. This resolves the difficulty: each "I" corresponds to one mind and it ends up in a state corresponding to a world with a particular outcome. However, this solution comes at the price of introducing additional structure into the theory, including a genuinely random process. Vaidman1998 (p. 254) resolves the problem by constructing an ignorance probability in the framework of the MWI. It seems senseless to ask: "What is the probability that Lev in the world A will observe A?" This probability is trivially equal to 1. The task is to define the probability in such a way that we could reconstruct the prediction of the standard approach: probability 1/3 for A. It is indeed senseless for you to ask what is the probability that Lev in the world A will observe A, but this might be a meaningful question for Lev in the world of the outcome A. Under normal circumstances, the world A is created (i.e. measuring devices and objects which interact with measuring devices will become localized according to the outcome A) before Lev will be aware of the result A. Then, it is sensible to ask this Lev about his probability to be in world A. There is a matter of fact about which outcome this Lev will see, but he is ignorant about this fact at the time of the question. In order to make this point vivid, Vaidman proposed an experiment in which the experimenter is given a sleeping pill before the experiment. Then, while asleep, he is moved to room A or to room B depending on the results of the experiment. When the experimenter has woken up (in one of the rooms), but before he has opened his eyes, he is asked "In which room are you?" 
Certainly, there is a matter of fact about which room he is in (he can learn about it by opening his eyes), but he is ignorant about this fact at the time of the question. This construction provides the ignorance interpretation of probability, but the value of the probability has to be postulated (see Section 6.3 below for attempts to derive it): Probability Postulate The probability of an outcome of a quantum experiment is proportional to the total measure of existence of all worlds with that outcome.[7] The question of the probability of obtaining A also makes sense for the Lev in world B before he becomes aware of the outcome. Both "Lev"s have the same information on the basis of which they should give their answer. According to the probability postulate they will give the same answer: 1/3 (the relative measure of existence of the world A). Since Lev before the measurement is associated with two "Lev"s after the measurement who have identical ignorance probability concepts for the outcome of the experiment, I can define the probability of the outcome of the experiment to be performed as the ignorance probability of the successors of Lev for being in a world with a particular outcome. The "sleeping pill" argument does not reduce the probability of an outcome of a quantum experiment to a familiar concept of probability in the classical context. The quantum situation is genuinely different. Since all outcomes of a quantum experiment are actualized, there is no probability in the usual sense. The argument explains the Behavior Principle (see below) for an experimenter according to which he should behave as if there were certain probabilities for different outcomes. The justification is particularly clear in the approach to probability as the value of a rational bet on a particular result. The results of the betting of the experimenter are relevant for his successors emerging after performing the experiment in different worlds. Since the experimenter is related to all of his successors and they all have identical rational strategies for betting, then, this should also be the strategy of the experimenter before the experiment. Several authors justify the probability postulate without relying on the sleeping pill argument. Tappenden 2000 (p. 111) adopts a different semantics according to which "I" live in all branches and have "distinct experiences" in different "superslices", and uses "weight of a superslice" instead of measure of existence. He argues that it is intelligible to associate probabilities according to the probability postulate: "Faced with an array of weighted superslices as part of myself ... what choice do I have but to assign an array of attitudes, degrees of belief, towards the experiences associated with those superslices?". Saunders 1998, exploiting a variety of ideas in decoherence theory, the relational theory of tense and theories of identity over time, also argues for "identification of probability with the Hilbert Space norm" (which equals the measure of existence). Page 2002 promotes an approach which he has recently named Mindless Sensationalism. The basic concept in this approach is a conscious experience. He assigns weights to different experiences depending on the quantum state of the universe, as the expectation values of presently-unknown positive operators corresponding to the experiences (similar to the measures of existence of the corresponding worlds (3)). Page writes "... experiences with greater weights exist in some sense more ..." 
In all of these approaches, the postulate is justified by appeal to an analogy with treatments of time, e.g., the measure of existence of a world is analogous to the duration of a time interval. In a more ambitious work, Deutsch 1999 has claimed to derive the probability postulate from the quantum formalism and the classical decision theory, but it is far from clear that he achieves this (see Barnum et al.). 5. Tests of the MWI Despite the name "interpretation", the MWI is a variant of quantum theory that is different from others. Experimentally, the difference is relative to collapse theories. It seems that there is no experiment distinguishing the MWI from other no-collapse theories such as Bohmian mechanics or other variants of MWI. The collapse leads to effects that are, in principle, observable; these effects do not exist if the MWI is the correct theory. To observe the collapse we would need a super technology, which allows "undoing" a quantum experiment, including a reversal of the detection process by macroscopic devices. See Lockwood 1989 (p. 223), Vaidman 1998 (p. 257), and other proposals in Deutsch 1986. These proposals are all for gedanken experiments that cannot be performed with current or any foreseen future technology. Indeed, in these experiments an interference of different worlds has to be observed. Worlds are different when at least one macroscopic object is in macroscopically distinguishable states. Thus, what is needed is an interference experiment with a macroscopic body. Today there are interference experiments with larger and larger objects (e.g., fullerene molecules C60), but these objects are still not large enough to be considered "macroscopic". Such experiments can only refine the constraints on the boundary where the collapse might take place. A decisive experiment should involve the interference of states which differ in a macroscopic number of degrees of freedom: an impossible task for today's technology.[8] The collapse mechanism seems to be in contradiction with basic physical principles such as relativistic covariance, but nevertheless, some ingenious concrete proposals have been made (see Pearle 1986 and the entry on collapse theories). These proposals (and Weissman's 1999 non-linear MW idea) have additional observable effects, such as a tiny energy non-conservation, that were tested in several experiments. The effects were not found and some (but not all!) of these models have been ruled out. In most no-collapse interpretations, the evolution of the quantum state of the Universe is the same. Still, one might imagine that there is an experiment distinguishing the MWI from another no-collapse interepretation based on the difference in the correspondence between the formalism and the experience (the results of experiments). An apparent candidate for such an experiment is a setup proposed in Englert et al. 1992 in which a Bohmian world is different from the worlds of the MWI (see also Aharonov and Vaidman 1996). In this example, the Bohmian trajectory of a particle in the past is contrary to the records of seemingly good measuring devices (such trajectories were named surrealistic). However, at present, there are no memory records that can determine unambiguously (without deduction from a particular theory) the particle trajectory in the past. Thus, this difference does not lead to an experimental way of distinguishing between the MWI and Bohmian mechanics. 
I believe that no other experiment can distinguish between the MWI and other no-collapse theories either, except for some perhaps exotic modifications, e.g., Bohmian mechanics with initial particle position distribution deviating from the quantum distribution. There are other opinions about the possibility of testing the MWI. It has frequently been claimed, e.g. by De Witt 1970, that the MWI is in principle indistinguishable from the ideal collapse theory. On the other hand, Plaga 1997 claims to have a realistic proposal for testing the MWI, and Page 2000 argues that certain cosmological observations might support the MWI. 6. Objections to the MWI Some of the objections to the MWI follow from misinterpretations due to the multitude of various MWIs. The terminology of the MWI can be confusing: "world" is "universe" in Deutsch 1996, while "universe" is "multiverse", etc. There are two very different approaches with the same name "The Many-Minds Interpretation (MMI)". The Albert and Loewer 1988 MMI mentioned above should not be confused with Lockwood's 1996 MMI (which resembles the approach of Zeh 1981). The latter is much closer to the MWI as it is presented here, see Sec. 17 of Vaidman 1998. Further, the MWI in the Heisenberg representation (Deutsch 2001) differs significantly from the MWI presented in the Schrödinger representation (used here). The MWI presented here is very close to Everett's original proposal, but in the entry on Everett's relative state formulation of quantum mechanics, as well as in his book Barrett 1999, Barrett uses the name "MWI" for the splitting worlds view publicized by De Witt 1970. This approach has been justly criticized: it has both some kind of collapse (an irreversible splitting of worlds in a preferred basis) and the multitude of worlds. Now I consider the main objections in detail. 6.1 Ockham's Razor It seems that the majority of the opponents of the MWI reject it because, for them, introducing a very large number of worlds that we do not see is an extreme violation of Ockham's principle: "Entities are not to be multiplied beyond necessity". However, in judging physical theories one could reasonably argue that one should not multiply physical laws beyond necessity either (such a version of Ockham's Razor has been applied in the past), and in this respect the MWI is the most economical theory. Indeed, it has all the laws of the standard quantum theory, but without the collapse postulate, the most problematic of physical laws. The MWI is also more economical than Bohmian mechanics which has in addition the ontology of the particle trajectories and the laws which give their evolution. Tipler 1986 (p. 208) has presented an effective analogy with the criticism of Copernican theory on the grounds of Ockham's razor. One might consider also a possible philosophical advantage of the plurality of worlds in the MWI, similar to that claimed by realists about possible worlds, such as Lewis 1986 (see the discussion of the analogy between the MWI and Lewis's theory by Skyrms 1976). However, the analogy is not complete: Lewis' theory considers all logically possible worlds, many more than all worlds incorporated in the quantum state of the Universe.
Since other decompositions might lead to a very different picture, the whole construction seems to lack predictive power. Indeed, the mathematical structure of the theory (i) does not yield a particular basis. The basis for decomposition into worlds follows from the common concept of a world according to which it consists of objects in definite positions and states ("definite" on the scale of our ability to distinguish them). In the alternative approach, the basis of a centered world is defined directly by an observer. Therefore, given the nature of the observer and given her concepts for describing the world, the particular choice of the decomposition (2) follows (up to a precision which is good FAPP, as required). If we do not ask why we are what we are, and why the world we perceive is what it is, but only how to explain relations between the events we observe in our world, then the problem of the preferred basis does not arise: we and the concepts of our world define the preferred basis. But a stronger response can be made to this criticism. Looking at the details of the physical world, the structure of the Hamiltonian, the value of the Planck constant, etc., one can argue why the sentient beings we know are of a particular type and why they have the particular concepts they do for describing their worlds. The main argument is that the locality of interactions yields stability of worlds in which objects are well localized. The small value of the Planck constant allows macroscopic objects to be well localized for a considerable period of time. Thus, such worlds (corresponding to quantum states $|\Psi_i\rangle$) can maintain their macroscopic description long enough to be perceived by sentient beings. By contrast, a "world" with macroscopic objects being in a superposition of macroscopically distinguishable states (corresponding to a quantum state $\frac{1}{\sqrt{2}}(|\Psi_1\rangle+|\Psi_2\rangle)$) evolves during an extremely small time, much smaller than the perception time of any feasible sentient being, into a mixture with the other "world" $\frac{1}{\sqrt{2}}(|\Psi_1\rangle-|\Psi_2\rangle)$ (see Zurek 1998). This is a good argument why sentient beings perceive localized objects and not superpositions, but one cannot rely on the decoherence argument alone in order to single out the proper basis. (See some technical difficulties in Barvinsky and Kamenshchik 1995.) The fact that we can perceive only well localized objects in definite macroscopic states might not be just a physics issue: chemistry, biology, and even psychology might be needed to account for our evolution. See various attempts to construct a theory of evolution of sentient beings based on the MWI or its variants in Albert 1992, Chalmers 1996, Deutsch 1996, Donald 1990, Gell-Mann and Hartle 1990, Lehner 1997, Lockwood 1989, Page 2002, Penrose 1994, Saunders 1994, and Zeh 1981.
What is true instead is that one can derive the Probability Postulate from a weaker postulate according to which the probability is a function of the measure of existence. The derivation can be based on Gleason's 1957 theorem about the uniqueness of the probability measure. Similar results can be achieved by the analysis of the frequency operator originated by Hartle 1968 and from more general arguments by Deutsch 1999. All these results can be derived in the framework of various interpretations and thus the success or failure of these proofs cannot be an argument in favor of or against the MWI. The MWI, like all other interpretations, requires a probability postulate. Another idea for obtaining a probability law out of the formalism is to state, by analogy to the frequency interpretation of classical probability, that the probability of an outcome is proportional to the number of worlds with this outcome. This proposal immediately yields predictions that are different from what we observe in experiments. Some authors, arguing that counting is the only sensible way to introduce probability, consider this to be a fatal difficulty for the MWI, e.g., Belinfante 1975. Graham 1973 suggested that the counting of worlds does yield correct probabilities if one takes into account detailed splitting of the worlds in realistic experiments, but other authors have criticized the MWI because of the failure of Graham's claim. Weissman 1999 has proposed a modification of quantum theory with additional non-linear decoherence (and hence even more worlds than standard MWI), which can lead asymptotically to worlds of equal mean measure for different outcomes. Although this avoids random processes, like other MWI's, the price in the complication of the mathematical theory seems to be too high for the simplification in explaining probability. I believe that assigning equal probability to every world is unjustified. The formalism of quantum theory includes different amplitudes for quantum states corresponding to different worlds. It is a positive feature of the theory that the differences in the mathematical descriptions of worlds (different absolute values of amplitudes) are manifest in our experience. See Saunders 1998 for a detailed analysis of this issue. From the weak probability postulate (the probability is a function of the measure of existence) it follows that if all the worlds in which a particular experiment took place have equal measures of existence, the probability of an outcome is proportional to the number of worlds with this outcome. If the measures of existence of these worlds are not equal, the experimenters in all the worlds can perform additional auxiliary measurements of some variables such that all the new worlds will have equal measures of existence. The experimenters should be completely indifferent to the results of these auxiliary measurements: their only purpose is to split the worlds into "equal-weight" worlds. This procedure reconstructs the standard quantum probability rule from the counting worlds approach; see Deutsch 1999 for details. 6.4 Social Behavior of a Believer in the MWI There are claims that a believer in the MWI will behave in an irrational way. One claim is based on the naive argument described in the previous section: a believer who assigns equal probabilities to all different worlds will place equal bets on the outcomes of quantum experiments that have unequal probabilities.
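To make the naive counting objection, and the equal-measure splitting that answers it, concrete, consider again a two-outcome experiment with quantum probabilities 1/3 and 2/3 (a toy illustration, not part of the original article):

$$\text{naive counting: two worlds }\{A,B\}\ \Rightarrow\ P(A)=P(B)=\tfrac{1}{2}\quad(\text{in conflict with observed frequencies}),$$
$$\text{equal-measure splitting: }B\ \text{split into}\ B_1,B_2\ \text{of measure}\ \tfrac{1}{3}\ \text{each}\ \Rightarrow\ \{A,B_1,B_2\},\ P(A)=\tfrac{1}{3},\ P(B)=\tfrac{2}{3},$$

in agreement with the Probability Postulate.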
Another claim, recently discussed by Lewis 2000, is related to the strategy of a believer in the MWI who is offered to play a quantum Russian roulette game. The argument is that I, who would not accept an offer to play a classical Russian roulette, should agree to play the roulette any number of times if the triggering occurs according to the outcome of a quantum experiment. Indeed, at the end, there will be one world in which Lev is a multi-millionaire and all other worlds in which there will be no Lev Vaidman alive. Thus, in the future, Lev will be rich and presumably a happy man. However, adopting the Probability Postulate leads all believers in the MWI to behave according to the following principle: Behavior Principle We care about all our successive worlds in proportion to their measures of existence. With this principle our behavior will be similar to the behavior of a believer in the collapse theory who cares about possible future worlds according to the probability of their occurrence. I should not agree to play quantum Russian roulette because the measure of existence of worlds with Lev dead will be much larger than the measure of existence of the worlds with rich Lev alive. 7. Why the MWI? The reason for adopting the MWI is that it avoids the collapse of the quantum wave. (Other non-collapse theories are not better than MWI for various reasons, e.g., nonlocality of Bohmian mechanics; and the disadvantage of all of them is that they have some additional structure.) The collapse postulate is a physical law that differs from all known physics in two aspects: it is genuinely random and it involves some kind of action at a distance. According to the collapse postulate the outcome of a quantum experiment is not determined by the initial conditions of the Universe prior to the experiment: only the probabilities are governed by the initial state. Moreover, Bell 1964 has shown that there cannot be a compatible local-variables theory that will make deterministic predictions. There is no experimental evidence in favor of collapse and against the MWI. We need not assume that Nature plays dice. The MWI is a deterministic theory for a physical Universe and it explains why a world appears to be indeterministic for human observers. The MWI exhibits some kind of nonlocality: "world" is a nonlocal concept, but it avoids action at a distance and, therefore, it is not in conflict with the relativistic quantum mechanics; see discussions of nonlocality in Vaidman 1994, Tipler 2000, Bacciagaluppi 2002, and Hemmo and Pitowsky 2001. Although the issues of (non)locality are most transparent in the Schrödinger representation, an additional insight can be gained through recent analysis in the framework of the Heisenberg representation, see Deutsch and Hayden 2000, Rubin 2001, and Deutsch 2001. The most celebrated example of nonlocality was given by Bell 1964 in the context of the Einstein-Podolsky-Rosen argument. However, in the framework of the MWI, Bell's argument cannot get off the ground because it requires a predetermined single outcome of a quantum experiment. Another example of a kind of an action at a distance in a quantum theory with collapse is the interaction-free measurement of Elitzur and Vaidman 1993. Consider a super-sensitive bomb which explodes when any single particle arrives at its location. It seems that it is impossible to see this bomb, because any photon that arrives at the location of the bomb will cause an explosion. 
Nevertheless, using the Elitzur and Vaidman method, it is possible, at least sometimes, to find the location of the bomb without exploding it. In the case of success, a paradoxical situation arises: we obtain information about some region of space without any particle being there. Indeed, we know that no particle was in the region of the bomb because there was no explosion. The paradox disappears in the framework of the MWI. The situation is paradoxical because it contradicts physical intuition: the bomb causes an observable change in a remote region without sending or reflecting any particle. Physics is the theory of the Universe and therefore the paradox is real if this story is correct in the whole physical Universe. But it is not. There was no photon in the region of the bomb in a particular world, but there are other worlds in which a photon reaches the bomb and causes it to explode. Since the Universe incorporates all the worlds, it is not true that in the Universe no photon arrived at the location of the bomb. It is not surprising that our physical intuition leads to a paradox when we limit ourselves to a particular world: physical laws apply to the physical Universe that incorporates all of the worlds. The MWI is not the most accepted interpretation of quantum theory among physicists, but it is becoming increasingly popular (see Tegmark 1998). The strongest proponents of the MWI can be found in the communities of quantum cosmology and quantum computing. In quantum cosmology the MWI makes it possible to discuss the whole Universe, avoiding the difficulty of the standard interpretation which requires an external observer. In quantum computing, the key issue is the parallel processing performed on the same computer; this is very similar to the basic picture of the MWI.[9] Many physicists and philosophers believe that the most serious weakness of the MWI (and especially of its version presented here) is that it "gives up trying to explain things". In the words of Steane 1999, "It is no use to say that the [Schrödinger] cat is ‘really’ both alive and dead when every experimental test yields unambiguously the result that the cat is either alive or dead." (Steane dismisses the interference experiment which can reveal the presence of the superposition as unfeasible.) Indeed, if there is nothing in physics except the wave-function of the Universe, evolving according to the Schrödinger equation, then there are questions whose answers require help from other sciences. However, the advantage of the MWI is that it allows us to view quantum mechanics as a complete and consistent physical theory which agrees with all experimental results obtained to date. I am thankful to everybody who has borne with me through endless discussions of the MWI (in this and other worlds) and, in particular, to Yakir Aharonov, David Albert, Guido Bacciagaluppi, Jeremy Butterfield, Rob Clifton, David Deutsch, Simon Saunders, Philip Pearle, and David Wallace. I acknowledge partial support by grant 62/01 of the Israel Science Foundation and the EPSRC grant GR/N33058. Related Entries: quantum mechanics | quantum mechanics: Everett's relative-state formulation of | quantum theory: measurement in. Copyright © 2002 Lev Vaidman
QUANTUM THEORY (QT). Basic elements eJournal: uffmm.org, ISSN 2567-6458, 2 January 2019 Email: info@uffmm.org Author: Gerd Doeben-Henisch This is a continuation from the post WHY QT FOR AAI? explaining the motivation for looking at quantum theory (QT) in the case of the AAI paradigm. After approaching QT from a philosophy of science perspective (see the post QUANTUM THEORY (QT). BASIC PROPERTIES), giving a 'bird's-eye view' of the relationship between a QT and the presupposed 'real world' and digging a bit into the first-person view inside an observer, we are here interested in the formal machinery of QT. For this we follow Griffiths in his chapter 1. 1. The starting point of a quantum theory (QT) consists of 'phenomena', which "lack any description in classical physics", the kind of things "which human beings cannot observe directly". To measure such phenomena one needs highly sophisticated machines, which poses the problem that the interpretation of possible 'measurement data' in terms of a quantum theory depends heavily on an understanding of how the measurement apparatus works. (cf. p.8) 2. This problem is well known in philosophy of science: (i) one wants to build a new theory T. (ii) For this theory one needs appropriate measurement data MD. (iii) The measurement as such needs a well-defined procedure including different kinds of pre-defined objects and artifacts. The description of the procedure including the artifacts (which can be machines) is a theory of its own, called measurement theory T*. (iv) Thus one needs a theory T* to enable a new theory T. 3. In the case of QT one has the special case that QT itself has to be part of the measurement theory T*, i.e. QT subset T*. But, as Griffiths points out, the measurement problem in QT is even deeper; it is not only the conceptual dependency of QT on its measurement theory T*, but in the case of QT the measurement apparatus directly interacts with the target objects of QT, because the measurement apparatus is itself part of the atomic and sub-atomic world which is the target. (cf. p.8) This has led to including measurement explicitly in QT as 'stochastic time development'. (cf. p.8) In his book Griffiths follows the strategy of dealing with the 'collapse of the wave function' at the theoretical level, because it does not take place "in the experimental physicist's laboratory". (cf. p.9) 4. As a consequence of these considerations Griffiths develops the fundamental principles in chapters 2-16 without making any reference to measurement. 1. Besides the special problem of measurement in quantum mechanics there is the general problem of measurement for every kind of empirical discipline, which requires a perception of the real world guided by a scientific bias called 'scientific knowledge'! Without theoretical pre-knowledge no scientific observation is possible. A scientific observation already needs a pre-theory T* defining the measurement procedure as well as the pre-defined standard object as well as (possibly) an 'appropriate' measurement device. Furthermore, to be able to talk about some measurement data as 'data related to an object of QT' one additionally needs sufficient 'pre-knowledge' of such an object which enables the observer to decide whether the measured data are to be classified as 'related to the object of QT'. The most convenient way to enable this is to have already a proposal for a QT as the 'knowledge guide' for how one 'should look' at the measured data. 1.
According to Griffiths, the phenomena of quantum mechanics are understood in QT as 'particles' whose 'state' is given by a 'complex-valued wave function ψ(x)', and the collection of all possible wave functions is assumed to be a 'complex linear vector space' with an 'inner product', known as a 'Hilbert space'. "Two wave functions φ(x) and ψ(x) represent 'distinct physical states' … if and only if they are 'orthogonal' in the sense that their 'inner product is zero'. Otherwise φ(x) and ψ(x) represent incompatible states of the quantum system …". (p.2) 2. "A quantum property … corresponds to a subspace of the quantum Hilbert space or the projector onto this subspace." (p.2) 3. A sample space of mutually-exclusive possibilities is a decomposition of the identity as a sum of mutually commuting projectors. One and only one of these projectors can be a correct description of a quantum system at a given time. (cf. p.3) 4. Quantum sample spaces can be mutually incompatible. (cf. p.3) 5. "In … quantum mechanics [a physical variable] is represented by a Hermitian operator.… a real-valued function defined on a particular sample space, or decomposition of the identity … a quantum system can be said to have a value … of a physical variable represented by the operator F if and only if the quantum wave function is in an eigenstate of F … . Two physical variables whose operators do not commute correspond to incompatible sample spaces…". (cf. p.3) 6. "Both classical and quantum mechanics have dynamical laws which enable one to say something about the future (or past) state of a physical system if its state is known at a particular time. … the quantum … dynamical law … is the (time-dependent) Schrödinger equation. Given some wave function ψ_0 at a time t_0, integration of this equation leads to a unique wave function ψ_t at any other time t. At two times t and t' these uniquely defined wave functions are related by a … time development operator T(t', t) on the Hilbert space. Consequently we say that integrating the Schrödinger equation leads to unitary time development." (p.3) 7. "Quantum mechanics also allows for a stochastic or probabilistic time development … . In order to describe this in a systematic way, one needs the concept of a quantum history … a sequence of quantum events (wave functions or sub-spaces of the Hilbert space) at successive times. A collection of mutually … exclusive histories forms a sample space or family of histories, where each history is associated with a projector on a history Hilbert space. The successive events of a history are, in general, not related to one another through the Schrödinger equation. However, the Schrödinger equation, or … the time development operators T(t', t), can be used to assign probabilities to the different histories belonging to a particular family." (p.3f) 1. "The wave functions for even such a simple system as a quantum particle in one dimension form an infinite-dimensional Hilbert space … [but] one does not have to learn functional analysis in order to understand the basic principles of quantum theory. The majority of the illustrations used in Chs. 2–16 are toy models with a finite-dimensional Hilbert space to which the usual rules of linear algebra apply without any qualification, and for these models there are no mathematical subtleties to add to the conceptual difficulties of quantum theory … Nevertheless, they provide many useful insights into general quantum principles." (p.4f) 1.
Griffiths (2003) makes considerable use of toy models with a simple discretized time dependence … To obtain … unitary time development, one only needs to solve a simple difference equation, and this can be done in closed form on the back of an envelope. (cf. p.5f) 2. "Probability theory plays an important role in discussions of the time development of quantum systems. … when using toy models the simplest version of probability theory, based on a finite discrete sample space, is perfectly adequate." (p.6) 3. "The basic concepts of probability theory are the same in quantum mechanics as in other branches of physics; one does not need a new "quantum probability". What distinguishes quantum from classical physics is the issue of choosing a suitable sample space with its associated event algebra. … in any single quantum sample space the ordinary rules for probabilistic reasoning are valid." (p.6) 1. The important difference compared to classical mechanics is the fact that "an initial quantum state does not single out a particular framework, or sample space of stochastic histories, much less determine which history in the framework will actually occur." (p.7) Multiple incompatible frameworks are possible, and using the ordinary rules of propositional logic presupposes applying them within a single framework. Therefore it is important to understand how to choose an appropriate framework. (cf. p.7) These are the basic ingredients which Griffiths mentions in chapter 1 of his 2003 book. In what follows, these ingredients have to be understood well enough that it becomes clear how to relate them to the idea of a possible history of states (cf. chapters 8ff), where the future of a successor state in a sequence of temporally separated states is described by some probability. A minimal numerical sketch of these ingredients is given below, after the reference.
• R.B. Griffiths. Consistent Quantum Theory. Cambridge University Press, New York, 2003
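The passages above stay abstract, but the whole point of Griffiths' toy models is that they reduce to elementary linear algebra on a finite-dimensional Hilbert space. The following short Python sketch is my own illustration of that point, not something taken from Griffiths: the two-dimensional state space, the rotation-matrix time-step operator T, and all variable names are assumptions chosen only for simplicity. It shows, side by side, an inner product and orthogonality test, a decomposition of the identity into commuting projectors (a sample space), a discretized unitary time development ψ_{t+1} = T ψ_t solved as a simple difference equation, and Born-rule probabilities on the resulting finite sample space.

```python
import numpy as np

# Toy model: a two-dimensional Hilbert space (a single two-state system).
# States are complex vectors; the inner product is <phi|psi> = phi^dagger psi.

def inner(phi, psi):
    """Inner product <phi|psi> on the toy Hilbert space."""
    return np.vdot(phi, psi)

# Two orthonormal basis states |0> and |1>.
ket0 = np.array([1.0 + 0j, 0.0 + 0j])
ket1 = np.array([0.0 + 0j, 1.0 + 0j])
print("orthogonal (distinct physical states):", np.isclose(inner(ket0, ket1), 0.0))

# A sample space: commuting projectors P0, P1 that decompose the identity.
P0 = np.outer(ket0, ket0.conj())
P1 = np.outer(ket1, ket1.conj())
print("decomposition of the identity:", np.allclose(P0 + P1, np.eye(2)))

# Discretized unitary time development: psi_{t+1} = T psi_t,
# with T a fixed unitary matrix (here a simple rotation mixing |0> and |1>).
theta = np.pi / 8
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

psi = ket0.copy()
for t in range(4):  # solve the difference equation step by step
    psi = T @ psi
    # Born-rule probabilities for the two mutually exclusive possibilities:
    p0 = np.real(inner(psi, P0 @ psi))
    p1 = np.real(inner(psi, P1 @ psi))
    print(f"t={t + 1}: P(|0>)={p0:.3f}  P(|1>)={p1:.3f}  sum={p0 + p1:.3f}")
```

At every step the two probabilities are non-negative and sum to one, which is all that the ordinary rules for probabilistic reasoning require within a single sample space of this kind.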
Another Fusion White Elephant Sighted in Germany According to an article that just appeared in Science magazine, scientists in Germany have completed building a stellarator by the name of Wendelstein 7-X (W7-X), and are seeking regulatory permission to turn the facility on in November.  If you can’t get past the Science paywall, here’s an article in the popular media with some links.  Like the much bigger ITER facility now under construction at Cadarache in France, W7-X is a magnetic fusion device.  In other words, its goal is to confine a plasma of heavy hydrogen isotopes at temperatures much hotter than the center of the sun with powerful magnetic fields in order to get them to fuse, releasing energy in the process.  There are significant differences between stellarators and the tokamak design used for ITER, but in both approaches the idea is to hold the plasma in place long enough to get significantly more fusion energy out than was necessary to confine and heat the plasma.  Both approaches are probably scientifically feasible.  Both are also white elephants, and a waste of scarce research dollars. The problem is that both designs have an Achilles heel.  Its name is tritium.  Tritium is a heavy isotope of hydrogen with a nucleus containing a proton and two neutrons instead of the usual lone proton.  Fusion reactions between tritium and deuterium, another heavy isotope of hydrogen with a single neutron in addition to the usual proton, begin to occur fast enough to be attractive as an energy source at plasma temperatures and densities much less than would be necessary for any alternative reaction.  The deuterium-tritium, or DT, reaction will remain the only feasible one for both stellarator and tokamak fusion reactors for the foreseeable future.  Unfortunately, tritium occurs in nature in only tiny trace amounts. The question is, then, where do you get the tritium fuel to keep the fusion reactions going?  Well, in addition to a helium nucleus, the DT fusion reaction produces a fast neutron.  These can react with lithium to produce tritium.  If a lithium-containing blanket could be built surrounding the reaction chamber in such a way as to avoid interfering with the magnetic fields, and yet thick enough and close enough to capture enough of the neutrons, then it should be possible to generate enough tritium to replace that burned up in the fusion process.  It sounds complicated but, again, it appears to be at least scientifically feasible.  However, it is by no means as certain that it is economically feasible. Consider what we’re dealing with here.  Tritium is an extremely slippery material that can pass right through walls of some types of metal.  It is also highly radioactive, with a half-life of about 12.3 years.  It will be necessary to find some way to efficiently extract it from the lithium blanket, allowing none of it to leak into the surrounding environment.  If any of it gets away, it will be easily detectable.  The neighbors are sure to complain and, probably, lawyer up.  Again, all this might be doable.  The problem is that it will never be doable at a low enough cost to make fusion reactor designs based on these approaches even remotely economically competitive with the non-fossil alternative sources of energy that will be available for, at the very least, the next several centuries. What’s that?  
Reactor design studies by large and prestigious universities and corporations have all come to the conclusion that these magnetic fusion beasts will be able to produce electricity at least as cheaply as the competition? I don't think so. I've participated in just such a government-funded study, conducted by a major corporation as prime contractor, with several other prominent universities and corporations participating as subcontractors. I'm familiar with the methodology used in several others. In general, it's possible to make the cost of electricity come out at whatever figure you choose, within reason, using the most approved methods and the most sound project management and financial software. If the government is funding the work, it can be safely assumed that they don't want to hear something like, "Fuggedaboudit, this thing will be way too expensive to build and run." That would make the office that funded the work look silly, and the fusion researchers involved in the design look like welfare queens in white coats. The "right" cost numbers will always come out of these studies in the end. I submit that a better way to come up with a cost estimate is to use a little common sense. Do you really think that a commercial power company will be able to master the intricacies of tritium production and extraction from the vicinity of a highly radioactive reaction chamber at anywhere near the cost of, say, wind and solar combined with next generation nuclear reactors for baseload power? If you do, you're a great deal more optimistic than me. W7-X cost a billion euros. ITER is slated to cost 13 billion, and will likely come in at well over that. With research money hard to come by in Europe for much worthier projects, throwing amounts like that down a rat hole doesn't seem like a good plan. All this may come as a disappointment to fusion enthusiasts. On the other hand, you may want to consider the fact that, if fusion had been easy, we would probably have managed to blow ourselves up with pure fusion weapons by now. Beyond that, you never know when some obscure genius might succeed in pulling a rabbit out of their hat in the form of some novel confinement scheme. Several companies claim they have sure-fire approaches that are so good they will be able to dispense with tritium entirely in favor of more plentiful, naturally occurring isotopes. See, for example, here, here, and here, and the summary at the Next Big Future website. I'm not optimistic about any of them, either, but you never know. No Ignition at the National Ignition Facility: A Post Mortem The National Ignition Facility, or NIF, at Lawrence Livermore National Laboratory (LLNL) in California was designed and built, as its name implies, to achieve fusion ignition. The first experimental campaign intended to achieve that goal, the National Ignition Campaign, or NIC, ended in failure. Scientists at LLNL recently published a paper in the journal Physics of Plasmas outlining, to the best of their knowledge to date, why the experiments failed. Entitled "Radiation hydrodynamics modeling of the highest compression inertial confinement fusion ignition experiment from the National Ignition Campaign," the paper concedes that, The recently completed National Ignition Campaign (NIC) on the National Ignition Facility (NIF) showed significant discrepancies between post-shot simulations of implosion performance and experimentally measured performance, particularly in thermonuclear yield.
To understand what went wrong, it's necessary to know some facts about the fusion process and the nature of scientific attempts to achieve fusion in the laboratory. Here's the short version: The neutrons and protons in an atomic nucleus are held together by the strong force, which is about 100 times stronger than the electromagnetic force, and operates only over tiny distances measured in femtometers. The average binding energy per nucleon (proton or neutron) due to the strong force is greatest for the elements in the middle of the periodic table, and gradually decreases in the directions of both the lighter and heavier elements. That's why energy is released by fissioning heavy atoms like uranium into lighter atoms, or fusing light atoms like hydrogen into heavier atoms. Fusion of light elements isn't easy. Before the strong force that holds atomic nuclei together can take effect, two light nuclei must be brought very close to each other. However, atomic nuclei are all positively charged, and like charges repel. The closer they get, the stronger the repulsion becomes. The sun solves the problem with its crushing gravitational force. On earth, the energy of fission can also provide the necessary force in nuclear weapons. However, concentrating enough energy to accomplish the same thing in the laboratory has proved a great deal more difficult. The problem is to confine incredibly hot material at sufficiently high densities for a long enough time for significant fusion to take place. At the moment there are two mainstream approaches to solving it: magnetic fusion and inertial confinement fusion, or ICF. In the former, confinement is achieved with powerful magnetic lines of force. That's the approach at the international ITER fusion reactor project currently under construction in France. In ICF, the idea is to first implode a small target of fuel material to extremely high density, and then heat it to the necessary high temperature so quickly that its own inertia holds it in place long enough for fusion to happen. That's the approach being pursued at the NIF. The NIF consists of 192 powerful laser beams, which can concentrate about 1.8 megajoules of light on a tiny spot, delivering all that energy in a time of only a few nanoseconds. It is much larger than the next biggest similar facility, the OMEGA laser system at the Laboratory for Laser Energetics in Rochester, NY, which maxes out at about 40 kilojoules. The NIC experiments were indirect drive experiments, meaning that the lasers weren't aimed directly at the BB-sized, spherical target, or "capsule," containing the fuel material (a mixture of deuterium and tritium, two heavy isotopes of hydrogen). Instead, the target was mounted inside of a tiny, cylindrical enclosure known as a hohlraum with the aid of a thin, plastic "tent." The lasers were fired through holes on each end of the hohlraum, striking the walls of the cylinder, generating a pulse of x-rays. These x-rays then struck the target, ablating material from its surface at high speed. In a manner similar to a rocket exhaust, this drove the remaining target material inward, causing it to implode to extremely high densities, about 40 times the density of the heaviest naturally occurring elements. As it implodes, the material must be kept as "cold" as possible, because it's easier to squeeze and compress things that are cold than those that are hot.
However, when it reaches maximum density, a way must be found to heat a small fraction of this “cold” material to the very high temperatures needed for significant fusion to occur.  This is accomplished by setting off a series of shocks during the implosion process that converge at the center of the target at just the right time, generating the necessary “hot spot.”  The resulting fusion reactions release highly energetic alpha particles, which spread out into the surrounding “cold” material, heating it and causing it to fuse as well, in a “burn wave” that propagates outward.  “Ignition” occurs when the amount of fusion energy released in this way is equal to the energy in the laser beams that drove the target. As noted above, things didn’t go as planned.  The actual fusion yield achieved in the best experiment was less than that predicted by the best radiation hydrodynamics computer codes available at the time by a factor of about 50, give or take.  The LLNL paper in Physics of Plasmas discusses some of the reasons for this, and describes subsequent improvements to the codes that account for some, but not all, of the experimental discrepancies.  According to the paper, Since these simulation studies were completed, experiments have continued on NIF and have identified several important effects – absent in the previous simulations – that have the potential to resolve at least some of the large discrepancies between simulated and experimental yields.  Briefly, these effects include larger than anticipated low-mode distortions of the imploded core – due primarily to asymmetries in the x-ray flux incident on the capsule, – a larger than anticipated perturbation to the implosion caused by the thin plastic membrane or “tent” used to support the capsule in the hohlraum prior to the shot, and the presence, in some cases, of larger than expected amounts of ablator material mixed into the hot spot. In a later section, the LLNL scientists also note, Since this study was undertaken, some evidence has also arisen suggesting an additional perturbation source other than the three specifically considered here.  That is, larger than anticipated fuel pre-heat due to energetic electrons produced from laser-plasma interactions in the hohlraum. In simple terms, the first of these passages means that the implosions weren’t symmetric enough, and the second means that the fuel may not have been “cold” enough during the implosion process.  Any variation from perfectly spherical symmetry during the implosion can rob energy from the central hot spot, allow material to escape before fusion can occur, mix cold fuel material into the hot spot, quenching it, etc., potentially causing the experiment to fail.  The asymmetries in the x-ray flux mentioned in the paper mean that the target surface would have been pushed harder in some places than in others, resulting in asymmetries to the implosion itself.  A larger than anticipated perturbation due to the “tent” would have seeded instabilities, such as the Rayleigh-Taylor instability.  Imagine holding a straw filled with water upside down.  Atmospheric pressure will prevent the water from running out.  Now imagine filling a perfectly cylindrical bucket with water to the same depth.  If you hold it upside down, the atmospheric pressure over the surface of the water is the same.  Based on the straw experiment, the water should stay in the bucket, just as it did in the straw.  Nevertheless, the water comes pouring out.  
As they say in the physics business, the straw experiment doesn't "scale." The reason for this anomaly is the Rayleigh-Taylor instability. Over such a large surface, small variations from perfect smoothness are gradually amplified, growing to the point that the surface becomes "unstable," and the water comes splashing out. Another, related instability, the Richtmyer-Meshkov instability, leads to similar results in material where shocks are present, as in the NIF experiments. Now, with the benefit of hindsight, it's interesting to look back at some of the events leading up to the decision to build the NIF. At the time, the government used a "key decision" process to approve major proposed projects. The first key decision, known as Key Decision 0, or KD0, was approval to go forward with conceptual design. The second was KD1, approval of engineering design and acquisition. There were more "key decisions" in the process, but after passing KD1, it could safely be assumed that most projects were "in the bag." In the early 90's, a federal advisory committee, known as the Inertial Confinement Fusion Advisory Committee, or ICFAC, had been formed to advise the responsible agency, the Department of Energy (DOE), on matters relating to the national ICF program. Among other things, its mandate included advising the government on whether it should proceed with key decisions on the NIF project. The Committee's advice was normally followed by DOE. At the time, there were six major "program elements" in the national ICF program. These included the three weapons laboratories, LLNL, Los Alamos National Laboratory (LANL), and Sandia National Laboratories (SNL). The remaining three included the Laboratory for Laser Energetics at the University of Rochester (UR/LLE), the Naval Research Laboratory (NRL), and General Atomics (GA). Spokespersons from all these "program elements" appeared before the ICFAC at a series of meetings in the early 90's. The critical meeting, as far as approval of the decision to pass through KD1 is concerned, took place in May 1994. Prior to that time, extensive experimental programs at LLNL's Nova laser, UR/LLE's OMEGA, and a host of other facilities had been conducted to address potential uncertainties concerning whether the NIF could achieve ignition. The best computer codes available at the time had modeled proposed ignition targets, and predicted that several different designs would ignite, typically producing "gains," the ratio of the fusion energy out to the laser energy in, of from 1 to 10. There was just one major fly in the ointment – a brilliant physicist named Steve Bodner, who directed the ICF program at NRL at the time. Bodner told the ICFAC that the chances of achieving ignition on the NIF were minimal, providing his reasons in the form of a detailed physics analysis. Among other things, he noted that there was no way of controlling the symmetry because of blow-off of material from the hohlraum wall, which could absorb both laser light and x-rays. Ablated material from the capsule itself could also absorb laser and x-ray radiation, again destroying symmetry. He pointed out that codes had raised the possibility of pressure perturbations on the capsule surface due to stagnation of the blow-off material on the hohlraum axis. LLNL's response was that these problems could be successfully addressed by filling the hohlraum with a gas such as helium, which would hold back the blow-off from the walls and target.
Bodner replied that such “solutions” had never really been tested because of the inability to do experiments on Nova with sufficient pulse length.  In other words, it was impossible to conduct experiments that would “scale” to the NIF on existing facilities.  In building the NIF, we might be passing from the “straw” to the “bucket.”  He noted several other areas of major uncertainty with NIF-scale targets, such as the possibility of unaccounted for reflection of the laser light, and the possibility of major perturbations due to so-called laser-plasma instabilities. In light of these uncertainties, Bodner suggested delaying approval of KD1 for a year or two until these issues could be more carefully studied.  At that point, we may have gained the technological confidence to proceed.  However, I suspect he knew that two years would never be enough to resolve the issues he had raised.  What Bodner really wanted to do was build a much larger facility, known as the Laboratory Microfusion Facility, or LMF.  The LMF would have a driver energy of from 5 to 10 megajoules compared to the NIF’s 1.8.  It had been seriously discussed in the late 80’s and early 90’s.  Potentially, such a facility could be built with Bodner’s favored KrF laser drivers, the kind used on the Nike laser system at NRL, instead of the glass lasers that had been chosen for NIF.  It would be powerful enough to erase the physics uncertainties he had raised by “brute force.”  Bodner’s proposed approach was plausible and reasonable.  It was also a forlorn hope. Funding for the ICF program had been cut in the early 90’s.  Chances of gaining approval for a beast as expensive as LMF were minimal.  As a result, it was now officially considered a “follow-on” facility to the NIF.  No one took this seriously at the time.  Everyone knew that, if NIF failed, there would be no “follow-on.”  Bodner knew this, the scientists at the other program elements knew it, and so did the members of the ICFAC.  The ICFAC was composed of brilliant scientists.  However, none of them had any real insight into the guts of the computer codes that were predicting ignition on the NIF.  Still, they had to choose between the results of the big codes, and Bodner’s physical insight bolstered by what were, in comparison, “back of the envelope” calculations.  They chose the big codes.  With the exception of Tim Coffey, then Director of NRL, they voted to approve passing through KD1 at the May meeting. In retrospect, Bodner’s objections seem prophetic.  The NIC has failed, and he was not far off the mark concerning the reasons for the failure.  It’s easy to construe the whole affair as a morality tale, with Bodner playing the role of neglected Cassandra, and the LLNL scientists villains whose overweening technological hubris finally collided with the grim realities of physics.  Things aren’t that simple.  The LLNL people, not to mention the supporters of NIF from the other program elements, included many responsible and brilliant scientists.  They were not as pessimistic as Bodner, but none of them was 100% positive that the NIF would succeed.  They decided the risk was warranted, and they may well yet prove to be right. In the first place, as noted above, chances that an LMF might be substituted for the NIF after another year or two of study were very slim.  The funding just wasn’t there.  Indeed, the number of laser beams on the NIF itself had been reduced from the originally proposed 240 to 192, at least in part, for that very reason.  
It was basically a question of the NIF or nothing. Studying the problem to death, now such a typical feature of the culture at our national research laboratories, would have led nowhere. The NIF was never conceived as an energy project, although many scientists preferred to see it in that light. Rather, it was built to serve the national nuclear weapons program. Its supporters were aware that it would be of great value to that program even if it didn't achieve ignition. In fact, it is, and is now providing us with a technological advantage that rival nuclear powers can't match in this post-testing era. Furthermore, LLNL and the other weapons laboratories were up against another problem – what you might call a demographic cliff. The old, testing-era weapons designers were getting decidedly long in the tooth, and it was necessary to find some way to attract new talent. A facility like the NIF, capable of exploring issues in inertial fusion energy, astrophysics, and other non-weapons-related areas of high energy density physics, would certainly help address that problem as well. Finally, the results of the NIC in no way "proved" that ignition on the NIF is impossible. There are alternatives to the current indirect drive approach with frequency-tripled "blue" laser beams. Much more energy, up to around 4 megajoules, might be available if the known problems of using longer wavelength "green" light can be solved. Thanks to theoretical and experimental work done by the ICF team at UR/LLE under the leadership of Dr. Robert McCrory, the possibility of direct drive experiments on the NIF, hitting the target directly instead of shooting the laser beams into a "hohlraum" can, was also left open, using a so-called "polar" illumination approach. Another possibility is the "fast ignitor" approach to ICF, which would dispense with the need for complicated converging shocks to produce a central "hot spot." Instead, once the target had achieved maximum density, the hot spot would be created on the outer surface using a separate driver beam. In other words, while the results of the NIC are disappointing, stay tuned. Pace Dr. Bodner, the scientists at LLNL may yet pull a rabbit out of their hats. Oswald Spengler got it Wrong Sometimes the best metrics for public intellectuals are the short articles they write for magazines. There are page limits, so they have to get to the point. It isn't as easy to camouflage vacuous ideas behind a smoke screen of verbiage. Take, for example, the case of Oswald Spengler. His "Decline of the West" was hailed as the inspired work of a prophet in the years following its publication in 1918. Read Spengler's Wiki entry and you'll see what I mean. He should have quit while he was ahead. Fast forward to 1932, and the Great Depression was at its peak. The Decline of the West appeared to be a fait accompli. Spengler would have been well-advised to rest on his laurels. Instead, he wrote an article for The American Mercury, still edited at the time by the Sage of Baltimore, H. L. Mencken, with the reassuring title, "Our Backs are to the Wall!" It was a fine synopsis of the themes Spengler had been harping on for years, and a prophecy of doom worthy of Jeremiah himself. It was also wrong. According to Spengler, high technology carried within itself the seeds of its own collapse. Man had dared to "revolt against nature." Now the very machines he had created in the process were revolting against man.
At the time he wrote the article he summed up the existing situation as follows: A group of nations of Nordic blood under the leadership of British, German, French, and Americans command the situation. Their political power depends on their wealth, and their wealth consists in their industrial strength. But this in turn is bound up with the existence of coal. The Germanic peoples, in particular, are secured by what is almost a monopoly of the known coalfields… Spengler went on to explain that, Countries industrially poor are poor all around; they cannot support an army or wage a war; therefore they are politically impotent; and the workers in them, leaders and led alike, are objects in the economic policy of their opponents. No doubt he would have altered this passage somewhat had he been around to witness the subsequent history of places like Vietnam, Algeria, and Cambodia. Willpower, ideology, and military genius have trumped political and economic power throughout history. Spengler simply assumed they would be ineffective against modern technology because the "Nordic" powers had not been seriously challenged in the 50 years before he wrote his book. It was a rash assumption. Even more rash were his assumptions about the early demise of modern technology. He "saw" things happening in his own times that weren't really happening at all. For example, The machine, by its multiplication and its refinement, is in the end defeating its own purpose. In the great cities the motor-car has by its numbers destroyed its own value, and one gets on quicker on foot. In Argentina, Java, and elsewhere the simple horse-plough of the small cultivator has shown itself economically superior to the big motor implement, and is driving the latter out. Already, in many tropical regions, the black or brown man with his primitive ways of working is a dangerous competitor to the modern plantation-technic of the white. Unfortunately, motor cars and tractors can't read, so they went right on multiplying without paying any attention to Spengler's book. At least he wasn't naïve enough to believe that modern technology would end because of the exhaustion of the coalfields. He knew that we were quite clever enough to come up with alternatives. However, in making that very assertion, he stumbled into what was perhaps the most fundamental of all his false predictions: the imminence of the "collapse of the West." It is, of course, nonsense to talk, as it was fashionable to do in the Nineteenth Century, of the imminent exhaustion of the coal-fields within a few centuries and of the consequences thereof – here, too, the materialistic age could not but think materially. Quite apart from the actual saving of coal by the substitution of petroleum and water-power, technical thought would not fail ere long to discover and open up still other and quite different sources of power. It is not worth while thinking ahead so far in time. For the west-European-American technology will itself have ended by then. No stupid trifle like the absence of material would be able to hold up this gigantic evolution. Alas, "so far in time" came embarrassingly fast, with the discovery of nuclear fission a mere six years later.
Be that as it may, among the reasons that this "gigantic evolution" was unstoppable was what Spengler referred to as "treason to technics." As he put it, Today more or less everywhere – in the Far East, India, South America, South Africa – industrial regions are in being, or coming into being, which, owing to their low scales of wages, will face us with a deadly competition. The unassailable privileges of the white races have been thrown away, squandered, betrayed. In other words, the "treason" consisted of the white race failing to keep its secrets to itself, but bestowing them on the brown and black races. They, however, were only interested in using this technology against the original creators of the "Faustian" civilization of the West. Once the whites were defeated, they would have no further interest in it: For the colored races, on the contrary, it is but a weapon in their fight against the Faustian civilization, a weapon like a tree from the woods that one uses as scaffolding, but discards as soon as it has served its purpose. This machine-technic will end with the Faustian civilization and one day will lie in fragments, forgotten – our railways and steamships as dead as the Roman roads and the Chinese wall, our giant cities and skyscrapers in ruins, like old Memphis and Babylon. The history of this technic is fast drawing to its inevitable close. It will be eaten up from within. When, and in what fashion, we so far know not. Spengler was wise to include the Biblical caveat that, "…about that day or hour no one knows, not even the angels in heaven, nor the Son, but only the Father" (Matthew 24:36). However, he had too much the spirit of the "end time" Millennialists who have cropped up like clockwork every few decades for the last 2000 years, predicting the imminent end of the world, to leave it at that. Like so many other would-be prophets, his predictions were distorted by a grossly exaggerated estimate of the significance of the events of his own time. Christians, for example, have commonly assumed that reports of war, famine and pestilence in their own time are somehow qualitatively different from the war, famine and pestilence that have been a fixture of our history for the last 2000 years, and conclude that they are witnessing the signs of the end times, when, "…nation shall rise against nation, and kingdom against kingdom: and there shall be famines, and pestilences, and earthquakes, in divers places" (Matthew 24:7). In Spengler's case, the "sign" was the Great Depression, which was at its climax when he wrote the article: The center of gravity of production is steadily shifting away from them, especially since even the respect of the colored races for the white has been ended by the World War. This is the real and final basis of the unemployment that prevails in the white countries. It is no mere crisis, but the beginning of a catastrophe. Of course, Marxism was in high fashion in 1932 as well. Spengler tosses it in for good measure, agreeing with Marx on the inevitability of revolution, but not on its outcome: This world-wide mutiny threatens to put an end to the possibility of technical economic work. The leaders (bourgeoisie, ed.) may take to flight, but the led (proletariat, ed.) are lost. Their numbers are their death. Spengler concludes with some advice, not for us, or our parents, or our grandparents, but our great-grandparents' generation: Only dreamers believe that there is a way out.
Optimism is cowardice… Our duty is to hold on to the lost position, without hope, without rescue, like that Roman soldier whose bones were found in front of a door in Pompeii, who, during the eruption of Vesuvius, died at his post because they forgot to relieve him.  That is greatness.  That is what it means to be a thoroughbred.  The honorable end is the one thing that can not be taken from a man. One must be grateful that later generations of cowardly optimists donned their rose-colored glasses in spite of Spengler, went right on using cars, tractors, and other mechanical abominations, and created a world in which yet later generations of Jeremiahs could regale us with updated predictions of the end of the world.  And who can blame them?  After all, eventually, at some “day or hour no one knows, not even the angels in heaven,” they are bound to get it right, if only because our sun decides to supernova.  When that happens, those who are still around are bound to dust off their ancient history books, smile knowingly, and say, “See, Spengler was right after all!” Steven Pinker, Science, and “Scientism” In an article that appeared recently in The New Republic entitled, “Science is not Your Enemy,” Steven Pinker is ostensibly defending science, going so far as to embrace “scientism.”  As he points out, “The term “scientism” is anything but clear, more of a boo-word than a label for any coherent doctrine.”  That’s quite true, which is reason enough to be somewhat circumspect about self-identifying (if I may coin a term) as a “scientismist.”  Nothing daunted, Pinker does just that, defending scientism in “the good sense.” He informs us that “good scientism” is distinguished by “an explicit commitment to two ideals,” namely, the propositions that the world is intelligible, and that the acquisition of knowledge is hard. Let me say up front that I am on Pinker’s side when it comes to the defense of what he calls “science,” just as I am on his side in rejecting the ideology of the Blank Slate.  Certainly he’s worthy of a certain respect, if only in view of the sort of people who have been coming out of the woodwork to attack him for his latest.  Anyone with enemies like that can’t be all bad.  It’s just that, whenever I read his stuff, I find myself rolling my eyes before long.  Consider, for example, his tome about the Blank Slate.  My paperback version runs to 500 pages give or take, and in all that prose, I find only a single mention of Robert Ardrey, and then only accompanied by the claim that he was “totally and utterly wrong.”  Now, by the account of the Blank Slaters themselves (see, in particular, the essays by Geoffrey Gorer in Man and Aggression, edited by Ashley Montagu), Robert Ardrey was their most effective and influential opponent.  In other words, Pinker wrote a thick tome, purporting to be an account of the Blank Slate, in which he practically ignored the contributions of the most important player in the whole affair, only mentioning him at all in order to declare him wrong, when in fact he was on the same side of the issue as Pinker. Similar problems turn up in Pinker’s latest.  For example, he writes, Just as common, and as historically illiterate, is the blaming of science for political movements with a pseudoscientific patina, particularly Social Darwinism and eugenics.  Social Darwinism was the misnamed laissez-faire philosophy of Herbert Spencer.  
It was inspired not by Darwin's theory of natural selection, but by Spencer's Victorian-era conception of a mysterious natural force for progress, which was best left unimpeded. Here, as in numerous similar cases, it is clear Pinker has never bothered to read Spencer. The claim that he was a "Social Darwinist" was a red herring tossed out by his enemies after he was dead. Based on a minimal fair reading of his work, the claim is nonsense. If actually reading Spencer is too tedious, just Google something like "Spencer Social Darwinism." Check a few of the hits, and you will find that a good number of modern scholars have been fair-minded enough to actively dispute the claim. Other than that, you will find no reference to specific writings of Spencer in which he promotes Social Darwinism as it is generally understood. The same could be said of the laissez faire claim. Spencer supported a small state, but hardly rejected state intervention in all cases. He was a supporter of labor unions, and in his earlier writings even held that land should be held in common. As for "Victorian-era" conceptions, if memory serves, Darwin wrote during that era as well, and while Spencer embraced Lamarckism and had a less than up-to-date notion of how evolution works, I find no reference in any of his work to a "mysterious natural force for progress." Pinker's comments about morality are similarly clouded. He writes, In other words, the worldview that guides the moral and spiritual values of an educated person today is the worldview given to us by science… The facts of science, by exposing the absence of purpose in the laws governing the universe, force us to take responsibility for the welfare of ourselves, our species, and our planet. For the same reason, they undercut any moral or political system based on mystical forces, quests, destinies, dialectics, struggles, or messianic ages. And in combination with a few unexceptionable convictions – that all of us value our own welfare and that we are social beings who impinge on each other and can negotiate codes of conduct – the scientific facts militate toward a defensible morality, namely, adhering to principles that maximize the flourishing of humans and other sentient beings. In other words, Pinker has bought into the Sam Harris "human flourishing" mumbo-jumbo, and thinks that the "facts of science" can somehow become material objects with the power to dictate that which is "good" and that which is "evil." Here Pinker, in company with Harris, has taken leave of his senses. Based on what he wrote earlier in the essay, we know that he is aware that what we understand as morality is the expression of evolved behavioral traits. Those traits are their ultimate cause, and without them morality would literally cease to exist as we know it. They exist in the brains of individuals, solely by virtue of the fact that, at some point in the distant past utterly unlike the present, they promoted our survival.
And yet, in spite of the fact that Pinker must understand, at least at some level, that these things are true, he agrees with Harris that the emotional responses, or, as Hume, whom Pinker also claims to have read, puts it, sentiments, can jump out of our heads, become objects, or things in themselves, independent of the minds of individuals, and, as such, can be manipulated and specified by the “facts of science.”  Presumably, once the “educated” and the “scientists” have agreed on what the “facts of science” tell us is a “defensible morality,” at that point the rest of us become bound to agree with them on the meanings of “good” and “evil” that they pass down to us, must subordinate our own emotions and inherent predispositions regarding such matters to “science,” and presumably be justifiably (by “science”) punished if we do not.  What nonsense! “Science” is not an object, any more than “good” and “evil.”  “Science” cannot independently “say” anything, nor can it create values.  In reality, “science” is a rather vague set of principles and prescriptions for approaching the truth, applied willy-nilly if at all by most “scientists.”  By even embracing the use of the term “science” in that way, Pinker is playing into the hands of his enemies.  He is validating their claim that “science” is actually a thing, but in their case, a bête noire, transcending its real nature as a set of rules, more or less vaguely understood and applied, to become an object in itself.  Once the existence of such a “science” object is accepted, it becomes a mere bagatelle to fix on it the responsibility for all the evils of the world, or, in the case of the Pinkers of the world, all the good. In reality, the issue here is not whether this imaginary “science” object exists and, assuming it does, whether it is “good” or “evil.”  It is about whether we should be empowered to learn things about the universe in which we live or not.  The opponents of “scientism” typically rail against such things as eugenics, Social Darwinism, and the atomic bomb.  These are supposedly the creations of the “science” object.  But, in fact, they are no such thing.  In the case of eugenics and Social Darwinism, they represent the moral choices of individuals.  In the case of the atomic bomb, we have a thing which became possible as a result of the knowledge of the physical world acquired in the preceding half a century, give or take.  What would the opponents of “scientism” have us change?  The decision to build the atomic bomb?  Fine, but in that case they are not opposing “science,” but rather a choice made by individuals.  Opposition to “science” itself can only reasonably be construed as opposition to the acquisition of the knowledge that made the bomb possible to begin with.  If that is what the opponents of “scientism” really mean, let them put their cards on the table.  Let them explain to us in just what ways those things which the rest of us are to be allowed to know will be limited, and just why it is they think they have the right to dictate to the rest of us what we can know and what we can’t. It seems to me this whole “science” thing is getting out of hand.  If we must have an object, it would be much better for us to go back to the Enlightenment and use the term “reason.”  It seems to me that would make it a great deal more clear what we are talking about.  It would reveal the true nature of the debate.  
It is not about the "science" object, and whether it is "good" or "evil," but about whether we should actually try to use our rational minds, or instead relegate our brains to the less ambitious task of serving as a convenient stuffing for our skulls. Of Cold Fusion and the Timidity of ARPA-E ARPA-E, or the Advanced Research Projects Agency – Energy, is supposed to be DOE's version of DARPA. According to its website, its mission …is to fund projects that will develop transformational technologies that reduce America's dependence on foreign energy imports; reduce U.S. energy related emissions (including greenhouse gasses); improve energy efficiency across all sectors of the U.S. economy and ensure that the U.S. maintains its leadership in developing and deploying advanced energy technologies. So far, it has not come up with anything quite as "transformational" as the Internet or stealth technology. There is good reason for this. Its source selection people are decidedly weak in the knees. Consider the sort of stuff it's funded in the latest round of contract awards. The people at DARPA would probably call it "workmanlike." H. L. Mencken, the great Sage of Baltimore, would more likely have called it "pure fla fla." For example, there are "transformational" systems to twiddle with natural gas storage that the industry, not exactly short of cash at the moment, would have been better left to develop on its own, such as:
• Liquid-Piston Isothermal Home Natural Gas Compressor
• Chilled Natural Gas At-Home Refueling
• Superplastic-Formed Gas Storage Tanks
There is the "transformational" university research that is eye-glazingly mundane, and best reserved as filler for the pages of obscure academic journals, such as:
• Cell-level Power Management of Large Battery Packs
• Health Management System for Reconfigurable Battery Packs
• Optimal Operation and Management of Batteries Based on Real Time Predictive Modeling and Adaptive Battery Management Techniques
There is some "groundbreaking" stuff under the rubric of "build a better magnet, and the world will beat a pathway to your door":
• Manganese-Based Permanent Magnet with 40 MGOe at 200°C
• Rare‐Earth‐Free Permanent Magnets for Electrical Vehicle Motors and Wind Turbine Generators: Hexagonal Symmetry Based Materials Systems Mn‐Bi and M‐type Hexaferrite
• Discovery and Design of Novel Permanent Magnets using Non-strategic Elements having Secure Supply Chains
…and so on. Far be it from me to claim that any of this research is useless. It is, however, also what the people at DARPA would call "incremental," rather than transformational. Of course, truly transformational ideas don't grow on trees, and DARPA also funds its share of "workmanlike" projects, but at least the source selection people there occasionally go out on a limb. In the work funded by ARPA-E, on the other hand, I can find nothing that might induce the bureaucrats on Secretary Chu's staff to swallow their gum. If the agency is really serious about fulfilling its mission, it might consider some of the innovative ideas out there for harnessing fusion energy. All of them can be described as "high risk, high payoff," but isn't that the kind of work ARPA-E is supposed to be funding? According to a recent article on the Science Magazine website, the White House has proposed cutting domestic fusion research by 16% to help pay for the U.S. contribution to the international fusion experiment, ITER, under construction in Cadarache, France.
As I’ve pointed out elsewhere, ITER is second only to the International Space Station as the greatest white elephant of all time, and is similarly vacuuming up funds that might otherwise have supported worthwhile research in several other countries.  All the more reason to give a leg up to fusion, a technology that has bedeviled scientists for decades, but that could potentially supply mankind’s energy needs for millennia to come.  Ideas being floated at the moment include advanced fusor concepts such as the Bussard polywell, magneto-inertial fusion, focus fusion, etc.  None of them look particularly promising to me, but if any of them pan out, the potential payoff is huge.  I’ve always been of the opinion that, if we ever do harness fusion energy, it will be by way of some such clever idea rather than by building anything like the current “conventional” inertial or magnetic fusion reactor designs. When it comes to conventional nuclear energy, we are currently in the process of being left in the dust by countries like India and China.  Don’t expect any help from industry here.  They are in the business to make a profit.  There’s certainly nothing intrinsically wrong with that, but at the moment, profits are best maximized by building light water reactors that consume the world’s limited supply of fissile uranium 235 without breeding more fuel to replace it, and spawn long-lived and highly radioactive transuranic actinides in the process that it will be necessary to find a way to safely store for thousands of years into the future.  This may be good for profits, but it’s definitely bad for future generations.  Alternative designs exist that would breed as much new fuel as they consume, be intrinsically safe against meltdown, would destroy the actinides along with some of the worst radioactive fission products, and would leave waste that could be potentially less radioactive than the original ore in a matter of a few hundred years.  DOE’s Office of Nuclear Energy already funds some research in these areas.  Unfortunately, in keeping with the time-honored traditions of government research funding, they like to play it safe, funneling awards to “noted experts” who tend to keep plodding down well-established paths even when they are clearly leading to dead ends.  ITER and the International Space Station are costly examples of where that kind of thinking leads.  If it were really doing its job, an agency like ARPA-E might really help to shake things up a little. Finally, we come to that scariest of boogeymen of “noted experts” the world over; cold fusion, or, as some of its advocates more reticently call it, Low Energy Nuclear Reactions (LENR).  Following the initial spate of excitement on the heels of the announcement by Pons and Fleischmann of excess heat in their experiments with palladium cells, the scientific establishment agreed that such ideas were to be denounced as heretical.  Anathemas and interdicts rained down on their remaining proponents.  Now, I must admit that I don’t have much faith in LENR myself.  I happened to attend the Cold Fusion Workshop in Sante Fe, NM which was held in 1989, not long after the Pons/Fleischmann bombshell, and saw and heard some memorably whacky posters and talks.  I’ve talked to several cold fusion advocates since then, and some appeared perfectly sober, but an unsettlingly large proportion of others seemed to be treading close to the lunatic fringe.  
Just as fusion energy is always “30 years in the future,” cold fusion proponents have been claiming that their opponents will be “eating crow in six months” ever since 1989.  Some very interesting results have been reported.  Unfortunately, they haven’t been reproducible. For all that, LENR keeps hanging around.  It continues to find advocates among those who, for one reason or another, aren’t worried about their careers, or lack respect for authority, or are just downright contrarians.  The Science of Low Energy Nuclear Reactions by Edmund Storms is a useful source for the history of and evidence for LENR.  Websites run by the cold fusion faithful may be found here and here.  Recently, stories have begun cropping up again in “respectable” mags, such as Forbes and Wired.  Limited government funding has been forthcoming from NASA Langley and, at least until recently, from the Navy at its Space and Naval Warfare Systems Command (SPAWAR).  Predictably, such funding is routinely attacked as support for scientific quackery.  The proper response to that from the source selection folks at ARPA-E should be, “So what?”  After all, ARPA-E was created to be a catalyst for innovation. ARPA-E’s objective is to tap into the risk-taking American ethos and to identify and support the pioneers of the future. With the best research and development infrastructure in the world, a thriving innovation ecosystem in business and entrepreneurship, and a generation of youth that is willing to engage with fearless intensity, the U.S. has all the ingredients necessary for future success. The goal of ARPA-E is to harness these ingredients and make a full-court press to address the U.S.’s technological gaps and leapfrog over current energy approaches. The best way to “harness these ingredients and make a full-court press” is not by funding of the next round of incremental improvements in rare earth magnets.  Throwing a few dollars to the LENR people, on the other hand, will certainly be “high risk,” but it just might pan out.  I hope the people at ARPA-E can work up the minimal level of courage it takes to do so.  If the Paris fashions can face down ridicule, so can they.  If they lack the nerve, then DOE would probably do better to terminate its bad imitation of DARPA and feed the money back to its existing offices.  They can continue funding mediocrity just as well as ARPA-E. Pons & Fleischmann Higgs Boson? What’s a Boson? It’s been over a century since Max Planck came up with the idea that electromagnetic energy could only be emitted in fixed units called quanta as a means of explaining the observed spectrum of light from incandescent light bulbs. Starting from this point, great physicists such as Bohr, de Broglie, Schrödinger, and Dirac developed the field of quantum mechanics, revolutionizing our understanding of the physical universe. By the 1930’s it was known that matter, as well as electromagnetic energy, could be described by wave equations. In other words, at the level of the atom, particles do not behave at all as if they were billiard balls on a table, or, in general, in the way that our senses portray physical objects to us at a much larger scale. For example, electrons don’t act like hard little balls flying around outside the nuclei of atoms.  Rather, it is necessary to describe where they are in terms of probability distributions, and how they act in terms of wave functions. 
It is impossible to tell at any moment exactly where they are, a fact formalized mathematically in Heisenberg’s famous Uncertainty Principle. All this has profound implications for the very nature of reality, most of which, even after the passage of many decades, are still unknown to the average lay person. Among other things, it follows from all this that there are two basic types of elementary particles: fermions and bosons. It turns out that they behave in profoundly different ways, and that the idiosyncrasies of neither of them can be understood in terms of classical physics.

Sometimes the correspondence between mathematics and physical reality seems almost magical. So it is with the math that predicts the existence of fermions and bosons. When it was discovered that particles at the atomic level actually behave as waves, a brilliant Austrian scientist named Erwin Schrödinger came up with a now-famous wave equation to describe the phenomenon. Derived from a few elementary assumptions based on postulates due to Einstein and others relating the wavelength and frequency of matter waves to physical quantities such as momentum and energy, and on the behavior of waves in general, the Schrödinger equation could be solved to find wave functions. It was found that these wave functions were complex numbers, that is, they had a real component, and an “imaginary” component that was a multiple of i, the square root of minus one. For example, such a number might be written down mathematically as x + iy. Each such number has a complex conjugate, found by changing the sign of the complex term. The complex conjugate of the above number is, therefore, x – iy. Max Born found that the probability of finding a physical particle at any given point in space and time could be derived from the product of a solution to Schrödinger’s equation and its complex conjugate.

So far, so good, but eventually it was realized that there was a problem with describing particles in this way that didn’t arise in classical physics: you couldn’t tell them apart! Elementary particles are, after all, indistinguishable. One electron, for example, resembles every other electron like so many peas in a pod. Suppose you could put two electrons in a glass box, and set them in motion bouncing off the walls. Assuming you had very good eyes, you wouldn’t have any trouble telling the two of them apart if they behaved like classical billiard balls. You would simply have to watch their trajectories as they bounced around in the box. However, they don’t behave like billiard balls. Their motion must be described by wave functions, and wave functions can overlap, making it impossible to tell which wave function belongs to which electron! Trying to measure where they are won’t help, because the wave functions are changed by the very act of measurement.

All this was problematic, because if elementary particles really were indistinguishable in that way, they also had to be indistinguishable in the mathematical equations that described their behavior. As noted above, it had been discovered that the physical attributes of a particle could be determined in terms of the product of a solution to Schrödinger’s equation and its complex conjugate. Assuming for the moment that the two electrons in the box didn’t collide or otherwise interact with each other, that implies that the solution for the two particle system would depend on the product of the solution for both particles and their complex conjugates.
Unfortunately, the simple product didn’t work. If the particles were labeled and the labels switched around in the solution, the answer came out different. The particles were distinguishable! What to do?

Well, Schrödinger’s equation has a very useful mathematical property. It is linear. What that means in practical terms is that if the product of the wave functions for the two particle system is a solution, then any combination of the products will also be a solution. It was found that if the overall solution was expressed as the product of the two wave functions plus their product with the labels of the two particles interchanged, or as the product of the two wave functions minus their product with the labels interchanged, the resulting probability density function was not changed by switching the labels around. The particles remained indistinguishable!

The solution to the Schrödinger equation, referred to mathematically as an eigenfunction, is called symmetric in the plus case, and antisymmetric in the minus case. It turns out, however, that if you do the math, particles act in very different ways depending on whether the plus sign or the minus sign is used. And here’s where the magic comes in. So far we’ve just been doing math, right? We’ve just been manipulating symbols to get the math to come out right. Well, as the great physicist Richard Feynman once put it, “To those who do not know mathematics it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature.” So it is in this case. The real particles act just as the math predicts, and in ways that are completely unexplainable in terms of classical physics! Particles that can be described by an antisymmetric eigenfunction are called fermions, and particles that can be described by a symmetric eigenfunction are called bosons.

How do they actually differ? Well, for reasons I won’t go into here, the so-called exclusion principle applies to fermions. There can never be more than one of them in exactly the same quantum state. Electrons are fermions, and that’s why they are arranged in different levels as they orbit the nucleus of an atom. Bosons behave differently, and in ways that can be quite spectacular. Assuming a collection of bosons can be cooled to a low enough temperature, they will tend to all condense into the same low energy quantum state. As it happens, the helium atom is a boson. When it is cooled below a temperature of 2.18 degrees above absolute zero, it shows some very remarkable large scale quantum effects. Perhaps the weirdest of these is superfluidity. In this state, it behaves as if it had no viscosity at all, and can climb up the sides of a container and siphon itself out over the top!

No one really knows what matter is at a fundamental level, or why it exists at all. However, we do know enough about it to realize that our senses only tell us how it acts at the large scales that matter to most living creatures. They don’t tell us anything about its essence. It’s unfortunate that now, nearly a century after some of these wonderful discoveries about the quantum world were made, so few people know anything about them. It seems to me that knowing about them and the great scientists who made them adds a certain interest and richness to life. If nothing else, when physicists talk about the Higgs boson, it’s nice to have some clue what they’re talking about.
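For readers who want to see that bookkeeping in action, here is a minimal numerical sketch. The two one-particle wave functions below are arbitrary, made-up examples (not anything specific to electrons); the point is only to check which probability densities survive swapping the particle labels.

```python
# Minimal check of the symmetrization argument: build the plain product, the
# symmetric combination (bosons) and the antisymmetric combination (fermions)
# for two particles at positions x1, x2, and compare the probability densities
# before and after swapping the particle labels.
import numpy as np

def psi_a(x):
    return np.exp(-(x - 1.0)**2) * np.exp(1j * 0.7 * x)   # arbitrary complex wave function

def psi_b(x):
    return np.exp(-(x + 1.0)**2) * np.exp(-1j * 0.3 * x)  # a different arbitrary one

def density(psi):
    return (psi * np.conj(psi)).real      # Born rule: density = psi times its conjugate

x1, x2 = 0.4, -0.9

plain         = psi_a(x1) * psi_b(x2)
plain_swapped = psi_a(x2) * psi_b(x1)

sym             = psi_a(x1) * psi_b(x2) + psi_a(x2) * psi_b(x1)   # bosons
sym_swapped     = psi_a(x2) * psi_b(x1) + psi_a(x1) * psi_b(x2)
antisym         = psi_a(x1) * psi_b(x2) - psi_a(x2) * psi_b(x1)   # fermions
antisym_swapped = psi_a(x2) * psi_b(x1) - psi_a(x1) * psi_b(x2)

print(density(plain), density(plain_swapped))        # different: labels matter
print(density(sym), density(sym_swapped))            # identical
print(density(antisym), density(antisym_swapped))    # identical (the sign flip drops out)
# Setting x1 == x2 makes the antisymmetric density vanish: the exclusion principle in miniature.
```

The plain product gives a different density once the labels are swapped; the symmetric and antisymmetric combinations do not, and forcing the two positions to coincide kills the antisymmetric density entirely, which is the exclusion principle showing up in the arithmetic.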
Superfluid liquid helium creeping over the edge of a beaker

Fusion Update: Signs of Life from the National Ignition Facility

The National Ignition Facility, or NIF, is a huge, 192 beam laser system, located at Lawrence Livermore National Laboratory in California. It was designed, as the name implies, to achieve thermonuclear ignition in the laboratory. “Ignition” is generally accepted to mean getting a greater energy output from fusion than the laser input energy. Unlike magnetic confinement fusion, the approach currently being pursued at the International Thermonuclear Experimental Reactor, or ITER, now under construction in France, the goal of the NIF is to achieve ignition via inertial confinement fusion, or ICF, in which the fuel material is compressed and heated to the extreme conditions at which fusion occurs so quickly that it is held in place by its own inertia.

The NIF has been operational for over a year now, and a two year campaign is underway with the goal of achieving ignition by the end of this fiscal year. Recently, there has been a somewhat ominous silence from the facility, manifesting itself as a lack of publications in the major journals favored by fusion scientists. That doesn’t usually happen when there is anything interesting to report. Finally, however, some papers have turned up in the journal Physics of Plasmas, containing reports of significant progress.

To grasp the importance of the papers, it is necessary to understand what is supposed to occur within the NIF target chamber for fusion to occur. Of course, just as in magnetic fusion, the goal is to bring a mixture of deuterium and tritium, two heavy isotopes of hydrogen, to the extreme conditions at which fusion takes place. In the ICF approach, this hydrogen “fuel” is contained in a tiny, BB-sized target. However, the lasers are not aimed directly at the fuel “capsule.” Instead, the capsule is suspended in the middle of a tiny cylinder made of a heavy metal like gold or uranium. The lasers are fired through holes on each end of the cylinder, striking the interior walls, where their energy is converted to x-rays. It is these x-rays that must actually bring the target to fusion conditions.

It was recognized many years ago that one couldn’t achieve fusion ignition by simply heating up the target. That would require a laser driver orders of magnitude bigger than the NIF. Instead, it is first necessary to compress, or implode, the fuel material to extremely high density. Obviously, it is harder to “squeeze” hot material than cold material to the necessary high densities, so the fuel must be kept as “cold” as possible during the implosion process. However, cold fuel won’t ignite, raising the question of how to heat it up once the necessary high densities have been achieved.

It turns out that the answer is shocks. When the laser generated x-rays hit the target surface, they do so with such force that it begins to implode faster than the speed of sound. Everyone knows that when a plane breaks the sound barrier, it, too, generates a shock, which can be heard as a sonic boom. The same thing happens in ICF fusion targets. When such a shock converges at the center of the target, the result is a small “hot spot” in the center of the fuel. If the temperature in the hot spot were high enough, fusion would occur. Each fusion reaction would release a high energy helium nucleus, or alpha particle, and a neutron.
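For a rough sense of scale, here is a back-of-the-envelope sketch of how many such reactions it takes for the fusion output to match the laser input. The 17.6 MeV per D-T reaction is the standard figure; the 1.8 MJ is the NIF design laser energy mentioned below; everything else is just unit conversion.

```python
# Rough arithmetic: number of D-T fusion reactions needed for the fusion yield
# to equal the NIF design laser energy of ~1.8 MJ, and the mass of fuel burned.
MEV_TO_J = 1.602e-13          # joules per MeV
E_DT     = 17.6 * MEV_TO_J    # energy per D-T reaction (3.5 MeV alpha + 14.1 MeV neutron)
E_LASER  = 1.8e6              # NIF design laser energy, joules

reactions_needed = E_LASER / E_DT
print(f"{reactions_needed:.2e} reactions")            # roughly 6e17 reactions

# One deuteron and one triton are consumed per reaction:
N_A = 6.022e23
fuel_mass_g = reactions_needed * (2.014 + 3.016) / N_A
print(f"{fuel_mass_g * 1e6:.1f} micrograms of D-T")   # roughly 5 micrograms burned
```

In other words, burning only a few micrograms of the milligram-scale fuel load is enough to match the laser energy on paper; the hard part is getting the hot spot to ignite and burn at all.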
The alpha particles would be slammed to a stop in the surrounding cold fuel material, heating it, in turn, to fusion conditions. This would result in a fusion “burn wave” that would propagate out through the rest of the fuel, completing the fusion process.

The problem is that one shock isn’t enough to create such a “hot spot.” Four of them are required, all precisely timed by the carefully tailored NIF laser pulse to converge at the center of the target at exactly the same time. This is where real finesse is needed in laser fusion. The implosion must be extremely symmetric, or the shocks will not converge properly. The timing must be exact, and the laser pulse must deliver just the right amount of energy.

One problem in the work to date has been an inability to achieve high enough implosion velocities for the above scenario to work as planned. One of the Physics of Plasmas papers reports that, by increasing the laser energy and replacing some of the gold originally used in the wall of the cylinder, or “hohlraum,” in which the fuel capsule is mounted with depleted uranium, velocities of 99% of those required for ignition have been achieved. In view of the recent announcement that a shot on the NIF had exceeded its design energy of 1.8 megajoules, it appears the required velocity is within reach. Another of the Physics of Plasmas papers dealt with the degree to which implosion asymmetries were causing harmful mixing of the surrounding cold fuel material into the imploded core of the target. It, too, provided grounds for optimism.

In the end, I suspect the success or failure of the NIF will depend on whether the complex sequence of four shocks can really be made to work as advertised. That will depend on the accuracy of the physics algorithms in the computer codes that have been used to model the experiments. Time and again, earlier and less sophisticated codes have been wrong because they didn’t accurately account for all the relevant physics. There is no guarantee that critical phenomena have not been left out of the current versions as well. We may soon find out, if the critical series of experiments planned to achieve ignition before the end of the fiscal year are carried out as planned.

One can but hope they will succeed, if only because some of our finest scientists have dedicated their careers to the quest to achieve the elusive goal of controlled fusion. Even if they do, fusion based on the NIF approach is unlikely to become a viable source of energy, at least in the foreseeable future. Laser fusion may prove scientifically feasible, but getting useful energy out of it will be an engineering nightmare, dangerous because of the need to rely on highly volatile and radioactive tritium, and much too expensive to compete with potential alternatives. I know many of the faithful in the scientific community will beg to differ with me, but, trust me, laser fusion energy ain’t gonna happen.

On the other hand, if ignition is achieved, the NIF will be invaluable to the country, not as a source of energy, but for the reason it was funded in the first place – to ensure that our nation has an unmatched suite of experimental facilities to study the physics of nuclear weapons in an era free of nuclear testing. As long as we have unique access to facilities like the NIF, which can approach the extreme physical conditions within exploding nukes, we will have a significant leg up on the competition as long as the test ban remains in place.
For that, if for no other reason, we should keep our fingers crossed that the NIF team can finally clear the last technical hurdles and reach the goal they have been working towards for so long.

Fusion ignition process, courtesy of Lawrence Livermore National Laboratory

Space Colonization and Stephen Hawking

Stephen Hawking is in the news again as an advocate for space colonization. He raised the issue in a recent interview with the Canadian Press, and will apparently include it as a theme of his new TV series, Brave New World with Stephen Hawking, which debuts on Discovery World HD on Saturday. There are a number of interesting aspects to the story this time around. One that most people won’t even notice is Hawking’s reference to human nature. Here’s what he had to say.

The fact that Hawking can matter-of-factly assert something like that about innate behavior in humans as if it were a matter of common knowledge speaks volumes about the amazing transformation in public consciousness that’s taken place in just the last 10 or 15 years. If he’d said something like that about “selfish and aggressive instincts” 50 years ago, the entire community of experts in the behavioral sciences would have dismissed him as an ignoramus at best, and a fascist and right wing nut case at worst. It’s astounding, really. I’ve watched this whole story unfold in my lifetime. It’s just as stunning as the paradigm shift from an earth-centric to a heliocentric solar system, only this time around, Copernicus and Galileo are unpersons, swept under the rug by an academic and professional community too ashamed of their own past collective imbecility to mention their names. Look in any textbook on Sociology, Anthropology, or Evolutionary Psychology, and you’ll see what the sounds of silence look like in black and white. Aside from a few obscure references, the whole thing is treated as if it never happened. Be grateful, dear reader. At last we can say the obvious without being shouted down by the “experts.” There is such a thing as human nature.

Now look at the comments after the story in the Winnipeg Free Press I linked above. Here are some of them.

“Our only chance of long-term survival is not to remain lurking on planet Earth, but to spread out into space.” If that is the case, perhaps we don’t deserve to survive. If we bring destruction to our planet, would it not be in the greater interest to destroy the virus, or simply let it expire, instead of spreading its virulence throughout the galaxy? And who would decide who gets to go?

Also, “Our only chance of long-term survival is not to remain lurking on planet Earth, but to spread out into space.” What a stupid thing to say: if we can’t survive ‘lurking’ on planet Earth then who’s to say humans wouldn’t ruin things off of planet Earth?

I will not go through any of this as I will be dead by then and gone to a better place as all those who remain and go through whatever happenings in the Future,will also do!

I’ve written a lot about morality on this blog. These comments speak to the reasons why getting it right about morality, why understanding its real nature, and why it exists, are important. All of them are morally loaded. As is the case with virtually all morally loaded comments, their authors couldn’t give you a coherent explanation of why they have those opinions. They just feel that way. I don’t doubt that they’re entirely sincere about what they say.
The genetic programming that manifests itself as human moral behavior evolved many millennia ago in creatures who couldn’t conceive of themselves as members of a worldwide species, or imagine travel into space. What these comments demonstrate is something that’s really been obvious for a long time. In the environment that now exists, vastly different as it is from the one in which our moral predispositions evolved, they can manifest themselves in ways that are, by any reasonable definition of the word, pathological. In other words, they can manifest themselves in ways that no longer promote our survival, but rather the opposite.

As can be seen from the first comment, for example, thanks to our expanded consciousness of the world we live in, we can conceive of such an entity as “all mankind.” Our moral programming predisposes us to categorize our fellow creatures into ingroups and outgroups. In this case, “all mankind” has become an outgroup or, as the commenter puts it, a “virus.” The demise, not only of the individual commenter, but of all mankind, has become a positive Good. More or less the same thing can be said about the second comment. This commenter apparently believes that it would be better for humans to become extinct than to “mess things up.” For whom?

As for the third commenter, survival in this world is unimportant to him because he believes in eternal survival in a future imaginary world under the proprietorship of an imaginary supernatural being. It is unlikely that this attitude is more conducive to our real genetic survival than those of the first two commenters. I submit that if these commenters had an accurate knowledge of the real nature of human morality in the first place, and were free of delusions about supernatural beings in the second, the tone of their comments would be rather different.

And what of my opinion on the matter? In my opinion, morality is the manifestation of genetically programmed traits that evolved because they happened to promote our survival. No doubt because I understand morality in this way, I have a subjective emotional tendency to perceive the Good as my own genetic survival, the survival of my species, and the survival of life as it has evolved on earth, not necessarily in that order. Objectively, my version of the Good is no more legitimate or objectively valid than those of the three commenters. In some sense, you might say it’s just a whim. I do, however, think that my subjective feelings on the matter are reasonable. I want to pursue as a “purpose” that which the evolution of morality happened to promote: survival. It seems to me that an evolved, conscious biological entity that doesn’t want to survive is dysfunctional – it is sick. I would find the realization that I am sick and dysfunctional distasteful. Therefore, I choose to survive. In fact, I am quite passionate about it. I believe that, if others finally grasp the truth about what morality really is, they are likely to share my point of view. If we agree, then we can help each other. That is why I write about it.

By all means, then, let us colonize space, and not just our solar system, but the stars. We can start now. We lack sources of energy capable of carrying humans to even the nearest stars, but we can send life, even if only single-celled life. Let us begin.

Belgium Joins the Nuclear de-Renaissance

The move away from nuclear power in Europe is becoming a stampede.
According to Reuters, the Belgians are now on the bandwagon, with plans for shutting down the country’s last reactors in 2025. The news comes as no surprise, as the anti-nukers in Belgium have had the upper hand for some time. However, the agreement reached by the country’s political parties has been made “conditional” on whether the energy deficit can be made up by renewable sources. Since Belgium currently gets about 55 percent of its power from nuclear, the chances of that appear slim. It’s more likely that baseload power deficits will be made up with coal and gas plants that emit tons of carbon and, in the case of coal, represent a greater radioactive hazard than nuclear because of the uranium and thorium they spew into the atmosphere. No matter. Since Fukushima, global warming hysteria is passé and anti-nuclear hysteria is back in fashion again for the professional saviors of the world.

It will be interesting to see how all this turns out in the long run. In the short term it will certainly be a boon to China and India. They will continue to expand their nuclear capacity and their lead in advanced nuclear technology, with a windfall of cheaper fuel thanks to Western anti-nuclear activism. By the time the Europeans come back to the real world and finally realize that renewables aren’t going to cover all their energy needs, they will likely be forced to fall back on increasingly expensive and heavily polluting fossil fuels. Germany is already building significant new coal-fired capacity.

Of course, we may be dealt a wild card if one of the longshot schemes for taming fusion on the cheap actually works. The odds look long at the moment, though. We’re hearing nothing but a stony silence from the National Ignition Facility, which bodes ill for what seems to be the world’s last best hope to perfect inertial confinement fusion. Things don’t look much better at ITER, the flagship facility for magnetic fusion, the other mainstream approach. There are no plans to even fuel the facility before 2028.

DARPA’s “100 Year Starship” and Planetary Colonization

DARPA seems to have its priorities straight when it comes to space exploration. The agency is funding what it calls the “100 Year Starship” program to study novel propulsion systems with the eventual goal of colonizing space. Pete Worden, Director of NASA’s Ames Center, suggests that Mars might be colonized by 2030 via one-way missions. It’s an obvious choice, really. There’s little point in sending humans to Mars unless they’re going to stay there, and, at least from my point of view, establishing a permanent presence on the red planet is a good idea. My point of view is based on the conclusion that, if there’s really anything that we “ought” to do, it’s survive. Everything about us that makes us what we are evolved because it promoted our survival, so it seems that survival is a reasonable goal. There’s no absolutely legitimate reason why we should survive, but, if we don’t, it would seem to indicate that we are a dysfunctional species, and I find that thought unpleasant. There, in a nutshell, is my rationale for making human survival my number one priority. If we seek to survive then, when it comes to planets, it would be unwise to put all of our eggs in one basket. Stephen Hawking apparently agrees with me on this, as can be seen here and here. In his words,

It will be difficult enough to avoid disaster on planet Earth in the next hundred years, let alone the next thousand, or million.
The human race shouldn’t have all its eggs in one basket, or on one planet. Let’s hope we can avoid dropping the basket until we have spread the load.

Not unexpectedly in this hypermoralistic age, morality is being dragged into the debate. The usual “ethics experts” are wringing their hands about how and under what circumstances we have a “right” to colonize space, and what we must do to avoid being “immoral” in the process. Related discussions can be found here and here. Apparently it never occurs to people who raise such issues that human beings make moral judgments and are able to conceive of such things as “rights” only because of the existence of emotional wiring in our brains that evolved because it promoted our survival and that of our prehuman ancestors. Since it evolved at times and under circumstances that were apparently uninfluenced by what was happening on other planets, morality and “rights” are relevant to the issue only to the extent that they muddy the waters.

Assuming that others agree with me and Dr. Hawking that survival is a desirable goal, then ultimately we must seek to move beyond our own solar system. Unfortunately, there are severe constraints on our ability to send human beings on such long voyages owing to the vast amounts of energy that would be necessary to make interstellar journeys within human lifetimes. For the time being, at least, we must rely on very small vessels that may take a very long time to reach their goals. Nanotechnology is certainly part of the answer. Tiny probes might survey the earth-like planets we discover to determine their capacity to support life. Those found suitable should be seeded with life as soon as possible. Again, because of energy constraints, it may only be possible to send one-celled or very simple life forms at first. They can survive indefinitely long voyages in space, and would be the logical choice to begin seeding other planets. Self-replicating nano-robots might then be sent capable of building a suitable environment for more complex life forms, including incubators and surrogate parents. At that point, it would become possible to send more complex life forms, including human beings, in the form of frozen fertilized eggs. These are some of the things we might consider doing if we consider our survival important.

Of course, any number of the pathologically pious among us might find what I’ve written above grossly immoral. The fact remains that there is no legitimate basis for such a judgment. Morality exists because it promoted our survival. There can be nothing more immoral than failing to survive.

The Daedalus Starship
GM expands market for hydrogen fuel cells beyond vehicles

General Motors is finding new markets for its hydrogen fuel cell systems, announcing that it will work with another company to build mobile electricity generators, electric vehicle charging stations and power generators for ...

Hydrogen (pronounced /ˈhaɪdrədʒən/) is the chemical element with atomic number 1. It is represented by the symbol H. At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas with the molecular formula H2. With an atomic weight of 1.00794 u, hydrogen is the lightest element.

Hydrogen is the most abundant chemical element, constituting roughly 75% of the universe's elemental mass. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth. Industrial production is from hydrocarbons such as methane, with most being used "captively" at the production site. The two largest uses are in fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen may be produced from water by electrolysis at substantially greater cost than production from natural gas.

The most common isotope of hydrogen is protium (a name rarely used, symbol H), with a single proton and no neutrons. In ionic compounds hydrogen can take the form of a negative charge (an anion known as a hydride, written H−) or of a positively charged species, H+. The latter cation is written as though composed of a bare proton, but in reality, hydrogen cations in ionic compounds always occur as more complex species. Hydrogen forms compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, with many reactions exchanging protons between soluble molecules. As the only neutral atom with an analytic solution to the Schrödinger equation, the study of the energetics and bonding of the hydrogen atom played a key role in the development of quantum mechanics.

Hydrogen is important in metallurgy, as it can embrittle many metals, complicating the design of pipelines and storage tanks. Hydrogen is highly soluble in many rare earth and transition metals and is soluble in both nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice.
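To put a rough number behind the cost comparison with natural-gas-derived hydrogen, here is a back-of-envelope sketch of the theoretical electricity requirement for water electrolysis. The Faraday constant and the approximate thermoneutral cell voltage are standard values; the $0.05/kWh electricity price is only an illustrative assumption, not a quoted market figure.

```python
# Theoretical electricity needed to make 1 kg of H2 by water electrolysis
# (2 H2O -> 2 H2 + O2, i.e. two electrons transferred per H2 molecule).
F     = 96485.0    # Faraday constant, C/mol
V_TN  = 1.48       # approximate thermoneutral cell voltage, volts
M_H2  = 2.016      # molar mass of H2, g/mol

e_per_mol  = 2 * F * V_TN                       # ~286 kJ per mol of H2
kwh_per_kg = e_per_mol * (1000 / M_H2) / 3.6e6  # joules -> kWh, per kg
print(f"{kwh_per_kg:.1f} kWh per kg of H2")     # ~39 kWh/kg; real electrolyzers need ~50+

# Illustrative electricity-only cost at an assumed $0.05 per kWh:
print(f"${kwh_per_kg * 0.05:.2f} per kg of H2")
```

Even at this idealized minimum, the electricity bill alone is comparable to typical quoted costs for hydrogen made from natural gas, which is the sense in which electrolysis is the more expensive route.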
Chaotic Quantum Motion of Two Particles in a 3D Harmonic Oscillator Potential

A system with three degrees of freedom, consisting of a superposition of three coherent stationary eigenfunctions with commensurate energy eigenvalues and a constant relative phase, can exhibit chaotic motion in the de Broglie–Bohm formulation of quantum mechanics (see Quantum Motion of Two Particles in a 3D Trigonometric Pöschl–Teller Potential). We consider here an analog using a three-dimensional harmonic-oscillator potential. In this case, the velocities of the particles are autonomous, with a complex, chaotic trajectory structure. Two particles are placed randomly, separated by an initial distance, on the boundary of the harmonic potential.

The dynamic behavior for such a system is quite complex. Some of the curves are closed and periodic, while others are quasi-periodic. In the region of nodal points of the wavefunction, the trajectories apparently become accelerated and chaotic. The parameters have to be chosen carefully, because of the singularities in the velocities and the resulting large oscillations, which can lead to very unstable trajectories. The motion originates from the relative phase of the total wavefunction, which has no analog in classical particle mechanics. Further investigation to capture the full dynamics of the system is necessary. The graphics show three-dimensional contour plots of the squared wavefunction (if enabled) and two initially neighboring trajectories. Black points mark the initial positions of the two quantum particles and green points the actual positions. Blue points indicate the nodal point structure.

Contributed by: Klaus von Bloh (July 2015)
Open content licensed under CC BY-NC-SA

In units with $\hbar = m = \omega = 1$, Hermite polynomials arise in the solutions of the stationary one-dimensional Schrödinger equation
$-\tfrac{1}{2}\,\phi_n''(x) + \tfrac{1}{2}\,x^2\,\phi_n(x) = E_n\,\phi_n(x)$,
with $E_0 = \tfrac{1}{2}$, $E_1 = \tfrac{3}{2}$, and so on ($E_n = n + \tfrac{1}{2}$). The unnormalized eigenfunctions are defined by
$\phi_n(x) = H_n(x)\, e^{-x^2/2}$,
where the $H_n$ are Hermite polynomials.

A degenerate, unnormalized, complex-valued wavefunction for the three-dimensional case can be given by a superposition of product eigenstates whose quantum numbers are permutations of one another,
$\Psi(x,y,z,t) = \big(\phi_i(x)\,\phi_j(y)\,\phi_k(z) + e^{i\alpha}\,\phi_j(x)\,\phi_k(y)\,\phi_i(z) + \phi_k(x)\,\phi_i(y)\,\phi_j(z)\big)\, e^{-iEt}$,
where $\phi_i$, $\phi_j$, $\phi_k$ are eigenfunctions and $E_i$, $E_j$, $E_k$ are permuted eigenenergies of the corresponding stationary one-dimensional Schrödinger equation, with total energy $E = E_i + E_j + E_k$ the same for every term. The parameter $\alpha$ is a constant phase shift, and the eigenvalues depend on the three quantum numbers $(i, j, k)$. In this Demonstration, the wavefunction is a particular superposition of this form with fixed quantum numbers and phase shift.

Because every term in the superposition carries the same energy $E$, the square of the Schrödinger wavefunction, $\varrho = \Psi\,\Psi^*$, where $\Psi^*$ is its complex conjugate, is not time dependent. The velocity field is calculated from the gradient of the phase of the total wavefunction written in the eikonal form (often called polar form) $\Psi = R\, e^{iS}$. The time-dependent phase function obtained from the total wavefunction is $S = \mathrm{Im}\,\ln\Psi$, whose only time dependence is the common term $-Et$. The corresponding velocity field $\vec v = \nabla S$ therefore becomes time independent (autonomous) because of the time-independent gradient of the phase function.

In the program, if PlotPoints, AccuracyGoal, PrecisionGoal, MaxSteps, and MaxIterations are enabled, increasing them will give more accurate results.

[1] S. Goldstein. "Bohmian Mechanics." The Stanford Encyclopedia of Philosophy. (Jul 30, 2015)
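The Demonstration itself runs in the Wolfram Language, but the guidance equation is easy to prototype in any language. Below is a minimal Python sketch, assuming units $\hbar = m = \omega = 1$ and an illustrative choice of permuted quantum numbers (1, 0, 0) with relative phase $\alpha = \pi/2$ (not necessarily the exact superposition or parameters used in the Demonstration), that integrates a single de Broglie–Bohm trajectory from the velocity field $v = \mathrm{Im}(\nabla\Psi/\Psi)$.

```python
# Minimal de Broglie-Bohm trajectory for a superposition of 3D harmonic
# oscillator eigenstates, in units hbar = m = omega = 1.
import numpy as np
from numpy.polynomial.hermite import hermval
from scipy.integrate import solve_ivp

def phi(n, x):
    """Unnormalized 1D oscillator eigenfunction H_n(x) * exp(-x^2/2)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2)

def energy(n):
    return n + 0.5

# Illustrative superposition: permutations of (1, 0, 0) with one relative phase.
alpha  = np.pi / 2
perms  = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
coeffs = [1.0, np.exp(1j * alpha), 1.0]

def psi(r, t):
    x, y, z = r
    total = 0.0 + 0.0j
    for (i, j, k), c in zip(perms, coeffs):
        E = energy(i) + energy(j) + energy(k)          # same E for every term (degenerate)
        total += c * phi(i, x) * phi(j, y) * phi(k, z) * np.exp(-1j * E * t)
    return total

def velocity(t, r, h=1e-5):
    """Guidance equation v = Im(grad psi / psi), gradient by central differences.
    Near nodal points psi -> 0 and the velocity diverges, as noted in the text."""
    p0 = psi(r, t)
    v = np.zeros(3)
    for a in range(3):
        dr = np.zeros(3)
        dr[a] = h
        v[a] = np.imag((psi(r + dr, t) - psi(r - dr, t)) / (2 * h) / p0)
    return v

r0  = np.array([1.0, 0.5, -0.3])                       # initial particle position
sol = solve_ivp(velocity, (0.0, 20.0), r0, max_step=0.01)
print(sol.y[:, -1])                                    # position at t = 20
```

Because every term in the superposition carries the same total energy, the velocity field returned by `velocity` is in fact time independent, which is exactly the autonomy of the flow described above; a second trajectory started a small distance away can be integrated the same way to visualize the divergence of initially neighboring paths.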
@article{8999, abstract = {In many basic shear flows, such as pipe, Couette, and channel flow, turbulence does not arise from an instability of the laminar state, and both dynamical states co-exist. With decreasing flow speed (i.e., decreasing Reynolds number) the fraction of fluid in laminar motion increases while turbulence recedes and eventually the entire flow relaminarizes. The first step towards understanding the nature of this transition is to determine if the phase change is of either first or second order. In the former case, the turbulent fraction would drop discontinuously to zero as the Reynolds number decreases while in the latter the process would be continuous. For Couette flow, the flow between two parallel plates, earlier studies suggest a discontinuous scenario. In the present study we realize a Couette flow between two concentric cylinders which allows studies to be carried out in large aspect ratios and for extensive observation times. The presented measurements show that the transition in this circular Couette geometry is continuous suggesting that former studies were limited by finite size effects. A further characterization of this transition, in particular its relation to the directed percolation universality class, requires even larger system sizes than presently available. }, author = {Avila, Kerstin and Hof, Björn}, issn = {1099-4300}, journal = {Entropy}, number = {1}, publisher = {MDPI}, title = {{Second-order phase transition in counter-rotating taylor-couette flow experiment}}, doi = {10.3390/e23010058}, volume = {23}, year = {2021}, } @article{9005, abstract = {Studies on the experimental realization of two-dimensional anyons in terms of quasiparticles have been restricted, so far, to only anyons on the plane. It is known, however, that the geometry and topology of space can have significant effects on quantum statistics for particles moving on it. Here, we have undertaken the first step toward realizing the emerging fractional statistics for particles restricted to move on the sphere instead of on the plane. We show that such a model arises naturally in the context of quantum impurity problems. In particular, we demonstrate a setup in which the lowest-energy spectrum of two linear bosonic or fermionic molecules immersed in a quantum many-particle environment can coincide with the anyonic spectrum on the sphere. This paves the way toward the experimental realization of anyons on the sphere using molecular impurities. Furthermore, since a change in the alignment of the molecules corresponds to the exchange of the particles on the sphere, such a realization reveals a novel type of exclusion principle for molecular impurities, which could also be of use as a powerful technique to measure the statistics parameter. Finally, our approach opens up a simple numerical route to investigate the spectra of many anyons on the sphere. Accordingly, we present the spectrum of two anyons on the sphere in the presence of a Dirac monopole field.}, author = {Brooks, Morris and Lemeshko, Mikhail and Lundholm, D. and Yakaboylu, Enderalp}, issn = {10797114}, journal = {Physical Review Letters}, number = {1}, publisher = {American Physical Society}, title = {{Molecular impurities as a realization of anyons on the two-sphere}}, doi = {10.1103/PhysRevLett.126.015301}, volume = {126}, year = {2021}, } @article{9010, abstract = {Availability of the essential macronutrient nitrogen in soil plays a critical role in plant growth, development, and impacts agricultural productivity. 
Plants have evolved different strategies for sensing and responding to heterogeneous nitrogen distribution. Modulation of root system architecture, including primary root growth and branching, is among the most essential plant adaptions to ensure adequate nitrogen acquisition. However, the immediate molecular pathways coordinating the adjustment of root growth in response to distinct nitrogen sources, such as nitrate or ammonium, are poorly understood. Here, we show that growth as manifested by cell division and elongation is synchronized by coordinated auxin flux between two adjacent outer tissue layers of the root. This coordination is achieved by nitrate‐dependent dephosphorylation of the PIN2 auxin efflux carrier at a previously uncharacterized phosphorylation site, leading to subsequent PIN2 lateralization and thereby regulating auxin flow between adjacent tissues. A dynamic computer model based on our experimental data successfully recapitulates experimental observations. Our study provides mechanistic insights broadening our understanding of root growth mechanisms in dynamic environments.}, author = {Ötvös, Krisztina and Marconi, Marco and Vega, Andrea and O’Brien, Jose and Johnson, Alexander J and Abualia, Rashed and Antonielli, Livio and Montesinos López, Juan C and Zhang, Yuzhou and Tan, Shutang and Cuesta, Candela and Artner, Christina and Bouguyon, Eleonore and Gojon, Alain and Friml, Jiří and Gutiérrez, Rodrigo A. and Wabnik, Krzysztof T and Benková, Eva}, issn = {14602075}, journal = {EMBO Journal}, number = {3}, publisher = {Embo Press}, title = {{Modulation of plant root growth by nitrogen source-defined regulation of polar auxin transport}}, doi = {10.15252/embj.2020106862}, volume = {40}, year = {2021}, } @article{9020, abstract = {We study dynamics and thermodynamics of ion transport in narrow, water-filled channels, considered as effective 1D Coulomb systems. The long range nature of the inter-ion interactions comes about due to the dielectric constants mismatch between the water and the surrounding medium, confining the electric filed to stay mostly within the water-filled channel. Statistical mechanics of such Coulomb systems is dominated by entropic effects which may be accurately accounted for by mapping onto an effective quantum mechanics. In presence of multivalent ions the corresponding quantum mechanics appears to be non-Hermitian. In this review we discuss a framework for semiclassical calculations for the effective non-Hermitian Hamiltonians. Non-Hermiticity elevates WKB action integrals from the real line to closed cycles on a complex Riemann surfaces where direct calculations are not attainable. We circumvent this issue by applying tools from algebraic topology, such as the Picard-Fuchs equation. We discuss how its solutions relate to the thermodynamics and correlation functions of multivalent solutions within narrow, water-filled channels. }, author = {Gulden, Tobias and Kamenev, Alex}, issn = {1099-4300}, journal = {Entropy}, number = {1}, publisher = {MDPI}, title = {{Dynamics of ion channels via non-hermitian quantum mechanics}}, doi = {10.3390/e23010125}, volume = {23}, year = {2021}, } @phdthesis{9022, abstract = {In the first part of the thesis we consider Hermitian random matrices. Firstly, we consider sample covariance matrices XX∗ with X having independent identically distributed (i.i.d.) centred entries. We prove a Central Limit Theorem for differences of linear statistics of XX∗ and its minor after removing the first column of X. 
Secondly, we consider Wigner-type matrices and prove that the eigenvalue statistics near cusp singularities of the limiting density of states are universal and that they form a Pearcey process. Since the limiting eigenvalue distribution admits only square root (edge) and cubic root (cusp) singularities, this concludes the third and last remaining case of the Wigner-Dyson-Mehta universality conjecture. The main technical ingredients are an optimal local law at the cusp, and the proof of the fast relaxation to equilibrium of the Dyson Brownian motion in the cusp regime. In the second part we consider non-Hermitian matrices X with centred i.i.d. entries. We normalise the entries of X to have variance N −1. It is well known that the empirical eigenvalue density converges to the uniform distribution on the unit disk (circular law). In the first project, we prove universality of the local eigenvalue statistics close to the edge of the spectrum. This is the non-Hermitian analogue of the TracyWidom universality at the Hermitian edge. Technically we analyse the evolution of the spectral distribution of X along the Ornstein-Uhlenbeck flow for very long time (up to t = +∞). In the second project, we consider linear statistics of eigenvalues for macroscopic test functions f in the Sobolev space H2+ϵ and prove their convergence to the projection of the Gaussian Free Field on the unit disk. We prove this result for non-Hermitian matrices with real or complex entries. The main technical ingredients are: (i) local law for products of two resolvents at different spectral parameters, (ii) analysis of correlated Dyson Brownian motions. In the third and final part we discuss the mathematically rigorous application of supersymmetric techniques (SUSY ) to give a lower tail estimate of the lowest singular value of X − z, with z ∈ C. More precisely, we use superbosonisation formula to give an integral representation of the resolvent of (X − z)(X − z)∗ which reduces to two and three contour integrals in the complex and real case, respectively. The rigorous analysis of these integrals is quite challenging since simple saddle point analysis cannot be applied (the main contribution comes from a non-trivial manifold). Our result improves classical smoothing inequalities in the regime |z| ≈ 1; this result is essential to prove edge universality for i.i.d. 
non-Hermitian matrices.}, author = {Cipolloni, Giorgio}, issn = {2663-337X}, pages = {380}, publisher = {IST Austria}, title = {{Fluctuations in the spectrum of random matrices}}, doi = {10.15479/AT:ISTA:9022}, year = {2021}, } @unpublished{9034, abstract = {We determine an asymptotic formula for the number of integral points of bounded height on a blow-up of $\mathbb{P}^3$ outside certain planes using universal torsors.}, author = {Wilsch, Florian Alexander}, booktitle = {arXiv}, title = {{Integral points of bounded height on a log Fano threefold}}, year = {2021}, } @article{9036, abstract = {In this short note, we prove that the square root of the quantum Jensen-Shannon divergence is a true metric on the cone of positive matrices, and hence in particular on the quantum state space.}, author = {Virosztek, Daniel}, issn = {0001-8708}, journal = {Advances in Mathematics}, keywords = {General Mathematics}, number = {3}, publisher = {Elsevier}, title = {{The metric property of the quantum Jensen-Shannon divergence}}, doi = {10.1016/j.aim.2021.107595}, volume = {380}, year = {2021}, } @article{9037, abstract = {We continue our study of ‘no‐dimension’ analogues of basic theorems in combinatorial and convex geometry in Banach spaces. We generalize some results of the paper (Adiprasito, Bárány and Mustafa, ‘Theorems of Carathéodory, Helly, and Tverberg without dimension’, Proceedings of the Thirtieth Annual ACM‐SIAM Symposium on Discrete Algorithms (Society for Industrial and Applied Mathematics, San Diego, California, 2019) 2350–2360) and prove no‐dimension versions of the colored Tverberg theorem, the selection lemma and the weak 𝜀 ‐net theorem in Banach spaces of type 𝑝>1 . To prove these results, we use the original ideas of Adiprasito, Bárány and Mustafa for the Euclidean case, our no‐dimension version of the Radon theorem and slightly modified version of the celebrated Maurey lemma.}, author = {Ivanov, Grigory}, issn = {14692120}, journal = {Bulletin of the London Mathematical Society}, publisher = {London Mathematical Society}, title = {{No-dimension Tverberg's theorem and its corollaries in Banach spaces of type p}}, doi = {10.1112/blms.12449}, year = {2021}, } @article{9038, abstract = {Layered materials in which individual atomic layers are bonded by weak van der Waals forces (vdW materials) constitute one of the most prominent platforms for materials research. Particularly, polar vdW crystals, such as hexagonal boron nitride (h-BN), alpha-molybdenum trioxide (α-MoO3) or alpha-vanadium pentoxide (α-V2O5), have received significant attention in nano-optics, since they support phonon polaritons (PhPs)―light coupled to lattice vibrations― with strong electromagnetic confinement and low optical losses. Recently, correlative far- and near-field studies of α-MoO3 have been demonstrated as an effective strategy to accurately extract the permittivity of this material. Here, we use this accurately characterized and low-loss polaritonic material to sense its local dielectric environment, namely silica (SiO2), one of the most widespread substrates in nanotechnology. By studying the propagation of PhPs on α-MoO3 flakes with different thicknesses laying on SiO2 substrates via near-field microscopy (s-SNOM), we extract locally the infrared permittivity of SiO2. 
Our work reveals PhPs nanoimaging as a versatile method for the quantitative characterization of the local optical properties of dielectric substrates, crucial for understanding and predicting the response of nanomaterials and for the future scalability of integrated nanophotonic devices. }, author = {Aguilar-Merino, Patricia and Álvarez-Pérez, Gonzalo and Taboada-Gutiérrez, Javier and Duan, Jiahua and Prieto Gonzalez, Ivan and Álvarez-Prado, Luis Manuel and Nikitin, Alexey Y. and Martín-Sánchez, Javier and Alonso-González, Pablo}, issn = {20794991}, journal = {Nanomaterials}, number = {1}, publisher = {MDPI}, title = {{Extracting the infrared permittivity of SiO2 substrates locally by near-field imaging of phonon polaritons in a van der Waals crystal}}, doi = {10.3390/nano11010120}, volume = {11}, year = {2021}, } @article{9046, author = {Römhild, Roderich and Andersson, Dan I.}, issn = {15537374}, journal = {PLoS Pathogens}, number = {1}, publisher = {Public Library of Science}, title = {{Mechanisms and therapeutic potential of collateral sensitivity to antibiotics}}, doi = {10.1371/journal.ppat.1009172}, volume = {17}, year = {2021}, } @article{9047, abstract = {This work analyzes the latency of the simplified successive cancellation (SSC) decoding scheme for polar codes proposed by Alamdar-Yazdi and Kschischang. It is shown that, unlike conventional successive cancellation decoding, where latency is linear in the block length, the latency of SSC decoding is sublinear. More specifically, the latency of SSC decoding is O(N1−1/μ) , where N is the block length and μ is the scaling exponent of the channel, which captures the speed of convergence of the rate to capacity. Numerical results demonstrate the tightness of the bound and show that most of the latency reduction arises from the parallel decoding of subcodes of rate 0 or 1.}, author = {Mondelli, Marco and Hashemi, Seyyed Ali and Cioffi, John M. and Goldsmith, Andrea}, issn = {15582248}, journal = {IEEE Transactions on Wireless Communications}, number = {1}, pages = {18--27}, publisher = {IEEE}, title = {{Sublinear latency for simplified successive cancellation decoding of polar codes}}, doi = {10.1109/TWC.2020.3022922}, volume = {20}, year = {2021}, } @article{9048, abstract = {The analogy between an equilibrium partition function and the return probability in many-body unitary dynamics has led to the concept of dynamical quantum phase transition (DQPT). DQPTs are defined by nonanalyticities in the return amplitude and are present in many models. In some cases, DQPTs can be related to equilibrium concepts, such as order parameters, yet their universal description is an open question. In this Letter, we provide first steps toward a classification of DQPTs by using a matrix product state description of unitary dynamics in the thermodynamic limit. This allows us to distinguish the two limiting cases of “precession” and “entanglement” DQPTs, which are illustrated using an analytical description in the quantum Ising model. While precession DQPTs are characterized by a large entanglement gap and are semiclassical in their nature, entanglement DQPTs occur near avoided crossings in the entanglement spectrum and can be distinguished by a complex pattern of nonlocal correlations. 
We demonstrate the existence of precession and entanglement DQPTs beyond Ising models, discuss observables that can distinguish them, and relate their interplay to complex DQPT phenomenology.}, author = {De Nicola, Stefano and Michailidis, Alexios and Serbyn, Maksym}, issn = {0031-9007}, journal = {Physical Review Letters}, keywords = {General Physics and Astronomy}, number = {4}, publisher = {American Physical Society}, title = {{Entanglement view of dynamical quantum phase transitions}}, doi = {10.1103/physrevlett.126.040602}, volume = {126}, year = {2021}, } @phdthesis{9056, abstract = {In this thesis we study persistence of multi-covers of Euclidean balls and the geometric structures underlying their computation, in particular Delaunay mosaics and Voronoi tessellations. The k-fold cover for some discrete input point set consists of the space where at least k balls of radius r around the input points overlap. Persistence is a notion that captures, in some sense, the topology of the shape underlying the input. While persistence is usually computed for the union of balls, the k-fold cover is of interest as it captures local density, and thus might approximate the shape of the input better if the input data is noisy. To compute persistence of these k-fold covers, we need a discretization that is provided by higher-order Delaunay mosaics. We present and implement a simple and efficient algorithm for the computation of higher-order Delaunay mosaics, and use it to give experimental results for their combinatorial properties. The algorithm makes use of a new geometric structure, the rhomboid tiling. It contains the higher-order Delaunay mosaics as slices, and by introducing a filtration function on the tiling, we also obtain higher-order α-shapes as slices. These allow us to compute persistence of the multi-covers for varying radius r; the computation for varying k is less straight-foward and involves the rhomboid tiling directly. We apply our algorithms to experimental sphere packings to shed light on their structural properties. Finally, inspired by periodic structures in packings and materials, we propose and implement an algorithm for periodic Delaunay triangulations to be integrated into the Computational Geometry Algorithms Library (CGAL), and discuss the implications on persistence for periodic data sets.}, author = {Osang, Georg F}, issn = {2663-337X}, pages = {134}, publisher = {IST Austria}, title = {{Multi-cover persistence and Delaunay mosaics}}, doi = {10.15479/AT:ISTA:9056}, year = {2021}, } @article{9073, abstract = {The sensory and cognitive abilities of the mammalian neocortex are underpinned by intricate columnar and laminar circuits formed from an array of diverse neuronal populations. One approach to determining how interactions between these circuit components give rise to complex behavior is to investigate the rules by which cortical circuits are formed and acquire functionality during development. This review summarizes recent research on the development of the neocortex, from genetic determination in neural stem cells through to the dynamic role that specific neuronal populations play in the earliest circuits of neocortex, and how they contribute to emergent function and cognition. While many of these endeavors take advantage of model systems, consideration will also be given to advances in our understanding of activity in nascent human circuits. 
Such cross-species perspective is imperative when investigating the mechanisms underlying the dysfunction of early neocortical circuits in neurodevelopmental disorders, so that one can identify targets amenable to therapeutic intervention.}, author = {Hanganu-Opatz, Ileana L. and Butt, Simon J. B. and Hippenmeyer, Simon and De Marco García, Natalia V. and Cardin, Jessica A. and Voytek, Bradley and Muotri, Alysson R.}, issn = {0270-6474}, journal = {The Journal of Neuroscience}, keywords = {General Neuroscience}, number = {5}, pages = {813--822}, publisher = {Society for Neuroscience}, title = {{The logic of developing neocortical circuits in health and disease}}, doi = {10.1523/jneurosci.1655-20.2020}, volume = {41}, year = {2021}, } @unpublished{9082, abstract = {Acquired mutations are sufficiently frequent such that the genome of a single cell offers a record of its history of cell divisions. Among more common somatic genomic alterations are loss of heterozygosity (LOH). Large LOH events are potentially detectable in single cell RNA sequencing (scRNA-seq) datasets as tracts of monoallelic expression for constitutionally heterozygous single nucleotide variants (SNVs) located among contiguous genes. We identified runs of monoallelic expression, consistent with LOH, uniquely distributed throughout the genome in single cell brain cortex transcriptomes of F1 hybrids involving different inbred mouse strains. We then phylogenetically reconstructed single cell lineages and simultaneously identified cell types by corresponding gene expression patterns. Our results are consistent with progenitor cells giving rise to multiple cortical cell types through stereotyped expansion and distinct waves of neurogenesis. Compared to engineered recording systems, LOH events accumulate throughout the genome and across the lifetime of an organism, affording tremendous capacity for encoding lineage information and increasing resolution for later cell divisions. This approach can conceivably be computationally incorporated into scRNA-seq analysis and may be useful for organisms where genetic engineering is prohibitive, such as humans.}, author = {Anderson, Donovan J. and Pauler, Florian and McKenna, Aaron and Shendure, Jay and Hippenmeyer, Simon and Horwitz, Marshall S.}, booktitle = {bioRxiv}, publisher = {Cold Spring Harbor Laboratory}, title = {{Simultaneous identification of brain cell type and lineage via single cell RNA sequencing}}, doi = {10.1101/2020.12.31.425016}, year = {2021}, } @article{9093, abstract = {We employ the Gross-Pitaevskii equation to study acoustic emission generated in a uniform Bose gas by a static impurity. The impurity excites a sound-wave packet, which propagates through the gas. We calculate the shape of this wave packet in the limit of long wave lengths, and argue that it is possible to extract properties of the impurity by observing this shape. We illustrate here this possibility for a Bose gas with a trapped impurity atom -- an example of a relevant experimental setup. Presented results are general for all one-dimensional systems described by the nonlinear Schrödinger equation and can also be used in nonatomic systems, e.g., to analyze light propagation in nonlinear optical media. 
Finally, we calculate the shape of the sound-wave packet for a three-dimensional Bose gas assuming a spherically symmetric perturbation.}, author = {Marchukov, Oleksandr and Volosniev, Artem}, issn = {2542-4653}, journal = {SciPost Physics}, number = {2}, publisher = {SciPost Foundation}, title = {{Shape of a sound wave in a weakly-perturbed Bose gas}}, doi = {10.21468/scipostphys.10.2.025}, volume = {10}, year = {2021}, } @article{9094, abstract = {Dendritic cells (DCs) are crucial for the priming of naive T cells and the initiation of adaptive immunity. Priming is initiated at a heterologous cell–cell contact, the immunological synapse (IS). While it is established that F-actin dynamics regulates signaling at the T cell side of the contact, little is known about the cytoskeletal contribution on the DC side. Here, we show that the DC actin cytoskeleton is decisive for the formation of a multifocal synaptic structure, which correlates with T cell priming efficiency. DC actin at the IS appears in transient foci that are dynamized by the WAVE regulatory complex (WRC). The absence of the WRC in DCs leads to stabilized contacts with T cells, caused by an increase in ICAM1-integrin–mediated cell–cell adhesion. This results in lower numbers of activated and proliferating T cells, demonstrating an important role for DC actin in the regulation of immune synapse functionality.}, author = {Leithner, Alexander F and Altenburger, LM and Hauschild, R and Assen, Frank P and Rottner, K and TEB, Stradal and Diz-Muñoz, A and Stein, JV and Sixt, Michael K}, issn = {0021-9525}, journal = {Journal of Cell Biology}, number = {4}, publisher = {Rockefeller University Press}, title = {{Dendritic cell actin dynamics control contact duration and priming efficiency at the immunological synapse}}, doi = {10.1083/jcb.202006081}, volume = {220}, year = {2021}, } @article{9097, abstract = {Psoriasis is a chronic inflammatory skin disease clinically characterized by the appearance of red colored, well-demarcated plaques with thickened skin and with silvery scales. Recent studies have established the involvement of a complex signalling network of interactions between cytokines, immune cells and skin cells called keratinocytes. Keratinocytes form the cells of the outermost layer of the skin (epidermis). Visible plaques in psoriasis are developed due to the fast proliferation and unusual differentiation of keratinocyte cells. Despite that, the exact mechanism of the appearance of these plaques in the cytokine-immune cell network is not clear. A mathematical model embodying interactions between key immune cells believed to be involved in psoriasis, keratinocytes and relevant cytokines has been developed. The complex network formed of these interactions poses several challenges. Here, we choose to study subnetworks of this complex network and initially focus on interactions involving TNFα, IL-23/IL-17, and IL-15. These are chosen based on known evidence of their therapeutic efficacy. In addition, we explore the role of IL-15 in the pathogenesis of psoriasis and its potential as a future drug target for a novel treatment option. We perform steady state analyses for these subnetworks and demonstrate that the interactions between cells, driven by cytokines could cause the emergence of a psoriasis state (hyper-proliferation of keratinocytes) when levels of TNFα, IL-23/IL-17 or IL-15 are increased. The model results explain and support the clinical potentiality of anti-cytokine treatments. 
Interestingly, our results suggest different dynamic scenarios underpin the pathogenesis of psoriasis, depending upon the dominant cytokines of subnetworks. We observed that the increase in the level of IL-23/IL-17 and IL-15 could lead to psoriasis via a bistable route, whereas an increase in the level of TNFα would lead to a monotonic and gradual disease progression. Further, we demonstrate how this insight, bistability, could be exploited to improve the current therapies and develop novel treatment strategies for psoriasis.}, author = {Pandey, Rakesh and Al-Nuaimi, Yusur and Mishra, Rajiv Kumar and Spurgeon, Sarah K. and Goodfellow, Marc}, issn = {20452322}, journal = {Scientific Reports}, publisher = {Springer Nature}, title = {{Role of subnetworks mediated by TNFα, IL-23/IL-17 and IL-15 in a network involved in the pathogenesis of psoriasis}}, doi = {10.1038/s41598-020-80507-7}, volume = {11}, year = {2021}, } @article{9098, abstract = {We study properties of the volume of projections of the n-dimensional cross-polytope $\crosp^n = \{ x \in \R^n \mid |x_1| + \dots + |x_n| \leqslant 1\}.$ We prove that the projection of $\crosp^n$ onto a k-dimensional coordinate subspace has the maximum possible volume for k=2 and for k=3. We obtain the exact lower bound on the volume of such a projection onto a two-dimensional plane. Also, we show that there exist local maxima which are not global ones for the volume of a projection of $\crosp^n$ onto a k-dimensional subspace for any n>k⩾2.}, author = {Ivanov, Grigory}, issn = {0012365X}, journal = {Discrete Mathematics}, number = {5}, publisher = {Elsevier}, title = {{On the volume of projections of the cross-polytope}}, doi = {10.1016/j.disc.2021.112312}, volume = {344}, year = {2021}, } @article{9099, abstract = {We show that on an Abelian variety over an algebraically closed field of positive characteristic, the obstruction to lifting an automorphism to a field of characteristic zero as a morphism vanishes if and only if it vanishes for lifting it as a derived autoequivalence. We also compare the deformation space of these two types of deformations.}, author = {Srivastava, Tanya K}, issn = {14208938}, journal = {Archiv der Mathematik}, publisher = {Springer Nature}, title = {{Lifting automorphisms on Abelian varieties as derived autoequivalences}}, doi = {10.1007/s00013-020-01564-y}, year = {2021}, } @article{9100, abstract = {Marine environments are inhabited by a broad representation of the tree of life, yet our understanding of speciation in marine ecosystems is extremely limited compared with terrestrial and freshwater environments. Developing a more comprehensive picture of speciation in marine environments requires that we 'dive under the surface' by studying a wider range of taxa and ecosystems. Although studying marine evolutionary processes is often challenging, recent technological advances in different fields, from maritime engineering to genomics, are making it increasingly possible to study speciation of marine life forms across diverse ecosystems and taxa. 
Motivated by recent research in the field, including the 14 contributions in this issue, we highlight and discuss six axes of research that we think will deepen our understanding of speciation in the marine realm: (a) study a broader range of marine environments and organisms; (b) identify the reproductive barriers driving speciation between marine taxa; (c) understand the role of different genomic architectures underlying reproductive isolation; (d) infer the evolutionary history of divergence using model‐based approaches; (e) study patterns of hybridization and introgression between marine taxa; and (f) implement highly interdisciplinary, collaborative research programmes. In outlining these goals, we hope to inspire researchers to continue filling this critical knowledge gap surrounding the origins of marine biodiversity.}, author = {Faria, Rui and Johannesson, Kerstin and Stankowski, Sean}, issn = {14209101}, journal = {Journal of Evolutionary Biology}, number = {1}, pages = {4--15}, publisher = {Wiley}, title = {{Speciation in marine environments: Diving under the surface}}, doi = {10.1111/jeb.13756}, volume = {34}, year = {2021}, } @article{9101, abstract = {Behavioral predispositions are innate tendencies of animals to behave in a given way without the input of learning. They increase survival chances and, due to environmental and ecological challenges, may vary substantially even between closely related taxa. These differences are likely to be especially pronounced in long-lived species like crocodilians. This order is particularly relevant for comparative cognition due to its phylogenetic proximity to birds. Here we compared early life behavioral predispositions in two Alligatoridae species. We exposed American alligator and spectacled caiman hatchlings to three different novel situations: a novel object, a novel environment that was open and a novel environment with a shelter. This was then repeated a week later. During exposure to the novel environments, alligators moved around more and explored a larger range of the arena than the caimans. When exposed to the novel object, the alligators reduced the mean distance to the novel object in the second phase, while the caimans further increased it, indicating diametrically opposite ontogenetic development in behavioral predispositions. Although all crocodilian hatchlings face comparable challenges, e.g., high predation pressure, the effectiveness of parental protection might explain the observed pattern. American alligators are apex predators capable of protecting their offspring against most dangers, whereas adult spectacled caimans are frequently predated themselves. Their distancing behavior might be related to increased predator avoidance and also explain the success of invasive spectacled caimans in the natural habitats of other crocodilians.}, author = {Reber, Stephan A. and Oh, Jinook and Janisch, Judith and Stevenson, Colin and Foggett, Shaun and Wilkinson, Anna}, issn = {14359456}, journal = {Animal Cognition}, publisher = {Springer Nature}, title = {{Early life differences in behavioral predispositions in two Alligatoridae species}}, doi = {10.1007/s10071-020-01461-5}, year = {2021}, } @article{9113, abstract = {“Hydrogen economy” could enable a carbon-neutral sustainable energy chain. However, issues with safety, storage, and transport of molecular hydrogen impede its realization. 
Alcohols as liquid H2 carriers could be enablers, but state-of-the-art reforming is difficult, requiring high temperatures >200 °C and pressures >25 bar, and the resulting H2 is carbonized beyond tolerance levels for direct use in fuel cells. Here, we demonstrate ambient temperature and pressure alcohol reforming in a fuel cell (ARFC) with a simultaneous electrical power output. The alcohol is oxidized at the alkaline anode, where the resulting CO2 is sequestrated as carbonate. Carbon-free H2 is liberated at the acidic cathode. The neutralization energy between the alkaline anode and the acidic cathode drives the process, particularly the unusually high entropy gain (1.27-fold ΔH). The significantly positive temperature coefficient of the resulting electromotive force allows us to harvest a large fraction of the output energy from the surrounding, achieving a thermodynamic efficiency as high as 2.27. MoS2 as the cathode catalyst allows alcohol reforming even under open-air conditions, a challenge that state-of-the-art alcohol reforming failed to overcome. We further show reforming of a wide range of alcohols. The ARFC offers an unprecedented route toward hydrogen economy as CO2 is simultaneously captured and pure H2 produced at mild conditions.}, author = {Manzoor Bhat, Zahid Manzoor and Thimmappa, Ravikumar and Dargily, Neethu Christudas and Raafik, Abdul and Kottaichamy, Alagar Raja and Devendrachari, Mruthyunjayachari Chattanahalli and Itagi, Mahesh and Makri Nimbegondi Kotresh, Harish and Freunberger, Stefan Alexander and Ottakam Thotiyl, Musthafa }, issn = {2168-0485}, journal = {ACS Sustainable Chemistry and Engineering}, publisher = {American Chemical Society}, title = {{Ambient condition alcohol reforming to hydrogen with electricity output}}, doi = {10.1021/acssuschemeng.0c07547}, year = {2021}, } @article{9119, abstract = {We present DILS, a deployable statistical analysis platform for conducting demographic inferences with linked selection from population genomic data using an Approximate Bayesian Computation framework. DILS takes as input single‐population or two‐population data sets (multilocus fasta sequences) and performs three types of analyses in a hierarchical manner, identifying: (a) the best demographic model to study the importance of gene flow and population size change on the genetic patterns of polymorphism and divergence, (b) the best genomic model to determine whether the effective size Ne and migration rate N, m are heterogeneously distributed along the genome (implying linked selection) and (c) loci in genomic regions most associated with barriers to gene flow. Also available via a Web interface, an objective of DILS is to facilitate collaborative research in speciation genomics. 
Here, we show the performance and limitations of DILS by using simulations and finally apply the method to published data on a divergence continuum composed of 28 pairs of Mytilus mussel populations/species.}, author = {Fraisse, Christelle and Popovic, Iva and Mazoyer, Clément and Spataro, Bruno and Delmotte, Stéphane and Romiguier, Jonathan and Loire, Étienne and Simon, Alexis and Galtier, Nicolas and Duret, Laurent and Bierne, Nicolas and Vekemans, Xavier and Roux, Camille}, issn = {17550998}, journal = {Molecular Ecology Resources}, publisher = {Wiley}, title = {{DILS: Demographic inferences with linked selection by using ABC}}, doi = {10.1111/1755-0998.13323}, year = {2021}, } @article{9121, abstract = {We show that the energy gap for the BCS gap equation is Ξ = μ(8e^{-2} + o(1)) exp(π/(2√μ a)) in the low density limit μ→0. Together with the similar result for the critical temperature by Hainzl and Seiringer (Lett Math Phys 84: 99–107, 2008), this shows that, in the low density limit, the ratio of the energy gap and critical temperature is a universal constant independent of the interaction potential V. The results hold for a class of potentials with negative scattering length a and no bound states.}, author = {Lauritsen, Asbjørn Bækgaard}, issn = {0377-9017}, journal = {Letters in Mathematical Physics}, keywords = {Mathematical Physics, Statistical and Nonlinear Physics}, publisher = {Springer Nature}, title = {{The BCS energy gap at low density}}, doi = {10.1007/s11005-021-01358-5}, volume = {111}, year = {2021}, } @article{9158, abstract = {While several tools have been developed to study the ground state of many-body quantum spin systems, the limitations of existing techniques call for the exploration of new approaches. In this manuscript we develop an alternative analytical and numerical framework for many-body quantum spin ground states, based on the disentanglement formalism. In this approach, observables are exactly expressed as Gaussian-weighted functional integrals over scalar fields. We identify the leading contribution to these integrals, given by the saddle point of a suitable effective action. Analytically, we develop a field-theoretical expansion of the functional integrals, performed by means of appropriate Feynman rules. The expansion can be truncated to a desired order to obtain analytical approximations to observables. Numerically, we show that the disentanglement approach can be used to compute ground state expectation values from classical stochastic processes. While the associated fluctuations grow exponentially with imaginary time and the system size, this growth can be mitigated by means of an importance sampling scheme based on knowledge of the saddle point configuration. We illustrate the advantages and limitations of our methods by considering the quantum Ising model in 1, 2 and 3 spatial dimensions. 
Our analytical and numerical approaches are applicable to a broad class of systems, bridging concepts from quantum lattice models, continuum field theory, and classical stochastic processes.}, author = {De Nicola, Stefano}, issn = {1742-5468}, journal = {Journal of Statistical Mechanics: Theory and Experiment}, keywords = {Statistics, Probability and Uncertainty, Statistics and Probability, Statistical and Nonlinear Physics}, number = {1}, publisher = {IOP Publishing}, title = {{Disentanglement approach to quantum spin ground states: Field theory and stochastic simulation}}, doi = {10.1088/1742-5468/abc7c7}, volume = {2021}, year = {2021}, } @article{9173, abstract = {We show that Hilbert schemes of points on a supersingular Enriques surface in characteristic 2, Hilb^n(X), for n ≥ 2 are simply connected, symplectic varieties but are not irreducible symplectic as the Hodge number h^{2,0} > 1, even though a supersingular Enriques surface is an irreducible symplectic variety. These are the classes of varieties which appear only in characteristic 2 and they show that the Hodge number formula of Göttsche-Soergel does not hold in characteristic 2. It also gives examples of varieties with trivial canonical class which are neither irreducible symplectic nor Calabi-Yau, thereby showing that there are strictly more classes of simply connected varieties with trivial canonical class in characteristic 2 than over C as given by the Beauville-Bogomolov decomposition theorem.}, author = {Srivastava, Tanya K}, issn = {0007-4497}, journal = {Bulletin des Sciences Mathematiques}, number = {03}, publisher = {Elsevier}, title = {{Pathologies of the Hilbert scheme of points of a supersingular Enriques surface}}, doi = {10.1016/j.bulsci.2021.102957}, volume = {167}, year = {2021}, } @article{9189, abstract = {Transposable elements exist widely throughout plant genomes and play important roles in plant evolution. Auxin is an important regulator that is traditionally associated with root development and drought stress adaptation. The DEEPER ROOTING 1 (DRO1) gene is a key component of rice drought avoidance. Here, we identified a transposon that acts as an autonomous auxin‐responsive promoter and its presence at specific genome positions conveys physiological adaptations related to drought avoidance. Rice varieties with high and auxin‐mediated transcription of DRO1 in the root tip show deeper and longer root phenotypes and are thus better adapted to drought. The INDITTO2 transposon contains an auxin response element and displays auxin‐responsive promoter activity; it is thus able to convey auxin regulation of transcription to genes in its proximity. In the rice Acuce, which displays DRO1‐mediated drought adaptation, the INDITTO2 transposon was found to be inserted at the promoter region of the DRO1 locus. Transgenesis‐based insertion of the INDITTO2 transposon into the DRO1 promoter of the non‐adapted rice variety Nipponbare was sufficient to promote its drought avoidance. 
Our data identify an example of how transposons can act as promoters and convey hormonal regulation to nearby loci, improving plant fitness in response to different abiotic stresses.}, author = {Zhao, Y and Wu, L and Fu, Q and Wang, D and Li, J and Yao, B and Yu, S and Jiang, L and Qian, J and Zhou, X and Han, L and Zhao, S and Ma, C and Zhang, Y and Luo, C and Dong, Q and Li, S and Zhang, L and Jiang, X and Li, Y and Luo, H and Li, K and Yang, J and Luo, Q and Li, L and Peng, S and Huang, H and Zuo, Z and Liu, C and Wang, L and Li, C and He, X and Friml, Jiří and Du, Y}, issn = {0140-7791}, journal = {Plant, Cell & Environment}, publisher = {Wiley}, title = {{INDITTO2 transposon conveys auxin-mediated DRO1 transcription for rice drought avoidance}}, doi = {10.1111/pce.14029}, year = {2021}, } @unpublished{9200, abstract = {Formal design of embedded and cyber-physical systems relies on mathematical modeling. In this paper, we consider the model class of hybrid automata whose dynamics are defined by affine differential equations. Given a set of time-series data, we present an algorithmic approach to synthesize a hybrid automaton exhibiting behavior that is close to the data, up to a specified precision, and changes in synchrony with the data. A fundamental problem in our synthesis algorithm is to check membership of a time series in a hybrid automaton. Our solution integrates reachability and optimization techniques for affine dynamical systems to obtain both a sufficient and a necessary condition for membership, combined in a refinement framework. The algorithm processes one time series at a time and hence can be interrupted, provide an intermediate result, and be resumed. We report experimental results demonstrating the applicability of our synthesis approach.}, author = {Garcia Soto, Miriam and Henzinger, Thomas A and Schilling, Christian}, booktitle = {arXiv}, keywords = {hybrid automaton, membership, system identification}, pages = {2102.12734}, title = {{Synthesis of hybrid automata with affine dynamics from time-series data}}, year = {2021}, } @unpublished{9199, abstract = {We associate a certain tensor product lattice to any primitive integer lattice and ask about its typical shape. These lattices are related to the tangent bundle of Grassmannians and their study is motivated by Peyre's programme on "freeness" for rational points of bounded height on Fano varieties.}, author = {Browning, Timothy D and Horesh, Tal and Wilsch, Florian Alexander}, booktitle = {arXiv}, title = {{Equidistribution and freeness on Grassmannians}}, year = {2021}, } @article{9205, abstract = {Cryo-EM grid preparation is an important bottleneck in protein structure determination, especially for membrane proteins, typically requiring screening of a large number of conditions. We systematically investigated the effects of buffer components, blotting conditions and grid types on the outcome of grid preparation of five different membrane protein samples. Aggregation was the most common type of problem which was addressed by changing detergents, salt concentration or reconstitution of proteins into nanodiscs or amphipols. We show that the optimal concentration of detergent is between 0.05 and 0.4% and that the presence of a low concentration of detergent with a high critical micellar concentration protects the proteins from denaturation at the air-water interface. 
Furthermore, we discuss the strategies for achieving an adequate ice thickness, particle coverage and orientation distribution on free ice and on support films. Our findings provide a clear roadmap for comprehensive screening of conditions for cryo-EM grid preparation of membrane proteins.}, author = {Kampjut, Domen and Steiner, Julia and Sazanov, Leonid A}, issn = {25890042}, journal = {iScience}, number = {3}, publisher = {Elsevier}, title = {{Cryo-EM grid optimization for membrane proteins}}, doi = {10.1016/j.isci.2021.102139}, volume = {24}, year = {2021}, } @article{9206, abstract = {The precise engineering of thermoelectric materials using nanocrystals as their building blocks has proven to be an excellent strategy to increase energy conversion efficiency. Here we present a synthetic route to produce Sb-doped PbS colloidal nanoparticles. These nanoparticles are then consolidated into nanocrystalline PbS:Sb using spark plasma sintering. We demonstrate that the introduction of Sb significantly influences the size, geometry, crystal lattice and especially the carrier concentration of PbS. The increase of charge carrier concentration achieved with the introduction of Sb translates into an increase of the electrical and thermal conductivities and a decrease of the Seebeck coefficient. Overall, PbS:Sb nanomaterial were characterized by two-fold higher thermoelectric figures of merit than undoped PbS. }, author = {Cadavid, Doris and Wei, Kaya and Liu, Yu and Zhang, Yu and Li, Mengyao and Genç, Aziz and Berestok, Taisiia and Ibáñez, Maria and Shavel, Alexey and Nolas, George S. and Cabot, Andreu}, issn = {1996-1944}, journal = {Materials}, number = {4}, publisher = {MDPI}, title = {{Synthesis, bottom up assembly and thermoelectric properties of Sb-doped PbS nanocrystal building blocks}}, doi = {10.3390/ma14040853}, volume = {14}, year = {2021}, } @inproceedings{9202, abstract = {We propose a novel hybridization method for stability analysis that over-approximates nonlinear dynamical systems by switched systems with linear inclusion dynamics. We observe that existing hybridization techniques for safety analysis that over-approximate nonlinear dynamical systems by switched affine inclusion dynamics and provide fixed approximation error, do not suffice for stability analysis. Hence, we propose a hybridization method that provides a state-dependent error which converges to zero as the state tends to the equilibrium point. The crux of our hybridization computation is an elegant recursive algorithm that uses partial derivatives of a given function to obtain upper and lower bound matrices for the over-approximating linear inclusion. We illustrate our method on some examples to demonstrate the application of the theory for stability analysis. In particular, our method is able to establish stability of a nonlinear system which does not admit a polynomial Lyapunov function.}, author = {Garcia Soto, Miriam and Prabhakar, Pavithra}, booktitle = {2020 IEEE Real-Time Systems Symposium}, issn = {2576-3172}, location = {Houston, TX, USA }, pages = {244--256}, publisher = {IEEE}, title = {{Hybridization for stability verification of nonlinear switched systems}}, doi = {10.1109/RTSS49844.2020.00031}, year = {2021}, } @article{9212, abstract = {Plant fitness is largely dependent on the root, the underground organ, which, besides its anchoring function, supplies the plant body with water and all nutrients necessary for growth and development. 
To exploit the soil effectively, roots must constantly integrate environmental signals and react through adjustment of growth and development. Important components of the root management strategy involve a rapid modulation of the root growth kinetics and growth direction, as well as an increase of the root system radius through formation of lateral roots (LRs). At the molecular level, such a fascinating growth and developmental flexibility of root organ requires regulatory networks that guarantee stability of the developmental program but also allows integration of various environmental inputs. The plant hormone auxin is one of the principal endogenous regulators of root system architecture by controlling primary root growth and formation of LR. In this review, we discuss recent progress in understanding molecular networks where auxin is one of the main players shaping the root system and acting as mediator between endogenous cues and environmental factors.}, author = {Cavallari, Nicola and Artner, Christina and Benková, Eva}, issn = {1943-0264}, journal = {Cold Spring Harbor Perspectives in Biology}, publisher = {Cold Spring Harbor Laboratory Press}, title = {{Auxin-regulated lateral root organogenesis}}, doi = {10.1101/cshperspect.a039941}, year = {2021}, } @article{9225, abstract = {The Landau–Pekar equations describe the dynamics of a strongly coupled polaron. Here, we provide a class of initial data for which the associated effective Hamiltonian has a uniform spectral gap for all times. For such initial data, this allows us to extend the results on the adiabatic theorem for the Landau–Pekar equations and their derivation from the Fröhlich model obtained in previous works to larger times.}, author = {Feliciangeli, Dario and Rademacher, Simone Anna Elvira and Seiringer, Robert}, issn = {15730530}, journal = {Letters in Mathematical Physics}, publisher = {Springer Nature}, title = {{Persistence of the spectral gap for the Landau–Pekar equations}}, doi = {10.1007/s11005-020-01350-5}, volume = {111}, year = {2021}, } @inproceedings{9227, abstract = {In the multiway cut problem we are given a weighted undirected graph G=(V,E) and a set T⊆V of k terminals. The goal is to find a minimum weight set of edges E′⊆E with the property that by removing E′ from G all the terminals become disconnected. In this paper we present a simple local search approximation algorithm for the multiway cut problem with approximation ratio 2 − 2/k. We present an experimental evaluation of the performance of our local search algorithm and show that it greatly outperforms the isolation heuristic of Dahlhaus et al. and it has similar performance as the much more complex algorithms of Calinescu et al., Sharma and Vondrak, and Buchbinder et al. which have the currently best known approximation ratios for this problem.}, author = {Bloch-Hansen, Andrew and Samei, Nasim and Solis-Oba, Roberto}, booktitle = {Conference on Algorithms and Discrete Applied Mathematics}, isbn = {9783030678982}, issn = {1611-3349}, location = {Rupnagar, India}, pages = {346--358}, publisher = {Springer Nature}, title = {{Experimental evaluation of a local search approximation algorithm for the multiway cut problem}}, doi = {10.1007/978-3-030-67899-9_28}, volume = {12601}, year = {2021}, } @article{9224, abstract = {We re-examine attempts to study the many-body localization transition using measures that are physically natural on the ergodic/quantum chaotic regime of the phase diagram. 
Using simple scaling arguments and an analysis of various models for which rigorous results are available, we find that these measures can be particularly adversely affected by the strong finite-size effects observed in nearly all numerical studies of many-body localization. This severely impacts their utility in probing the transition and the localized phase. In light of this analysis, we discuss a recent study (Šuntajs et al., 2020) of the behaviour of the Thouless energy and level repulsion in disordered spin chains, and its implications for the question of whether MBL is a true phase of matter.}, author = {Abanin, D. A. and Bardarson, J. H. and De Tomasi, G. and Gopalakrishnan, S. and Khemani, V. and Parameswaran, S. A. and Pollmann, F. and Potter, A. C. and Serbyn, Maksym and Vasseur, R.}, issn = {1096035X}, journal = {Annals of Physics}, publisher = {Elsevier}, title = {{Distinguishing localization from chaos: Challenges in finite-size systems}}, doi = {10.1016/j.aop.2021.168415}, volume = {427}, year = {2021}, } @article{9226, abstract = {Half a century after Lewis Wolpert's seminal conceptual advance on how cellular fates distribute in space, we provide a brief historical perspective on how the concept of positional information emerged and influenced the field of developmental biology and beyond. We focus on a modern interpretation of this concept in terms of information theory, largely centered on its application to cell specification in the early Drosophila embryo. We argue that a true physical variable (position) is encoded in local concentrations of patterning molecules, that this mapping is stochastic, and that the processes by which positions and corresponding cell fates are determined based on these concentrations need to take such stochasticity into account. With this approach, we shift the focus from biological mechanisms, molecules, genes and pathways to quantitative systems-level questions: where does positional information reside, how it is transformed and accessed during development, and what fundamental limits it is subject to?}, author = {Tkačik, Gašper and Gregor, Thomas}, issn = {1477-9129}, journal = {Development}, number = {2}, publisher = {The Company of Biologists}, title = {{The many bits of positional information}}, doi = {10.1242/dev.176065}, volume = {148}, year = {2021}, } @article{9228, abstract = {Legacy conferences are costly and time consuming, and exclude scientists lacking various resources or abilities. During the 2020 pandemic, we created an online conference platform, Neuromatch Conferences (NMC), aimed at developing technological and cultural changes to make conferences more democratic, scalable, and accessible. We discuss the lessons we learned.}, author = {Achakulvisut, Titipat and Ruangrong, Tulakan and Mineault, Patrick and Vogels, Tim P and Peters, Megan A.K. and Poirazi, Panayiota and Rozell, Christopher and Wyble, Brad and Goodman, Dan F.M. and Kording, Konrad Paul}, issn = {1879-307X}, journal = {Trends in Cognitive Sciences}, publisher = {Elsevier}, title = {{Towards democratizing and automating online conferences: Lessons from the Neuromatch Conferences}}, doi = {10.1016/j.tics.2021.01.007}, year = {2021}, } @unpublished{9230, abstract = {We consider a model of the Riemann zeta function on the critical axis and study its maximum over intervals of length (log T)θ, where θ is either fixed or tends to zero at a suitable rate. 
It is shown that the deterministic level of the maximum interpolates smoothly between the ones of log-correlated variables and of i.i.d. random variables, exhibiting a smooth transition ‘from 3/4 to 1/4’ in the second order. This provides a natural context where extreme value statistics of log-correlated variables with time-dependent variance and rate occur. A key ingredient of the proof is a precise upper tail tightness estimate for the maximum of the model on intervals of size one, that includes a Gaussian correction. This correction is expected to be present for the Riemann zeta function and pertains to the question of the correct order of the maximum of the zeta function in large intervals.}, author = {Arguin, Louis-Pierre and Dubach, Guillaume and Hartung, Lisa}, booktitle = {arXiv}, title = {{Maxima of a random model of the Riemann zeta function over intervals of varying length}}, year = {2021}, } @article{9188, abstract = {Genomic imprinting is an epigenetic mechanism that results in parental allele-specific expression of ~1% of all genes in mouse and human. Imprinted genes are key developmental regulators and play pivotal roles in many biological processes such as nutrient transfer from the mother to offspring and neuronal development. Imprinted genes are also involved in human disease, including neurodevelopmental disorders, and often occur in clusters that are regulated by a common imprint control region (ICR). In extra-embryonic tissues ICRs can act over large distances, with the largest surrounding Igf2r spanning over 10 million base-pairs. Besides classical imprinted expression that shows near exclusive maternal or paternal expression, widespread biased imprinted expression has been identified mainly in brain. In this review we discuss recent developments mapping cell type specific imprinted expression in extra-embryonic tissues and neocortex in the mouse. We highlight the advantages of using an inducible uniparental chromosome disomy (UPD) system to generate cells carrying either two maternal or two paternal copies of a specific chromosome to analyze the functional consequences of genomic imprinting. Mosaic Analysis with Double Markers (MADM) allows fluorescent labeling and concomitant induction of UPD sparsely in specific cell types, and thus to over-express or suppress all imprinted genes on that chromosome. To illustrate the utility of this technique, we explain how MADM-induced UPD revealed new insights about the function of the well-studied Cdkn1c imprinted gene, and how MADM-induced UPDs led to identification of highly cell type specific phenotypes related to perturbed imprinted expression in the mouse neocortex. Finally, we give an outlook on how MADM could be used to probe cell type specific imprinted expression in other tissues in mouse, particularly in extra-embryonic tissues.}, author = {Pauler, Florian and Hudson, Quanah and Laukoter, Susanne and Hippenmeyer, Simon}, issn = {0197-0186}, journal = {Neurochemistry International}, keywords = {Cell Biology, Cellular and Molecular Neuroscience}, number = {5}, publisher = {Elsevier}, title = {{Inducible uniparental chromosome disomy to probe genomic imprinting at single-cell level in brain and beyond}}, doi = {10.1016/j.neuint.2021.104986}, volume = {145}, year = {2021}, } @article{9118, abstract = {Cesium lead halides have intrinsically unstable crystal lattices and easily transform within perovskite and nonperovskite structures. 
In this work, we explore the conversion of the perovskite CsPbBr3 into Cs4PbBr6 in the presence of PbS at 450 °C to produce doped nanocrystal-based composites with embedded Cs4PbBr6 nanoprecipitates. We show that PbBr2 is extracted from CsPbBr3 and diffuses into the PbS lattice with a consequent increase in the concentration of free charge carriers. This new doping strategy enables the adjustment of the density of charge carriers between 10^19 and 10^20 cm^-3, and it may serve as a general strategy for doping other nanocrystal-based semiconductors.}, author = {Calcabrini, Mariano and Genc, Aziz and Liu, Yu and Kleinhanns, Tobias and Lee, Seungho and Dirin, Dmitry N. and Akkerman, Quinten A. and Kovalenko, Maksym V. and Arbiol, Jordi and Ibáñez, Maria}, issn = {23808195}, journal = {ACS Energy Letters}, number = {2}, pages = {581--587}, publisher = {American Chemical Society}, title = {{Exploiting the lability of metal halide perovskites for doping semiconductor nanocomposites}}, doi = {10.1021/acsenergylett.0c02448}, volume = {6}, year = {2021}, } @article{9234, abstract = {In this paper, we present two new inertial projection-type methods for solving multivalued variational inequality problems in finite-dimensional spaces. We establish the convergence of the sequence generated by these methods when the multivalued mapping associated with the problem is only required to be locally bounded without any monotonicity assumption. Furthermore, the inertial techniques that we employ in this paper are quite different from the ones used in most papers. Moreover, based on the weaker assumptions on the inertial factor in our methods, we derive several special cases of our methods. Finally, we present some experimental results to illustrate the profits that we gain by introducing the inertial extrapolation steps.}, author = {Izuchukwu, Chinedu and Shehu, Yekini}, issn = {1566-113X}, journal = {Networks and Spatial Economics}, keywords = {Computer Networks and Communications, Software, Artificial Intelligence}, publisher = {Springer Nature}, title = {{New inertial projection methods for solving multivalued variational inequality problems beyond monotonicity}}, doi = {10.1007/s11067-021-09517-w}, year = {2021}, } @article{8603, abstract = {We consider the Fröhlich polaron model in the strong coupling limit. It is well‐known that to leading order the ground state energy is given by the (classical) Pekar energy. In this work, we establish the subleading correction, describing quantum fluctuation about the classical limit. Our proof applies to a model of a confined polaron, where both the electron and the polarization field are restricted to a set of finite volume, with linear size determined by the natural length scale of the Pekar problem.}, author = {Frank, Rupert and Seiringer, Robert}, issn = {10970312}, journal = {Communications on Pure and Applied Mathematics}, number = {3}, pages = {544--588}, publisher = {Wiley}, title = {{Quantum corrections to the Pekar asymptotics of a strongly coupled polaron}}, doi = {10.1002/cpa.21944}, volume = {74}, year = {2021}, } @article{8792, abstract = {This paper is concerned with a non-isothermal Cahn-Hilliard model based on a microforce balance. The model was derived by A. Miranville and G. Schimperna starting from the two fundamental laws of Thermodynamics, following M. Gurtin's two-scale approach. The main working assumptions are made on the behaviour of the heat flux as the absolute temperature tends to zero and to infinity. 
A suitable Ginzburg-Landau free energy is considered. Global-in-time existence for the initial-boundary value problem associated to the entropy formulation and, in a subcase, also to the weak formulation of the model is proved by deriving suitable a priori estimates and by showing weak sequential stability of families of approximating solutions. At last, some highlights are given regarding a possible approximation scheme compatible with the a-priori estimates available for the system.}, author = {Marveggio, Alice and Schimperna, Giulio}, issn = {10902732}, journal = {Journal of Differential Equations}, number = {2}, pages = {924--970}, publisher = {Elsevier}, title = {{On a non-isothermal Cahn-Hilliard model based on a microforce balance}}, doi = {10.1016/j.jde.2020.10.030}, volume = {274}, year = {2021}, } @inbook{7941, abstract = {Expansion microscopy is a recently developed super-resolution imaging technique, which provides an alternative to optics-based methods such as deterministic approaches (e.g. STED) or stochastic approaches (e.g. PALM/STORM). The idea behind expansion microscopy is to embed the biological sample in a swellable gel, and then to expand it isotropically, thereby increasing the distance between the fluorophores. This approach breaks the diffraction barrier by simply separating the emission point-spread-functions of the fluorophores. The resolution attainable in expansion microscopy is thus directly dependent on the separation that can be achieved, i.e. on the expansion factor. The original implementation of the technique achieved an expansion factor of fourfold, for a resolution of 70–80 nm. The subsequently developed X10 method achieves an expansion factor of 10-fold, for a resolution of 25–30 nm. This technique can be implemented with minimal technical requirements on any standard fluorescence microscope, and is more easily applied for multi-color imaging than either deterministic or stochastic super-resolution approaches. This renders X10 expansion microscopy a highly promising tool for new biological discoveries, as discussed here, and as demonstrated by several recent applications.}, author = {Truckenbrodt, Sven M and Rizzoli, Silvio O.}, booktitle = {Methods in Cell Biology}, isbn = {978012820807-6}, issn = {0091-679X}, pages = {33--56}, publisher = {Elsevier}, title = {{Simple multi-color super-resolution by X10 microscopy}}, doi = {10.1016/bs.mcb.2020.04.016}, volume = {161}, year = {2021}, } @article{9168, abstract = {Interspecific crossing experiments have shown that sex chromosomes play a major role in reproductive isolation between many pairs of species. However, their ability to act as reproductive barriers, which hamper interspecific genetic exchange, has rarely been evaluated quantitatively compared to Autosomes. This genome-wide limitation of gene flow is essential for understanding the complete separation of species, and thus speciation. Here, we develop a mainland-island model of secondary contact between hybridizing species of an XY (or ZW) sexual system. We obtain theoretical predictions for the frequency of introgressed alleles, and the strength of the barrier to neutral gene flow for the two types of chromosomes carrying multiple interspecific barrier loci. Theoretical predictions are obtained for scenarios where introgressed alleles are rare. We show that the same analytical expressions apply for sex chromosomes and autosomes, but with different sex-averaged effective parameters. 
The specific features of sex chromosomes (hemizygosity and absence of recombination in the heterogametic sex) lead to reduced levels of introgression on the X (or Z) compared to autosomes. This effect can be enhanced by certain types of sex-biased forces, but it remains overall small (except when alleles causing incompatibilities are recessive). We discuss these predictions in the light of empirical data comprising model-based tests of introgression and cline surveys in various biological systems.}, author = {Fraisse, Christelle and Sachdeva, Himani}, issn = {1943-2631}, journal = {Genetics}, number = {2}, publisher = {Oxford University Press}, title = {{The rates of introgression and barriers to genetic exchange between hybridizing species: Sex chromosomes vs autosomes}}, doi = {10.1093/genetics/iyaa025}, volume = {217}, year = {2021}, } @article{9009, abstract = {Recent advancements in live cell imaging technologies have identified the phenomenon of intracellular propagation of late apoptotic events, such as cytochrome c release and caspase activation. The mechanism, prevalence, and speed of apoptosis propagation remain unclear. Additionally, no studies have demonstrated propagation of the pro-apoptotic protein, BAX. To evaluate the role of BAX in intracellular apoptotic propagation, we used high speed live-cell imaging to visualize fluorescently tagged-BAX recruitment to mitochondria in four immortalized cell lines. We show that propagation of mitochondrial BAX recruitment occurs in parallel to cytochrome c and SMAC/Diablo release and is affected by cellular morphology, such that cells with processes are more likely to exhibit propagation. The initiation of propagation events is most prevalent in the distal tips of processes, while the rate of propagation is influenced by the 2-dimensional width of the process. Propagation was rarely observed in the cell soma, which exhibited near synchronous recruitment of BAX. Propagation velocity is not affected by mitochondrial volume in segments of processes, but is negatively affected by mitochondrial density. There was no evidence of a propagating wave of increased levels of intracellular calcium ions. Alternatively, we did observe a uniform increase in superoxide build-up in cellular mitochondria, which was released as a propagating wave simultaneously with the propagating recruitment of BAX to the mitochondrial outer membrane.}, author = {Grosser, Joshua A. and Maes, Margaret E and Nickells, Robert W.}, issn = {1573-675X}, journal = {Apoptosis}, number = {2}, pages = {132--145}, publisher = {Springer Nature}, title = {{Characteristics of intracellular propagation of mitochondrial BAX recruitment during apoptosis}}, doi = {10.1007/s10495-020-01654-w}, volume = {26}, year = {2021}, } @article{8689, abstract = {This paper continues the discussion started in [CK19] concerning Arnold's legacy on classical KAM theory and (some of) its modern developments. We prove a detailed and explicit `global' Arnold's KAM Theorem, which yields, in particular, the Whitney conjugacy of a non{degenerate, real{analytic, nearly-integrable Hamiltonian system to an integrable system on a closed, nowhere dense, positive measure subset of the phase space. Detailed measure estimates on the Kolmogorov's set are provided in the case the phase space is: (A) a uniform neighbourhood of an arbitrary (bounded) set times the d-torus and (B) a domain with C2 boundary times the d-torus. 
All constants are explicitly given.}, author = {Chierchia, Luigi and Koudjinan, Edmond}, issn = {1560-3547}, journal = {Regular and Chaotic Dynamics}, keywords = {Nearly{integrable Hamiltonian systems, perturbation theory, KAM Theory, Arnold's scheme, Kolmogorov's set, primary invariant tori, Lagrangian tori, measure estimates, small divisors, integrability on nowhere dense sets, Diophantine frequencies.}, number = {1}, pages = {61--88}, publisher = {Springer Nature}, title = {{V.I. Arnold's ''Global'' KAM theorem and geometric measure estimates}}, doi = {10.1134/S1560354721010044}, volume = {26}, year = {2021}, } @article{9006, abstract = {Cytoplasm is a gel-like crowded environment composed of various macromolecules, organelles, cytoskeletal networks, and cytosol. The structure of the cytoplasm is highly organized and heterogeneous due to the crowding of its constituents and their effective compartmentalization. In such an environment, the diffusive dynamics of the molecules are restricted, an effect that is further amplified by clustering and anchoring of molecules. Despite the crowded nature of the cytoplasm at the microscopic scale, large-scale reorganization of the cytoplasm is essential for important cellular functions, such as cell division and polarization. How such mesoscale reorganization of the cytoplasm is achieved, especially for large cells such as oocytes or syncytial tissues that can span hundreds of micrometers in size, is only beginning to be understood. In this review, we will discuss recent advances in elucidating the molecular, cellular, and biophysical mechanisms by which the cytoskeleton drives cytoplasmic reorganization across different scales, structures, and species.}, author = {Shamipour, Shayan and Caballero Mancebo, Silvia and Heisenberg, Carl-Philipp J}, issn = {18781551}, journal = {Developmental Cell}, number = {2}, pages = {P213--226}, publisher = {Elsevier}, title = {{Cytoplasm's got moves}}, doi = {10.1016/j.devcel.2020.12.002}, volume = {56}, year = {2021}, } @article{9235, abstract = {Cu2–xS has become one of the most promising thermoelectric materials for application in the middle-high temperature range. Its advantages include the abundance, low cost, and safety of its elements and a high performance at relatively elevated temperatures. However, stability issues limit its operation current and temperature, thus calling for the optimization of the material performance in the middle temperature range. Here, we present a synthetic protocol for large scale production of covellite CuS nanoparticles at ambient temperature and atmosphere, and using water as a solvent. The crystal phase and stoichiometry of the particles are afterward tuned through an annealing process at a moderate temperature under inert or reducing atmosphere. While annealing under argon results in Cu1.8S nanopowder with a rhombohedral crystal phase, annealing in an atmosphere containing hydrogen leads to tetragonal Cu1.96S. High temperature X-ray diffraction analysis shows the material annealed in argon to transform to the cubic phase at ca. 400 K, while the material annealed in the presence of hydrogen undergoes two phase transitions, first to hexagonal and then to the cubic structure. The annealing atmosphere, temperature, and time allow adjustment of the density of copper vacancies and thus tuning of the charge carrier concentration and material transport properties. 
In this direction, the material annealed under Ar is characterized by higher electrical conductivities but lower Seebeck coefficients than the material annealed in the presence of hydrogen. By optimizing the charge carrier concentration through the annealing time, Cu2–xS with record figures of merit in the middle temperature range, up to 1.41 at 710 K, is obtained. We finally demonstrate that this strategy, based on a low-cost and scalable solution synthesis process, is also suitable for the production of high performance Cu2–xS layers using high throughput and cost-effective printing technologies.}, author = {Li, Mengyao and Liu, Yu and Zhang, Yu and Han, Xu and Zhang, Ting and Zuo, Yong and Xie, Chenyang and Xiao, Ke and Arbiol, Jordi and Llorca, Jordi and Ibáñez, Maria and Liu, Junfeng and Cabot, Andreu}, issn = {1936-086X}, journal = {ACS Nano}, keywords = {General Engineering, General Physics and Astronomy, General Materials Science}, publisher = {American Chemical Society }, title = {{Effect of the annealing atmosphere on crystal phase and thermoelectric properties of copper sulfide}}, doi = {10.1021/acsnano.0c09866}, year = {2021}, } @article{9207, abstract = {In this paper we experimentally study the transitional range of Reynolds numbers in plane Couette–Poiseuille flow, focusing our attention on the localized turbulent structures triggered by a strong impulsive jet and the large-scale flow generated around these structures. We present a detailed investigation of the large-scale flow and show how its amplitude depends on Reynolds number and amplitude perturbation. In addition, we characterize the initial dynamics of the localized turbulent spot, which includes the coupling between the small and large scales, as well as the dependence of the advection speed on the large-scale flow generated around the spot. Finally, we provide the first experimental measurements of the large-scale flow around an oblique turbulent band.}, author = {Klotz, Lukasz and Pavlenko, A. M. and Wesfreid, J. E.}, issn = {1469-7645}, journal = {Journal of Fluid Mechanics}, publisher = {Cambridge University Press}, title = {{Experimental measurements in plane Couette-Poiseuille flow: Dynamics of the large- and small-scale flow}}, doi = {10.1017/jfm.2020.1089}, volume = {912}, year = {2021}, } @unpublished{9238, abstract = {Metabolic adaptation to changing demands underlies homeostasis. During inflammation or metastasis, cells leading migration into challenging environments require an energy boost, however what controls this capacity is unknown. We identify a previously unstudied nuclear protein, Atossa, as changing metabolism in Drosophila melanogaster immune cells to promote tissue invasion. Atossa’s vertebrate orthologs, FAM214A-B, can fully substitute for Atossa, indicating functional conservation from flies to mammals. Atossa increases mRNA levels of Porthos, an unstudied RNA helicase and two metabolic enzymes, LKR/SDH and GR/HPR. Porthos increases translation of a gene subset, including those affecting mitochondrial functions, the electron transport chain, and metabolism. Respiration measurements and metabolomics indicate that Atossa and Porthos powers up mitochondrial oxidative phosphorylation to produce sufficient energy for leading macrophages to forge a path into tissues. As increasing oxidative phosphorylation enables many crucial physiological responses, this unique genetic program may modulate a wide range of cellular behaviors beyond migration.}, author = {Emtenani, Shamsi and Martin, Elliott T. 
and György, Attila and Bicher, Julia and Genger, Jakob-Wendelin and Hurd, Thomas R. and Köcher, Thomas and Bergthaler, Andreas and Rangan, Prashanth and Siekhaus, Daria E}, booktitle = {bioRxiv}, title = {{A genetic program boosts mitochondrial function to power macrophage tissue invasion}}, doi = {10.1101/2021.02.18.431643}, year = {2021}, } @article{9243, abstract = {Peptidoglycan is an essential component of the bacterial cell envelope that surrounds the cytoplasmic membrane to protect the cell from osmotic lysis. Important antibiotics such as β-lactams and glycopeptides target peptidoglycan biosynthesis. Class A penicillin-binding proteins (PBPs) are bifunctional membrane-bound peptidoglycan synthases that polymerize glycan chains and connect adjacent stem peptides by transpeptidation. How these enzymes work in their physiological membrane environment is poorly understood. Here, we developed a novel Förster resonance energy transfer-based assay to follow in real time both reactions of class A PBPs reconstituted in liposomes or supported lipid bilayers and applied this assay with PBP1B homologues from Escherichia coli, Pseudomonas aeruginosa, and Acinetobacter baumannii in the presence or absence of their cognate lipoprotein activator. Our assay will allow unravelling the mechanisms of peptidoglycan synthesis in a lipid-bilayer environment and can be further developed to be used for high-throughput screening for new antimicrobials.}, author = {Hernández-Rocamora, Víctor M. and Baranova, Natalia S. and Peters, Katharina and Breukink, Eefjan and Loose, Martin and Vollmer, Waldemar}, issn = {2050-084X}, journal = {eLife}, publisher = {eLife Sciences Publications}, title = {{Real time monitoring of peptidoglycan synthesis by membrane-reconstituted penicillin binding proteins}}, doi = {10.7554/eLife.61525}, volume = {10}, year = {2021}, } @article{9240, abstract = {A stochastic PDE, describing mesoscopic fluctuations in systems of weakly interacting inertial particles of finite volume, is proposed and analysed in any finite dimension . It is a regularised and inertial version of the Dean–Kawasaki model. A high-probability well-posedness theory for this model is developed. This theory improves significantly on the spatial scaling restrictions imposed in an earlier work of the same authors, which applied only to significantly larger particles in one dimension. The well-posedness theory now applies in d-dimensions when the particle-width ϵ is proportional to for and N is the number of particles. This scaling is optimal in a certain Sobolev norm. Key tools of the analysis are fractional Sobolev spaces, sharp bounds on Bessel functions, separability of the regularisation in the d-spatial dimensions, and use of the Faà di Bruno's formula.}, author = {Cornalba, Federico and Shardlow, Tony and Zimmer, Johannes}, issn = {1090-2732}, journal = {Journal of Differential Equations}, number = {5}, pages = {253--283}, publisher = {Elsevier}, title = {{Well-posedness for a regularised inertial Dean–Kawasaki model for slender particles in several space dimensions}}, doi = {10.1016/j.jde.2021.02.048}, volume = {284}, year = {2021}, } @article{9239, abstract = {A graph game proceeds as follows: two players move a token through a graph to produce a finite or infinite path, which determines the payoff of the game. We study bidding games in which in each turn, an auction determines which player moves the token. 
Bidding games were largely studied in combination with two variants of first-price auctions called “Richman” and “poorman” bidding. We study taxman bidding, which spans the spectrum between the two. The game is parameterized by a constant τ ∈ [0, 1]: portion τ of the winning bid is paid to the other player, and portion 1 − τ to the bank. While finite-duration (reachability) taxman games have been studied before, we present, for the first time, results on infinite-duration taxman games: we unify, generalize, and simplify previous equivalences between bidding games and a class of stochastic games called random-turn games.}, author = {Avni, Guy and Henzinger, Thomas A and Žikelić, Đorđe}, issn = {1090-2724}, journal = {Journal of Computer and System Sciences}, number = {8}, pages = {133--144}, publisher = {Elsevier}, title = {{Bidding mechanisms in graph games}}, doi = {10.1016/j.jcss.2021.02.008}, volume = {119}, year = {2021}, } @article{9244, abstract = {Organ function depends on tissues adopting the correct architecture. However, insights into organ architecture are currently hampered by an absence of standardized quantitative 3D analysis. We aimed to develop a robust technology to visualize, digitalize, and segment the architecture of two tubular systems in 3D: double resin casting micro computed tomography (DUCT). As proof of principle, we applied DUCT to a mouse model for Alagille syndrome (Jag1Ndr/Ndr mice), characterized by intrahepatic bile duct paucity, that can spontaneously generate a biliary system in adulthood. DUCT identified increased central biliary branching and peripheral bile duct tortuosity as two compensatory processes occurring in distinct regions of Jag1Ndr/Ndr liver, leading to full reconstitution of wild-type biliary volume and phenotypic recovery. DUCT is thus a powerful new technology for 3D analysis, which can reveal novel phenotypes and provide a standardized method of defining liver architecture in mouse models.}, author = {Hankeova, Simona and Salplachta, Jakub and Zikmund, Tomas and Kavkova, Michaela and Van Hul, Noémi and Brinek, Adam and Smekalova, Veronika and Laznovsky, Jakub and Dawit, Feven and Jaros, Josef and Bryja, Vítězslav and Lendahl, Urban and Ellis, Ewa and Nemeth, Antal and Fischler, Björn and Hannezo, Edouard B and Kaiser, Jozef and Andersson, Emma Rachel}, issn = {2050084X}, journal = {eLife}, publisher = {eLife Sciences Publications}, title = {{DUCT reveals architectural mechanisms contributing to bile duct recovery in a mouse model for Alagille syndrome}}, doi = {10.7554/eLife.60916}, volume = {10}, year = {2021}, } @article{9241, abstract = {Volumetric light transport is a pervasive physical phenomenon, and therefore its accurate simulation is important for a broad array of disciplines. While suitable mathematical models for computing the transport are now available, obtaining the necessary material parameters needed to drive such simulations is a challenging task: direct measurements of these parameters from material samples are seldom possible. Building on the inverse scattering paradigm, we present a novel measurement approach which indirectly infers the transport parameters from extrinsic observations of multiple-scattered radiance. The novelty of the proposed approach lies in replacing structured illumination with a structured reflector bonded to the sample, and a robust fitting procedure that largely compensates for potential systematic errors in the calibration of the setup. 
We show the feasibility of our approach by validating simulations of complex 3D compositions of the measured materials against physical prints, using photo-polymer resins. As presented in this paper, our technique yields colorspace data suitable for accurate appearance reproduction in the area of 3D printing. Beyond that, and without fundamental changes to the basic measurement methodology, it could equally well be used to obtain spectral measurements that are useful for other application areas.}, author = {Elek, Oskar and Zhang, Ran and Sumin, Denis and Myszkowski, Karol and Bickel, Bernd and Wilkie, Alexander and Křivánek, Jaroslav and Weyrich, Tim}, issn = {1094-4087}, journal = {Optics Express}, number = {5}, pages = {7568--7588}, publisher = {The Optical Society}, title = {{Robust and practical measurement of volume transport parameters in solid photo-polymer materials for 3D printing}}, doi = {10.1364/OE.406095}, volume = {29}, year = {2021}, } @article{9246, abstract = {We consider the Fröhlich Hamiltonian in a mean-field limit where many bosonic particles weakly couple to the quantized phonon field. For large particle numbers and a suitably small coupling, we show that the dynamics of the system is approximately described by the Landau–Pekar equations. These describe a Bose–Einstein condensate interacting with a classical polarization field, whose dynamics is effected by the condensate, i.e., the back-reaction of the phonons that are created by the particles during the time evolution is of leading order.}, author = {Leopold, Nikolai K and Mitrouskas, David Johannes and Seiringer, Robert}, issn = {14320673}, journal = {Archive for Rational Mechanics and Analysis}, pages = {383--417}, publisher = {Springer Nature}, title = {{Derivation of the Landau–Pekar equations in a many-body mean-field limit}}, doi = {10.1007/s00205-021-01616-9}, volume = {240}, year = {2021}, } @inbook{9245, abstract = {Tissue morphogenesis is driven by mechanical forces triggering cell movements and shape changes. Quantitatively measuring tension within tissues is of great importance for understanding the role of mechanical signals acting on the cell and tissue level during morphogenesis. Here we introduce laser ablation as a useful tool to probe tissue tension within the granulosa layer, an epithelial monolayer of somatic cells that surround the zebrafish female gamete during folliculogenesis. We describe in detail how to isolate follicles, mount samples, perform laser surgery, and analyze the data.}, author = {Xia, Peng and Heisenberg, Carl-Philipp J}, booktitle = {Germline Development in the Zebrafish}, editor = {Dosch, Roland}, isbn = {9781071609699}, issn = {19406029}, pages = {117--128}, publisher = {Springer Nature}, title = {{Quantifying tissue tension in the granulosa layer after laser surgery}}, doi = {10.1007/978-1-0716-0970-5_10}, volume = {2218}, year = {2021}, } @article{9252, abstract = {This paper analyses the conditions for local adaptation in a metapopulation with infinitely many islands under a model of hard selection, where population size depends on local fitness. Each island belongs to one of two distinct ecological niches or habitats. Fitness is influenced by an additive trait which is under habitat‐dependent directional selection. Our analysis is based on the diffusion approximation and accounts for both genetic drift and demographic stochasticity. By neglecting linkage disequilibria, it yields the joint distribution of allele frequencies and population size on each island. 
We find that under hard selection, the conditions for local adaptation in a rare habitat are more restrictive for more polygenic traits: even moderate migration load per locus at very many loci is sufficient for population sizes to decline. This further reduces the efficacy of selection at individual loci due to increased drift and because smaller populations are more prone to swamping due to migration, causing a positive feedback between increasing maladaptation and declining population sizes. Our analysis also highlights the importance of demographic stochasticity, which exacerbates the decline in numbers of maladapted populations, leading to population collapse in the rare habitat at significantly lower migration than predicted by deterministic arguments.}, author = {Szep, Eniko and Sachdeva, Himani and Barton, Nicholas H}, issn = {0014-3820}, journal = {Evolution}, keywords = {Genetics, Ecology, Evolution, Behavior and Systematics, General Agricultural and Biological Sciences}, publisher = {Wiley}, title = {{Polygenic local adaptation in metapopulations: A stochastic eco‐evolutionary model}}, doi = {10.1111/evo.14210}, year = {2021}, } @article{9242, abstract = {In the recent years important experimental advances in resonant electro-optic modulators as high-efficiency sources for coherent frequency combs and as devices for quantum information transfer have been realized, where strong optical and microwave mode coupling were achieved. These features suggest electro-optic-based devices as candidates for entangled optical frequency comb sources. In the present work, I study the generation of entangled optical frequency combs in millimeter-sized resonant electro-optic modulators. These devices profit from the experimentally proven advantages such as nearly constant optical free spectral ranges over several gigahertz, and high optical and microwave quality factors. The generation of frequency multiplexed quantum channels with spectral bandwidth in the MHz range for conservative parameter values paves the way towards novel uses in long-distance hybrid quantum networks, quantum key distribution, enhanced optical metrology, and quantum computing.}, author = {Rueda Sanchez, Alfredo R}, issn = {2469-9934}, journal = {Physical Review A}, number = {2}, publisher = {American Physical Society}, title = {{Frequency-multiplexed hybrid optical entangled source based on the Pockels effect}}, doi = {10.1103/PhysRevA.103.023708}, volume = {103}, year = {2021}, } @article{9257, abstract = {The inverse problem of designing component interactions to target emergent structure is fundamental to numerous applications in biotechnology, materials science, and statistical physics. Equally important is the inverse problem of designing emergent kinetics, but this has received considerably less attention. Using recent advances in automatic differentiation, we show how kinetic pathways can be precisely designed by directly differentiating through statistical physics models, namely free energy calculations and molecular dynamics simulations. We consider two systems that are crucial to our understanding of structural self-assembly: bulk crystallization and small nanoclusters. In each case, we are able to assemble precise dynamical features. Using gradient information, we manipulate interactions among constituent particles to tune the rate at which these systems yield specific structures of interest. 
Moreover, we use this approach to learn nontrivial features about the high-dimensional design space, allowing us to accurately predict when multiple kinetic features can be simultaneously and independently controlled. These results provide a concrete and generalizable foundation for studying nonstructural self-assembly, including kinetic properties as well as other complex emergent properties, in a vast array of systems.}, author = {Goodrich, Carl Peter and King, Ella M. and Schoenholz, Samuel S. and Cubuk, Ekin D. and Brenner, Michael P.}, issn = {1091-6490}, journal = {PNAS}, number = {10}, publisher = {Proceedings of the National Academy of Sciences}, title = {{Designing self-assembling kinetics with differentiable statistical physics models}}, doi = {10.1073/pnas.2024083118}, volume = {118}, year = {2021}, } @article{9259, abstract = {Gradients of chemokines and growth factors guide migrating cells and morphogenetic processes. Migration of antigen-presenting dendritic cells from the interstitium into the lymphatic system is dependent on chemokine CCL21, which is secreted by endothelial cells of the lymphatic capillary, binds heparan sulfates and forms gradients decaying into the interstitium. Despite the importance of CCL21 gradients, and chemokine gradients in general, the mechanisms of gradient formation are unclear. Studies on fibroblast growth factors have shown that limited diffusion is crucial for gradient formation. Here, we used the mouse dermis as a model tissue to address the necessity of CCL21 anchoring to lymphatic capillary heparan sulfates in the formation of interstitial CCL21 gradients. Surprisingly, the absence of lymphatic endothelial heparan sulfates resulted only in a modest decrease of CCL21 levels at the lymphatic capillaries and did neither affect interstitial CCL21 gradient shape nor dendritic cell migration toward lymphatic capillaries. Thus, heparan sulfates at the level of the lymphatic endothelium are dispensable for the formation of a functional CCL21 gradient.}, author = {Vaahtomeri, Kari and Moussion, Christine and Hauschild, Robert and Sixt, Michael K}, issn = {1664-3224}, journal = {Frontiers in Immunology}, publisher = {Frontiers}, title = {{Shape and function of interstitial chemokine CCL21 gradients are independent of heparan sulfates produced by lymphatic endothelium}}, doi = {10.3389/fimmu.2021.630002}, volume = {12}, year = {2021}, } @article{9254, abstract = {Auxin is a key regulator of plant growth and development. Local auxin biosynthesis and intercellular transport generates regional gradients in the root that are instructive for processes such as specification of developmental zones that maintain root growth and tropic responses. Here we present a toolbox to study auxin-mediated root development that features: (i) the ability to control auxin synthesis with high spatio-temporal resolution and (ii) single-cell nucleus tracking and morphokinetic analysis infrastructure. Integration of these two features enables cutting-edge analysis of root development at single-cell resolution based on morphokinetic parameters under normal growth conditions and during cell-type-specific induction of auxin biosynthesis. We show directional auxin flow in the root and refine the contributions of key players in this process. In addition, we determine the quantitative kinetics of Arabidopsis root meristem skewing, which depends on local auxin gradients but does not require PIN2 and AUX1 auxin transporter activities. 
Beyond the mechanistic insights into root development, the tools developed here will enable biologists to study kinetics and morphology of various critical processes at the single cell-level in whole organisms.}, author = {Hu, Yangjie and Omary, Moutasem and Hu, Yun and Doron, Ohad and Hörmayer, Lukas and Chen, Qingguo and Megides, Or and Chekli, Ori and Ding, Zhaojun and Friml, Jiří and Zhao, Yunde and Tsarfaty, Ilan and Shani, Eilon}, issn = {20411723}, journal = {Nature Communications}, publisher = {Springer Nature}, title = {{Cell kinetics of auxin transport and activity in Arabidopsis root growth and skewing}}, doi = {10.1038/s41467-021-21802-3}, volume = {12}, year = {2021}, } @article{9250, abstract = {Aprotic alkali metal–O2 batteries face two major obstacles to their chemistry occurring efficiently, the insulating nature of the formed alkali superoxides/peroxides and parasitic reactions that are caused by the highly reactive singlet oxygen (1O2). Redox mediators are recognized to be key for improving rechargeability. However, it is unclear how they affect 1O2 formation, which hinders strategies for their improvement. Here we clarify the mechanism of mediated peroxide and superoxide oxidation and thus explain how redox mediators either enhance or suppress 1O2 formation. We show that charging commences with peroxide oxidation to a superoxide intermediate and that redox potentials above ~3.5 V versus Li/Li+ drive 1O2 evolution from superoxide oxidation, while disproportionation always generates some 1O2. We find that 1O2 suppression requires oxidation to be faster than the generation of 1O2 from disproportionation. Oxidation rates decrease with growing driving force following Marcus inverted-region behaviour, establishing a region of maximum rate.}, author = {Petit, Yann K. and Mourad, Eléonore and Prehal, Christian and Leypold, Christian and Windischbacher, Andreas and Mijailovic, Daniel and Slugovc, Christian and Borisov, Sergey M. and Zojer, Egbert and Brutti, Sergio and Fontaine, Olivier and Freunberger, Stefan Alexander}, issn = {1755-4330}, journal = {Nature Chemistry}, keywords = {General Chemistry, General Chemical Engineering}, publisher = {Springer Nature}, title = {{Mechanism of mediated alkali peroxide oxidation and triplet versus singlet oxygen formation}}, doi = {10.1038/s41557-021-00643-z}, year = {2021}, } @article{9255, abstract = {Our ability to trust that a random number is truly random is essential for fields as diverse as cryptography and fundamental tests of quantum mechanics. Existing solutions both come with drawbacks—device-independent quantum random number generators (QRNGs) are highly impractical and standard semi-device-independent QRNGs are limited to a specific physical implementation and level of trust. Here we propose a framework for semi-device-independent randomness certification, using a source of trusted vacuum in the form of a signal shutter. It employs a flexible set of assumptions and levels of trust, allowing it to be applied in a wide range of physical scenarios involving both quantum and classical entropy sources. 
We experimentally demonstrate our protocol with a photonic setup and generate secure random bits under three different assumptions with varying degrees of security and resulting data rates.}, author = {Pivoluska, Matej and Plesch, Martin and Farkas, Máté and Ruzickova, Natalia and Flegel, Clara and Valencia, Natalia Herrera and Mccutcheon, Will and Malik, Mehul and Aguilar, Edgar A.}, issn = {2056-6387}, journal = {npj Quantum Information}, publisher = {Springer Nature}, title = {{Semi-device-independent random number generation with flexible assumptions}}, doi = {10.1038/s41534-021-00387-1}, volume = {7}, year = {2021}, } @inproceedings{9253, abstract = {In March 2020, the Austrian government introduced a widespread lock-down in response to the COVID-19 pandemic. Based on subjective impressions and anecdotal evidence, Austrian public and private life came to a sudden halt. Here we assess the effect of the lock-down quantitatively for all regions in Austria and present an analysis of daily changes of human mobility throughout Austria using near-real-time anonymized mobile phone data. We describe an efficient data aggregation pipeline and analyze the mobility by quantifying mobile-phone traffic at specific point of interests (POIs), analyzing individual trajectories and investigating the cluster structure of the origin-destination graph. We found a reduction of commuters at Viennese metro stations of over 80% and the number of devices with a radius of gyration of less than 500 m almost doubled. The results of studying crowd-movement behavior highlight considerable changes in the structure of mobility networks, revealed by a higher modularity and an increase from 12 to 20 detected communities. We demonstrate the relevance of mobility data for epidemiological studies by showing a significant correlation of the outflow from the town of Ischgl (an early COVID-19 hotspot) and the reported COVID-19 cases with an 8-day time lag. This research indicates that mobile phone usage data permits the moment-by-moment quantification of mobility behavior for a whole country. We emphasize the need to improve the availability of such data in anonymized form to empower rapid response to combat COVID-19 and future pandemics.}, author = {Heiler, Georg and Reisch, Tobias and Hurt, Jan and Forghani, Mohammad and Omani, Aida and Hanbury, Allan and Karimipour, Farid}, booktitle = {2020 IEEE International Conference on Big Data}, isbn = {9781728162515}, location = {Atlanta, GA, United States}, publisher = {IEEE}, title = {{Country-wide mobility changes observed using mobile phone data during COVID-19 pandemic}}, doi = {10.1109/bigdata50022.2020.9378374}, year = {2021}, } @article{9256, abstract = {We consider the ferromagnetic quantum Heisenberg model in one dimension, for any spin S≥1/2. We give upper and lower bounds on the free energy, proving that at low temperature it is asymptotically equal to the one of an ideal Bose gas of magnons, as predicted by the spin-wave approximation. 
The trial state used in the upper bound yields an analogous estimate also in the case of two spatial dimensions, which is believed to be sharp at low temperature.}, author = {Napiórkowski, Marcin M and Seiringer, Robert}, issn = {15730530}, journal = {Letters in Mathematical Physics}, number = {2}, publisher = {Springer Nature}, title = {{Free energy asymptotics of the quantum Heisenberg spin chain}}, doi = {10.1007/s11005-021-01375-4}, volume = {111}, year = {2021}, } @article{9258, author = {Pinkard, Henry and Stuurman, Nico and Ivanov, Ivan E. and Anthony, Nicholas M. and Ouyang, Wei and Li, Bin and Yang, Bin and Tsuchida, Mark A. and Chhun, Bryant and Zhang, Grace and Mei, Ryan and Anderson, Michael and Shepherd, Douglas P. and Hunt-Isaak, Ian and Dunn, Raymond L. and Jahr, Wiebke and Kato, Saul and Royer, Loïc A. and Thiagarajah, Jay R. and Eliceiri, Kevin W. and Lundberg, Emma and Mehta, Shalin B. and Waller, Laura}, issn = {1548-7105}, journal = {Nature Methods}, number = {3}, pages = {226--228}, publisher = {Springer Nature}, title = {{Pycro-Manager: Open-source software for customized and reproducible microscope control}}, doi = {10.1038/s41592-021-01087-6}, volume = {18}, year = {2021}, } @article{9262, abstract = {Sequence-specific oligomers with predictable folding patterns, i.e., foldamers, provide new opportunities to mimic α-helical peptides and design inhibitors of protein-protein interactions. One major hurdle of this strategy is to retain the correct orientation of key side chains involved in protein surface recognition. Here, we show that the structural plasticity of a foldamer backbone may notably contribute to the required spatial adjustment for optimal interaction with the protein surface. By using oligoureas as α helix mimics, we designed a foldamer/peptide hybrid inhibitor of histone chaperone ASF1, a key regulator of chromatin dynamics. The crystal structure of its complex with ASF1 reveals a notable plasticity of the urea backbone, which adapts to the ASF1 surface to maintain the same binding interface. One additional benefit of generating ASF1 ligands with nonpeptide oligourea segments is the resistance to proteolysis in human plasma, which was highly improved compared to the cognate α-helical peptide.}, author = {Mbianda, Johanne and Bakail, May M and André, Christophe and Moal, Gwenaëlle and Perrin, Marie E. and Pinna, Guillaume and Guerois, Raphaël and Becher, Francois and Legrand, Pierre and Traoré, Seydou and Douat, Céline and Guichard, Gilles and Ochsenbein, Françoise}, issn = {2375-2548}, journal = {Science Advances}, number = {12}, publisher = {American Association for the Advancement of Science}, title = {{Optimal anchoring of a foldamer inhibitor of ASF1 histone chaperone through backbone plasticity}}, doi = {10.1126/sciadv.abd9153}, volume = {7}, year = {2021}, } @article{9260, abstract = {We study the density of rational points on a higher-dimensional orbifold (Pn−1,Δ) when Δ is a Q-divisor involving hyperplanes. This allows us to address a question of Tanimoto about whether the set of rational points on such an orbifold constitutes a thin set. Our approach relies on the Hardy–Littlewood circle method to first study an asymptotic version of Waring’s problem for mixed powers. 
In doing so we make crucial use of the recent resolution of the main conjecture in Vinogradov’s mean value theorem, due to Bourgain–Demeter–Guth and Wooley.}, author = {Browning, Timothy D and Yamagishi, Shuntaro}, issn = {14321823}, journal = {Mathematische Zeitschrift}, publisher = {Springer Nature}, title = {{Arithmetic of higher-dimensional orbifolds and a mixed Waring problem}}, doi = {10.1007/s00209-021-02695-w}, year = {2021}, } @misc{9192, abstract = {Here are the research data underlying the publication " Effects of fine-scale population structure on inbreeding in a long-term study of snapdragons (Antirrhinum majus)" (working title "Estimating inbreeding and its effects in a long-term study of snapdragons (Antirrhinum majus)"). Further information are summed up in the README document.}, author = {Arathoon, Louise S and Surendranadh, Parvathy and Barton, Nicholas H and Field, David and Pickup, Melinda and Baskett, Carina}, publisher = {IST Austria}, title = {{Effects of fine-scale population structure on inbreeding in a long-term study of snapdragons (Antirrhinum majus)}}, doi = {10.15479/AT:ISTA:9192}, year = {2021}, } @unpublished{9281, abstract = {We comment on two formal proofs of Fermat's sum of two squares theorem, written using the Mathematical Components libraries of the Coq proof assistant. The first one follows Zagier's celebrated one-sentence proof; the second follows David Christopher's recent new proof relying on partition-theoretic arguments. Both formal proofs rely on a general property of involutions of finite sets, of independent interest. The proof technique consists for the most part of automating recurrent tasks (such as case distinctions and computations on natural numbers) via ad hoc tactics.}, author = {Dubach, Guillaume and Mühlböck, Fabian}, booktitle = {arXiv}, title = {{Formal verification of Zagier's one-sentence proof}}, year = {2021}, } @article{9298, abstract = {In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate on a regular basis updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. 
Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field. }, author = {Klionsky, Daniel J. and Abdel-Aziz, Amal Kamal and Abdelfatah, Sara and Abdellatif, Mahmoud and Abdoli, Asghar and Abel, Steffen and Abeliovich, Hagai and Abildgaard, Marie H. and Abudu, Yakubu Princely and Acevedo-Arozena, Abraham and Adamopoulos, Iannis E. and Adeli, Khosrow and Adolph, Timon E. and Adornetto, Annagrazia and Aflaki, Elma and Agam, Galila and Agarwal, Anupam and Aggarwal, Bharat B. and Agnello, Maria and Agostinis, Patrizia and Agrewala, Javed N. and Agrotis, Alexander and Aguilar, Patricia V. and Ahmad, S. Tariq and Ahmed, Zubair M. and Ahumada-Castro, Ulises and Aits, Sonja and Aizawa, Shu and Akkoc, Yunus and Akoumianaki, Tonia and Akpinar, Hafize Aysin and Al-Abd, Ahmed M. and Al-Akra, Lina and Al-Gharaibeh, Abeer and Alaoui-Jamali, Moulay A. and Alberti, Simon and Alcocer-Gómez, Elísabet and Alessandri, Cristiano and Ali, Muhammad and Alim Al-Bari, M. Abdul and Aliwaini, Saeb and Alizadeh, Javad and Almacellas, Eugènia and Almasan, Alexandru and Alonso, Alicia and Alonso, Guillermo D. and Altan-Bonnet, Nihal and Altieri, Dario C. and Álvarez, Élida M.C. and Alves, Sara and Alves Da Costa, Cristine and Alzaharna, Mazen M. and Amadio, Marialaura and Amantini, Consuelo and Amaral, Cristina and Ambrosio, Susanna and Amer, Amal O. and Ammanathan, Veena and An, Zhenyi and Andersen, Stig U. and Andrabi, Shaida A. and Andrade-Silva, Magaiver and Andres, Allen M. and Angelini, Sabrina and Ann, David and Anozie, Uche C. and Ansari, Mohammad Y. and Antas, Pedro and Antebi, Adam and Antón, Zuriñe and Anwar, Tahira and Apetoh, Lionel and Apostolova, Nadezda and Araki, Toshiyuki and Araki, Yasuhiro and Arasaki, Kohei and Araújo, Wagner L. and Araya, Jun and Arden, Catherine and Arévalo, Maria Angeles and Arguelles, Sandro and Arias, Esperanza and Arikkath, Jyothi and Arimoto, Hirokazu and Ariosa, Aileen R. and Armstrong-James, Darius and Arnauné-Pelloquin, Laetitia and Aroca, Angeles and Arroyo, Daniela S. and Arsov, Ivica and Artero, Rubén and Asaro, Dalia Maria Lucia and Aschner, Michael and Ashrafizadeh, Milad and Ashur-Fabian, Osnat and Atanasov, Atanas G. and Au, Alicia K. and Auberger, Patrick and Auner, Holger W. and Aurelian, Laure and Autelli, Riccardo and Avagliano, Laura and Ávalos, Yenniffer and Aveic, Sanja and Aveleira, Célia Alexandra and Avin-Wittenberg, Tamar and Aydin, Yucel and Ayton, Scott and Ayyadevara, Srinivas and Azzopardi, Maria and Baba, Misuzu and Backer, Jonathan M. and Backues, Steven K. and Bae, Dong Hun and Bae, Ok Nam and Bae, Soo Han and Baehrecke, Eric H. and Baek, Ahruem and Baek, Seung Hoon and Baek, Sung Hee and Bagetta, Giacinto and Bagniewska-Zadworna, Agnieszka and Bai, Hua and Bai, Jie and Bai, Xiyuan and Bai, Yidong and Bairagi, Nandadulal and Baksi, Shounak and Balbi, Teresa and Baldari, Cosima T. and Balduini, Walter and Ballabio, Andrea and Ballester, Maria and Balazadeh, Salma and Balzan, Rena and Bandopadhyay, Rina and Banerjee, Sreeparna and Banerjee, Sulagna and Bánréti, Ágnes and Bao, Yan and Baptista, Mauricio S. 
and Baracca, Alessandra and Barbati, Cristiana and Bargiela, Ariadna and Barilà, Daniela and Barlow, Peter G. and Barmada, Sami J. and Barreiro, Esther and Barreto, George E. and Bartek, Jiri and Bartel, Bonnie and Bartolome, Alberto and Barve, Gaurav R. and Basagoudanavar, Suresh H. and Bassham, Diane C. and Bast, Robert C. and Basu, Alakananda and Batoko, Henri and Batten, Isabella and Baulieu, Etienne E. and Baumgarner, Bradley L. and Bayry, Jagadeesh and Beale, Rupert and Beau, Isabelle and Beaumatin, Florian and Bechara, Luiz R.G. and Beck, George R. and Beers, Michael F. and Begun, Jakob and Behrends, Christian and Behrens, Georg M.N. and Bei, Roberto and Bejarano, Eloy and Bel, Shai and Behl, Christian and Belaid, Amine and Belgareh-Touzé, Naïma and Bellarosa, Cristina and Belleudi, Francesca and Belló Pérez, Melissa and Bello-Morales, Raquel and Beltran, Jackeline Soares De Oliveira and Beltran, Sebastián and Benbrook, Doris Mangiaracina and Bendorius, Mykolas and Benitez, Bruno A. and Benito-Cuesta, Irene and Bensalem, Julien and Berchtold, Martin W. and Berezowska, Sabina and Bergamaschi, Daniele and Bergami, Matteo and Bergmann, Andreas and Berliocchi, Laura and Berlioz-Torrent, Clarisse and Bernard, Amélie and Berthoux, Lionel and Besirli, Cagri G. and Besteiro, Sebastien and Betin, Virginie M. and Beyaert, Rudi and Bezbradica, Jelena S. and Bhaskar, Kiran and Bhatia-Kissova, Ingrid and Bhattacharya, Resham and Bhattacharya, Sujoy and Bhattacharyya, Shalmoli and Bhuiyan, Md Shenuarin and Bhutia, Sujit Kumar and Bi, Lanrong and Bi, Xiaolin and Biden, Trevor J. and Bijian, Krikor and Billes, Viktor A. and Binart, Nadine and Bincoletto, Claudia and Birgisdottir, Asa B. and Bjorkoy, Geir and Blanco, Gonzalo and Blas-Garcia, Ana and Blasiak, Janusz and Blomgran, Robert and Blomgren, Klas and Blum, Janice S. and Boada-Romero, Emilio and Boban, Mirta and Boesze-Battaglia, Kathleen and Boeuf, Philippe and Boland, Barry and Bomont, Pascale and Bonaldo, Paolo and Bonam, Srinivasa Reddy and Bonfili, Laura and Bonifacino, Juan S. and Boone, Brian A. and Bootman, Martin D. and Bordi, Matteo and Borner, Christoph and Bornhauser, Beat C. and Borthakur, Gautam and Bosch, Jürgen and Bose, Santanu and Botana, Luis M. and Botas, Juan and Boulanger, Chantal M. and Boulton, Michael E. and Bourdenx, Mathieu and Bourgeois, Benjamin and Bourke, Nollaig M. and Bousquet, Guilhem and Boya, Patricia and Bozhkov, Peter V. and Bozi, Luiz H.M. and Bozkurt, Tolga O. and Brackney, Doug E. and Brandts, Christian H. and Braun, Ralf J. and Braus, Gerhard H. and Bravo-Sagua, Roberto and Bravo-San Pedro, José M. and Brest, Patrick and Bringer, Marie Agnès and Briones-Herrera, Alfredo and Broaddus, V. Courtney and Brodersen, Peter and Brodsky, Jeffrey L. and Brody, Steven L. and Bronson, Paola G. and Bronstein, Jeff M. and Brown, Carolyn N. and Brown, Rhoderick E. and Brum, Patricia C. and Brumell, John H. and Brunetti-Pierri, Nicola and Bruno, Daniele and Bryson-Richardson, Robert J. and Bucci, Cecilia and Buchrieser, Carmen and Bueno, Marta and Buitrago-Molina, Laura Elisa and Buraschi, Simone and Buch, Shilpa and Buchan, J. Ross and Buckingham, Erin M. and Budak, Hikmet and Budini, Mauricio and Bultynck, Geert and Burada, Florin and Burgoyne, Joseph R. and Burón, M. Isabel and Bustos, Victor and Büttner, Sabrina and Butturini, Elena and Byrd, Aaron and Cabas, Isabel and Cabrera-Benitez, Sandra and Cadwell, Ken and Cai, Jingjing and Cai, Lu and Cai, Qian and Cairó, Montserrat and Calbet, Jose A. 
and Caldwell, Guy A. and Caldwell, Kim A. and Call, Jarrod A. and Calvani, Riccardo and Calvo, Ana C. and Calvo-Rubio Barrera, Miguel and Camara, Niels O.S. and Camonis, Jacques H. and Camougrand, Nadine and Campanella, Michelangelo and Campbell, Edward M. and Campbell-Valois, François Xavier and Campello, Silvia and Campesi, Ilaria and Campos, Juliane C. and Camuzard, Olivier and Cancino, Jorge and Candido De Almeida, Danilo and Canesi, Laura and Caniggia, Isabella and Canonico, Barbara and Cantí, Carles and Cao, Bin and Caraglia, Michele and Caramés, Beatriz and Carchman, Evie H. and Cardenal-Muñoz, Elena and Cardenas, Cesar and Cardenas, Luis and Cardoso, Sandra M. and Carew, Jennifer S. and Carle, Georges F. and Carleton, Gillian and Carloni, Silvia and Carmona-Gutierrez, Didac and Carneiro, Leticia A. and Carnevali, Oliana and Carosi, Julian M. and Carra, Serena and Carrier, Alice and Carrier, Lucie and Carroll, Bernadette and Carter, A. Brent and Carvalho, Andreia Neves and Casanova, Magali and Casas, Caty and Casas, Josefina and Cassioli, Chiara and Castillo, Eliseo F. and Castillo, Karen and Castillo-Lluva, Sonia and Castoldi, Francesca and Castori, Marco and Castro, Ariel F. and Castro-Caldas, Margarida and Castro-Hernandez, Javier and Castro-Obregon, Susana and Catz, Sergio D. and Cavadas, Claudia and Cavaliere, Federica and Cavallini, Gabriella and Cavinato, Maria and Cayuela, Maria L. and Cebollada Rica, Paula and Cecarini, Valentina and Cecconi, Francesco and Cechowska-Pasko, Marzanna and Cenci, Simone and Ceperuelo-Mallafré, Victòria and Cerqueira, João J. and Cerutti, Janete M. and Cervia, Davide and Cetintas, Vildan Bozok and Cetrullo, Silvia and Chae, Han Jung and Chagin, Andrei S. and Chai, Chee Yin and Chakrabarti, Gopal and Chakrabarti, Oishee and Chakraborty, Tapas and Chakraborty, Trinad and Chami, Mounia and Chamilos, Georgios and Chan, David W. and Chan, Edmond Y.W. and Chan, Edward D. and Chan, H. Y.Edwin and Chan, Helen H. and Chan, Hung and Chan, Matthew T.V. and Chan, Yau Sang and Chandra, Partha K. and Chang, Chih Peng and Chang, Chunmei and Chang, Hao Chun and Chang, Kai and Chao, Jie and Chapman, Tracey and Charlet-Berguerand, Nicolas and Chatterjee, Samrat and Chaube, Shail K. and Chaudhary, Anu and Chauhan, Santosh and Chaum, Edward and Checler, Frédéric and Cheetham, Michael E. and Chen, Chang Shi and Chen, Guang Chao and Chen, Jian Fu and Chen, Liam L. and Chen, Leilei and Chen, Lin and Chen, Mingliang and Chen, Mu Kuan and Chen, Ning and Chen, Quan and Chen, Ruey Hwa and Chen, Shi and Chen, Wei and Chen, Weiqiang and Chen, Xin Ming and Chen, Xiong Wen and Chen, Xu and Chen, Yan and Chen, Ye Guang and Chen, Yingyu and Chen, Yongqiang and Chen, Yu Jen and Chen, Yue Qin and Chen, Zhefan Stephen and Chen, Zhi and Chen, Zhi Hua and Chen, Zhijian J. and Chen, Zhixiang and Cheng, Hanhua and Cheng, Jun and Cheng, Shi Yuan and Cheng, Wei and Cheng, Xiaodong and Cheng, Xiu Tang and Cheng, Yiyun and Cheng, Zhiyong and Chen, Zhong and Cheong, Heesun and Cheong, Jit Kong and Chernyak, Boris V. and Cherry, Sara and Cheung, Chi Fai Randy and Cheung, Chun Hei Antonio and Cheung, King Ho and Chevet, Eric and Chi, Richard J. and Chiang, Alan Kwok Shing and Chiaradonna, Ferdinando and Chiarelli, Roberto and Chiariello, Mario and Chica, Nathalia and Chiocca, Susanna and Chiong, Mario and Chiou, Shih Hwa and Chiramel, Abhilash I. and Chiurchiù, Valerio and Cho, Dong Hyung and Choe, Seong Kyu and Choi, Augustine M.K. and Choi, Mary E. 
and Choudhury, Kamalika Roy and Chow, Norman S. and Chu, Charleen T. and Chua, Jason P. and Chua, John Jia En and Chung, Hyewon and Chung, Kin Pan and Chung, Seockhoon and Chung, So Hyang and Chung, Yuen Li and Cianfanelli, Valentina and Ciechomska, Iwona A. and Cifuentes, Mariana and Cinque, Laura and Cirak, Sebahattin and Cirone, Mara and Clague, Michael J. and Clarke, Robert and Clementi, Emilio and Coccia, Eliana M. and Codogno, Patrice and Cohen, Ehud and Cohen, Mickael M. and Colasanti, Tania and Colasuonno, Fiorella and Colbert, Robert A. and Colell, Anna and Čolić, Miodrag and Coll, Nuria S. and Collins, Mark O. and Colombo, María I. and Colón-Ramos, Daniel A. and Combaret, Lydie and Comincini, Sergio and Cominetti, Márcia R. and Consiglio, Antonella and Conte, Andrea and Conti, Fabrizio and Contu, Viorica Raluca and Cookson, Mark R. and Coombs, Kevin M. and Coppens, Isabelle and Corasaniti, Maria Tiziana and Corkery, Dale P. and Cordes, Nils and Cortese, Katia and Costa, Maria Do Carmo and Costantino, Sarah and Costelli, Paola and Coto-Montes, Ana and Crack, Peter J. and Crespo, Jose L. and Criollo, Alfredo and Crippa, Valeria and Cristofani, Riccardo and Csizmadia, Tamas and Cuadrado, Antonio and Cui, Bing and Cui, Jun and Cui, Yixian and Cui, Yong and Culetto, Emmanuel and Cumino, Andrea C. and Cybulsky, Andrey V. and Czaja, Mark J. and Czuczwar, Stanislaw J. and D’Adamo, Stefania and D’Amelio, Marcello and D’Arcangelo, Daniela and D’Lugos, Andrew C. and D’Orazi, Gabriella and Da Silva, James A. and Dafsari, Hormos Salimi and Dagda, Ruben K. and Dagdas, Yasin and Daglia, Maria and Dai, Xiaoxia and Dai, Yun and Dai, Yuyuan and Dal Col, Jessica and Dalhaimer, Paul and Dalla Valle, Luisa and Dallenga, Tobias and Dalmasso, Guillaume and Damme, Markus and Dando, Ilaria and Dantuma, Nico P. and Darling, April L. and Das, Hiranmoy and Dasarathy, Srinivasan and Dasari, Santosh K. and Dash, Srikanta and Daumke, Oliver and Dauphinee, Adrian N. and Davies, Jeffrey S. and Dávila, Valeria A. and Davis, Roger J. and Davis, Tanja and Dayalan Naidu, Sharadha and De Amicis, Francesca and De Bosscher, Karolien and De Felice, Francesca and De Franceschi, Lucia and De Leonibus, Chiara and De Mattos Barbosa, Mayara G. and De Meyer, Guido R.Y. and De Milito, Angelo and De Nunzio, Cosimo and De Palma, Clara and De Santi, Mauro and De Virgilio, Claudio and De Zio, Daniela and Debnath, Jayanta and Debosch, Brian J. and Decuypere, Jean Paul and Deehan, Mark A. and Deflorian, Gianluca and Degregori, James and Dehay, Benjamin and Del Rio, Gabriel and Delaney, Joe R. and Delbridge, Lea M.D. and Delorme-Axford, Elizabeth and Delpino, M. Victoria and Demarchi, Francesca and Dembitz, Vilma and Demers, Nicholas D. and Deng, Hongbin and Deng, Zhiqiang and Dengjel, Joern and Dent, Paul and Denton, Donna and Depamphilis, Melvin L. and Der, Channing J. and Deretic, Vojo and Descoteaux, Albert and Devis, Laura and Devkota, Sushil and Devuyst, Olivier and Dewson, Grant and Dharmasivam, Mahendiran and Dhiman, Rohan and Di Bernardo, Diego and Di Cristina, Manlio and Di Domenico, Fabio and Di Fazio, Pietro and Di Fonzo, Alessio and Di Guardo, Giovanni and Di Guglielmo, Gianni M. and Di Leo, Luca and Di Malta, Chiara and Di Nardo, Alessia and Di Rienzo, Martina and Di Sano, Federica and Diallinas, George and Diao, Jiajie and Diaz-Araya, Guillermo and Díaz-Laviada, Inés and Dickinson, Jared M. 
and Diederich, Marc and Dieudé, Mélanie and Dikic, Ivan and Ding, Shiping and Ding, Wen Xing and Dini, Luciana and Dinić, Jelena and Dinic, Miroslav and Dinkova-Kostova, Albena T. and Dionne, Marc S. and Distler, Jörg H.W. and Diwan, Abhinav and Dixon, Ian M.C. and Djavaheri-Mergny, Mojgan and Dobrinski, Ina and Dobrovinskaya, Oxana and Dobrowolski, Radek and Dobson, Renwick C.J. and Đokić, Jelena and Dokmeci Emre, Serap and Donadelli, Massimo and Dong, Bo and Dong, Xiaonan and Dong, Zhiwu and Dorn, Gerald W. and Dotsch, Volker and Dou, Huan and Dou, Juan and Dowaidar, Moataz and Dridi, Sami and Drucker, Liat and Du, Ailian and Du, Caigan and Du, Guangwei and Du, Hai Ning and Du, Li Lin and Du Toit, André and Duan, Shao Bin and Duan, Xiaoqiong and Duarte, Sónia P. and Dubrovska, Anna and Dunlop, Elaine A. and Dupont, Nicolas and Durán, Raúl V. and Dwarakanath, Bilikere S. and Dyshlovoy, Sergey A. and Ebrahimi-Fakhari, Darius and Eckhart, Leopold and Edelstein, Charles L. and Efferth, Thomas and Eftekharpour, Eftekhar and Eichinger, Ludwig and Eid, Nabil and Eisenberg, Tobias and Eissa, N. Tony and Eissa, Sanaa and Ejarque, Miriam and El Andaloussi, Abdeljabar and El-Hage, Nazira and El-Naggar, Shahenda and Eleuteri, Anna Maria and El-Shafey, Eman S. and Elgendy, Mohamed and Eliopoulos, Aristides G. and Elizalde, María M. and Elks, Philip M. and Elsasser, Hans Peter and Elsherbiny, Eslam S. and Emerling, Brooke M. and Emre, N. C.Tolga and Eng, Christina H. and Engedal, Nikolai and Engelbrecht, Anna Mart and Engelsen, Agnete S.T. and Enserink, Jorrit M. and Escalante, Ricardo and Esclatine, Audrey and Escobar-Henriques, Mafalda and Eskelinen, Eeva Liisa and Espert, Lucile and Eusebio, Makandjou Ola and Fabrias, Gemma and Fabrizi, Cinzia and Facchiano, Antonio and Facchiano, Francesco and Fadeel, Bengt and Fader, Claudio and Faesen, Alex C. and Fairlie, W. Douglas and Falcó, Alberto and Falkenburger, Bjorn H. and Fan, Daping and Fan, Jie and Fan, Yanbo and Fang, Evandro F. and Fang, Yanshan and Fang, Yognqi and Fanto, Manolis and Farfel-Becker, Tamar and Faure, Mathias and Fazeli, Gholamreza and Fedele, Anthony O. and Feldman, Arthur M. and Feng, Du and Feng, Jiachun and Feng, Lifeng and Feng, Yibin and Feng, Yuchen and Feng, Wei and Fenz Araujo, Thais and Ferguson, Thomas A. and Fernández, Álvaro F. and Fernandez-Checa, Jose C. and Fernández-Veledo, Sonia and Fernie, Alisdair R. and Ferrante, Anthony W. and Ferraresi, Alessandra and Ferrari, Merari F. and Ferreira, Julio C.B. and Ferro-Novick, Susan and Figueras, Antonio and Filadi, Riccardo and Filigheddu, Nicoletta and Filippi-Chiela, Eduardo and Filomeni, Giuseppe and Fimia, Gian Maria and Fineschi, Vittorio and Finetti, Francesca and Finkbeiner, Steven and Fisher, Edward A. and Fisher, Paul B. and Flamigni, Flavio and Fliesler, Steven J. and Flo, Trude H. and Florance, Ida and Florey, Oliver and Florio, Tullio and Fodor, Erika and Follo, Carlo and Fon, Edward A. and Forlino, Antonella and Fornai, Francesco and Fortini, Paola and Fracassi, Anna and Fraldi, Alessandro and Franco, Brunella and Franco, Rodrigo and Franconi, Flavia and Frankel, Lisa B. and Friedman, Scott L. and Fröhlich, Leopold F. and Frühbeck, Gema and Fuentes, Jose M. and Fujiki, Yukio and Fujita, Naonobu and Fujiwara, Yuuki and Fukuda, Mitsunori and Fulda, Simone and Furic, Luc and Furuya, Norihiko and Fusco, Carmela and Gack, Michaela U. and Gaffke, Lidia and Galadari, Sehamuddin and Galasso, Alessia and Galindo, Maria F. 
and Gallolu Kankanamalage, Sachith and Galluzzi, Lorenzo and Galy, Vincent and Gammoh, Noor and Gan, Boyi and Ganley, Ian G. and Gao, Feng and Gao, Hui and Gao, Minghui and Gao, Ping and Gao, Shou Jiang and Gao, Wentao and Gao, Xiaobo and Garcera, Ana and Garcia, Maria Noé and Garcia, Verónica E. and García-Del Portillo, Francisco and Garcia-Escudero, Vega and Garcia-Garcia, Aracely and Garcia-Macia, Marina and García-Moreno, Diana and Garcia-Ruiz, Carmen and García-Sanz, Patricia and Garg, Abhishek D. and Gargini, Ricardo and Garofalo, Tina and Garry, Robert F. and Gassen, Nils C. and Gatica, Damian and Ge, Liang and Ge, Wanzhong and Geiss-Friedlander, Ruth and Gelfi, Cecilia and Genschik, Pascal and Gentle, Ian E. and Gerbino, Valeria and Gerhardt, Christoph and Germain, Kyla and Germain, Marc and Gewirtz, David A. and Ghasemipour Afshar, Elham and Ghavami, Saeid and Ghigo, Alessandra and Ghosh, Manosij and Giamas, Georgios and Giampietri, Claudia and Giatromanolaki, Alexandra and Gibson, Gary E. and Gibson, Spencer B. and Ginet, Vanessa and Giniger, Edward and Giorgi, Carlotta and Girao, Henrique and Girardin, Stephen E. and Giridharan, Mridhula and Giuliano, Sandy and Giulivi, Cecilia and Giuriato, Sylvie and Giustiniani, Julien and Gluschko, Alexander and Goder, Veit and Goginashvili, Alexander and Golab, Jakub and Goldstone, David C. and Golebiewska, Anna and Gomes, Luciana R. and Gomez, Rodrigo and Gómez-Sánchez, Rubén and Gomez-Puerto, Maria Catalina and Gomez-Sintes, Raquel and Gong, Qingqiu and Goni, Felix M. and González-Gallego, Javier and Gonzalez-Hernandez, Tomas and Gonzalez-Polo, Rosa A. and Gonzalez-Reyes, Jose A. and González-Rodríguez, Patricia and Goping, Ing Swie and Gorbatyuk, Marina S. and Gorbunov, Nikolai V. and Görgülü, Kıvanç and Gorojod, Roxana M. and Gorski, Sharon M. and Goruppi, Sandro and Gotor, Cecilia and Gottlieb, Roberta A. and Gozes, Illana and Gozuacik, Devrim and Graef, Martin and Gräler, Markus H. and Granatiero, Veronica and Grasso, Daniel and Gray, Joshua P. and Green, Douglas R. and Greenhough, Alexander and Gregory, Stephen L. and Griffin, Edward F. and Grinstaff, Mark W. and Gros, Frederic and Grose, Charles and Gross, Angelina S. and Gruber, Florian and Grumati, Paolo and Grune, Tilman and Gu, Xueyan and Guan, Jun Lin and Guardia, Carlos M. and Guda, Kishore and Guerra, Flora and Guerri, Consuelo and Guha, Prasun and Guillén, Carlos and Gujar, Shashi and Gukovskaya, Anna and Gukovsky, Ilya and Gunst, Jan and Günther, Andreas and Guntur, Anyonya R. and Guo, Chuanyong and Guo, Chun and Guo, Hongqing and Guo, Lian Wang and Guo, Ming and Gupta, Pawan and Gupta, Shashi Kumar and Gupta, Swapnil and Gupta, Veer Bala and Gupta, Vivek and Gustafsson, Asa B. and Gutterman, David D. and H.B, Ranjitha and Haapasalo, Annakaisa and Haber, James E. and Hać, Aleksandra and Hadano, Shinji and Hafrén, Anders J. and Haidar, Mansour and Hall, Belinda S. and Halldén, Gunnel and Hamacher-Brady, Anne and Hamann, Andrea and Hamasaki, Maho and Han, Weidong and Hansen, Malene and Hanson, Phyllis I. . and Hao, Zijian and Harada, Masaru and Harhaji-Trajkovic, Ljubica and Hariharan, Nirmala and Haroon, Nigil and Harris, James and Hasegawa, Takafumi and Hasima Nagoor, Noor and Haspel, Jeffrey A. and Haucke, Volker and Hawkins, Wayne D. and Hay, Bruce A. and Haynes, Cole M. and Hayrabedyan, Soren B. and Hays, Thomas S. and He, Congcong and He, Qin and He, Rong Rong and He, You Wen and He, Yu Ying and Heakal, Yasser and Heberle, Alexander M. and Hejtmancik, J. 
Fielding and Helgason, Gudmundur Vignir and Henkel, Vanessa and Herb, Marc and Hergovich, Alexander and Herman-Antosiewicz, Anna and Hernández, Agustín and Hernandez, Carlos and Hernandez-Diaz, Sergio and Hernandez-Gea, Virginia and Herpin, Amaury and Herreros, Judit and Hervás, Javier H. and Hesselson, Daniel and Hetz, Claudio and Heussler, Volker T. and Higuchi, Yujiro and Hilfiker, Sabine and Hill, Joseph A. and Hlavacek, William S. and Ho, Emmanuel A. and Ho, Idy H.T. and Ho, Philip Wing Lok and Ho, Shu Leong and Ho, Wan Yun and Hobbs, G. Aaron and Hochstrasser, Mark and Hoet, Peter H.M. and Hofius, Daniel and Hofman, Paul and Höhn, Annika and Holmberg, Carina I. and Hombrebueno, Jose R. and Yi-Ren Hong, Chang Won Hong and Hooper, Lora V. and Hoppe, Thorsten and Horos, Rastislav and Hoshida, Yujin and Hsin, I. Lun and Hsu, Hsin Yun and Hu, Bing and Hu, Dong and Hu, Li Fang and Hu, Ming Chang and Hu, Ronggui and Hu, Wei and Hu, Yu Chen and Hu, Zhuo Wei and Hua, Fang and Hua, Jinlian and Hua, Yingqi and Huan, Chongmin and Huang, Canhua and Huang, Chuanshu and Huang, Chuanxin and Huang, Chunling and Huang, Haishan and Huang, Kun and Huang, Michael L.H. and Huang, Rui and Huang, Shan and Huang, Tianzhi and Huang, Xing and Huang, Yuxiang Jack and Huber, Tobias B. and Hubert, Virginie and Hubner, Christian A. and Hughes, Stephanie M. and Hughes, William E. and Humbert, Magali and Hummer, Gerhard and Hurley, James H. and Hussain, Sabah and Hussain, Salik and Hussey, Patrick J. and Hutabarat, Martina and Hwang, Hui Yun and Hwang, Seungmin and Ieni, Antonio and Ikeda, Fumiyo and Imagawa, Yusuke and Imai, Yuzuru and Imbriano, Carol and Imoto, Masaya and Inman, Denise M. and Inoki, Ken and Iovanna, Juan and Iozzo, Renato V. and Ippolito, Giuseppe and Irazoqui, Javier E. and Iribarren, Pablo and Ishaq, Mohd and Ishikawa, Makoto and Ishimwe, Nestor and Isidoro, Ciro and Ismail, Nahed and Issazadeh-Navikas, Shohreh and Itakura, Eisuke and Ito, Daisuke and Ivankovic, Davor and Ivanova, Saška and Iyer, Anand Krishnan V. and Izquierdo, José M. and Izumi, Masanori and Jäättelä, Marja and Jabir, Majid Sakhi and Jackson, William T. and Jacobo-Herrera, Nadia and Jacomin, Anne Claire and Jacquin, Elise and Jadiya, Pooja and Jaeschke, Hartmut and Jagannath, Chinnaswamy and Jakobi, Arjen J. and Jakobsson, Johan and Janji, Bassam and Jansen-Dürr, Pidder and Jansson, Patric J. and Jantsch, Jonathan and Januszewski, Sławomir and Jassey, Alagie and Jean, Steve and Jeltsch-David, Hélène and Jendelova, Pavla and Jenny, Andreas and Jensen, Thomas E. and Jessen, Niels and Jewell, Jenna L. and Ji, Jing and Jia, Lijun and Jia, Rui and Jiang, Liwen and Jiang, Qing and Jiang, Richeng and Jiang, Teng and Jiang, Xuejun and Jiang, Yu and Jimenez-Sanchez, Maria and Jin, Eun Jung and Jin, Fengyan and Jin, Hongchuan and Jin, Li and Jin, Luqi and Jin, Meiyan and Jin, Si and Jo, Eun Kyeong and Joffre, Carine and Johansen, Terje and Johnson, Gail V.W. and Johnston, Simon A. and Jokitalo, Eija and Jolly, Mohit Kumar and Joosten, Leo A.B. and Jordan, Joaquin and Joseph, Bertrand and Ju, Dianwen and Ju, Jeong Sun and Ju, Jingfang and Juárez, Esmeralda and Judith, Delphine and Juhász, Gábor and Jun, Youngsoo and Jung, Chang Hwa and Jung, Sung Chul and Jung, Yong Keun and Jungbluth, Heinz and Jungverdorben, Johannes and Just, Steffen and Kaarniranta, Kai and Kaasik, Allen and Kabuta, Tomohiro and Kaganovich, Daniel and Kahana, Alon and Kain, Renate and Kajimura, Shinjo and Kalamvoki, Maria and Kalia, Manjula and Kalinowski, Danuta S. 
and Kaludercic, Nina and Kalvari, Ioanna and Kaminska, Joanna and Kaminskyy, Vitaliy O. and Kanamori, Hiromitsu and Kanasaki, Keizo and Kang, Chanhee and Kang, Rui and Kang, Sang Sun and Kaniyappan, Senthilvelrajan and Kanki, Tomotake and Kanneganti, Thirumala Devi and Kanthasamy, Anumantha G. and Kanthasamy, Arthi and Kantorow, Marc and Kapuy, Orsolya and Karamouzis, Michalis V. and Karim, Md Razaul and Karmakar, Parimal and Katare, Rajesh G. and Kato, Masaru and Kaufmann, Stefan H.E. and Kauppinen, Anu and Kaushal, Gur P. and Kaushik, Susmita and Kawasaki, Kiyoshi and Kazan, Kemal and Ke, Po Yuan and Keating, Damien J. and Keber, Ursula and Kehrl, John H. and Keller, Kate E. and Keller, Christian W. and Kemper, Jongsook Kim and Kenific, Candia M. and Kepp, Oliver and Kermorgant, Stephanie and Kern, Andreas and Ketteler, Robin and Keulers, Tom G. and Khalfin, Boris and Khalil, Hany and Khambu, Bilon and Khan, Shahid Y. and Khandelwal, Vinoth Kumar Megraj and Khandia, Rekha and Kho, Widuri and Khobrekar, Noopur V. and Khuansuwan, Sataree and Khundadze, Mukhran and Killackey, Samuel A. and Kim, Dasol and Kim, Deok Ryong and Kim, Do Hyung and Kim, Dong Eun and Kim, Eun Young and Kim, Eun Kyoung and Kim, Hak Rim and Kim, Hee Sik and Hyung-Ryong Kim, Unknown and Kim, Jeong Hun and Kim, Jin Kyung and Kim, Jin Hoi and Kim, Joungmok and Kim, Ju Hwan and Kim, Keun Il and Kim, Peter K. and Kim, Seong Jun and Kimball, Scot R. and Kimchi, Adi and Kimmelman, Alec C. and Kimura, Tomonori and King, Matthew A. and Kinghorn, Kerri J. and Kinsey, Conan G. and Kirkin, Vladimir and Kirshenbaum, Lorrie A. and Kiselev, Sergey L. and Kishi, Shuji and Kitamoto, Katsuhiko and Kitaoka, Yasushi and Kitazato, Kaio and Kitsis, Richard N. and Kittler, Josef T. and Kjaerulff, Ole and Klein, Peter S. and Klopstock, Thomas and Klucken, Jochen and Knævelsrud, Helene and Knorr, Roland L. and Ko, Ben C.B. and Ko, Fred and Ko, Jiunn Liang and Kobayashi, Hotaka and Kobayashi, Satoru and Koch, Ina and Koch, Jan C. and Koenig, Ulrich and Kögel, Donat and Koh, Young Ho and Koike, Masato and Kohlwein, Sepp D. and Kocaturk, Nur M. and Komatsu, Masaaki and König, Jeannette and Kono, Toru and Kopp, Benjamin T. and Korcsmaros, Tamas and Korkmaz, Gözde and Korolchuk, Viktor I. and Korsnes, Mónica Suárez and Koskela, Ali and Kota, Janaiah and Kotake, Yaichiro and Kotler, Monica L. and Kou, Yanjun and Koukourakis, Michael I. and Koustas, Evangelos and Kovacs, Attila L. and Kovács, Tibor and Koya, Daisuke and Kozako, Tomohiro and Kraft, Claudine and Krainc, Dimitri and Krämer, Helmut and Krasnodembskaya, Anna D. and Kretz-Remy, Carole and Kroemer, Guido and Ktistakis, Nicholas T. and Kuchitsu, Kazuyuki and Kuenen, Sabine and Kuerschner, Lars and Kukar, Thomas and Kumar, Ajay and Kumar, Ashok and Kumar, Deepak and Kumar, Dhiraj and Kumar, Sharad and Kume, Shinji and Kumsta, Caroline and Kundu, Chanakya N. and Kundu, Mondira and Kunnumakkara, Ajaikumar B. and Kurgan, Lukasz and Kutateladze, Tatiana G. and Kutlu, Ozlem and Kwak, Seong Ae and Kwon, Ho Jeong and Kwon, Taeg Kyu and Kwon, Yong Tae and Kyrmizi, Irene and La Spada, Albert and Labonté, Patrick and Ladoire, Sylvain and Laface, Ilaria and Lafont, Frank and Lagace, Diane C. and Lahiri, Vikramjit and Lai, Zhibing and Laird, Angela S. and Lakkaraju, Aparna and Lamark, Trond and Lan, Sheng Hui and Landajuela, Ane and Lane, Darius J.R. and Lane, Jon D. and Lang, Charles H. 
and Lange, Carsten and Langel, Ülo and Langer, Rupert and Lapaquette, Pierre and Laporte, Jocelyn and Larusso, Nicholas F. and Lastres-Becker, Isabel and Lau, Wilson Chun Yu and Laurie, Gordon W. and Lavandero, Sergio and Law, Betty Yuen Kwan and Law, Helen Ka Wai and Layfield, Rob and Le, Weidong and Le Stunff, Herve and Leary, Alexandre Y. and Lebrun, Jean Jacques and Leck, Lionel Y.W. and Leduc-Gaudet, Jean Philippe and Lee, Changwook and Lee, Chung Pei and Lee, Da Hye and Lee, Edward B. and Lee, Erinna F. and Lee, Gyun Min and Lee, He Jin and Lee, Heung Kyu and Lee, Jae Man and Lee, Jason S. and Lee, Jin A. and Lee, Joo Yong and Lee, Jun Hee and Lee, Michael and Lee, Min Goo and Lee, Min Jae and Lee, Myung Shik and Lee, Sang Yoon and Lee, Seung Jae and Lee, Stella Y. and Lee, Sung Bae and Lee, Won Hee and Lee, Ying Ray and Lee, Yong Ho and Lee, Youngil and Lefebvre, Christophe and Legouis, Renaud and Lei, Yu L. and Lei, Yuchen and Leikin, Sergey and Leitinger, Gerd and Lemus, Leticia and Leng, Shuilong and Lenoir, Olivia and Lenz, Guido and Lenz, Heinz Josef and Lenzi, Paola and León, Yolanda and Leopoldino, Andréia M. and Leschczyk, Christoph and Leskelä, Stina and Letellier, Elisabeth and Leung, Chi Ting and Leung, Po Sing and Leventhal, Jeremy S. and Levine, Beth and Lewis, Patrick A. and Ley, Klaus and Li, Bin and Li, Da Qiang and Li, Jianming and Li, Jing and Li, Jiong and Li, Ke and Li, Liwu and Li, Mei and Li, Min and Li, Min and Li, Ming and Li, Mingchuan and Li, Pin Lan and Li, Ming Qing and Li, Qing and Li, Sheng and Li, Tiangang and Li, Wei and Li, Wenming and Li, Xue and Li, Yi Ping and Li, Yuan and Li, Zhiqiang and Li, Zhiyong and Li, Zhiyuan and Lian, Jiqin and Liang, Chengyu and Liang, Qiangrong and Liang, Weicheng and Liang, Yongheng and Liang, Yong Tian and Liao, Guanghong and Liao, Lujian and Liao, Mingzhi and Liao, Yung Feng and Librizzi, Mariangela and Lie, Pearl P.Y. and Lilly, Mary A. and Lim, Hyunjung J. and Lima, Thania R.R. and Limana, Federica and Lin, Chao and Lin, Chih Wen and Lin, Dar Shong and Lin, Fu Cheng and Lin, Jiandie D. and Lin, Kurt M. and Lin, Kwang Huei and Lin, Liang Tzung and Lin, Pei Hui and Lin, Qiong and Lin, Shaofeng and Lin, Su Ju and Lin, Wenyu and Lin, Xueying and Lin, Yao Xin and Lin, Yee Shin and Linden, Rafael and Lindner, Paula and Ling, Shuo Chien and Lingor, Paul and Linnemann, Amelia K. and Liou, Yih Cherng and Lipinski, Marta M. and Lipovšek, Saška and Lira, Vitor A. and Lisiak, Natalia and Liton, Paloma B. and Liu, Chao and Liu, Ching Hsuan and Liu, Chun Feng and Liu, Cui Hua and Liu, Fang and Liu, Hao and Liu, Hsiao Sheng and Liu, Hua Feng and Liu, Huifang and Liu, Jia and Liu, Jing and Liu, Julia and Liu, Leyuan and Liu, Longhua and Liu, Meilian and Liu, Qin and Liu, Wei and Liu, Wende and Liu, Xiao Hong and Liu, Xiaodong and Liu, Xingguo and Liu, Xu and Liu, Xuedong and Liu, Yanfen and Liu, Yang and Liu, Yang and Liu, Yueyang and Liu, Yule and Livingston, J. Andrew and Lizard, Gerard and Lizcano, Jose M. and Ljubojevic-Holzer, Senka and Lleonart, Matilde E. and Llobet-Navàs, David and Llorente, Alicia and Lo, Chih Hung and Lobato-Márquez, Damián and Long, Qi and Long, Yun Chau and Loos, Ben and Loos, Julia A. and López, Manuela G. and López-Doménech, Guillermo and López-Guerrero, José Antonio and López-Jiménez, Ana T. and López-Pérez, Óscar and López-Valero, Israel and Lorenowicz, Magdalena J. and Lorente, Mar and Lorincz, Peter and Lossi, Laura and Lotersztajn, Sophie and Lovat, Penny E. and Lovell, Jonathan F. 
and Lovy, Alenka and Lőw, Péter and Lu, Guang and Lu, Haocheng and Lu, Jia Hong and Lu, Jin Jian and Lu, Mengji and Lu, Shuyan and Luciani, Alessandro and Lucocq, John M. and Ludovico, Paula and Luftig, Micah A. and Luhr, Morten and Luis-Ravelo, Diego and Lum, Julian J. and Luna-Dulcey, Liany and Lund, Anders H. and Lund, Viktor K. and Lünemann, Jan D. and Lüningschrör, Patrick and Luo, Honglin and Luo, Rongcan and Luo, Shouqing and Luo, Zhi and Luparello, Claudio and Lüscher, Bernhard and Luu, Luan and Lyakhovich, Alex and Lyamzaev, Konstantin G. and Lystad, Alf Håkon and Lytvynchuk, Lyubomyr and Ma, Alvin C. and Ma, Changle and Ma, Mengxiao and Ma, Ning Fang and Ma, Quan Hong and Ma, Xinliang and Ma, Yueyun and Ma, Zhenyi and Macdougald, Ormond A. and Macian, Fernando and Macintosh, Gustavo C. and Mackeigan, Jeffrey P. and Macleod, Kay F. and Maday, Sandra and Madeo, Frank and Madesh, Muniswamy and Madl, Tobias and Madrigal-Matute, Julio and Maeda, Akiko and Maejima, Yasuhiro and Magarinos, Marta and Mahavadi, Poornima and Maiani, Emiliano and Maiese, Kenneth and Maiti, Panchanan and Maiuri, Maria Chiara and Majello, Barbara and Major, Michael B. and Makareeva, Elena and Malik, Fayaz and Mallilankaraman, Karthik and Malorni, Walter and Maloyan, Alina and Mammadova, Najiba and Man, Gene Chi Wai and Manai, Federico and Mancias, Joseph D. and Mandelkow, Eva Maria and Mandell, Michael A. and Manfredi, Angelo A. and Manjili, Masoud H. and Manjithaya, Ravi and Manque, Patricio and Manshian, Bella B. and Manzano, Raquel and Manzoni, Claudia and Mao, Kai and Marchese, Cinzia and Marchetti, Sandrine and Marconi, Anna Maria and Marcucci, Fabrizio and Mardente, Stefania and Mareninova, Olga A. and Margeta, Marta and Mari, Muriel and Marinelli, Sara and Marinelli, Oliviero and Mariño, Guillermo and Mariotto, Sofia and Marshall, Richard S. and Marten, Mark R. and Martens, Sascha and Martin, Alexandre P.J. and Martin, Katie R. and Martin, Sara and Martin, Shaun and Martín-Segura, Adrián and Martín-Acebes, Miguel A. and Martin-Burriel, Inmaculada and Martin-Rincon, Marcos and Martin-Sanz, Paloma and Martina, José A. and Martinet, Wim and Martinez, Aitor and Martinez, Ana and Martinez, Jennifer and Martinez Velazquez, Moises and Martinez-Lopez, Nuria and Martinez-Vicente, Marta and Martins, Daniel O. and Martins, Joilson O. and Martins, Waleska K. and Martins-Marques, Tania and Marzetti, Emanuele and Masaldan, Shashank and Masclaux-Daubresse, Celine and Mashek, Douglas G. and Massa, Valentina and Massieu, Lourdes and Masson, Glenn R. and Masuelli, Laura and Masyuk, Anatoliy I. and Masyuk, Tetyana V. and Matarrese, Paola and Matheu, Ander and Matoba, Satoaki and Matsuzaki, Sachiko and Mattar, Pamela and Matte, Alessandro and Mattoscio, Domenico and Mauriz, José L. and Mauthe, Mario and Mauvezin, Caroline and Maverakis, Emanual and Maycotte, Paola and Mayer, Johanna and Mazzoccoli, Gianluigi and Mazzoni, Cristina and Mazzulli, Joseph R. and Mccarty, Nami and Mcdonald, Christine and Mcgill, Mitchell R. and Mckenna, Sharon L. and Mclaughlin, Beth Ann and Mcloughlin, Fionn and Mcniven, Mark A. and Mcwilliams, Thomas G. and Mechta-Grigoriou, Fatima and Medeiros, Tania Catarina and Medina, Diego L. and Megeney, Lynn A. and Megyeri, Klara and Mehrpour, Maryam and Mehta, Jawahar L. and Meijer, Alfred J. and Meijer, Annemarie H. and Mejlvang, Jakob and Meléndez, Alicia and Melk, Annette and Memisoglu, Gonen and Mendes, Alexandrina F. 
and Meng, Delong and Meng, Fei and Meng, Tian and Menna-Barreto, Rubem and Menon, Manoj B. and Mercer, Carol and Mercier, Anne E. and Mergny, Jean Louis and Merighi, Adalberto and Merkley, Seth D. and Merla, Giuseppe and Meske, Volker and Mestre, Ana Cecilia and Metur, Shree Padma and Meyer, Christian and Meyer, Hemmo and Mi, Wenyi and Mialet-Perez, Jeanne and Miao, Junying and Micale, Lucia and Miki, Yasuo and Milan, Enrico and Milczarek, Małgorzata and Miller, Dana L. and Miller, Samuel I. and Miller, Silke and Millward, Steven W. and Milosevic, Ira and Minina, Elena A. and Mirzaei, Hamed and Mirzaei, Hamid Reza and Mirzaei, Mehdi and Mishra, Amit and Mishra, Nandita and Mishra, Paras Kumar and Misirkic Marjanovic, Maja and Misasi, Roberta and Misra, Amit and Misso, Gabriella and Mitchell, Claire and Mitou, Geraldine and Miura, Tetsuji and Miyamoto, Shigeki and Miyazaki, Makoto and Miyazaki, Mitsunori and Miyazaki, Taiga and Miyazawa, Keisuke and Mizushima, Noboru and Mogensen, Trine H. and Mograbi, Baharia and Mohammadinejad, Reza and Mohamud, Yasir and Mohanty, Abhishek and Mohapatra, Sipra and Möhlmann, Torsten and Mohmmed, Asif and Moles, Anna and Moley, Kelle H. and Molinari, Maurizio and Mollace, Vincenzo and Møller, Andreas Buch and Mollereau, Bertrand and Mollinedo, Faustino and Montagna, Costanza and Monteiro, Mervyn J. and Montella, Andrea and Montes, L. Ruth and Montico, Barbara and Mony, Vinod K. and Monzio Compagnoni, Giacomo and Moore, Michael N. and Moosavi, Mohammad A. and Mora, Ana L. and Mora, Marina and Morales-Alamo, David and Moratalla, Rosario and Moreira, Paula I. and Morelli, Elena and Moreno, Sandra and Moreno-Blas, Daniel and Moresi, Viviana and Morga, Benjamin and Morgan, Alwena H. and Morin, Fabrice and Morishita, Hideaki and Moritz, Orson L. and Moriyama, Mariko and Moriyasu, Yuji and Morleo, Manuela and Morselli, Eugenia and Moruno-Manchon, Jose F. and Moscat, Jorge and Mostowy, Serge and Motori, Elisa and Moura, Andrea Felinto and Moustaid-Moussa, Naima and Mrakovcic, Maria and Muciño-Hernández, Gabriel and Mukherjee, Anupam and Mukhopadhyay, Subhadip and Mulcahy Levy, Jean M. and Mulero, Victoriano and Muller, Sylviane and Münch, Christian and Munjal, Ashok and Munoz-Canoves, Pura and Muñoz-Galdeano, Teresa and Münz, Christian and Murakawa, Tomokazu and Muratori, Claudia and Murphy, Brona M. and Murphy, J. Patrick and Murthy, Aditya and Myöhänen, Timo T. and Mysorekar, Indira U. and Mytych, Jennifer and Nabavi, Seyed Mohammad and Nabissi, Massimo and Nagy, Péter and Nah, Jihoon and Nahimana, Aimable and Nakagawa, Ichiro and Nakamura, Ken and Nakatogawa, Hitoshi and Nandi, Shyam S. and Nanjundan, Meera and Nanni, Monica and Napolitano, Gennaro and Nardacci, Roberta and Narita, Masashi and Nassif, Melissa and Nathan, Ilana and Natsumeda, Manabu and Naude, Ryno J. and Naumann, Christin and Naveiras, Olaia and Navid, Fatemeh and Nawrocki, Steffan T. and Nazarko, Taras Y. and Nazio, Francesca and Negoita, Florentina and Neill, Thomas and Neisch, Amanda L. and Neri, Luca M. and Netea, Mihai G. and Neubert, Patrick and Neufeld, Thomas P. and Neumann, Dietbert and Neutzner, Albert and Newton, Phillip T. and Ney, Paul A. and Nezis, Ioannis P. and Ng, Charlene C.W. and Ng, Tzi Bun and Nguyen, Hang T.T. and Nguyen, Long T. and Ni, Hong Min and Ní Cheallaigh, Clíona and Ni, Zhenhong and Nicolao, M. 
Celeste and Nicoli, Francesco and Nieto-Diaz, Manuel and Nilsson, Per and Ning, Shunbin and Niranjan, Rituraj and Nishimune, Hiroshi and Niso-Santano, Mireia and Nixon, Ralph A. and Nobili, Annalisa and Nobrega, Clevio and Noda, Takeshi and Nogueira-Recalde, Uxía and Nolan, Trevor M. and Nombela, Ivan and Novak, Ivana and Novoa, Beatriz and Nozawa, Takashi and Nukina, Nobuyuki and Nussbaum-Krammer, Carmen and Nylandsted, Jesper and O’Donovan, Tracey R. and O’Leary, Seónadh M. and O’Rourke, Eyleen J. and O’Sullivan, Mary P. and O’Sullivan, Timothy E. and Oddo, Salvatore and Oehme, Ina and Ogawa, Michinaga and Ogier-Denis, Eric and Ogmundsdottir, Margret H. and Ogretmen, Besim and Oh, Goo Taeg and Oh, Seon Hee and Oh, Young J. and Ohama, Takashi and Ohashi, Yohei and Ohmuraya, Masaki and Oikonomou, Vasileios and Ojha, Rani and Okamoto, Koji and Okazawa, Hitoshi and Oku, Masahide and Oliván, Sara and Oliveira, Jorge M.A. and Ollmann, Michael and Olzmann, James A. and Omari, Shakib and Omary, M. Bishr and Önal, Gizem and Ondrej, Martin and Ong, Sang Bing and Ong, Sang Ging and Onnis, Anna and Orellana, Juan A. and Orellana-Muñoz, Sara and Ortega-Villaizan, Maria Del Mar and Ortiz-Gonzalez, Xilma R. and Ortona, Elena and Osiewacz, Heinz D. and Osman, Abdel Hamid K. and Osta, Rosario and Otegui, Marisa S. and Otsu, Kinya and Ott, Christiane and Ottobrini, Luisa and Ou, Jing Hsiung James and Outeiro, Tiago F. and Oynebraten, Inger and Ozturk, Melek and Pagès, Gilles and Pahari, Susanta and Pajares, Marta and Pajvani, Utpal B. and Pal, Rituraj and Paladino, Simona and Pallet, Nicolas and Palmieri, Michela and Palmisano, Giuseppe and Palumbo, Camilla and Pampaloni, Francesco and Pan, Lifeng and Pan, Qingjun and Pan, Wenliang and Pan, Xin and Panasyuk, Ganna and Pandey, Rahul and Pandey, Udai B. and Pandya, Vrajesh and Paneni, Francesco and Pang, Shirley Y. and Panzarini, Elisa and Papademetrio, Daniela L. and Papaleo, Elena and Papinski, Daniel and Papp, Diana and Park, Eun Chan and Park, Hwan Tae and Park, Ji Man and Park, Jong In and Park, Joon Tae and Park, Junsoo and Park, Sang Chul and Park, Sang Youel and Parola, Abraham H. and Parys, Jan B. and Pasquier, Adrien and Pasquier, Benoit and Passos, João F. and Pastore, Nunzia and Patel, Hemal H. and Patschan, Daniel and Pattingre, Sophie and Pedraza-Alva, Gustavo and Pedraza-Chaverri, Jose and Pedrozo, Zully and Pei, Gang and Pei, Jianming and Peled-Zehavi, Hadas and Pellegrini, Joaquín M. and Pelletier, Joffrey and Peñalva, Miguel A. and Peng, Di and Peng, Ying and Penna, Fabio and Pennuto, Maria and Pentimalli, Francesca and Pereira, Cláudia M.F. and Pereira, Gustavo J.S. and Pereira, Lilian C. and Pereira De Almeida, Luis and Perera, Nirma D. and Pérez-Lara, Ángel and Perez-Oliva, Ana B. and Pérez-Pérez, María Esther and Periyasamy, Palsamy and Perl, Andras and Perrotta, Cristiana and Perrotta, Ida and Pestell, Richard G. and Petersen, Morten and Petrache, Irina and Petrovski, Goran and Pfirrmann, Thorsten and Pfister, Astrid S. and Philips, Jennifer A. and Pi, Huifeng and Picca, Anna and Pickrell, Alicia M. and Picot, Sandy and Pierantoni, Giovanna M. and Pierdominici, Marina and Pierre, Philippe and Pierrefite-Carle, Valérie and Pierzynowska, Karolina and Pietrocola, Federico and Pietruczuk, Miroslawa and Pignata, Claudio and Pimentel-Muiños, Felipe X. and Pinar, Mario and Pinheiro, Roberta O. and Pinkas-Kramarski, Ronit and Pinton, Paolo and Pircs, Karolina and Piya, Sujan and Pizzo, Paola and Plantinga, Theo S. and Platta, Harald W. 
and Plaza-Zabala, Ainhoa and Plomann, Markus and Plotnikov, Egor Y. and Plun-Favreau, Helene and Pluta, Ryszard and Pocock, Roger and Pöggeler, Stefanie and Pohl, Christian and Poirot, Marc and Poletti, Angelo and Ponpuak, Marisa and Popelka, Hana and Popova, Blagovesta and Porta, Helena and Porte Alcon, Soledad and Portilla-Fernandez, Eliana and Post, Martin and Potts, Malia B. and Poulton, Joanna and Powers, Ted and Prahlad, Veena and Prajsnar, Tomasz K. and Praticò, Domenico and Prencipe, Rosaria and Priault, Muriel and Proikas-Cezanne, Tassula and Promponas, Vasilis J. and Proud, Christopher G. and Puertollano, Rosa and Puglielli, Luigi and Pulinilkunnil, Thomas and Puri, Deepika and Puri, Rajat and Puyal, Julien and Qi, Xiaopeng and Qi, Yongmei and Qian, Wenbin and Qiang, Lei and Qiu, Yu and Quadrilatero, Joe and Quarleri, Jorge and Raben, Nina and Rabinowich, Hannah and Ragona, Debora and Ragusa, Michael J. and Rahimi, Nader and Rahmati, Marveh and Raia, Valeria and Raimundo, Nuno and Rajasekaran, Namakkal Soorappan and Ramachandra Rao, Sriganesh and Rami, Abdelhaq and Ramírez-Pardo, Ignacio and Ramsden, David B. and Randow, Felix and Rangarajan, Pundi N. and Ranieri, Danilo and Rao, Hai and Rao, Lang and Rao, Rekha and Rathore, Sumit and Ratnayaka, J. Arjuna and Ratovitski, Edward A. and Ravanan, Palaniyandi and Ravegnini, Gloria and Ray, Swapan K. and Razani, Babak and Rebecca, Vito and Reggiori, Fulvio and Régnier-Vigouroux, Anne and Reichert, Andreas S. and Reigada, David and Reiling, Jan H. and Rein, Theo and Reipert, Siegfried and Rekha, Rokeya Sultana and Ren, Hongmei and Ren, Jun and Ren, Weichao and Renault, Tristan and Renga, Giorgia and Reue, Karen and Rewitz, Kim and Ribeiro De Andrade Ramos, Bruna and Riazuddin, S. Amer and Ribeiro-Rodrigues, Teresa M. and Ricci, Jean Ehrland and Ricci, Romeo and Riccio, Victoria and Richardson, Des R. and Rikihisa, Yasuko and Risbud, Makarand V. and Risueño, Ruth M. and Ritis, Konstantinos and Rizza, Salvatore and Rizzuto, Rosario and Roberts, Helen C. and Roberts, Luke D. and Robinson, Katherine J. and Roccheri, Maria Carmela and Rocchi, Stephane and Rodney, George G. and Rodrigues, Tiago and Rodrigues Silva, Vagner Ramon and Rodriguez, Amaia and Rodriguez-Barrueco, Ruth and Rodriguez-Henche, Nieves and Rodriguez-Rocha, Humberto and Roelofs, Jeroen and Rogers, Robert S. and Rogov, Vladimir V. and Rojo, Ana I. and Rolka, Krzysztof and Romanello, Vanina and Romani, Luigina and Romano, Alessandra and Romano, Patricia S. and Romeo-Guitart, David and Romero, Luis C. and Romero, Montserrat and Roney, Joseph C. and Rongo, Christopher and Roperto, Sante and Rosenfeldt, Mathias T. and Rosenstiel, Philip and Rosenwald, Anne G. and Roth, Kevin A. and Roth, Lynn and Roth, Steven and Rouschop, Kasper M.A. and Roussel, Benoit D. and Roux, Sophie and Rovere-Querini, Patrizia and Roy, Ajit and Rozieres, Aurore and Ruano, Diego and Rubinsztein, David C. and Rubtsova, Maria P. and Ruckdeschel, Klaus and Ruckenstuhl, Christoph and Rudolf, Emil and Rudolf, Rüdiger and Ruggieri, Alessandra and Ruparelia, Avnika Ashok and Rusmini, Paola and Russell, Ryan R. and Russo, Gian Luigi and Russo, Maria and Russo, Rossella and Ryabaya, Oxana O. and Ryan, Kevin M. and Ryu, Kwon Yul and Sabater-Arcis, Maria and Sachdev, Ulka and Sacher, Michael and Sachse, Carsten and Sadhu, Abhishek and Sadoshima, Junichi and Safren, Nathaniel and Saftig, Paul and Sagona, Antonia P. 
and Sahay, Gaurav and Sahebkar, Amirhossein and Sahin, Mustafa and Sahin, Ozgur and Sahni, Sumit and Saito, Nayuta and Saito, Shigeru and Saito, Tsunenori and Sakai, Ryohei and Sakai, Yasuyoshi and Sakamaki, Jun Ichi and Saksela, Kalle and Salazar, Gloria and Salazar-Degracia, Anna and Salekdeh, Ghasem H. and Saluja, Ashok K. and Sampaio-Marques, Belém and Sanchez, Maria Cecilia and Sanchez-Alcazar, Jose A. and Sanchez-Vera, Victoria and Sancho-Shimizu, Vanessa and Sanderson, J. Thomas and Sandri, Marco and Santaguida, Stefano and Santambrogio, Laura and Santana, Magda M. and Santoni, Giorgio and Sanz, Alberto and Sanz, Pascual and Saran, Shweta and Sardiello, Marco and Sargeant, Timothy J. and Sarin, Apurva and Sarkar, Chinmoy and Sarkar, Sovan and Sarrias, Maria Rosa and Sarkar, Surajit and Sarmah, Dipanka Tanu and Sarparanta, Jaakko and Sathyanarayan, Aishwarya and Sathyanarayanan, Ranganayaki and Scaglione, K. Matthew and Scatozza, Francesca and Schaefer, Liliana and Schafer, Zachary T. and Schaible, Ulrich E. and Schapira, Anthony H.V. and Scharl, Michael and Schatzl, Hermann M. and Schein, Catherine H. and Scheper, Wiep and Scheuring, David and Schiaffino, Maria Vittoria and Schiappacassi, Monica and Schindl, Rainer and Schlattner, Uwe and Schmidt, Oliver and Schmitt, Roland and Schmidt, Stephen D. and Schmitz, Ingo and Schmukler, Eran and Schneider, Anja and Schneider, Bianca E. and Schober, Romana and Schoijet, Alejandra C. and Schott, Micah B. and Schramm, Michael and Schröder, Bernd and Schuh, Kai and Schüller, Christoph and Schulze, Ryan J. and Schürmanns, Lea and Schwamborn, Jens C. and Schwarten, Melanie and Scialo, Filippo and Sciarretta, Sebastiano and Scott, Melanie J. and Scotto, Kathleen W. and Scovassi, A. Ivana and Scrima, Andrea and Scrivo, Aurora and Sebastian, David and Sebti, Salwa and Sedej, Simon and Segatori, Laura and Segev, Nava and Seglen, Per O. and Seiliez, Iban and Seki, Ekihiro and Selleck, Scott B. and Sellke, Frank W. and Selsby, Joshua T. and Sendtner, Michael and Senturk, Serif and Seranova, Elena and Sergi, Consolato and Serra-Moreno, Ruth and Sesaki, Hiromi and Settembre, Carmine and Setty, Subba Rao Gangi and Sgarbi, Gianluca and Sha, Ou and Shacka, John J. and Shah, Javeed A. and Shang, Dantong and Shao, Changshun and Shao, Feng and Sharbati, Soroush and Sharkey, Lisa M. and Sharma, Dipali and Sharma, Gaurav and Sharma, Kulbhushan and Sharma, Pawan and Sharma, Surendra and Shen, Han Ming and Shen, Hongtao and Shen, Jiangang and Shen, Ming and Shen, Weili and Shen, Zheni and Sheng, Rui and Sheng, Zhi and Sheng, Zu Hang and Shi, Jianjian and Shi, Xiaobing and Shi, Ying Hong and Shiba-Fukushima, Kahori and Shieh, Jeng Jer and Shimada, Yohta and Shimizu, Shigeomi and Shimozawa, Makoto and Shintani, Takahiro and Shoemaker, Christopher J. and Shojaei, Shahla and Shoji, Ikuo and Shravage, Bhupendra V. and Shridhar, Viji and Shu, Chih Wen and Shu, Hong Bing and Shui, Ke and Shukla, Arvind K. and Shutt, Timothy E. and Sica, Valentina and Siddiqui, Aleem and Sierra, Amanda and Sierra-Torre, Virginia and Signorelli, Santiago and Sil, Payel and Silva, Bruno J.De Andrade and Silva, Johnatas D. and Silva-Pavez, Eduardo and Silvente-Poirot, Sandrine and Simmonds, Rachel E. and Simon, Anna Katharina and Simon, Hans Uwe and Simons, Matias and Singh, Anurag and Singh, Lalit P. and Singh, Rajat and Singh, Shivendra V. and Singh, Shrawan K. and Singh, Sudha B. 
and Singh, Sunaina and Singh, Surinder Pal and Sinha, Debasish and Sinha, Rohit Anthony and Sinha, Sangita and Sirko, Agnieszka and Sirohi, Kapil and Sivridis, Efthimios L. and Skendros, Panagiotis and Skirycz, Aleksandra and Slaninová, Iva and Smaili, Soraya S. and Smertenko, Andrei and Smith, Matthew D. and Soenen, Stefaan J. and Sohn, Eun Jung and Sok, Sophia P.M. and Solaini, Giancarlo and Soldati, Thierry and Soleimanpour, Scott A. and Soler, Rosa M. and Solovchenko, Alexei and Somarelli, Jason A. and Sonawane, Avinash and Song, Fuyong and Song, Hyun Kyu and Song, Ju Xian and Song, Kunhua and Song, Zhiyin and Soria, Leandro R. and Sorice, Maurizio and Soukas, Alexander A. and Soukup, Sandra Fausia and Sousa, Diana and Sousa, Nadia and Spagnuolo, Paul A. and Spector, Stephen A. and Srinivas Bharath, M. M. and St. Clair, Daret and Stagni, Venturina and Staiano, Leopoldo and Stalnecker, Clint A. and Stankov, Metodi V. and Stathopulos, Peter B. and Stefan, Katja and Stefan, Sven Marcel and Stefanis, Leonidas and Steffan, Joan S. and Steinkasserer, Alexander and Stenmark, Harald and Sterneckert, Jared and Stevens, Craig and Stoka, Veronika and Storch, Stephan and Stork, Björn and Strappazzon, Flavie and Strohecker, Anne Marie and Stupack, Dwayne G. and Su, Huanxing and Su, Ling Yan and Su, Longxiang and Suarez-Fontes, Ana M. and Subauste, Carlos S. and Subbian, Selvakumar and Subirada, Paula V. and Sudhandiran, Ganapasam and Sue, Carolyn M. and Sui, Xinbing and Summers, Corey and Sun, Guangchao and Sun, Jun and Sun, Kang and Sun, Meng Xiang and Sun, Qiming and Sun, Yi and Sun, Zhongjie and Sunahara, Karen K.S. and Sundberg, Eva and Susztak, Katalin and Sutovsky, Peter and Suzuki, Hidekazu and Sweeney, Gary and Symons, J. David and Sze, Stephen Cho Wing and Szewczyk, Nathaniel J. and Tabęcka-Łonczynska, Anna and Tabolacci, Claudio and Tacke, Frank and Taegtmeyer, Heinrich and Tafani, Marco and Tagaya, Mitsuo and Tai, Haoran and Tait, Stephen W.G. and Takahashi, Yoshinori and Takats, Szabolcs and Talwar, Priti and Tam, Chit and Tam, Shing Yau and Tampellini, Davide and Tamura, Atsushi and Tan, Chong Teik and Tan, Eng King and Tan, Ya Qin and Tanaka, Masaki and Tanaka, Motomasa and Tang, Daolin and Tang, Jingfeng and Tang, Tie Shan and Tanida, Isei and Tao, Zhipeng and Taouis, Mohammed and Tatenhorst, Lars and Tavernarakis, Nektarios and Taylor, Allen and Taylor, Gregory A. and Taylor, Joan M. and Tchetina, Elena and Tee, Andrew R. and Tegeder, Irmgard and Teis, David and Teixeira, Natercia and Teixeira-Clerc, Fatima and Tekirdag, Kumsal A. and Tencomnao, Tewin and Tenreiro, Sandra and Tepikin, Alexei V. and Testillano, Pilar S. and Tettamanti, Gianluca and Tharaux, Pierre Louis and Thedieck, Kathrin and Thekkinghat, Arvind A. and Thellung, Stefano and Thinwa, Josephine W. and Thirumalaikumar, V. P. and Thomas, Sufi Mary and Thomes, Paul G. and Thorburn, Andrew and Thukral, Lipi and Thum, Thomas and Thumm, Michael and Tian, Ling and Tichy, Ales and Till, Andreas and Timmerman, Vincent and Titorenko, Vladimir I. and Todi, Sokol V. and Todorova, Krassimira and Toivonen, Janne M. and Tomaipitinca, Luana and Tomar, Dhanendra and Tomas-Zapico, Cristina and Tomić, Sergej and Tong, Benjamin Chun Kit and Tong, Chao and Tong, Xin and Tooze, Sharon A. and Torgersen, Maria L. and Torii, Satoru and Torres-López, Liliana and Torriglia, Alicia and Towers, Christina G. and Towns, Roberto and Toyokuni, Shinya and Trajkovic, Vladimir and Tramontano, Donatella and Tran, Quynh Giao and Travassos, Leonardo H. 
and Trelford, Charles B. and Tremel, Shirley and Trougakos, Ioannis P. and Tsao, Betty P. and Tschan, Mario P. and Tse, Hung Fat and Tse, Tak Fu and Tsugawa, Hitoshi and Tsvetkov, Andrey S. and Tumbarello, David A. and Tumtas, Yasin and Tuñón, María J. and Turcotte, Sandra and Turk, Boris and Turk, Vito and Turner, Bradley J. and Tuxworth, Richard I. and Tyler, Jessica K. and Tyutereva, Elena V. and Uchiyama, Yasuo and Ugun-Klusek, Aslihan and Uhlig, Holm H. and Ułamek-Kozioł, Marzena and Ulasov, Ilya V. and Umekawa, Midori and Ungermann, Christian and Unno, Rei and Urbe, Sylvie and Uribe-Carretero, Elisabet and Üstün, Suayib and Uversky, Vladimir N. and Vaccari, Thomas and Vaccaro, Maria I. and Vahsen, Björn F. and Vakifahmetoglu-Norberg, Helin and Valdor, Rut and Valente, Maria J. and Valko, Ayelén and Vallee, Richard B. and Valverde, Angela M. and Van Den Berghe, Greet and Van Der Veen, Stijn and Van Kaer, Luc and Van Loosdregt, Jorg and Van Wijk, Sjoerd J.L. and Vandenberghe, Wim and Vanhorebeek, Ilse and Vannier-Santos, Marcos A. and Vannini, Nicola and Vanrell, M. Cristina and Vantaggiato, Chiara and Varano, Gabriele and Varela-Nieto, Isabel and Varga, Máté and Vasconcelos, M. Helena and Vats, Somya and Vavvas, Demetrios G. and Vega-Naredo, Ignacio and Vega-Rubin-De-Celis, Silvia and Velasco, Guillermo and Velázquez, Ariadna P. and Vellai, Tibor and Vellenga, Edo and Velotti, Francesca and Verdier, Mireille and Verginis, Panayotis and Vergne, Isabelle and Verkade, Paul and Verma, Manish and Verstreken, Patrik and Vervliet, Tim and Vervoorts, Jörg and Vessoni, Alexandre T. and Victor, Victor M. and Vidal, Michel and Vidoni, Chiara and Vieira, Otilia V. and Vierstra, Richard D. and Viganó, Sonia and Vihinen, Helena and Vijayan, Vinoy and Vila, Miquel and Vilar, Marçal and Villalba, José M. and Villalobo, Antonio and Villarejo-Zori, Beatriz and Villarroya, Francesc and Villarroya, Joan and Vincent, Olivier and Vindis, Cecile and Viret, Christophe and Viscomi, Maria Teresa and Visnjic, Dora and Vitale, Ilio and Vocadlo, David J. and Voitsekhovskaja, Olga V. and Volonté, Cinzia and Volta, Mattia and Vomero, Marta and Von Haefen, Clarissa and Vooijs, Marc A. and Voos, Wolfgang and Vucicevic, Ljubica and Wade-Martins, Richard and Waguri, Satoshi and Waite, Kenrick A. and Wakatsuki, Shuji and Walker, David W. and Walker, Mark J. and Walker, Simon A. and Walter, Jochen and Wandosell, Francisco G. and Wang, Bo and Wang, Chao Yung and Wang, Chen and Wang, Chenran and Wang, Chenwei and Wang, Cun Yu and Wang, Dong and Wang, Fangyang and Wang, Feng and Wang, Fengming and Wang, Guansong and Wang, Han and Wang, Hao and Wang, Hexiang and Wang, Hong Gang and Wang, Jianrong and Wang, Jigang and Wang, Jiou and Wang, Jundong and Wang, Kui and Wang, Lianrong and Wang, Liming and Wang, Maggie Haitian and Wang, Meiqing and Wang, Nanbu and Wang, Pengwei and Wang, Peipei and Wang, Ping and Wang, Ping and Wang, Qing Jun and Wang, Qing and Wang, Qing Kenneth and Wang, Qiong A. and Wang, Wen Tao and Wang, Wuyang and Wang, Xinnan and Wang, Xuejun and Wang, Yan and Wang, Yanchang and Wang, Yanzhuang and Wang, Yen Yun and Wang, Yihua and Wang, Yipeng and Wang, Yu and Wang, Yuqi and Wang, Zhe and Wang, Zhenyu and Wang, Zhouguang and Warnes, Gary and Warnsmann, Verena and Watada, Hirotaka and Watanabe, Eizo and Watchon, Maxinne and Wawrzyńska, Anna and Weaver, Timothy E. and Wegrzyn, Grzegorz and Wehman, Ann M. and Wei, Huafeng and Wei, Lei and Wei, Taotao and Wei, Yongjie and Weiergräber, Oliver H. 
and Weihl, Conrad C. and Weindl, Günther and Weiskirchen, Ralf and Wells, Alan and Wen, Runxia H. and Wen, Xin and Werner, Antonia and Weykopf, Beatrice and Wheatley, Sally P. and Whitton, J. Lindsay and Whitworth, Alexander J. and Wiktorska, Katarzyna and Wildenberg, Manon E. and Wileman, Tom and Wilkinson, Simon and Willbold, Dieter and Williams, Brett and Williams, Robin S.B. and Williams, Roger L. and Williamson, Peter R. and Wilson, Richard A. and Winner, Beate and Winsor, Nathaniel J. and Witkin, Steven S. and Wodrich, Harald and Woehlbier, Ute and Wollert, Thomas and Wong, Esther and Wong, Jack Ho and Wong, Richard W. and Wong, Vincent Kam Wai and Wong, W. Wei Lynn and Wu, An Guo and Wu, Chengbiao and Wu, Jian and Wu, Junfang and Wu, Kenneth K. and Wu, Min and Wu, Shan Ying and Wu, Shengzhou and Wu, Shu Yan and Wu, Shufang and Wu, William K.K. and Wu, Xiaohong and Wu, Xiaoqing and Wu, Yao Wen and Wu, Yihua and Xavier, Ramnik J. and Xia, Hongguang and Xia, Lixin and Xia, Zhengyuan and Xiang, Ge and Xiang, Jin and Xiang, Mingliang and Xiang, Wei and Xiao, Bin and Xiao, Guozhi and Xiao, Hengyi and Xiao, Hong Tao and Xiao, Jian and Xiao, Lan and Xiao, Shi and Xiao, Yin and Xie, Baoming and Xie, Chuan Ming and Xie, Min and Xie, Yuxiang and Xie, Zhiping and Xie, Zhonglin and Xilouri, Maria and Xu, Congfeng and Xu, En and Xu, Haoxing and Xu, Jing and Xu, Jin Rong and Xu, Liang and Xu, Wen Wen and Xu, Xiulong and Xue, Yu and Yakhine-Diop, Sokhna M.S. and Yamaguchi, Masamitsu and Yamaguchi, Osamu and Yamamoto, Ai and Yamashina, Shunhei and Yan, Shengmin and Yan, Shian Jang and Yan, Zhen and Yanagi, Yasuo and Yang, Chuanbin and Yang, Dun Sheng and Yang, Huan and Yang, Huang Tian and Yang, Hui and Yang, Jin Ming and Yang, Jing and Yang, Jingyu and Yang, Ling and Yang, Liu and Yang, Ming and Yang, Pei Ming and Yang, Qian and Yang, Seungwon and Yang, Shu and Yang, Shun Fa and Yang, Wannian and Yang, Wei Yuan and Yang, Xiaoyong and Yang, Xuesong and Yang, Yi and Yang, Ying and Yao, Honghong and Yao, Shenggen and Yao, Xiaoqiang and Yao, Yong Gang and Yao, Yong Ming and Yasui, Takahiro and Yazdankhah, Meysam and Yen, Paul M. and Yi, Cong and Yin, Xiao Ming and Yin, Yanhai and Yin, Zhangyuan and Yin, Ziyi and Ying, Meidan and Ying, Zheng and Yip, Calvin K. and Yiu, Stephanie Pei Tung and Yoo, Young H. and Yoshida, Kiyotsugu and Yoshii, Saori R. and Yoshimori, Tamotsu and Yousefi, Bahman and Yu, Boxuan and Yu, Haiyang and Yu, Jun and Yu, Jun and Yu, Li and Yu, Ming Lung and Yu, Seong Woon and Yu, Victor C. and Yu, W. Haung and Yu, Zhengping and Yu, Zhou and Yuan, Junying and Yuan, Ling Qing and Yuan, Shilin and Yuan, Shyng Shiou F. and Yuan, Yanggang and Yuan, Zengqiang and Yue, Jianbo and Yue, Zhenyu and Yun, Jeanho and Yung, Raymond L. and Zacks, David N. and Zaffagnini, Gabriele and Zambelli, Vanessa O. and Zanella, Isabella and Zang, Qun S. and Zanivan, Sara and Zappavigna, Silvia and Zaragoza, Pilar and Zarbalis, Konstantinos S. and Zarebkohan, Amir and Zarrouk, Amira and Zeitlin, Scott O. and Zeng, Jialiu and Zeng, Ju Deng and Žerovnik, Eva and Zhan, Lixuan and Zhang, Bin and Zhang, Donna D. and Zhang, Hanlin and Zhang, Hong and Zhang, Hong and Zhang, Honghe and Zhang, Huafeng and Zhang, Huaye and Zhang, Hui and Zhang, Hui Ling and Zhang, Jianbin and Zhang, Jianhua and Zhang, Jing Pu and Zhang, Kalin Y.B. and Zhang, Leshuai W. 
and Zhang, Lin and Zhang, Lisheng and Zhang, Lu and Zhang, Luoying and Zhang, Menghuan and Zhang, Peng and Zhang, Sheng and Zhang, Wei and Zhang, Xiangnan and Zhang, Xiao Wei and Zhang, Xiaolei and Zhang, Xiaoyan and Zhang, Xin and Zhang, Xinxin and Zhang, Xu Dong and Zhang, Yang and Zhang, Yanjin and Zhang, Yi and Zhang, Ying Dong and Zhang, Yingmei and Zhang, Yuan Yuan and Zhang, Yuchen and Zhang, Zhe and Zhang, Zhengguang and Zhang, Zhibing and Zhang, Zhihai and Zhang, Zhiyong and Zhang, Zili and Zhao, Haobin and Zhao, Lei and Zhao, Shuang and Zhao, Tongbiao and Zhao, Xiao Fan and Zhao, Ying and Zhao, Yongchao and Zhao, Yongliang and Zhao, Yuting and Zheng, Guoping and Zheng, Kai and Zheng, Ling and Zheng, Shizhong and Zheng, Xi Long and Zheng, Yi and Zheng, Zu Guo and Zhivotovsky, Boris and Zhong, Qing and Zhou, Ao and Zhou, Ben and Zhou, Cefan and Zhou, Gang and Zhou, Hao and Zhou, Hong and Zhou, Hongbo and Zhou, Jie and Zhou, Jing and Zhou, Jing and Zhou, Jiyong and Zhou, Kailiang and Zhou, Rongjia and Zhou, Xu Jie and Zhou, Yanshuang and Zhou, Yinghong and Zhou, Yubin and Zhou, Zheng Yu and Zhou, Zhou and Zhu, Binglin and Zhu, Changlian and Zhu, Guo Qing and Zhu, Haining and Zhu, Hongxin and Zhu, Hua and Zhu, Wei Guo and Zhu, Yanping and Zhu, Yushan and Zhuang, Haixia and Zhuang, Xiaohong and Zientara-Rytter, Katarzyna and Zimmermann, Christine M. and Ziviani, Elena and Zoladek, Teresa and Zong, Wei Xing and Zorov, Dmitry B. and Zorzano, Antonio and Zou, Weiping and Zou, Zhen and Zou, Zhengzhi and Zuryn, Steven and Zwerschke, Werner and Brand-Saberi, Beate and Dong, X. Charlie and Kenchappa, Chandra Shekar and Li, Zuguo and Lin, Yong and Oshima, Shigeru and Rong, Yueguang and Sluimer, Judith C. and Stallings, Christina L. and Tong, Chun Kit}, issn = {15548635}, journal = {Autophagy}, number = {1}, publisher = {Bellwether Publishing}, title = {{Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)}}, doi = {10.1080/15548627.2020.1797280}, volume = {17}, year = {2021}, } @article{9283, abstract = {Gene expression levels are influenced by multiple coexisting molecular mechanisms. Some of these interactions such as those of transcription factors and promoters have been studied extensively. However, predicting phenotypes of gene regulatory networks (GRNs) remains a major challenge. Here, we use a well-defined synthetic GRN to study in Escherichia coli how network phenotypes depend on local genetic context, i.e. the genetic neighborhood of a transcription factor and its relative position. We show that one GRN with fixed topology can display not only quantitatively but also qualitatively different phenotypes, depending solely on the local genetic context of its components. Transcriptional read-through is the main molecular mechanism that places one transcriptional unit (TU) within two separate regulons without the need for complex regulatory sequences. 
We propose that relative order of individual TUs, with its potential for combinatorial complexity, plays an important role in shaping phenotypes of GRNs.}, author = {Nagy-Staron, Anna A and Tomasek, Kathrin and Caruso Carter, Caroline and Sonnleitner, Elisabeth and Kavcic, Bor and Paixão, Tiago and Guet, Calin C}, issn = {2050-084X}, journal = {eLife}, keywords = {Genetics and Molecular Biology}, publisher = {eLife Sciences Publications}, title = {{Local genetic context shapes the function of a gene regulatory network}}, doi = {10.7554/elife.65993}, volume = {10}, year = {2021}, } @article{9290, abstract = {Polar subcellular localization of the PIN exporters of the phytohormone auxin is a key determinant of directional, intercellular auxin transport and thus a central topic of both plant cell and developmental biology. Arabidopsis mutants lacking PID, a kinase that phosphorylates PINs, or the MAB4/MEL proteins of unknown molecular function display PIN polarity defects and phenocopy pin mutants, but mechanistic insights into how these factors convey PIN polarity are missing. Here, by combining protein biochemistry with quantitative live-cell imaging, we demonstrate that PINs, MAB4/MELs, and AGC kinases interact in the same complex at the plasma membrane. MAB4/MELs are recruited to the plasma membrane by the PINs and in concert with the AGC kinases maintain PIN polarity through limiting lateral diffusion-based escape of PINs from the polar domain. The PIN-MAB4/MEL-PID protein complex has self-reinforcing properties thanks to positive feedback between AGC kinase-mediated PIN phosphorylation and MAB4/MEL recruitment. We thus uncover the molecular mechanism by which AGC kinases and MAB4/MEL proteins regulate PIN localization and plant development.}, author = {Glanc, Matous and Van Gelderen, K and Hörmayer, Lukas and Tan, Shutang and Naramoto, S and Zhang, Xixi and Domjan, David and Vcelarova, L and Hauschild, Robert and Johnson, Alexander J and de Koning, E and van Dop, M and Rademacher, E and Janson, S and Wei, X and Molnar, Gergely and Fendrych, Matyas and De Rybel, B and Offringa, R and Friml, Jiří}, issn = {0960-9822}, journal = {Current Biology}, publisher = {Elsevier}, title = {{AGC kinases and MAB4/MEL proteins maintain PIN polarity by limiting lateral diffusion in plant cells}}, doi = {10.1016/j.cub.2021.02.028 }, year = {2021}, } @article{9287, abstract = {The phytohormone auxin and its directional transport through tissues are intensively studied. However, a mechanistic understanding of auxin-mediated feedback on endocytosis and polar distribution of PIN auxin transporters remains limited due to contradictory observations and interpretations. Here, we used state-of-the-art methods to reexamine the auxin effects on PIN endocytic trafficking. We used high auxin concentrations or longer treatments versus lower concentrations and shorter treatments of natural (IAA) and synthetic (NAA) auxins to distinguish between specific and nonspecific effects. Longer treatments of both auxins interfere with Brefeldin A-mediated intracellular PIN2 accumulation and also with general aggregation of endomembrane compartments. NAA treatment decreased the internalization of the endocytic tracer dye, FM4-64; however, NAA treatment also affected the number, distribution, and compartment identity of the early endosome/trans-Golgi network (EE/TGN), rendering the FM4-64 endocytic assays at high NAA concentrations unreliable. 
To circumvent these nonspecific effects of NAA and IAA affecting the endomembrane system, we opted for alternative approaches visualizing the endocytic events directly at the plasma membrane (PM). Using Total Internal Reflection Fluorescence (TIRF) microscopy, we saw no significant effects of IAA or NAA treatments on the incidence and dynamics of clathrin foci, implying that these treatments do not affect the overall endocytosis rate. However, both NAA and IAA at low concentrations rapidly and specifically promoted endocytosis of photo-converted PIN2 from the PM. These analyses identify a specific effect of NAA and IAA on PIN2 endocytosis, thus contributing to its polarity maintenance and furthermore illustrate that high auxin levels have nonspecific effects on trafficking and endomembrane compartments. }, author = {Narasimhan, Madhumitha and Gallei, Michelle C and Tan, Shutang and Johnson, Alexander J and Verstraeten, Inge and Li, Lanxin and Rodriguez Solovey, Lesia and Han, Huibin and Himschoot, E and Wang, R and Vanneste, S and Sánchez-Simarro, J and Aniento, F and Adamowski, Maciek and Friml, Jiří}, issn = {0032-0889}, journal = {Plant Physiology}, publisher = {Oxford University Press}, title = {{Systematic analysis of specific and nonspecific auxin effects on endocytosis and trafficking}}, doi = {10.1093/plphys/kiab134}, year = {2021}, } @article{9288, abstract = {• The phenylpropanoid pathway serves a central role in plant metabolism, providing numerous compounds involved in diverse physiological processes. Most carbon entering the pathway is incorporated into lignin. Although several phenylpropanoid pathway mutants show seedling growth arrest, the role for lignin in seedling growth and development is unexplored. • We use complementary pharmacological and genetic approaches to block CINNAMATE‐4‐HYDROXYLASE (C4H) functionality in Arabidopsis seedlings and a set of molecular and biochemical techniques to investigate the underlying phenotypes. • Blocking C4H resulted in reduced lateral rooting and increased adventitious rooting apically in the hypocotyl. These phenotypes coincided with an inhibition in auxin transport. The upstream accumulation in cis‐cinnamic acid was found to likely cause polar auxin transport inhibition. Conversely, a downstream depletion in lignin perturbed phloem‐mediated auxin transport. Restoring lignin deposition effectively reestablished phloem transport and, accordingly, auxin homeostasis. • Our results show that the accumulation of bioactive intermediates and depletion in lignin jointly cause the aberrant phenotypes upon blocking C4H, and demonstrate that proper deposition of lignin is essential for the establishment of auxin distribution in seedlings. 
Our data position the phenylpropanoid pathway and lignin in a new physiological framework, consolidating their importance in plant growth and development.}, author = {El Houari, I and Van Beirs, C and Arents, HE and Han, Huibin and Chanoca, A and Opdenacker, D and Pollier, J and Storme, V and Steenackers, W and Quareshy, M and Napier, R and Beeckman, T and Friml, Jiří and De Rybel, B and Boerjan, W and Vanholme, B}, issn = {0028-646x}, journal = {New Phytologist}, publisher = {Wiley}, title = {{Seedling developmental defects upon blocking CINNAMATE-4-HYDROXYLASE are caused by perturbations in auxin transport}}, doi = {10.1111/nph.17349}, year = {2021}, } @misc{9291, abstract = {This .zip file contains the transport data for figures presented in the main text and supplementary material of "Enhancement of Proximity Induced Superconductivity in Planar Germanium" by K. Aggarwal et al. The measurements were done using Labber Software and the data is stored in the hdf5 file format. The files can be opened using either the Labber Log Browser (https://labber.org/overview/) or Labber Python API (http://labber.org/online-doc/api/LogFile.html).}, author = {Katsaros, Georgios}, publisher = {IST Austria}, title = {{Raw transport data for: Enhancement of proximity induced superconductivity in planar germanium}}, doi = {10.15479/AT:ISTA:9291}, year = {2021}, } @inproceedings{9296, abstract = {A matching is compatible to two or more labeled point sets of size n with labels {1,…,n} if its straight-line drawing on each of these point sets is crossing-free. We study the maximum number of edges in a matching compatible to two or more labeled point sets in general position in the plane. We show that for any two labeled convex sets of n points there exists a compatible matching with ⌊√(2n)⌋ edges. More generally, for any ℓ labeled point sets we construct compatible matchings of size Ω(n^{1/ℓ}). As a corresponding upper bound, we use probabilistic arguments to show that for any ℓ given sets of n points there exists a labeling of each set such that the largest compatible matching has O(n^{2/(ℓ+1)}) edges. Finally, we show that Θ(log n) copies of any set of n points are necessary and sufficient for the existence of a labeling such that any compatible matching consists only of a single edge.}, author = {Aichholzer, Oswin and Arroyo Guevara, Alan M and Masárová, Zuzana and Parada, Irene and Perz, Daniel and Pilz, Alexander and Tkadlec, Josef and Vogtenhuber, Birgit}, booktitle = {15th International Conference on Algorithms and Computation}, isbn = {9783030682101}, issn = {16113349}, location = {Virtual}, pages = {221--233}, publisher = {Springer Nature}, title = {{On compatible matchings}}, doi = {10.1007/978-3-030-68211-8_18}, volume = {12635}, year = {2021}, } @article{9301, abstract = {Electrodepositing insulating lithium peroxide (Li2O2) is the key process during discharge of aprotic Li–O2 batteries and determines rate, capacity, and reversibility. Current understanding states that the partition between surface adsorbed and dissolved lithium superoxide governs whether Li2O2 grows as a conformal surface film or larger particles, leading to low or high capacities, respectively. However, a better understanding of the factors governing Li2O2 packing density and capacity requires structurally sensitive in situ metrologies.
Here, we establish in situ small- and wide-angle X-ray scattering (SAXS/WAXS) as a suitable method to record the Li2O2 phase evolution with atomic to submicrometer resolution during cycling a custom-built in situ Li–O2 cell. Combined with sophisticated data analysis, SAXS allows retrieving rich quantitative structural information from complex multiphase systems. Surprisingly, we find that features are absent that would point at a Li2O2 surface film formed via two consecutive electron transfers, even in poorly solvating electrolytes thought to be prototypical for surface growth. All scattering data can be modeled by stacks of thin Li2O2 platelets potentially forming large toroidal particles. Li2O2 solution growth is further justified by rotating ring-disk electrode measurements and electron microscopy. Higher discharge overpotentials lead to smaller Li2O2 particles, but there is no transition to an electronically passivating, conformal Li2O2 coating. Hence, mass transport of reactive species rather than electronic transport through a Li2O2 film limits the discharge capacity. Provided that species mobilities and carbon surface areas are high, this allows for high discharge capacities even in weakly solvating electrolytes. The currently accepted Li–O2 reaction mechanism ought to be reconsidered.}, author = {Prehal, Christian and Samojlov, Aleksej and Nachtnebel, Manfred and Lovicar, Ludek and Kriechbaum, Manfred and Amenitsch, Heinz and Freunberger, Stefan Alexander}, issn = {0027-8424}, journal = {PNAS}, keywords = {small-angle X-ray scattering, oxygen reduction, disproportionation, Li-air battery}, number = {14}, publisher = {Proceedings of the National Academy of Sciences}, title = {{In situ small-angle X-ray scattering reveals solution phase discharge of Li–O2 batteries with weakly solvating electrolytes}}, doi = {10.1073/pnas.2021893118}, volume = {118}, year = {2021}, } @article{9297, abstract = {We report the results of an experimental investigation into the decay of turbulence in plane Couette–Poiseuille flow using ‘quench’ experiments where the flow laminarises after a sudden reduction in Reynolds number Re. Specifically, we study the velocity field in the streamwise–spanwise plane. We show that the spanwise velocity containing rolls decays faster than the streamwise velocity, which displays elongated regions of higher or lower velocity called streaks. At final Reynolds numbers above 425, the decay of streaks displays two stages: first a slow decay when rolls are present and secondly a more rapid decay of streaks alone. The difference in behaviour results from the regeneration of streaks by rolls, called the lift-up effect. We define the turbulent fraction as the portion of the flow containing turbulence and this is estimated by thresholding the spanwise velocity component. It decreases linearly with time in the whole range of final Re. The corresponding decay slope increases linearly with final Re. The extrapolated value at which this decay slope vanishes is Re_az ≈ 656±10, close to Re_g ≈ 670 at which turbulence is self-sustained. The decay of the energy computed from the spanwise velocity component is found to be exponential. The corresponding decay rate increases linearly with Re, with an extrapolated vanishing value at Re_Az ≈ 688±10. This value is also close to the value at which the turbulence is self-sustained, showing that valuable information on the transition can be obtained over a wide range of Re.}, author = {Liu, T. and Semin, B. and Klotz, Lukasz and Godoy-Diana, R.
and Wesfreid, J. E. and Mullin, T.}, issn = {1469-7645}, journal = {Journal of Fluid Mechanics}, publisher = {Cambridge University Press}, title = {{Decay of streaks and rolls in plane Couette-Poiseuille flow}}, doi = {10.1017/jfm.2021.89}, volume = {915}, year = {2021}, } @article{9295, abstract = {Hill's Conjecture states that the crossing number cr(K_n) of the complete graph K_n in the plane (equivalently, the sphere) is (1/4)⌊n/2⌋⌊(n−1)/2⌋⌊(n−2)/2⌋⌊(n−3)/2⌋ = n^4/64 + O(n^3). Moon proved that the expected number of crossings in a spherical drawing in which the points are randomly distributed and joined by geodesics is precisely n^4/64 + O(n^3), thus matching asymptotically the conjectured value of cr(K_n). Let cr_P(G) denote the crossing number of a graph G in the projective plane. Recently, Elkies proved that the expected number of crossings in a naturally defined random projective plane drawing of K_n is n^4/(8π^2) + O(n^3). In analogy with the relation of Moon's result to Hill's conjecture, Elkies asked if lim_{n→∞} cr_P(K_n)/n^4 = 1/(8π^2). We construct drawings of K_n in the projective plane that disprove this.}, author = {Arroyo Guevara, Alan M and Mcquillan, Dan and Richter, R. Bruce and Salazar, Gelasio and Sullivan, Matthew}, issn = {1097-0118}, journal = {Journal of Graph Theory}, publisher = {Wiley}, title = {{Drawings of complete graphs in the projective plane}}, doi = {10.1002/jgt.22665}, year = {2021}, } @article{9293, abstract = {We consider planning problems for graphs, Markov Decision Processes (MDPs), and games on graphs in an explicit state space. While graphs represent the most basic planning model, MDPs represent interaction with nature and games on graphs represent interaction with an adversarial environment. We consider two planning problems with k different target sets: (a) the coverage problem asks whether there is a plan for each individual target set; and (b) the sequential target reachability problem asks whether the targets can be reached in a given sequence. For the coverage problem, we present a linear-time algorithm for graphs, and quadratic conditional lower bound for MDPs and games on graphs. For the sequential target problem, we present a linear-time algorithm for graphs, a sub-quadratic algorithm for MDPs, and a quadratic conditional lower bound for games on graphs. Our results with conditional lower bounds, based on the boolean matrix multiplication (BMM) conjecture and strong exponential time hypothesis (SETH), establish (i) model-separation results showing that for the coverage problem MDPs and games on graphs are harder than graphs, and for the sequential reachability problem games on graphs are harder than MDPs and graphs; and (ii) problem-separation results showing that for MDPs the coverage problem is harder than the sequential target problem.}, author = {Chatterjee, Krishnendu and Dvořák, Wolfgang and Henzinger, Monika and Svozil, Alexander}, issn = {00043702}, journal = {Artificial Intelligence}, number = {8}, publisher = {Elsevier}, title = {{Algorithms and conditional lower bounds for planning problems}}, doi = {10.1016/j.artint.2021.103499}, volume = {297}, year = {2021}, } @article{9294, abstract = {In this issue of Developmental Cell, Doyle and colleagues identify periodic anterior contraction as a characteristic feature of fibroblasts and mesenchymal cancer cells embedded in 3D collagen gels.
This contractile mechanism generates a matrix prestrain required for crawling in fibrous 3D environments.}, author = {Gärtner, Florian R and Sixt, Michael K}, issn = {18781551}, journal = {Developmental Cell}, number = {6}, pages = {723--725}, publisher = {Elsevier}, title = {{Engaging the front wheels to drive through fibrous terrain}}, doi = {10.1016/j.devcel.2021.03.002}, volume = {56}, year = {2021}, } @article{9307, abstract = {We establish finite time extinction with probability one for weak solutions of the Cauchy–Dirichlet problem for the 1D stochastic porous medium equation with Stratonovich transport noise and compactly supported smooth initial datum. Heuristically, this is expected to hold because Brownian motion has average spread rate O(t^{1/2}) whereas the support of solutions to the deterministic PME grows only with rate O(t^{1/(m+1)}). The rigorous proof relies on a contraction principle up to time-dependent shift for Wong–Zakai type approximations, the transformation to a deterministic PME with two copies of a Brownian path as the lateral boundary, and techniques from the theory of viscosity solutions.}, author = {Hensel, Sebastian}, issn = {2194041X}, journal = {Stochastics and Partial Differential Equations: Analysis and Computations}, publisher = {Springer Nature}, title = {{Finite time extinction for the 1D stochastic porous medium equation with transport noise}}, doi = {10.1007/s40072-021-00188-9}, year = {2021}, } @article{9304, abstract = {The high processing cost, poor mechanical properties and moderate performance of Bi2Te3–based alloys used in thermoelectric devices limit the cost-effectiveness of this energy conversion technology. Towards solving these current challenges, in the present work, we detail a low temperature solution-based approach to produce Bi2Te3-Cu2-xTe nanocomposites with improved thermoelectric performance. Our approach consists in combining proper ratios of colloidal nanoparticles and consolidating the resulting mixture into nanocomposites using a hot press. The transport properties of the nanocomposites are characterized and compared with those of pure Bi2Te3 nanomaterials obtained following the same procedure. In contrast with most previous works, the presence of Cu2-xTe nanodomains does not result in a significant reduction of the lattice thermal conductivity of the reference Bi2Te3 nanomaterial, which is already very low. However, the introduction of Cu2-xTe yields a nearly threefold increase of the power factor associated to a simultaneous increase of the Seebeck coefficient and electrical conductivity at temperatures above 400 K. Taking into account the band alignment of the two materials, we rationalize this increase by considering that Cu2-xTe nanostructures, with a relatively low electron affinity, are able to inject electrons into Bi2Te3, enhancing in this way its electrical conductivity. The simultaneous increase of the Seebeck coefficient is related to the energy filtering of charge carriers at energy barriers within Bi2Te3 domains associated with the accumulation of electrons in regions nearby a Cu2-xTe/Bi2Te3 heterojunction.
Overall, with the incorporation of a proper amount of Cu2-xTe nanoparticles, we demonstrate a 250% improvement of the thermoelectric figure of merit of Bi2Te3.}, author = {Zhang, Yu and Xing, Congcong and Liu, Yu and Li, Mengyao and Xiao, Ke and Guardia, Pablo and Lee, Seungho and Han, Xu and Ostovari Moghaddam, Ahmad and Josep Roa, Joan and Arbiol, Jordi and Ibáñez, Maria and Pan, Kai and Prato, Mirko and Xie, Ying and Cabot, Andreu}, issn = {13858947}, journal = {Chemical Engineering Journal}, number = {8}, publisher = {Elsevier}, title = {{Influence of copper telluride nanodomains on the transport properties of n-type bismuth telluride}}, doi = {10.1016/j.cej.2021.129374}, volume = {418}, year = {2021}, } @article{9306, abstract = {Assemblies of actin and its regulators underlie the dynamic morphology of all eukaryotic cells. To understand how actin regulatory proteins work together to generate actin-rich structures such as filopodia, we analyzed the localization of diverse actin regulators within filopodia in Drosophila embryos and in a complementary in vitro system of filopodia-like structures (FLSs). We found that the composition of the regulatory protein complex where actin is incorporated (the filopodial tip complex) is remarkably heterogeneous both in vivo and in vitro. Our data reveal that different pairs of proteins correlate with each other and with actin bundle length, suggesting the presence of functional subcomplexes. This is consistent with a theoretical framework where three or more redundant subcomplexes join the tip complex stochastically, with any two being sufficient to drive filopodia formation. We provide an explanation for the observed heterogeneity and suggest that a mechanism based on multiple components allows stereotypical filopodial dynamics to arise from diverse upstream signaling pathways.}, author = {Dobramysl, Ulrich and Jarsch, Iris Katharina and Inoue, Yoshiko and Shimo, Hanae and Richier, Benjamin and Gadsby, Jonathan R. and Mason, Julia and Szałapak, Alicja and Ioannou, Pantelis Savvas and Correia, Guilherme Pereira and Walrant, Astrid and Butler, Richard and Hannezo, Edouard B and Simons, Benjamin D. and Gallop, Jennifer L.}, issn = {15408140}, journal = {The Journal of Cell Biology}, number = {4}, publisher = {Rockefeller University Press}, title = {{Stochastic combinations of actin regulatory proteins are sufficient to drive filopodia formation}}, doi = {10.1083/jcb.202003052}, volume = {220}, year = {2021}, } @article{9305, abstract = {Copper chalcogenides are outstanding thermoelectric materials for applications in the medium-high temperature range. Among different chalcogenides, while Cu2−xSe is characterized by higher thermoelectric figures of merit, Cu2−xS provides advantages in terms of low cost and element abundance. In the present work, we investigate the effect of different dopants to enhance the Cu2−xS performance and also its thermal stability. Among the tested options, Pb-doped Cu2−xS shows the highest improvement in stability against sulfur volatilization. Additionally, Pb incorporation allows tuning charge carrier concentration, which enables a significant improvement of the power factor. 
We demonstrate here that the introduction of an optimal additive amount of just 0.3% results in a threefold increase of the power factor in the middle-temperature range (500–800 K) and a record dimensionless thermoelectric figure of merit above 2 at 880 K.}, author = {Zhang, Yu and Xing, Congcong and Liu, Yu and Spadaro, Maria Chiara and Wang, Xiang and Li, Mengyao and Xiao, Ke and Zhang, Ting and Guardia, Pablo and Lim, Khak Ho and Moghaddam, Ahmad Ostovari and Llorca, Jordi and Arbiol, Jordi and Ibáñez, Maria and Cabot, Andreu}, issn = {22112855}, journal = {Nano Energy}, number = {7}, publisher = {Elsevier}, title = {{Doping-mediated stabilization of copper vacancies to promote thermoelectric properties of Cu2-xS}}, doi = {10.1016/j.nanoen.2021.105991}, volume = {85}, year = {2021}, } @phdthesis{8934, abstract = {In this thesis, we consider several of the most classical and fundamental problems in static analysis and formal verification, including invariant generation, reachability analysis, termination analysis of probabilistic programs, data-flow analysis, quantitative analysis of Markov chains and Markov decision processes, and the problem of data packing in cache management. We use techniques from parameterized complexity theory, polyhedral geometry, and real algebraic geometry to significantly improve the state-of-the-art, in terms of both scalability and completeness guarantees, for the mentioned problems. In some cases, our results are the first theoretical improvements for the respective problems in two or three decades.}, author = {Goharshady, Amir Kafshdar}, issn = {2663-337X}, pages = {278}, publisher = {IST Austria}, title = {{Parameterized and algebro-geometric advances in static program analysis}}, doi = {10.15479/AT:ISTA:8934}, year = {2021}, } @article{7956, abstract = {When short-range attractions are combined with long-range repulsions in colloidal particle systems, complex microphases can emerge. Here, we study a system of isotropic particles, which can form lamellar structures or a disordered fluid phase when temperature is varied. We show that, at equilibrium, the lamellar structure crystallizes, while out of equilibrium, the system forms a variety of structures at different shear rates and temperatures above melting. The shear-induced ordering is analyzed by means of principal component analysis and artificial neural networks, which are applied to data of reduced dimensionality. Our results reveal the possibility of inducing ordering by shear, potentially providing a feasible route to the fabrication of ordered lamellar structures from isotropic particles.}, author = {Pȩkalski, J. and Rzadkowski, Wojciech and Panagiotopoulos, A. Z.}, issn = {10897690}, journal = {The Journal of chemical physics}, number = {20}, publisher = {AIP}, title = {{Shear-induced ordering in systems with competing interactions: A machine learning study}}, doi = {10.1063/5.0005194}, volume = {152}, year = {2020}, } @article{7957, abstract = {Neurodevelopmental disorders (NDDs) are a class of disorders affecting brain development and function and are characterized by wide genetic and clinical variability. In this review, we discuss the multiple factors that influence the clinical presentation of NDDs, with particular attention to gene vulnerability, mutational load, and the two-hit model. 
Despite the complex architecture of mutational events associated with NDDs, the various proteins involved appear to converge on common pathways, such as synaptic plasticity/function, chromatin remodelers and the mammalian target of rapamycin (mTOR) pathway. A thorough understanding of the mechanisms behind these pathways will hopefully lead to the identification of candidates that could be targeted for treatment approaches.}, author = {Parenti, Ilaria and Garcia Rabaneda, Luis E and Schön, Hanna and Novarino, Gaia}, issn = {1878108X}, journal = {Trends in Neurosciences}, number = {8}, pages = {608--621}, publisher = {Elsevier}, title = {{Neurodevelopmental disorders: From genetics to functional pathways}}, doi = {10.1016/j.tins.2020.05.004}, volume = {43}, year = {2020}, } @article{7960, abstract = {Let A={A_1,…,A_n} be a family of sets in the plane. For 0≤i2b be integers. We prove that if each k-wise or (k+1)-wise intersection of sets from A has at most b path-connected components, which all are open, then f_{k+1}=0 implies f_k ≤ c f_{k−1} for some positive constant c depending only on b and k. These results also extend to two-dimensional compact surfaces.}, author = {Kalai, Gil and Patakova, Zuzana}, issn = {14320444}, journal = {Discrete and Computational Geometry}, pages = {304--323}, publisher = {Springer Nature}, title = {{Intersection patterns of planar sets}}, doi = {10.1007/s00454-020-00205-z}, volume = {64}, year = {2020}, } @article{7962, abstract = {A string graph is the intersection graph of a family of continuous arcs in the plane. The intersection graph of a family of plane convex sets is a string graph, but not all string graphs can be obtained in this way. We prove the following structure theorem conjectured by Janson and Uzzell: The vertex set of almost all string graphs on n vertices can be partitioned into five cliques such that some pair of them is not connected by any edge (n→∞). We also show that every graph with the above property is an intersection graph of plane convex sets. As a corollary, we obtain that almost all string graphs on n vertices are intersection graphs of plane convex sets.}, author = {Pach, János and Reed, Bruce and Yuditsky, Yelena}, issn = {14320444}, journal = {Discrete and Computational Geometry}, number = {4}, pages = {888--917}, publisher = {Springer Nature}, title = {{Almost all string graphs are intersection graphs of plane convex sets}}, doi = {10.1007/s00454-020-00213-z}, volume = {63}, year = {2020}, } @inproceedings{7966, abstract = {For 1≤m≤n, we consider a natural m-out-of-n multi-instance scenario for a public-key encryption (PKE) scheme. An adversary, given n independent instances of PKE, wins if he breaks at least m out of the n instances. In this work, we are interested in the scaling factor of PKE schemes, SF, which measures how well the difficulty of breaking m out of the n instances scales in m. That is, a scaling factor SF=ℓ indicates that breaking m out of n instances is at least ℓ times more difficult than breaking one single instance. A PKE scheme with small scaling factor hence provides an ideal target for mass surveillance. In fact, the Logjam attack (CCS 2015) implicitly exploited, among other things, an almost constant scaling factor of ElGamal over finite fields (with shared group parameters). For Hashed ElGamal over elliptic curves, we use the generic group model to argue that the scaling factor depends on the scheme's granularity.
In low granularity, meaning each public key contains its independent group parameter, the scheme has optimal scaling factor SF=m; In medium and high granularity, meaning all public keys share the same group parameter, the scheme still has a reasonable scaling factor SF=√m. Our findings underline that instantiating ElGamal over elliptic curves should be preferred to finite fields in a multi-instance scenario. As our main technical contribution, we derive new generic-group lower bounds of Ω(√(mp)) on the difficulty of solving both the m-out-of-n Gap Discrete Logarithm and the m-out-of-n Gap Computational Diffie-Hellman problem over groups of prime order p, extending a recent result by Yun (EUROCRYPT 2015). We establish the lower bound by studying the hardness of a related computational problem which we call the search-by-hypersurface problem.}, author = {Auerbach, Benedikt and Giacon, Federico and Kiltz, Eike}, booktitle = {Advances in Cryptology – EUROCRYPT 2020}, isbn = {9783030457266}, issn = {0302-9743}, pages = {475--506}, publisher = {Springer Nature}, title = {{Everybody’s a target: Scalability in public-key encryption}}, doi = {10.1007/978-3-030-45727-3_16}, volume = {12107}, year = {2020}, } @article{7968, abstract = {Organic materials are known to feature long spin-diffusion times, originating in a generally small spin–orbit coupling observed in these systems. From that perspective, chiral molecules acting as efficient spin selectors pose a puzzle that attracted a lot of attention in recent years. Here, we revisit the physical origins of chiral-induced spin selectivity (CISS) and propose a simple analytic minimal model to describe it. The model treats a chiral molecule as an anisotropic wire with molecular dipole moments aligned arbitrarily with respect to the wire’s axes and is therefore quite general. Importantly, it shows that the helical structure of the molecule is not necessary to observe CISS and other chiral nonhelical molecules can also be considered as potential candidates for the CISS effect. We also show that the suggested simple model captures the main characteristics of CISS observed in the experiment, without the need for additional constraints employed in the previous studies. The results pave the way for understanding other related physical phenomena where the CISS effect plays an essential role.}, author = {Ghazaryan, Areg and Paltiel, Yossi and Lemeshko, Mikhail}, issn = {1932-7447}, journal = {The Journal of Physical Chemistry C}, number = {21}, pages = {11716--11721}, publisher = {American Chemical Society}, title = {{Analytic model of chiral-induced spin selectivity}}, doi = {10.1021/acs.jpcc.0c02584}, volume = {124}, year = {2020}, } @article{7971, abstract = {Multilayer graphene lattices allow for an additional tunability of the band structure by the strong perpendicular electric field. In particular, the emergence of the new multiple Dirac points in ABA stacked trilayer graphene subject to strong transverse electric fields was proposed theoretically and confirmed experimentally. These new Dirac points dubbed “gullies” emerge from the interplay between strong electric field and trigonal warping. In this work, we first characterize the properties of new emergent Dirac points and show that the electric field can be used to tune the distance between gullies in the momentum space. We demonstrate that the band structure has multiple Lifshitz transitions and higher-order singularity of “monkey saddle” type. 
Following the characterization of the band structure, we consider the spectrum of Landau levels and structure of their wave functions. In the limit of strong electric fields when gullies are well separated in momentum space, they give rise to triply degenerate Landau levels. In the second part of this work, we investigate how degeneracy between three gully Landau levels is lifted in the presence of interactions. Within the Hartree-Fock approximation we show that the symmetry breaking state interpolates between the fully gully polarized state that breaks C3 symmetry at high displacement field and the gully symmetric state when the electric field is decreased. The discontinuous transition between these two states is driven by enhanced intergully tunneling and exchange. We conclude by outlining specific experimental predictions for the existence of such a symmetry-breaking state.}, author = {Rao, Peng and Serbyn, Maksym}, issn = {2469-9950}, journal = {Physical Review B}, number = {24}, publisher = {American Physical Society}, title = {{Gully quantum Hall ferromagnetism in biased trilayer graphene}}, doi = {10.1103/physrevb.101.245411}, volume = {101}, year = {2020}, } @article{7985, abstract = {The goal of limiting global warming to 1.5 °C requires a drastic reduction in CO2 emissions across many sectors of the world economy. Batteries are vital to this endeavor, whether used in electric vehicles, to store renewable electricity, or in aviation. Present lithium-ion technologies are preparing the public for this inevitable change, but their maximum theoretical specific capacity presents a limitation. Their high cost is another concern for commercial viability. Metal–air batteries have the highest theoretical energy density of all possible secondary battery technologies and could yield step changes in energy storage, if their practical difficulties could be overcome. The scope of this review is to provide an objective, comprehensive, and authoritative assessment of the intensive work invested in nonaqueous rechargeable metal–air batteries over the past few years, which identified the key problems and guides directions to solve them. We focus primarily on the challenges and outlook for Li–O2 cells but include Na–O2, K–O2, and Mg–O2 cells for comparison. Our review highlights the interdisciplinary nature of this field that involves a combination of materials chemistry, electrochemistry, computation, microscopy, spectroscopy, and surface science. The mechanisms of O2 reduction and evolution are considered in the light of recent findings, along with developments in positive and negative electrodes, electrolytes, electrocatalysis on surfaces and in solution, and the degradative effect of singlet oxygen, which is typically formed in Li–O2 cells.}, author = {Kwak, WJ and Sharon, D and Xia, C and Kim, H and Johnson, LR and Bruce, PG and Nazar, LF and Sun, YK and Frimer, AA and Noked, M and Freunberger, Stefan Alexander and Aurbach, D}, issn = {0009-2665}, journal = {Chemical Reviews}, number = {14}, pages = {6626--6683}, publisher = {American Chemical Society}, title = {{Lithium-oxygen batteries and related systems: Potential, status, and future}}, doi = {10.1021/acs.chemrev.9b00609}, volume = {120}, year = {2020}, } @inproceedings{7989, abstract = {We prove general topological Radon-type theorems for sets in ℝ^d, smooth real manifolds or finite dimensional simplicial complexes. 
Combined with a recent result of Holmsen and Lee, it gives fractional Helly theorem, and consequently the existence of weak ε-nets as well as a (p,q)-theorem. More precisely: Let X be either ℝ^d, smooth real d-manifold, or a finite d-dimensional simplicial complex. Then if F is a finite, intersection-closed family of sets in X such that the ith reduced Betti number (with ℤ₂ coefficients) of any set in F is at most b for every non-negative integer i less or equal to k, then the Radon number of F is bounded in terms of b and X. Here k is the smallest integer larger or equal to d/2 - 1 if X = ℝ^d; k=d-1 if X is a smooth real d-manifold and not a surface, k=0 if X is a surface and k=d if X is a d-dimensional simplicial complex. Using the recent result of the author and Kalai, we manage to prove the following optimal bound on fractional Helly number for families of open sets in a surface: Let F be a finite family of open sets in a surface S such that the intersection of any subfamily of F is either empty, or path-connected. Then the fractional Helly number of F is at most three. This also settles a conjecture of Holmsen, Kim, and Lee about an existence of a (p,q)-theorem for open subsets of a surface.}, author = {Patakova, Zuzana}, booktitle = {36th International Symposium on Computational Geometry}, isbn = {9783959771436}, issn = {18688969}, location = {Zürich, Switzerland}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Bounding radon number via Betti numbers}}, doi = {10.4230/LIPIcs.SoCG.2020.61}, volume = {164}, year = {2020}, }
On Nonlinear Compensation Techniques for Coherent Fiber-Optical Channel
Licentiate thesis, 2014

Fiber-optical communication systems form the backbone of the internet, enabling global broadband data services. Over the past decades, the demand for high-speed communications has grown exponentially. One of the key techniques for the efficient use of the existing bandwidth is the use of higher-order modulation formats along with coherent detection. However, moving to higher-order constellations requires higher input power, which increases the nonlinear effects in the fiber. In long-haul optical communications (distances ranging from a hundred to a few thousand kilometers), amplification of the signal is typically needed because the fiber is lossy. The amplifiers add noise, and the signal and noise interact during propagation, leading to nonlinear signal-noise interactions that degrade the system performance. The propagation of light in an optical fiber is described by the nonlinear Schrödinger equation (NLSE). Because the NLSE has no general analytical solution, deriving the statistics of this nonlinear channel is in general cumbersome. The state-of-the-art receiver for combating the impairments in a fiber-optical link is digital backpropagation (DBP), which inverts the NLSE and is widely believed to be optimal. Owing to this presumed optimality, DBP has been used by system designers to determine optimal transmission parameters, and it provides a benchmark against which other detectors are compared. However, a number of open questions remain: How is DBP affected by noise? With respect to which criterion is DBP optimal? Can we estimate the optimal transmit power for a system when DBP is used?

In Paper A, starting from basic principles of Bayesian decision theory, we consider the well-known maximum a posteriori (MAP) decision rule, a natural optimality criterion that minimizes the error probability. As the closed-form expressions required for MAP detection are not tractable for coherent optical transmission, we employ the framework of factor graphs and the sum-product algorithm, which allows a numerical evaluation of the MAP detector. The resulting detector has similarities with DBP (which can be interpreted as a special case) and is termed stochastic digital backpropagation, as it accounts for noise as well as for nonlinear and dispersive effects. Through Monte Carlo simulations of a single-channel communication system, we see significant performance gains with respect to DBP for dispersion-managed links.

In Paper B, we investigate the performance limits of DBP for a non-dispersion-managed fiber-optical link. An analytical expression is derived that can be used to find the optimal transmit power for a system when DBP is used. We find that a first-order approximation is reasonably tight for different symbol rates and that it can be used to approximate the optimal transmit power in terms of minimizing the symbol error rate. Moreover, the first-order approximation shows that the variance of the nonlinear noise grows quadratically with the transmitted power, which limits the performance of a system with DBP.
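As an illustration of the DBP idea summarized above, the sketch below reverse-propagates a received single-channel, single-polarization waveform through a "virtual" fiber in which the signs of dispersion and Kerr nonlinearity are flipped and the loss is turned into gain, using the standard split-step Fourier method. This is a minimal sketch only, not the detector studied in the thesis: the function name, the lumped-amplifier link model, and all default parameters (span length, beta2, gamma, alpha, step counts) are illustrative assumptions, and the dispersion sign may need to be flipped depending on the Fourier-transform convention of the surrounding simulator.

```python
import numpy as np

def dbp_single_channel(rx, fs, n_spans, span_km=80.0, steps_per_span=20,
                       beta2=-21.7e-27, gamma=1.3e-3, alpha_db_km=0.2):
    """Reverse split-step Fourier ("digital backpropagation") for one channel.

    rx          : complex baseband samples at the receiver
    fs          : sampling rate [Hz]
    beta2       : GVD parameter [s^2/m]; gamma: Kerr coefficient [1/(W m)];
                  alpha_db_km: fiber loss [dB/km]. Defaults are typical
                  SSMF-like values used only for illustration.
    The link is assumed to consist of n_spans identical spans, each followed
    by an amplifier that exactly compensates the span loss.
    """
    x = np.asarray(rx, dtype=complex)
    omega = 2.0 * np.pi * np.fft.fftfreq(x.size, d=1.0 / fs)   # angular frequency grid
    alpha = alpha_db_km * np.log(10.0) / 10.0 / 1e3            # dB/km -> 1/m (power)
    dz = span_km * 1e3 / steps_per_span                        # step length [m]
    # Linear half-step of the *inverse* fiber: the dispersion sign is flipped.
    lin_half = np.exp(-1j * (beta2 / 2.0) * omega ** 2 * (dz / 2.0))
    for _ in range(n_spans):
        x *= 10.0 ** (-alpha_db_km * span_km / 20.0)           # undo the amplifier gain
        for _ in range(steps_per_span):
            x = np.fft.ifft(np.fft.fft(x) * lin_half)          # D/2
            x *= np.exp(-1j * gamma * np.abs(x) ** 2 * dz)     # N with flipped sign
            x *= np.exp(+0.5 * alpha * dz)                     # undo the fiber loss
            x = np.fft.ifft(np.fft.fft(x) * lin_half)          # D/2
    return x
```

A typical use would be y = dbp_single_channel(rx, fs, n_spans=10), followed by matched filtering, sampling and symbol decisions; the stochastic DBP detector of Paper A replaces this deterministic inversion with a noise-aware message-passing computation.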
Keywords: performance limits; nonlinear compensation; factor graphs; fiber-optical communications; near-MAP detector; digital backpropagation
Author: Naga Vishnukanth Irukulapati (Chalmers, Signals and Systems, Communication and Antenna Systems, Communication Systems)
Venue: EB, floor 4, Hörsalsvägen 11. Opponent: Associate Professor Darko Zibar
Areas of Advance: Information and Communication Technology
Subject Categories: Communication Systems; Signal Processing
C3SE (Chalmers Centre for Computational Science and Engineering)
Series: R - Department of Signals and Systems, Chalmers University of Technology: 1403-266X
In search of a Theory of Everything
A simple and unified explanation of the theories of modern physics and of the Universe

Gérard Gremaud, Honorary Professor at the Swiss Federal Institute of Technology of Lausanne, Switzerland

This text summarizes how a new approach to the Universe, recently presented in two books [1], leads to a simple, unified and coherent explanation of all the theories of modern physics and of the Universe. The basic concepts of this approach can be summarized as follows: (i) the support of the Universe is a form of "ether" consisting of a solid and massive lattice, with the simplest possible elasticity, in which matter is represented by the set of topological singularities of this lattice (loops of dislocations, disclinations and dispirations); and (ii) this lattice satisfies exclusively, in absolute space, the basic classical physical concepts of Newton's law and the two principles of thermodynamics. With these classical concepts alone, we show that it is possible to recover all the modern theories of physics, namely that the behaviors of this lattice (the Universe) and of its topological singularities (the Matter) satisfy electromagnetism, special relativity, general relativity, gravitation, quantum physics, cosmology and even the Standard Model of elementary particles.

The Quest for a Theory of Everything

Modern theories of physics are based on mathematical relationships postulated to explain observed phenomena, not inferred from an understandable first principle. Electromagnetism is based on Maxwell's equations, without simple explanations of what electric and magnetic fields really are, what electric charge is, and how electromagnetic waves can propagate in a vacuum. Special relativity is based on the Lorentz transformations, without any explanation of the root causes of why time dilates and lengths contract when an object moves at high speed, or relative to what the object is moving. General relativity is based on Einstein's famous equation relating the curvature of space-time to the mass and energy of matter in space, without any real explanation of why matter "curves" space-time, or even of what space-time exactly is. Quantum physics is based on Schrödinger's equation, without any explanation of the deep reason for this relationship, of what the wave function really is, or of what defines the boundary between the classical and the quantum behavior of an object (quantum decoherence). Cosmology is based on general relativity, and it tries to describe the observed behavior of the universe by injecting concepts, such as dark matter and dark energy, which for the moment have no underlying physical explanation and which are introduced to make the theory fit the observations. The Standard Model of elementary particles is constructed from numerous experimental observations, but without any explanation of what an elementary particle really is, why it has mass and electric charge, what its spin really is, what differentiates leptons from quarks, why there are three families of leptons and quarks, what the weak and strong forces really are, and what explains the confinement and asymptotic behavior of the strong force. In addition, these various theories do not have a common origin, and it seems very difficult, if not impossible, to unify them. The search for a theory of everything capable of explaining the nature of space-time, what matter is and how matter interacts is in fact one of the fundamental problems of modern physics.
Since the 19th century, physicists have sought to develop unified field theories, which should consist of a coherent theoretical framework capable of taking into account the different fundamental forces of nature. Recent attempts to search for a unified theory include the following: the “Great Unification” which brings together the electromagnetic force, the weak interaction force and the strong interaction force, the “Quantum Gravity” and the «Looped Quantum Gravitation» which seek to describe the quantum properties of gravity, the “Supersymmetry” which proposes an extension of space-time symmetry linking the two classes of elementary particles, bosons and fermions, the “String and Superstring Theories”, which are theoretical structures integrating gravity, in which point particles are replaced by one-dimensional strings whose quantum states describe all types of observed elementary particles, and finally the “M-Theory”, which is supposed to unify five different versions of string theories, with the surprising property that extra-dimensions are necessary to ensure its coherence. However, none of these approaches is currently able to consistently explain at the same time electromagnetism, relativity, gravitation, quantum physics and observed elementary particles. Many physicists believe that the 11-dimensional M-Theory is the Theory of Everything. However, there is no broad consensus on this and there is currently no candidate theory able to calculate known experimental quantities such as for example the mass of the particles. Particle physicists hope that future results from current experiments – the search for new particles in large accelerators and the search for dark matter – will still be needed to define a Theory of Everything. But this research seems to have really stagnated for about 40 years. Since the 1980s, thousands of theoretical physicists have published thousands of generally accepted scientific papers in peer-reviewed journals, even though these papers have contributed absolutely nothing new to the explanation of the Universe and solve none of the current mysteries of physics. An enormous amount of energy has been put into developing these theories, in a race to publish increasingly esoteric articles, in search of a form of “mathematical beauty” that is ever more distant from the “physical reality” of our world. Moreover, enormous sums have been invested in this research, to the detriment of fundamental research in other areas of physics, in the form of the construction of ever more complex and expensive machines. And, to the great despair of experimental physicists, the results obtained have brought practically nothing new to high energy physics, contrary to the “visionary” and optimistic predictions of the theorists. Many physicists now have serious doubts about the relevance of these theories of unification. On this subject, I strongly advise readers to consult, among others, the books by Unzicker and Jones [2], Smolin [3], Woit [4] and Hossenfelder [5]. What if the Universe was a lattice? In the approach we will present here [1,6], the problem of the unification of physical theories is treated in a radically different way. 
Instead of trying to build a unified theory by tinkering with an assemblage of existing theories, making them more and more complex and esoteric, even adding strange symmetries and extra dimensions for their “mathematical beauty», we start exclusively from the most fundamental classical concepts of physics, which are Newton’s equation and the first two principles of thermodynamics. And using these fundamental principles, and developing an original geometry based on Euler coordinates to describe the topology of the Universe, we come, by a purely logical and deductive path, to suggest that the Universe could be a finite, elastic and massive solid, a “cosmological lattice”, which would move and deform in an infinite absolute vacuum. In this a priori strange concept, it is assumed that the Universe is a lattice of simple cubic crystalline structure, whose basic cells have a mass of inertia that satisfies Newtonian dynamics in absolute space, and whose isotropic elasticity is controlled by the existence of an internal energy of deformation as simple as possible. By introducing into infinite absolute space a purely imaginary observer called the Great Observer GO, and by equipping this observer with a reference system composed of an orthonormal absolute Euclidean reference frame to locate the points of the solid lattice and an absolute clock to measure the temporal evolution of the solid lattice in absolute space, a very detailed description of the spatio-temporal evolution of the lattice can be elaborated on the basis of the Euler coordinate system [7]. In this coordinate system of the Great Observer GO, one can then describe in a very detailed way the distortions (rotation and deformation) and the contortions (bending and torsion) of the lattice. And one can also introduce topological singularities (dislocations, disclinations and dispirations) in this lattice in the form of closed loops [8], as constitutive elements of Ordinary Matter. If this original concept is developed in detail using an approach similar to the one used in solid state physics, it can be demonstrated by a purely logical and deductive mathematical path that the behaviors of this lattice and its topological singularities satisfy “all” the physics currently known, by spontaneously bringing out very strong and often perfect analogies with all the current major physical theories of the macrocosm and the microcosm, such as the Maxwell equations [9], special relativity, Newtonian gravitation, general relativity, modern cosmology and quantum physics. But this approach does not only find analogies with other theories of physics, it also offers original, new and simple explanations to many physical phenomena that are still quite obscure and poorly understood at present by modern physics, such as the deep meaning and physical interpretation of cosmological expansion, electromagnetism, special relativity, general relativity, quantum physics and particle spin. It also offers new and simple explanations of quantum decoherence (the boundary between classical and quantum behavior of an object), dark energy, dark matter, black holes, and many other phenomena. 
The detailed development of this approach also leads to some very innovative ideas and predictions, among which the most important is the appearance, alongside the electrical charge, of a new charge characterizing the properties of topological singularities, the curvature charge, which is an inevitable consequence of the treatment of a solid lattice and its topological singularities in Euler coordinates. This concept of curvature charge has very important consequences and provides new explanations for many obscure points in modern physics, such as weak force, matter-antimatter asymmetry, galaxy formation, segregation between matter and antimatter within galaxies, the formation of gigantic black holes in the heart of galaxies, the apparent disappearance of antimatter in the Universe, the formation of neutron stars, the concept of dark matter, the bosonic or fermionic nature of particles, etc. Finally, the study of lattices with special symmetries called axial symmetries, symbolically represented by “colored” 3D cubic lattices, allows us to identify an astonishing lattice structure whose looped topological singularities coincide perfectly with the complex zoology of all the elementary particles of the Standard Model. It also allows us to find simple physical explanations for the weak and strong forces of the Standard Model, including the phenomena of confinement and asymptotic freedom of the strong force. It is this concept of “cosmological lattice” that we are going to detail in the rest of this paper, and we are going to show how this concept brings a simple and unified explanation of modern theories of physics and the Universe. The formulation of the deformation of a solid lattice in Euler coordinates When one wants to study the deformation of solid lattices, it is common practice to describe the evolution of their deformation using a Lagrange coordinate system and to use various differential geometries to describe the topological defects they contain. The use of Lagrange coordinates to describe deformable solids presents a number of inherent difficulties. From a mathematical point of view, the tensors describing the deformations of a continuous solid in Lagrange coordinates are always of a higher order than one in the spatial derivatives of the components of the displacement field, which leads to a very complicated mathematical formalism when a solid presents strong distortions (deformations and rotations). To these difficulties of a mathematical order are added difficulties of a physical order when it is a question of introducing certain known properties of solids. Indeed, the Lagrange coordinate system becomes practically unusable, for example when it is necessary to describe the temporal evolution of the microscopic structure of a solid lattice (phase transitions) and its structural defects (point defects, dislocations, disclinations, joints, etc.), or if it is necessary to introduce certain physical properties of the medium (thermal, electrical, magnetic, chemical, etc.) resulting in the existence in real space of scalar, vector or tensor fields. The use of differential geometries to introduce topological defects such as dislocations in deformable continuous media was initiated by the work of Nye (1953) [10], who for the first time established the relationship between the dislocation density tensor and the curvature of the lattice. 
On the other hand, Kondo (1952) [11] and Bilby (1954) [12] have independently shown that dislocations can be identified with a crystalline version of Cartan’s (1922) [13] concept of continuum torsion. This approach was formalized in great detail by Kröner (1960) [14]. However, the use of differential geometries to describe deformable media very quickly comes up against difficulties quite similar to those of the Lagrange coordinate system. A first difficulty is linked to the fact that the mathematical formalism is very complex, since it is similar to the formalism of general relativity, which consequently makes it very difficult to manipulate and interpret the general field equations thus obtained. A second difficulty appears with differential geometries when it is a question of introducing into the environment topological defects of other types than dislocations. For example, Kröner (1980) [15] proposed that the existence of extrinsic point defects, which can be considered as extra-matter, could be identified with the presence of matter in the universe and therefore introduced in the form of Einstein’s equations, which would lead to a purely Riemannian differential geometry in the absence of dislocations. He also proposed that intrinsic point defects (vacancies, interstitials) could be approached by a non-metric part of an affine connection. Finally, he also considered that the introduction of other topological defects such as disclinations could call upon even more complex higher order geometries, such as Finsler or Kawaguchi geometries. In fact, the introduction of differential geometries generally gives rise to a very heavy mathematical artillery (metric tensor and Christoffel symbols) in order to describe spatio-temporal evolution in infinitesimal local reference points, as shown for example by Zorawski’s mathematical theory of dislocations (1967) [16]. Given the complexity of the calculations thus obtained, whether in the case of the Lagrange coordinate system or in that of differential geometries, it had long seemed desirable to me to try to develop a much simpler approach to deformable solids, but nevertheless just as rigorous, which was finally published in 2013 and 2016 in two first books [7] entitled «Eulerian Theory of newtonian deformable lattices – dislocation and disclination charges in solids». These books describe how the deformation of a lattice can be characterized by distortions and contortions. For this purpose, a vector representation of tensors is used, which has undeniable advantages over the purely tensor representation, if only because of the possibility of using the powerful formalism of vector analysis, which makes it possible to easily obtain the geometrocompatibility equations, which ensure the solidity of the lattice, and the geometrokinetic equations, which make it possible to describe the kinetics of the deformation. Then, physics is introduced in this topological context, namely Newtonian dynamics and Eulerian thermokinetics. With all these ingredients, it becomes possible to describe the particular behaviors of solid lattices, such as elasticity, anelasticity, plasticity and self-diffusion, and to write the complete set of evolution equations of a lattice in the Euler coordinate system. On the basis of this Eulerian description of solids, it is possible to describe the various phenomenologies observed on usual solids. 
Among others, we can find out how to obtain the functions and equations of state of an isotropic solid, what are the elastic and thermal behaviors that can appear, how waves propagate and why there are thermoelastic relaxations, what are the phenomena of mass transport and why inertial relaxations can appear, what are the usual phenomena of anelasticity and plasticity, and finally how it can appear structural transitions of 2nd and 1st species in a solid lattice. The concepts of dislocation and disclination charges in lattices The description of defects (topological singularities) that can appear within a solid, such as dislocations and disclinations, is a field of physics, initiated mainly by the idea of the macroscopic defects of Volterra (1907) [17], which has undergone a dazzling development during its century of very rich history, as illustrated very well by Hirth (1985) [18]. It was in 1934 that the theory of lattice dislocations really began, following the papers by Orowan [19], Polanyi [20] and Taylor [21], who independently described the edge dislocation. Then it was in 1939 that Burgers [22] described screw and mixed dislocations. And it is finally in 1956 that the first experimental observations of dislocations are reported, simultaneously by Hirsch, Horne and Whelan [23] and by Bollmann [24], thanks to the electron microscope. As for disclinations, it was in 1904 that Lehmann [25] observed them for the first time in molecular crystals, and it was in 1922 that Friedel [26] gave a first physical description of them. Then, from the middle of the twentieth century, the physics of defects in solids took a considerable extent. In the Eulerian theory introduced here [7], dislocations and disclinations are approached by intuitively introducing the concept of dislocation charges, using the famous “pipes” of Volterra (1907) [26] and an analogy with electric charges. In Euler coordinates, the notion of charge density then appears in a geometrocompatibility equation of the solid, while the notion of charge flux is introduced in a geometrokinetic equation of the solid. The rigorous formulation of the concept of charges in solids makes the essential originality of this approach of topological singularities. The thorough development of this concept reveals first-order tensor charges, dislocation charges, associated with plastic distortions (plastic deformations and rotations) of the solid, and second-order tensor charges, disclination charges, associated with plastic contortions (plastic bending and torsion) of the solid. It appears that these topological singularities are quantified in a solid lattice and that they can be topologically localized only in strings (thin tubes), which can be modeled as one-dimensional lines of dislocation or disclination, or in membranes (thin plates), which can be modeled as two-dimensional joints of bending, torsion or accommodation. The concept of dislocation and disclination charges allows us to rigorously retrieve the main results obtained by the classical dislocation theory. However, it allows us to define a tensor of linear dislocation charge, from which we derive a scalar of linear rotation charge, which is associated with the screw part of the dislocation, and a vector of linear bending charge, which is associated with the edge part of the dislocation. 
For a given dislocation, the scalar charge of rotation and the vectorial charge of bending are then perfectly defined without having to use a convention to define them, contrary to the classical definition of a dislocation by its Burgers vector. On the other hand, the description of dislocations in the Euler coordinate system by the concept of dislocation charges allows to treat in an exact way the evolution of charges and deformations during very strong volume contractions or expansions of a solid medium. By analytically introducing the concepts of density and flux of dislocation and disclination charges in lattices, it is possible to describe in detail the macroscopic and microscopic topological singularities of the lattice that can be associated with dislocation and disclination charges, and to describe the movement of dislocation charges within the lattice by introducing the fluxes of dislocation charges and the Orowan relationships. The Peach and Koehler force acting on dislocations is also deduced and a new complete set of lattice evolution equations in the Euler coordinate system can be established, this time taking into account the existence of topological singularities within the lattice. The concept of charges within the Eulerian solid lattice allows the development of a very detailed dislocation theory in common solids. It is also possible to calculate the fields and energies of screw and edge dislocations in an isotropic solid lattice, as well as the interactions that can occur between dislocations. One can also develop a model of dislocation string, which is the fundamental model to explain most of the macroscopic behaviors of the anelasticity and plasticity of crystalline solids The premises of a possibility to describe the Universe by a “cosmological lattice” On the basis of the Eulerian description of solid lattices, we show that it is possible to calculate the rest energy   E0  of dislocations, which corresponds to the elastic energy stored in the lattice by their presence, and their kinetic energy   Ecin , which corresponds to the kinetic energy of the particles of the lattice mobilized by their motion, which then allows to attribute to them a virtual mass of inertia  M0  which surprisingly satisfies relations similar to the Einstein’s famous equation  E0 = M0c2  of special relativity, but which is obtained here in a quite classical way, i.e. without calling upon a principle of relativity. Moreover, at high speed, it is shown that dislocation dynamics also satisfies the principles of special relativity and the Lorentz transformations. It can also be shown that, in the case of isotropic solid media having a homogeneous and constant volume expansion, thus deforming only by shear, a perfect and complete analogy with Maxwell’s equations of electromagnetism appears, thanks to the possible replacement of the shear tensor by the rotation vector. The existence of an analogy between electromagnetism and the theory of incompressible continuous media was already perceived a long time ago and developed by many authors, as Whittaker (1951) [27] has shown. However, the analogy becomes much more complete by using the Euler coordinate approach [7], because it is not limited to an analogy with one of the two pairs of Maxwell equations in vacuum, but is generalized to the two pairs of Maxwell equations as well as to the various phenomenologies of dielectric polarization and magnetization of matter, and to the notions of electric charges and currents. 
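The "E0 = M0c2"-type relation and the relativistic dynamics of dislocations mentioned above can be made concrete with textbook results of conventional isotropic dislocation theory, which are not specific to the cosmological-lattice model and are quoted here only for orientation. For a straight screw dislocation of length L, with shear modulus mu, mass density rho, Burgers vector b, outer cut-off R and core radius r0,

$$
\frac{E_0}{L} \simeq \frac{\mu b^2}{4\pi}\,\ln\frac{R}{r_0},
\qquad
M_0 := \frac{E_0}{c_t^{\,2}} \simeq \frac{\rho b^2 L}{4\pi}\,\ln\frac{R}{r_0},
\qquad
E(v) = \frac{E_0}{\sqrt{1-v^2/c_t^{\,2}}},
\qquad
c_t = \sqrt{\mu/\rho},
$$

so the stored elastic energy defines an effective mass through E0 = M0 ct^2, with the transverse (shear) wave speed ct playing the role of the speed of light, and the energy of a uniformly gliding screw dislocation diverges as v approaches ct, which therefore acts as a limiting speed (Frank, Eshelby). The Maxwell-type analogy for the sheared lattice described just above is, likewise, a purely classical property of the elastic medium.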
This analogy makes it possible to consider the cosmological lattice as a physical support for electromagnetic fields, and to give physical interpretations to the various quantities of electromagnetism. For example, the local rotation field of the lattice corresponds to the electric induction field of electromagnetism and the velocity field of the lattice to the magnetic field. The analogy with Maxwell’s equations is very astonishing by the simple fact that it is initially postulated that the solid lattice satisfies a very simple dynamic, purely Newtonian, in the absolute reference frame of the external observer’s laboratory, which is equipped with orthonormal rules and a clock giving a universal time, whereas the topological singularities within the solid lattice, namely dislocations and disclinations with their respective charges, responsible for the distortions and plastic contortions of the solid, are subject to relativistic dynamics within the solid, precisely due to the set of Maxwellian equations governing the shear forces in the medium. From this point of view, the relativistic dynamics of topological singularities is nothing else than a consequence of the perfectly classical Newtonian dynamics of the elastic solid lattice in the frame of reference of the external observer. Finally, it also appears in Euler coordinates that at long distance from a localized cluster of topological singularities, formed for example by one or more dislocation loops or one or more disclination loops, the tensorial aspect of the distortion fields generated at short distance by this cluster may be neglected at long distance, so that lattice perturbations can be perfectly described at a large distance by the only two vectorial fields of torsional rotation and bending curvature associated with the only two scalar charges of the cluster, its scalar rotation charge  Qλ  and its scalar curvature charge   Qθ. The rotation charge then becomes the perfect analogue of the electric charge in Maxwell’s equations, whereas the curvature charge has some analogy with a gravitational mass in gravitation theory. The existence of analogies between the mechanics of continuous media and the physics of defects and the theories of electromagnetism, special relativity and gravitation had already been the subject of numerous publications, the most famous of which are certainly those of Kröner [4,5]. Excellent reviews in this field of physics have also been published, notably by Whittaker (1951) [20] and by Unzicker (2000) [28]. But none of these publications had gone as far in highlighting these analogies as the approach presented in my first books [7]. The numerous analogies that appeared in the first books [7] between the Eulerian theory of deformable media and the theories of electromagnetism, gravitation, special relativity, general relativity and even the Standard Model of elementary particles, reinforced by the absence of charges similar to magnetic monopoles, by a possible solution to the famous paradox of the electrical energy of an electron, and by the existence of a weak asymmetry between lacunar and interstitial charges of curvature, were sufficiently surprising and remarkable not to fail to titillate any open and somewhat curious scientific mind. But it is clear that these analogies were by no means perfect. 
It was therefore very tempting to analyze these analogies in greater depth and to try to find out how to perfect them, and this is what has led to the last books [1], which are devoted to the deepening, improvement and understanding of these analogies, and whose main steps are illustrated in the following. The “cosmological lattice” and its Newton’s equation By introducing quite particular elastic properties of volume expansion, shear and especially rotation, expressed in free energy per unit volume of lattice, we obtain an imaginary lattice with a very particular Newton’s equation, in which a new term of force appears, which is directly related to the energy of distortion due to the singularities contained in the lattice (Matter), and which is called to play a fundamental role in the analogies with Gravitation and with Quantum Physics. Wave propagation in this cosmological lattice has some interesting peculiarities: the propagation of linearly polarized transverse waves is always associated with longitudinal wavelets, and pure transverse waves can only be propagated by circularly polarized waves (which has a direct link with photons). On the other hand, the propagation of longitudinal waves can disappear (as in general relativity), but in favor of the appearance of localized longitudinal vibration modes (which has a direct link with quantum physics) in the case where the volume expansion of the medium is below a certain critical value. Calculating the curvature of wave rays in the vicinity of a singularity in the volume expansion of the lattice allows us to find conditions that the expansion field of a singularity must satisfy in order for a trap that captures transverse waves to appear, in other words, a “black hole”. Such a cosmological lattice, finite in absolute space, can exhibit dynamic volume expansion and/or contraction provided that it contains a certain amount of kinetic energy of expansion, a phenomenon quite similar to the cosmological expansion of the Universe. Depending on the signs and values of the elastic moduli, several types of cosmological behaviors of the lattice are possible, some of which present the phenomena of big-bang, rapid inflation and acceleration of the expansion speed, and which can be followed in some cases by a re-contraction of the lattice leading to big-crunch and big-bounce phenomena. It is the elastic and kinetic energies of expansion contained in the lattice which are responsible for these phenomena, and in particular for the increase in the speed of expansion, a phenomenon which is observed in the current Universe by astrophysicists and which is attributed by them to a hypothetical “dark energy”. Maxwell’s equations and special relativity Newton’s equation of the cosmological lattice can be separated into a rotational part and a divergent part. The rotational part shows a set of equations for the macroscopic local rotation field which is perfectly identical to the Maxwell’s equations of electromagnetism. Newton’s equation can also be separated in a different way into two partial Newton’s equations which allow on the one hand to calculate the elastic distortion fields associated with the topological singularities, and on the other hand to calculate the volume expansion perturbations associated with the elastic distortion energies of the topological singularities. Using Newton’s first partial equation, one can then tackle the calculations of the elastic distortion fields and energies of topological singularities within a cosmological lattice. 
It is thus shown that it is possible to attribute in a quite classical way an inertial mass to the topological singularities, which always satisfies the Einstein’s famous formula  E0 = M0c2 . The topological singularities also satisfy a typically relativistic dynamic when their velocity becomes close to the velocity of transverse waves. The cosmological lattice behaves in fact like an aether, in which the topological singularities satisfy exactly the same properties as those of the Special Relativity concerning the contraction of rules, the dilation of time, the Michelson-Morley experiment and the Doppler-Fizeau effect. The existence of the cosmological lattice then makes it possible to explain very simply certain somewhat obscure aspects of Special Relativity, notably by definitively giving a simple and convincing explanation of the famous twins paradox. Gravitation, General Relativity, Weak Interaction and Cosmology Disturbances in the scalar expansion field associated with a localized topological singularity are in fact an expression of the existence of a static external “gravitational field” at a long distance from this singularity, as long as the latter has an energy density or a rotation charge density below a certain critical value. Thanks to Newton’s second partial equation, one can calculate the external fields of expansion perturbations, i.e. the external gravitational fields associated with localized macroscopic topological singularities, either of a given elastic energy of distortion, of a given curvature charge, or of a given rotation charge. We can also introduce macroscopic lacunar or interstitial singularities, which may appear within the lattice in the form of a hole in the lattice or an interstitial embedding of a lattice piece, which will later prove to be ideal candidates to explain respectively the black holes and the neutron stars of the Universe. By applying the calculations of the external gravitational field of topological singularities to the microscopic singularities in the form of screw disclination loops, edge dislocation loops or mixed dislocation loops, we deduce all the properties of these loops. The notion of “mass of curvature” of edge dislocation loops then appears, which corresponds to the equivalent mass associated with the gravitational effects of the curvature charge of these loops, and which can be positive (in the case of loops of a vacancy nature) or negative (in the case of loops of an interstitial nature). This concept, which does not appear at all in modern theories of physics, such as general relativity, quantum physics or the Standard Model, implies a very slight deviation from the equivalence principle of general relativity: the inertial mass and the gravitational mass of a particle are very slightly different. If the inertial mass of a particle and its antiparticle are exactly the same, the gravitational mass of an antiparticle is very slightly higher than that of its antiparticle because of their opposite sign curvature charge. Even in the particular case of the neutrino, the effect of the curvature charge outweighs the inertial mass, and the gravitational mass of the neutrino becomes negative (antigravity), whereas the gravitational mass of the anti-neutrino is positive and the inertial mass of the neutrino and the anti-neutrino are identical, very small and always positive. 
In our approach, it is precisely this mass of curvature that will be responsible for the appearance of a weak asymmetry between particles (hypothetically containing edge dislocation loops of an interstitial nature) and anti-particles (hypothetically containing edge dislocation loops of a vacancy nature), and which will play a major role in the cosmological evolution of the topological singularities. By considering the gravitational interactions existing between topological singularities composed essentially of screw disclination loops, we can deduce the behaviors of the local rules and clocks of local observers according to the local expansion field that reigns within the cosmological lattice. We then show that for any local observer, and whatever the value of the local volume expansion of the lattice, Maxwell’s equations always remain perfectly invariant, so that, for this local observer, the transverse wave velocity is an immutable constant, whereas this velocity depends very strongly on the local volume expansion if it is measured by the observer outside the lattice. The gravitational interactions thus obtained present very strong analogies with Newton’s Gravitation and with Einstein’s General Relativity. For example, there is a perfect similarity with Schwarzschild’s metric at great distance from a massive object and with the curvature of wave rays by this massive object. But our Eulerian theory of the cosmological lattice also brings new elements to the theory of Gravitation, in particular very short-range modifications of Schwarzschild’s metric and a better understanding of the critical radii associated with black holes: the radii of the sphere of perturbations and of the point of no return are both similar and equal to the Schwarzschild radius   RSchwarschild = 2GM/c2  , and the limit radius for which the expansion of the observer’s time would tend towards infinity becomes zero, so that our theory is not limited by infinite quantities for the description of a black hole beyond the Schwarzschild sphere. It is possible to draw a complete picture of all the gravitational interactions existing between the various topological singularities of a lattice. If we then consider topological singularities formed from the coupling of a screw disclination loop with an edge dislocation loop, which are called dispiration loops, an interaction force similar to a capture potential appears, with a very low range, which allows interactions between loops that are perfectly analogous to the weak interactions between elementary particles of the Standard Model. On the basis of the cosmological expansion-contraction behaviors of the lattice and the gravitational interactions between topological singularities via the local volume expansion of the lattice, we can then imagine a very plausible scenario of cosmological evolution of the topological singularities leading to the current structure of our Universe. This scenario is entirely based on the fact that, in the case of the simplest edge dislocation loops, analogously similar to neutrinos, the mass of curvature dominates the mass of inertia, so that neutrinos should be the only gravitationally repulsive particles, while anti-neutrinos would be gravitationally attractive. This assertion then allows us to give a simple explanation to several facts that are still very poorly understood in the evolution of matter in the Universe. 
The formation of galaxies could correspond to a phenomenon of precipitation of matter and antimatter within a sea of repulsive neutrinos. The disappearance of antimatter could correspond to a phenomenon of segregation of particles and antiparticles within galaxies, due to their slight difference in gravitational properties, a segregation during which antiparticles would gather in the center of galaxies to finally form gigantic black holes in the heart of galaxies. Even the famous “dark matter” that astrophysicists had to invent to explain the abnormal gravitational behavior of the periphery of galaxies would then be very well explained in our theory. Indeed, the “dark matter” would in fact be the sea of repulsive neutrinos in which the galaxies would have precipitated and bathed, which, because of the compressive force it exerts on the periphery of the galaxies, would explain the abnormal gravitational behavior of the latter. Finally, we can also easily deal with the Hubble constant, the redshift of galaxies and the evolution of the background cosmic radiation in the framework of our cosmological lattice theory. Quantum physics, particle spin and photons In the case where the energy density or the rotation charge density of a topological singularity becomes greater than a certain critical value, the expansion field associated with this localized topological singularity can no longer exist as a static gravitational expansion field, but has to appear as a dynamic expansion perturbation, which will cause quantum behaviors of this singularity to appear. The critical value of the energy density or the rotation charge density then becomes an extremely important quantity since it actually corresponds to a quantitative value that defines the famous quantum decoherence limit, i.e. the limit of passage between a classical and a quantum behavior of a topological singularity. Using Newton’s second partial equation, in the dynamic case, it is shown that there are also dynamic longitudinal gravitational fluctuations associated with moving topological singularities within the lattice. By introducing operators similar to those of quantum physics, it is then shown that Newton’s second partial equation allows us to deduce the gravitational fluctuations associated with a topological singularity moving almost freely at relativistic velocities within the lattice. In the case of non-relativistic topological singularities linked by a potential, it is shown that the second partial Newton equation applied to the longitudinal gravitational fluctuations associated with these singularities leads very exactly to the Schrödinger equation of quantum physics, which makes it possible to give for the first time a simple and realistic physical interpretation of the Schrödinger equation and the quantum wave function: the quantum wave function deduced from the Schrödinger equation represents the amplitude and phase of the longitudinal gravitational vibrations associated with a topological singularity moving in the cosmological lattice. All the consequences of Schrödinger’s equation then appear with a simple physical explanation, such as the standing wave equation of a topological singularity placed in a static potential, Heisenberg’s uncertainty principle and the probabilistic interpretation of the square of the wave function. 
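For reference, the equations being invoked here are the standard ones of quantum mechanics; the claim of the text is that they re-emerge from the lattice's longitudinal expansion fluctuations, not that they are modified:

$$
i\hbar\,\frac{\partial \psi}{\partial t} \;=\; -\frac{\hbar^2}{2m}\,\nabla^2\psi + V(\mathbf{r})\,\psi,
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}.
$$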
In the case where the gravitational fluctuations of expansion of two topological singularities are coupled, explanations of the concepts of bosons and fermions, as well as Pauli’s exclusion principle also appear quite simply. At the very heart of a topological singularity loop, it is shown that there can be no static solutions to Newton’s second partial equation for longitudinal gravitational fluctuations. It therefore becomes necessary to find a dynamic solution to this equation, and the simplest dynamic solution that can be envisaged is that the loop actually rotates on itself. By solving this rotational motion with Newton’s second partial equation, which in this dynamic case is nothing other than the Schrödinger’s equation, we obtain the quantized solution of the internal gravitational fluctuations of the loop, which is in fact the spin of the loop, which can take several different values (1/2, 1, 3/2, etc.) and which is perfectly similar to the spin of the particles of the Standard Model. If the loop is composed of a screw disclination loop, a magnetic moment of the loop also appears, proportional to the famous Bohr magneton. The famous argument of the pioneers of quantum physics according to which the spin can in no case be a real rotation of the particle on itself because of an equatorial speed of rotation higher than the speed of light is swept away in our theory by the fact that the static expansion in the vicinity of the core of the loop is very high, which leads to speeds of light in the vicinity of the core of the loop much higher than the equatorial speed of rotation of the loop. We can also show how to construct a bundle of pure circularly polarized transverse waves and why a quantification of the energy of these fluctuations appears. These wave packets form quasi-particles which have properties perfectly similar to the quantum properties of photons: circular polarization, zero mass, non-zero momentum, non-locality, wave-corpuscle duality, entanglement and decoherence phenomena. Standard model of elementary particles and strong force One can also search for ingredients that should be added to the cosmological lattice to find an analogy with the various particles of the Standard Model. We show that by introducing into a cubic lattice families of planes (imaginary “colored” in red, green and blue) which satisfy some simple rules concerning their arrangement and rotation, we find topological loops perfectly analogous to all particles, leptons and quarks, of the first family of elementary particles of the Standard Model, as well as topological loops analogous to the intermediate bosons of the Standard Model. It also spontaneously appears a strong force, in the sense of a force that exhibits asymptotic behavior, between quark-like loops, which are then topologically forced by the formation of an energetic disorientation joint to group together in triplets to form baryon-like loop assemblies, or in doublets to form meson-like loop-anti-loop assemblies. In addition, there are also simple “two-color” topological loops that perfectly match the gluons associated with the strong force in the Standard Model. To explain then the existence of three families of quarks and leptons in the Standard Model, we show that the introduction of a more complex topological structure of edge loops, based on the assembly of a pair of wedge disclination loops, allows to explain satisfactorily the existence of three families of particles of very different energies. 
Quantum fluctuations of the vacuum, cosmological theory of multiverses and gravitons It is still possible to deduce some very hypothetical consequences of the perfect cosmological lattice associated with pure gravitational fluctuations (fluctuations of the lattice expansion scalar). One can imagine the existence of pure longitudinal fluctuations within the cosmological lattice that can be treated either as random gravitational fluctuations that could have an analogy with the quantum fluctuations of the vacuum, or as stable gravitational fluctuations, which could lead at the macroscopic scale to a cosmological theory of Multiverses, and at the microscopic scale to the existence of a form of stable quasi-particles that could be called gravitons, by analogy with photons, but which in fact have nothing in common with the gravitons usually postulated in the framework of General Relativity. About the epistemology of our lattice approach of the Universe Our lattice approach to the Universe is based on the two basic concepts mentioned in the summary, which are disarmingly simple. And by judiciously applying these two perfectly classical initial concepts (massive and elastic solid lattice, Newton’s law, principles of thermodynamics), it is really very surprising to note that the behaviors of this lattice (the Universe) and its topological singularities (the Matter) satisfy all modern theories of physics, even though we postulated that the lattice in absolute space rigorously follows the perfectly classical laws of Newton and thermodynamics. But in this approach of the Universe, nothing comes yet to give a definitive explanation to the existence of the Universe, to the root cause of the big bang, and to the actual composition of the solid, massive and elastic cosmological lattice. These points remain, at least for the moment, within the scope of individual philosophy or beliefs. But, from an epistemological point of view, this approach shows that it is perfectly possible to find a very simple framework to understand, explain and unify the different theories of modern physics, a framework in which there would no longer be many mysterious phenomena other than the “raison d’être” of the Universe. [1] G. Gremaud, “Universe and Matter conjectured as a 3-dimensional Lattice with Topological Singularities”, second version revised and corrected of the book ISBN 978-2-8399-1934-0, 2020, 654 pages, free download of the book G. Gremaud, “Univers et Matière conjecturés comme un Réseau Tridimensionnel avec des Singularités Topologiques”, 2ème version revue et corrigée du livre ISBN 978-2-8399-1940-1, 2020, 668 pages, téléchargement gratuit du livre G. Gremaud, “Et si l’Univers était un réseau et que nous en étions ses singularités topologiques?”, 2ème version revue et corrigée du livre ISBN 978-613-9-56428-6, mai 2020, 324 pages, téléchargement gratuit du livre G. 
Gremaud, «What if the Universe was a lattice and we were its topological singularities?», second version revised and corrected of the english translation of book ISBN 978-613-9-56428-6, May 2020, 316 pages, free download of the book [2] Alexander Unzicker and Sheilla Jones, «Bankrupting Physics», Palgrave McMillan, New York, 2013, ISBN 978-1-137-27823-4 Alexander Unzicker, «The Higgs Fake»,, 2013, ISBN 978-1492176244 [3] Lee Smolin, «The trouble with Physics», Penguin Books 2008, London, ISBN 978-2-10-079553-6 Lee Smolin, «La révolution inachevée d’Einstein, au-delà du quantique», Dunod 2019, ISBN 978-2-10-079553-6 Lee Smolin, «Rien ne va plus en physique, L’échec de la théorie des cordes», Dunod 2007, ISBN 978-2-7578-1278-5 [4] Peter Woit, «Not Even Wrong, the failure of String Theory and the continuing challenge to unify the laws of physics», Vintage Books 2007, ISBN 9780099488644 [5] Sabine Hossenfelder, «Lost in Maths», Les Belles Lettres 2019, ISBN978-2-251-44931-9 [6] G. Gremaud, “Universe and Matter conjectured as a 3-dimensional Lattice with Topological Singularities”, July 2016,  Journal of Modern Physics, 7, 1389-1399 ,  DOI 10.4236/jmp.2016.712126, download G. Gremaud, «In Search of a Theory of Everything: What if the Universe was an elastic and massive lattice and we were its topological singularities?», May 2020, Journal of Advances in Physics, 17, 282-285,, download [7] G. Gremaud, “Théorie eulérienne des milieux déformables – charges de dislocation et désinclinaison dans les solides”, Presses polytechniques et universitaires romandes (PPUR), Lausanne (Switzerland), 2013, 751 pages, ISBN 978-2-88074-964-4 G. Gremaud, “Eulerian theory of newtonian deformable lattices – dislocation and disclination charges in solids”, Amazon, Charleston (USA) 2016, 312 pages, ISBN 978-2-8399-1943-2 [8] G. Gremaud, “On local space-time of loop topological defects in a newtonian lattice”, July 2014, arXiv:1407.1227, download [9] G. Gremaud, “Maxwell’s equations as a special case of deformation of a solid lattice in Euler’s coordinates”, September 2016,  arXiv :1610.00753, download [10] J.F. Nye, Acta Metall.,vol. 1, p.153, 1953 [11] K. Kondo, RAAG Memoirs of the unifying study of the basic problems in physics and engeneering science by means of geometry, volume 1. Gakujutsu Bunken Fukyu- Kay, Tokyo, 1952 [12] B. A. Bilby , R. Bullough and E. Smith, «Continous distributions of dislocations: a new application of the methods of non-riemannian geometry», Proc. Roy. Soc. London, Ser. A 231, p. 263–273, 1955 [13] E. Cartan, C.R. Akad. Sci., 174, p. 593, 1922 & C.R. Akad. Sci., 174, p.734, 1922 [14] E. Kröner, «Allgemeine Kontinuumstheorie der Versetzungen und Eigenspannungen», Arch. Rat. Mech. Anal., 4, p. 273-313, 1960 [15] E. Kröner, «Continuum theory of defects», in «physics of defects», ed. by R. Balian et al., Les Houches, Session 35, p. 215–315. North Holland, Amsterdam, 1980. [16] M. Zorawski, «Théorie mathématique des dislocations», Dunod, Paris, 1967. [17] V. Volterra, «L’équilibre des corps élastiques», Ann. Ec. Norm. (3), XXIV, Paris, 1907 [18] J.-P. Hirth, «A Brief History of Dislocation Theory», Metallurgical Transactions A, vol. 16A, p. 2085, 1985 [19] E. Orowan, Z. Phys., vol. 89, p. 605,614 et 634, 1934 [20] M. Polanyi, Z. Phys., vol.89, p. 660, 1934 [21] G. I. Taylor, Proc. Roy. Soc. London, vol. A145, p. 362, 1934 [22] J. M. Burgers, Proc. Kon. Ned. Akad. Weten schap., vol.42, p. 293, 378, 1939 [23] P. B. Hirsch, R. W. Horne, M. J. Whelan, Phil. Mag., vol. 1, p. 667, 1956 [24] W. 
I was reading this paper: Financial Turbulence, Business Cycles and Intrinsic Time in an Artificial Economy. The author has the model presented here: Quantum Evolutionary Financial Economics. But I am confused. There is all this build-up of using quantum mechanics and quantum probability in the model, but the only thing he adds in the code is a normally distributed stochastic variable he calls the "business cycle quantum game term." What is quantum about this? Why bother with all the quantum formalism if the end result is effectively just Gaussian white noise? I'm not formally trained in QM, so am I missing something?

Example of the relevant portion of code (from the second link):

Business Cycle Quantum Game Term:

to business-fitness-dynamics
  ask patches [ set z random-normal 0 1.0 ]
  ; Gaussian wave packet reduction around the standardized fitness operator
  ask patches [ set M_b (1 - m) * (b * x_{t-1} - (b + 1) * x_{t-1}^3) + m * r_{t-1} ]
  ; cubic map update (equation (18), with M_b := f_{b,m})
  ask patches [ set x_t (1 - epsilon - gamma) * M_b + epsilon * mean [ M_b ] of patches + gamma * z ]
  ; F update; the result of the quantum wave packet reduction, expressed as the eigenvalue of the fitness field operator
end

I also don't get this (from the second link): "There are three main advantages of the quantum approach to Evolutionary Financial Economics:
1. The explanatory effectiveness is expanded by the fact that one does not need any prior probability assumption; instead, one models the system's inter-relations and dynamics, and from that result dynamical probabilities.
2. Probabilities can have evolutionary and game theoretical interpretations.
3. The adaptation process of a Complex Adaptive System (CAS) can be fully integrated with the probability formation and quantum game equilibrium assumptions."

Can classical methods not do any of these things?

• If you don't get any answers in a few days, you might want to try Quantitative Finance -- but please do not cross-post (i.e., have this question up at both sites), delete & repost, or request migration (the latter requires a moderator). – Kyle Kanos Apr 26 '15 at 0:20
• To be very honest with you, this looks like one of those made-up "scientists" that some people use to test the quality of peer review of open access (and classic) science journals. Nothing I have seen that links to the author looks like serious economic theory, but a lot looks like it's generated by a science "chatter bot" algorithm. If this is a Turing test, it sure failed. :-) – CuriousOne Apr 27 '15 at 1:17
• CuriousOne: I'm not sure that this is a Turing test, or that it's generated by an algorithm. The author seems to have several papers on similar topics and is a professor at the University of Lisbon: link and link. I mean, I suppose it's possible that he's playing a long game... but that seems unlikely to me. – Darragh Apr 27 '15 at 12:32
• Let me add a few comments: a) he doesn't really say he uses quantum mechanics - he uses what he calls "quantum evolutionary game theory" and "quantum econophysics". Both exist, but they are usually much closer to the game theory community than the physics community. b) We'll find here a very good example of what you could call the "scientific Babel effect": all communities in physics I know and the community writing papers like this speak a language that is incomprehensible to the other field.
I don't get a single word he is saying unless he talks about the quantum harmonic oscillator. – Martin Apr 29 '15 at 16:32
• @Martin Well, no, strictly speaking he doesn't say that he's using quantum mechanics. But he is, is he not? He's using the ground state solution to the Schrödinger equation, but for reasons that elude me. Generally, quantum game theory makes use of quantum mechanical concepts such as superpositions of strategies and entangled states, neither of which is present in the model. Classical oscillators have been used in economics to model business cycle behavior, but what is special about the Gaussian wavefunction presented in this model that classical methods can't achieve? – Darragh Apr 29 '15 at 23:36

I have had a read through the paper that you quoted and have the following comments, which you might find helpful. (I am formally trained in QM, so hopefully there shouldn't be any errors in the physics portions of the answer, but if there are any questions then please comment.)

A few comments about quantum mechanics (QM): Quantum mechanics is a physical theory of measurement originally developed to describe phenomena at atomic scales or smaller. The reason this was needed is that things appeared to behave strangely, and this is reflected in the theory being built on the concept of probability amplitudes, which (to paraphrase Feynman) "are unlike anything observed before". The significance of this is that the way probabilities are computed in QM is unique to QM (as far as we know). I stress this because it means that only when we are trying to describe physical processes should we interpret what is computed as a probability.

The QM framework: If you ever get the chance to learn some QM, you will see that it is primarily formulated in linear algebra (cf. the Schrödinger equation). This means that mathematically all we worry about are eigenvectors and eigenvalues, and how these change when we either apply a matrix (an operator) or change basis.

Returning to the paper: The paper gives a brief (and in my opinion flimsy) justification of a model of companies, where these are arranged in a regular fashion (a lattice). There is an interaction between the companies which determines the changes in supply and demand, which we describe by the state of the system (note the similarity to the role of a QM wavefunction). The paper then effectively proposes an overall function (an operator) which depends on this state. This can be thought of as the Hamiltonian of the system, and the aim of the game is to find the eigenvalues of this Hamiltonian (which in physics we would identify as the energy levels). This whole framework is fairly common in quantum mechanics, and especially so in quantum field theory (QFT), and it resembles a typical approach to modeling condensed matter (cf. the Heisenberg model of ferromagnetism).

What is quantum about the system: I can only guess at this point, but it seems reasonable to assume that the author has some knowledge of QM and has identified that the framework he is using in his model involves structures very similar to those physicists use every day. Hence, if he poses the maths problem in a language that physicists might understand, they are motivated to read the paper, cite the paper, extend the paper, etc. (Remember that a huge number of physicists turn to the financial industry.)
I think many of the references to quantum game theory (and similarly obscure fields) are superfluous.

Can classical methods not do any of these things? The phrase "classical methods" is perhaps not well defined.

Physics and financial maths in general: Although I think the quoted paper is a poor example, there is a large field applying approaches from physics to financial mathematics, including some of the areas mentioned involving QM and QFT. The main tool we can take from physics is statistical physics, and a good example of how it can be applied to finance is:

• "Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management", Bouchaud and Potters,

which gives great examples of applying Hamiltonians, Lagrangians, etc. to finance.

I hope this helps.
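Editorial sketch (not part of the original question or answer): to make the questioner's point concrete, here is a minimal Python re-implementation of the update rule quoted in the question, treated purely classically. The parameter names b, m, epsilon and gamma follow the NetLogo snippet above; the grid size, parameter values and initial conditions are illustrative assumptions, not values taken from the paper.

import numpy as np

# Classical re-implementation sketch of the quoted update rule.
# Only the structure follows the NetLogo snippet; all values are assumed.

rng = np.random.default_rng(0)

n_patches = 100              # assumed number of lattice sites ("patches")
b, m = 1.5, 0.1              # assumed cubic-map and memory parameters
epsilon, gamma = 0.2, 0.05   # assumed mean-field coupling and noise weights

x = rng.uniform(-0.5, 0.5, n_patches)   # assumed initial fitness values
r = rng.uniform(-0.5, 0.5, n_patches)   # assumed r term (held fixed here purely for illustration)

for _ in range(1000):
    # "Gaussian wave packet reduction": operationally, one standard-normal draw per patch
    z = rng.standard_normal(n_patches)

    # Cubic map update: M_b = (1 - m) * (b*x - (b + 1)*x**3) + m*r
    M_b = (1 - m) * (b * x - (b + 1) * x**3) + m * r

    # Fitness update: local term + mean-field coupling + Gaussian noise
    x = (1 - epsilon - gamma) * M_b + epsilon * M_b.mean() + gamma * z

print(x[:5])   # a few final fitness values, just to show the loop runs

If this captures everything the quantum machinery contributes to the simulation, then the dynamics are those of a classical noisy coupled map, and the quantum language enters only in how the Gaussian noise term is interpreted, which is essentially the point raised in the question and in the answer above.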
The causal criteria for being real Ethan Siegel addresses a question on whether spacetime is real. But there’s more to the Universe than the objects within it. There’s also the fabric of spacetime, which has its own set of rules that it plays by: General Relativity. The fabric of spacetime is curved by the presence of matter and energy, and curved spacetime itself tells matter and energy how to move through it. But what, exactly, is spacetime, and is it a “real” thing, or just a calculational tool? After going through a quick grand tour of special and general relativity, as well as other physics, he comes to the conclusion that science can’t really provide an answer. This question about whether something is real or merely a mathematical accounting convenience, is one that comes up all the time in science, and has throughout its history. When Copernicus published his theory of the Earth moving around the Sun instead of the other way around, many were willing to accept his mathematics since they made astronomical predictions easier, but insisted it was only a mathematical convenience, not reality. Max Planck, when he first introduced energy quanta into physics, only considered them a mathematical tool. But for spacetime, I think Siegel actually answers the question in the quote above by paraphrasing John Wheeler (a physicist known for coming up with quick snappy terms and phrases): “Spacetime tells matter how to move; matter tells spacetime how to curve.” In other words, spacetime, whatever it is, has causal effects on things we can measure. It can affect and be affected by matter and energy. That, to me, is enough to consider it real in some sense. That doesn’t mean it’s necessarily something fundamental. A number of physicists think it might be emergent from other things, such as time perhaps emerging from entropy, or space emerging from quantum entanglement. But just because something is emergent doesn’t mean it doesn’t exist. If it did mean that, then nothing would exist above quantum fields, and maybe not even them. In this view, all that’s necessary for us to productively consider something real is that it participate in the causal chain that eventually effects what we measure. This is why, in quantum physics, I generally consider the wavefunction to be modeling something real. Something causes the measured interference effects, and the various formalisms for modeling the wave dynamics accurately predict those effects. (Even if they only provide probabilities for particle positions.) That doesn’t mean the wavefunction is necessarily the complete story, or that it’s real in every respect, only that the overall phenomena is something that participates in the causal chain. But this is also why I’m not a Platonic realist, someone who believes that abstract objects exist independently of the mind. In the Platonic view, abstract concepts are supposed to exist outside of time and space, be unchanging, and causally inert. Platonic objects, in and of themselves, do not participate in the causal chain. If we consider them to not exist, there’s nothing about them that forces us to reconsider that judgment. Any actual causal power they might have, only seems to happen through our mental models of them and the relations in the world that encourage us to form those models. So, if something has causal effects, it is, at least in some manner, real. If it has no detectable effects, or at least theoretical ones, we can’t say conclusively that it isn’t real, but it may effectively not be real for us. 
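An editorial aside, not part of the original post: for readers who want Wheeler's slogan quoted above in symbols, its standard formalization in General Relativity is the pair of equations below, the Einstein field equation expressing "matter tells spacetime how to curve" and the geodesic equation expressing "spacetime tells matter how to move".

$$ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} $$

$$ \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\, \frac{dx^{\alpha}}{d\tau}\, \frac{dx^{\beta}}{d\tau} = 0 $$

Here T_{\mu\nu} is the stress-energy of matter and energy, the left-hand side of the first equation encodes the curvature of spacetime, and the second equation says that freely falling matter follows geodesics of that curved geometry.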
What do you think of the causal criteria? Does it miss anything real? Or does it include anything we commonly would say isn't real?

76 thoughts on "The causal criteria for being real"

1. I think of reality as being a pairwise relationship. A exists to B if A can influence B. I guess this is the same as your causal chain. It does seem to imply that there could be cases when A exists to B but B does not exist to A, or A exists to B and B exists to C but A does not exist to C. Unless there is a god-like thing that influences everything, not everything is real to everything else, so there is no universal reality. However, this does raise further puzzles, because we are generally thinking of A as something like an object or a person, and so complex and distributed across space and time; so what it means for A to exist to B is actually a bit more tricky than it might seem at first sight. Even whether A continues to exist as the same thing comes into question. Liked by 2 people

1. I almost had a digression in the post about galaxies at the edge of the observable universe. The versions we're seeing today can affect us by their light reaching us from over 13 billion years ago. However, those galaxies today are now beyond our cosmological horizon. The expansion of space is now moving them away from us faster than the speed of light. So for all intents and purposes, we're now causally disconnected from them. Do they still exist for us? Do we exist for them? What about galaxies far beyond the cosmological horizon, where we'll never have any interaction with them at all? What really makes my head hurt is that for those distant galaxies, relative to us, their time should now be going backward? What does that even mean? Relativity avoids paradoxes here because it's impossible for us to ever interact with them. The same is true for someone falling into a black hole. For us, they slow down as they approach the event horizon, then freeze and become increasingly redshifted. But for them, they cross the horizon without incident and continue moving toward the singularity. These are contradictory sequences, but again, physicists say it's not a problem since we and the person who fell in can never compare notes. The urge to reconcile it into one reality is, apparently, misguided!

1. "What really makes my head hurt is that for those distant galaxies, relative to us, their time should now be going backward?" Wait, what? Why? (FWIW, the redshift we see of objects falling into a BH is just due to how gravity affects the light waves coming from that object. The "freeze" is just the received frequency dropping toward zero. It's not in any way a "real" effect felt by the object, merely a matter of what distant observers see. There is no paradox there.)

1. That's my understanding of the relationship of something traveling faster than light relative to us. Is that wrong? If not, what is the time relationship between those galaxies and us? (My feeble attempts at the math just make the calculator spit out "ERROR".) The black hole thing turns out to be something conjectural, black hole complementarity. I took Susskind's description of it to be a general description of what general relativity said, but it turns out to be a speculative "solution" to the information paradox. My bad.

1. Oh, I see what you mean. SR isn't defined for velocity due to the expansion of space. SR forbids FTL (in several ways), which is why you're getting an error.
With a velocity faster than c, you end up trying to take the square root of a negative number, so things become imaginary. Liked by 2 people 2. I think this is mostly right. But I think that if you say that A->B->C, where -> means ‘influences’, then you could think of A as existing for C also. Causal influence is presumably transitive. I think what you might be getting at though is that B might exist at different times, and maybe B’s influence on C comes from an earlier time than A’s influence on B. 2. Causality itself is emergent, so if you want to use causality as a criterion of reality, you’d damn well better not deny that emergent things can be real. But it had better not be the only criterion of reality either. (The foregoing depends on reading “causality” as inherently asymmetric: if A causes B, then B doesn’t cause A. If you waive the asymmetry requirement, that allows a different description of the situation.) As for how spacetime affects matter: I think you’re right, but it’s tricky (warning: philosophy of science weeds! bonus: philosophy of science learning!) I have no problem with mathematical Platonism, as long as the Platonist doesn’t try to go all Tegmark about it and insist that every mathematical structure is “physically real.” And as long as they don’t go talking about a “realm” where abstract objects “dwell”. I’m rather fond of this argument: There are numbers between 3 and 7. Therefore, there are numbers. Liked by 2 people 1. Interesting video. I especially like the point that something that has great leverage over the future is a cause while something that great leverage over the past is a record. I generally see information as causation, which fits with this view. What other criteria for being real would you add? That there are numbers between 3 and 7 doesn’t require mathematical Platonism to be true. It can be true in the sense that there are relations in the world involving between 3 and 7 entities, and this pattern occurs often enough that our brains model it. Numbers exist, but they don’t need to have an existence independent of physics. That said, when Platonists talk about realms where objects dwell, I don’t think they literally mean something like another universe with the objections floating around there. They’re trying to express a concept that is hard to put into words. Even Tegmark, I think, is trying to get across an idea that’s very difficult to put into words. The “physical” part I take to just mean “really exists”. Liked by 1 person 1. My criteria for being real would center on being explanatory. Causal explanations are just one form of explanation. Stanford Encyclopedia of Philosophy says –which seems like a pretty modest ask, and doesn’t mention independence from physics. But then the author later says “platonism entails that reality extends far beyond the physical world.” So go figure. I’m not sure what mathematical dependence on the physical would look like. If the universe had only one electron and no other particles, would that invalidate 2+2=4? It would make the equation a moot point, but that seems different. Liked by 1 person 1. What would you see as an example of a non-causal explanation? Maybe mathematical relations? But what bring those relations into being? And what causes us to think about them? On the SEP article, you have to go through the entire thing. From section 1: Platonism is the view that there exist abstract (that is, non-spatial, non-temporal) objects (see the entry on abstract objects). 
Because abstract objects are wholly non-spatiotemporal, it follows that they are also entirely non-physical (they do not exist in the physical world and are not made of physical stuff) and non-mental (they are not minds or ideas in minds; they are not disembodied souls, or Gods, or anything else along these lines). In addition, they are unchanging and entirely causally inert — that is, they cannot be involved in cause-and-effect relationships with other objects. So if we view abstract objects as mental models constructed based on observed relations in the world, that’s not Platonic. The Platonic version are supposed to exist in addition to the mental models and physical relations. On the electron, if we think in terms of its wavefunction, maybe 2+2=4 would continue to have meaning, since the electron would likely spread out in an ever larger bubble. Assuming no dark energy, the electron would eventually be all over the universe. Of course, there’d be no one around to think about arithmetic. 1. My first example of non-causal explanation would be explaining causality itself. That is, how it emerges from bidirectional physical laws plus the entropy gradient. Other emergence relationships, such as part-whole relationships, can also be explanatory. I don’t think anything brings mathematical relationships into being. Even if you deny abstract things exist, there has to be some physical feature not brought into being, even if it is only the whole sequence of events considered as a unit. I wouldn’t try to reduce mathematical objects to linguistic or conceptual ones, at least not in an asymmetric way. The reason why a word or concept refers to one thing and not another, depends on information theoretic properties of the word to world relations. 2. So what should we call the symmetrical relationship between entities that exist before entropy makes it asymmetrical? Carroll just says “patterns” in the video, which I can understand but these are patterns which have a time sequenced relationship to each other that two other adjacent patterns often won’t have. Maybe that’s what I should be using as my criteria. Or maybe we can just say it’s causal asymmetry which is emergent? Liked by 1 person 3. It seems like “patterns” underdetermines what we’re talking about, since there’s nothing that prevents a pattern existing consisting of elements that don’t have that relationship with each other. And the laws of nature are the rules that govern the time sequenced relationships, but not the relationships themselves. We could say “time sequenced relationships” but that’s a mouthful. In the absence of something better, I think “cause” still makes sense. We just have to understand that, without entropy, it’s fundamentally symmetrical. Liked by 1 person 4. Sure, that’s a reasonable terminological choice. You just have to be careful which audiences you use it with, some will require extra explanation. In our universe, laws of nature govern time sequenced relationships (primarily?) In a logically possible universe – and perhaps in ours, if it turns out that time itself along with space is emergent – there may be other laws. Liked by 1 person 2. I think the problem is that the word “real” is abused. Most people equate it with the physical. A tree in the park is real. It is very solid as I find if I inadvertently bang into it. A tree in my mind is not real in the same sense, although in the context of my mind it is real. 
The mathematical wouldn’t usually qualify for reality except to the extent that any mathematical truth probably has some representation in physical brain structure or process. However, approaching the mathematical truth at that level entails the loss of the meaning of the truth itself. On the other hand, if you take the view that the physical is really mathematical (it’s mathematical all the way down), then the mathematical would be the only thing that is real. But where does that leave Superman? We can talk about him as if he were real. We can read stories in comic books and see movies. In the context of the world of fiction, Superman could be real. 3. Superman is real James, and so is the spaghetti monster; so are demons, gods, evil spirits, GR and spacetime because real-ness is a context. So at the end of the day, we do not have to endlessly debate what is real and what is not, all we have to do is identify the context in which some “thing” is real and then a consensus can be reached. Liked by 1 person 2. Hi Paul, I’m a Tegmarkian, but I don’t always agree with how he describes the position. The way you describe it seems to suggest that at least part of your problem with it is how it is described. I think Tegmark’s ideas follow pretty much inevitably as long as you are willing to accept some sort of platonism, functionalism or computationalism about mind, and naturalism (by which I mean the idea that everything that happens does so according to natural laws describable mathematically). The way I would describe it, it is not that all mathematical structures are physically real. It is that the concept of objective, absolute physical reality is incoherent — physical reality only makes sense when construed as observer-relative. So, we shouldn’t think of Tegmark as claiming that somehow all mathematical structures are made physical, as if by magic. Rather the idea is that what we perceive as the physical reality of our universe is entirely explained by the fact that we are embedded in it. It is physical real only *to us*. In fact it’s just an abstract mathematical structure like any other. Mathematical structures without embedded observers are not physically real to anyone. I share your dislike of talk of abstract objects “dwelling” in a “realm”. 1. People often state their own positions awkwardly, or they make mistakes which are separate from the main point and shouldn’t be used to impugn that point. So by all means, rephrase Tegmark. Well, I don’t accept functionalism about mind in general. I do accept it about “do this creature’s vocalizations *refer* symbolically to the world?” and “is this creature conscious?” But I don’t accept it about “what are the qualities of its consciousness?” On physical reality as observer-relative, it depends what you mean. Here’s what I do accept: when people say “physical”, that word gets its meaning from interactions between the language community and the rest of the world. So in *some* sense, physicality is observers-relative. Observers with an s. 1. Hi Paul, I do disagree about the qualities of consciousness, but that’s another issue. It would indeed be reason to reject Tegmark if I conceded this controversial point. Just wanted to address the idea that it seems daft to think that mathematical structures somehow are made physically real as if by magic. Hopefully we’re more or less on the same page on this now. 3. You’re speaking here, I take it, about physically real? 
As a contrast, in some sense, unicorns are real, because when I use that word, you know exactly what I mean. Likewise Sherlock Holmes or any known literary character. Physical reality beyond our horizon, those distant galaxies that can’t affect us, can affect matter in their vicinity, so they would satisfy your causal criteria. I can’t think of any exceptions at first blush, but in some sense the definition is circular. That which is physically real can have causal effects. I suspect a true definition of “real” will remain a philosophical and definition issue, but having causal effects is, at least, a property of what is (physically) real. Now what about someone reads Sherlock Holmes when they’re young and decides to become a detective as an adult because of that. Is that a case of something not physically real having a causal effect? Liked by 2 people 1. I’m not sure if I’m following the circular point. It doesn’t necessarily seem circular to me. But maybe I’m missing something? I do agree that any definition of “real” is inevitably philosophical. (There are some scientists who take the attitude that only what they can directly measure is real, although that seems like a tough stance to hold consistently.) In the case of unicorns and Sherlock Holmes, it seems like they exist as mental models in our brains, the result of sensory impressions from numerous pictures, films, and books. Of course, a unicorn isn’t a real animal, and Sherlock Holmes was never a real person. But as concepts they are definitely real. Similar to Platonic concepts (actually identical to them), it’s often easier for us to just think of them as real in a non-physical sense, but I think that’s because the models don’t map to physical reality in the manner similar models typically do. So our model of a horse maps to reality in a certain way (it’s predictive of potential sensory impressions), but add a horn on its head and that mapping is no longer valid. But we still have the model of the horse with the horn. So it makes sense that a mental model can inspire someone to become a detective, particularly a model of a detective. Or we can just say Sherlock Holmes inspired them to become a detective, but we know that refers to an idealized model of a detective with phenomenal powers of logical deduction. 1. What I’m getting at is summed up in the phrase, “Ideas have the power to move mountains.” Even newly formed ideas, so it’s not the shared history of unicorns and Holmes, but that mental content can have causal power. Maybe “circular” isn’t the right word… it just feels like ‘causal power’ is a necessary property of anything real. Like it’s just another way of saying the same thing, although maybe ‘causal power’ is a larger category since it includes mental content? Or whatever. To be honest, I don’t seem to have much capacity for abstract thought these days. 1. I’m definitely on board with the idea that mental content has causal power. It is caused by incoming sensory information and innate impulses and has both short term and long term motor effects on the environment. So it’s part of the causal chain. So definitely ideas have a lot of power. Maybe instead of “circular” you’re thinking that it’s just trivially true? Could be. Although the fact that people debate whether things like spacetime, wavefunctions, or Platonic concepts are real seems to put pressure on that idea. 1. “So definitely ideas have a lot of power.’ Are you talking about the joule, as in 1 Watt = 1 Joule per second (1W = 1 J/s)? 
And if so, would some ideas be quantified as having more joules than others? Liked by 1 person 2. Hmmm. Well, all causal power is ultimately the ability to exert changes on things, directly or indirectly, physical changes if we’re operating under physicalism. So I suppose we can imagine in principle trying to measure it that way. Not sure how we’d go about measuring the amount of wattage produced by democracy, even in its effects on a signal individual’s life. 3. Okay, yeah, “trivially true” might be a better phrase. The notion of ‘causal power’ doesn’t seem, at least to me, to have much utility as a razor to cut between real and unreal if it includes unicorns and Holmes and mental content in general as all real. What would it define as unreal? FWIW, spacetime, because we’re clearly embedded within it and move through it, has some sort of physical reality even if we don’t fully understand how it works. All we can say about wave-functions for sure is that they describe an aspect of reality and allow predictions. (Some physics classes start by comparing the Schrödinger equation to Newton’s F=ma, and I think that’s a good comparison.) As I’ve mentioned in the past, I’ve come to resolve the Platonic question as a consequence of existence in a lawful physical reality. The canonical example, a circle or sphere, the concepts of which seem to exist whether we discover them or not, ends up as just an observation of how physical 3D space works. Given space, there is a notion of location and distance, then of equal distance from some location, and thus circles and spheres. All we can really say about math is that it describes physical reality (because it reflects physical reality). Liked by 1 person 4. In all seriousness Wyrd, I don’t see how conversations like this one can be productive if there is no consensus on a fundamental definition of power. Is power an objective state of the world, something that we discoverer; or is power a subjective state of mind, something that we make up like GR, spacetime and the joule? To me, it seems like power is the impetus and the driving force responsible for causation in the natural world, including the dynamics responsible for the motion and form of imaginative thought and yet, nobody wants to investigate it let alone discuss it. Instead, everybody seems absolute content to play in the sandbox ignoring the elephant in the room. Personally, you and others might consider the notion of power as too abstract, and that’s cool. In many ways, the notion of power reminds me of Michael Mark’s latest essay. I think most people appreciate the notion of power from the artist’s aesthetic perspective, a perspective that is clearly repulsed by the pure rationalist approach. I’m just thinking out loud here, so nobody should feel obligated to respond. 5. Well, as I said last time, just substitute “ability” or “capability” for “power” and the confusion goes away. As to the actual physics notion, “power” is a derived quality, not a fundamental one. As you said above, 1 joule per second and has units of kilogram-meters-squared-per-second-cubed. (Electricians know it as the more familiar volts×amps.) I don’t see it as too abstract; I see it as too derived to be a fundamental social or physical property. 6. Clearly Mike; when anyone asks how much “power” the POTUS has, no one is not asking how many joules that he possesses. 
For all practical purposes, “power” is a mystery, some “thing” that can only be appreciated for its aesthetic beauty and not understood from a rational reductionist perspective. Is that how you perceive it? 7. Can’t resist channeling the wonderful Emily Litella: “What’s all this fuss I hear about how many jewels the POTUS has? Why do we care? This isn’t a monarchy, we don’t have Crown Jewels. Why I doubt the man has any jewels at all. Maybe his wife does, but jewelry just doesn’t…” “What’s that?” “Never mind.” 8. Lee, I wouldn’t describe power that way. Certainly political power is something most people don’t have a good understanding of. But it’s been studied. Richard Neustadt’s “Presidential Power” is worth reading for anyone who wants to understand the POTUS version. In the end, all social power (including business or political power) involves influencing people to do what you want them to do. Sometimes it’s easy, with a legal order for people duty bound to obey. More often it involves persuading people, either directly or indirectly. In truth, it’s always about persuasion. It’s just that when you have line authority, you have extra tools. But fail to understand that people are not mere extensions of your will, that they each have their own values and agendas, and you will eventually flounder. In contrast, power or forces in physics are much simpler, even though social power is ultimately a special case of it. 9. Wyrd, I keep forgetting that your own confirmation bias is rooted in some form of Spinozaism; so my use of the word power would not have the same meaning to you as it does to me. My own conformational bias is similar to that of Kant; with some essential a priori intuitions added which Kant’s ontology did not contain. Spinozaism is a good metaphysical model, one that I agree with to a large degree with one important exception. Spinozaism posits the notion of natural and physical laws as irreducible and fundamental, whereas my metaphysics unequivocally rejects the entire notion of law all together. So for now, we will have to agree to disagree; and I will try to keep your own metaphysical position in mind whenever I correspond with you. 10. Wyrd, I suppose a good place to start would be to ask if you agree with Mike’s manifesto: “that we shouldn’t look to reality for meaning. We have to resolve to make our own meaning, and figure out how to bend reality to it.” I do not disagree that his manifesto is an explicit and succinct definition of subjective experience but personally, I unequivocally disagree with that position. What say you? 11. Regrettably, Lee, this isn’t a discussion I have much strength for. My New Year’s resolution was to swear off “fantasy bullshit” (FBS). Not as innately wrong — I love me some FBS sometimes — but as having become so very problematic in our culture. I’ve been terrified ever since this culture started blithely talking about, and normalizing, “post-factual,” and as of last November my terror seriously ramped up and then blew up on January 6th (and again last Saturday). Our culture has sunk into too many forms of fantasy. I’ve been reading, with growing horror, Aldous Huxley’s essays in Brave New World Revisited (1959) which he penned about 30 years after he wrote Brave New World (a profoundly disturbing novel in the current social climate). Huxley saw it back in 1959 (if not 30 years before). 
I quote: “A society, most of whose members spend a great part of their time, not on the spot, not here and now and in the calculable future, but somewhere else, in the irrelevant other worlds of sport and soap opera, of mythology and metaphysical fantasy, will find it hard to resist the encroachments of those who would manipulate and control it.” We’ve seen that play out the last decades and culminate in the last months. I feel as someone who has been badly beaten, my mind has been harmed by all this. To try to heal that damage, I’m sticking firmly to the physically real. So it’s hard for me to answer your question. For one thing, what is “meaning”? Is it that New Age thing people are always looking for? “Meaning” can only come from within (or maybe from God if you swing that way). Secondly, I’ve never known what to make of the idea of “bending reality” — does that mean magic or just building a thing with wheels? I’m a hard-core realist, both emotionally and philosophically. I’m just a tiny, tiny piece of a very large physical reality. I define sanity as the degree to which my internal mental model matches the external world I perceive, and life is the process of building and refining that model in an attempt to remove dissonance between it and physical experience. 12. Like you Wyrd, I am dreadfully disturbed by the prevailing trend taking place in our culture and I appreciate you being open and candid about your feelings as well. We do not have to engage in any serious discussions here. I read your own blog from time to time, and if there arises an opportunity for a productive discourse maybe we could collaborate on common ideas and goals such as the origin of meaning and where meaning actually resides; if that is acceptable to you. Your definition of sanity is a very good one as well. Take care my friend 4. I’ve had to determine my approach to this topic for my project (understanding consciousness), so I’ll just put it here and see what you think. I’ve decided it is useful to be very clear what the terms “exist” and “real” mean. These definitions will certainly conflict with someone else’s. All I can do is explain how I use the terms. So, I say something exists if it interacts with other stuff. Interaction is a relation, and so stuff that cannot interact with you (those far flung galaxies) does not exist for you. Patterns are real. (See Dennett.). So, abstractions are real. Numbers are real. Some patterns are discernible in existing stuff. All existing stuff exhibit patterns: specifically, patterns of behavior. A physical thing (system) exists if and only if it exhibits a pattern of behavior, and this pattern determines what a thing “is”. I should point out here that any pattern of behavior is multiply realizable, so even if something “exists” you can’t know what it fundamentally “is”. But you can assign a name, like “electron”, to anything that exhibits that pattern. Re causation: An interaction is best described in the format input->[mech]->output. You can then say the mech “causes” output when presented with input. This pattern (input->output) could be described as a causal power. The mech exhibiting this pattern of interaction has the “causal power”. So a pattern does not have causal power, but a pattern may be the particular pattern associated with a mech and describe the causal power of that mech. Again, more than one mech can exhibit the same pattern, and so have the same causal power. 
So to rewrite your penultimate paragraph, I would say if something has causal effects, it, at least in some manner, exists. If it has no detectable effects, or at least theoretical ones, we can’t say conclusively that it doesn’t exist, but it may effectively not exist for us. [taking questions] Liked by 1 person 1. That all sounds about right to me. But I’m wondering if I missed something, because it seems very similar to what I said. You did add stuff about multi-realizability, which I don’t have any problems with. Or maybe I should ask, what would you say distinguishes your view from mine? (Assuming something does.) 1. I recognize that we pretty much have the same understanding, but I am, and want you to be, more precise with the term “real”. For example, you said “this is also why I’m not a Platonic realist, someone who believes that abstract objects exist independently of the mind.” You say you are not a Platonic realist, but I say Platonic forms are real things independent of the mind. They just don’t exist, except some of them are patterns detectable in things that do exist.. I say unicorns are real, they just don’t exist. I say philosophical zombies are real, but they cannot exist (as their description requires contradiction). I guess what bugs me is when people talk about “causal power”, and ask things like “does information have causal power?”. “Causal power” seems like it’s intuitive, but is ill-defined and causes misconceptions. Liked by 1 person 1. Thanks for the clarification. I’m trying to see how we can make a distinction between being real and existing, but having a hard time. To me, those terms seem synonymous. (My working title for the post was actually “The causal criteria for existence”. I changed it to “being real” right before hitting Publish.) I do think we can make a distinction between ideas that most definitely exist which are about non-existent things like unicorns. Maybe that’s the sense in which you mean unicorns are real? If so, that seems strange, because it seems to imply concepts like the luminiferous aether or celestial crystalline spheres are real even though they don’t exist. I’m struggling to see how that use of language can be productive. Maybe if we say these concepts are abstractly real but not physically real? It’s all the same ontology, but different ways of talking about it. Liked by 1 person 1. I am making a distinction between being real/existence by fiat. I’m saying, instead of using both words for the same thing, use one word for abstractly real (real) and the other for physically real (exists). This does bring up the question of what you mean when you say an idea of non-existent beings exists. But I translate that as saying an existent system in your brain recognizes the real pattern of a non-existent thing. Make sense? 2. I follow what you’re saying. But I think it would be clearer to use those words with their common meanings and just use qualifiers to make what you’re saying explicit. So just preface with “abstractly” or “physically” for “real” or “exists”. If you use “real” in that fashion, it seems like you are obligated to constantly remind your audience of the special way in which you’re using it. 3. >”I do think we can make a distinction between ideas that most definitely exist which are about non-existent things like unicorns.” I would argue differently. “Ideas” are the product of some specific species (humans) brain activities. 
They exist within that specific community, and, by extension, on any media they are recorded, if they could be deciphered (by other species?). Outside the mentioned group, “ideas” do not exist. If such species got into extinction, then their “ideas” got to extinction too. It makes sense to broaden this example and make a distinction between “reality” within and outside this specific group. Liked by 1 person 4. It might depend on how we define an “idea”. A lot of mammalian and avian species, particularly social ones, can learn from each other. If a monkey figures out a new way to break open a nut, other monkeys will observe and copy. That troop will then have a cultural practice of how they break open the nuts that other troop lacks. In other words, culture, in the sense of shared concepts, isn’t unique to humans, at least unless we specifically define it to require symbolic communication. 5. FYI In the nineteenth century and even the early twentieth, many scientists considered atoms a useful fiction, indicating that they weren’t real. This is why Einstein won his Nobel Prize for his work on Brownian motion, which was a physical manifestation of atoms/molecules that was definitive. Liked by 1 person 1. Thanks. Didn’t know that. Pretty interesting. It’s amazing how often these useful fictions become real. I thought Einstein’s Nobel was for explaining the photoelectric effect, although I suppose it’s just as tied to atomism as well. 6. Hi Mike, Wyrd, I definitely see the circularity, and it points to the fact that the concept of objective physical reality is empty and meaningless. If we make up an abstract toy universe with its own laws (as physicists will do with constructs such as Anti-de Sitter Space) then things will “play out” in that structure in something analogous to time and causality, albeit timelessly and causelessly from our perspective. But if you imagine a perspective within that universe, objects within that universe will appear to be real because they appear to engage in causality, whereas our universe will appear to be unreal and abstract because it does not. The circularity is that must presuppose that our perspective is objectively privileged, that what we observe is physically real to decide if it is in fact engaging in causality. If you make that assumption, then you foreclose the possibility of there being universes just as real as ours which are entirely causally disconnected. Whether or not such universes might exist, it seems unreasonable to rule them out a priori simply by defining them out of existence. Liked by 1 person 1. Hi DM, You’re getting at why I hedged a bit toward the end, saying something might effectively not be real for us. We don’t even have to bring in other universes, just our own universe far beyond our cosmological horizon. (Which Tegmark actually considers another universe, so I guess I’m converging.) Is a galaxy a trillion light years away real for us? I suppose if cosmic inflation happened we could say we might still feel causal effects from the energy patterns that eventually became those galaxies. But what about galaxies 10^100 light years away? (Assuming such galaxies exist.) And somewhat tying in with the previous post, in a simulation, the simulated objects have simulated causal effects and are effectively real for any simulated entities within the simulation. But for those of us outside the simulation, they’re not. 1. Good question. 
A naive answer might be something like having direct conscious interaction with a phenomena to determine things about it, particularly quantitative properties. But of course, in modern science that rarely happens. No one has ever seen an electron. Instead we have direct perception of a stand in, like a readout on a measuring device, which we use to infer things about the phenomena. We do this because we trust our theory about how the device works, but ultimately it’s an inference made using theory. However, before we allow ourselves to get too upset about this, it’s worth noting that direct conscious interaction is itself an inference based on preconscious sensory information coming into the brain. Those inference are themselves heavily dependent on our understanding, our model or theory of the world. In the end, we make predictions, note the errors and adjust, and make new predictions. 7. Is there a sense in which our everyday concepts of time, space and causality are secondary, and behind the scenes it is quantum entanglement that is more fundamental, so that what is (potentially) real to us is everything with which we are entangled? 1. It’s a definite possibility, particularly if you subscribe to the idea of a universal wave function, that is, a quantum universe with no Heisenberg cut. Of course, that view implies many worlds, so most people reject it out of hand. 8. Your previous post was on a simulated universe and this one is on what is real. Is a simulation real? I mean real in the sense that it is more than real as a simulation. Would simulated consciousness be real also in the sense of more than real as a simulation? Liked by 1 person 1. I think the contents of the simulation would be real for any simulated entities within the simulation. Simulated wetness for a simulated being would be real wetness. Simulated pain would be real pain. For us on the outside, they would be real in the sense of being a real simulation. Of course, you could arrange for the simulated beings to have access to physical robot bodies in the outer world, which would graduate them from just simulation status to something much more real. Liked by 1 person 1. I don’t know. Would it? I suppose if you reject that a simulation of consciousness can be conscious, it might be. Essentially it would be a philosophical zombie. I personally don’t think p-zombies exist, so for me giving it a physical body makes it as physically real as we are. You could, of course, then argue that it was always physically real, since it was always implemented by some kind of physics. Liked by 1 person 1. Would the simulated consciousness using a simulated body execute the exact steps and processes that the simulated consciousness executes with a real body? If the steps/processes are identical, why would one be more real than the other? They would be indistinguishable. Liked by 1 person 2. Hi James, It wouldn’t be realer from an objective point of view. But I’ve been arguing that what is physically real is a matter of perspective. Putting a simulated consciousness in a robot body may make it physically real to us for this reason. If it’s just running in a simulated world and does not interact with the real world in any way (e.g. the program takes no inputs), then from our perspective it isn’t physically real. 
I actually think we have no moral responsibility for beings in such a simulation (because I’m a Tegmarkian platonist and I think the worlds we are simulating, no matter how horrific, must all exist out there in the multiverse independently of whether or not we want to explore them with simulations — our simulating them creates no additional suffering). But as soon as you start interacting with a simulated consciousness, then you are a part of the mathematical world it inhabits. You are physically real to it and it is to you. You are in the same relationship to it as you are to any other physical being, and so I think you do have moral responsibility for it. 3. So uploaded minds to a computer wouldn’t be real? But if the uploaded mind somehow instantiates itself in a physical body then suddenly it becomes real. Because perspective. But the only perspective would be our own perspective or the perspective of non-simulated mind. So in the end it is only our mind that makes it real. I don’t know. 9. Hi Mike, I meant to give an overall comment rather than just responding to comments of others but didn’t have time. I think you raise some very interesting issues and so I enjoyed and appreciated the article very much. But I think it is a mistake to get too hung up on what is and isn’t real — a mistake many philosophers have been making for too long in my view. What is real and what is not depends on what you mean by “real”, and different definitions are appropriate in different contexts. As long as you are clear about what you mean there is no problem. I don’t think there is a fact of the matter on which definition is correct and so what is “really real”. The question of whether some concept refers to something real or is just a calculational tool strikes me as entirely meaningless. I genuinely cannot make sense of it. The closest I can come is to take the examples of calculational tools from the past that are since discarded, such as caloric theory or geocentrism/epicycles. From my point of view, to the extent that these theories disagree with experiment (as caloric theory does when it claims that caloric is a gas), they are not physiclaly real and that’s all there is to it. If on the other hand they can be patched and amended (e.g. epicycles within epicycles) and so made to agree with experiment to the point where their predictions cannot be falsified, then they are as real as any other model but fail to be as useful or elegant as simpler theories. So in my view, it is possible for both geocentrism and heliocentrism to be true (i.e. there is no fact of the matter on what is actually at the centre), with the latter only being far more elegant and more useful. A more current debate is which of the Newtonian framework or the principle of least action is the more fundamental description of physical law: In the Newtonian framework, we have objects a certain point in time and space, and laws that describe how they evolve over time. In the framework of the principle of least action, the rule is that some quantity (the “action”) is minimised or maximised, and what happens will be whatever achieves this. The latter ends up being more apparently teleological and less intuitive to humans (but more intuitive to the Aliens in Arrival or Ted Chiang’s original short story “Story of your Life”), despite being very useful mathematically in some circumstances. But the two frameworks are mathematically equivalent, in that you can derive one from the other. So the question arises, which is prior? 
Which is the “real” one and which is the derived one? In my mind, there is no answer to this question. Each framework is equally real. Causal criteria for what is physically real to us makes sense to me, with some caveats. I think it’s better to require the relationship to be bidirectional. Our far future descendents can exhibit no causal influence on us, but we should consider them to be real in some sense or else we have no more responsibility for them than we would for fictional characters, and that doesn’t seem right. The fact that we can causally influence them makes them real, I think. Adopting this rule has some other benefits. We can trace the chain of causality backwards to the Big Bang and forwards again to parts of the universe that are no longer causally connected to us. So those parts of the universe are also physically real. That also seems right. But I think you should be aware that adopting your criteria might (like me) commit you to the physical reality of epicycles and geocentrism, assuming that epicycles can be used to make correct predictions. If so, then they can be said to have as much of a causal influence as spacetime or the wavefunction or whatever. If you want to exclude them, you may need to introduce an additional criterion that no simpler or less ad hoc model can produce the same predictions. But then we’re back where we started. Perhaps spacetime does not exist because there is a simpler model that can produce the same predictions, and it becomes an open question whether spacetime is real or just a calculational tool. Liked by 1 person 1. Hi DM, I know where you’re coming from. It’s basically why, until recently, I was comfortable calling myself an instrumentalist, although I recently covered why I’ve become uneasy with that label. (Too much baggage, with people projecting positions on me I don’t hold.) But I think it’s important to be able to put on the instrumentalist hat at times and assess theories in that light. I would note that I don’t consider causality to be the only criteria for reality. Aside from making more accurate predictions, parsimony also comes into it. That’s how we can dismiss geocentrism today. It’s just a much more complex theory given all the data now available. It’s always possible to add variables to any theory to make it consistent with observations, but at the cost of increasing complexity. If there is a simpler theory, one with fewer assumptions, then it has a better chance of remaining reliable. I do think asking whether something is real or just a mathematical convenience is productive though. We shouldn’t expect a mathematical tool to necessarily reconcile with other theories, or worry if it outright contradicts them. It’s just a tool, so no one should care. For example, the weak (epistemic) Copenhagen interpretation is basically an anti-real theory, so any concerns about its contradictions with cosmology would be misplaced. (Stronger more ontological versions of Copenhagen are a different matter.) But if it does reconcile with other well established theories, then that seems to increase the chance of it being real. Of course, that’s always with the possibility that a cluster of theories that reconcile with each other may constitute a paradigm that eventually ends up being overturned. In the end, we have theories that make more or less accurate predictions. If it’s the simplest theory, it may be reliable. And if its components are compatible with other reliable theories, they may be “real”. 
At least until a better theory comes along. It seems to be the best we can do. 10. The problem for me with concepts of “reality” is that the great majority of what scientists talk about can not be “experienced”. At least not by mere mortals such as myself without any skill in mathematics beyond the ten times table. I can readily experience (most) human qualia and emotions. I can understand the colour red in two ways: firstly in the scientific explanation of light waves of a certain frequency, but secondly (and very importantly, to me at least) in my qualitative experience of colour. I can not experience the warping of spacetime. Nor the minute particles of matter of which (it is said) everything is made. As a very pedestrian mortal I have five very limited senses and can only really understand and accept what I can personally “feel” and “experience”. Perhaps these shortcomings are limited to just myself. Perhaps once Elon Musk has implanted the necessary extra processing chips in my brain I will be able to feel such matters as easily as I can feel and understand light or heat. Bring it on, Elon. 1. I’m tempted to make an appeal to the stone and point out that any time you’ve fallen onto the ground you’ve experienced the warping of spacetime, but of course the idea that that is the warping of spacetime is what is so far outside of our experience. And we only experience photons and electrons en masse, such that the idea that they were composed of units was controversial for a long time. The thing about Musk implanting extra processing chips is, if it becomes common, we might someday wonder what it was ever like to only experience reality with natural senses. Of course, that could also be when we start experiencing simulated reality. (Assuming we’re not already experiencing it.) 11. Space, time, space-time, particle, laws of physics – all of these are physical terms. Our understanding and our discussions are based on variations of that physical view of the Universe. I think there is a new sheriff in town, so to speak. Please look at the article “New machine learning theory raises questions about nature of science”. Here is a long quote from this article: “The algorithm, devised by a scientist at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. “Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations,” said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. “What I’m doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law.”” What do we see here? There is a new way to describe the Universe and its contents without the use of physical terms and laws. That has implications for how we define “real”, “existence”, the underlying laws of the world, and so forth. 1. I’ll have to check out that article, but making predictions with a black box seems like it will have limited utility. One of the benefits of actually having a theory, a model, is that we can then apply it for various purposes, like technology. Having an oracle just make predictions won’t do that. Although it might provide a useful test for possible theories. Now, if the black box can produce actual theories, then we might be on to something.
But then we basically have an AI scientist. 2. I think I linked to the paper you reference on another thread. I also had a blog post on this. Quote from paper: “We discuss a possibility that the entire universe on its most fundamental level is a neural network…This shows that the learning dynamics of a neural network can indeed exhibit approximate behaviors described by both quantum mechanics and general relativity. We also discuss a possibility that the two descriptions are holographic duals of each other”. Your thoughts?
International Tables for Crystallography (2006). Vol. D: Physical properties of crystals, edited by A. Authier, ch. 2.2, pp. 300–301.
Section 2.2.10. Density functional theory. K. Schwarz.
2.2.10. Density functional theory
The most widely used scheme for calculating the electronic structure of solids is based on density functional theory (DFT). It is described in many excellent books, for example that by Dreizler & Gross (1990), which contains many useful definitions, explanations and references. Hohenberg & Kohn (1964) have shown that for determining the ground-state properties of a system all one needs to know is the electron density [\rho({\bf r})]. This is a tremendous simplification considering the complicated wavefunction of a crystal with (in principle infinitely) many electrons. This means that the total energy of a system (a solid in the present case) is a functional of the density [E[\rho(r)]], which is independent of the external potential provided by all nuclei. At first it was just proved that such a functional exists, but in order to make this fundamental theorem of practical use Kohn & Sham (1965) introduced orbitals and suggested the following procedure. In the universal approach of DFT to the quantum-mechanical many-body problem, the interacting system is mapped in a unique manner onto an effective non-interacting system of quasi-electrons with the same total density. Therefore the electron density plays the key role in this formalism. The non-interacting particles of this auxiliary system move in an effective local one-particle potential, which consists of a mean-field (Hartree) part and an exchange–correlation part that, in principle, incorporates all correlation effects exactly. However, the functional form of this potential is not known and thus one needs to make approximations. Magnetic systems (with collinear spin alignments) require a generalization, namely a different treatment for spin-up and spin-down electrons. In this generalized form the key quantities are the spin densities [\rho_{\sigma}(r)], in terms of which the total energy [E_{\rm tot}] is [E_{\rm tot}(\rho_\uparrow,\rho_\downarrow) = T_s(\rho_\uparrow,\rho_\downarrow)+E_{ee}(\rho_\uparrow,\rho_\downarrow)+E_{Ne}(\rho_\uparrow,\rho_\downarrow)+E_{xc}(\rho_\uparrow,\rho_\downarrow)+E_{NN},] with the electronic contributions, labelled conventionally as, respectively, the kinetic energy (of the non-interacting particles), the electron–electron repulsion, the nuclear–electron attraction and the exchange–correlation energies. The last term [E_{NN}] is the repulsive Coulomb energy of the fixed nuclei. This expression is still exact but has the advantage that all terms but one can be calculated very accurately and are the dominating (large) quantities. The exception is the exchange–correlation energy [E_{xc}], which is defined by the expression above but must be approximated. The first important methods for this were the local density approximation (LDA) or its spin-polarized generalization, the local spin density approximation (LSDA). The latter comprises two assumptions:
• (i) That [E_{xc}] can be written in terms of a local exchange–correlation energy density [\varepsilon_{xc}] times the total (spin-up plus spin-down) electron density as [E_{xc}=\textstyle\int\varepsilon_{xc}(\rho_{\uparrow},\rho_{\downarrow})\,[\rho_{\uparrow}+\rho_{\downarrow}]\,\,{\rm d}r.]
• (ii) The particular form chosen for [\varepsilon_{xc}].
For a homogeneous electron gas [\varepsilon_{xc}] is known from quantum Monte Carlo simulations, e.g. by Ceperley & Alder (1984). The LDA can be described in the following way. At each point [{\bf r}] in space we know the electron density [\rho({\bf r})]. If we locally replace the system by a homogeneous electron gas of the same density, then we know its exchange–correlation energy. By integrating over all space we can calculate [E_{xc}]. The most effective way known to minimize [E_{\rm tot}] by means of the variational principle is to introduce (spin) orbitals [\chi_{jk}^{\sigma}] constrained to construct the spin densities [see the expression below]. According to Kohn and Sham (KS), the variation of [E_{\rm tot}] gives the following effective one-particle Schrödinger equations, the so-called Kohn–Sham equations (Kohn & Sham, 1965) (written for an atom in Rydberg atomic units with the obvious generalization to solids): [[-\nabla^{2}+V_{Ne}+V_{ee}+V_{xc}^{\sigma}]\chi_{jk}^{\sigma}(r)=\epsilon_{jk}^{\sigma}\chi_{jk}^{\sigma}(r),] with the external potential (the attractive interaction of the electrons by the nucleus) given by [V_{Ne}(r)=-{{2Z}\over{r}},] the Coulomb potential (the electrostatic interaction between the electrons) given by [V_{ee}({\bf r})=V_{C}({\bf r})=\int{{\rho({\bf r}^{\prime})}\over{|{\bf r-r}^{\prime}|}}\,\,{\rm d}{\bf r}^{\prime}] and the exchange–correlation potential (due to quantum mechanics) given by the functional derivative [V_{xc}({\bf r})={{\delta E_{xc}[\rho(r)]}\over{\delta\rho}}.] In the KS scheme, the (spin) electron densities are obtained by summing over all occupied states, i.e. by filling the KS orbitals (with increasing energy) according to the Aufbau principle: [\rho_{\sigma}(r)=\textstyle\sum\limits_{j,k}\rho_{jk}^{\sigma}|\chi_{jk}^{\sigma}(r)|^{2}.] Here [\rho_{jk}^{\sigma}] are occupation numbers such that [0\leq\rho_{jk}^{\sigma}\leq1/w_{k}], where [w_{k}] is the symmetry-required weight of point [{\bf k}]. These KS equations must be solved self-consistently in an iterative process, since finding the KS orbitals requires the knowledge of the potentials, which themselves depend on the (spin) density and thus on the orbitals again. Note the similarity to (and difference from) the Hartree–Fock equation. This version of the DFT leads to a (spin) density that is close to the exact density provided that the DFT functional is sufficiently accurate. In early applications, the local density approximation (LDA) was frequently used and several forms of functionals exist in the literature, for example by Hedin & Lundqvist (1971), von Barth & Hedin (1972), Gunnarsson & Lundqvist (1976), Vosko et al. (1980) or accurate fits of the Monte Carlo simulations of Ceperley & Alder (1984). The LDA has some shortcomings, mostly due to the tendency of overbinding, which causes, for example, too-small lattice constants. Recent progress has been made going beyond the LSDA by adding gradient terms or higher derivatives ([\nabla\rho] and [\nabla^{2}\rho]) of the electron density to the exchange–correlation energy or its corresponding potential. In this context several physical constraints can be formulated, which an exact theory should obey. Most approximations, however, satisfy only part of them. For example, the exchange density (needed in the construction of these two quantities) should integrate to [-1] according to the Fermi exclusion principle (Fermi hole).
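The self-consistent cycle described above is easy to sketch in code. The following toy script illustrates only the logic of the Kohn–Sham loop (build an effective potential from the current density, diagonalize, refill the orbitals, mix and repeat until the density stops changing); the one-dimensional grid, the softened Coulomb interaction and the simple exchange-only local potential are assumptions made for brevity here, not the functionals or parameterizations cited in this section.

```python
import numpy as np

# A toy 1D Kohn-Sham self-consistency loop (illustration only).  The grid,
# the softened Coulomb interaction and the exchange-only local potential
# below are assumptions of this sketch, not the functionals cited above.

n_grid, box = 201, 20.0
x = np.linspace(-box / 2, box / 2, n_grid)
dx = x[1] - x[0]
Z, n_elec = 2, 2                              # toy helium-like system, spin-paired

v_ext = -Z / np.sqrt(x**2 + 1.0)              # softened nuclear attraction

# Kinetic energy operator -1/2 d^2/dx^2 (Hartree units) by finite differences
T = (-0.5 / dx**2) * (np.diag(np.ones(n_grid - 1), 1)
                      + np.diag(np.ones(n_grid - 1), -1)
                      - 2.0 * np.eye(n_grid))

def v_hartree(rho):
    """Mean-field (Hartree) potential of the density, softened Coulomb kernel."""
    return np.array([np.sum(rho / np.sqrt((x - xi)**2 + 1.0)) * dx for xi in x])

def v_x_local(rho):
    """Toy local exchange potential ~ -(3 rho / pi)^(1/3)."""
    return -(3.0 * np.maximum(rho, 1e-12) / np.pi) ** (1.0 / 3.0)

rho = np.full(n_grid, n_elec / box)           # initial guess: uniform density
for it in range(200):
    v_eff = v_ext + v_hartree(rho) + v_x_local(rho)
    eps, chi = np.linalg.eigh(T + np.diag(v_eff))      # Kohn-Sham equations
    chi /= np.sqrt(dx)                                  # grid normalisation
    rho_new = 2.0 * np.abs(chi[:, 0])**2                # Aufbau: 2 electrons in lowest orbital
    if np.max(np.abs(rho_new - rho)) < 1e-6:            # self-consistency reached
        break
    rho = 0.7 * rho + 0.3 * rho_new                     # simple density mixing

print(f"converged in {it} iterations, occupied KS eigenvalue = {eps[0]:.4f}")
```

The density mixing in the last line is the simplest possible stabilisation of the iteration; production codes use far more sophisticated mixing and, of course, three-dimensional grids or basis sets.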
Such considerations led to the generalized gradient approximation (GGA), which exists in various parameterizations, e.g. in the one by Perdew et al. (1996). This is an active field of research and thus new functionals are being developed and their accuracy tested in various applications. The Coulomb potential [V_{C}({\bf r})] above is that of all N electrons. That is, any electron is also moving in its own field, which is physically unrealistic but may be mathematically convenient. Within the HF method (and related schemes) this self-interaction is cancelled exactly by an equivalent term in the exchange interaction. For the currently used approximate density functionals, the self-interaction cancellation is not complete and thus an error remains that may be significant, at least for states (e.g. 4f or 5f) for which the respective orbital is not delocalized. Note that delocalized states have a negligibly small self-interaction. This problem has led to the proposal of self-interaction corrections (SICs), which remove most of this error and have impacts on both the single-particle eigenvalues and the total energy (Parr et al., 1978). The Hohenberg–Kohn theorems state that the total energy (of the ground state) is a functional of the density, but the introduction of the KS orbitals (describing quasi-electrons) is only a tool in arriving at this density and consequently the total energy. Rigorously, the Kohn–Sham orbitals are not electronic orbitals and the KS eigenvalues [\varepsilon_{i}] (which correspond to [E_{{\bf k}}] in a solid) are not directly related to electronic excitation energies. From a formal (mathematical) point of view, the [\varepsilon_{i}] are just Lagrange multipliers without a physical meaning. Nevertheless, it is often a good approximation (and common practice) to partly ignore these formal inconsistencies and use the orbitals and their energies in discussing electronic properties. The gross features of the eigenvalue sequence depend only to a smaller extent on the details of the potential, whether it is orbital-based as in the HF method or density-based as in DFT. In this sense, the eigenvalues are mainly determined by orthogonality conditions and by the strong nuclear potential, common to DFT and the HF method. In processes in which one removes (ionization) or adds (electron affinity) an electron, one compares the N electron system with one with [N-1] or [N+1] electrons. Here another conceptual difference occurs between the HF method and DFT. In the HF method one may use Koopmans' theorem, which states that the [\varepsilon_{i}^{\rm HF}] agree with the ionization energies from state i, assuming that the corresponding orbitals do not change in the ionization process. In DFT, the [\varepsilon_{i}] can be interpreted according to Janak's theorem (Janak, 1978) as the partial derivative with respect to the occupation number [n_{i}]: [\varepsilon_{i}={{\partial E}\over{\partial n_{i}}}.] Thus in the HF method [\varepsilon_{i}] is the total energy difference for [\Delta n=1], in contrast to DFT, where a differential change in the occupation number defines [\varepsilon_{i}], the proper quantity for describing metallic systems. It has been proven that for the exact density functional the eigenvalue of the highest occupied orbital is the first ionization potential (Perdew & Levy, 1983).
Roughly, one can state that the further an orbital energy is away from the highest occupied state, the poorer becomes the approximation of using [\varepsilon_{i}] as excitation energy. For core energies the deviation can be significant, but one may use Slater's transition state (Slater, 1974), in which half an electron is removed from the corresponding orbital, and then use the [\varepsilon_{i}^{\rm TS}] to represent the ionization from that orbital. Another excitation, from the valence to the conduction band, is given by the energy gap, separating the occupied from the unoccupied single-particle levels. It is well known that the gap is not given well by taking [\Delta\varepsilon_{i}] as excitation energy. Current DFT methods significantly underestimate the gap (half the experimental value), whereas the HF method usually overestimates gaps (by a factor of about two). A trivial solution, applying the `scissor operator', is to shift the DFT bands to agree with the experimental gap. An improved but much more elaborate approach for obtaining electronic excitation energies within DFT is the GW method, in which quasi-particle energies are calculated (Hybertsen & Louie, 1984; Godby et al., 1986; Perdew, 1986). This scheme is based on calculating the dielectric matrix, which contains information on the response of the system to an external perturbation, such as the excitation of an electron. In some cases, one can rely on the total energy of the states involved. The original Hohenberg–Kohn theorems (Hohenberg & Kohn, 1964) apply only to the ground state. The theorems may, however, be generalized to the energetically lowest state of any symmetry representation, for which any property is a functional of the corresponding density. This allows (in cases where applicable) the calculation of excitation energies by taking total energy differences. Many aspects of DFT from formalism to applications are discussed and many references are given in the book by Springborg (1997).
References: Barth, U. von & Hedin, L. (1972). A local exchange-correlation potential for the spin-polarized case: I. J. Phys. C, 5, 1629–1642. Ceperley, D. M. & Alder, B. J. (1984). Ground state of the electron gas by a stochastic method. Phys. Rev. Lett. 45, 566–572. Dreizler, R. M. & Gross, E. K. U. (1990). Density functional theory. Berlin, Heidelberg, New York: Springer-Verlag. Godby, R. W., Schlüter, M. & Sham, L. J. (1986). Accurate exchange-correlation potential for silicon and its discontinuity on addition of an electron. Phys. Rev. Lett. 56, 2415–2418. Gunnarsson, O. & Lundqvist, B. I. (1976). Exchange and correlation in atoms, molecules, and solids by the spin-density-functional formalism. Phys. Rev. B, 13, 4274–4298. Hedin, L. & Lundqvist, B. I. (1971). Explicit local exchange-correlation potentials. J. Phys. C, 4, 2064–2083. Hohenberg, P. & Kohn, W. (1964). Inhomogeneous electron gas. Phys. Rev. 136, B864–B871. Hybertsen, M. S. & Louie, S. G. (1984). Non-local density functional theory for the electronic and structural properties of semiconductors. Solid State Commun. 51, 451–454. Janak, J. F. (1978). Proof that [\partial E/\partial n_i = \epsilon_i] in density-functional theory. Phys. Rev. B, 18, 7165–7168. Kohn, W. & Sham, L. J. (1965). Self-consistent equations including exchange and correlation effects. Phys. Rev. 140, A1133–A1138. Parr, R., Donnelly, R. A., Levy, M. & Palke, W. A. (1978). Electronegativity: the density functional viewpoint. J. Chem. Phys. 68, 3801–3807. Perdew, J. P. (1986).
Density functional theory and the band gap problem. Int. J. Quantum Chem. 19, 497–523. Perdew, J. P., Burke, K. & Ernzerhof, M. (1996). Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868. Perdew, J. P. & Levy, M. (1983). Physical content of the exact Kohn–Sham orbital energies: band gaps and derivative discontinuities. Phys. Rev. Lett. 51, 1884–1887. Slater, J. C. (1974). The self-consistent field for molecules and solids. New York: McGraw-Hill. Springborg, M. (1997). Density-functional methods in chemistry and material science. Chichester, New York, Weinheim, Brisbane, Singapore, Toronto: John Wiley and Sons Ltd. Vosko, S. H., Wilk, L. & Nusair, M. (1980). Accurate spin-dependent electron liquid correlation energies for local spin density calculations. Can. J. Phys. 58, 1200–1211.
30 April, 2007. Michael F. Brown. CHEMISTRY 481 (Biophysical Chemistry). Problem Set 12 – STUDY GUIDE
To be turned in by: NEVER
Background reading: Chapter 9; Chapter 10; Chapter 11.1–11.3, 11.7; Chapter 13.1–13.3, 13.10; Chapter 15
Back of Chapter Problems related to the homework (optional): Problems 9.3–9.5, 9.7–9.8, 9.11–9.15, 9.17, 9.19–9.21; Problems 10.12, 10.13, 10.18–10.20, 10.22, 10.26–10.28; Problems 11.10, 11.11, 11.13, 11.27, 11.17; Problems 15.3, 15.5, 15.8, 15.11, 15.30
Problem 1. One of the major applications of classical mechanics in biochemistry involves solving Newton’s Laws of motion for macromolecules. This method is called molecular dynamics. It can be used to investigate the atomic motions of proteins, nucleic acids (DNA and RNA), and the lipids in membranes. Let us consider the molecular dynamics of membrane lipids. The vibrations of the bonds joining the various atoms are modeled as a classical harmonic oscillator. Consider a representative C–H bond of a lipid in a membrane bilayer. For simplicity, the bond vibrations are modeled in terms of the relative motion of the two atoms. In a center of mass coordinate frame, the equation of motion is given by: d²x/dt² + ω₀²x = 0. Here ω₀ = √(k/μ) and μ is called the reduced mass, which is defined by: μ = m_C m_H / (m_C + m_H). Let us assume that the force constant k = 450 N m⁻¹ for the case of a C–H bond.
a) What is the natural frequency ν₀ / s⁻¹ of the harmonic oscillations of the C–H bonds?
b) In one type of experiment, hydrogen (H) is replaced chemically by deuterium (D). Do you expect the frequency of the bond oscillations to increase, decrease, or remain unaltered upon substitution of D for H? Why?
c) What is the natural frequency ν₀ / s⁻¹ of the C–D bond oscillations? (Assume the force constant k is the same as for a C–H bond.)
d) Calculate the force needed to produce vibrations of a C–H bond with an amplitude (A) of 10.0 pm.
Problem 2. The wavefunction corresponding to a hydrogenic 1s orbital is given by ψ(r) = (1/√(4π)) e^(−r/a₀), where the Bohr radius a₀ = 52.9 pm. (Hint: the above wavefunction is not normalized; if you cannot normalize it then proceed to parts (b)–(e) without the normalization constant.)
a) Find the normalized wave function.
b) Calculate the expectation value of the radius, given by ⟨r⟩.
c) Calculate the expectation value of the radius squared, given by ⟨r²⟩.
d) Calculate the variance of the radius, defined by ⟨r²⟩ − ⟨r⟩².
e) Calculate the root mean square radius, defined by ⟨r²⟩^(1/2).
Problem 3. An important pigment molecule found in plants is β-carotene. Assume that the electronic properties of β-carotene can be considered in terms of a particle-in-a-box, in which L = 2.0 nm.
a) Write the Schrödinger equation and state the eigenfunctions and eigenvalues. Be sure to define all symbols.
b) Given the free electron model, what is the wavelength of light absorbed by β-carotene?
Problem 4. A biochemist is interested in determining the amount of secondary structure (hydrogen bonding) in a newly discovered protein from human immunodeficiency virus (HIV).
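A short numerical sketch of Problem 1(a)–(c) above may be helpful: it evaluates ν₀ = (1/2π)√(k/μ) for the C–H and C–D bonds with k = 450 N m⁻¹. The rounded atomic masses used below are an assumption of this sketch, not values supplied with the problem set.

```python
import math

# Sketch for Problem 1 (a)-(c): natural frequency of a C-H / C-D bond treated
# as a classical harmonic oscillator, nu_0 = (1 / 2*pi) * sqrt(k / mu).
# The atomic masses below are rounded values (an assumption of this sketch).

k = 450.0                        # force constant, N/m
amu = 1.6605e-27                 # kg per atomic mass unit
m_C, m_H, m_D = 12.000 * amu, 1.008 * amu, 2.014 * amu

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def natural_frequency(k, mu):
    return math.sqrt(k / mu) / (2.0 * math.pi)   # in s^-1

nu_CH = natural_frequency(k, reduced_mass(m_C, m_H))
nu_CD = natural_frequency(k, reduced_mass(m_C, m_D))

print(f"nu_0(C-H) = {nu_CH:.3e} s^-1")
print(f"nu_0(C-D) = {nu_CD:.3e} s^-1")   # lower: larger reduced mass, same k
```

The output (roughly 9e13 s⁻¹ for C–H and 6e13 s⁻¹ for C–D) also answers part (b): replacing H by D increases the reduced mass while k stays the same, so the vibrational frequency decreases.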
An Unbiased View of hydrogen test
Interpretation of the results depends on the sugar that is used for testing, and on the pattern of hydrogen production after the sugar is ingested. Following ingestion of test doses of the dietary sugars lactose, sucrose, fructose or sorbitol, any production of hydrogen signifies that there has been a problem with digestion or absorption of the test sugar and that some of the sugar has reached the colon. When rapid passage of food through the intestine is present, the test dose of lactulose reaches the colon more rapidly than normal, and, consequently, hydrogen is produced by the colon’s bacteria soon after the sugar is ingested. Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite different from those of molecular hydrogen. As a plasma, hydrogen’s electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light of the Sun and other stars). The charged particles are strongly influenced by magnetic and electric fields. When bacterial overgrowth of the small bowel is present, ingestion of lactulose results in two distinct periods during the test in which hydrogen is produced, an earlier one caused by the bacteria in the small intestine and a later one caused by the bacteria in the colon. In ionic compounds, hydrogen can take the form of a negative charge (i.e., an anion), when it is known as a hydride, or of a positively charged (i.e., cation) species denoted by the symbol H+. The hydrogen cation is written as if composed of a bare proton, but in reality hydrogen cations in ionic compounds are always more complex. As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics. This causes the stool to become acidic. The acidity of stools that are passed after ingestion of the lactose is then measured. Even interpreting the hydrogen data (such as safety data) is confounded by many phenomena. Many physical and chemical properties of hydrogen depend upon the parahydrogen/orthohydrogen ratio (it often takes days or even weeks at a given temperature to reach the equilibrium ratio, for which the data is usually presented). This process occurs during the anaerobic corrosion of iron and steel in oxygen-free groundwater, as well as in reducing soils below the water table. Some patients produce a mix of the two gases.[3] Others, who are referred to as "non-responders", do not produce any gas; it has not yet been determined whether they might actually produce another gas. In addition to hydrogen and methane, some facilities also make use of the carbon dioxide (CO2) in the patients’ breath to determine whether the breath samples being analyzed are contaminated (either with room air or bronchial dead-space air).
[7] The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. Either way, the damage to hydrogen’s reputation as a lifting gas was already done.
Hydrogen Breath Test
Turquet De Mayerne repeated Paracelsus’s experiment in 1650 and found that the gas was flammable.(2) Neither Paracelsus nor De Mayerne proposed that hydrogen could be a new element. Indeed, Paracelsus believed there were only three elements – the tria prima – salt, sulfur, and mercury – and that all other substances were made of different combinations of these three. In its most common form, the hydrogen atom is made of a single proton, one electron, and no neutrons. Hydrogen is the only element that can exist without neutrons.
You are currently browsing the monthly archive for November 2009. The Schrödinger equation Read the rest of this entry » After a one-week hiatus, we are resuming our reading seminar of the Hrushovski paper. This week, we are taking a break from the paper proper, and are instead focusing on the subject of stable theories (or more precisely, {\omega}-stable theories), which form an important component of the general model-theoretic machinery that the Hrushovski paper uses. (Actually, Hrushovski’s paper needs to work with more general theories than the stable ones, but apparently many of the tools used to study stable theories will generalise to the theories studied in this paper.) Roughly speaking, stable theories are those in which there are “few” definable sets; a classic example is the theory of algebraically closed fields (of characteristic zero, say), in which the only definable sets are boolean combinations of algebraic varieties. Because of this paucity of definable sets, it becomes possible to define the notion of the Morley rank of a definable set (analogous to the dimension of an algebraic set), together with the more refined notion of Morley degree of such sets (analogous to the number of top-dimensional irreducible components of an algebraic set). Stable theories can also be characterised by their inability to order infinite collections of elements in a definable fashion. The material here was presented by Anush Tserunyan; her notes on the subject can be found here. Let me also repeat the previous list of resources on this paper (updated slightly): Read the rest of this entry » [A little bit of advertising on behalf of my maths dept.  Unfortunately funding for this scholarship was secured only very recently, so the application deadline is extremely near, which is why I am publicising it here, in case someone here may know of a suitable applicant. – T.] UCLA Mathematics has launched a new scholarship to be granted to an entering freshman who has an exceptional background and promise in mathematics. The UCLA Math Undergraduate Merit Scholarship provides for full tuition, and a room and board allowance. To be considered for fall 2010, candidates must apply on or before November 30, 2009. Details and online application for the scholarship are available here. Eligibility Requirements: • 12th grader applying to UCLA for admission in Fall of 2010. • Outstanding academic record and standardized test scores. • Evidence of exceptional background and promise in mathematics, such as: placing in the top 25% in the U.S.A. Mathematics Olympiad (USAMO) or comparable (International Mathematics Olympiad level) performance on a similar national competition. • Strong preference will be given to International Mathematics Olympiad medalists. Let {X} be a finite subset of a non-commutative group {G}. As mentioned previously on this blog (as well as in the current logic reading seminar), there is some interest in classifying those {X} which obey small doubling conditions such as {|X \cdot X| = O(|X|)} or {|X \cdot X^{-1}| = O(|X|)}. A full classification here has still not been established. However, I wanted to record here an elementary argument (based on Exercise 2.6.5 of my book with Van Vu, which in turn is based on this paper of Izabella Laba) that handles the case when {|X \cdot X|} is very close to {|X|}: Proposition 1 If {|X^{-1} \cdot X| < \frac{3}{2} |X|}, then {X \cdot X^{-1}} and {X^{-1} \cdot X} are both finite groups, which are conjugate to each other. 
In particular, {X} is contained in the right-coset (or left-coset) of a group of order less than {\frac{3}{2} |X|}. Remark 1 The constant {\frac{3}{2}} is completely sharp; consider the case when {X = \{e, x\}} where {e} is the identity and {x} is an element of order larger than {2}. This is a small example, but one can make it as large as one pleases by taking the direct product of {X} and {G} with any finite group. In the converse direction, we see that whenever {X} is contained in the right-coset {S \cdot x} (resp. left-coset {x \cdot S}) of a group of order less than {2|X|}, then {X \cdot X^{-1}} (resp. {X^{-1} \cdot X}) is necessarily equal to all of {S}, by the inclusion-exclusion principle (see the proof below for a related argument). Proof: We begin by showing that {S := X \cdot X^{-1}} is a group. As {S} is symmetric and contains the identity, it suffices to show that this set is closed under addition. Let {a, b \in S}. Then we can write {a=xy^{-1}} and {b=zw^{-1}} for {x,y,z,w \in X}. If {y} were equal to {z}, then {ab = xw^{-1} \in X \cdot X^{-1}} and we would be done. Of course, there is no reason why {y} should equal {z}; but we can use the hypothesis {|X^{-1} \cdot X| < \frac{3}{2}|X|} to boost this as follows. Observe that {x^{-1} \cdot X} and {y^{-1} \cdot X} both have cardinality {|X|} and lie inside {X^{-1} \cdot X}, which has cardinality strictly less than {\frac{3}{2} |X|}. By the inclusion-exclusion principle, this forces {x^{-1} \cdot X \cap y^{-1} \cdot X} to have cardinality greater than {\frac{1}{2}|X|}. In other words, there exist more than {\frac{1}{2}|X|} pairs {x',y' \in X} such that {x^{-1} x' = y^{-1} y'}, which implies that {a = x' (y')^{-1}}. Thus there are more than {\frac{1}{2}|X|} elements {y' \in X} such that {a = x' (y')^{-1}} for some {x'\in X} (since {x'} is uniquely determined by {y'}); similarly, there exists more than {\frac{1}{2}|X|} elements {z' \in X} such that {b = z' (w')^{-1}} for some {w' \in X}. Again by inclusion-exclusion, we can thus find {y'=z'} in {X} for which one has simultaneous representations {a = x' (y')^{-1}} and {b = y' (z')^{-1}}, and so {ab = x'(z')^{-1} \in X \cdot X^{-1}}, and the claim follows. In the course of the above argument we showed that every element of the group {S} has more than {\frac{1}{2}|X|} representations of the form {xy^{-1}} for {x,y \in X}. But there are only {|X|^2} pairs {(x,y)} available, and thus {|S| < 2|X|}. Now let {x} be any element of {X}. Since {X \cdot x^{-1} \subset S}, we have {X \subset S \cdot x}, and so {X^{-1} \cdot X \subset x^{-1} \cdot S \cdot x}. Conversely, every element of {x^{-1} \cdot S \cdot x} has exactly {|S|} representations of the form {z^{-1} w} where {z, w \in S \cdot x}. Since {X} occupies more than half of {S \cdot x}, we thus see from the inclusion-exclusion principle, there is thus at least one representation {z^{-1} w} for which {z, w} both lie in {X}. In other words, {x^{-1} \cdot S \cdot x = X^{-1} \cdot X}, and the claim follows. \Box To relate this to the classical doubling constants {|X \cdot X|/|X|}, we first make an easy observation: Lemma 2 If {|X \cdot X| < 2|X|}, then {X \cdot X^{-1} = X^{-1} \cdot X}. Again, this is sharp; consider {X} equal to {\{x,y\}} where {x,y} generate a free group. Proof: Suppose that {xy^{-1}} is an element of {X \cdot X^{-1}} for some {x,y \in X}. Then the sets {X \cdot x} and {X \cdot y} have cardinality {|X|} and lie in {X \cdot X}, so by the inclusion-exclusion principle, the two sets intersect. 
Thus there exist {z,w \in X} such that {zx=wy}, thus {xy^{-1}=z^{-1}w \in X^{-1} \cdot X}. This shows that {X \cdot X^{-1}} is contained in {X^{-1} \cdot X}. The converse inclusion is proven similarly. \Box Proposition 3 If {|X \cdot X| < \frac{3}{2} |X|}, then {S := X \cdot X^{-1}} is a finite group of order {|X \cdot X|}, and {X \subset S \cdot x = x \cdot S} for some {x} in the normaliser of {S}. The factor {\frac{3}{2}} is sharp, by the same example used to show sharpness of Proposition 1. However, there seems to be some room for further improvement if one weakens the conclusion a bit; see below the fold. Proof: Let {S = X^{-1} \cdot X = X \cdot X^{-1}} (the two sets being equal by Lemma 2). By the argument used to prove Lemma 2, every element of {S} has more than {\frac{1}{2}|X|} representations of the form {xy^{-1}} for {x,y \in X}. By the argument used to prove Proposition 1, this shows that {S} is a group; also, since there are only {|X|^2} pairs {(x,y)}, we also see that {|S| < 2|X|}. Pick any {x \in X}; then {x^{-1} \cdot X, X \cdot x^{-1} \subset S}, and so {X \subset x\cdot S, S \cdot x}. Because every element of {x \cdot S \cdot x} has {|S|} representations of the form {yz} with {y \in x \cdot S}, {z \in S \cdot x}, and {X} occupies more than half of {x \cdot S} and of {S \cdot x}, we conclude that each element of {x \cdot S \cdot x} lies in {X \cdot X}, and so {X \cdot X = x \cdot S \cdot x} and {|S| = |X \cdot X|}. The intersection of the groups {S} and {x \cdot S \cdot x^{-1}} contains {X \cdot x^{-1}}, which is more than half the size of {S}, and so we must have {S = x \cdot S \cdot x^{-1}}, i.e. {x} normalises {S}, and the proposition follows. \Box Because the arguments here are so elementary, they extend easily to the infinitary setting in which {X} is now an infinite set, but has finite measure with respect to some translation-invariant Keisler measure {\mu}. We omit the details. (I am hoping that this observation may help simplify some of the theory in that setting.)
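As a sanity check (an illustration added here, not part of the original argument), Proposition 1 and its conjugacy statement can be verified exhaustively for every subset of a small group such as {S_3}, with permutations represented as tuples:

```python
from itertools import combinations, permutations

# Exhaustive check of Proposition 1 over all non-empty subsets X of S_3.

def compose(p, q):              # (p*q)(i) = p(q(i)); permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))            # S_3, |G| = 6

def product_set(A, B):
    return {compose(a, b) for a in A for b in B}

def is_subgroup(S):
    return (all(compose(a, b) in S for a in S for b in S)
            and all(inverse(a) in S for a in S))

for r in range(1, len(G) + 1):
    for X in combinations(G, r):
        Xinv = [inverse(x) for x in X]
        left = product_set(Xinv, X)          # X^{-1} . X
        right = product_set(X, Xinv)         # X . X^{-1}
        if len(left) < 1.5 * len(X):         # hypothesis of Proposition 1
            assert is_subgroup(left) and is_subgroup(right)
            # the two groups are conjugate: g^{-1} (X.X^{-1}) g = X^{-1}.X for some g
            assert any(product_set([inverse(g)], product_set(right, [g])) == left
                       for g in G)

print("Proposition 1 verified for every subset of S_3")
```

The same brute-force check runs in a few seconds for S_4 as well; it is of course no substitute for the proof, but it is a convenient way to probe whether the constant 3/2 can be relaxed for particular small groups.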
John Stewart Bell
Born: 28 July 1928, Belfast, Ireland
Died: 1 October 1990, Geneva, Switzerland
Indeed, almost wholly due to Bell's pioneering efforts, the subject of quantum foundations, experimental as well as theoretical and conceptual, has become a focus of major interest for scientists from many countries, and has taught us much of fundamental importance, not just about quantum theory, but about the nature of the physical universe. In addition, and this could scarcely have been predicted even as recently as the mid-1990s, several years after Bell's death, many of the concepts studied by Bell and those who developed his work have formed the basis of the new subject area of quantum information theory, which includes such topics as quantum computing and quantum cryptography. Attention to quantum information theory has increased enormously over the last few years, and the subject seems certain to be one of the most important growth areas of science in the twenty-first century. John Stewart Bell's parents had both lived in the north of Ireland for several generations. His father was also named John, so John Stewart has always been called Stewart within the family. His mother, Annie, encouraged the children to concentrate on their education, which, she felt, was the key to a fulfilling and dignified life. However, of her four children - John had an elder sister, Ruby, and two younger brothers, David and Robert - only John was able to stay on at school much over fourteen. Their family was not well-off, and at this time there was no universal secondary education, and to move from a background such as that of the Bells to university was exceptionally unusual. Bell himself was interested in books, and particularly interested in science from an early age. He was extremely successful in his first schools, Ulsterville Avenue and Fane Street, and, at the age of eleven, passed with ease his examination to move to secondary education. Unfortunately the cost of attending one of Belfast's prestigious grammar schools was prohibitive, but enough money was found for Bell to move to the Belfast Technical High School, where a full academic curriculum which qualified him for University entrance was coupled with vocational studies. Bell then spent a year as a technician in the Physics Department at Queen's University Belfast, where the senior members of staff in the Department, Professor Karl Emeleus and Dr Robert Sloane, were exceptionally helpful, lending Bell books and allowing him to attend the first year lectures. Bell was able to enter the Department as a student in 1945. His progress was extremely successful, and he graduated with First-Class Honours in Experimental Physics in 1948. He was able to spend one more year as a student, in that year achieving a second degree, again with First-Class Honours, this time in Mathematical Physics. In Mathematical Physics, his main teacher was Professor Peter Paul Ewald, famous as one of the founders of X-ray crystallography; Ewald was a refugee from Nazi Germany. Bell was already thinking deeply about quantum theory, not just how to use it, but its conceptual meaning.
In an interview with Jeremy Bernstein, given towards the end of his life and quoted in Bernstein's book, Bell reported being perplexed by the usual statement of the Heisenberg uncertainty or indeterminacy principle (Δx Δp ≳ ℏ, where Δx and Δp are the uncertainties or indeterminacies, depending on one's philosophical position, in position and momentum respectively, and ℏ is the reduced Planck constant). It looked as if you could take this size and then the position is well defined, or that size and then the momentum is well defined. It sounded as if you were just free to make it what you wished. It was only slowly that I realized that it's not a question of what you wish. It's really a question of what apparatus has produced this situation. But for me it was a bit of a fight to get through to that. It was not very clearly set out in the books and courses that were available to me. I remember arguing with one of my professors, a Doctor Sloane, about that. I was getting very heated and accusing him, more or less, of dishonesty. He was getting very heated too and said, 'You're going too far'. At the conclusion of his undergraduate studies Bell would have liked to work for a PhD. He would also have liked to study the conceptual basis of quantum theory more thoroughly. Economic considerations, though, meant that he had to forget about quantum theory, at least for the moment, and get a job, and in 1949 he joined the UK Atomic Energy Research Establishment at Harwell, though he soon moved to the accelerator design group at Malvern. It was here that he met his future wife, Mary Ross, who came with degrees in mathematics and physics from Scotland. They married in 1954 and had a long and successful marriage. Mary was to stay in accelerator design through her career; towards the end of John's life he returned to problems in accelerator design and he and Mary wrote some papers jointly. Through his career he gained much from discussions with Mary, and when, in 1987, his papers on quantum theory were collected, he included the following words: I here renew very especially my warm thanks to Mary Bell. When I look through these papers again I see her everywhere. Accelerator design was, of course, a relatively new field, and Bell's work at Malvern consisted of tracing the paths of charged particles through accelerators. In these days before computers, this required a rigorous understanding of electromagnetism, and the insight and judgment to make the necessary mathematical simplifications required to make the problem tractable on a mechanical calculator, while retaining the essential features of the physics. Bell's work was masterly. In 1951 Bell was offered a year's leave of absence to work with Rudolf Peierls, Professor of Physics at Birmingham University. During his time in Birmingham, Bell did work of great importance, producing his version of the celebrated CPT theorem of quantum field theory. This theorem showed that under the combined action of three operators on a physical event: P, the parity operator, which performed a reflection; C, the charge conjugation operator, which replaced particles by anti-particles; and T, which performed a time reversal, the result would be another possible physical event. Unfortunately Gerhard Lüders and Wolfgang Pauli proved the same theorem a little ahead of Bell, and they received all the credit. However, Bell added another piece of work and gained a PhD in 1956.
He also gained the highly valuable support of Peierls, and when he returned from Birmingham he went to Harwell to join a new group set up to work on theoretical elementary particle physics. He remained at Harwell till 1960, but he and Mary gradually became concerned that Harwell was moving away from fundamental work to more applied areas of physics, and they both moved to CERN, the Centre for European Nuclear Research in Geneva. Here they spent the remainder of their careers. Bell published around 80 papers in the area of high-energy physics and quantum field theory. Some were fairly closely related to experimental physics programmes at CERN, but most were in general theoretical areas. The most important work was that of 1969 leading to the Adler-Bell-Jackiw (ABJ) anomaly in quantum field theory. This resulted from joint work of Bell and Roman Jackiw, which was then clarified by Stephen Adler. They showed that the standard current algebra model contained an ambiguity. Quantisation led to a symmetry breaking of the model. This work solved an outstanding problem in particle physics; theory appeared to predict that the neutral pion could not decay into two photons, but experimentally the decay took place, as explained by ABJ. Over the subsequent thirty years, the study of such anomalies became important in many areas of particle physics. Reinhold Bertlmann, who himself did important work with Bell, has written a book titled Anomalies in Quantum Field Theory, and the two surviving members of ABJ, Adler and Jackiw, shared the 1998 Dirac Medal of the International Centre for Theoretical Physics in Trieste for their work. While particle physics and quantum field theory were the work Bell was paid to do, and he made excellent contributions, his great love was for quantum theory, and it is for his work here that he will be remembered. As we have seen, he was concerned about the fundamental meaning of the theory from the time he was an undergraduate, and many of his important arguments had their basis at that time. The conceptual problems may be outlined using the spin-1/2 system. We may say that when the state-vector is |+⟩ or |−⟩, sz is equal to ℏ/2 or −ℏ/2 respectively, but, if one restricts oneself to the Schrödinger equation, sx and sy just do not have values. All one can say is that if a measurement of sx, for example, is performed, the probabilities of the result obtained being either ℏ/2 or −ℏ/2 are both 1/2. If, on the other hand, the initial state-vector has the general form c₊|+⟩ + c₋|−⟩, then all we can say is that in a measurement of sz, the probability of obtaining the value ℏ/2 is |c₊|², and that of obtaining the value −ℏ/2 is |c₋|². Before any measurement, sz just does not have a value. These statements contradict two of our basic notions. We are rejecting realism, which tells us that a quantity has a value, or, to put things more grandly, that the physical world has an existence independent of the actions of any observer. Einstein was particularly disturbed by this abandonment of realism -- he insisted on the existence of an observer-free realm. We are also rejecting determinism, the belief that, if we have a complete knowledge of the state of the system, we can predict exactly how it will behave. In this case, we know the state-vector of the system, but cannot predict the result of measuring sz.
It is clear that we could try to recover realism and determinism if we allowed the view that the Schrödinger equation, and the wave-function or state-vector, might not contain all the information that is available about the system. There might be other quantities giving extra information -- hidden variables. As a simple example, the state-vector above might apply to an ensemble of many systems, but in addition a hidden variable for each system might say what the actual value of sz might be. Realism and determinism would both be restored; sz would have a value at all times, and, with full knowledge of the state of the system, including the value of the hidden variable, we can predict the result of the measurement of sz. A complete theory of hidden variables must actually be more complicated than this -- we must remember that we wish to predict the results of measuring not just sz, but also sx and sy, and any other component of s. Nevertheless it would appear natural that the possibility of supplementing the Schrödinger equation with hidden variables would have been taken seriously. In fact, though, Niels Bohr and Werner Heisenberg were convinced that one should not aim at realism. They were therefore pleased when John von Neumann proved a theorem claiming to show rigorously that it is impossible to add hidden variables to the structure of quantum theory. This was to be very generally accepted for over thirty years. Bohr put forward his (perhaps rather obscure) framework of complementarity, which attempted to explain why one should not expect to measure sx and sy (or x and p) simultaneously. This was his Copenhagen interpretation of quantum theory. Einstein however rejected this, and aimed to restore realism. Physicists almost unanimously favoured Bohr. Einstein's strongest argument, though this did not become very generally apparent for several decades, lay in the famous Einstein-Podolsky-Rosen (EPR) argument of 1935, constructed by Einstein with the assistance of his two younger co-workers, Boris Podolsky and Nathan Rosen. Here, as is usually done, we discuss a simpler version of the argument, thought up somewhat later by David Bohm. Two spin-1/2 particles are considered; they are formed from the decay of a spin-0 particle, and they move outwards from this decay in opposite directions. The combined state-vector may be written as (1/√2)(|+⟩₁|−⟩₂ − |−⟩₁|+⟩₂), where the subscripts 1 and 2 label the two particles and the |±⟩ states are those introduced above. This state-vector has a strange form. The two particles do not appear in it independently; rather either state of particle 1 is correlated with a particular state of particle 2. The state-vector is said to be entangled. Now imagine measuring s1z. If we get +ℏ/2, we know that an immediate measurement of s2z is bound to yield −ℏ/2, and vice-versa, although, at least according to Copenhagen, before any measurement, no component of either spin has a particular value. The result of this argument is that at least one of three statements must be true: (1) The particles must be exchanging information instantaneously, i.e. faster than light; (2) There are hidden variables, so the results of the experiments are pre-ordained; or (3) Quantum theory is not exactly true in these rather special experiments. The first possibility may be described as the renunciation of the principle of locality, whereby signals cannot be passed from one particle to another faster than the speed of light. This suggestion was anathema to Einstein.
He therefore concluded that if quantum theory was correct, so one ruled out possibility (3), then (2) must be true. In Einstein's terms, quantum theory was not complete but needed to be supplemented by hidden variables. Bell regarded himself as a follower of Einstein. He told Bernstein: I felt that Einstein's intellectual superiority over Bohr, in this instance, was enormous; a vast gulf between the man who saw clearly what was needed, and the obscurantist. Bell thus supported realism in the form of hidden variables. He was delighted by the creation in 1952 by David Bohm of a version of quantum theory which included hidden variables, seemingly in defiance of von Neumann's result. Bell wrote: In 1952 I saw the impossible done. In 1964, Bell made his own great contributions to quantum theory. First he constructed his own hidden variable account of a measurement of any component of spin. This had the advantage of being much simpler than Bohm's work, and thus much more difficult just to ignore. He then went much further than Bohm by demonstrating quite clearly exactly what was wrong with von Neumann's argument. Von Neumann had illegitimately extended to his putative hidden variables a result from the variables of quantum theory: that the expectation value of A + B is equal to the sum of the expectation values of A and of B. (The expectation value of a variable is the mean of the possible experimental results weighted by their probability of occurrence.) Once this mistake was realised, it was clear that hidden variables theories of quantum theory were possible. However Bell then demonstrated certain unwelcome properties that hidden variable theories must have. Most importantly they must be non-local. He demonstrated this by extending the EPR argument, allowing measurements in each wing of the apparatus of any component of spin, not just sz. He found that, even when hidden variables are allowed, in some cases the result obtained in one wing must depend on which component of spin is measured in the other; this violates locality. The solution to the EPR problem that Einstein would have liked, rejecting (1) but retaining (2), was illegitimate. Even if one retained (2), as long as one continued to rule out (3) one had also to retain (1). Bell had shown rigorously that one could not have local realistic theories of quantum theory. Henry Stapp called this result 'the most profound discovery of science'. The other property of hidden variables that Bell demonstrated was that they must be contextual. Except in the simplest cases, the result you obtain when measuring a variable must depend on which other quantities are measured simultaneously. Thus hidden variables cannot be thought of as saying what value a quantity 'has', only what value we will get if we measure it. Let us return to the locality issue. So far it has been assumed that quantum theory is exactly true, but of course this can never be known. John Clauser, Richard Holt, Michael Horne and Abner Shimony adapted Bell's work to give a direct experimental test of local realism. Thus was born the famous CHSH-Bell inequality, often just called the Bell inequality. In EPR-type experiments, this inequality is obeyed by local hidden variables, but may be violated by other theories, including quantum theory. Bell had reached what has been called experimental philosophy; results of considerable philosophical importance may be obtained from experiment.
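A short calculation illustrates the violation referred to here. For the singlet state, quantum mechanics predicts the correlation E(a, b) = −cos(a − b) between spin measurements along directions a and b, and with the standard choice of measurement angles the CHSH combination reaches 2√2, above the bound of 2 obeyed by any local hidden variable theory. The angles and the correlation formula below are the usual textbook values, included as an illustration rather than as part of the original biography.

```python
import numpy as np

# Numerical illustration of the CHSH (Clauser-Horne-Shimony-Holt) inequality
# for the singlet state discussed above.  E(a, b) = -cos(a - b) is the standard
# quantum prediction; the angles are the usual choices maximising the violation.

def E(a, b):                         # quantum correlation for the singlet state
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2              # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4    # Bob's two measurement settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("CHSH combination |S| =", abs(S))              # 2*sqrt(2) ~ 2.83
print("local hidden-variable bound = 2, quantum bound = 2*sqrt(2)")
```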
The Bell inequalities have been tested over nearly thirty years with increasing sophistication, the experimental tests actually using photons with entangled polarisations, which are mathematically equivalent to the entangled spins discussed above. While many scientists have been involved, a selection of the most important would include Clauser, Alain Aspect and Anton Zeilinger. While at least one loophole still remains to be closed [in August 2002], it seems virtually certain that local realism is violated, and that quantum theory can predict the results of all the experiments. For the rest of his life, Bell continued to criticise the usual theories of measurement in quantum theory. Gradually it became at least a little more acceptable to question Bohr and von Neumann , and study of the meaning of quantum theory has become a respectable activity. Bell himself became a Fellow of the Royal Society as early as 1972, but it was much later before he obtained the awards he deserved. In the last few years of his life he was awarded the Hughes Medal of the Royal Society , the Dirac Medal of the Institute of Physics, and the Heineman Prize of the American Physical Society. Within a fortnight in July 1988 he received honorary degrees from both Queen's and Trinity College Dublin. He was nominated for a Nobel Prize; if he had lived ten years longer he would certainly have received it. This was not to be. John Bell died suddenly from a stroke on 1st October 1990. Since that date, the amount of interest in his work, and in its application to quantum information theory has been steadily increasing. Source:School of Mathematics and Statistics University of St Andrews, Scotland
Normalisable wavefunction
In quantum mechanics, wave functions which describe real particles must be normalizable: the probability of the particle to occupy any place must equal 1. In one dimension this is expressed as
∫ from −∞ to ∞ of ψ*(x) ψ(x) dx = 1,
or identically
∫ from −∞ to ∞ of |ψ(x)|² dx = 1,
where the integration from −∞ to ∞ indicates that the probability that the particle exists somewhere is unity. All wave functions which represent real particles must be normalizable, that is, they must have a total probability of one - they must describe the probability of the particle existing as 100%. For certain boundary conditions, this trait enables anyone who solves the Schrödinger equation to discard solutions which do not have a finite integral over a given interval. For example, this disqualifies periodic functions as wave function solutions for infinite intervals, while those functions can be solutions for finite intervals.
Derivation of normalization
In general, ψ is a complex function. However, ψ*ψ is real, greater than or equal to zero, and is known as a probability density function. Here, ψ* indicates the complex conjugate. This means that
p(x) = ψ*(x) ψ(x),   (1)
where p(x) is the probability of finding the particle at x. Equation (1) is given by the definition of a probability density function. Since the particle exists, its probability of being anywhere in space must be equal to 1. Therefore we integrate over all space:
∫ from −∞ to ∞ of ψ*(x) ψ(x) dx = 1.   (2)
If the integral is finite, we can multiply the wave function by a constant such that the integral is equal to 1. Alternatively, if the wave function already contains an appropriate arbitrary constant, we can solve equation (2) to find the value of this constant which normalizes the wave function.
Example of normalization
A particle is restricted to a 1D region, and its wave function contains an arbitrary constant. To normalize the wave function we need to find the value of this constant; i.e., solve equation (2) for it. Substituting the wave function into equation (2) fixes the constant, and hence gives the normalized wave function.
Proof that wave function normalization does not change associated properties
If normalization of a wave function changed the properties associated with the wave function, the process would be pointless, as we still could not obtain any information about the properties of the particle from the un-normalized wave function. It is therefore important to establish that the properties associated with the wave function are not altered by normalization.
All properties of the particle, such as the probability distribution, momentum, energy and expectation value of position, are derived from the Schrödinger wave equation. The properties are therefore unchanged if the Schrödinger wave equation is invariant under normalization. The Schrödinger wave equation is
-\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + V\psi = i\hbar \frac{\partial \psi}{\partial t}.
If \psi is normalized and replaced with A\psi, where A is the normalization constant, the Schrödinger wave equation becomes
-\frac{\hbar^2}{2m}\frac{\partial^2 (A\psi)}{\partial x^2} + V(A\psi) = i\hbar \frac{\partial (A\psi)}{\partial t},
and since A is a constant it factors out of every term and cancels, giving back the original Schrödinger wave equation. That is to say, the Schrödinger wave equation is invariant under normalization, and consequently the associated properties are unchanged.
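As a concrete illustration of the procedure, the short Python sketch below normalizes a wave function sampled on a one-dimensional grid; the Gaussian profile, grid spacing and phase are arbitrary example choices.

```python
import numpy as np

# Sample an (un-normalized) wave function on a uniform 1D grid.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 0.5 * x)   # arbitrary complex example

# Total probability of the un-normalized wave function: integral of |psi|^2 dx.
norm_sq = np.trapz(np.abs(psi)**2, dx=dx)

# Multiply by a constant A = 1/sqrt(integral) so that the integral equals 1.
psi_normalized = psi / np.sqrt(norm_sq)

print("before:", norm_sq)
print("after: ", np.trapz(np.abs(psi_normalized)**2, dx=dx))   # ~1.0
```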
Presentation on theme: "The Development of atomic theory" (presentation transcript) 1 The Development of atomic theory Chemistry Rules! 2 The Philosophical Era (circa 500–300 BCE) A time when logic ruled the land… This is a good era to do before Chapter 4 officially begins 3 Philosophical Era (Ancient Greece) Two ancient Greeks stand out in the advancement of chemistry. Their ideas were purely based on logic, without experimental support (as was common in that time) 4 Philosophical Era Democritus The most well-known proponent of the idea that matter was made of small, indivisible particles Called the small particles "atomos" meaning "that which cannot be divided" Believed properties of matter came from the properties of the "atomos" 5 Philosophical Era Aristotle (384-322 BCE) Famous philosopher of the ancient Greeks Believed matter was comprised of four elements Earth, Air, Fire, Water These elements had a total of four properties Dry, Moist, Hot, Cold People liked him – so this idea stayed 6 Alchemical Era (300 BCE ~ 1400 CE) The "Dark Ages" of Chemistry, where early chemists had to work in secret and encode their findings for fear of persecution This is another good era to do before Chapter 4 officially begins 7 Alchemical Era Alchemy was the closest thing to the study of chemistry for nearly two thousand years, based on the Aristotelian idea of the four elements of matter If you could change the properties, then you could change the elements themselves – lead to gold and immortality Very mystical study and experimentation with the elements and what was perceived as magic Study was persecuted, findings hidden in code 8 Alchemical Era Procedures of Alchemy Alchemy brought about many lab procedures We use some of the same methods and the names developed in these dark ages of chemistry 9 Alchemical Era Elements in Alchemy Alchemists studied many different materials, and their properties, in order to find a way to turn lead into gold and achieve immortality 10 Alchemical Era Alchemical symbols for various materials Alchemy had to be discussed in secret so that its students could avoid persecution 11 Alchemical Era Alchemists' Persecution Alchemy was tied to witchcraft and druids; it was perceived as heresy by the Catholic church Practitioners had to hide their trade or hobby Information was passed in code Coded messages were sent between friends Symbols were used to avoid readable words The growth of Chemistry was stunted by the oppression endured during this era (No such problems in the Far East – hence gunpowder) Ending the alchemy era with the Flame test lab is a good experience, and a preview for the spectroscopy to come. 12 The Classical Era (1400 CE – 1887 CE) The printing press heralds the widespread transfer and acquisition of knowledge This is a good section to do with Chapter 4, sections 1&2 (students read those simultaneously) The printing press was invented in Germany, and this led to the widespread transfer of knowledge in Europe. Other regions were more geographically restricted from this technological advancement.
13 Classical Era Foundations Robert Boyle departs from Aristotle (1661) Suggested in The Sceptical Chymist that a substance was not an element if it was made of more than one component Antoine Lavoisier Accepted Boyle's idea of elements Developed the concept of compounds Determined the Law of Conservation of Mass: there is no change in mass due to chemical reactions Discovered oxygen Recognized hydrogen as an element 14 Classical Era Foundations (continued) Joseph Proust (1790s) Determined the Law of Definite Proportions: elements combine in definite mass ratios to form compounds Robert Boyle Irish, Antoine Lavoisier (and wife) French, Joseph Proust French This slide is a good opportunity to comment on the ethnicity of scientists and how even Lavoisier's wife was highly involved with chemistry. Note: these are western Europeans; the printing press spread knowledge most quickly in western Europe. 15 Classical Era John Dalton [really famous] (1766-1844) Dalton returns to Democritus' ideas in 1803 with four postulates: All matter is made up of tiny particles called atoms All atoms of a given element are identical to one another and different from atoms of other elements Atoms of two or more different elements combine to form compounds. A particular compound is always made up of the same kinds of atoms and the same number of each kind of atom. A chemical reaction involves the rearrangement, separation, or combination of atoms. Atoms are never created or destroyed during a chemical reaction. John Dalton English (originally poor and self-educated) 16 Classical Era Defense of Atoms (After Dalton) Joseph Gay-Lussac: 2 L hydrogen (g) + 1 L oxygen (g) → 2 L water vapor (g) Experimental findings disagreed with some of Dalton's beliefs Amadeo Avogadro Suggested hydrogen and oxygen are diatomic molecules This solved the riddle over Gay-Lussac's experimental results Gay-Lussac had the only experiment that seemed to be contrary to Dalton's ideas. This was unsettling for Dalton, and many people began to seek a way to resolve this issue. Avogadro was the one to suggest a functional response, but living beyond the Swiss Alps, he was at a disadvantage in defending his ideas in the mainly English/French chemistry forum.
Joseph Gay-Lussac French, Amadeo Avogadro Italian lawyer 17 Classical Era Dalton's Disbelief Dalton refused Avogadro's diatomic molecules Dalton wrongly believed that similar types of atoms would repel, like poles of a magnet – hence no diatomic molecules Due to Dalton's reputation in chemistry, his ideas were believed over Avogadro's Sustaining Dalton's (wrong) theory, that mass corresponded to the number of atoms, led to confusion Avogadro's ideas lived on in Italy (south of the Alps) 18 Classical Era Avogadro's Number In 1860 a council of chemists met to solve the problems they had standardizing atomic masses This was only a problem because they kept Dalton's idea instead of Avogadro's An Italian chemistry teacher, Cannizzaro, presented His teaching pamphlet used simple math based on a corollary of Avogadro's theory – Avogadro's Number Avogadro's Number grouped atoms into moles: 6.022×10^23 parts = 1 mole (6.022×10^23 parts/mole) 19 Classical Era Mendeleev's Table (1869) Once a standard for atomic masses was made, people started to see trends These trends showed that properties gradually changed with atomic mass, but seemed to cycle periodically Dmitri Mendeleev was a Russian teacher He arranged the elements in a table so that his students could learn more easily Listed atoms by atomic masses New columns whenever the properties cycled Empty spots left – he predicted undiscovered elements Dmitri Mendeleev Russian teacher 20 Classical Era Mendeleev's table quickly became famous The B/W version on the left is one of Mendeleev's original Russian manuscripts. The image on the right is the same information translated into an English textbook – only a few years later. Here is a black and white copy of the manuscript, and an English textbook version 21 Classical Era **Don't Forget Newton!!! (1643-1727) Isaac Newton was very important to science He is most remembered for his contributions to physics, including gravity and much work in optics (light) He was the first person to divide white light into its parts Splitting light into parts led to many interesting discoveries Use spectroscopes of some kind to re-evaluate the flame test labs for their emission spectra. It will likely be a good idea to link this activity to the flame test, but instead use the spectral emission tubes. 22 The Subatomic Era (1897 CE – 1932 CE) The relatively quick discovery of things smaller than the once "indivisible" atom This is a good era to do with Chapter 4, section 3 23 Subatomic Era It's Electric! Electricity was studied throughout the classical era Ben Franklin's kite in a thunderstorm (1752) Electricity could flow through gases (atmosphere) 24 Subatomic Era Cathode Ray Tubes Glass chambers used to study electricity in gases Crookes observed glowing rays emitted from the cathode Glowing rays were observed in all gases, and even gasless set-ups 25 Subatomic Era J.J.
Thomson English (1897) Subjected cathode rays to magnetic fields Using three different arrangements of CRTs he was able to determine that the cathode rays… Were streams of negatively charged particles Those particles had very low mass-to-charge ratios The observed mass-to-charge ratio was over one thousand times smaller than that of hydrogen ions The CRT particles had to be much lighter than hydrogen and/or very highly charged Charge-to-mass ratio of the electron: about 1.76×10^11 C/kg Charge-to-mass ratio of the proton (H+): 9.578×10^7 C/kg The schematic depiction of the CRT given here is one of only three types of CRTs that Thomson experimented with. He needed all three types to collect the data needed to get the information he presented. Also, the particular schematic shown here is a rudimentary schematic for any CRT television. An interesting talking point for students, who may have some experience with the latter. 26 Subatomic Era Robert Millikan American (1909) Thomson needed to know either the mass or the charge of his negative particles to describe them Millikan's oil drop experiment let him find that the charge on objects is always some multiple of 1.60×10^-19 C He proposed this as the basic increment of charge Applying this charge to Thomson's particles, he found the mass to be much less than any atom This is a good time to read the excerpt from the Caltech commencement speech about refining Millikan's results. It greatly highlights the idea of scientific bias and how this affects "real" scientists and what students need to be leery of in their own classroom experiments (and other life scenarios). Find an atomizer – and build this set-up. Learn how to either do it for real as a demonstration, or make it with an illusion good enough that the students can't tell it's fake. 27 Subatomic Era Plum Pudding Model (1904) With the combined work of Thomson and Millikan the first subatomic particle was established! Electrons – one part of an atom with one negative fundamental increment of electrical charge Since whole atoms were known to be electrically neutral, Thomson developed the plum pudding model of the atom Positively (+) charged majority Negatively (-) charged electrons 28 Subatomic Era Ernest Rutherford New Zealander (1910) Rutherford worked with radiation and had heard of Thomson's plum pudding model He wanted to use radiation to prove Thomson's model He set up an alpha particle gun (with help from Marie Curie) to shoot at an ultra-thin piece of gold foil, with a Geiger counter on the other side This is another good break to comment on the diversity of people in science. Marie Curie was a BIG DEAL. She had 2 Nobel prizes to be proud of.
Ernest Rutherford New Zealand, Marie Curie Polish/French 29 Subatomic Era Rutherford's Results Rutherford's results were not what he expected Expected to have all alpha particles go straight through all of the atoms Saw that occasionally an alpha particle would ricochet Determined the positive charge of an atom must be held in a massive, centrally located "nucleus" 30 Subatomic Era The Second Subatomic After more realizations and experiments the second subatomic particle was formally named (1911) Through more nuclear physics Rutherford determined all atomic nuclei were made up of hydrogen nuclei Hydrogen nuclei are deemed protons Antonius van den Broek suggested elements on the periodic table are in order by their increasing number of protons, not Mendeleev's atomic masses Proton: the massive subatomic particle, within the nucleus of an atom, with a single positive charge 31 Subatomic Era The Planetary Model (1911) Ernest Rutherford took his idea of a nucleus, and the known electrons, to construct a new atomic model There is a compact nucleus The nucleus, made of nucleons, is the location of positive charge in the atom The charge of the nucleus might be proportional to its mass The orbit of the electrons kept them from falling directly into the nucleus, just like planetary motion The Rutherford Model or The Planetary Model The image shows a distinction between two types of particles in the nucleus. Rutherford's model technically would not have had this – or even possibly known about neutrons. In fact, Rutherford's model was only vaguely described even in the article he used to propose it – he was very leery of committing to more than what he absolutely knew to be true about the atom. (He never even said "electron orbits." That idea was just pieced together from commentary on Rutherford's model and what came after it.) You can raise questions as to why that may have been a good move… 32 Subatomic Era The Third Subatomic (1932) Electrons and protons were identified as particles, but these alone could not fully describe atoms The charge-to-mass ratio of atoms was off without another addition James Chadwick studied an unnamed form of radiation – he found it to be electrically neutral and about the mass of a proton Including these particles in the nucleus of the atom solved all discrepancies that were previously observed James Chadwick English 33 Subatomic Era Subatomic Review Electrons Orbit the nucleus Very small mass: about 9.11×10^-31 kg Negatively charged: about −1.60×10^-19 C Nucleons: all particles that make up the nucleus Protons Reside in the nucleus Relatively large mass: about 1.67×10^-27 kg Positively charged: about +1.60×10^-19 C Neutrons Reside in the nucleus Relatively large mass: about 1.67×10^-27 kg No electric charge 34 Subatomic Era Atomic Variance An atom's element is defined by the number of protons Any atom with a non-neutral charge is called an ion Ions exist because the atom has either more or fewer electrons than protons There are several different forms of elements, called isotopes, that vary in the number of neutrons 35 The Modern Era (1900 CE – Present) The Quark Era starts in 1964, but that advance can be regarded as outside the realm of chemistry – instead a part of nuclear physics Comment on the scope of the course, and how chemistry is distinct from other "nearby" physical sciences.
****Warning: Before this era there needs to be a presentation on the nature of light and EM radiation. Chapter 5 in your book! Read pages. 36 Modern Era It all begins… (1900) Scientists believed that we had answered all major questions, only leaving a few items to finish Max Planck was commissioned to build a better light bulb He wanted to answer questions about "black body radiation" He reluctantly used statistics to solve questions (he was very conservative) December 14, 1900 Statistics was a "dirty word" at the time in science. It couldn't make concrete predictions or descriptive and absolute rules about the world, like calculus could. Max Planck German, Physicist 37 Modern Era Statistics in Science Most science uses regular math (ex: F=ma) This era starts to deviate from tradition… The second law of thermodynamics (Boltzmann) All systems move toward a less organized state Planck knew about Boltzmann's ideas – but disapproved of deviation from tradition Planck reluctantly adopted statistics to best explain experimental findings, although he didn't want to be progressive Einstein interpreted Planck's use of statistics to start quantum theory Highlight the supremacy of the second law of thermodynamics in chemistry. Inform the students that it is one thing they will have to understand in chemistry. Take the time to comment on the interaction between scientists in this time era – the social aspect of science is important. 38 Modern Era Quantum Theory Energy can only be transferred in small packets Planck saw the emission of light could not be explained by the classical physics of the day Energy is transferred in whole-number multiples of hν: ΔE = nhν, where ΔE = energy transferred, n = integer multiple, ν = frequency of light, h = Planck constant (4.134×10^-15 eV·s) Contrast this type of math to statistics, and ensure the students know they will be held accountable for basic algebra skills in this class. 39 Modern Era Photon – light packets Light partially behaves like particles that Einstein called photons De Broglie said all matter can be described by similar wave packets This blurred the line between particles and waves λ = h/p Highlight that this is the second time students have seen this slide. 40 Modern Era λ = h/p …or (λ = h/mv) Wavelength = Planck's constant / momentum Wavelength – wave property Planck's constant – a fundamental constant, about 6.63×10^-34 m^2 kg/s Momentum – a mechanical property Momentum = mass × velocity (p = mv) Find the wavelength of lots of things! Highlight that this is the second time students have seen this slide. 41 Modern Era Explaining Data The quantum theory suddenly meant energy could only be transferred in discrete amounts We had observed emission spectra and knew the Rutherford model, but neither was fully explained Emission spectra of iron (Fe) Define discrete. Emission spectra of hydrogen (H) 42 Modern Era Bohr's Planetary Model of the Atom Bohr integrated all known information into a new, mathematically based model of the atom He kept electrons in orbits around the nucleus Only allowed certain specific electron orbits for each atom Electron transitions between energy levels (orbits) could only be jumps – nothing could be in between these energy levels (like steps on stairs) Make connection between orbits and energy levels. Be sure that students know that he drew in the lines that Rutherford was not willing to do.
His model only worked well for hydrogen atoms Niels Bohr Danish Physicist 43 Modern Era Discrete Electron Energy Levels De Broglie said that electrons always act like waves This supported the idea of discrete energy levels Only certain wavelengths will "fit" around the atom Shake a jump rope with someone, slowly increasing speed. Comment on how not all speeds will create a standing wave, and how this relates to discrete orbits or energy levels. 44 Modern Era Bohr Energy Levels Electrons can only travel in specific energy levels E = −13.6 eV × Z²/n², where E = the actual energy of the given energy level, Z = the nuclear charge (number of protons), and n = 1, 2, 3, … labels the level This linked the properties of atoms with the observations of emission spectra 45 Modern Era Bohr Energy Levels Atoms are typically found in the "ground state" Electrons want to exist in the lowest energy levels available Atoms can be raised to an "excited state" Electrons can be put into higher energy levels than usual, but energy has to be added to do so Lowest energy levels due to the 2nd law of thermodynamics 46 Modern Era Energy Level Transitions Electron jump: quantum leap! Electrons can jump from any lower energy level to a higher energy level and vice versa Total energy of the atom changes Light is absorbed to get to higher energy states Light is emitted when electrons jump to lower energy states 47 Modern Era Electron Transitions Only specific wavelengths of light are absorbed and emitted by atoms – you have seen these before Light emitted by atoms is the emission spectrum ΔE = E_final − E_initial, E = hν, h = Planck's constant = 4.134×10^-15 eV·s = 6.63×10^-34 m^2 kg/s 48 Modern Era Some Practice! Colors of light are identified by their frequency and/or wavelength Find the frequency of light for transitions 1-3 Find the wavelength of light for transition 3 What does transition 4 mean? 49 Modern Era The Fall of Bohr… Bohr had easily come up with the best model for the atom so far, and his impact is still felt today, but… Werner Heisenberg, a student of Bohr's, stated: it is impossible to know the absolutely exact position and momentum of anything at the same time, Δx Δp ≥ ħ/2 Werner Heisenberg Germany 50 Modern Era The New Quantum Model In 1926 Erwin Schrödinger developed an equation that took care of all inconsistencies of Bohr's model Completely treated electrons as waves (Ψ) Accounted for the uncertainty principle This took the electron from existing in defined orbits to living in a "probability cloud" Concentric probability clouds expand out from the nucleus Probability cloud – the area where an electron is likely to be found The equation referred to here is the 1-dimensional Schrödinger equation for the behavior of quantum particles, EΨ = −(ħ²/2m)∇²Ψ + VΨ. Roughly: E = energy, Ψ = wavefunction, V = potential energy, ∇ (del) = multivariable derivative (since it is squared it is the second derivative), m = mass 51 Modern Era The Modern (current) Atom We don't know any electron's exact location or momentum (Heisenberg uncertainty principle) We know electrons act like waves Electrons are likely to exist in some areas around a nucleus, and not in other areas We can find probabilities where electrons can be found Erwin Schrödinger Austria 52 Modern Era What does it look like?
Likely electron locations are now represented by probability clouds – a way to graph probability in three dimensions Electron clouds Electron bubbles The bubbles represent the same thing as the clouds, however it is much easier to draw a bubble. So, when graphing this 3-D data the bubble is constructed by choosing an arbitrary point of probability (usually two standard deviations) and drawing in the surface of the bubble at that point of equal probability. 53 Modern Era Electron Orbitals Bubbles are much easier to draw…
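As an illustrative companion to the Bohr-model slides, the following short Python sketch combines E_n = -13.6 eV × Z²/n², ΔE = hν and λ = c/ν to estimate photon wavelengths for a few example hydrogen transitions; the particular transitions chosen are arbitrary.

```python
# Photon wavelengths for hydrogen (Z = 1) transitions in the Bohr model.
H_EV_S = 4.136e-15      # Planck constant in eV*s
C = 2.998e8             # speed of light in m/s

def bohr_energy_ev(n, Z=1):
    """Energy of level n in eV: E_n = -13.6 eV * Z^2 / n^2."""
    return -13.6 * Z**2 / n**2

def transition_wavelength_nm(n_high, n_low, Z=1):
    """Wavelength of the photon emitted when the electron drops n_high -> n_low."""
    delta_e = bohr_energy_ev(n_high, Z) - bohr_energy_ev(n_low, Z)  # energy released (eV)
    nu = delta_e / H_EV_S          # E = h * nu
    return C / nu * 1e9            # lambda = c / nu, in nanometres

for n_high, n_low in [(3, 2), (4, 2), (5, 2)]:   # Balmer series examples
    print(n_high, "->", n_low, f"{transition_wavelength_nm(n_high, n_low):.1f} nm")
```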
Chaitanya's Random Pages
December 7, 2019
Coordinates of special points of the 3-4-5 triangle
Filed under: mathematics — ckrao @ 3:40 am
One thing I observed is that the 3-4-5 triangle is rather attractive in solving problems using coordinates. If the vertices are placed at A(0,3), B(0,0) and C(4,0), the following are the coordinates of points and equations of some lines of interest.
Line AC: x/4 + y/3 = 1
Incentre: (1, 1)
Centroid: (4/3, 1)
Circumcentre: (2, 1.5)
Orthocentre: (0, 0)
Nine-point centre: (1, 3/4) (the midpoint of the midpoints of AB and BC)
Angle bisectors: y = x, y = -2x + 3, y = 4/3 - x/3
Ex-centres (intersections of internal and external bisectors): (3, -3), (6, 6), (-2, 2)
Lines joining the excentres (in red in the original figure): y = -x, y = x/2 + 3, y = 3(x - 4)
Altitude to the hypotenuse: y = 4x/3
Euler line: y = 3x/4
Foot of the altitude to the hypotenuse: (36/25, 48/25) (where x/4 + y/3 = 1 intersects y = 4x/3)
Symmedian point (the midpoint of the altitude to the hypotenuse [1]): (18/25, 24/25)
Contact points of the incircle and triangle: (1, 0), (0, 1), (8/5, 9/5)
Gergonne point (intersection of cevians that pass through the contact points of the incircle and triangle, i.e. the intersection of y = 3 - 3x and y = 1 - x/4): (8/11, 9/11)
Nagel point (intersection of cevians that pass through the contact points of the ex-circles and triangle, i.e. the intersection of y = 3 - x and y = 2 - x/2): (2, 1)
[1] Weisstein, Eric W. "Symmedian Point." From MathWorld – A Wolfram Web Resource.
January 26, 2019
49+ °C temperatures in Australia
Filed under: climate and weather — ckrao @ 12:32 pm
Below is a list of recorded instances of maximum temperatures of 49 degrees Celsius or more in Australia, based on [1] and [2] from Australia's Bureau of Meteorology. Out of the 52 occasions, 26 have occurred in this decade, including 8 (so far) during this summer alone! I believe all the stations have been recording temperatures for at least 20 years except Port Augusta and Keith West (which both started in 2001).
Edited: 20 Dec 2019 Temperature (°C) Date Station Name State 50.7 2-Jan-60 Oodnadatta Airport SA 50.5 19-Feb-98 Mardie WA 50.3 3-Jan-60 Oodnadatta Airport SA 49.9 19-Dec-19 Nullarbor SA 49.8 19-Dec-19 Eucla WA 49.8 21-Feb-98 Emu Creek Station WA 49.8 13-Jan-79 Forrest Aero WA 49.8 3-Jan-79 Mundrabilla Station WA 49.7 10-Jan-39 Menindee Post Office NSW 49.6 12-Jan-13 Moomba Airport SA 49.5 19-Dec-19 Forrest WA 49.5 24-Jan-19 Port Augusta Aero SA 49.5 24-Dec-72 Birdsville Police Station QLD 49.4 21-Dec-11 Roebourne WA 49.4 16-Feb-98 Emu Creek Station WA 49.4 7-Jan-71 Madura Station WA 49.4 2-Jan-60 Marree Comparison SA 49.4 2-Jan-60 Whyalla (Norrie) SA 49.3 27-Dec-18 Marble Bar WA 49.3 2-Jan-14 Moomba Airport SA 49.3 9-Jan-39 Kyancutta SA 49.2 20-Dec-19 Keith West SA 49.2 24-Jan-19 Kyancutta SA 49.2 21-Feb-15 Roebourne Aero WA 49.2 10-Jan-14 Emu Creek Station WA 49.2 22-Dec-11 Onslow Airport WA 49.2 1-Jan-10 Onslow WA 49.2 11-Jan-08 Onslow WA 49.2 9-Feb-77 Mardie WA 49.2 1-Jan-60 Oodnadatta Airport SA 49.2 3-Jan-22 Marble Bar Comparison WA 49.2 11-Jan-05 Marble Bar Comparison WA 49.1 24-Jan-19 Tarcoola Aero SA 49.1 23-Jan-19 Red Rocks Point WA 49.1 13-Jan-19 Marble Bar WA 49.1 27-Dec-18 Onslow Airport WA 49.1 3-Jan-14 Walgett Airport AWS NSW 49.1 2-Jan-10 Emu Creek Station WA 49.1 18-Feb-98 Roebourne WA 49.1 23-Dec-72 Moomba SA 49 15-Jan-19 Tarcoola Aero SA 49 23-Jan-15 Marble Bar WA 49 13-Jan-13 Birdsville Airport QLD 49 9-Jan-13 Leonora WA 49 21-Dec-11 Roebourne Aero WA 49 1-Jan-10 Mardie WA 49 10-Jan-09 Emu Creek Station WA 49 11-Jan-08 Port Hedland Airport WA 49 11-Jan-08 Roebourne WA 49 12-Jan-88 Marla Police Station SA 49 6-Dec-81 Birdsville Police Station QLD 49 22-Dec-72 Marree SA December 30, 2018 A collection of energy formulas Filed under: science — ckrao @ 10:58 am Energy is a quantity that is conserved as a consequence of the time translation invariance of the laws of physics. Below are some formulas calculating energy of different forms. Kinetic energy is that associated with motion and is defined as K = \frac{1}{2} mv^2 = \frac{p^2}{2m} for a particle with mass m, velocity v and momentum p. If the mass is a fluid in motion (e.g. wind) with density \rho and volume A v t through cross-sectional area A, then K = \frac{1}{2} At\rho v^3. Work is the result of a force F applied over a displacement \mathbf{s} and is given by the line integral \displaystyle W = \int_C \mathbf{F} . \mathrm{d}\mathbf{s} = \int_{t_1}^{t_2} \mathbf{F} . \frac{\mathrm{d}\mathbf{s}}{\mathrm{d}t} \ \mathrm{d}t= \int_{t_1}^{t_2} \mathbf{F}.\mathbf{v}\ \mathrm{d}t . This has the simple form W = Fs \cos \theta when force is constant and displacement is linear where \theta is the angle between the force and displacement vectors. Using Newton’s 2nd law and the relation \frac{\mathrm{d}}{\mathrm{d}t} (\mathbf{v}^2) = 2\mathbf{v}.\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} this can be written as \displaystyle W = m\int_{t_1}^{t_2} \frac{d\mathbf{v}}{dt} . \mathbf{v} \mathrm{d}t = \frac{1}{2}m\int_{t_1}^{t_2} \frac{\mathrm{d}}{\mathrm{d}t} (\mathbf{v}^2) \mathrm{d}t = \frac{1}{2}m\int_{v_1^2}^{v_2^2} \mathrm{d}(\mathbf{v}^2) = \frac{1}{2}mv_2^2 - \frac{1}{2}mv_1^2. This is the work-energy theorem which says that work is the change in kinetic energy by a net force. It can also be written as W = \int_{v_1}^{v_2} m \mathbf{v}.\mathrm{d}\mathbf{v} = \int_{p_1}^{p_2} \mathbf{v}.d\mathbf{p} where \mathbf{p} = m\mathbf{v} is momentum. 
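As a quick numerical sanity check of the work-energy theorem just derived, the sketch below (with an arbitrary made-up force profile and mass) integrates F·v over time and compares the result with the change in ½mv².

```python
import numpy as np

m = 2.0                                  # mass in kg, arbitrary example value
t = np.linspace(0.0, 5.0, 200001)
dt = t[1] - t[0]
F = 3.0 * np.sin(t) + 1.0                # arbitrary 1D force profile F(t), in newtons

# Newton's second law a = F/m, integrated to give velocity (v(0) = 0.5 m/s, arbitrary).
v = 0.5 + np.cumsum(F) * dt / m

# Work as the time integral of power P = F*v.
W = np.trapz(F * v, dx=dt)

# Change in kinetic energy between the endpoints.
dK = 0.5 * m * v[-1]**2 - 0.5 * m * v[0]**2

print(W, dK)   # the two numbers agree up to discretisation error
```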
The above has the rotational analogue K = \frac{1}{2} I \omega^2 where I is moment of inertia and \omega is angular velocity, and the equation for work becomes \displaystyle W = \int_{t_1}^{t_2} \mathbf{T} . \mathbf{\omega}\ \mathrm{d}t, where \mathbf{T} is a torque vector. This has the simple form W = Fr \omega = \tau \omega in the special case of a constant-magnitude tangential force, where \tau = Fr is the torque resulting from force F applied at distance r from the centre of rotation. Note that the time derivative of work is defined as power, so work can also be expressed as the time integral of power: W = \int P(t)\  \mathrm{d}t = \int_{t_1}^{t_2} \mathbf{F}.\mathbf{v}\ \mathrm{d}t. If the work done by a force field \mathbf{F} depends only on a particle's end points and not on its trajectory (i.e. conservative forces), one may define a potential function of position, known as potential energy U, satisfying \mathbf{F} = -\nabla U. By convention positive work is a reduction in potential, hence the minus sign. It then follows that in such force fields the sum of kinetic and potential energy is conserved. Some types of potential energy:
• due to a gravitational field: \mathbf{F} = -(GMm/r^2) \hat{r}, U = -GMm/r, where M, m are the masses of two bodies, r the distance between their centres of mass and G is Newton's gravitation constant.
• due to earth's gravity at the surface: \mathbf{F} = -mg, U = mgh, where g \approx 9.8 ms^{-2} and h is the object's height above ground (small compared with the size of the earth).
• due to a spring obeying Hooke's law: \mathbf{F} = -kx, U = kx^2/2, where k is the spring constant and x the displacement from an equilibrium position.
• due to an electrostatic field: \mathbf{F} = q\mathbf{E} = (k_e qQ/r^2) \hat{r}, U = k_e qQ/r, where k_e is Coulomb's constant 1/(4\pi \epsilon_0) and q, Q are charges. This can be written as U = qV where V is a potential function measured in volts.
• for a system of point charges: \displaystyle U = k_e \sum_{1 \leq i < j \leq n} \frac{q_i q_j}{r_{ij}}.
• for a system of conductors: U = \frac{1}{2} \sum_{i=1}^n Q_i V_i, where the charge on conductor i is Q_i and its potential is V_i.
• for a charged dielectric: the above may be generalised to the volume integral U = \frac{1}{2} \int_V \rho \Phi \ \mathrm{d}v, where \rho is charge density and \Phi is the potential corresponding to the electric field.
• for an electric dipole in an electric field: U = -\mathbf{p}.\mathbf{E}, where \mathbf{p} is directed from the negative to positive charge and has magnitude equal to the product of the positive charge and the charge separation distance.
• for a current loop in a magnetic field: U = -\mathbf{\mu}.\mathbf{B}, where \mathbf{\mu} is directed normal to the loop and has magnitude equal to the product of the current through the loop and its area.
In electric circuits the voltage drop across an inductance L is v = L di/dt and the current through a capacitance C is i = C dv/dt. These inserted into the relationship E = \int i(t)v(t) \ \mathrm{d}t lead to the formulas E = \frac{1}{2}L(\Delta I)^2 and E = \frac{1}{2}C(\Delta V)^2 for the energy stored in an inductor and a capacitor respectively. Also in electromagnetism the energy flux (flow per unit area per unit time) is the Poynting vector \mathbf{S} = \mathbf{E} \times \mathbf{H}, the cross product of the electric and magnetising field vectors.
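The circuit formulas above are easy to verify numerically: the sketch below (with arbitrary component values) ramps the current through an inductor and the voltage across a capacitor, integrates the instantaneous power i(t)v(t), and compares the results with ½L(ΔI)² and ½C(ΔV)².

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]

# Inductor: ramp the current from 0 to 2 A; v = L di/dt.
L = 0.5                      # henries, arbitrary
i_L = 2.0 * t                # current ramp
v_L = L * np.gradient(i_L, dt)
E_L = np.trapz(i_L * v_L, dx=dt)
print(E_L, 0.5 * L * 2.0**2)         # both ~1.0 J

# Capacitor: ramp the voltage from 0 to 5 V; i = C dv/dt.
C = 1e-3                     # farads, arbitrary
v_C = 5.0 * t                # voltage ramp
i_C = C * np.gradient(v_C, dt)
E_C = np.trapz(i_C * v_C, dx=dt)
print(E_C, 0.5 * C * 5.0**2)         # both ~12.5 mJ
```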
The electromagnetic energy in a volume V is given by ([1]) \displaystyle \frac{1}{2}\int_V \mathbf{B}.\mathbf{H} + \mathbf{E}.\mathbf{D}  \ \mathrm{d}v, where \mathbf{D} is the electric displacement field and \mathbf{B} is the magnetic field. This is more commonly written as \displaystyle \frac{1}{2} \int_V \epsilon_0 |E|^2 + |B^2|/\mu_0 \ \mathrm{d}v when the relationships \mathbf{D} = \epsilon_0\mathbf{E}, \mathbf{B} = \mu_0 \mathbf{H} hold. In special relativity energy is the time component of the momentum 4-vector. That is, energy and momentum are mixed in a similar way to how space and time are mixed at high velocities. Computing the norm of the momentum four-vector gives the energy-momentum relation This leads to E = pc for massless particles (such as photons) and more generally E = \gamma m_0 c^2 , the mass-energy equivalence relation (here \gamma = (1 - (v/c)^2)^{-1/2} and m_0 is rest mass). In quantum mechanics the energy of a photon is also written as E = hf = hc/\lambda (Planck-Einstein relation) where h is Planck’s constant and f, \lambda are frequency and wavelength respectively. Energies of quantum systems are based on the eigenstates of the Hamiltonian operator, an example of which is \displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla^2+V(x). Force is also equal to pressure times area, so another formula for work (e.g. done by an expanding gas) is the volume integral W = \int p \mathrm{d}V. In thermodynamics heat is energy transferred through the random motion of particles. The fundamental equation of thermodynamics quantifies the internal energy U which disregards kinetic or potential energy of a system as a whole (only considering microscopic kinetic and potential energy): \displaystyle U = \int  \left(T \text{d}S - p \mathrm{d}V + \sum_i \mu_i \mathrm{d}N_i \right) where T is temperature, S is entropy, N_i is the number of particles and \mu_i the chemical potential of species i. Similar formulas exist for other thermodynamic potentials such as Gibbs energy, enthalpy and Helmholtz energy. The mean translational kinetic energy of a bulk substance is related to its temperature by \bar{E} = \frac{3}{2}k_B T where k_B is Boltzmann’s constant. In thermal transfer the change in internal energy is given by \Delta U = m C \Delta T where m is mass and C is the heat capacity which may apply to constant volume or constant pressure. The power per unit area emitted by a body is given by the Stefan-Boltzmann law P = A \epsilon \sigma T^4 where \epsilon is the emissivity (=1 for black body radiation) and \sigma is the Stefan–Boltzmann constant. This equation may be used to determine the energy emitted by stars using their emission spectrum. The latent heat (thermal energy change during a phase transition) of mass m of a substance with specific latent heat constant L is given by Q = mL. Finally, the energy of a single wavelength of a mechanical wave is \displaystyle \frac{1}{2} m\omega^2 A^2 where m the mass of a wavelength, A the amplitude and \omega the angular frequency [2]. This can be applied to finding the energy density of ocean waves for example [3]. [1] Poynting Vector. Retrieved 22:24, December 28, 2018, from [2] Power of a Wave. Retrieved 21:23, December 30, 2018, from [3] Wikipedia contributors, “Wave power,” Wikipedia, The Free Encyclopedia, (accessed December 30, 2018). [4] Wikipedia contributors, “Work (physics),” Wikipedia, The Free Encyclopedia, (accessed December 30, 2018). 
[5] Wikipedia contributors, “Potential energy,” Wikipedia, The Free Encyclopedia, (accessed December 30, 2018). [6] Wikipedia contributors, “Electric potential energy,” Wikipedia, The Free Encyclopedia, (accessed December 30, 2018). [7] Wikipedia contributors, “Thermodynamic equations,” Wikipedia, The Free Encyclopedia, (accessed December 30, 2018). [8] H. Ohanian, Physics, 2nd edition, Norton & Company, 1989. June 12, 2018 Rafael Nadal in best of five set matches on clay Filed under: sport — ckrao @ 1:28 pm Following Rafael Nadal‘s 11th French Open win, it’s worth looking at just how amazing his best-of-five set record on clay is. He now has a 111-2 win-loss record, with his only two losses against Söderling and Djokovic. Win-loss breakdown by tournament (Masters tournament finals changed to best of 3 sets from 2007): • French Open: 86-2 • Davis Cup: 18-0 • Barcelona Open: 2-0 • Monte Carlo Masters: 2-0 • Rome Masters: 2-0 • Stuttgart: 1-0 Win-loss breakdown by number of sets (overall he has won 331 and lost 36 completed sets so even winning a set against him is a big deal!): • 5 sets: 4-0 (Coria, Federer, Isner, Djokovic – 5th set scores 7-6 (6), 7-6 (5), 6-4, 9-7 respectively) • 4 sets: 22-1 (loss to Söderling) • 3 sets: 83-1 Most common opponents (2 or more matches): • Djokovic: 7-1 (lost 7 sets) • Federer: 7-0 (lost 7 sets) • Ferrer: 4-0 (lost 1 set) • Almagro: 4-0 • Hewitt: 4-0 (lost 1 set) • Söderling: 3-1 (lost 3 sets) • Thiem: 3-0 • del Potro: 3-0 (lost 1 set) • Gasquet: 3-0 • Seppi: 2-0 (lost 1 set) • Murray: 2-0 • Roddick: 2-0 (lost 1 set) • Coria: 2-0 (lost 3 sets) • Ljubicic: 2-0 • Monaco: 2-0 • Bolelli: 2-0 • Wawrinka: 2-0 (same score of 6-2 6-3 6-1 both times) • Bellucci: 2-0 Breakdown by set score (almost the same likelihood of winning a set 6-2, 6-3 or 6-4): • 6-0: 26 • 6-1: 61 • 6-2: 68 • 6-3: 66 • 6-4: 67 • 7-5: 18 • 7-6: 24 • 9-7: 1 • 6-7: 9 • 5-7: 6 • 4-6: 8 • 3-6: 6 • 2-6: 3 (Federer in 2006 Rome, Söderling in 2009 French Open, Djokovic in 2012 French Open) • 1-6: 3 (Federer in 2006 French Open, del Potro in 2011 Davis Cup, Djokovic in 2015 French Open) • 0-6: 1 (Coria in 2005 Monte Carlo Masters) (only one incomplete set 2-0 after which Pablo Carreno Busta retired) (1) Tennis Abstract: Rafael Nadal ATP Match Results, Splits, and Analysis (2) Ultimate Tennis Statistics – Rafael Nadal April 1, 2018 A collection of binary grid counting problems Filed under: mathematics — ckrao @ 3:52 am The number of ways of colouring an m by n grid one of two colours without restriction is 2^{mn}. The following examples show what happens when varying restrictions are placed on the colouring. Example 1: The number of ways of colouring an m by n grid black or white so that there is an even number of 1s in each row and column is \displaystyle 2^{(m-1)(n-1)}. Proof: The first m-1 rows and n-1 columns may be coloured arbitarily. This then uniquely determines how the bottom row and rightmost column are coloured (restoring even parity). The bottom right square will be black if and only if the number of black squares in the remainder of the grid is odd, hence this is also uniquely determined by the first m-1 rows and n-1 columns. Details are also given here. Example 2: The number of ways of colouring an m by n grid black or white so that every 2 by 2 square has an odd number (1 or 3) of black squares is \displaystyle 2^{m+n-1}. Proof: First colour the first row and first column arbitarily (there are m+n-1 such squares each with 2 possibilities). 
This uniquely determines how the rest of the grid must be coloured by considering the colouring of adjacent squares above and to the left. By the same argument, the above is the same as the number of colouring an m by n grid black or white so that every 2 by 2 square has an even number (0, 2 or 4) of black squares. Example 3: The number of ways of colouring an m by n grid black or white so that every 2 by 2 square has two of each type is \displaystyle 2^m + 2^n - 2. Proof: If there are two adjacent squares of the same colour with one above the other, the remaining squares of the corresponding two rows are uniquely determined as being the same alternating between black and white. The remainder of the grid is then determined by the colouring of first column (2^m - 2 possibilities where we omit the two cases of alternating colours down the first column). Such a grid cannot have two horizontally adjacent squares of the same colour. By a similar argument a colouring that has two adjacent colours with one left of the other can be done in 2^n-2 ways. Finally we have the two additional configurations where there are no adjacent squares of the same colour, which is uniquely determined by the colour of the top left square. Hence in total we have (2^m-2) + (2^n-2) + 2 = 2^m + 2^n - 2 possible colourings. This question for m = n = 8 was in the 2017 Australian Mathematics Competition and the general solution is also discussed here. Example 4: The number of ways of colouring an m by n grid black or white so that each row and each column contain at least one black square is (OEIS A183109) \displaystyle \sum_{j=0}^m (-1)^j \binom{m}{j} (2^{m-j}-1)^n. Proof: First we count the number of colourings where a fixed subset of j columns is entirely white and each row has at least one black square. The remaining m-j columns and n rows can be coloured in (2^{m-j}-1)^n ways. To count colourings where each column has at least one black square we apply the principle of inclusion-exclusion and arrive at the above result. Another inclusion-exclusion example shown here counts the number of 3 by 3 black/white grids in which there is no 2 by 2 black square. The answer is 417 with more terms for n by n grids in OEIS A139810. Example 5: Suppose we wish to count the number of colourings of an m by n grid in which row i has k_i black squares and column j has l_j black squares (i = 1, 2, \ldots m, j = 1, 2, \ldots, n). Following [1], the number of ways this can be done is the coefficient of x_1^{k_1}x_2^{k_2} \ldots x_m^{k_m}y_1^{l_1}y_2^{l_2}\ldots y_n^{l_n} in the polynomial \displaystyle \prod_{i=1}^m \prod_{j=1}^n (1 + x_i y_j). To see this note that expanding the product gives products of terms of the form (x_i y_j) where such a term included corresponds to the i‘th row and jth column being coloured black. Hence the coefficient of x_1^{k_1}x_2^{k_2} \ldots x_m^{k_m}y_1^{l_1}y_2^{l_2}\ldots y_n^{l_n} is the number of ways in which the system \sum_{j=1}^n a_{ij} = k_i, \sum_{i=1}^m a_{ij} = l_j has a solution (i = 1, 2, \ldots m, j = 1, 2, \ldots, n) for a_{ij} equal to 1 if and only if row i and column j are coloured black and 0 otherwise. Let us evaluate this in the special case of 2 black squares in every row and every column for an n by n square grid (i.e. k_i = l_j = 2 and m = n). Picking two squares in each column to colour black means viewing the expansion as a polynomial in y_1, \ldots, y_n the coefficient of y_1^2y_2^2\ldots y_n^2 has sums of products of n terms of the form x_ix_j. 
Then using [] notation to denote the coefficient of an expression, we have \begin{aligned} \left[x_1^2x_2^2 \ldots x_n^2y_1^2y_2^2\ldots y_n^2 \right]  \prod_{i=1}^n \prod_{j=1}^n (1 + x_i y_j) &= \left[x_1^2x_2^2 \ldots x_n^2 \right] \left( \sum_{i=1}^n\sum_{j=i+1}^n x_i x_j \right)^n\\&= \left[x_1^2x_2^2 \ldots x_n^2 \right] 2^{-n} \left( \left( \sum_{i=1}^n x_i\right)^2 - \sum_{i=1}^n x_i^2 \right)^n\\ &= \left[x_1^2x_2^2 \ldots x_n^2 \right] 2^{-n} \sum_{k=0}^n (-1)^k \binom{n}{k} \left( \sum_{i=1}^n x_i^2 \right)^k\left(\sum_{i=1}^n x_i\right)^{2(n-k)}\\ &=  2^{-n}  \sum_{k=0}^n (-1)^k \binom{n}{k} \frac{n!}{(n-k)!} \frac{(2n-2k)!}{2^{n-k}}\\ &= 4^{-n}  \sum_{k=0}^n (-1)^k \binom{n}{k}^2 2^k  (2n-2k)!. \end{aligned} Here the second last line follows from considering the number of ways that products of k terms of the form x_i^2 arise in the product \left( \sum_{i=1}^n x_i^2 \right)^k (which is \frac{n!}{(n-k)!}) and products of (n-k) terms of the form x_i^2 can be formed in the product \left(\sum_{i=1}^n x_i\right)^{2(n-k)} (which is \frac{(2n-2k)!}{2^{n-k}}). For example, when n=4 this is equivalent to finding the coefficient of a^2b^2c^2d^2 in (ab + bc + ac + bc + bd + cd)^4. Products are either paired up in complementary ways such as in (3 \times \binom{4}{2} = 18 ways) or we have the three products,, (3 \times 4! = 72 ways). This gives us a total of 90 (this question appeared in the 1992 Australian Mathematics Competition). More terms of the sequence are found in OEIS A001499 and the 6 by 4 case (colouring two shaded squares in each row and three in each column in 1860 ways) appeared in the 2007 AIME I (see Solution 7 here). Example 6: If we wish to count the number of grid configurations in which reflections or rotations are considered equivalent, we may make use of Burnside’s lemma that the number of orbits of a group is the average number of points fixed by an element of the group. For example, to find the number of configurations of 2 by 2 grids up to rotational symmetry, we consider the cyclic group C_4. For quarter turns there are 2^4 configurations fixed (a quadrant determines the colouring of the remainder of the grid) while for half turns there are 2^8 configurations as one half determines the colouring of the other half. This gives us an answer of \displaystyle \frac{2^{16} + 2.2^4 + 2^8}{4} = 16456, which is part of OEIS A047937. If reflections are also considered equivalent we need to consider the dihedral group D_4 and we arrive at the sequence in OEIS A054247. If we want to count the number of 3 by 3 grids with four black squares up to equivalence, this is equivalent to the number of full noughts and crosses configurations. A nice video by James Grime explaining this is here (the answer is 23). Example 7: The number of ways of colouring an m by n grid black or white so that the regions form 2 by 1 dominoes has the amazing form \displaystyle 2^{mn/2} \prod_{j=1}^{\lceil m/2 \rceil} \prod_{k=1}^{\lceil n/2 \rceil} \left(4 \cos^2 \frac{\pi j}{m+1} + 4 \cos^2 \frac{\pi k}{n+1}\right). For example, the 36 ways of tiling a 4 by 4 grid are given here. A proof of the above formula using the Pfaffian of the adjacency matrix of the corresponding grid graph is given in chapter 10 of [2]. [1] L. Comtet, Advanced Combinatorics: The Art of Finite and Infinite Expansions (pp 235-6), D. Reidel Publishing Company, 1974. [2] M. Aigner, A Course in Enumeration, Springer, 2007. 
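These counting formulas are easy to sanity-check by brute force for small grids; the sketch below enumerates all 2^{mn} colourings and verifies the formula of Example 3, that the number of colourings in which every 2 by 2 square contains two cells of each colour is 2^m + 2^n - 2.

```python
from itertools import product

def count_balanced_2x2(m, n):
    """Brute-force count of m x n black/white grids in which every 2x2 square
    has exactly two black cells."""
    count = 0
    for cells in product((0, 1), repeat=m * n):
        grid = [cells[r * n:(r + 1) * n] for r in range(m)]
        ok = all(grid[r][c] + grid[r][c+1] + grid[r+1][c] + grid[r+1][c+1] == 2
                 for r in range(m - 1) for c in range(n - 1))
        count += ok
    return count

for m, n in [(2, 2), (2, 3), (3, 3), (3, 4)]:
    print((m, n), count_balanced_2x2(m, n), 2**m + 2**n - 2)   # the two counts match
```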
December 26, 2017 The evolution of ODI team totals Filed under: cricket,sport — ckrao @ 11:46 am Over the years one day international cricket scores have been on the rise and this post intends to look into this in some detail. We shall restrict ourselves to first innings scores where the team batting first lasted exactly 50 overs. Hence games greater than 50 overs per team long or where a team was bowled out prematurely are omitted. There are 2349 (out of 3945) such matches according to this query on Cricinfo Statsguru and on average  7 wickets fall over the 50 overs. The plot below shows a scatter plot of the scores over time. The red curve shows that mean scores were steady around 225 during the 1980s and have been on the rise since 1990 so that now the mean score is approaching 300. Note that the first data point in 1974 corresponds to a game that was reduced to 50 overs per side after originally intended to be a 55 over game. If we slice the data into eras marked by calendar years of roughly equal numbers of games, the mean score had a slight slow-down in the rate of increase from 2008-2012, then accelerated again in the past five years. Era Number of matches Mean score batting first 1974-1994 427 229 1995-1999 383 247 2000-2003 368 257 2004-2007 380 267 2008-2012 393 272 2013-2017 398 288 1974-2017 2349 260 The histograms below show how rarely teams score less than 200 runs in recent times when using the full quota of 50 overs. In fact these days a team is more likely to score over 400 than below 200 if using the full quota of 50 overs! Comparing the distribution of first innings winning versus losing scores we find that the mean scores are 275 vs 236 respectively with sample sizes 1392 vs 901 (34 games had no result and 22 were tied). Restricting to the past five years, the median score batting first for the full 50 overs in winning matches is exactly 300. Interestingly if we break down the runs scatter plot by team, the trends are not the same across the board. In particular England and South Africa have had more dramatic increases in recent times than the other teams, especially compared with India, Pakistan, Sri Lanka and West Indies. Restricting to the last five years (2013-2017), here are the mean first innings scores for each team based on the match result (assuming they bat the full 50 overs). Team Result mean score # matches Afghanistan lost 249 6 Afghanistan won 260 12 Australia lost 295 13 Australia n/r 253 3 Australia won 310 31 Bangladesh lost 263 16 Bangladesh won 275 15 Canada lost 230 3 England lost 282 13 England won 329 22 Hong Kong won 283 4 India lost 282 12 India won 310 27 Ireland lost 244 6 Ireland tied 268 1 Ireland won 289 3 Kenya lost 260 1 Netherlands lost 265 1 New Zealand lost 277 12 New Zealand tied 314 1 New Zealand won 308 27 P.N.G. lost 218 2 P.N.G. won 232 1 Pakistan lost 266 9 Pakistan n/r 296 1 Pakistan tied 229 1 Pakistan won 290 20 Scotland lost 238 6 Scotland won 284 8 South Africa lost 258 7 South Africa n/r 301 1 South Africa won 321 36 Sri Lanka lost 249 15 Sri Lanka n/r 268 2 Sri Lanka tied 286 1 Sri Lanka won 305 22 U.A.E. lost 279 3 U.A.E. won 267 3 West Indies lost 265 10 West Indies won 298 10 Zimbabwe lost 247 9 Zimbabwe tied 257 1 Zimbabwe won 276 1 The England and South Africa numbers stand out the most here in winning causes. Also Australia has a particularly high average score of 294 in losing causes. Sri Lanka has the largest difference (56 runs) between average winning and losing scores. 
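The era-by-era means above come from grouping first-innings, full-50-over scores by date range. A minimal pandas sketch of that kind of aggregation is shown below; the tiny inline data frame is made-up illustrative data, not the actual Statsguru extract.

```python
import pandas as pd

# Made-up sample of first-innings 50-over scores (illustrative only).
df = pd.DataFrame({
    "date":  pd.to_datetime(["1985-03-01", "1996-05-12", "2002-11-20",
                             "2006-02-08", "2010-07-19", "2016-01-30"]),
    "score": [221, 248, 255, 270, 268, 295],
})

# Assign each match to an era and compute the count and mean score per era.
bins   = pd.to_datetime(["1974-01-01", "1995-01-01", "2000-01-01",
                         "2004-01-01", "2008-01-01", "2013-01-01", "2018-01-01"])
labels = ["1974-1994", "1995-1999", "2000-2003", "2004-2007", "2008-2012", "2013-2017"]
df["era"] = pd.cut(df["date"], bins=bins, labels=labels, right=False)

print(df.groupby("era", observed=True)["score"].agg(["count", "mean"]))
```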
Edit: The following shows the mean scores in the 100 matches prior to and after key rule changes (still focusing on first innings 50-over scores). Note that in two of the three cases, the average scores reduced. 1. Restriction of 2 outside the 30-yard circle in the first 15 overs (’92 World Cup) 03 Jan 88 to 20 Jan 92: 231 12 Feb 92  to 16 Feb 94: 222 2. Introduction of Powerplay overs 13 Mar 04 to 30 Jun 05: 267 07 Jul 05 to 08 Sep 06: 267 3. Removal of powerplay, fifth fielder allowed outside the circle in the last ten overs 17Aug 14 to 24 Jun 15: 301 10 Jul 15 to 19 Jan 17: 289 September 8, 2017 Notes on von Neumann’s algebra formulation of Quantum Mechanics Filed under: mathematics,science — ckrao @ 9:49 pm The Hilbert space formulation of (non-relativistic) quantum mechanics is one of the great achievements of mathematical physics. Typically in undergraduate physics courses it is introduced as a set of postulates (e.g. the Dirac-von Neumann axioms) and hard to motivate without some knowledge of functional analysis or at least probability theory.  Some of that motivation and the connection with probability theory is summarised in the notes here – in fact it can be said that quantum mechanics is essentially non-commutative probability theory [2]. Furthermore having an algebraic point of view seems to provide a unified picture of classical and quantum mechanics. The important difference between classical and quantum mechanics is that in the latter, the order in which measurements are taken sometimes matters. This is because obtaining the value of one measurement can disturb the system of interest to the extent that a consistently precise value of the other cannot be found. A famous example is position and momentum of a quantum particle – the Heisenberg uncertainty relation states that the product of their uncertainties (variances) in measurement is strictly greater than zero. If measurements are treated as real-valued functions of the state space of system, we will not be able to capture the fact that the measurements do not commute. Since linear operators (e.g. matrices) do not commute in general, we use algebras of operators instead. We make use of the spectral theory leading from a special class of algebras with norm and adjoint known as von Neumann algebras which in turn are a special case of C*-algebras. The spectrum of an operator A is the set of numbers \lambda for which (A-\lambda I) does not have an inverse. Self-adjoint operators have a real spectrum and will represent the set of values that an observable (a physical variable that can be measured) can take. Hence we have this correspondence between self-adjoint operators and observables. By the Gelfand-Naimark theorem C*-algebras can be represented as bounded operators on a Hilbert space {\cal H}. See Section II.6.4 of [3] for proof details. If the C*-algebra is commutative the representation is as continuous functions on a locally compact Hausdorff space that vanish at infinity. Furthermore we assume the C*-algebra and corresponding Hilbert space are separable, meaning the space contains a countable dense subset (analogous to how the subset of rationals are dense in the set of real numbers). This ensures that the Stone-von Neumann theorem holds which was used to show that the Heisenberg and Schrödinger pictures of quantum physics are equivalent [see pp7-8 here]. The link between C*-algebras and Hilbert spaces is made via the notion of a state which is a positive linear functional on the algebra of norm 1. 
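The earlier remark that the order of measurements can matter is precisely the statement that the corresponding operators need not commute; a minimal numerical illustration using the Pauli spin matrices is shown below.

```python
import numpy as np

# Pauli matrices: self-adjoint 2x2 operators representing spin measurements.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sz - sz @ sx
print(commutator)                    # non-zero, so sx and sz do not commute
print(np.allclose(commutator, 0))    # False
```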
A state evaluated on a self-adjoint operator outputs a real number that will represent the expected value of the observable corresponding to that operator. Note that it is impossible to have two different states that have the same expected values across over observables. A state \omega is called pure if it is an extreme point on the boundary of the (convex) space of states. In other words, we cannot write a pure state \omega as \omega = \lambda \omega_1 + (1-\lambda) \omega_2 where \omega_1 \neq \omega_2 are states and 0 < \lambda < 1). A state that is not pure is called mixed. Now referring to a Hilbert space {\cal H}, for any mapping \Phi of bounded operators B({\cal H}) to expectation values such that 1. \Phi(I) = 1 (it makes sense that the identity should have expectation value 1), 2. self-adjoint operators are mapped to real numbers with positive operators (those with positive spectrum) mapped to positive numbers and 3. \Phi is continuous with respect to the strong convergence in B({\cal H}) – i.e. if \lVert A_n \psi - A \psi \rVert \rightarrow 0 for all \psi \in H, then \Phi (A_n) \rightarrow \Phi (A), then there is a is a unique self-adjoint non-negative trace-one operator \rho (known as a density matrix) such that \Phi (A) = \text{trace}(\rho A) for all A \in B(H) (see [1] Proposition 19.9). (The trace of an operator A is defined as \sum_k \langle e_k, Ae_k \rangle where \{e_k \} is an orthonormal basis in the separable Hilbert space – in the finite dimensional case it is the sum of the operator’s eigenvalues.) Hence states are represented by positive self-adjoint operators with trace 1. Such operators are compact and so have a countable orthonormal basis of eigenvectors. When \rho corresponds to a projection operator onto a one-dimensional subspace it has the form \rho = vv^* where v \in {\cal H} and \lVert v \rVert = 1. In this case we can show \text{trace}(\rho A) = \langle v, Av \rangle = v^*Av, which recovers the alternative view that unit vectors of {\cal H} correspond to states (known as vector states) so that the expected value of an observable corresponding to the operator A is \langle v, Av \rangle. This is done by choosing the orthonormal basis \{e_k \} where e_1 = v and computing \begin{aligned} \text{trace}(\rho A) &= \sum_k \langle e_k, vv^*Ae_k \rangle\\ &= \sum_k e_k^* v v^* Ae_k\\ &= e_1^* e_1 e_1^*Ae_1 \quad \text{ (as }e_k^*v = \langle e_k, v \rangle = 0\text{ for } k > 1\text{)}\\ &= e_1^*Ae_1\\ &= \langle v, Av \rangle. \end{aligned} Trace-one operators \rho can be written as a convex combination of rank one projection operators: \rho = \sum \lambda_k v_k v_k^*. From this it can be shown that those density operators which cannot be written as a convex combination of other states (called pure states) are precisely those of the form \rho = vv^*. Hence vector states and pure states are equivalent notions. Mixed states can be interpreted as a probabilistic mixture (convex combination) of pure states. Let us now look at the similarity with probability theory. A measure space is a triple (X, {\cal S}, \mu) where X is a set, {\cal S} is a collection of measurable subsets of X called a \sigma-algebra and \mu:{\cal S} \rightarrow \mathbb{R} \cup \infty is a \sigma-additive measure. If g is a non-negative integrable function with \int g \ d\mu = 1 it is called a density function and then we can define a probability measure p_g:{\cal S} \rightarrow [0,1] by \displaystyle p_g(S) = \int_S  g\ d\mu \in [0,1], S \in {\cal S}. 
A random variable f:X\rightarrow \mathbb{R} maps elements of a set to real numbers in such a way that f^{-1}(B) \in {\cal S} for any Borel subset B of \mathbb{R}. This enables us to compute its expectation with respect to the density function g as

\displaystyle \int_X f \ dp_g = \int_X fg\ d\mu.

This is like the quantum formula \text{Tr}(\rho A) with our density operator \rho playing the role of g and operator A playing the role of random variable f. Hence a probability density function is the commutative probability analogue of a quantum state (density operator).

While Borel sets are the events from which we define simple functions and then random variables, in the non-commutative case we define operators in terms of projections (equivalently closed subspaces) of a Hilbert space {\cal H}. A projection operator P is self-adjoint, satisfies P^2 = P and has spectrum contained in \{0,1\}. Hence projections are analogous to 0-1 indicator random variables, the answers to yes/no events. For any unit vector v \in {\cal H} the expected value

\displaystyle \langle v, Pv \rangle = \langle v, P^2v \rangle = \langle Pv, Pv \rangle = \lVert Pv \rVert^2

is interpreted as the probability the observable corresponding to P will have value 1 when measured in the state corresponding to v. In particular this probability will be 1 if and only if v is in the range (image) of P. We define meet and join operations \vee, \wedge on these closed subspaces to create a Hilbert lattice ({\cal P}({\cal H}), \vee, \wedge, \perp):

• A \wedge B = A \cap B
• A \vee B = \text{closure of } A + B
• A^{\perp} = \{u: \langle u,v \rangle = 0\ \forall v \in A\}

Borel sets form a \sigma-algebra in which the distributive law A \cap (B \cup C) = (A \cap B) \cup (A \cap C) holds for any elements of {\cal S}. However in the Hilbert lattice the corresponding rule A \wedge (B \vee C) = (A \wedge B) \vee (A \wedge C) (where A, B, C are projection operators) only holds some of the time (see here for an example). This failure of the distributive law is equivalent to the general non-commutativity of projections.

A quantum probability measure \phi:{\cal P} \rightarrow [0,1] can be defined by combining projections in a \sigma-additive way, namely \phi(0) = 0, \phi(I) = 1 and \phi(\vee_i P_i) = \sum_i \phi(P_i) where P_i are mutually orthogonal projections (P_i \leq P_j^{\perp}, i \neq j). Gleason’s theorem says that for Hilbert space dimension at least 3 a state is uniquely determined by the values it takes on the orthogonal projections – a quantum probability measure can be extended from projections to bounded operators to obtain \phi(A) = \text{Tr}(\rho_{\phi} A), similar to how integration is extended from characteristic (indicator) functions to integrable functions. Hence this is a key result for non-commutative integration (note: the continuity conditions defining \Phi in 1-3 above are stronger). We choose von Neumann algebras over C*-algebras since the former contain all spectral projections of their self-adjoint elements while the latter may not [ref].

So far we have seen that expected values of observables A are derived via the formula \text{Tr}(\rho A). To derive the distribution itself, we make use of the spectral theorem, and for self-adjoint operators with continuous spectrum this requires projection-valued measures.
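The failed distributive law mentioned above can already be seen in \mathbb{R}^2. The sketch below spells out the standard textbook counterexample; the subspaces are an illustrative choice of ours, and the meet/join identifications are asserted by hand in the comments rather than computed.

```python
import numpy as np

# Subspaces of R^2: A = span{(1,0)}, B = span{(0,1)}, C = span{(1,1)/sqrt(2)},
# represented by their orthogonal projection matrices.
P_A = np.array([[1., 0.], [0., 0.]])
P_B = np.array([[0., 0.], [0., 1.]])
P_C = 0.5 * np.array([[1., 1.], [1., 1.]])

# B v C spans the whole plane, so A ^ (B v C) = A, i.e. the projection P_A.
# A ^ B and A ^ C are both the zero subspace, so (A ^ B) v (A ^ C) = {0}.
lhs = P_A                      # projection onto A ^ (B v C)
rhs = np.zeros((2, 2))         # projection onto (A ^ B) v (A ^ C)
print(np.allclose(lhs, rhs))   # False: distributivity fails

# The same projections also fail to commute, e.g. P_A P_C != P_C P_A:
print(np.allclose(P_A @ P_C, P_C @ P_A))   # False
```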
A self-adjoint operator A has a corresponding function E_A:{\cal S} \rightarrow {\cal P}({\cal H}) mapping Borel sets to projections so that E_A(S) represents the event that the outcome of measuring observable A is in the set S: we require that E_A(X) = I and S \mapsto \langle u,E_A(S)v \rangle is a complex additive function (measure) for all u, v \in {\cal H}. We use E_A(\lambda) as shorthand for E_A(\{x:x\leq \lambda\}). Similar to the way a finite dimensional self-adjoint matrix M may be eigen-decomposed in terms of its eigenvalues \lambda_i and normalised eigenvectors u_i as

\begin{aligned} M &= \sum_i \lambda_i u_i u_i^T \\ &= \sum_i \lambda_i P_i \quad \text{(where }P_i := u_i u_i^T \text{ is a projection)}\\ &= \sum_i \lambda_i (E_i - E_{i-1}), \quad \text{(where } E_i := \sum_{k \leq i} P_k\text{ ),} \end{aligned}

the spectral theorem for more general self-adjoint operators allows us to write A = \int_{\sigma(A)} \lambda dE_A(\lambda) which means that for every u, v \in {\cal H}, \langle u, Av \rangle = \int_{\sigma(A)} \lambda d\langle u,E_A v \rangle. Here, the integrals are over the spectrum of A. Through this formula we can work with functions of operators and in particular the distribution of the random variable X corresponding to operator A in state \rho will be

\text{Pr}(X \leq x) = E\left[ 1_{\{X \leq x\} }\right] = \text{Tr} \left( \rho\int_{-\infty}^x dE_A(\lambda) \right) = \text{Tr} \left( \rho E_A(x) \right).

The similarities we have seen here between classical probability and quantum mechanics are summarised in the table below, largely taken from [2] which greatly aided my understanding. Note how the pairing between trace class and bounded operators is analogous to the duality of L^1 and L^{\infty} functions.

Classical Probability | Quantum Mechanics (non-commutative probability)
(X,{\cal S}, \mu) – measure space | ({\cal H}, {\cal P}({\cal H}), \text{Tr}) – Hilbert space model of QM
X – set | {\cal H} – Hilbert space
{\cal S} – Boolean algebra of Borel subsets of X called events | {\cal P}({\cal H}) – orthomodular lattice of projections (equivalently closed subspaces) of {\cal H}
disjoint events | orthogonal projections
\mu:{\cal S} \rightarrow {\mathbb R}^{+} \cup \{\infty\} – \sigma-additive positive measure | \text{Tr} – trace functional
g \in L^1(X,\mu), g \geq 0, \int g \ d\mu = 1 – integrable functions (probability density functions) | \rho \in {\cal T}({\cal H}), \rho \geq 0, \text{Tr}(\rho) = 1 – trace class operators (density operators)
p_g(S) = \int \chi_S g\ d\mu \in [0,1], S \in {\cal S} – probability measure mapping Borel sets to numbers in [0,1] in a sigma-additive way | \phi(S) = \text{Tr}(\rho_{\phi} S) \in [0,1], \rho_{\phi} \in {\cal T}({\cal H}), S \in {\cal P}({\cal H}) – quantum state mapping projections to numbers in [0,1] in a sigma-additive way
f \in L^{\infty}(X,\mu) – essentially bounded measurable functions (bounded random variables) | A \in {\cal B}({\cal H}) – von Neumann algebra of bounded operators (bounded observables)
\int fg\ d\mu, g \in L^1(X,\mu) – expectation value of f \in L^{\infty}(X,\mu) with respect to p_g | \text{Tr}(\rho A), \rho \in {\cal T}({\cal H}) – expectation value of A \in {\cal B}({\cal H}) in state \rho

In summary, the fact that measurements don’t always commute leads us to consider non-commutative operator algebras. This leads us to the Hilbert space representation of quantum mechanics where a quantum state is a trace-one density operator and an observable is a bounded linear operator. We also saw that projections can be viewed as 0-1 events.
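The finite-dimensional case of the construction above can be checked numerically. In the sketch below (with an illustrative matrix and density operator of our own choosing), the spectral projections E_A(x) are built from the eigendecomposition and yield both the distribution \text{Pr}(X \leq x) = \text{Tr}(\rho E_A(x)) and the expectation \text{Tr}(\rho A):

```python
import numpy as np

# Spectral projections of a self-adjoint matrix and the induced distribution.
A = np.array([[2., 1.], [1., 2.]])            # self-adjoint observable, eigenvalues 1 and 3
eigvals, eigvecs = np.linalg.eigh(A)
rho = np.diag([0.6, 0.4])                     # a density operator (positive, trace 1)

def E(x):
    """Projection onto the span of eigenvectors with eigenvalue <= x."""
    cols = eigvecs[:, eigvals <= x]
    return cols @ cols.T

for x in (0.0, 1.0, 3.0):
    print(x, np.trace(rho @ E(x)))            # 0.0, 0.5, 1.0 -- a cumulative distribution

# Expectation recovered from the spectral decomposition A = sum_i lambda_i P_i:
P = [np.outer(eigvecs[:, i], eigvecs[:, i]) for i in range(2)]
expectation = sum(lam * np.trace(rho @ Pi) for lam, Pi in zip(eigvals, P))
print(np.isclose(expectation, np.trace(rho @ A)))   # True
```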
The spectral theorem is used to decompose operators into a sum or integral of projections. The richer mathematical setting for quantum mechanics allows us to model non-classical phenomena such as quantum interference and entanglement. We have not mentioned the time evolution of states, but in short, state vectors evolve unitarily according to the Schrödinger equation, generated by an operator known as the Hamiltonian.

References and Further Reading
[1] Hall, B.C., Quantum Theory for Mathematicians, Springer, Graduate Texts in Mathematics #267, June 2013 (relevant section)
[2] Redei, M., Von Neumann’s work on Hilbert space quantum mechanics
[3] Blackadar, B., Operator Algebras: Theory of C*-Algebras and von Neumann Algebras
[4] Wilce, Alexander, “Quantum Logic and Probability Theory“, The Stanford Encyclopedia of Philosophy (Spring 2017 Edition), Edward N. Zalta (ed.)
[5] Wikipedia – Quantum logic
[6] – Lattice of Projections
[7] – Spectral Measure
[8] quantum mechanics – Intuitive meaning of Hilbert Space formalism – Physics Stack Exchange
[9] This answer to: mathematical physics – Quantum mechanics in a metric space rather than in a vector space, possible? – Physics Stack Exchange
[10] functional analysis – Resolution of the identity (basic questions) – Mathematics Stack Exchange

August 27, 2017
Busy roads of Melbourne
Filed under: geography — ckrao @ 11:16 am

June 30, 2017
The ballot problem and Catalan’s triangle
Filed under: Uncategorized — ckrao @ 10:15 pm

The ballot problem asks for the probability that candidate A is always ahead of candidate B during a tallying process if they respectively end up with p and q votes where p > q. For example if p = 2, q = 1 there are 3 ways in which the three votes are counted (AAB, ABA, BAA) but the only favourable outcome in which A remains ahead throughout occurs if the tally appears as AAB. Hence the probability A remains ahead is 1/3. If there are no restrictions, the number of ways the votes are tallied is the binomial coefficient \binom{p+q}{p}.

The number of favourable outcomes (the numerator of the desired probability) in which A remains ahead can be counted recursively in a similar way to Pascal’s triangle (each number the sum of the two neighbours above it) except no number may appear to the left of the vertical midline, as illustrated below. For example, the second element of the fifth row (3) corresponds to the case p = 3, q = 1 (AAAB, AABA, ABAA). More generally, dividing into the cases where the final vote is A or B, the number of ways N_{p,q} in which A remains ahead of B is equal to N_{p-1,q} + N_{p,q-1} where N_{p,q} = 0 if q \geq p. This sequence appears as A008313 in the OEIS and is the reversed form of Catalan’s triangle.

One way of generating the general term is to make use of a beautiful reflection principle that gives a 1-1 correspondence between the number of tallies leading to a tie at some point and the number of tallies in which the first vote goes to candidate B: simply interchange A with B for all votes up to and including the first tie. This amounts to reflecting the random walk about the midline, as illustrated below with the blue path corresponding to ABAA and the red path BAAA. Since p > q, the probability candidate A always leads is 1 minus the probability the sequence ties at some point. But the bijection above shows an equal number of these start with A and with B, so our desired probability is

\displaystyle 1 - 2 \text{Pr(sequence starts with B)} = 1 - 2\frac{q}{p+q} = \frac{p-q}{p+q}.
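A quick computational check of the recursion and the ballot probability (the code is ours, not from the post; it uses the strict "always ahead" boundary N_{p,q} = 0 for q \geq p stated above, which matches the closed form \binom{p+q}{p}\frac{p-q}{p+q}):

```python
from math import comb
from functools import lru_cache

# Recursion N_{p,q} = N_{p-1,q} + N_{p,q-1} for tallies in which A stays strictly ahead,
# checked against the closed form C(p+q, p) * (p-q) / (p+q).
@lru_cache(maxsize=None)
def N(p, q):
    if q >= p:
        return 0
    if q == 0:
        return 1                 # the only such tally is AAA...A
    return N(p - 1, q) + N(p, q - 1)

for p, q in [(2, 1), (8, 6), (10, 4)]:
    closed = comb(p + q, p) * (p - q) // (p + q)
    print(p, q, N(p, q), closed)          # (2,1) -> 1; (8,6) and (10,4) -> 429

# The ballot probability is N(p, q) / C(p+q, p) = (p - q) / (p + q).
```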
The numbers in the triangle are also formed by differences of adjacent entries of Pascal’s triangle, namely row p+q has terms of the form

\displaystyle \begin{aligned} N_{p,q} &= \binom{p+q}{p}\frac{p-q}{p+q}\\ &= \frac{(p+q-1)!(p-q)}{p!q!}\\&= \binom{p+q-1}{q}-\binom{p+q-1}{p}.\end{aligned}

This can be interpreted as the number of unrestricted sequences with p As and q Bs of length (p+q) that start with A minus the corresponding number that start with B, again following from the reflection principle. As an aside, looking at the bottom row above we see N_{8,6} = N_{10,4} = 429, or equivalently

\displaystyle \binom{13}{4} - \binom{13}{3} = \binom{13}{6} - \binom{13}{5} = 429.

Finally we note that the Catalan numbers arise from the following parts of the triangle above:
• as entries in the first column (counting Dyck paths)
• as the sum of squares of each row
• as the sum of entries in NE-SW diagonals

Catalan’s triangle can be generalised to a trapezium in which we count the number of strings consisting of n As and k Bs such that in every initial segment of the string the number of Bs does not exceed the number of As by m or more.

April 9, 2017
Highest aggregates and averages after n test matches/innings

A more complete list of the top 10 scorers in these categories after n tests/innings is below. Statistics are from ESPN Cricinfo and are current to 16 September 2019. Corrections are welcome.
Our “machine learning with nonlinear waves” paper featured in Physics! In “Riding Waves in Neuromorphic Computing”, Marios Mattheakis highlights, with a thoughtful Viewpoint, our recent PRL paper on the artificial intelligence of nonlinear waves.

The Artificial Intelligence of Waves

In a paper published in Physical Review Letters, titled Theory of Neuromorphic Computing by Waves: Machine Learning by Rogue Waves, Dispersive Shocks and Solitons, we study artificial neural networks with nonlinear waves as a computing reservoir. We discuss universality and the conditions to learn a dataset in terms of output channels and nonlinearity. A feed-forward three-layered model, with an encoding input layer, a wave layer, and a decoding readout, behaves as a conventional neural network in approximating mathematical functions, real-world datasets, and universal Boolean gates. The rank of the transmission matrix has a fundamental role in assessing the learning abilities of the wave. For a given set of training points, a threshold nonlinearity for universal interpolation exists. When considering the nonlinear Schrödinger equation, the use of highly nonlinear regimes implies that solitons, rogue waves, and shock waves have a leading role in training and computing. Our results may enable the realization of novel machine learning devices by using diverse physical systems, such as nonlinear optics, hydrodynamics, polaritonics, and Bose-Einstein condensates. The application of these concepts to photonics opens the way to a large class of accelerators and new computational paradigms. In complex wave systems, such as multimodal fibers, integrated optical circuits, random and topological devices, and metasurfaces, nonlinear waves can be employed to perform computation and solve complex combinatorial optimization.

The paper was selected as Editors’ Suggestion and Featured in Physics.

See also: Minimizing large-scale Ising models with disorder and light: the “classical-optics advantage”

Since the 1980s we have known how to build optical neural networks that simulate the Hopfield model, spin glasses, and related models. New developments in optical technology and light control in random media clearly demonstrate the “optical advantage,” even when restricted to good old classical physics.

Scalable spin-glass optical simulator

Many developments in science and engineering depend on tackling complex optimizations on large scales. The challenge motivates an intense search for specific computing hardware that takes advantage of quantum features, stochastic elements, nonlinear dissipative dynamics, in-memory operations, or photonics. A paradigmatic optimization problem is finding low-energy states in classical spin systems with fully-random interactions. To date, no alternative computing platform can address such spin-glass problems on a large scale. Here we propose and realize an optical scalable spin-glass simulator based on spatial light modulation and multiple light scattering. By tailoring optical transmission through a disordered medium, we optically accelerate the computation of the ground state of large spin networks with all-to-all random couplings. Scaling of the operation time with the problem size demonstrates an optical advantage over conventional computing. Our results provide a general route towards large-scale computing that exploits speed, parallelism, and coherence of light.
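For readers who want to see the underlying optimization problem in code, here is a minimal conventional-CPU sketch of the Ising spin-glass energy minimization with all-to-all random couplings; this is a toy simulated-annealing baseline of our own with arbitrary parameters, not the optical algorithm of the paper.

```python
import numpy as np

# Ising spin glass with all-to-all random couplings: E(s) = -1/2 sum_ij J_ij s_i s_j.
rng = np.random.default_rng(0)
n = 100
J = rng.standard_normal((n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)                 # no self-coupling

def energy(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1.0, 1.0], size=n)      # random initial spin configuration
T = 2.0
for step in range(20000):                # simple simulated-annealing baseline
    i = rng.integers(n)
    dE = 2.0 * s[i] * (J[i] @ s)         # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]
    T *= 0.9997                          # slow geometric cooling

print(energy(s))                         # a (generally local) low-energy state
```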
The Game of Light
In memoriam: John Horton Conway
The Enlightened Game of Life (EGOL)
Quick and dirty implementation of the EGOL in a Python Notebook

Optimal noise in Ising machines

Ising machines are novel computing devices for the energy minimization of Ising models. These combinatorial optimization problems are of paramount importance for science and technology, but remain difficult to tackle on a large scale by conventional electronics. Recently, various photonics-based Ising machines demonstrated ultra-fast computing of the Ising ground state by data processing through multiple temporal or spatial optical channels. Experimental noise acts as a detrimental effect in many of these devices. Here, by contrast, we demonstrate that an optimal noise level enhances the performance of spatial-photonic Ising machines on frustrated spin problems. By controlling the error rate at the detection, we introduce a noisy-feedback mechanism in an Ising machine based on spatial light modulation. We investigate the device performance on systems with hundreds of individually-addressable spins with all-to-all couplings and we find an increased success probability at a specific noise level. The optimal noise amplitude depends on graph properties and size, thus indicating an additional tunable parameter helpful in exploring complex energy landscapes and in avoiding trapping in local minima. The result points to noise as a resource for optical computing. This concept, which also holds in different nanophotonic neural networks, may be crucial in developing novel hardware with optics-enabled parallel architecture for large-scale optimizations.

Published in Nanophotonics: Noise-enhanced spatial-photonic Ising machine
See also: Large scale Ising machine by a spatial light modulator
19 June 2020
Advances in soliton microcomb generation

Optical frequency combs, a revolutionary light source characterized by discrete and equally spaced frequencies, are usually regarded as a cornerstone for advanced frequency metrology, precision spectroscopy, high-speed communication, distance ranging, molecule detection, and many others. Due to the rapid development of micro/nanofabrication technology, breakthroughs in the quality factor of microresonators enable ultrahigh energy buildup inside cavities, which gives birth to microcavity-based frequency combs. In particular, the fully coherent spectrum of the soliton microcomb (SMC) provides a route to low-noise ultrashort pulses with a repetition rate over two orders of magnitude higher than that of traditional mode-locking approaches. This enables lower power consumption and cost for a wide range of applications. This review summarizes recent achievements in SMCs, including the basic theory and physical model, as well as experimental techniques for single-soliton generation and various extraordinary soliton states (soliton crystals, Stokes solitons, breathers, molecules, cavity solitons, and dark solitons), with a perspective on their potential applications and remaining challenges.

Optical microcavities, which emerged from the rapid development of modern micro/nanofabrication technologies, have grown to be revolutionary devices that light the way toward several fantastic applications, including advanced light sources, ultrafast optical signal processing, and ultrasensitive sensors, benefitting from their unprecedented small size and high buildup of energy inside the resonators.1 The resonant optical field can be strongly enhanced in a high-quality (Q) factor microcavity, which results in long light–matter interaction lengths and ultralow thresholds for nonlinear optical effects.2,3 In the case of microcavities made of materials with inversion symmetry (e.g., silica, silicon nitride, or crystals), the elemental nonlinear interaction is third-order nonlinearity, which gives rise to the parametric process of four-wave mixing (FWM). Together with the intrinsic filtering character of microresonators, discrete optical frequency components with equal spacing can be generated, which are termed micro-optical frequency combs (μOFC or microcombs). Compared with traditional OFCs built on mode-locked solid-state or fiber lasers, the microcomb is considered a new type of coherent light source that shows unique and promising advantages of lower power consumption as well as whole-system integration. Further, microcombs are also capable of generating ultrashort pulses with gigahertz to terahertz repetition rates,4,5 which is far beyond the limitations of physical cavity length for conventional lasers; thus, they can find important applications in fundamental physical precision metrology.6 Therefore, microcombs have been developed as a powerful alternative that enables miniaturization of OFCs and opens investigations into new nonlinear physics and corresponding applications. Technically speaking, the character of a microcomb mainly depends on microresonator properties as well as the pumping parameters (e.g., pump power and frequency detuning).
The Q factor, dispersion profile, and cavity mode of a microresonator directly decide the threshold, bandwidth, and frequency spacing (and other related features) of emitted combs, respectively, while the pump settings determine the operation states and output performance such as the spectral envelope and noise characteristic. Since the landmark demonstration in 2003 of the high-Q toroid microcavity with a Q-factor in excess of 100 million,7 considerable efforts have been made to improve the Q of microcavities on various material platforms by developing fabrication techniques as well as more reliable approaches for generating broadband low-noise microcombs, as shown in Fig. 1. For example, Kippenberg et al.2 obtained several discrete comb lines through optical parametric oscillation (OPO) in a silica toroid microcavity in 2004; thereafter, Del’Haye et al.8 successfully achieved a microcomb covering 500 nm in the telecom band in 2007. In 2011, Okawachi et al.9 suggested that the noise of a microcomb can be lowered through varying the pump detuning. One special mode-locked comb, termed a soliton microcomb (SMC), can evolve from the primary comb and modulation instability (MI) comb while the pump sweeps across a resonance from the blue- to red-detuned regime.10 Physically, solitons can be spontaneously organized in a continuous wave (CW)-driven microresonator, where a double balance is reached: nonlinearity against dispersion, and dissipation against gain.10 However, the states of microcombs are dependent on the pump-resonance detuning, where multiple operation possibilities exist for a resonator at a given pump power and frequency.11 When the chaotic MI state transitions to the soliton state, the dramatic intracavity power drop can result in the pump frequency shifting out of the cavity resonance through a thermo-optical effect, so achieving stable soliton operation is a great challenge in practical experiments. To overcome the strong thermal-optical effect that hinders steady SMC access, various schemes have been implemented. A representative work is the first temporal soliton generation using a frequency-scanning method;10 since then “power-kicking,”12,13 thermal tuning,14 and auxiliary-laser-based15,16 methods have been introduced. Meanwhile, rich types of soliton states, including the Stokes soliton induced by the Raman effect,17 dual-soliton generation in a single microcavity,18 soliton crystals,19 breathers,20 laser cavity solitons,21 soliton molecules,22 and dark pulse states operating in a normal-dispersion regime,23 have all been discovered (Fig. 1). In addition, a variety of nonlinear phenomena (e.g., dispersive wave, mode crossing, and Raman self-frequency shift) that are closely related to the characteristics of microresonators have also been revealed.12,24–27 All of these achievements suggest that SMCs can establish an interface between soliton physics and integrated photonics as well as materials science.

Fig. 1 Route map of microcombs.

Until now, based on advanced experimental techniques, SMCs have been realized in an MgF2 whispering-gallery-mode (WGM) resonator,10 silica disk13 and microrod,28,29 AlN,30 LiNbO3,31 and CMOS-compatible microring resonators of SiN,12 Si,32 and high-index doped silica glass.15 Table 1 briefly summarizes some typical SMCs obtained on various material platforms with distinct cavity properties.
Furthermore, because of the developed fabrication process and improved dispersion engineering, SMCs with different spectral coverages have been demonstrated on various material platforms. As shown in Fig. 2, SMC generation in the visible,31,33 near-,10,13,15,30,34,35 and mid-infrared36 regions has been achieved, covering a wavelength range of down to 750 nm31 and up to 4300 nm.36 Benefitting from the fully coherent feature across the whole spectral coverage,10–12,37 the advent of SMCs has promoted research in various applications, such as dual-comb spectroscopy (DCS),38 terabit coherent optical communications,39 photonic-integrated frequency synthesizers,40 ultrafast distance measurements,41 and calibration of astrophysical spectrometers for exoplanet searching.42 Beyond these developed applications, SMCs are also relevant to a large variety of physical systems that could provide ideal testbeds for fundamental nonlinear wave dynamics research.43,44

Table 1 Typical parameters of reported SMCs.

Material | Structure | Q | FSR (GHz) | Wavelength range (nm) | Types | Refs.
MgF2 | Rod | 4.3×10^8 | 35.2 | 1520 to 1590 | Solitons | 10
MgF2 | Rod | >1.0×10^8 | 14 | 1520 to 1590 | Solitons | 24
MgF2 | Rod | 4.7×10^8 | 25.78 | 1540 to 1580 | Solitons | 36
MgF2 | Rod | >10^9 | 12.5 | 1526 to 1548 | Solitons | 45
SiO2 | Disk | 4×10^8 | 22 | 1510 to 1600 | Solitons | 13
SiO2 | Disk | 10^8 | 20 | 1025 to 1125 and 760 to 790 | Solitons | 33
SiO2 | Disk | >10^8 | 22 | 1600 to 1650 | Stokes solitons | 17
SiO2 | Disk | >10^8 | 1.8 to 33 | 1530 to 1570 | Solitons | 4
SiO2 | Disk | >10^8 | 26, 16.4 | 1540 to 1580 | Soliton crystals | 46
SiO2 | Disk | 1.8×10^8 | 22 | 1510 to 1590 | Solitons | 47
SiO2 | Rod | >10^8 | 55.6 | 1520 to 1590 | Solitons | 28
SiO2 | Rod | 3.7×10^8 | 50 | 1530 to 1590 | Solitons | 29
AlN | Ring | 0.65×10^8 | >500 | 1400 to 1700 | Solitons | 30
SiN | Ring | 5×10^5 | 189 | 1330 to 2000 | Solitons | 12
SiN | Ring | 0.4×10^5 | 1000 | 1100 to 2300 | Solitons | 48
SiN | Ring | 1.4×10^6 | 1000 | 1420 to 1700 | Solitons | 49
SiN | Ring | 0.6×10^6 | 1000 | 850 to 2000 | Solitons | 34
SiN | Ring | 0.5×10^6 | 200 | 1440 to 1660 | Solitons | 50
SiN | Ring | >2.0×10^6 | 200 | 1470 to 1620 | Breathers | 51
SiN | Ring | 1.9×10^6 | >200 | 1450 to 1700 | Breathers | 20
SiN | Ring | 8.0×10^6 | 194 | 1540 to 1640 | Solitons | 52
SiN | Ring | >15×10^6 | 99 | 1570 to 1630 | Solitons | 53
SiN | Ring | 0.65×10^6 | 1000 | 776 to 1630 | Solitons | 35
SiN | Ring | 0.77×10^6 | 231.3 | 1460 to 1610 | Dark pulses | 23
Doped silica glass | Ring | 1.7×10^6 | 49 | 1480 to 1650 | Solitons | 15
Doped silica glass | Ring | 1.7×10^6 | 49 | 1480 to 1650 | Soliton crystals | 19
Doped silica glass | Ring | 1.3×10^6 | 49 | 1510 to 1580 | Cavity solitons | 21
Graphene-nitride | Ring | 10^6 | 90 | Tunable(a) | Soliton crystals | 54
Silicon | Ring | 0.2×10^6 | 127 | 2800 to 3800 | Breathers | 51
LiNbO3 | Ring | 2.2×10^6 | 199.7 | 750 to 800 and 1460 to 1650 | Solitons | 31
LiNbO3 | Ring | >1.1×10^6 | 200 | 1830 to 2130 | Solitons | 55

Note: All results are approximate values obtained from the reported literature.
(a) Spectral coverage of 1450 to 1700 nm was experimentally verified in Ref. 54.

Fig. 2 Typical spectral coverage of SMCs on various material platforms using different approaches.

In this review, we summarize recent experimental achievements with a perspective on the potential and challenges. The remainder of this paper is organized as follows. Sec. 2 introduces the basic theory and physical model of microcomb generation. In Sec. 3, we mainly focus on the presented techniques for single SMC generation, including the rapid frequency and thermal tuning, power-kicking, forward and backward sweeping, auxiliary-laser-assistance, self-injection-locking using CW lasers, and synchronous driving by pulsed sources. Various extraordinary soliton states, including soliton crystals, Stokes solitons, breathers, soliton molecules, laser cavity solitons, and dark pulses, are discussed in Sec. 4. Finally, in Sec.
5, a brief summary of the valuable applications of SMCs is given, and an outlook on the underlying challenges and opportunities for this field is presented in Sec. 6.

Physics and Numerical Models for Microcombs

The generation of microcombs arises from parametric frequency conversion through the FWM effect that generates a pair of photons (a signal and an idler) that are symmetrically spaced about the pump. The photonic interaction can be expressed as 2ωp → ωs + ωi, where ωp, ωs, and ωi are the pump, signal, and idler angular frequencies, respectively. The frequencies of the newly generated photons are resonance enhanced in microcavity resonances ωs,i = ωp ± nΔΩ, where ΔΩ is the angular frequency of the free-spectral-range (FSR) that is determined by the cavity optical length and n = 1, 2, … is the mode number with the assumption of the pump at mode 0. For a theoretical description of the microcomb formation mechanism, first the evolution of each mode amplitude Aμ can be modeled using a nonlinear wave equation. Then the microcomb generation process can be completely described by a set of autonomous, nonlinear, and coupled ordinary differential equations, which are called coupled mode equations (CMEs):56,57 Eq. (1) where κ is the cavity decay rate, including the intrinsic and coupling decay rate. η = κext/κ is the coupling efficiency, and |sin,out|2 = Pin,out/(ħω0), where sin and sout denote the amplitudes of the pump and output fields. δμ0 is the Kronecker delta. For microcomb formation, δμ0 equals 1 for the pump mode and 0 for other modes. The nonlinear coupling coefficient g = ħω02cn2/(n02Veff) describes the cubic nonlinear gain, where n0 and n2 are the refractive index and nonlinear refractive index, respectively, Veff is the effective cavity volume, and c and ħ are the speed of light and the reduced Planck constant, respectively. The resonant frequencies ωμ of a microcavity can be Taylor-expanded around the pump mode as Eq. (2) where ω0 is the angular frequency of the pump mode, also regarded as the reference frequency. μ is the relative mode number of the resonance away from the reference frequency. D1/2π is the FSR of the resonator at the frequency ω0. D2 and D3 are the second- and third-order dispersion parameters, respectively. ωp − ω0 represents the pump detuning, where ωp is the pump frequency. The summation includes all μ′, μ″, and μ‴ respecting the relation μ′ + μ″ − μ‴ = μ. The above model assumes that the spatial power densities of all modes overlap. The CMEs have been successfully used to determine the threshold and explain the role of dispersion as well as other mechanisms in the microcomb formation. However, the amount of computation increases dramatically with increases in the mode number. Through considering the total intracavity field as an entirety, A(θ,t) = Σμ Aμ(t)e^{iμθ}, where θ ∈ [−π, π] is the azimuthal angle along the resonator, the CMEs can be simplified to the equivalent approach of the Lugiato–Lefever equation (LLE), which is also referred to as the spatiotemporal model:58–60 Eq. (3) where ζk represents the dispersion parameters. The LLE can be optimally solved using the split-step Fourier algorithm considering the periodic boundary conditions of microresonators.
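The equation bodies labelled Eq. (1)–(3) did not survive extraction, so the following LaTeX block sketches the standard forms these equations take in the soliton microcomb literature; sign and normalization conventions vary between references, so this is a hedged reconstruction consistent with the symbols defined above rather than the authors' exact expressions.

```latex
% Coupled mode equations (CMEs), schematic standard form [cf. Eq. (1)]
\frac{\partial A_\mu}{\partial t}
  = -\Bigl[\tfrac{\kappa}{2} + i\bigl(\omega_\mu-\omega_p-\mu D_1\bigr)\Bigr]A_\mu
  + i g \sum_{\mu'+\mu''-\mu'''=\mu} A_{\mu'}A_{\mu''}A^{*}_{\mu'''}
  + \delta_{\mu 0}\sqrt{\eta\kappa}\,s_{\mathrm{in}}

% Dispersion expansion of the resonance frequencies [cf. Eq. (2)]
\omega_\mu = \omega_0 + D_1\,\mu + \tfrac{1}{2}D_2\,\mu^{2} + \tfrac{1}{6}D_3\,\mu^{3} + \cdots

% Lugiato--Lefever equation (LLE) for the total field [cf. Eq. (3)], keeping only
% second-order dispersion; higher-order terms enter through the parameters \zeta_k
\frac{\partial A(\theta,t)}{\partial t}
  = -\Bigl(\tfrac{\kappa}{2} + i\,\delta\omega\Bigr)A
  + i\,\frac{D_2}{2}\,\frac{\partial^{2}A}{\partial\theta^{2}}
  + i g\,|A|^{2}A + \sqrt{\eta\kappa}\,s_{\mathrm{in}},
  \qquad \delta\omega \equiv \omega_0-\omega_p
```

Under the same caveats, a minimal split-step Fourier integrator for this D2-only LLE might look like the sketch below; the normalized parameters are made up for illustration, the drive is treated with a crude first-order (Euler) step, and the detuning is held fixed here even though in experiments it is swept as described in the following sections.

```python
import numpy as np

# Split-step Fourier sketch of the D2-only LLE above (illustrative values only).
N = 512
mu = np.fft.fftfreq(N, d=1.0/N)          # relative mode numbers ..., -2, -1, 0, 1, 2, ...

kappa, eta = 1.0, 0.5                    # total decay rate and coupling efficiency
D2, g = 0.01, 1.0                        # anomalous dispersion and per-photon Kerr shift
detuning, s_in = 3.0, 3.0                # delta_omega = omega_0 - omega_p (fixed), pump amplitude
dt, steps = 2e-3, 50_000

# Linear step (loss, detuning, dispersion) is diagonal in the mode basis.
expL = np.exp((-(kappa/2 + 1j*detuning) - 1j*(D2/2)*mu**2) * dt)

A = 0.1*(np.random.randn(N) + 1j*np.random.randn(N))     # small noise seed
for _ in range(steps):
    A = A*np.exp(1j*g*np.abs(A)**2*dt) + np.sqrt(eta*kappa)*s_in*dt   # Kerr + drive (Euler)
    A = np.fft.ifft(expL*np.fft.fft(A))                               # linear step in mode space

comb_power = np.abs(np.fft.fftshift(np.fft.fft(A)))**2   # comb-line powers vs mode number
```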
Based on the LLE, mode-locked microcombs have been predicted and rich physical phenomena have been explained, including the single soliton with a dispersive wave,12 soliton crystals by taking perturbations into account,46 and dark pulse states with a modified form that involves mode interaction.23 The Raman self-frequency shift was also precisely simulated by adding a Raman response term fR hR ∗ |A|2 (the convolution of the Raman response function hR with |A|2) to the LLE, where fR indicates the Raman fraction.25,27

Experimental Schemes for Single SMC Generation

It has been found that an SMC can be spontaneously formed while a CW pump stabilizes in the red-detuned regime of a dissipative nonlinear microcavity. However, the soliton existing range exhibits thermal instability for microcavities with negative temperature coefficients, which hindered SMC observation for more than 6 years after the first microcomb realization.8,10 Therefore, the major challenge for SMC generation has been how to stabilize a pump in the red-detuned regime of a microcavity. An intuitive thought is preventing the cavity from heating up before the pump sweeps to the soliton existing range, such as the frequency-scanning method for SMC realization in a low thermal-optic coefficient MgF2 microcavity.10 Since then several equivalent and more universal schemes have been developed, e.g., the power-kicking, thermal tuning, and self-injection locking methods. Another solution for SMC generation is realizing the intracavity thermal balance using an auxiliary laser to maintain the intracavity optical power. Owing to its smooth spectral envelope and fixed temporal spacing, the single SMC is the most desired soliton state for applications, which gives rise to great research interest in deterministic single SMC generation, wherein the backward frequency/temperature scanning and pulse-pumped schemes have been introduced. In this section, we will focus on the experimental progress of single SMC generation.

Frequency-Scanning Method

The basic idea of the frequency-scanning method is sweeping the pump to the red-detuned regime before the microcavity is heated up by the thermo-optic effect. Generally, the frequency-scanning speed is determined by the thermo-optic response time, Q-factor, as well as the pump power.10,25,44,61–63 This method was first introduced for SMC generation in an MgF2 microresonator. The experimental setup is shown in Fig. 3(a).10 A tunable narrow-linewidth laser is used as the pump with its frequency sweeping speed and scanning range controlled by an electrical signal. The soliton existing range shows regional thermal stability due to the self-phase and cross-phase modulation (XPM) induced nonlinear phase shift. Therefore, the intracavity thermal equilibrium can be reached when the pump sweeps to the red-detuned regime with an appropriate speed, which leads to SMC generation and maintenance. Figure 3(b) presents the transmission power trace when the pump scans over one cavity resonance. Before the pump reaches the zero-detuning point, the intracavity optical field first evolves from the primary comb (I) to the subcomb (II) and then to the MI comb (III) state in succession; the corresponding optical spectra are shown in Fig. 3(c). Once the pump passes into the red-detuned regime, the intracavity power suffers a sudden decline and exhibits transmission steps that indicate the generation of SMCs. Figure 3(d) presents the optical spectra for one-, two-, and five-soliton states.
Since the SMCs are fully coherent, the corresponding beat notes exhibit sharp frequency lines [insets of Fig. 3(d)], reflecting excellent low-noise characteristics.

Fig. 3 Experimental demonstration of stable temporal solitons in a high-Q MgF2 microresonator using the frequency-scanning method. (a) Experimental setup for stable temporal soliton generation. (b) Optical transmission power trace when the pump scans over a resonance. The discrete steps in the red-detuned regime (green shading) indicate the existence of cavity solitons. (c) Optical spectral evolution while the pump sweeps in the blue-detuned regime. (d) Optical spectra for SMCs with 1, 2, and 5 solitons. OSA, optical spectrum analyzer; ESA, electrical spectrum analyzer; PD, photodetector; LO, local oscillator; FPC, fiber polarization controller; EDFA, erbium-doped fiber amplifier. Images are adapted with permission from Ref. 10.

The pump frequency-scanning method is a fundamental and intuitive approach for SMC generation. The success of this method relies on the control of the pump frequency sweeping speed and accuracy. The laser scanning time should be comparable to the cavity lifetime and thermal lifetime of the microresonator, which has a very high Q and relatively large thermal volume. Meanwhile, the laser wavelength should exactly stabilize at the soliton steps. Because of the limited tuning speed and the frequency accuracy or stability of tunable lasers, it is a challenge for single SMC generation in microresonators with short thermal lifetimes.63 To improve the applicability of the frequency-scanning method, a single-sideband suppressed-carrier (SSB-SC) frequency shifter is introduced to improve the frequency tuning speed as well as the frequency control accuracy, which is determined by the driving radio-frequency signal. Using an SSB-SC as a frequency shifter, SMCs have been realized in Si3N4 and AlN microresonators with soliton step lengths on the order of tens to hundreds of nanoseconds.30,47 For some special cases, SMCs can also be generated with relatively slow scanning speeds once the thermal dynamics during soliton formation can be stabilized by other approaches. For example, a partially overlapping mode can be used to compensate for the thermal dissipation when the pump tunes to the red-detuning regime. By taking advantage of an adjacent mode family in a specific Si3N4 microring, the thermal challenge is overcome for stable SMC generation.48 Additionally, through scanning the pump to a fixed frequency, delayed spontaneous soliton generation from the chaotic state is also observed, which is tuning speed independent.49

Power-Kicking Scheme

For microresonators with a strong thermo-optic effect, the durations of soliton steps are so short that stopping the laser frequency exactly within these steps is technically difficult.63 As shown in Figs. 4(c)–4(e), the typical durations of soliton steps are on the order of submicroseconds for Si3N4 microresonators. It becomes a great challenge to stabilize the pump in such short soliton steps considering the performance of practical tunable lasers. A more universal solution is using modulators to control the pump power and timing sequence stringently, which is termed the power-kicking approach.12

Fig. 4 SMC generation based on power-kicking scheme. (a) Experimental setup of power-kicking scheme. The microresonator is pumped by an external cavity diode laser that is amplified and modulated by an AOM and an EOM.
(b) Typical triangular shape of the transmission power while the pump sweeps across a resonance. (c)–(e) Soliton steps of Si3N4 microcavities with repetition rates of 38, 70, and 190 GHz, respectively. (f) Measured optical spectrum of single SMC covering over 2/3 octave bandwidth. AFG, arbitrary function generator; EDFA, erbium-doped fiber amplifier; FPC, fiber polarization controller; OM, optical modulator; RF, radio frequency; TLF, tapered-lensed fiber. Images are adapted with permission from Ref. 12.

A typical experimental setup is shown in Fig. 4(a). More details concerning this technique are depicted in Fig. 5, where the pump laser passes through an electro-optic modulator (EOM), an EDFA, and an acousto-optic modulator (AOM) before coupling to a Si3N4 microresonator.63 First, the AOM lowers the pump power before the laser tunes into the resonance and increases the pump power within soliton steps at the end of the thermal triangle of the resonance. Second, a fast pump power drop modulated by the EOM is introduced before the pump sweeps across the zero-detuning point. The fast power drop reduces the nonlinear thermal effect and induces a fast zero-detuning transition of the pump laser. Accordingly, the pump power, modulation depth, and initial timing should be carefully optimized for stable SMC generation.63 Examples of the timing sequence of the pump frequency sweeping, fast EOM modulation, and slow AOM modulation are shown in Figs. 5(b)–5(e). Such a “power-kicking” method has proved to be capable of achieving reliable transition into the soliton state, and solitons with over 2/3 octave bandwidth were observed [Fig. 4(f)] benefitting from the soliton-induced Cherenkov radiation.12

Fig. 5 Schematic and timing sequences of the power-kicking scheme. (a) Setup used to bring very short-lived soliton states to a steady state, including two modulators to adjust the pump power. (b) Timing sequences of the pump scanning, the fast and slow power modulation, and the converted light power. (c) Initial timing of the fast modulation with respect to the thermal triangle and slow power modulation. (d) The soliton steps induced by the fast power modulation. (e) Combined effect of the fast and slow modulation. Images are adapted with permission from Ref. 63.

For cases with long enough soliton steps (e.g., longer than several microseconds), a single AOM can provide enough modulation speed for SMC generation. Additionally, the pump parameters can be adjusted with an active feedback loop to realize active capture and stabilization of temporal solitons.64 The power-kicking scheme has been widely used in some proof-of-concept applications such as DCS38 and microcomb-based range measurement.65 However, additional modulators and the precision control circuit complicate the SMC system, which increases the technical difficulties for miniaturized integration. Therefore, more compact SMC generation approaches, such as the thermal-tuning method and self-injection-locked scheme, have been developed and are discussed next.

Thermal-Tuning Method

An equivalent approach for SMC generation is shifting the resonances of a microresonator through a thermal-tuning method rather than tuning the pump frequency. As shown in Fig.
6, a narrow-linewidth laser with a fixed frequency is used as the pump, and the microcavity resonances are thermally controlled by an on-chip heater.14 Through controlling the heater current, the cavity resonances can be shifted at a sufficiently high speed by changing the waveguide refractive index via the thermo-optic effect. As the current (and hence the temperature) is reduced, one resonance passes through the pump and characteristic soliton steps can be observed as shown in Fig. 6(b). Although the lifetime of these soliton steps is on the order of microseconds, SMCs can still be obtained because of the high control accuracy of the heater current tuning speed and stop value.

Fig. 6 SMC generation based on thermal-tuning method. (a) Experimental setup of thermally controlled SMC generation in a Si3N4 microresonator. (b) Transmission optical power trace of the generated microcomb. Steps marked by arrows indicate transitions between different multisoliton states. Images are adapted with permission from Ref. 14.

In principle, the thermal-tuning method can be regarded as a variant of the pump frequency-scanning method. Compared with tunable lasers, fixed-frequency lasers usually have much narrower linewidth and lower noise, so using a fixed-frequency laser for soliton generation is attractive for improving microcomb performance. Meanwhile, a fixed-frequency laser has a smaller footprint and the integration technology for current sources is rather mature, so the thermal-tuning method has the potential to realize a fully integrated microcomb.52

Auxiliary-Laser-Based Method

Because the challenges of SMC generation mainly arise from the thermal instability of the soliton existing range, it is reasonable to expect that the thermal problem can be solved by maintaining the intracavity optical power at a similar level during SMC generation. It is noted that microresonators exhibit opposite thermal behavior when pumped in the blue- and red-detuned regimes. As a result, the dramatic decrease of intracavity heat when a pump laser tunes into a soliton existing range can be effectively compensated for by an auxiliary laser located in the blue-detuned regime. This principle has been verified recently,15,16,28,29,34 and the requirement for a rigid tuning time (on the order of the thermal lifetime) can be relaxed using the auxiliary-laser-assisted approach. A typical experimental setup is shown in Fig. 7(a), where the auxiliary and pump lasers are counter-coupled into a Si3N4 microresonator.16 Figure 7(b) shows the relative position of resonances and lasers during the tuning process. First, the auxiliary laser is set on the blue-detuned side, close to the resonance peak. Then the pump is tuned into another resonance from the counter-propagating direction. Once the pump laser enters the red-detuned regime, the intracavity power drop can be compensated for by the auxiliary laser and the cavity resonance can be stabilized. Therefore, the transition from the chaotic comb to soliton states can be stably realized, and soliton access no longer depends on the tuning speed. As shown in Fig. 7(c), during the soliton generation process, the total intracavity power remains at a similar level, which avoids resonance drift.

Fig. 7 SMC generation by the auxiliary-laser-based method. (a) Experimental setup. (b) Schematic of the counter-coupled auxiliary-laser-assisted thermal response control method.
(c) The pump and auxiliary laser counter-balance thermal influences on the microcavity. (d) Optical spectrum of single SMC. Images are adapted with permission from Refs. 16 and 66.

Based on the auxiliary-laser-assisted thermal-balance approach, a new SMC regime is discovered in which the soliton power exhibits a negative slope versus pump frequency detuning. It is distinct from the traditional soliton existence regime with a positive slope that is accessible via thermal locking by the thermal-avoidance methods above. The negative slope implies that the increase of average comb power is less than the decrease of pump background, resulting in the total intracavity power decreasing with increasing detuning. In another experiment, it is proved that the durations of soliton steps can be extended by two orders of magnitude under the assistance of a codirectionally coupled 1.3-μm auxiliary laser, enabling robust soliton generation even by manual tuning of the pump frequency into resonance with sub-milliwatt-level power.29 Besides, SMCs are also realized in silica and high-index doped silica glass microresonators.15,28 All of these experimental results suggest that using an auxiliary laser can contribute to intracavity thermal equilibrium. It is regarded as an effective and universal method for stable SMC generation. Further, the auxiliary laser provides an additional degree of freedom for research on microcomb dynamics. For example, the frequency spacing of the auxiliary and pump lasers has a significant impact on the microcomb states, and the beating between the auxiliary and pump lasers provides an optical lattice for soliton capture, which would be helpful for soliton crystal generation. This method can also provide a feasible approach to realizing spectral extension and synchronization of a microcomb in a single microresonator.67

Photorefractive Effect for Stable SMC Generation

Because of the negative temperature coefficient of microresonators, the soliton existing range exhibits thermal instability, which results in complex pump tuning techniques for SMC generation. By contrast, if the refractive index of a microresonator decreases with increasing intracavity optical power, the pump can enter the red-detuned regime stably for SMC generation, just like the MI comb generation in a negative temperature coefficient microresonator. It has been discovered that the photorefractive effect in a Z-cut LiNbO3 waveguide can cause an intensity-dependent decrease in the refractive index.31 Moreover, the photorefractive effect has a stronger influence on the refractive index compared with that of the thermo-optic effect, which results in the red-detuned regime becoming a thermally stable region.31 Figure 8(a) schematically shows the influence of the photorefractive effect, which is opposite to that induced by the thermo-optic nonlinearity. The inset of Fig. 8(a) presents the measured transmission power trace while a laser passes through a resonance from the red-detuned side. It is found that the response time of the photorefractive effect is much slower than that of the thermo-optic effect. Accordingly, if the tuning speed of the pump frequency is faster than the response time of the photorefractive effect, the microresonator is first affected by the thermo-optic effect, which causes resonance red-shift and induces intracavity power increase. Therefore, three different dynamic processes may take place depending on the pump frequency tuning speed.
When the pump scanning time is less than the thermal lifetime, the thermo-optic effect will dominate and no SMC can be stabilized. Once the pump scanning time is comparable to the thermal lifetime, power spikes will be observable as shown in Fig. 8(c). If the pump scanning speed further decreases, the power spikes can be effectively suppressed as shown in Fig. 8(b).

Fig. 8 Bichromatic SMC generation in a LiNbO3 microresonator. (a) Schematic for influences of optical Kerr and photorefraction effects. The inset is the measured optical power traces when a pump sweeps across a resonance from the red-detuned side. (b) Comb power trace versus scanning time when the pump slowly sweeps forward and backward in the red-detuned regime. (c) Comb power trace versus scanning time when the pump rapidly sweeps from the red- to blue-detuned regimes. The power spikes are caused by the relatively slower response speed of the photorefraction effect. (d) and (e) Optical spectra of SMCs in near-infrared and visible bands, respectively. Images are adapted with permission from Ref. 31.

Because of the photorefractive effect, the thermal instability of the soliton existing range is completely compensated for. Therefore, the SMC can be stably generated by coupling a pump into the resonance from the red-detuned side, and the pump can be freely tuned forward and backward for soliton switching. Meanwhile, the thermal stability of the soliton existing range can contribute to simplification of control circuits for SMC generation, which is crucial for miniaturized integration and practical applications. More interestingly, the LiNbO3 waveguide has not only the Kerr effect but also second-order nonlinearity, which results in visible SMC generation by the second harmonic effect. It is an effective approach to realizing bichromatic microcombs when the cavity material has second- and third-order nonlinearities simultaneously, which has also been verified in aluminum nitride microresonators.68 The second harmonic effect provides another approach for SMC generation in the visible frequency band where microcavities suffer from higher transmission loss.

Forward and Backward Tuning Method

Due to the inherently stochastic intracavity dynamics, it remains a challenge to realize repeatable soliton switching and deterministic single SMC generation if using the aforementioned strategies. For example, Fig. 9(b) shows the overlaid optical power traces when the pump laser sweeps over a microresonator resonance using the forward frequency-tuning method. It clearly demonstrates that the soliton number N is random, with a high probability of N=6 (predominantly), 7, 8, or 9.69 In addition, the step duration decreases with decreasing N, indicating that the single SMC is not reliably accessible using only the forward-tuning technique.

Fig. 9 (a) Scheme of the forward frequency-tuning method. (b) 200 overlaid experimental traces of the output comb light in the pump forward tuning, revealing the formation of a predominant soliton number of N=6. (c) Scheme of the laser forward and backward tuning. (d) Experimental traces of the forward tuning (in yellow) and backward tuning (in white) for soliton switching and deterministic single SMC generation. (e) Measured absolute soliton existing range of a Si3N4 microring. The lower boundary δL presents a staircase pattern that can be stably accessed step by step using the backward-scanning method. (f) Optical spectrum of single SMC in a 100-GHz Si3N4 microresonator. Images are adapted with permission from Ref. 69.
After the frequency-scanning method was proposed, a forward and backward frequency-sweeping technique was introduced for deterministic single SMC generation.69 Briefly, forward frequency tuning is first applied for multisoliton generation. In the next stage, the pump sweeps backward with a slow scanning speed, leading to successive reduction of the soliton number. The cavity dynamics comparison of forward and backward frequency tuning methods is shown in Figs. 9(a)–9(d). Physically, it has been revealed that the soliton-switching process depends on the character of a soliton existence range (δL<δ<δH, where δL and δH are the lower and upper boundaries of the soliton existence range, respectively). Figure 9(e) shows the measured absolute soliton existence range with different soliton numbers in a Si3N4 microresonator. With the decrease of the soliton number, the resonances are blue-shifted owing to the reduced intracavity energy, and the absolute detuning of the pump reveals a staircase pattern. The microcomb power will “jump” down the staircase steps when the pump sweeps backward; by contrast, the microcomb power drops dramatically and the SMC annihilates directly when the pump continues sweeping forward. So the backward tuning technique can help to reliably access the single SMC state starting from an arbitrary multisoliton state. The backward tuning process must be adiabatic for successive soliton annihilation, i.e., thermal equilibrium is satisfied at every soliton annihilation step. Therefore, the backward scanning speed has to be much slower than the thermal relaxation rate of the microresonator. Figure 9(f) exhibits the single SMC when the pump sweeps backward at a speed of 40 MHz/s, while the forward tuning speed is 100 GHz/s.

Parallel progress on deterministic single SMC generation was achieved in a high-index doped silica glass microring through the forward and backward thermal-tuning method.15 The high-Q microcavity is butterfly packaged with a thermo-electric cooler, which contributes to convenient manipulation of the cavity temperature. An orthogonally polarized auxiliary laser is used to balance the intracavity thermal effect, which makes the soliton state generation independent of the tuning speed. Throughout the experiments, the wavelengths of the pump and auxiliary lasers are fixed once chosen. Similarly, the multisoliton state is first obtained through decreasing the operation temperature, and then the temperature is tuned backward for deterministic single SMC generation. The corresponding power traces for the comb light and the auxiliary laser are shown in Figs. 10(a) and 10(b), where each step represents one- or multisoliton annihilation. Figure 10(c) presents the measured optical spectra of SMCs. The forward and backward tuning method provides a feasible approach toward program-controlled automatic generation of single SMCs.

Fig. 10 Deterministic single SMC generation using thermal-tuning method.15 (a) Power traces when just decreasing the operation temperature. (b) Power traces using forward and backward operation temperature tuning method. (c) Optical spectra for soliton numbers of 4, 3, 2, and 1 in a 49-GHz high-index doped silica glass microring. Images are adapted with permission from Ref. 15.

Self-Injection Locking

One of the ultimate goals for the microcomb field is fully integrated SMC sources. A common feature of all methods mentioned above is reliance on external narrow-linewidth pumps, which introduces a great challenge for miniaturized integration.
Benefitting from the advanced micro/nanofabrication technologies, ultra-high-Q resonators can pave the way toward soliton sources directly driven by an ordinary semiconductor laser. Recently, self-injection locking methods were demonstrated by combining an MgF2 WGM resonator and a Fabry–Pérot laser diode (FP-LD).45 Figure 11(a) shows the schematic of self-injection locking in an MgF2 WGM resonator. The high-Q microresonator plays a dual role. First, it acts as an external cavity to select the LD lasing mode and narrow the linewidth via the self-injection locking effect.70,71 Second, it is used as a nonlinear low-threshold Kerr medium for soliton generation. Figures 11(b)–11(d) depict the processes of self-injection locking and SMC generation. The system can be smoothly switched into a soliton regime by tuning the FP-LD drive current. Because the back-reflection time is shorter than the thermal relaxation time, the laser frequency follows the thermally shifted cavity resonance, which suppresses the thermal instability in real time.45 As a result, the thermal instability problem can be effectively solved, which results in tuning speed independence. Therefore, this method could eliminate the requirement of special techniques for delicate amplitude or frequency manipulation.10,63,64,69

Fig. 11 Self-injection locking and spectral narrowing of a multifrequency laser diode coupled to an MgF2 ultrahigh-Q WGM microresonator. (a) Experimental setup. (b)–(d) Spectrum and (e)–(g) the corresponding beat note signal for the free-running multifrequency diode laser, laser stabilized by the microcavity, and single SMC in the self-injection locking regime, respectively.45

A key factor of this technique is the Q of the microresonator, which not only enhances the backscattered light for pump frequency locking but also lowers the needed pump power for SMC generation. Along with the success of on-chip ultra-high-Q microresonator fabrication, a series of achievements on directly pumped soliton generation have been realized.72–75 For example, one achievement utilizing this scheme is directly butt-coupling an FP-LD chip (InP) to a high-Q Si3N4 microcavity.72 Through tuning the drive current of the FP-LD, the laser is first switched to single-mode lasing; then the MI comb, breather soliton, and single soliton are realized consecutively. Very recently, the Q factor of a Si3N4 microcavity has been successfully boosted to 1.6×10^7 with the improved fabrication process, and a butterfly-packaged SMC source with a repetition rate of 15 GHz in the radio-frequency domain is realized.74 The employed self-injection locking scheme also shows a unique feature of permitting soliton generation through binary turn-on and turn-off of the pump laser, which results in “turnkey” operation without additional photonic/electronic control circuits. Another investigation shows that Kerr nonlinearity can significantly modify the locking dynamics by red-shifting the laser emission frequency, where self-phase modulation and XPM of the clockwise and counter-clockwise light enable soliton formation.75 All of this progress suggests that the self-injection locking approach has proved to be a practical and competitive technique in the development of microcombs, benefiting from the elimination of complex tuning schemes or feedback loops for soliton generation and stabilization, as well as from the power saved by removing redundant electronic components.
Until now, because of the limited lasing power of integrated pump lasers, the bandwidth of these integrated SMCs has been limited to tens of nanometers. It is believed that microcombs with a broader bandwidth and higher power will be reached with the realization of more powerful LDs in near future. The self-injected locking method offers a new route toward combining integrated microresonators and chip-scale lasers to establish an ultracompact electrically driven SMC system. Pulse-Pumped Single-Soliton Generation In addition to the narrow-linewidth CW lasers, temporally structured light sources can also be used as a pump for SMC generation.11,76,77 The merits of pulse-pumped schemes are reduction of the pump power and improvement in the conversion efficiency. Meanwhile, the generated soliton pulses are copropagating with pump pulses, which results in the synchronization of repetition rates. The principle for this scheme is depicted in Figs. 12(a) and 12(b). The CW-driven soliton is supported by the resonantly enhanced CW background, while for pulse-driven case SMC can form only when the inverse driving pulse repetition rate matches the soliton roundtrip time.76 This process is akin to but different from the conventional OPO systems that also utilize a synchronous pump. The pump pulses act as optical lattices for solitons capture, which results in the generated stable femtosecond dissipative solitons locating “on top” of the driving pulses. In the first pulse-pumped SMC generation experiment, an EOM-based picosecond pulse generator was used as the pump source. Both the repetition rate (frep) and carrier-envelope-offset frequency (fceo) of pump pulses are directly controlled via tuning the RF frequency and CW laser frequency, respectively. As shown in Fig. 12(c), when the central driving mode is scanned across a resonance from blue- to red-detuning like the frequency-tuning method, the resonator transmission shows characteristic soliton step features. The step can sustain for a wide spanning interval of frep [Fig. 12(d)], indicating that precise control of the driving pulse repetition rate is not strongly required for the SMC formation in a pulsed pump system. Fig. 12 Principle and experimental scheme for SMC generation driven by optical pulses.76 (a) For the CW-driven case, solitons propagate with a resonantly enhanced CW background. (b) For the pulse-driven case, pump pulses with repetition rate frep periodically drive the solitons. (c) Resonator transmission trace as the central driving mode scans across a resonance for an optimized repetition rate. (d) Contour plot of the resonator transmission showing soliton steps can exist for a wide (100 kHz) spanning interval of frep. Images are adapted with permission from Ref. 76. Following the pulse-pumped cavity soliton generation in fiber and Fabry–Pérot cavities,78,79 generation has also been realized in a chip-based SiN microring, which can provide a spurious-free spectrum of resolvable calibration lines in the demonstration of a proof-of-concept microphotonic astrocomb.77 Importantly, locking cavity solitons to the external driving pulse (or soliton self-synchronization) enables direct, all-optical control of both the repetition rate and carrier-envelope offset frequency for the microcomb. 
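A minimal sketch of the repetition-rate matching condition behind pulsed pumping follows: the driving pulse train (or an integer subharmonic of the soliton rate, as in the astrocomb experiment discussed next) must run at the inverse soliton roundtrip time, i.e., the cavity FSR. The ring geometry and group index below are hypothetical and are not taken from Ref. 76.

```python
from math import pi

C = 299_792_458.0  # speed of light, m/s

def ring_fsr(radius_m, group_index):
    """Free spectral range of a ring resonator: FSR = c / (n_g * 2*pi*R)."""
    return C / (group_index * 2.0 * pi * radius_m)

# Hypothetical microring: 1.13-mm radius, group index 2.1 (placeholder values).
fsr = ring_fsr(1.13e-3, 2.1)
print(f"soliton repetition rate ~ FSR = {fsr/1e9:.2f} GHz")

# For synchronous pulsed pumping, the EOM-generated pulse train should run at
# FSR / k for an integer k (k = 1: fundamental driving, k = 2: subharmonic).
for k in (1, 2):
    print(f"k = {k}: drive the EOM at {fsr/(k*1e9):.2f} GHz")
```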
Through stabilizing the subharmonically (i.e., the soliton repetition rate is twice that of the EOM frequency) driven astrocomb to a frequency standard, the absolute calibration with a precision of 25  cms1 is successfully achieved.77 Further, through locking one microcomb mode to an atomic transition, absolute optical-frequency fluctuations at the kilohertz level over a few seconds and <1-MHz day-to-day have been achieved recently.80 Physically, a pulse pump can break the symmetry of a microcavity, which induces optical lattice for soliton capture. The soliton quantity is determined by the repetition rate, width of the pump pulse, and intrinsic properties of the microresonator (dispersion, Q-factor, etc.). Therefore, SMCs with a deterministic repetition rate can be realized through controlling the pump repetition rate and pulse width. Until now, the repetition rate of a pulse-pumped SMC is still limited to the bandwidth of EOM. Actually, repetition-rate-multiplicable SMCs can also be realized using harmonic or rational harmonic driving pulses,81 which is beneficial by improving the energy conversion efficiency and flexibility of SMCs. Extraordinary Soliton Microcombs Different soliton forms in microcavities besides typical single or multisolitons, such as the soliton crystals, Stokes solitons, breather solitons, soliton molecules, laser cavity solitons, and dark pulses (solitons), also exist. They present distinct behaviors in both frequency and time domains, as well as enrich soliton dynamics and physics for microcomb research. In this section, the generation of these extraordinary solitons and their unique characteristics are reviewed. Soliton Crystals Soliton crystals defined as spontaneously and collectively ordered ensembles of copropagating solitons that are regularly distributed in a microcavity were recently discovered in silica WGM,46 Si3N4,54,82 high-index doped silica glass,19 and LiNbO383 microresonators. The generation of soliton crystals is found to be related to the extended background wave, which is formed by the beating between the mode with excess power caused by mode crossing and the pump laser. Copropagating solitons interact with each other and “crystallize” by arranging themselves into a self-organized sequence.46 Theoretically, the energy of all steady solitons circulating in a given microresonator is expected to be quantized to a certain value, which is determined by the resonator properties, pump power, and detuning. This phenomenon can be understood as the discrete steps found in the transmission power trace while sweeping the pump across a resonance.19 Therefore, the intracavity power will linearly increase with the soliton number. Once there are enough solitons coexisting in a microresonator, the total intracavity energy would approach a similar level to that of a chaotic MI state. The typical soliton crystals step appears at the peak of the transmission power trace and exhibits excellent thermal stability. The intracavity temperature fluctuation is relatively small and has little effect on the resonance frequency, so soliton crystals can be stably formed using slow tuning techniques without complex techniques to overcome the thermo-optic effect. For example, the soliton crystal generation is almost independent of the thermal or pump frequency tuning speed, even allowing manual tuning with a second-level response time. 
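The quantized-energy picture invoked above can be illustrated with a toy model: if every steady soliton carries approximately the same energy on top of a common background, the comb power is linear in the soliton number and drops in equal-height steps as solitons annihilate one by one. All numbers below are arbitrary placeholders used only to show the staircase.

```python
import numpy as np

def comb_power_staircase(n_initial=10, e_soliton=1.0, p_background=0.5):
    """Toy model of the soliton-number staircase.

    Assumes every steady soliton contributes the same energy e_soliton
    (arbitrary units) on top of a fixed background p_background, so the
    total comb power is linear in the soliton number N.
    """
    soliton_numbers = np.arange(n_initial, 0, -1)   # N, N-1, ..., 1
    power = p_background + e_soliton * soliton_numbers
    return soliton_numbers, power

n, p = comb_power_staircase()
for num, power in zip(n, p):
    print(f"N = {num:2d}  ->  comb power = {power:4.1f} (arb. units)")
# Adjacent steps differ by exactly e_soliton, which is why the measured
# power trace drops in equal-height steps as solitons annihilate.
```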
Figure 13 shows representative optical spectral and temporal characters of soliton crystals obtained in a silica disk46 and a high-index doped silica glass microresonator.19 Because of the strong interactions between the dense soliton ensembles, soliton crystals exhibit unique "palm-like" spectra, which are the superposition of a "primary-comb-like" spectrum and the underlying soliton spectrum. Different soliton arrangements induce a rich variety of soliton crystal states, such as the perfect state [Fig. 13(I a)], Schottky defects with vacancies [Figs. 13(I b)–13(I e) and 13(II a)–13(II i)], Frenkel defects [Figs. 13(I f)–13(I i) and 13(II j)], disorder [Fig. 13(I j)], superstructure [Figs. 13(I k)–13(I n) and 13(II k)], and irregular intersoliton spacings [Figs. 13(I o) and 13(II l)]. In another experiment, soliton crystals with Schottky defects are also observed in a graphene-nitride microresonator, where the cavity dispersion becomes adjustable and affects the spectral bandwidth and shape.54 Fig. 13 (I) Soliton crystals in a silica disk resonator.46 Left panel: measured (in black) and simulated (in color) optical spectra. Right panel: schematic depictions of the corresponding soliton distribution in the resonator, with major ticks indicating (expected) soliton locations and minor ticks indicating peaks of the extended background wave due to mode crossing. (II) Soliton crystals in a high-index doped silica glass microring.19 Left panel: measured (in red) and simulated (blue solid circles) optical spectra. Right panel: simulated temporal traces exhibiting (expected) soliton distributions of the corresponding soliton crystals. Images are adapted with permission from Refs. 19 and 46. A special state of soliton crystals, the perfect soliton crystal (PSC), in which all solitons are evenly distributed in the cavity, was experimentally observed recently.82,83 In a certain sense, such PSCs can be regarded as a single SMC in a microresonator with a larger FSR and are thus capable of boosting the repetition rate beyond the THz level, circumventing the bending-loss limitation of extremely small microresonators. Meanwhile, compared with a single SMC in the same microresonator, the power of each comb line is multiplied by N² and the energy conversion efficiency by N (N being the soliton number of the PSC). It is also found that, if the intracavity thermal influence is overcome, realization of PSCs with an arbitrary soliton number becomes possible by adjusting the pump frequency to periodically change the background waves.83 Soliton crystals introduce a new regime of soliton physics and act as a test bed for studying soliton interactions. The extreme degeneracy of the configuration space of soliton crystals suggests their potential for on-chip optical buffers.46 Meanwhile, benefiting from the tiny intracavity energy change upon soliton switching, the easy accessibility and excellent stability of soliton crystals could help push SMCs toward portable and adjustable systems for out-of-laboratory applications. Stokes Soliton The Stokes soliton is a special type of soliton that arises from Kerr-effect trapping and Raman amplification when a first soliton (the primary soliton) is present.17 Figure 14(a) shows the principle of the Stokes soliton, with its spectrum located in the Raman gain range and its repetition rate autolocked to the primary comb through Kerr-phase modulation. Considering the dispersion of microcavities, the Stokes soliton occupies a different transverse mode family, one that has a similar FSR within the Raman gain band (a toy FSR-matching sketch follows below).
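The FSR-matching condition for the Stokes soliton can be illustrated with a toy search over transverse mode families: among resonances lying within the (broad) Raman gain window, the Stokes soliton forms where some mode family other than the primary one has an FSR closest to the primary soliton repetition rate. The dispersion curves, mode labels, and gain window below are invented for illustration and are not those of Ref. 17.

```python
import numpy as np

# Hypothetical FSR-vs-wavelength curves (GHz) for three transverse mode
# families; a linear wavelength dependence is assumed purely for illustration.
def fsr_family(slope_ghz_per_nm, fsr_at_1550_ghz):
    return lambda wl_nm: fsr_at_1550_ghz + slope_ghz_per_nm * (wl_nm - 1550.0)

families = {
    "TM1": fsr_family(-2e-4, 21.970),   # primary soliton mode family
    "TM2": fsr_family(+3e-4, 21.940),
    "TM3": fsr_family(-5e-4, 21.990),
}

primary_fsr = families["TM1"](1550.0)           # primary soliton pumped at 1550 nm
raman_window = np.arange(1570.0, 1630.0, 1.0)   # nm, rough Raman gain band (illustrative)

# Among the *other* mode families, find where the FSR best matches the
# primary soliton's repetition rate inside the Raman gain window.
candidates = {name: f for name, f in families.items() if name != "TM1"}
best = min((abs(f(wl) - primary_fsr), name, wl)
           for name, f in candidates.items() for wl in raman_window)
mismatch, name, wl = best
print(f"Stokes soliton expected in {name} near {wl:.0f} nm "
      f"(FSR mismatch {mismatch*1e3:.1f} MHz)")
```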
The FSRs of different transverse mode families of a silica microdisk cavity are shown in Fig. 14(b). It clearly shows that the Stokes soliton mode at a wavelength of 1593 nm has a near-identical FSR with the primary soliton mode at the pump wavelength of 1550 nm.17 In the experiment, primary soliton is first excited using the power-kicking method. As the Stokes soliton originates from the Raman effect, which has a threshold power to obtain sufficient Raman gain to overcome round-trip loss, once the Stokes soliton is generated, the primary soliton power is clamped to a steady value and the Stokes soliton power is increased beyond the primary soliton power. A typical spectrum of the Stokes soliton is shown in Fig. 14(d); its central wavelength is 1593 nm, which is consistent with the mode analysis. The inset of Fig. 14(d) shows the high-resolution spectrum, which confirms a different mode is used for Stokes soliton generation. The measured RF spectra of the isolated primary and Stokes solitons are shown in Fig. 14(c), where the beating spectra are aligned, which verify the automatching of repetition rates of the Stokes soliton and primary soliton. Fig. 14 Stokes soliton in a high-Q silica microdisk.17 (a) Stokes soliton (red) is overlapped with primary soliton (blue) in time and space, which introduces maximum Raman gain. Stokes soliton is trapped by optical potential well induced by Kerr effect, which locks the repetition rate to the primary soliton. (b) Measured FSRs of different mode families versus wavelength of a 3-mm silica microdisk cavity. (c) Beating RF spectra of isolated Stokes and primary soliton, indicating the repetition rate of Stokes soliton is locked to primary comb. (d) Measured optical spectrum of Stokes soliton. The inset shows the high-resolution spectrum of the overlapping range, which confirms that the Stokes soliton is formed in a different mode family. Images are adapted with permission from Ref. 17. The central wavelength of the Stokes soliton relies on the FSR matching of distinct mode families, which offers a potential approach for controllable multicolor soliton generation through advanced-dispersion engineering techniques.84 Thus, it contributes to the SMC generation even in the spectrum range where anomalous-dispersion or high-power pump is not achievable. Breather Solitons Distinct from the stationary soliton states mentioned above, breather solitons show periodic oscillation in both pulse amplitude and duration20,51,8587 as schematically shown in Figs. 15(a) and 15(b). Physically, breather solitons are nonlinear waves in which the energy is localized in space but oscillates in time (or vice versa), which has been found in various subfields of natural science.88,89 To date, breather solitons have been observed in different material platforms, including Si3N4,20,51,85,88 MgF2 crystalline,85,88 and Si51 microresonators. The typical operation regime and accessing method of breather solitons are illustrated in Fig. 15(c), whereas the simulated transmission power trace versus pump detuning is depicted in Fig. 15(d). It can be clearly seen that the breather soliton regime is between the unstable MI and steady soliton regimes,51 and breather solitons are generated with a relatively small pump detuning range. Therefore, an additional step is usually required for breather soliton generation. 
That is, first tune the laser to excite stable solitons as usual (e.g., via forward frequency scanning), and then, importantly, tune the pump laser backward together with increasing the pump power for the realization of breather solitons [Fig. 15(c)]. Note that this process is similar to but different from the forward and backward tuning method mentioned above, where the backward-tuning process is a necessary adiabatic step for deterministic single SMC generation. Compared with stationary solitons, the spectra of breather solitons are characterized by a sharp top and a quasitriangular envelope (on logarithmic scale) [Fig. 15(e)] instead of the sech2-like profile, resulting from the averaging of the oscillating comb bandwidth by an optical spectrum analyzer. Meanwhile, for the RF spectra, breather solitons are identified by sharp tones that indicate oscillation at a low frequency (the fundamental breathing frequency and its harmonics) rather than a single one at the cavity repetition rate, suggesting the low-noise feature of the stationary soliton state [Fig. 15(f)]. Fig. 15 Breather soliton in a Si3N4 microresonator. (a) Schematic of soliton “breathing” behavior in a microcavity. (b) Recorded power trace of breather solitons. (c) Operating regimes of microcombs. Breathers are generated at relatively small detuning and high pump power through three steps (illustrated by I, II, and III). (d) Simulated transmission power trace. States 1 to 4 correspond to the primary comb, unstable MI, breather solitons, and stationary soliton state, respectively. (e) Averaged spectrum of the breather soliton in a Si3N4 microresonator. (f) RF spectrum of breather soliton. Images are adapted with permission from Refs. 20 and 51. The breather soliton state also can be triggered by avoided mode crossings (regarded as the intermode breather soliton), which is a ubiquitous phenomenon in multimode microresonators as illustrated in Figs. 16(a) and 16(b).85 Usually, in the absence of intermode interactions, solitons in microcavities exist within a continuous range of the laser detuning where the soliton power smoothly evolves over the change of detuning. At the lower boundary of the soliton existing range [yellow shaded area in Fig. 16(a)], there is an intrinsic breathing state that has an oscillatory power trace, corresponding to the normal breather soliton existence range. When the intermode interaction is taken into account, a different breathing dynamic can be clearly seen in Fig. 16(b), which increases amplitude jitter close to the bistability region in the power trace. To observe intermode breather solitons, a single cavity soliton is first obtained; then continuously tuning the pump frequency within the soliton existing range, at a specific pump detuning, the intermode breather soliton can be excited, which is indicated in the form of sidebands on the RF beat note (breathing frequency). As this kind of breather soliton derives from the intermode interaction, the breathing mode generally has a narrow bandwidth, which results in the intermode breather soliton having an overall sech2-shape spectral envelope (similar to the stationary SMCs) in the primary mode family but featuring several spikes (i.e., enhanced power in comb teeth) due to the phase matching to the cavity soliton,85 as shown in Figs. 16(c) and 16(d). So this breathing behavior can be understood as being associated with a periodic energy exchange between the solitons and a second optical mode family. 
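The RF signature described above, namely sharp low-frequency tones at the breathing frequency and its harmonics rather than only the repetition-rate beat note, can be reproduced with a toy time trace: a comb power oscillating at a breathing frequency f_b yields RF lines at f_b, 2f_b, and so on. The breathing frequency, modulation depth, and sampling settings below are arbitrary placeholders.

```python
import numpy as np

fs  = 10e6      # sample rate of the toy photodetector trace, Hz
T   = 10e-3     # record length, s
f_b = 100e3     # hypothetical breathing frequency, Hz
m   = 0.3       # modulation depth of the breathing oscillation

t = np.arange(0.0, T, 1.0 / fs)
# Toy comb power: a DC level breathing sinusoidally, plus a weak second
# harmonic to mimic the slightly anharmonic oscillation seen in experiments.
power = 1.0 + m * np.cos(2 * np.pi * f_b * t) + 0.05 * np.cos(2 * np.pi * 2 * f_b * t)

spectrum = np.abs(np.fft.rfft(power - power.mean())) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# Report the strongest RF tones; they sit at f_b and its harmonic.
top = np.argsort(spectrum)[-2:][::-1]
for idx in top:
    print(f"RF tone at {freqs[idx]/1e3:.0f} kHz, relative amplitude {spectrum[idx]:.3f}")
```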
As a consequence, the discovery of widely existing breather solitons significantly enriches the dissipative soliton phenomena in a microresonator, as well as contributes to understanding the soliton dynamics within the larger context of nonlinear optics. Fig. 16 Intermode breather solitons in microcavities. (a) Simulated intracavity power trace over the laser detuning in the absence of intermode interactions. The intermode breather soliton exists in the region where stationary soliton is expected (orange area). (b) Simulated power trace based on the coupled LLEs, showing a hysteretic power transition (gray area) and an oscillatory behavior (orange area). (c), (d) Measured optical spectra for intermode breather solitons in (c) an MgF2 crystalline microresonator and (d) a SiN microring, which exhibits spikes that result from intermode interactions. Images are adapted with permission from Ref. 85. Soliton Molecules Soliton molecules are balanced states in which attractive force caused by group velocity dispersion (GVD) of bound solitons is counteracted by the intersoliton repulsive force induced by the XPM effect.22 Figure 17(a) shows the principle of the balance of attractive and repulsive forces between distinct solitons. When a microresonator is driven by discrete pumps [Fig. 17(c)], solitons with different pulse energies can form once the pumps are stabilized in the red-detuned regime simultaneously. The relationship of repulsive force (drifts of the distinct solitons) versus the soliton temporal gap can be calculated by Eq. (4) where As is the major soliton amplitude, Es and Eb are the major soliton and the background fields including both of the minor soliton and beating background wave. The calculated result is shown in Fig. 17(b). It is observed that the repulsive force appears when the intersoliton separation is <300  fs. Therefore, when the distinct solitons have a large temporal gap, the attractive force plays a leading role and solitons get close to each other until the attractive force is balanced by the repulsive force. Fig. 17 Heteronuclear soliton molecule generation using two discrete pumps. (a) Principle of bound solitons where attractive force and repulsive force are balanced. (b) Calculated repulsive force versus the temporal separation of solitons. (c) The experimental setup for soliton molecule generation. (d) Measured transmission power trace while the pumps sweep across a cavity resonance. The red-shaded area is the comb power of the major pump, while the comb power of the minor pump is indicated by the blue-shaded area. (e) Optical spectrum of soliton molecules of two bound solitons, which corresponds to a linear superposition of optical spectra of the major soliton (f) and minor soliton (g). Images are adapted with permission from Ref. 22. Concerning experimental realization, discrete pumps are obtained by modulating a CW laser using an EOM. The frequency separation of discrete pumps is controlled by the driven RF signal. An example work is implemented in an MgF2 WGM resonator with a loaded-Q of 1×109, as shown in Fig. 17.22 When the pumps rapidly sweep across a resonance, the recorded transmission power trace, shown in Fig. 17(d), is characterized by the double MI comb to SMC transitions. The red-shaded area is the power trace of the major microcomb, which has higher power and a broader bandwidth, and the optical spectrum for single SMC is shown in Fig. 17(f). 
Figure 17(g) shows the optical spectrum of minor soliton, which has a narrower spectrum bandwidth, and the transmission power trace is presented by the blue-shaded area in Fig. 17(d). Once these two solitons are bound with each other, the optical spectrum is the linear superposition as shown in Fig. 17(e). The temporal separation of bound soliton is about 500 to 800 fs, which is of the same order of individual soliton pulse width, indicating that a balance is established between the attractive and repulsive forces, resulting in the same propagation velocity in the cavity. Soliton molecules in microcavities go beyond the frame of their predecessors in fiber lasers, which enriches the soliton physics. In terms of applications, soliton molecules might contribute to comb-based sensing and metrology by providing an additional coherent comb, as well as optical telecommunications if storing and buffering soliton-molecule-based data come true.22 Laser Cavity Solitons SMCs can also be generated in a nested laser cavity in which a Kerr microresonator is embedded into a gain fiber cavity.21 The principle of laser cavity soliton is demonstrated in Fig. 18(a), which includes a microring cavity and a longer gain fiber cavity. The mode relationship of these two cavities is shown in Fig. 18(b), which ensures only a single fiber cavity mode in each microcavity mode to prevent supermode instability. The phases of comb lines are locked based on the intracavity FWM effect,90,91 and a typical optical spectrum of laser cavity soliton is presented in Fig. 18(c). The main gain of laser cavity soliton is obtained by stimulated radiation in the gain fiber cavity, which is different from the parametric gain of external-pumped microcomb generation schemes. Therefore, the intracavity power does not need to reach the OPO threshold, which results in laser cavity soliton formation with very low power. Fig. 18 Laser cavity solitons. (a) Principle of cavity soliton formation. The microresonator is nested into a gain fiber cavity. (b) Mode relationship of the nonlinear microresonator and gain fiber cavity. (c) Typical optical spectrum of laser cavity soliton, which includes two equidistant solitons per round-trip. Images are adapted with permission from Ref. 21. Comparatively, the laser cavity soliton is background-free, which is beneficial for improving the energy conversion efficiency. According to the LLE, the energy conversion efficiency is limited to 5% for CW laser pumped single soliton. However, the conversion efficiency of laser cavity soliton can be boosted to 96% in theory, and 75% is experimentally obtained.21 Furthermore, as the lasing modes are the common modes of nested cavities, the repetition rate of cavity soliton can be simply tuned through changing the fiber cavity length (e.g., using a high-precision delay line), providing a new approach for the realization of SMC frequency locking. Meanwhile, the generation of laser cavity solitons mainly relies on the modes relationship of nested cavities, so they exhibit high robustness against environment fluctuations. Dark Soliton Generation in the Normal-Dispersion Regime Dark solitons (or dark pulses) are generally understood as intensity dips on a constant background, which demonstrate some unique advantages (e.g., less sensitivity to the system loss than bright solitons and more stability against the Gordon–Haus jitter in long communication lines) and have attracted increasing interest in many areas. 
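Since the mean-field Lugiato–Lefever equation (LLE) is invoked repeatedly in what follows (and was used above for the simulated power traces), it may help to recall one common normalized form of it; sign and normalization conventions differ between references, so the version below is only indicative:

$$\frac{\partial \psi(t,\tau)}{\partial t} = -(1 + i\alpha)\,\psi + i\,|\psi|^{2}\psi - i\,\frac{\beta}{2}\,\frac{\partial^{2}\psi}{\partial \tau^{2}} + F,$$

where ψ is the normalized intracavity field, t the slow time (in units of the photon lifetime), τ the fast (retarded) time, α the normalized pump-cavity detuning, F the normalized pump amplitude, and β the normalized group velocity dispersion. In this convention, β < 0 (anomalous dispersion) supports the bright dissipative solitons discussed earlier, whereas β > 0 (normal dispersion) supports the dark structures discussed below.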
Based on the mean-field LLE in the context of ring cavities or Fabry–Pérot interferometers with transverse spatial extent, it is found that, in the time domain, dark solitons manifest themselves as low-intensity dips embedded in a high-intensity homogeneous background with a complex temporal structure; they are a particular type of soliton appearing in dissipative systems.92,93 Although the original nonlinear Schrödinger equation admits bright-soliton solutions in the anomalous-dispersion regime, its dark-soliton solutions exist in the normal-dispersion regime.94,95 It is worth noting that mode-locking transitions do not necessarily correspond to dark or bright pulse (soliton) generation in microresonators with normal dispersion.96,97 This is in contrast to the situation in the negative-dispersion regime, where all soliton forms are actually "bright solitons." Indeed, in the field of traditional mode-locked fiber lasers, it has been proved that different types of bright pulses can be emitted from a laser cavity in the normal-dispersion regime, including dissipative solitons with a rectangular spectrum (Gaussian in the time domain), a Gaussian spectrum (flat-topped pulses), a broadband spectrum (wave-breaking-free pulses), and noise-like pulses (low-coherence pulse clusters).98 Bright and dark solitons can even coexist in the same fiber laser cavity with strong normal dispersion.94 A similar phenomenon has also been theoretically revealed in normal-dispersion microresonators.95 In other words, there is no rigid "barrier" distinguishing the two states (the distinction depends on the pulse duration and duty cycle). Since rich phenomena with distinct features and rather complicated excitation dynamics have been discovered, various predictions and explanations of the physical origin of the observed temporal behaviors of normal-dispersion microcombs can be found in the literature, such as "platicons" (flat-topped bright solitonic pulses),99 dark pulses,23,92 and dark solitons,93,95,96 or just normal-dispersion microcombs.97 In this part, we mainly focus on the mode-locked character rather than a strict physical classification of this kind of pulse. For simplicity, these localized intensity-dip structures on an intense CW background in microcavities are all referred to as "dark solitons." An obvious difference between the two opposite-dispersion regimes is that the anomalous region usually favors ultrashort pulse emission with time-bandwidth-product-limited (chirp-free or nearly chirp-free) durations, while the normal-dispersion regime prefers strongly chirped waves far from the Fourier-transform limit, owing to the absence of the intrinsic balance between nonlinearity and negative dispersion. Thus, intuitively, the route to DS generation as well as its excitation dynamics will deviate from those for bright solitons in anomalous-dispersion microresonators. During the past years, several experiments with normal-dispersion resonators on diverse material platforms, including CaF2 (Ref. 100) and MgF2 WGM resonators,101 as well as SiN microrings,97,102 have emerged. Numerous trials of different techniques for microcomb formation have also been conducted. For instance, as shown in Figs.
19(a) and 19(b), by utilizing backscattered light from a resonator to lock the laser to the WGM microresonator (self-injection locking method),100 a stable DS state corresponding to a manifold of short “dark” pulses with lower power compared to the background traveling inside the resonator is achieved, and the phase-locked comb shows three distinct maxima (peaks) on the spectrum envelope.101 Through numerical analysis, it is found that the position of these peaks depends on the normalized dispersion value, which is similar to the situations for the negative dispersion where the comb bandwidth and the position of the emitted dispersive wave are decided by the overall dispersion. Another example using a SiN microring is depicted in Fig. 19(c).102 The system is driven from the hyperparametric oscillation that is facilitated by the local dispersion disruptions induced by mode interactions to a mode-locked DS state over a 200-nm spectral bandwidth at a selected pump power and detuning. Interestingly, it is found that even square pulses can be generated directly with correct sets of wavelength-dependent Q-factor, GVD, and pump detuning [Fig. 19(d)].102 Other achievements in this field include using a dual-coupled microresonator integrated with an on-chip microheater to shift the auxiliary ring resonances via the thermo-optic effect, as shown in Figs. 19(e) and 19(f), thus permitting programmable and reliable control of mode interactions that help the comb initially generated at specified resonances to achieve repetition-rate selectable and mode-locked combs in the normal-dispersion regime.97 Furthermore, it has also been numerically proved that the third-order dispersion plays an important role in the existence and stability of dark solitons, even allowing for stable dark and bright solitons to coexist in a microcavity.95 Fig. 19 Frequency comb generation in normal-dispersion microcavities. (a) Experiment setup using a semiconductor laser self-injection locked to an MgF2 WGM resonator, wherein the spectral envelope shows three distinct maxima. (b) Numerically simulated envelope of intracavity optical pulses in terms of normalized amplitude (blue) and the pulse formed by only a limited number of modes with no pump frequency included (red).101 (c) Example comb spectrum spanning more than 200 nm obtained in a SiN microring (inset: an optical micrograph of the microring). (d) Square optical pulses directly generated under special conditions at high pump power.102 (e) Spectrum for the mode-locked state using dual-coupled SiN microrings in normal-dispersion regime.97 Insets: microscope image of microrings (upper left) and transmission spectra versus heater power showing the resonances can be selectively split (upper right). (f) Comb intensity noise corresponding to (e) measurement by an electrical spectrum analyzer (top) and autocorrelation of the transform-limited pulse after line-by-line shaping (bottom). All of this progress demonstrates that the mode-locking mechanism for normal-dispersion microcavities might have analogies to, but surely is not identical to, that in anomalous-dispersion regimes. One of the most obvious differences, compared with bright solitons observed in negative-dispersion microcavities, is that the soliton regime now favors the blue-detuned region instead of the red-shifted regime, which leads to the intracavity pump field staying on the upper branch of the bistability curve where modulational instability is generally absent.103,104 A representative work is shown in Figs. 
20(a) and 20(b); a thermal-tuning method (equivalent to the traditional frequency tuning method) is employed to realize dark solitons in a SiN microring.23 The pump is gradually tuned to stably approach the resonance from the blue side (and always in the effectively blue-detuned regime with respect to the shifted resonance); the power transmission curve is depicted in Fig. 20(a) and different operation stages are observed in Fig. 20(b). Strikingly, it can be clearly seen that the MI process, which is commonly cited as an important mechanism for SMC generation in the negative-dispersion regime, is now weakened (or even disappears) in this positive-dispersion regime. This mechanism is further verified by a detailed bifurcation analysis of dark structures in the LLE with normal GVD, which predicts DS regions of existence and stability, and it suggests that the MI does not play a role in the DS existence.92 By contrast, mode coupling, which is usually considered to be detrimental as it inhibits the formation of solitons and limits the comb bandwidth in the anomalous-dispersion cavities, now acts as a contributing factor for initiation of DS generation.23,103 It is suggested that the initial comb lines are formed due to the interaction of different mode families; if resonances corresponding to different families of transverse modes approach each other in frequency, they may interact around mode crossing positions.23,104 This principle is also supported in a related work using the second-harmonic-assisted approach for DS generation in which the interaction between the fundamental and second-harmonic waves can provide a new way of phase matching for FWM in optical microresonators, effectively enabling the DS generation [Fig. 20(c)].104 One more interesting phenomenon is that, as shown in Figs. 20(d) and 20(e), at the through port bright solitons are observed, while at the drop port DSs are captured (i.e., they are reciprocal). Considering the physical nature, it is reasonable to imagine that the intracavity generation of bright pulses usually leads to generation of DSs at the output for the add-drop type microresonators. Moreover, with specific operation parameters, it is possible to transform the cavity state from the bright soliton regime to the DS state or vice versa.92 Therefore, the most important meaning of the DS research relies on such approaches providing a novel way of overcoming the dispersion limit for traditional microcombs. This is very important for effectively extending the accessible wavelength ranges, e.g., stretching to visible ranges that are essential for the stabilization of optical clocks where the atomic transitions locate but large normal material dispersion usually dominates.23 Fig. 20 DS generation in a normal-dispersion SiN microring using (a) and (b) the thermal-tuning method23 and (c)–(e) second-harmonic-assisted approach.93 (a) Drop-port power transmission when one mode is pumped. (b) Comb spectra (left panel) and intensity noise (right panel) corresponding to different stages in (a). (c) Experimental setup for the second-harmonic-assisted comb generation. Inset: microscope image of the microring with second-harmonic radiation. (d) Transition curves of the through port (top) and drop port (bottom) when the pump laser scans across the resonance from shorter to longer wavelengths. (e) Reconstructed waveforms at through port (top) and drop port (bottom), showing bright and dark pulses, respectively (inst. freq.: instantaneous frequency). 
Images (a) and (b) are adapted with permission from Ref. 23 and images (c)–(e) are adapted with permission from Ref. 104. Looking back on the research of extraordinary SMCs and the rich physical phenomena involved (including soliton crystals, Stokes solitons, breathers, molecules, and cavity solitons, as well as dark solitons), all of these encouraging discoveries have revealed deeper insight into the dynamics and properties of this new category of laser sources for integrated photonics. As a comparison, the development route is, interestingly and legitimately, mimicking the evolution roadmap that mode-locked fiber lasers followed over the last few decades in nonlinear optics (a field still yielding cutting-edge progress at present105). Indeed, a lot of unusual phenomena in microcombs have already been predicted or verified in a similar manner to some degree. That is, although the material platforms and generation mechanisms differ, classic fiber nonlinearities can still offer guidance or a reference for microcombs, especially regarding unexplored temporal and spectral behaviors (e.g., vector solitons,106,107 wave-breaking-free pulses,108,109 optical bullets,110 or even rogue waves111,112). Meanwhile, they can inspire new fundamental research on intrinsic soliton features that are not yet considered in this area, such as the chirp parameter (or time-bandwidth product),113 which is crucial in traditional fiber lasers for dedicated control of pulse-shaping quality114 and contributes significantly to further system optimization. Applications To date, various proof-of-concept experiments concerning extensive applications of SMCs have been demonstrated (Fig. 21), including massively parallel coherent communications, DCS, ultrafast distance measurements, low-noise microwave generation, optical frequency synthesizers, astrophysical spectrometer calibration, and quantum information processing. Some representative works are discussed below, organized by application field. Fig. 21 Application areas of SMCs. Coherent Optical Communications Future demand for "big data" interconnection calls for optical communication systems with terabit-to-petabit-per-second data rates in a single fiber, using hundreds of parallel wavelength-division multiplexing channels. The SMC is a promising source of multiwavelength carriers owing to its favorable characteristics of frequency stability, broad bandwidth, suitable mode spacing, and narrow linewidth. Even based on a nonsoliton-state microcomb with an imperfect spectrum, Pfeifle et al.115 demonstrated coherent transmission of 1.44 Tbit/s of data over 300 km, using 20 comb channels with a spectral efficiency of 3 bit/(s·Hz) under quadrature phase-shift keying modulation and polarization multiplexing. When SMCs were used, the data rate was successfully boosted to 55 Tbit/s with a spectral efficiency of 5.2 bit/(s·Hz), under the conditions of 179 channels, 50-GHz carrier spacing, 40-gigabaud symbol rate, 16-state quadrature amplitude modulation (16-QAM), and a 75-km transmission distance.39 Higher-order modulation of 64-QAM has also been achieved by employing dark SMCs.116 A record-high spectral efficiency of up to 10 bit/(s·Hz) was recently realized using the traditional superchannel method.117 Spectroscopy Dual-comb spectroscopy (DCS), which is similar to Fourier transform spectroscopy, provides an excellent method for gas composition detection with outstanding features of shorter sampling time, higher optical spectral resolution, and multicomponent detection capability (a minimal sketch of the underlying optical-to-RF mapping follows below).
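The dual-comb down-conversion that makes this possible can be sketched as follows: two combs with repetition rates f_rep and f_rep + Δf_rep beat into a multi-heterodyne RF comb with line spacing Δf_rep, so the optical spectrum is mapped into the RF domain compressed by the factor f_rep/Δf_rep. The numbers below are placeholders and are not the parameters of Ref. 38.

```python
def dual_comb_mapping(f_rep, delta_f_rep, optical_span):
    """Map an optical span onto the RF domain for dual-comb spectroscopy.

    Two combs with repetition rates f_rep and f_rep + delta_f_rep beat into
    an RF comb with spacing delta_f_rep; optical frequency offsets are
    compressed by the factor f_rep / delta_f_rep.
    """
    compression = f_rep / delta_f_rep
    rf_span = optical_span / compression
    lines = int(optical_span // f_rep)          # number of comb lines interrogated
    # Avoid aliasing: the mapped RF comb should fit below half the rep rate.
    assert rf_span < f_rep / 2, "aliasing: reduce delta_f_rep or the optical span"
    return compression, rf_span, lines

# Hypothetical example: 22-GHz combs offset by 2.6 MHz covering a 4-THz span.
compression, rf_span, lines = dual_comb_mapping(22e9, 2.6e6, 4e12)
print(f"compression factor ~ {compression:,.0f}")
print(f"{lines} comb lines mapped into an RF span of {rf_span/1e6:.0f} MHz")
```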
Two SMCs in the telecommunication band with slightly different repetition rates that were generated in two separate SiO2 wedge disk resonators were used to build a DCS system, and the absorption spectrum of H13CN was analyzed.38 An improved approach utilizing counter-propagating solitons emitted from a single cavity was reported recently to implement a Vernier spectrometer, which provided a considerable technical simplification. The Vernier spectrometer enhanced the capability for arbitrarily tuned source measurement.118 High-performance dual-soliton-combs using two cascaded SiN microresonators with a single pump, which drastically reduces experimental complexity, have also been demonstrated.119 In the mid-infrared region, molecular transitions are much higher (typically 10 to 1000 times) than that in the visible or near-IR, and a proof-of-principle mid-infrared DCS system based on silicon microrings was successfully realized through a thermal-controlled method.120 In addition to the DCS, single- and triple-SMCs have also been proposed for atomic spectroscopy and multidimensional coherent spectroscopy, respectively.80,121 Distance Measurement OFCs are promising as excellent coherent sources for light detection and ranging (LIDAR) systems to fulfill fast and accurate distance measurements. Using a setup similar to the DCS system, dual-comb ranging systems have been carried out recently, which opens the door to low-SWaP LIDAR systems. For example, by employing the dual counter-propagating SMCs within a single silica wedge resonator, a dual-comb laser ranging system substantiates the time-of-flight measurement with 200-nm accuracy at an averaging time of 500 ms and within a range ambiguity of 16 mm.65 Another parallel progress is achieved on a SiN dual-microcomb LIDAR system, demonstrating ultrafast distance measurements with a precision of 12 nm at averaging times of 13  μs and acquisition rates of 100 MHz, even allowing for in-flight sampling of gun projectiles moving as fast as 150  m/s.41 Through fast chirping pump lasers in the soliton existence range, the pump frequency modulation was transferred to all spectral comb lines, which resulted in a true parallelism frequency-modulated continuous-wave LIDAR.122 SMC was also proposed for high-accuracy, long distance ranging based on a dispersive interferometry method. In a very recent experiment, a minimum Allan deviation of 27 nm was successfully achieved in an outdoor 1179 m ranging experiment.123 RF Related SMCs are promising candidates for microwave-related applications including optical atomic clocks, ultrastable microwave generation, and microwave signal processing. 
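The appeal of SMCs for microwave generation rests largely on optical frequency division: when the comb is stabilized to an optical reference at f_opt and the repetition rate f_rep is extracted on a photodetector, the fractional stability of the optical reference is ideally preserved while the absolute phase noise power is reduced by the square of the division ratio, i.e., by 20·log10(f_opt/f_rep) dB. The sketch below uses placeholder numbers only.

```python
import math

def division_gain_db(f_optical, f_rep):
    """Phase-noise reduction (dB) from optical frequency division.

    Dividing an optical carrier at f_optical down to a microwave at f_rep
    reduces the absolute phase noise power by (f_optical / f_rep)^2,
    i.e. 20*log10(f_optical / f_rep) dB, while the fractional frequency
    stability is ideally preserved.
    """
    n = f_optical / f_rep
    return n, 20.0 * math.log10(n)

# Hypothetical example: a 193-THz (1550-nm) reference divided to a 22-GHz tone.
n, gain = division_gain_db(193e12, 22e9)
print(f"division ratio N ~ {n:.0f}, phase-noise reduction ~ {gain:.0f} dB")
```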
Early attempts at photonic-microwave links include locking a nonsoliton-state microcomb to atomic Rb transitions124 and locking a self-referenced microcomb with a broadened spectrum to an atomic clock.125 Recently, more compact schemes with fully stabilized SMCs have been reported, taking a big step toward photonic integration of optical-frequency synthesis40,126 and optical atomic clocks.127 The proposed optical frequency synthesizer was verified to be capable of transferring the stability of a 10-MHz microwave clock to the laser frequency within an uncertainty of 7.7×10⁻¹⁵.40 An updated work demonstrated that, with the help of a pair of interlocked SMCs, an optical atomic clock can provide fully coherent optical division to generate a 22-GHz RF clock signal with a fractional frequency instability of 10⁻¹³.128 In addition to photonic-microwave links, SMCs can also act as important tools for high-spectral-purity microwave generation129,130 and true time delays.131–135 Quantum Optics Benefiting from the significant cavity enhancement, microresonators can offer attractive integrated platforms for single-photon or entangled quantum state generation.136 A broadband quantum frequency comb has been realized in a high-refractive-index glass microring resonator via a high-efficiency spontaneous FWM effect at a relatively low pump power.137 Based on the quantum microcomb technique, more breakthroughs have been achieved, including the first integrated multiphoton entanglement137 and high-dimensional entangled quantum states.138 Compared with quantum microcombs relying on the spontaneous parametric process, SMCs generated through the stimulated FWM effect can also play important roles in quantum optics. For example, a novel quantum key distribution system based on demultiplexing the coherent microcomb lines was proposed very recently, showing the potential for Gbps secret key rates.139 Considering other progress made in this field by utilizing microcombs,140–146 it is reasonable to expect that microcavities will find more important applications worth exploring extensively in quantum optics. Summary and Outlook The experimental realization of SMCs represents the successful convergence of materials science, physics, and engineering techniques. SMCs have been regarded as an outstanding candidate for the next generation of optical sources due to the unprecedented advantages of lower SWaP (size, weight, and power) and higher repetition rate, as well as high coherence across the spectral coverage.147 Until now, the challenge of SMC generation has been gradually overcome using a variety of advanced experimental techniques, from the universal power-kicking method to the "forward and backward tuning" scheme for deterministic single SMC generation. Meanwhile, the dynamics of cavity soliton physics are substantially understood, along with the discovery of various extraordinary solitons and the rich nonlinear effects of dispersive waves, mode crossings, and the Raman self-frequency shift. Although SMC-based applications present unprecedented performance improvements in many fields, they generally remain at the proof-of-concept stage in laboratories at present. Considering engineering applications, the development of SMCs should favor automatic generation, as well as higher integration density and higher energy conversion efficiency.
The main challenge of automatic or program-controlled SMC generation comes from the ultrashort thermal lifetime of a microresonator, which is too short for practical instruments to judge the soliton state in time through spectrum recognition. Fortunately, some advanced tuning-speed-independent schemes (e.g., the auxiliary-laser-assisted method and the photorefractive effect in LiNbO3 microresonators) have been proposed, paving a potential way toward automatic SMC generation. Monolithic/hybrid integration is another key technique to promote applications of SMCs. The core issue for fully integrated SMC sources is improving the Q-factor of microresonators. The propagation loss in microcavities is still several orders of magnitude higher than that of standard optical fibers, so there is still great potential to further lower the loss by optimizing the fabrication process. Indeed, many achievements have been obtained via improved microfabrication techniques. For example, environmentally stable silicon oxynitride (SiOxNy) with a Q even higher than 100 million (two orders of magnitude higher than the previous level) has been successfully realized; it permits a submicrowatt threshold for microcomb generation.148–150 In addition, hybrid-integrated SMC sources based on butt-coupling a gain block or LD to SiN chips have been reported recently.52,72 Furthermore, a higher energy conversion efficiency151,152 of SMC sources is demanded for reducing power consumption. Currently, researchers are pursuing higher conversion efficiency through different routes, including the self-injection method,33 perfect soliton crystals,83 dark pulses,23 and coupled-ring geometries,91 as well as nearly 100% efficiency via a pump recycling method.152 It should be noted that the majority of reported SMCs operate in communication bands. However, there are still great challenges for the generation of visible and mid-infrared SMCs, which would enable broad applications in molecular spectroscopy and chemical/biological sensing. The bandwidth of visible SMCs28 is rather limited, and no mid-infrared single SMC has yet been reported. Therefore, more efforts to improve the performance of SMCs on existing platforms and to explore new materials are expected to vastly extend the spectral coverage and reach their full potential.153,154 Furthermore, microcavities are yet to be thoroughly developed and understood by revealing more of their characteristics in the space and time domains, so as to identify their intrinsic essence and ultimate capabilities.155–158 Considering all of these breakthroughs, it is highly likely that SMCs will become revolutionary integrated optical sources with ultralow SWaP in a wide range of future applications. Acknowledgments This work was supported by the National Natural Science Foundation of China (Grant Nos. 61635013 and 61675231), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB24030600), and the Youth Innovation Promotion Association of CAS (Grant No. 2016353). The authors thank Z. Z. Lu, X. Y. Wang, and B. L. Zhao for valuable discussions and contributions to document proofreading. References K. J. Vahala, "Optical microcavities," Nature, 424 839 –846 (2003). Google Scholar T. J. Kippenberg, S. M. Spillane and K. J. Vahala, "Kerr-nonlinearity optical parametric oscillation in an ultrahigh-Q toroid microcavity," Phys. Rev. Lett., 93 083904 (2004). PRLTAO 0031-9007 Google Scholar A. A. Savchenkov et al., "Low threshold optical oscillations in a whispering gallery mode CaF2 resonator," Phys. Rev.
Lett., 93 243905 (2004). PRLTAO 0031-9007 Google Scholar M.-G. Suh and K. Vahala, “Gigahertz-repetition-rate soliton microcombs,” Optica, 5 65 –66 (2018). Google Scholar W. Q. Wang et al., “Dual-pump Kerr micro-cavity optical frequency comb with varying FSR spacing,” Sci. Rep., 6 28501 (2016). SRCEC3 2045-2322 Google Scholar T. J. Kippenberg, R. Holzwarth and S. A. Diddams, “Microresonator-based optical frequency combs,” Science, 332 (6029), 555 –559 (2011). SCIEAS 0036-8075 Google Scholar D. K. Armani et al., “Ultra-high-Q toroid microcavity on a chip,” Nature, 421 925 –928 (2003). Google Scholar P. Del’Haye et al., “Optical frequency comb generation from a monolithic microresonator,” Nature, 450 1214 –1217 (2007). Google Scholar Y. Okawachi et al., “Octave-spanning frequency comb generation in a silicon nitride chip,” Opt. Lett., 36 (17), 3398 –3400 (2011). OPLEDP 0146-9592 Google Scholar T. Herr et al., “Temporal solitons in optical microresonators,” Nat. Photonics, 8 145 –152 (2014). NPAHBY 1749-4885 Google Scholar D. C. Cole et al., “Kerr-microresonator solitons from a chirped background,” Optica, 5 1304 –1310 (2018). Google Scholar V. Brasch et al., “Photonic chip-based optical frequency comb using soliton Cherenkov radiation,” Science, 351 (6271), 357 –360 (2016). SCIEAS 0036-8075 Google Scholar X. Yi et al., “Soliton frequency comb at microwave rates in a high-Q silica microresonator,” Optica, 2 1078 –1085 (2015). Google Scholar C. Joshi et al., “Thermally controlled comb generation and soliton modelocking in microresonators,” Opt. Lett., 41 (11), 2565 –2568 (2016). OPLEDP 0146-9592 Google Scholar Z. Lu et al., “Deterministic generation and switching of dissipative Kerr soliton in a thermally controlled micro-resonator,” AIP Adv., 9 025314 (2019). AAIDBI 2158-3226 Google Scholar H. Zhou et al., “Soliton bursts and deterministic dissipative Kerr soliton generation in auxiliary-assisted microcavities,” Light Sci. Appl., 8 50 (2019). Google Scholar Q.-F. Yang et al., “Stokes solitons in optical microcavities,” Nat. Phys., 13 53 –57 (2017). NPAHAX 1745-2473 Google Scholar Q.-F. Yang et al., “Counter-propagating solitons in microresonators,” Nat. Photonics, 11 560 –564 (2017). NPAHBY 1749-4885 Google Scholar W. Q. Wang et al., “Robust soliton crystals in a thermally controlled microresonator,” Opt. Lett., 43 (9), 2002 –2005 (2018). OPLEDP 0146-9592 Google Scholar C. Bao et al., “Observation of Fermi–Pasta–Ulam recurrence induced by breather solitons in an optical microresonator,” Phys. Rev. Lett., 117 163901 (2016). PRLTAO 0031-9007 Google Scholar H. Bao et al., “Laser cavity-soliton microcombs,” Nat. Photonics, 13 384 –389 (2019). NPAHBY 1749-4885 Google Scholar W. Weng et al., “Heteronuclear soliton molecules in optical microresonators,” Google Scholar X. Xue et al., “Mode-locked dark pulse Kerr combs in normal-dispersion microresonators,” Nat. Photonics, 9 594 –600 (2015). NPAHBY 1749-4885 Google Scholar T. Herr et al., “Mode spectrum and temporal soliton formation in optical microresonators,” Phys. Rev. Lett., 113 123901 (2014). PRLTAO 0031-9007 Google Scholar M. Karpov et al., “Raman self-frequency shift of dissipative Kerr solitons in an optical microresonator,” Phys. Rev. Lett., 116 103902 (2016). PRLTAO 0031-9007 Google Scholar X. Yi et al., “Single-mode dispersive waves and soliton microcomb dynamics,” Nat. Commun., 8 14869 (2017). NCAOBW 2041-1723 Google Scholar Z. 
Weiqiang Wang is an associate professor at the State Key Laboratory of Transient Optics and Photonics of Xi'an Institute of Optics and Precision Mechanics (XIOPM) of the Chinese Academy of Sciences (CAS). His research has focused on planar waveguides and devices, semiconductor lasers, Kerr optical frequency combs, and related applications.

Leiran Wang received his PhD from the University of CAS in 2011. At present, he is a professor at the State Key Laboratory of Transient Optics and Photonics of XIOPM and at the School of Future Technology of the University of CAS. His current research interests include integrated photonics and ultrafast nonlinear optics.

Wenfu Zhang is a professor at the State Key Laboratory of Transient Optics and Photonics of XIOPM of CAS and at the School of Future Technology of the University of CAS. His research interests focus on integrated photonics, nonlinear optics, and microstructure devices.

Weiqiang Wang, Leiran Wang, and Wenfu Zhang, "Advances in soliton microcomb generation," Advanced Photonics 2(3), 034001 (19 June 2020). Received: 28 January 2020; Accepted: 23 April 2020; Published: 19 June 2020.
Discrete & Continuous Dynamical Systems - A, March 2007, Volume 19, Issue 1

Finite-time blow-down in the evolution of point masses by planar logarithmic diffusion
Juan Luis Vázquez. 2007, 19(1): 1-35. doi: 10.3934/dcds.2007.19.1
We are interested in a remarkable property of certain nonlinear diffusion equations, which we call blow-down or delayed regularization. The following happens: a solution of one of these equations is shown to exist in some generalized sense, and it is also shown to be non-smooth for some time $0 < t < t_1$, after which it becomes smooth and still nontrivial. We use the logarithmic diffusion equation to examine an example of occurrence of this phenomenon starting from data that contain Dirac deltas, which persist for a finite time. The interpretation of the results in terms of diffusion is also unusual: if the process starts with one or several point masses surrounded by a continuous distribution, then the masses decay into the medium over a finite period of time. The study of the phenomenon implies consideration of a new concept of measure solution which seems natural for these diffusion processes.

Global well-posedness for a periodic nonlinear Schrödinger equation in 1D and 2D
Daniela De Silva, Nataša Pavlović, Gigliola Staffilani and Nikolaos Tzirakis. 2007, 19(1): 37-65. doi: 10.3934/dcds.2007.19.37
The initial value problem for the $L^{2}$ critical semilinear Schrödinger equation with periodic boundary data is considered. We show that the problem is globally well-posed in $H^{s}(T^{d})$, for $s>4/9$ and $s>2/3$ in 1D and 2D respectively, confirming in 2D a statement of Bourgain in [4]. We use the "$I$-method''. This method allows one to introduce a modification of the energy functional that is well defined for initial data below the $H^{1}(T^{d})$ threshold. The main ingredient in the proof is a "refinement" of the Strichartz estimates that hold true for solutions defined on the rescaled space, $T^{d}_\lambda = R^{d}/{\lambda Z^{d}}$, $d=1,2$.

Dynamical properties of singular-hyperbolic attractors
Aubin Arroyo and Enrique R. Pujals. 2007, 19(1): 67-87. doi: 10.3934/dcds.2007.19.67
We provide a dynamical portrait of singular-hyperbolic transitive attractors of a flow on a 3-manifold. Our Main Theorem establishes the existence of unstable manifolds for a subset of the attractor which is visited infinitely many times by a residual subset. As a consequence, we prove that the set of periodic orbits is dense, that it is the closure of a unique homoclinic class of some periodic orbit, and that there is an SRB-measure supported on the attractor.
Entropy of polyhedral billiard
Nicolas Bedaride. 2007, 19(1): 89-102. doi: 10.3934/dcds.2007.19.89
We consider the billiard map in a convex polyhedron of $\mathbb{R}^3$, and we prove that it is of zero topological entropy.

Generalized snap-back repeller and semi-conjugacy to shift operators of piecewise continuous transformations
Wei Lin, Jianhong Wu and Guanrong Chen. 2007, 19(1): 103-119. doi: 10.3934/dcds.2007.19.103
In this paper, we attempt to clarify an open problem related to a generalization of the snap-back repeller. Constructing a semi-conjugacy from the finite product of a transformation $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ on an invariant set $\Lambda$ to a sub-shift of the finite type on a $w$-symbolic space, we show that the corresponding transformation associated with the generalized snap-back repeller on $\mathbb{R}^{n}$ exhibits chaotic dynamics in the sense of having a positive topological entropy. The argument leading to this conclusion also shows that a certain kind of degenerate transformations, admitting a point in the unstable manifold of a repeller mapping back to the repeller, have positive topological entropies on the orbits of their invariant sets. Furthermore, we present two feasible sufficient conditions for obtaining an unstable manifold. Finally, we provide two illustrative examples to show that chaotic degenerate transformations are omnipresent.

Dynamics of $\{ \lambda \tanh(e^z): \lambda \in \mathbb{R} \setminus \{0\} \}$
M. Guru Prem Prasad and Tarakanta Nayak. 2007, 19(1): 121-138. doi: 10.3934/dcds.2007.19.121
In this paper, the dynamics of transcendental meromorphic functions in the one-parameter family $\mathcal{M} = \{ f_{\lambda}(z) = \lambda f(z) : f(z) = \tanh(e^{z}) \mbox{ for } z \in \mathbb{C} \mbox{ and } \lambda \in \mathbb{R} \setminus \{ 0 \} \}$ is studied. We prove that there exists a parameter value $\lambda^* \approx -3.2946$ such that the Fatou set of $f_{\lambda}(z)$ is a basin of attraction of a real fixed point for $\lambda > \lambda^*$, and is a parabolic basin corresponding to a real fixed point for $\lambda = \lambda^*$. It is a basin of attraction or a parabolic basin corresponding to a real periodic point of prime period $2$ for $\lambda < \lambda^*$. If $\lambda > \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ is connected and infinitely connected. Consequently, the singleton components are dense in the Julia set of $f_{\lambda}$ for $\lambda > \lambda^*$. If $\lambda \leq \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ contains infinitely many pre-periodic components and each component of the Fatou set of $f_{\lambda}$ is simply connected. Finally, it is proved that the Lebesgue measure of the Julia set of $f_{\lambda}$ for $\lambda \in \mathbb{R} \setminus \{ 0 \}$ is zero.

The connected Isentropes conjecture in a space of quartic polynomials
Anca Radulescu. 2007, 19(1): 139-175. doi: 10.3934/dcds.2007.19.139
This note is a shortened version of my dissertation paper, defended at Stony Brook University in December 2004. It illustrates how dynamic complexity of a system evolves under deformations. The objects I considered are quartic polynomial maps of the interval that are compositions of two logistic maps.
In the parameter space $P^{Q}$ of such maps, I considered the algebraic curves corresponding to the parameters for which critical orbits are periodic, and I called such curves left and right bones. Using quasiconformal surgery methods and rigidity, I showed that the bones are simple smooth arcs that join two boundary points. I also analyzed in detail, using kneading theory, how the combinatorics of the maps evolve along the bones. The behavior of the topological entropy function of the polynomials in my family is closely related to the structure of the bone-skeleton. The main conclusion of the paper is that the entropy level-sets in the parameter space that was studied are connected.

Numerical and finite delay approximations of attractors for logistic differential-integral equations with infinite delay
Tomás Caraballo, P.E. Kloeden and Pedro Marín-Rubio. 2007, 19(1): 177-196. doi: 10.3934/dcds.2007.19.177
The upper semi-continuous convergence of approximate attractors for an infinite delay differential equation of logistic type is proved, first for the associated truncated delay equation with finite delay and then for a numerical scheme applied to the truncated equation.

Two nontrivial solutions for periodic systems with indefinite linear part
D. Motreanu, V. V. Motreanu and Nikolaos S. Papageorgiou. 2007, 19(1): 197-210. doi: 10.3934/dcds.2007.19.197
We consider second order periodic systems with a nonsmooth potential and an indefinite linear part. We impose conditions under which the nonsmooth Euler functional is unbounded. Then using a nonsmooth variant of the reduction method and the nonsmooth local linking theorem, we establish the existence of at least two nontrivial solutions.

Nodal solutions for Laplace equations with critical Sobolev and Hardy exponents on $R^N$
Yinbin Deng, Qi Gao and Dandan Zhang. 2007, 19(1): 211-233. doi: 10.3934/dcds.2007.19.211
This paper is concerned with the existence and nodal character of the nontrivial solutions for the following equations involving critical Sobolev and Hardy exponents: $-\Delta u + u - \mu \frac{u}{|x|^2}=|u|^{2^*-2}u + f(u), \quad u \in H^1_r(\R^N), \qquad (1)$ where $2^* = \frac{2N}{N-2}$ is the critical Sobolev exponent for the embedding $H^1_r(\R^N) \rightarrow L^{2^*}(\R^N)$, $\mu \in [0, \ (\frac{N-2}{2})^2)$ and $f: \R \rightarrow \R$ is a function satisfying some conditions. The main results obtained in this paper are that there exists a nontrivial solution of equation (1) provided $N\ge 4$ and $\mu \in [0, \ (\frac{N-2}{2})^2-1]$, and there exists at least a pair of nontrivial solutions $u^+_k$, $u^-_k$ of problem (1) for each $k \in \mathbb{N} \cup \{0\}$ such that both $u^+_k$ and $u^-_k$ possess exactly $k$ nodes provided $N\ge 6$ and $\mu \in [0, \ (\frac{N-2}{2})^2-4]$.
Junior Research Group (Nachwuchsgruppe) "Analysis of PDEs"
There are plenty of interesting PDE models, and the analysis methods adapted to them vary a lot. The group focuses on the mathematical theory of various prototypical PDE models (such as the Navier-Stokes/Euler equations and nonlinear Schrödinger equations), using powerful analysis toolboxes (such as harmonic analysis, Fourier analysis, and functional analysis). In particular, we are interested in PDE models involving variable physical coefficients (which bring strong nonlinearities), discontinuous data (which bring singularities), nonzero boundary conditions (which bring more functional structures than the spatially homogeneous case), singular limits (which deal with considerably different scales of parameters), etc.; these models find their sources and applications in the natural sciences.

People in the group (name, phone, e-mail):
Zihui He, 0721 608 43703, zihui.he@kit.edu
Xian Liao, 0721 608 42616, xian.liao@kit.edu
Ruoci Sun, 0721 608 46215, ruoci.sun@kit.edu

Current teaching (semester, title, type): Winter semester 2018/19, Lecture.
Modern Physics (PHYS 211), 2020 Fall
Faculty of Engineering and Natural Sciences
Durmuş Ali Demir
MATH101, NS101
Formal lecture; interactive, discussion-based learning

Special relativity. Historical experiments and theoretical foundations in quantum mechanics. Quantum theory of light, blackbody radiation, photoelectric effect, Compton effect. Bohr model of atoms, Franck-Hertz experiment. De Broglie waves, the wave-particle duality, uncertainty principle. The Schrödinger equation. Tunneling phenomena. Quantization of angular momentum, electron spin. Pauli exclusion principle. Fundamentals of statistical physics, Maxwell-Boltzmann distribution, indistinguishability and quantum statistics. Selected topics from atomic and solid state physics, complex systems. The course includes demonstration experiments in which the students are involved in performing as well as in the data analysis.

Refer to the course content.

1. Describe Einstein's postulates of special relativity and explain their consequences.
2. Explain the Lorentz transformation of coordinates and velocities.
3. Discuss the historical developments and experiments leading to the quantum theory of light.
4. Explain the Bohr model of the hydrogen atom.
5. Explain the wave-particle duality and the uncertainty principle.
6. Describe the meaning of the Schrödinger equation and its simple applications.
7. Explain the quantization of physical quantities.
8. Discuss the Pauli exclusion principle.
9. Discuss basic principles of quantum statistics.

Percentage (%): Final 40, Midterm 50, Assignment 10.

"Modern Physics", R. A. Serway, C. J. Moses, C. A. Moyer.
Coherent imaging of an attosecond electron wave packet
Science, 16 Jun 2017: Vol. 356, Issue 6343, pp. 1150-1153. DOI: 10.1126/science.aam8393

A detailed look at an electron's exit
When a burst of light ejects an electron from an atom, the later detection of two charged particles masks a great deal of intermittent quantum mechanical complexity. Villeneuve et al. provide a striking look at the wavelike properties of the electron just as it emerges from neon, expelled by two photons from an attosecond pulse train in a strong infrared field. The phase distribution displays the characteristic three-node structure of an f-wave, which the Stark shift from the strong field appears to select with a single magnetic quantum number of 0. Science, this issue p. 1150

Electrons detached from atoms or molecules by photoionization carry information about the quantum state from which they originate, as well as the continuum states into which they are released. Generally, the photoelectron momentum distribution is composed of a coherent sum of angular momentum components, each with an amplitude and phase. Here we show, by using photoionization of neon, that a train of attosecond pulses synchronized with an infrared laser field can be used to disentangle these angular momentum components. Two-color, two-photon ionization via a Stark-shifted intermediate state creates an almost pure f-wave with a magnetic quantum number of zero. Interference of the f-wave with a spherically symmetric s-wave provides a holographic reference that enables phase-resolved imaging of the f-wave.

In the Copenhagen interpretation of quantum mechanics, a particle is fully described by its complex wave function Ψ, which is characterized by both an amplitude and phase. However, only the square modulus of the wave function, |Ψ|², can be directly observed (1, 2). Recent developments in attosecond technology based on electron-ion recollision (3) have provided experimental tools for the imaging of the electronic wave function (not its square) in bound states or ionization continua. High-harmonic spectroscopy on aligned molecules was used to reconstruct the highest-occupied molecular orbital of nitrogen (4, 5) and to observe charge migration (6). Strong-field tunneling was used to measure the square modulus of the highest-occupied molecular orbital for selected molecules (7). Furthermore, recollision holography (8, 9) permitted a measurement of the phase and amplitude of a continuum electron generated in an intense laser field. Complementary to recollision-based measurements, photoelectron spectroscopy with attosecond extreme ultraviolet (XUV) pulses has also measured photoelectron wave packets in continuum states (10–16) by exploiting quantum interferences (17–19). However, decomposition of the wave function of an ejected photoelectron into angular momentum eigenstates with a fully characterized amplitude and phase is more difficult. First, in general, a one-photon transition with linearly polarized light generates two orbital angular momentum (ℓ) states, according to the selection rule Δℓ = ±1. Second, because the initial state has a (2ℓ + 1)-fold degeneracy (labeled by m, the magnetic quantum number) and because m is conserved for interactions with linearly polarized light, photoelectron waves with a range of m are produced.
Hence, the photoelectron momentum distribution contains a sum of contributions from different initial states, each of which is a coherent sum of different angular momentum components, making it difficult to decompose the continuum state into individual angular momentum components (20–22). Here we preferentially create an almost pure f-wave continuum wave function with m = 0 in neon by using an attosecond XUV pulse train synchronized with an infrared (IR) laser pulse through the process of high-harmonic generation. The isolation of the f-wave with m = 0 is attributed to the XUV excitation to a resonant bound state that is Stark-shifted by the IR field. By adding an additional coherent pathway that produces an isotropic electron wave, we create a hologram and reveal the alternating sign of the lobes of the f-wave. By controlling the phase of the interfering pathways with attosecond precision, we are able to determine the amplitudes and phases of all six partial-wave components that contribute to the continuum wave function. The experimental setup is described in detail in the supplementary materials (SM). An 800-nm wavelength laser pulse with a 35-fs duration is focused onto an argon gas jet, producing high-harmonic emission that we label “XUV.” In the frequency domain, the emission has peaks at odd-integer multiples of the driving laser frequency. In the time domain, the XUV pulse is composed of a train of attosecond pulses. The high-harmonic emission is focused onto a second gas jet containing neon gas. The neon atoms are excited and photoionized by different high-harmonic orders, and the resulting photoelectrons are recorded by a velocity map–imaging (VMI) spectrometer, which measures their two-dimensional (2D) projection onto a detection plane (23). For the phase-resolved measurements, we generate an XUV spectrum that contains both even and odd harmonics, using both 800- and 400-nm driving laser pulses (24). In both cases, part of the 800-nm pulse (called “IR”) is also focused onto the neon gas, permitting resonant (1 + 1′)-photon, XUV + IR ionization and Stark-shifting of the resonant bound states (25). The two-color temporal control and stability of the experiment is <50 as. We first consider the situation where the XUV is generated by 800 nm only (i.e., no 400-nm contribution). The XUV spectrum then consists of a comb of odd harmonics of the IR driver laser frequency (i.e., no even harmonics). Figure 1A shows the XUV + IR photoelectron momentum distribution for the ionization of neon that is measured under these conditions. At very low momentum, i.e., close to the ionization threshold, a six-fold angular structure is clearly observed. For comparison, an image recorded for helium under the same conditions is shown in Fig. 1B. This experiment may be viewed as the angular-resolved version of a previous study in helium by Swoboda et al. (26), in which the phase shift due to an intermediate resonance was mapped out. For neon, in Fig. 1A, the outer ring is produced through direct ionization by harmonic 15 (H15), whereas the inner structure results from (1 + 1′)-photon, H13 + IR ionization through the 3d intermediate resonance. The widths in the radial direction of all observed features are a consequence of the frequency bandwidth of the XUV and IR pulses (27). Fig. 1 Experimental velocity-map electron images. The observed photoelectron momentum distributions result from the ionization of (A) neon and (B) helium by an attosecond pulse train synchronized with the fundamental IR laser pulse.
Both pulses were polarized along the vertical (z) axis. In both images, the outer rings are due to direct ionization by harmonics 15 (neon) and 17 (helium). The central feature in the neon image results from (1 + 1′)-photon, XUV + IR ionization via the 3d state. The slight left-right asymmetry arises from imperfections in the microchannel plate detector. An energy level diagram in (C) shows the levels that are relevant for understanding the neon experiment. The green line labels the six-fold low-energy feature seen in (A). Figure 1C shows an energy level diagram that rationalizes the experimental observations in neon. The XUV photon energy and the IR intensity create a resonance condition for H13 with the Stark-shifted 3d level (see SM). The addition of an IR photon enables (1 + 1′)-photon ionization, producing the central feature seen in Fig. 1A. In Fig. 1C, the atomic eigenstates are labeled with the usual atomic physics notation, i.e., with principal quantum number n and with the orbital angular momentum labeled as s (ℓ = 0), p (ℓ = 1), d (ℓ = 2), and f (ℓ = 3). A dipole transition between states changes ℓ by ±1. For neon (1s²2s²2p⁶), the 2p→3d transition is dipole-allowed, and in the dipole approximation, the continuum electron resulting from XUV + IR ionization must have either p- or f-wave character. We show that the experimental results are consistent with a continuum electron wave function that is predominantly an f-wave with m = 0. The amplitude of the six-fold structure is modulated when the relative delay between the XUV and the IR laser pulses is varied. This modulation is due to the interference between the resonant H13 + IR pathway and the nonresonant H15 – IR pathway (we use the notation H13 + IR and H15 – IR to denote two-photon pathways composed of one harmonic order plus or minus one infrared photon). The SM shows that the phase of the six-fold structure is different from that of the higher-order sidebands, consistent with the occurrence of a phase shift due to the 3d resonance. This result is consistent with the observations of Swoboda et al. (26) in helium. Experimentally, the resonant excitation to the Stark-shifted 3d state can be confirmed by measurements of the photoelectron momentum distribution as a function of both the photon energy of the XUV and the IR laser intensity (0 to 4 × 10¹² W/cm²; see SM). At a given XUV photon energy, the six-fold structure is observed when the H13 photon energy matches the 2p→3d resonant energy plus the ponderomotive shift resulting from the IR laser intensity (see SM). However, when the XUV photon energy is larger than the Stark-shifted 2p→3d transition, the six-fold structure disappears into a broad distribution. The initial 2p state of neon has three orthogonal orbitals, px, py, and pz (we consider that in the experiment the laser is polarized along the z direction, and the photoelectron is detected in the xz plane). Ionization from each initial state should contribute to the final angular distributions. The three components of a continuum f-wave resulting from (1+1′)-photon ionization from the three p orbitals are illustrated in Fig. 2, along with their simulated VMI projections. It is clear that the six-fold structure of Fig. 1A corresponds only to the m = 0 case, which is the only orbital that displays the experimentally observed node in the horizontal direction (x direction). The dominance of the m = 0 channel is both notable and unexpected.
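To see why a six-fold pattern with a node along the horizontal axis singles out m = 0, one can tabulate the angular shapes of the three possible f-wave components directly. The short Python sketch below is illustrative only (it is not the simulation code behind Fig. 2): it counts the lobes of the unnormalized |Y₃ᵐ|² angular factors in the polarization plane and checks the intensity on the horizontal axis.

```python
import numpy as np

# Unnormalized |Y_3^m|^2 angular factors in the polarization (xz) plane.
# alpha is the in-plane angle measured from the +z (vertical, polarization) axis.
alpha = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
theta = np.where(alpha <= np.pi, alpha, 2.0 * np.pi - alpha)   # polar angle of each direction
c, s = np.cos(theta), np.sin(theta)

patterns = {
    "m = 0":   (5.0 * c**3 - 3.0 * c) ** 2,    # ~ |Y_3^0|^2
    "|m| = 1": s**2 * (5.0 * c**2 - 1.0) ** 2,  # ~ |Y_3^{+-1}|^2
    "|m| = 2": s**4 * c**2,                     # ~ |Y_3^{+-2}|^2
}

for label, f in patterns.items():
    lobes = int(np.sum((f > np.roll(f, 1)) & (f > np.roll(f, -1))))  # local maxima around the circle
    horiz = f[900] / f.max()   # relative intensity on the horizontal axis (alpha = 90 deg)
    print(f"{label}: {lobes} lobes, relative intensity on the horizontal axis = {horiz:.3f}")
```

Only the m = 0 pattern combines six lobes with zero intensity on the horizontal axis; the |m| = 1 pattern is bright there, and the |m| = 2 pattern has only four lobes.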
Like the ground state, in the absence of the laser field, the m = 0 and m = ±1 components of the 3d resonance are degenerate. Our experiment thus suggests that a Stark shift of the 3d resonant state may be responsible for the selection of the m = 0 component. We show in the SM that the Stark shift and ionization rate may be different for m = 0 and m = ±1, causing only the m = 0 channel to be shifted into resonance. Figure S5 shows that, for a particular combination of XUV frequency and IR intensity, the contribution of photoelectrons produced through the m = 0 channel exceeds by an order of magnitude the contributions from the m = ±1 channels. This calculation was performed with a 3D time-dependent Schrödinger equation (TDSE) solver by using an effective potential for argon, not neon. As discussed in the SM, this calculation demonstrates the plausibility of m = 0 selection by the Stark shift, but the calculation must be done for a benchmarked neon potential. Fig. 2 Calculated continuum wave functions and predicted VMI projections. The individual wave functions for the possible f-wave components are shown to the left of the corresponding projections of the square of the wave function on a 2D plane. Quantization axis is along the vertical (z) axis. Only the m = 0 case (left) is consistent with the experiment, which always exhibits a node along the horizontal axis. The radial part of the wave functions was simulated with a Gaussian width to correspond to the experimental energy width of the VMI images; the radial information in the experiment is not used—only the angular distributions are used. We next modified the experiment by introducing a third, XUV-only, one-photon pathway to the final continuum state as a homodyne phase reference. Experimentally, this was done by adding the second harmonic of the 800-nm laser pulse to the high-harmonic generation process, resulting in the creation of both even and odd harmonics (24). Even-order harmonic H14 creates photoelectrons with the same energy as the H13 + IR and H15 – IR pathways (see Fig. 1C). Direct ionization from the 2p ground state by H14 produces s- and d-waves, which interfere with the predominant f-wave that is created by both (1 + 1′)-photon processes. By varying the relative delay between the XUV and IR pulses, the phases of the XUV + IR, (1 + 1′)-photon processes are altered, whereas the s- and d-waves are unaffected by the delay, providing a constant phase reference for the other channels. Figure 3 shows measured photoelectron momentum distributions from neon at three different XUV-IR time delays. Compared with Fig. 1A, the lobes in the six-fold angular pattern alternate in intensity, and the intensity distribution is controlled by the XUV-IR delay. The alternating three-fold features can be rationalized in a simple picture by coherently adding an f-wave to an s-wave, or taking their difference, as illustrated in Fig. 3, while neglecting the p- and d-wave components. Fig. 3 Electron momentum angular distributions with three pathways. (Top) Experimental electron momentum distributions resulting from the ionization of neon via the three pathways (H13, H14, H15) shown in Fig. 1C. The polarization direction is vertical. (Bottom) Calculated images for a pure s-wave added to a pure f-wave (m = 0) with equal amplitudes, squared and projected onto a plane, to show that the experimental results are dominated by these two components.
For simplicity (and as supported by the data in Table 1), the p- and d-wave contributions are not included. The s-wave component is produced by direct one-photon ionization with H14 and provides a phase reference for the other two interfering pathways. As the phase of the IR pulse is advanced by the times shown above each figure, the phase of the f-wave component is varied. The resulting interference introduces an up-down asymmetry in the momentum distribution that can be controlled by the IR phase. The three VMI images shown in Fig. 3A are taken from a series of 100 images recorded at different XUV-IR time delays. These images were binned into 8° angular sectors, and the counts in each sector were integrated to extract the angular distribution for each image. In Fig. 4A, we plot the observed electron angular distributions of the central structure as a function of the XUV-IR delay. The experimental results are compared to a model in which six possible spherical harmonics are added coherently and then projected onto the xz plane to simulate the VMI images. The total continuum wave function is written as a coherent sum of these partial waves (Eq. 1), where the A's represent amplitudes of each partial wave contribution, the ϕ's are the corresponding phases, ω is the IR laser frequency, τ is the XUV-IR delay time, and the Yℓm are spherical harmonics. The first two terms of the right side of the equation describe the one-photon ionization by H14 producing s- and d-waves, whereas the latter two terms (containing the dependence on the XUV-IR delay τ) result from the pathway through the 3d resonant state involving H13, and the direct ionization channel involving H15, both producing p- and f-waves. A fit of this model to the experimental data yields the results shown in Fig. 4B; the fitting parameters are listed in Table 1. To ensure that a global optimum was found, we employed a particle swarm optimization algorithm with 10⁷ initial conditions. The amplitudes in Table 1 confirm the dominance of the f- and s-wave components over the respective p- and d-wave components that we have used in the discussion of Fig. 3. Fig. 4 Angular distribution of the central feature of the VMI images versus XUV-IR delay. Angle zero is defined as the upward direction in the VMI images, parallel to the polarization (z) axis. A delay of π radians corresponds to a delay of half an IR optical period (1.33 fs). (Left) Experimental data. (Right) Calculated angular distribution based on fitting a 12-parameter model (see Eq. 1) to the experimental data. The amplitudes and phases of each partial wave are listed in Table 1. The dominant pattern is reproduced: Alternating lobes at 0° and 180°, with minor lobes at –60°, 60°, 120°, and 240°. This pattern is associated with the six-fold structure of the dominant f-wave contribution. Table 1 Parameters of the model fit. The experimental photoelectron angular distributions as a function of XUV-IR delay, shown in Fig. 4A, are fitted to a model composed of six partial waves (see Eq. 1). The amplitudes and phases of each partial wave are listed. The amplitudes are normalized so that the sum of their squares equals one. The phase of the s-wave is defined as zero; an arbitrary common phase determines time zero. The column labeled "Amplitude²" is the square of the values in the "Amplitude" column. Errors shown are the range of each parameter such that the residual least-squares error between the model and the experiment increases by 10%.
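The layout of Eq. 1 did not survive in this copy of the text. Based on the description above (two delay-independent s- and d-wave terms from H14, plus delay-dependent p- and f-wave terms from the H13 + IR and H15 – IR pathways, six amplitudes and six phases in total), a plausible reconstruction is sketched below; the grouping and the signs of the ωτ phases are assumptions, and only m = 0 spherical harmonics are kept, as indicated by the data.

```latex
\Psi(\theta,\varphi;\tau) \approx
  A_{s}e^{i\phi_{s}}\,Y_{0}^{0}
+ A_{d}e^{i\phi_{d}}\,Y_{2}^{0}
+ e^{+i\omega\tau}\left[A_{p}^{(13)}e^{i\phi_{p}^{(13)}}\,Y_{1}^{0}
                      + A_{f}^{(13)}e^{i\phi_{f}^{(13)}}\,Y_{3}^{0}\right]
+ e^{-i\omega\tau}\left[A_{p}^{(15)}e^{i\phi_{p}^{(15)}}\,Y_{1}^{0}
                      + A_{f}^{(15)}e^{i\phi_{f}^{(15)}}\,Y_{3}^{0}\right]
\tag{1}
```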
As an additional check, we show in the SM that the partial-wave amplitudes and phases in Table 1 are consistent with several further experiments. One is the series of experiments that produced the data shown in Fig. 1A, which were recorded without H14 present; here the equal intensities of all six lobes can only be reproduced when the f- and p-waves are added with the relative phase and amplitude shown in Table 1. In a further experiment, no IR was present, and the inner structure was produced by H14 alone; here the observed angular distribution is in approximate agreement with the relative phase and amplitude of the s- and d-waves in Table 1. We have shown that, by combining coherent photoionization pathways through a Stark-shifted resonant state, we can create almost pure f-waves with a single magnetic quantum number m = 0. The addition of a direct photoionization pathway producing predominantly an s-wave provides a constant phase reference that allows a determination of the phase of the f-wave lobes. By varying the relative phase of the pathways, we can control the direction in which the electrons emanate from the atom, and we can verify the quantum phase of the lobes of the f-wave. We have spatially imaged the angular structure of the continuum wave function and coherently interfered it using a holographic reference composed largely of an isotropic s-wave, leading to the determination of the sign of the quantum wave function. This is a form of coherent control, in which the parity and direction of the electrons can be controlled (13, 19). In addition, the fitting of a model to the complete experimental data set allows us to determine the exact makeup of the total continuum wave function. In particular, we can determine the amplitude and phase of each partial-wave component. In photoionization parlance, this is a "complete" experiment (20). We have implemented a number of novel approaches, such as a sophisticated two-color interference experiment with careful use of both even and odd harmonics and the use of Stark-tuning to include or exclude desired quantum pathways. These new tools in the attosecond toolbox may allow us to study more complex systems. For example, can we apply a similar approach to a molecule? By exploiting rotational wave packets, will it be possible to determine both the amplitude and phase of transition moments in the molecular frame? If the photon energy of the XUV can be tuned widely to select a particular intermediate quantum state, our method allows the measurement of phase-resolved orbital images of other states and in different atoms. For instance, if the electron is excited from a lower-lying level to a doubly excited state, dynamical changes in the amplitude and phase resulting from electron correlation can be imaged directly with attosecond time resolution. Supplementary Materials: Materials and Methods, Figs. S1 to S8, Table S1, References (28–32). References and Notes: 1. Acknowledgments: The authors gratefully acknowledge discussions with T. Morishita, A. Stolow, M. Ivanov, I. Tamblyn, and A. Korobenko, as well as funding from the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) grant no. 25247069. Data are available upon request.
A DFT journey…
Quantum Chemistry in Spain & Alicante, 20 April 2018

In 1964, P. Hohenberg and W. Kohn (in 1998, Walter Kohn received, shared with John A. Pople, the Nobel Prize in Chemistry for his work on DFT) published a pair of theorems constituting the basis for Density-Functional Theory (DFT). Only one year later, the development of the Kohn-Sham (KS) scheme made DFT a practical theory for all kinds of calculations, as it is known today (KS-DFT). These authors showed that there always exists a one-to-one correspondence between the energy and the electron density of a system, i.e., it is in principle possible to obtain the exact energy directly from this density through a universal functional. However, the mathematical formulation that delivers this energy is still unknown. This approach completely circumvents the paths classically forming the core of Quantum Chemistry: the wavefunction is no longer needed, and the associated Schrödinger equation does not need to be solved. The key is thus to model or mimic the subtle effects dominating matter at the quantum scale by means of a functional of the electronic density. The machinery should accurately include exchange and correlation effects, in order to address the structure and bonding of molecules, and it should be more advantageous than ab initio methods, either by reducing the computational cost associated with any calculation or by introducing theoretical models able to rationalise chemical reactivity or physical concepts. It was not until the 1980s that modern approximations to that universal functional were proposed; that is, expressions able to deliver the stabilising effects of matter arising from a purely quantum-mechanical (non-classical) origin after inserting the density of any system into the specific chosen mathematical form. The development of these expressions (the density functionals) is normally hard work, needing extensive calibration and applications before their wide adoption by the community. Apart from the Local Density Approximation (LDA), the extensions coined as the Generalized Gradient Approximation (GGA), the HYBRID functionals containing a portion of exact-like exchange, the meta-GGA functionals, the DOUBLE-HYBRID functionals also containing a portion of perturbative-like correlation, local hybrid functionals, range-separated hybrid functionals, and other orbital-dependent functionals (e.g., RPA) are available today for running calculations. Welcome to DFT! Welcome to Alicante!
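To make "running calculations" with these functionals concrete, here is a minimal sketch using the open-source PySCF package (my choice for illustration; the post does not name any particular code). It computes the Kohn-Sham energy of a water molecule with an LDA, a GGA (PBE), and a hybrid (B3LYP) functional.

```python
from pyscf import gto, dft

# Water molecule in a modest basis; geometry in Angstrom (PySCF's default unit).
mol = gto.M(atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587", basis="cc-pvdz")

# Three rungs of the functional ladder: LDA, a GGA (PBE), and a hybrid (B3LYP).
for xc in ["lda,vwn", "pbe,pbe", "b3lyp"]:
    mf = dft.RKS(mol)      # restricted Kohn-Sham calculation
    mf.xc = xc             # choose the exchange-correlation functional
    energy = mf.kernel()   # runs the SCF and returns the total energy in Hartree
    print(f"{xc:10s}  E = {energy:.6f} Hartree")
```

Changing the single `mf.xc` string is all it takes to climb from LDA to a hybrid, which is exactly why calibrating and comparing functionals has become such a large part of practical DFT work.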
Chemistry LibreTexts
3: The Schrödinger Equation and the Particle-in-a-Box Model

The particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move in a small space surrounded by impenetrable barriers. The model is mainly used as a hypothetical example to illustrate the differences between classical and quantum systems. In classical systems, for example a ball trapped inside a large box, the particle can move at any speed within the box and it is no more likely to be found at one position than another. However, when the well becomes very narrow (on the scale of a few nanometers), quantum effects become important. The particle may only occupy certain positive energy levels. The particle in a box model provides one of the very few problems in quantum mechanics which can be solved analytically, without approximations. This means that the observable properties of the particle (such as its energy and position) are related to the mass of the particle and the width of the well by simple mathematical expressions (a short numerical illustration of these expressions follows the chapter outline below). Due to its simplicity, the model allows insight into quantum effects without the need for complicated mathematics. It is one of the first quantum mechanics problems taught in undergraduate physics courses, and it is commonly used as an approximation for more complicated quantum systems.

• 3.1: The Schrödinger Equation. Erwin Schrödinger posited an equation that predicts both the allowed energies of a system and addresses the wave-particle duality of matter. The Schrödinger equation for de Broglie's matter waves cannot be derived from some other principle since it constitutes a fundamental law of nature. Its correctness can be judged only by its subsequent agreement with observed phenomena (a posteriori proof).
• 3.2: Linear Operators in Quantum Mechanics. An operator is a generalization of the concept of a function. Whereas a function is a rule for turning one number into another, an operator is a rule for turning one function into another.
• 3.3: The Schrödinger Equation is an Eigenvalue Problem. To every dynamical variable \(a\) in quantum mechanics, there corresponds an eigenvalue equation, usually written \[\hat{A}\psi=a\psi\] The eigenvalues \(a\) represent the possible measured values of the \(A\) operator.
• 3.4: The Quantum Mechanical Free Particle. The simplest system in quantum mechanics has the potential energy V=0 everywhere. This is called a free particle since it has no forces acting on it. We consider the one-dimensional case, with motion only in the x-direction. We discuss how the wavefunction can be a linear combination of eigenfunctions and how wavepackets can be constructed from eigenstates to generate a localized-particle picture that a single eigenstate does not possess.
• 3.5: Wavefunctions Have a Probabilistic Interpretation. The most commonly accepted interpretation of the wavefunction is that the square of its modulus is proportional to the probability density (probability per unit volume) that the electron is in the volume dτ located at r. Since the wavefunction represents the wave properties of matter, the probability amplitude P(x,t) will also exhibit wave-like behavior.
• 3.6: The Energy of a Particle in a Box is Quantized. The particle in the box model system is the simplest non-trivial application of the Schrödinger equation, but one which illustrates many of the fundamental concepts of quantum mechanics.
• 3.7: Wavefunctions Must Be Normalized. The probability of a measurement of \(x\) yielding a result between \(-\infty\) and \(+\infty\) is 1. So wavefunctions should be normalized if possible.
• 3.8: The Average Momentum of a Particle in a Box is Zero. From the mathematical expressions for the wavefunctions and energies for the particle-in-a-box, we can answer a number of interesting questions. Key to addressing these questions is the formulation and use of expectation values. This is demonstrated in the module and used in the context of evaluating average properties (energy, position, and momentum of the particle in a box).
• 3.9: The Uncertainty Principle Redux - Estimating Uncertainties from Wavefunctions. The operators x and p are not compatible and there is no measurement that can precisely determine both x and p simultaneously. The uncertainty principle is a consequence of the wave property of matter. A wave has some finite extent in space and generally is not localized at a point. Consequently there usually is significant uncertainty in the position of a quantum particle in space.
• 3.11: A Particle in a Two-Dimensional Box. A particle in a 2-dimensional box is a fundamental quantum mechanical approximation describing the translational motion of a single particle confined inside an infinitely deep well from which it cannot escape.
• 3.12: A Particle in a Three-Dimensional Box. The 1D particle in the box problem can be expanded to consider a particle within a 3D box with three lengths \(a\), \(b\), and \(c\), when there is no force (i.e., no potential) acting on the particle inside the box. Motion, and hence the quantization properties of each dimension, is independent of the other dimensions. This module introduces the concept of degeneracy, where multiple wavefunctions (different quantum numbers) have the same energy.
• 3.E: The Schrödinger Equation and a Particle in a Box (Exercises). These are homework exercises to accompany the chapter.
• 3.I: Interactive Worksheets
Thumbnail: The quantum wavefunction of a particle in a 2D infinite potential well of dimensions \(L_x\) and \(L_y\). The wavenumbers are \(n_x=2\) and \(n_y=2\). (Public Domain; Inductiveload).
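The "simple mathematical expressions" mentioned in the chapter introduction are the standard box results E_n = n²h²/(8mL²) and ψ_n(x) = √(2/L) sin(nπx/L). As a quick illustration (this sketch is not part of the LibreTexts modules themselves), the following Python snippet evaluates the first few energy levels for an electron in a 1-nm box and numerically checks the normalization of ψ₁.

```python
import numpy as np

h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
L = 1e-9                 # box width: 1 nm

def energy(n, m=m_e, box=L):
    """Particle-in-a-box energy E_n = n^2 h^2 / (8 m L^2), in joules."""
    return n**2 * h**2 / (8 * m * box**2)

def psi(n, x, box=L):
    """Normalized wavefunction sqrt(2/L) * sin(n pi x / L) for 0 <= x <= L."""
    return np.sqrt(2.0 / box) * np.sin(n * np.pi * x / box)

for n in (1, 2, 3):
    print(f"n = {n}:  E = {energy(n) / 1.602176634e-19:.3f} eV")   # levels grow as n^2

# Numerical check: the integral of |psi_1|^2 over the box should be 1.
x = np.linspace(0.0, L, 10001)
print("normalization of psi_1:", np.trapz(psi(1, x)**2, x))
```

Making the box wider or the particle heavier shrinks the level spacing, which is why quantum effects only become noticeable for nanometer-scale confinement.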
News Release  MSU scientists solve half-century-old magnesium dimer mystery Michigan State University The lowest fourteen Mg2 vibrational states were discovered in the 1970s, but both early and recent experiments should have observed a total of nineteen states. Like a quantum cold case, experimental efforts to find the last five failed, and Mg2 was almost forgotten. Until now. Piotr Piecuch, Michigan State University Distinguished Professor and MSU Foundation Professor of chemistry, along with College of Natural Science Department of Chemistry graduate students Stephen H. Yuwono and Ilias Magoulas, developed new, computationally derived evidence that not only made a quantum leap in first-principles quantum chemistry, but finally solved the 50-year-old Mg2 mystery. Their findings were recently published in the journal Science Advances. "Our thorough investigation of the magnesium dimer unambiguously confirms the existence of 19 vibrational levels," said Piecuch, whose research group has been active in quantum chemistry and physics for more than 20 years. "By accurately computing the ground- and excited-state potential energy curves, the transition dipole moment function between them and the rovibrational states, we not only reproduced the latest laser-induced fluorescence (LIF) spectra, but we also provided guidance for the future experimental detection of the previously unresolved levels." So why were Piecuch and his team able to succeed where others had failed for so many years? The persistence of Yuwono and Magoulas certainly revived interest in the Mg2 case, but the answer lies in the team's brilliant demonstration of the predictive power of modern electronic structure methodologies, which came to the rescue when experiments encountered unsurmountable difficulties. "The presence of collisional lines originating from one molecule hitting another and the background noise muddied the experimentally observed LIF spectra," Piecuch explained. "To make matters worse, the elusive high-lying vibrational states of Mg2 that baffled scientists for decades dissipate into thin air when the molecule starts rotating." Instead of running costly experiments, Piecuch and his team developed efficient computational strategies that simulated those experiments, and they did it better than anyone had before. Like the quantized vibrational states of Mg2, in-between approximations were not acceptable. They solved the electronic and nuclear Schrödinger equations, tenets of quantum physics that describe molecular motions, with almost complete accuracy. "The majority of calculations in our field do not require the high accuracy levels we had to reach in our study and often resort to less expensive computational models, but we provided compelling evidence that this would not work here," Piecuch said. "We had to consider every conceivable physical effect and understand the consequences of neglecting even the tiniest details when solving the quantum mechanical equations." Their calculations reproduced the experimentally derived vibrational and rotational motions of Mg2 and the observed LIF spectra with remarkable precision--on the order of 1 cm-1, to be exact. This provided the researchers with confidence that their predictions regarding the magnesium dimer, including the existence of the elusive high-lying vibrational states, were firm. Yuwono and Magoulas were clearly excited about the groundbreaking project, but emphasized they had initial doubts whether the team would be successful. 
"In the beginning, we were not even sure if we could pull this investigation off, especially considering the number of electrons in the magnesium dimer and the extreme accuracies required by our state-of-the-art computations," said Magoulas, who has worked in Piecuch's research group for more than four years and teaches senior level quantum chemistry courses at MSU. "The computational resources we had to throw at the project and the amount of data we had to process were immense--much larger than all of my previous computations combined," added Yuwono, who also teaches physical chemistry courses at MSU and has worked in Piecuch's research group since 2017. The case of the high-lying vibrational states of Mg2 that evaded scientists for half a century is finally closed, but the details of the computations that cracked it are completely open and accessible on the Science Advances website. Yuwono, Magoulas, and Piecuch hope that their computations will inspire new experimental studies. "Quantum mechanics is a beautiful mathematical theory with a potential of explaining the intimate details of molecular and other microscopic phenomena," Piecuch said. "We used the Mg2 mystery as an opportunity to demonstrate that the predictive power of modern computational methodologies based on first-principles quantum mechanics is no longer limited to small, few-electron species."
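The release does not show how one goes from a computed potential energy curve to quantized vibrational levels, so here is a deliberately generic sketch: it diagonalizes a finite-difference nuclear Hamiltonian for a Morse potential with made-up parameters. It is in no way the high-accuracy electronic-structure treatment described above; it only illustrates the final step of turning a potential curve into a count of bound vibrational states below dissociation.

```python
import numpy as np

# Illustrative Morse potential V(r) = De*(1 - exp(-a*(r - re)))^2, in atomic units.
# The parameters are arbitrary placeholders, not the actual Mg2 ground-state curve.
De, a, re = 0.02, 1.0, 7.0    # well depth, width parameter, equilibrium separation
mu = 22000.0                  # reduced mass (illustrative value)

r = np.linspace(4.0, 30.0, 2000)
dr = r[1] - r[0]
V = De * (1.0 - np.exp(-a * (r - re)))**2

# Finite-difference kinetic energy: -1/(2*mu) d^2/dr^2 on a uniform grid
main = 1.0 / (mu * dr**2) + V
off = -1.0 / (2.0 * mu * dr**2) * np.ones(len(r) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
bound = E[E < De]             # vibrational levels below the dissociation limit
print("number of bound vibrational levels:", len(bound))
print("lowest few (hartree):", np.round(bound[:5], 6))
```

Given a sufficiently accurate potential and transition dipole function, the same eigenvalue step yields the rovibrational levels and LIF line positions the release describes.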
It’s easy to take time’s arrow for granted – but the gears of physics actually work just as smoothly in reverse. Maybe that time machine is possible after all? An experiment earlier this year shows just how much wiggle room we can expect when it comes to distinguishing the past from the future, at least on a quantum scale. It might not allow us to relive the 1960s, but it could help us better understand why not. Researchers from Russia and the US teamed up to find a way to break, or at least bend, one of physics’ most fundamental laws on energy. The second law of thermodynamics is less a hard rule and more of a guiding principle for the Universe. It says hot things get colder over time as energy transforms and spreads out from areas where it’s most intense. It’s a principle that explains why your coffee won’t stay hot in a cold room, why it’s easier to scramble an egg than unscramble it, and why nobody will ever let you patent a perpetual motion machine. It’s also the closest we can get to a rule that tells us why we can remember what we had for dinner last night, but have no memory of next Christmas. “That law is closely related to the notion of the arrow of time that posits the one-way direction of time from the past to the future,” says quantum physicist Gordey Lesovik from the Moscow Institute of Physics and Technology. Virtually every other rule in physics can be flipped and still make sense. For example, you could zoom in on a game of pool, and a single collision between any two balls won’t look weird if you happened to see it in reverse. On the other hand, if you watched balls roll out of pockets and reform the starting pyramid, it would be a sobering experience. That’s the second law at work for you. On the macro scale of omelettes and games of pool, we shouldn’t expect a lot of give in the laws of thermodynamics. But as we focus in on the tiny gears of reality – in this case, solitary electrons – loopholes appear. Electrons aren’t like tiny billiard balls, they’re more akin to information that occupies a space. Their details are defined by something called the Schrödinger equation, which represents the possibilities of an electron’s characteristics as a wave of chance. If this is a bit confusing, let’s go back to imagining a game of pool, but this time the lights are off. You start with the information – a cue ball – in your hand, and then send it rolling across the table. The Schrödinger equation tells you that ball is somewhere on the pool table moving around at a certain speed. In quantum terms, the ball is everywhere at a bunch of speeds … some just more likely than others. You can stick your hand out and grab it to pinpoint its location, but now you’re not sure of how fast it was going. You could also gently brush your finger against it and confidently know its velocity, but where it went… who knows? There’s one other trick you could use, though. A split second after you send that ball rolling, you can be fairly sure it’s still near your hand moving at a high rate. In one sense, the Schrödinger equation predicts the same thing for quantum particles. Over time, the possibilities of a particle’s positions and velocities expands. “However, Schrödinger’s equation is reversible,” says materials scientist Valerii Vinokur from the Argonne National Laboratory in the US. 
“Mathematically, it means that under a certain transformation called complex conjugation, the equation will describe a ‘smeared’ electron localising back into a small region of space over the same time period.” It’s as if your cue ball was no longer spreading out in a wave of infinite possible positions across the dark table, but rewinding back into your hand. In theory, there’s nothing stopping it from occurring spontaneously. You’d need to stare at 10 billion electron-sized pool tables every second and the lifetime of our Universe to see it happen once, though. Rather than patiently wait around and watch funding trickle away, the team used the undetermined states of particles in a quantum computer as their pool ball, and some clever manipulation of the computer as their ‘time machine’. Each of these states, or qubits, was arranged into a simple state which corresponded to a hand holding the ball. Once the quantum computer was set into action, these states rolled out into a range of possibilities. By tweaking certain conditions in the computer’s setup, those possibilities were confined in a way that effectively rewound the Schrödinger equation deliberately. To test this, the team launched the set-up again, as if kicking a pool table and watching the scattered balls rearrange into the initial pyramid shape. In about 85 percent of trials based on just two qubits, this is exactly what happened. On a practical level, the algorithms they used to manipulate the Schrödinger equation into rewinding in this way could help improve the accuracy of quantum computers. It’s not the first time this team has given the second law of thermodynamics a good shake. A couple of years ago they entangled some particles and managed to heat and cool them in such a way they effectively behaved like a perpetual motion machine. Finding ways to push the limits of such physical laws on the quantum scale just might help us better understand why the Universe ‘flows’ like it does. This research was published in Scientific Reports.
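The "rewinding by complex conjugation" idea is easy to see numerically for a free particle. The sketch below is a toy split-step simulation with arbitrary units, not the two-qubit protocol from the paper: it lets a Gaussian wavepacket spread, conjugates it, evolves it again for the same time, and the packet refocuses to its original width.

```python
import numpy as np

# Toy demonstration: conjugating a freely spreading wavepacket "rewinds" its spreading.
N, L = 2048, 200.0                      # grid points and box size (hbar = m = 1)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma0 = 1.0
psi = (2 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0**2))

def evolve_free(psi, t):
    """Exact free-particle evolution applied in the momentum representation."""
    return np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

def width(psi):
    prob = np.abs(psi) ** 2
    prob /= prob.sum()
    mean = np.sum(x * prob)
    return np.sqrt(np.sum((x - mean) ** 2 * prob))

print("initial width       :", round(width(psi), 3))
spread = evolve_free(psi, t=20.0)
print("after spreading     :", round(width(spread), 3))
rewound = evolve_free(np.conj(spread), t=20.0)   # complex conjugation, then the same forward evolution
print("after 'time rewind' :", round(width(rewound), 3))
```

In the actual experiment the conjugation step had to be built out of gates acting on the qubits' joint state, which is why the reversal succeeded in about 85 percent of trials rather than the essentially exact refocusing this toy model gives.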
In physics, quasiparticles and collective excitations (which are closely related) are emergent phenomena that occur when a microscopically complicated system such as a solid behaves as if it contained different weakly interacting particles in free space. For example, as an electron travels through a semiconductor, its motion is disturbed in a complex way by its interactions with all of the other electrons and nuclei; however it approximately behaves like an electron with a different mass (effective mass) traveling unperturbed through free space. This "electron with a different mass" is called an "electron quasiparticle".[1] In another example, the aggregate motion of electrons in the valence band of a semiconductor or a hole band in a metal[2] is the same as if the material instead contained positively charged quasiparticles called electron holes. Other quasiparticles or collective excitations include phonons (particles derived from the vibrations of atoms in a solid), plasmons (particles derived from plasma oscillations), and many others. These particles are typically called "quasiparticles" if they are related to fermions, and called "collective excitations" if they are related to bosons,[1] although the precise distinction is not universally agreed upon.[3] Thus, electrons and electron holes are typically called "quasiparticles", while phonons and plasmons are typically called "collective excitations". The quasiparticle concept is most important in condensed matter physics since it is one of the few known ways of simplifying the quantum mechanical many-body problem.

General introduction

Solids are made of only three kinds of particles: electrons, protons, and neutrons. Quasiparticles are none of these; instead, each of them is an emergent phenomenon that occurs inside the solid. Therefore, while it is quite possible to have a single particle (electron or proton or neutron) floating in space, a quasiparticle can only exist inside interacting many-particle systems (primarily solids). Motion in a solid is extremely complicated: Each electron and proton is pushed and pulled (by Coulomb's law) by all the other electrons and protons in the solid (which may themselves be in motion). It is these strong interactions that make it very difficult to predict and understand the behavior of solids (see many-body problem). On the other hand, the motion of a non-interacting classical particle is relatively simple; it would move in a straight line at constant velocity. This is the motivation for the concept of quasiparticles: The complicated motion of the real particles in a solid can be mathematically transformed into the much simpler motion of imagined quasiparticles, which behave more like non-interacting particles. In summary, quasiparticles are a mathematical tool for simplifying the description of solids.

Relation to many-body quantum mechanics

Any system, no matter how complicated, has a ground state along with an infinite series of higher-energy excited states. The principal motivation for quasiparticles is that it is almost impossible to directly describe every particle in a macroscopic system. For example, a barely-visible (0.1 mm) grain of sand contains around 10¹⁷ nuclei and 10¹⁸ electrons. Each of these attracts or repels every other by Coulomb's law. In principle, the Schrödinger equation predicts exactly how this system will behave.
But the Schrödinger equation in this case is a partial differential equation (PDE) on a 3×10¹⁸-dimensional vector space—one dimension for each coordinate (x,y,z) of each particle. Directly and straightforwardly trying to solve such a PDE is impossible in practice. Indeed, solving a PDE on a 2-dimensional space is typically much harder than solving a PDE on a 1-dimensional space (whether analytically or numerically); solving a PDE on a 3-dimensional space is significantly harder still; and thus solving a PDE on a 3×10¹⁸-dimensional space is quite impossible by straightforward methods. One simplifying factor is that the system as a whole, like any quantum system, has a ground state and various excited states with higher and higher energy above the ground state. In many contexts, only the "low-lying" excited states, with energy reasonably close to the ground state, are relevant. This occurs because of the Boltzmann distribution, which implies that very-high-energy thermal fluctuations are unlikely to occur at any given temperature. Quasiparticles and collective excitations are a type of low-lying excited state. For example, a crystal at absolute zero is in the ground state, but if one phonon is added to the crystal (in other words, if the crystal is made to vibrate slightly at a particular frequency) then the crystal is now in a low-lying excited state. The single phonon is called an elementary excitation. More generally, low-lying excited states may contain any number of elementary excitations (for example, many phonons, along with other quasiparticles and collective excitations).[4] When the material is characterized as having "several elementary excitations", this statement presupposes that the different excitations can be combined together. In other words, it presupposes that the excitations can coexist simultaneously and independently. This is never exactly true. For example, a solid with two identical phonons does not have exactly twice the excitation energy of a solid with just one phonon, because the crystal vibration is slightly anharmonic. However, in many materials, the elementary excitations are very close to being independent. Therefore, as a starting point, they are treated as free, independent entities, and then corrections are included via interactions between the elementary excitations, such as "phonon-phonon scattering". Therefore, using quasiparticles / collective excitations, instead of analyzing 10¹⁸ particles, one needs to deal with only a handful of somewhat-independent elementary excitations. It is, therefore, a very effective approach to simplify the many-body problem in quantum mechanics. This approach is not useful for all systems, however: In strongly correlated materials, the elementary excitations are so far from being independent that it is not even useful as a starting point to treat them as independent.

Distinction between quasiparticles and collective excitations

Usually, an elementary excitation is called a "quasiparticle" if it is a fermion and a "collective excitation" if it is a boson.[1] However, the precise distinction is not universally agreed upon.[3] There is a difference in the way that quasiparticles and collective excitations are intuitively envisioned.[3] A quasiparticle is usually thought of as being like a dressed particle: it is built around a real particle at its "core", but the behavior of the particle is affected by the environment.
A standard example is the "electron quasiparticle": an electron in a crystal behaves as if it had an effective mass which differs from its real mass. On the other hand, a collective excitation is usually imagined to be a reflection of the aggregate behavior of the system, with no single real particle at its "core". A standard example is the phonon, which characterizes the vibrational motion of every atom in the crystal. However, these two visualizations leave some ambiguity. For example, a magnon in a ferromagnet can be considered in one of two perfectly equivalent ways: (a) as a mobile defect (a misdirected spin) in a perfect alignment of magnetic moments or (b) as a quantum of a collective spin wave that involves the precession of many spins. In the first case, the magnon is envisioned as a quasiparticle, in the second case, as a collective excitation. However, both (a) and (b) are equivalent and correct descriptions. As this example shows, the intuitive distinction between a quasiparticle and a collective excitation is not particularly important or fundamental. The problems arising from the collective nature of quasiparticles have also been discussed within the philosophy of science, notably in relation to the identity conditions of quasiparticles and whether they should be considered "real" by the standards of, for example, entity realism.[5][6]

Effect on bulk properties

By investigating the properties of individual quasiparticles, it is possible to obtain a great deal of information about low-energy systems, including the flow properties and heat capacity. In the heat capacity example, a crystal can store energy by forming phonons, and/or forming excitons, and/or forming plasmons, etc. Each of these is a separate contribution to the overall heat capacity. The idea of quasiparticles originated in Lev Landau's theory of Fermi liquids, which was originally invented for studying liquid helium-3. For these systems a strong similarity exists between the notion of quasiparticle and dressed particles in quantum field theory. The dynamics of Landau's theory is defined by a kinetic equation of the mean-field type. A similar equation, the Vlasov equation, is valid for a plasma in the so-called plasma approximation. In the plasma approximation, charged particles are considered to be moving in the electromagnetic field collectively generated by all other particles, and hard collisions between the charged particles are neglected. When a kinetic equation of the mean-field type is a valid first-order description of a system, second-order corrections determine the entropy production, and generally take the form of a Boltzmann-type collision term, in which figure only "far collisions" between virtual particles. In other words, every type of mean-field kinetic equation, and in fact every mean-field theory, involves a quasiparticle concept.

Examples of quasiparticles and collective excitations

This section contains examples of quasiparticles and collective excitations. The first subsection below contains common ones that occur in a wide variety of materials under ordinary conditions; the second subsection contains examples that arise only in special contexts.

More common examples

• In solids, an electron quasiparticle is an electron as affected by the other forces and interactions in the solid. The electron quasiparticle has the same charge and spin as a "normal" (elementary particle) electron, and like a normal electron, it is a fermion.
However, its mass can differ substantially from that of a normal electron; see the article effective mass.[1] Its electric field is also modified, as a result of electric field screening. In many other respects, especially in metals under ordinary conditions, these so-called Landau quasiparticles[citation needed] closely resemble familiar electrons; as Crommie's "quantum corral" showed, an STM can clearly image their interference upon scattering.
• A hole is a quasiparticle consisting of the lack of an electron in a state; it is most commonly used in the context of empty states in the valence band of a semiconductor.[1] A hole has the opposite charge of an electron.
• A phonon is a collective excitation associated with the vibration of atoms in a rigid crystal structure. It is a quantum of a sound wave.
• A magnon is a collective excitation[1] associated with the electrons' spin structure in a crystal lattice. It is a quantum of a spin wave.
• In materials, a photon quasiparticle is a photon as affected by its interactions with the material. In particular, the photon quasiparticle has a modified relation between wavelength and energy (dispersion relation), as described by the material's index of refraction. It may also be termed a polariton, especially near a resonance of the material. For example, an exciton-polariton is a superposition of an exciton and a photon; a phonon-polariton is a superposition of a phonon and a photon.
• A plasmon is a collective excitation, which is the quantum of plasma oscillations (wherein all the electrons simultaneously oscillate with respect to all the ions).
• A polaron is a quasiparticle which comes about when an electron interacts with the polarization of its surrounding ions.
• An exciton is an electron and hole bound together.
• A plasmariton is a coupled optical phonon and dressed photon consisting of a plasmon and photon.

More specialized examples

• A roton is a collective excitation associated with the rotation of a fluid (often a superfluid). It is a quantum of a vortex.
• Composite fermions arise in a two-dimensional system subject to a large magnetic field, most famously those systems that exhibit the fractional quantum Hall effect.[7] These quasiparticles are quite unlike normal particles in two ways. First, their charge can be less than the electron charge e. In fact, they have been observed with charges of e/3, e/4, e/5, and e/7.[8] Second, they can be anyons, an exotic type of particle that is neither a fermion nor boson.[9]
• Stoner excitations in ferromagnetic metals
• Bogoliubov quasiparticles in superconductors. Superconductivity is carried by Cooper pairs—usually described as pairs of electrons—that move through the crystal lattice without resistance. A broken Cooper pair is called a Bogoliubov quasiparticle.[10] It differs from the conventional quasiparticle in metal because it combines the properties of a negatively charged electron and a positively charged hole (an electron void). Physical objects like impurity atoms, from which quasiparticles scatter in an ordinary metal, only weakly affect the energy of a Cooper pair in a conventional superconductor. In conventional superconductors, interference between Bogoliubov quasiparticles is tough for an STM to see. Because of their complex global electronic structures, however, high-Tc cuprate superconductors are another matter.
Thus Davis and his colleagues were able to resolve distinctive patterns of quasiparticle interference in Bi-2212.[11]
• A Majorana fermion is a particle which equals its own antiparticle, and can emerge as a quasiparticle in certain superconductors, or in a quantum spin liquid.[12]
• Magnetic monopoles arise in condensed matter systems such as spin ice and carry an effective magnetic charge as well as being endowed with other typical quasiparticle properties such as an effective mass. They may be formed through spin flips in frustrated pyrochlore ferromagnets and interact through a Coulomb potential.
• Skyrmions
• A spinon is a quasiparticle produced as a result of electron spin-charge separation, and can form both quantum spin liquid and strongly correlated quantum spin liquid in some minerals like Herbertsmithite.[13]
• Angulons can be used to describe the rotation of molecules in solvents. First postulated theoretically in 2015,[14] the existence of the angulon was confirmed in February 2017, after a series of experiments spanning 20 years. Heavy and light species of molecules were found to rotate inside superfluid helium droplets, in good agreement with the angulon theory.[15][16]
• Type-II Weyl fermions break Lorentz symmetry, the foundation of the special theory of relativity, which cannot be broken by real particles.[17]
• A dislon is a quantized field associated with the quantization of the lattice displacement field of a crystal dislocation. It is a quantum of vibration and static strain field of a dislocation line.[18]

References

1. ^ a b c d e f E. Kaxiras, Atomic and Electronic Structure of Solids, ISBN 0-521-52339-7, pages 65–69.
2. ^ Ashcroft and Mermin (1976). Solid State Physics (1st ed.). Holt, Reinhart, and Winston. pp. 299–302. ISBN 978-0030839931.
3. ^ a b c A guide to Feynman diagrams in the many-body problem, by Richard D. Mattuck, p10. "As we have seen, the quasiparticle consists of the original real, individual particle, plus a cloud of disturbed neighbors. It behaves very much like an individual particle, except that it has an effective mass and a lifetime. But there also exist other kinds of fictitious particles in many-body systems, i.e. 'collective excitations'. These do not center around individual particles, but instead involve collective, wavelike motion of all the particles in the system simultaneously."
4. ^ Ohtsu, Motoichi; Kobayashi, Kiyoshi; Kawazoe, Tadashi; Yatsui, Takashi; Naruse, Makoto (2008). Principles of Nanophotonics. CRC Press. p. 205. ISBN 9781584889731.
5. ^ Gelfert, Axel (2003). "Manipulative success and the unreal". International Studies in the Philosophy of Science. 17 (3): 245–263. doi:10.1080/0269859032000169451.
6. ^ B. Falkenburg, Particle Metaphysics (The Frontiers Collection), Berlin: Springer 2007, esp. pp. 243–46
7. ^ "Physics Today Article".
8. ^ "Cosmos magazine June 2008". Archived from the original on 9 June 2008.
9. ^ Goldman, Vladimir J (2007). "Fractional quantum Hall effect: A game of five halves". Nature Physics. 3 (8): 517. Bibcode:2007NatPh...3..517G. doi:10.1038/nphys681.
10. ^ "Josephson Junctions". Science and Technology Review. Lawrence Livermore National Laboratory.
11. ^ J. E. Hoffman; McElroy, K; Lee, DH; Lang, KM; Eisaki, H; Uchida, S; Davis, JC; et al. (2002). "Imaging Quasiparticle Interference in Bi2Sr2CaCu2O8+δ". Science. 297 (5584): 1148–51. arXiv:cond-mat/0209276. Bibcode:2002Sci...297.1148H. doi:10.1126/science.1072640. PMID 12142440.
12. ^ Banerjee, A.; Bridges, C. A.; Yan, J.-Q.; et al. (4 April 2016). "Proximate Kitaev quantum spin liquid behaviour in a honeycomb magnet". Nature Materials. 15 (7): 733–740. arXiv:1504.08037. Bibcode:2016NatMa..15..733B. doi:10.1038/nmat4604. PMID 27043779.
13. ^ Shaginyan, V. R.; et al. (2012). "Identification of Strongly Correlated Spin Liquid in Herbertsmithite". EPL. 97 (5): 56001. arXiv:1111.0179. Bibcode:2012EL.....9756001S. doi:10.1209/0295-5075/97/56001.
14. ^ Schmidt, Richard; Lemeshko, Mikhail (18 May 2015). "Rotation of Quantum Impurities in the Presence of a Many-Body Environment". Physical Review Letters. 114 (20): 203001. arXiv:1502.03447. Bibcode:2015PhRvL.114t3001S. doi:10.1103/PhysRevLett.114.203001. PMID 26047225.
15. ^ Lemeshko, Mikhail (27 February 2017). "Quasiparticle Approach to Molecules Interacting with Quantum Solvents". Physical Review Letters. 118 (9): 095301. arXiv:1610.01604. Bibcode:2017PhRvL.118i5301L. doi:10.1103/PhysRevLett.118.095301. PMID 28306270.
16. ^ "Existence of a new quasiparticle demonstrated". Retrieved 1 March 2017.
17. ^ Xu, S.Y.; Alidoust, N.; Chang, G.; et al. (2 June 2017). "Discovery of Lorentz-violating type II Weyl fermions in LaAlGe". Science Advances. 3 (6): e1603266. Bibcode:2017SciA....3E3266X. doi:10.1126/sciadv.1603266. PMC 5457030. PMID 28630919.
18. ^ Li, Mingda; Tsurimaki, Yoichiro; Meng, Qingping; Andrejevic, Nina; Zhu, Yimei; Mahan, Gerald D.; Chen, Gang (2018). "Theory of electron–phonon–dislon interacting system—toward a quantized theory of dislocations". New Journal of Physics. 20 (2): 023010. arXiv:1708.07143. doi:10.1088/1367-2630/aaa383.

Further reading

• L. D. Landau, Soviet Phys. JETP. 3:920 (1957)
• L. D. Landau, Soviet Phys. JETP. 5:101 (1957)
• A. A. Abrikosov, L. P. Gor'kov, and I. E. Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (1963, 1975). Prentice-Hall, New Jersey; Dover Publications, New York.
• D. Pines, and P. Nozières, The Theory of Quantum Liquids (1966). W.A. Benjamin, New York. Volume I: Normal Fermi Liquids (1999). Westview Press, Boulder.
• J. W. Negele, and H. Orland, Quantum Many-Particle Systems (1998). Westview Press, Boulder
DOI: 10.14704/nq.2010.8.3.295
Towards a Theory of Everything Part II: Introduction of Consciousness in Schrödinger equation and Standard Model
Ram Lakhan Pandey Vimal
A theory of everything must include consciousness. In Part I (Vimal, 2010b) of this series of 3 articles, the subjective experience (SE) aspect of consciousness was introduced in classical physics by examining the invariance of various components of theories under PE-SE transformations, where PEs (proto-experiences) are precursors of SEs. We found that (i) classical physics is invariant under the PE-SE transformation, (ii) SEs are embedded in spacetime geometry for the structure of spacetime in superposed form, (iii) SEs can move with spatiotemporal coordinates of matter for matter field because both mental and material aspects are always together in the dual-aspect-dual-mode optimal PE-SE framework, and (iv) our specific SE is the result of matching and selection processes and can change with space and time. For example, experiencing redness has neural correlates of V4/V8/VO-red-green neural-net with redness state. When a subject moves, the specific SE redness also moves with the subject’s correlated neural-net. In the current Part II, the SE aspect of consciousness is introduced in orthodox quantum physics by examining its invariance under the PE-SE transformations. We found that the following are invariant under the PE-SE transformations: the Schrödinger equation, current, the Dirac Lagrangian, the Lagrangian for a charged self-interacting scalar field, and the Standard Model (the Lagrangian for the free gauge field and the Lagrangian for the electromagnetic interaction of a charged scalar field (Higgs Mechanism)). In Part III (Vimal, 2010c), the SE aspect of consciousness will be introduced to unify it with fundamental forces in loop quantum gravity and string theory of modern quantum physics. All parts together lead us towards the theory of everything.
Keywords: Theory of everything; proto-experiences (PEs) and subjective experiences; aspect of consciousness; superposition; elementary particles; dual-aspect model; PE-SE transformations; orthodox quantum physics; Schrödinger equation
In physics and geometry, there are two closely related vector spaces, usually three-dimensional but in general of any finite number of dimensions. Position space (also real space or coordinate space) is the set of all position vectors r in space, and has dimensions of length. A position vector defines a point in space. If the position vector of a point particle varies with time it will trace out a path, the trajectory of a particle. Momentum space is the set of all momentum vectors p a physical system can have. The momentum vector of a particle corresponds to its motion, with units of [mass][length][time]⁻¹. Mathematically, the duality between position and momentum is an example of Pontryagin duality. In particular, if a function is given in position space, f(r), then its Fourier transform obtains the function in momentum space, φ(p). Conversely, the inverse transform of a momentum space function is a position space function. These quantities and ideas transcend all of classical and quantum physics, and a physical system can be described using either the positions of the constituent particles, or their momenta; both formulations equivalently provide the same information about the system in consideration. Another quantity is useful to define in the context of waves. The wave vector k (or simply "k-vector") has dimensions of reciprocal length, making it an analogue of angular frequency ω which has dimensions of reciprocal time. The set of all wave vectors is k-space. Usually r is more intuitive and simpler than k, though the converse can also be true, such as in solid-state physics. Quantum mechanics provides two fundamental examples of the duality between position and momentum: the Heisenberg uncertainty principle Δx Δp ≥ ħ/2, stating that position and momentum cannot be simultaneously known to arbitrary precision, and the de Broglie relation p = ħk, which states the momentum and wavevector of a free particle are proportional to each other.[1] In this context, when it is unambiguous, the terms "momentum" and "wavevector" are used interchangeably. However, the de Broglie relation is not true in a crystal.

Position and momentum spaces in classical mechanics

Lagrangian mechanics

Most often in Lagrangian mechanics, the Lagrangian L(q, dq/dt, t) is in configuration space, where q = (q1, q2,..., qn) is an n-tuple of the generalized coordinates. The Euler–Lagrange equations of motion are

  d/dt ( ∂L/∂q̇_i ) = ∂L/∂q_i

(One overdot indicates one time derivative). Introducing the definition of canonical momentum for each generalized coordinate,

  p_i = ∂L/∂q̇_i ,

the Euler–Lagrange equations take the form

  ṗ_i = ∂L/∂q_i .

The Lagrangian can be expressed in momentum space also,[2] L′(p, dp/dt, t), where p = (p1, p2,..., pn) is an n-tuple of the generalized momenta. A Legendre transformation is performed to change the variables in the total differential of the generalized coordinate space Lagrangian;

  dL = Σ_i ( ṗ_i dq_i + p_i dq̇_i ) + (∂L/∂t) dt ,

where the definition of generalized momentum and Euler–Lagrange equations have replaced the partial derivatives of L.
The product rule for differentials[nb 1] allows the exchange of differentials in the generalized coordinates and velocities for the differentials in generalized momenta and their time derivatives, which after substitution simplifies and rearranges to

  d( L − Σ_i (p_i q̇_i + q_i ṗ_i) ) = −Σ_i ( q̇_i dp_i + q_i dṗ_i ) + (∂L/∂t) dt .

Now, the total differential of the momentum space Lagrangian L′ is

  dL′ = Σ_i ( (∂L′/∂p_i) dp_i + (∂L′/∂ṗ_i) dṗ_i ) + (∂L′/∂t) dt ,

so by comparison of differentials of the Lagrangians, the momenta, and their time derivatives, the momentum space Lagrangian L′ and the generalized coordinates derived from L′ are respectively

  L′(p, ṗ, t) = L − Σ_i ( p_i q̇_i + q_i ṗ_i ) ,   q_i = −∂L′/∂ṗ_i  (with q̇_i = −∂L′/∂p_i).

Combining the last two equations gives the momentum space Euler–Lagrange equations

  d/dt ( ∂L′/∂ṗ_i ) = ∂L′/∂p_i .

The advantage of the Legendre transformation is that the relation between the new and old functions and their variables are obtained in the process. Both the coordinate and momentum forms of the equation are equivalent and contain the same information about the dynamics of the system. This form may be more useful when momentum or angular momentum enters the Lagrangian.

Hamiltonian mechanics

In Hamiltonian mechanics, unlike Lagrangian mechanics which uses either all the coordinates or the momenta, the Hamiltonian equations of motion place coordinates and momenta on equal footing. For a system with Hamiltonian H(q, p, t), the equations are

  q̇_i = ∂H/∂p_i ,   ṗ_i = −∂H/∂q_i .

Position and momentum spaces in quantum mechanics

In quantum mechanics, a particle is described by a quantum state. This quantum state can be represented as a superposition (i.e. a linear combination as a weighted sum) of basis states. In principle one is free to choose the set of basis states, as long as they span the space. If one chooses the eigenfunctions of the position operator as a set of basis functions, one speaks of a state as a wave function ψ(r) in position space (our ordinary notion of space in terms of length). The familiar Schrödinger equation in terms of the position r is an example of quantum mechanics in the position representation.[3] By choosing the eigenfunctions of a different operator as a set of basis functions, one can arrive at a number of different representations of the same state. If one picks the eigenfunctions of the momentum operator as a set of basis functions, the resulting wave function φ(k) is said to be the wave function in momentum space.[3] A feature of quantum mechanics is that phase spaces can come in different types: discrete-variable, rotor, and continuous-variable. The table below summarizes some relations involved in the three types of phase spaces.[4]
Table: Comparison and summary of relations between conjugate variables in discrete-variable (DV), rotor (ROT), and continuous-variable (CV) phase spaces (taken from arXiv:1709.04460). Most physically relevant phase spaces consist of combinations of these three. Each phase space consists of position and momentum, whose possible values are taken from a locally compact Abelian group and its dual. A quantum mechanical state can be fully represented in terms of either variables, and the transformation used to go between position and momentum spaces is, in each of the three cases, a variant of the Fourier transform. The table uses bra-ket notation as well as mathematical terminology describing Canonical commutation relations (CCR).
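The Fourier duality between the two representations (made explicit in the next section) is easy to check numerically. The sketch below uses an arbitrarily chosen Gaussian wavepacket, builds the momentum-space amplitude from the position-space one with an FFT, and verifies that the widths satisfy Δx Δk = 1/2, i.e. Δx Δp = ħ/2 for this minimum-uncertainty state.

```python
import numpy as np

# Check the position/momentum Fourier duality for a 1D Gaussian wavepacket (hbar = 1).
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))

sigma = 1.7                                   # arbitrary width
psi_x = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Momentum-space wave function phi(k) via a discretized Fourier transform
phi_k = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi_x))) * dx / np.sqrt(2 * np.pi)

def spread(grid, amplitude):
    prob = np.abs(amplitude) ** 2
    prob /= np.trapz(prob, grid)
    mean = np.trapz(grid * prob, grid)
    return np.sqrt(np.trapz((grid - mean) ** 2 * prob, grid))

dx_width, dk_width = spread(x, psi_x), spread(k, phi_k)
print("Delta x =", round(dx_width, 4))             # ~ sigma
print("Delta k =", round(dk_width, 4))             # ~ 1/(2*sigma)
print("product =", round(dx_width * dk_width, 4))  # ~ 0.5, the minimum-uncertainty value
```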
Relation between space and reciprocal space

Functions and operators in position space

Suppose we have a three-dimensional wave function in position space ψ(r); then we can write this function as a weighted sum of orthogonal basis functions ψ_j(r):

  ψ(r) = Σ_j φ_j ψ_j(r)

or, in the continuous case, as an integral

  ψ(r) = ∫ φ(k) ψ_k(r) d³k .

It is clear that if we specify the set of functions ψ_j(r), say as the set of eigenfunctions of the momentum operator, the function φ(k) holds all the information necessary to reconstruct ψ(r) and is therefore an alternative description for the state ψ. In quantum mechanics, the momentum operator is given by

  p̂ = −iħ ∂/∂r

(see matrix calculus for the denominator notation) with appropriate domain. The eigenfunctions are

  ψ_k(r) = (2π)^(−3/2) e^(ik·r)

and eigenvalues ħk. So

  ψ(r) = (2π)^(−3/2) ∫ φ(k) e^(ik·r) d³k ,

and we see that the momentum representation is related to the position representation by a Fourier transform.[6]

Functions and operators in momentum space

Conversely, a three-dimensional wave function in momentum space φ(k) can be written as a weighted sum of orthogonal basis functions φ_j(k):

  φ(k) = Σ_j ψ_j φ_j(k)

or as an integral:

  φ(k) = ∫ ψ(r) φ_r(k) d³r .

The position operator is given by

  r̂ = iħ ∂/∂p = i ∂/∂k

with eigenfunctions φ_r(k) = (2π)^(−3/2) e^(−ik·r) and eigenvalues r. So a similar decomposition of φ(k) can be made in terms of the eigenfunctions of this operator, which turns out to be the inverse Fourier transform:[6]

  φ(k) = (2π)^(−3/2) ∫ ψ(r) e^(−ik·r) d³r .

Unitary equivalence between position and momentum operator

The r and p operators are unitarily equivalent, with the unitary operator being given explicitly by the Fourier transform. Thus they have the same spectrum. In physical language, p acting on momentum space wave functions is the same as r acting on position space wave functions (under the image of the Fourier transform).

Reciprocal space and crystals

For an electron (or other particle) in a crystal, its value of k relates almost always to its crystal momentum, not its normal momentum. Therefore, k and p are not simply proportional but play different roles. See k·p perturbation theory for an example. Crystal momentum is like a wave envelope that describes how the wave varies from one unit cell to the next, but does not give any information about how the wave varies within each unit cell. When k relates to crystal momentum instead of true momentum, the concept of k-space is still meaningful and extremely useful, but it differs in several ways from the non-crystal k-space discussed above. For example, in a crystal's k-space, there is an infinite set of points called the reciprocal lattice which are "equivalent" to k = 0 (this is analogous to aliasing). Likewise, the "first Brillouin zone" is a finite volume of k-space, such that every possible k is "equivalent" to exactly one point in this region. For more details see reciprocal lattice.

References

2. ^ Hand, Louis N; Finch, Janet D (1998). Analytical Mechanics. ISBN 978-0-521-57572-0. p.190
3. ^ a b Peleg, Y.; Pnini, R.; Zaarur, E.; Hecht, E. (2010). Quantum Mechanics (Schaum's Outline Series) (2nd ed.). McGraw Hill. ISBN 978-0-07-162358-2.
4. ^ Albert, Victor V; Pascazio, Saverio; Devoret, Michel H (2017). "General phase spaces: from discrete variables to rotor and continuum limits". arXiv:1709.04460 [quant-ph].
5. ^ Abers, E. (2004). Quantum Mechanics. Addison Wesley, Prentice Hall Inc. ISBN 978-0-13-146100-0.
I have just started learning density matrix and quantum master equations, and I am given a problem set that asks to find the solution to the Lindblad equation with $H$, $L_+$, $L_-$, $L_z$, and $\rho(0)$ given in matrix form. The thing is, I know how to solve a system of linear differential equations such as the Schrödinger equation to find a vector solution, but not a matrix equation with a matrix solution. Can anybody give me a hint or a link to the steps, because I think I am too dumb to find any, having searched the web and the library for about half the day.

Comments:
• The master equation is a system of linear, ordinary differential equations (ODEs) for the components of the density matrix (write them out explicitly!). If it helps, you could even collect these density matrix components into a vector... – Mark Mitchison Mar 20 '18 at 15:27
• physics.stackexchange.com/q/115066 Im guessing what you mean by collect components into a vector is in the answer provided in the link above. Thanks anyways – Jin Mar 20 '18 at 15:34

Answer:
We always solve the Lindblad form of linear differential equations by numerical methods, such as the fourth-order Runge-Kutta method. If you want the steady state solution by an analytic method, you can read this paper: PHYSICAL REVIEW A 92, 022116 (2015). I don't know if I solved your question.
• $\uparrow$ Link? – Qmechanic May 18 '18 at 11:16
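Spelling out the hint in the comments: the Lindblad equation is linear in $\rho$, so flattening $\rho$ into a vector turns it into an ordinary linear system that a single matrix exponential (or any ODE routine) solves. Below is a minimal sketch for a two-level system with made-up operators (a $\sigma_z$ Hamiltonian and one decay jump operator standing in for the problem's unspecified $H$, $L_\pm$, $L_z$); the actual matrices from the problem set would simply replace them.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-level system (hbar = 1); replace these made-up matrices with the
# H, L_+, L_-, L_z and rho(0) from the problem set.
sm = np.array([[0, 0], [1, 0]], dtype=complex)      # sigma_minus, a decay jump operator
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sz
Ls = [np.sqrt(0.2) * sm]                             # jump operators with their rates absorbed
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)     # start in the excited state
I = np.eye(2)

# Row-major vectorization: vec(A rho B) = (A kron B^T) vec(rho).  With it, the Lindblad equation
# d rho/dt = -i[H, rho] + sum_k ( L rho L^+ - (1/2){L^+ L, rho} )
# becomes d vec(rho)/dt = LL vec(rho) for a constant superoperator matrix LL.
LL = -1j * (np.kron(H, I) - np.kron(I, H.T))
for L in Ls:
    LdL = L.conj().T @ L
    LL += np.kron(L, L.conj()) - 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T))

for t in [0.0, 5.0, 10.0, 20.0]:
    rho_t = (expm(LL * t) @ rho0.reshape(-1)).reshape(2, 2)
    print(f"t = {t:4.1f}   excited population = {rho_t[0, 0].real:.4f}   trace = {np.trace(rho_t).real:.4f}")
```

The excited-state population decays toward the ground state while the trace stays 1, which is a quick consistency check that the superoperator was assembled correctly; the steady state is the eigenvector of LL with eigenvalue zero.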
Do all quantum trails inevitably lead to Everett? I’ve been thinking lately about quantum physics, a topic that seems to attract all sorts of crazy speculation and intense controversy, which seems inevitable. Quantum mechanics challenges our deepest held, most cherished beliefs about how reality works. If you study the quantum world and you don’t come away deeply unsettled, then you simply haven’t properly engaged with it. (I originally wrote “understood” in the previous sentence instead of “engaged”, but the ghost of Richard Feynman reminded me that if you think you understand quantum mechanics, you don’t understand quantum mechanics.) At the heart of the issue are facts such as that quantum particles operate as waves until someone “looks” at them, or more precisely, “measures” them, then they instantly begin behaving like particles with definite positions. There are other quantum properties, such as spin, which show similar dualities. Quantum objects in their pre-measurement states are referred to as being in a superposition. That superposition appears to instantly disappear when the measurement happens, with the object “choosing” a particular path, position, or state. How do we know that the quantum objects are in this superposition before we look at them? Because in their superposition states, the spread out parts interfere with each other. This is evident in the famous double slit experiment, where single particles, shot through the slits one at a time, interfere with themselves to produce the interference pattern that waves normally produce. If you’re not familiar with this experiment and its crazy implications, check out this video: So, what’s going on here? What happens when the superposition disappears? The mathematics of quantum theory are reportedly rock solid. From a straight calculation standpoint, physicists know what to do. Which leads many of them to decry any attempt to further explain what’s happening. The phrase, “shut up and calculate,” is often exclaimed to pesky students who want to understand what is happening. This seems to be the oldest and most widely accepted attitude toward quantum mechanics in physics. From what I understand, the original Copenhagen Interpretation was very much an instrumental view of quantum physics. It decried any attempt to explore beyond the observations and mathematics as hopeless speculation. (I say “original” because there are a plethora of views under the Copenhagen label, and many of them make ontological assertions that the original formulation seemed to avoid, such as insisting that there is no other reality than what is described.) Under this view, the wave of the quantum object evolves under the wave function, a mathematical construct. When a measurement is attempted, the wave function “collapses”, which is just a fancy way of saying it disappears. The superposition becomes a definite state. What exactly causes the collapse? What does “measurement” or “observation” mean in this context? It isn’t interaction with just another quantum object. Molecules have been held in quantum superposition, including, as a recent experiment demonstrates, ones with thousands of atoms. For a molecule to hold together, chemical bonds have to form, and for the individual atoms to hold together, the components have to exchange bosons (photons, gluons, etc) with each other. All this happens and apparently fails to cause a collapse in otherwise isolated systems.
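The interference the post describes is easy to reproduce numerically. The sketch below uses arbitrary slit geometry and wavelength in the far-field (Fraunhofer) approximation, and compares the two-slit pattern |ψ1 + ψ2|² with the incoherent sum |ψ1|² + |ψ2|² that you would expect if each particle had "chosen" a slit; the fringes exist only in the coherent case.

```python
import numpy as np

# Far-field two-slit pattern with illustrative numbers (not tied to any real experiment).
wavelength = 500e-9          # 500 nm
d = 20e-6                    # slit separation
a = 4e-6                     # slit width
L = 1.0                      # distance to the screen
x = np.linspace(-0.05, 0.05, 2001)          # positions on the screen (metres)

k = 2 * np.pi / wavelength
theta = x / L                               # small-angle approximation
envelope = np.sinc(a * theta / wavelength)  # single-slit envelope; np.sinc(u) = sin(pi u)/(pi u)

psi1 = envelope * np.exp(1j * k * d * theta / 2)    # amplitude from slit 1
psi2 = envelope * np.exp(-1j * k * d * theta / 2)   # amplitude from slit 2

coherent = np.abs(psi1 + psi2) ** 2                   # superposition: interference fringes
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2    # "which-slit known": no fringes

print(f"expected fringe spacing ~ wavelength*L/d = {wavelength * L / d:.2e} m")
print("coherent pattern   min/max:", round(coherent.min(), 6), "/", round(coherent.max(), 3))
print("incoherent pattern min/max:", round(incoherent.min(), 3), "/", round(incoherent.max(), 3))
```

The coherent intensity drops essentially to zero between fringes while the incoherent sum never does, which is the observable difference between an intact superposition and one that has effectively collapsed (or decohered) into a definite path.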
One proposal thrown out decades ago, which has long been a favorite of New Age spiritualists and similarly minded people, is that maybe consciousness causes the collapse.  In other words, maybe it doesn’t happen until we look at it.  However, most physicists don’t give this notion much weight.  And the difficulties of engineering a quantum computer, which require that a superposition be maintained to get their processing benefits, seems to show (to the great annoyance of engineers) that systems with no interaction with consciousness still experience collapse. What appears to cause the collapse is interaction with the environment.  But what exactly is “the environment”?  For an atom in a molecule, the environment would be the rest of the molecule, but an isolated molecule seems capable of maintaining its superposition.  How complex or vast does the interacting system need to be to cause the collapse?  The Copenhagen Interpretation merely says a macroscopic object, such as a measuring apparatus, but that’s an imprecise term.  At what point do we leave the microscopic realm and enter the classical macroscopic realm?  Experiments that succeed at isolating ever larger macromolecules seem able to preserve the quantum superposition. If we move beyond the Copenhagen Interpretation, we encounter propositions that maybe the collapse doesn’t really happen.  The oldest of these is the deBroglie-Bohm Interpretation.  In it, there is always a particle that is guided by a pilot wave.  The pilot wave appears to disappear on measurement, but what’s really happening is that the wave decoheres, loses its coherence into the environment, causing the particle to behave like a freestanding particle. The problem is that this interpretation is explicitly non-local in that destroying any part of the wave causes the whole thing to cease any effect on the particle.  Non-locality, essentially action at a distance, is considered anathema in physics.  (Although it’s often asserted that quantum entanglement makes it unavoidable.) The most controversial proposition is that maybe the collapse never happens and that the superposition continues, spreading to other systems.  The elegance of this interpretation is that it essentially allows the system to continue evolving according to the Schrödinger equation, the central equation in the mathematics of quantum mechanics.  From an Occam’s razor standpoint, this looks promising. Well, except for a pesky detail.  We don’t observe the surrounding environment going into a superposition.  After a measurement, the measuring apparatus and lab setup seem just as singular as they always have.  But this is sloppy thinking.  Under this proposition, the measuring apparatus and lab have gone into superposition.  We don’t observe it because we ourselves have gone into superposition. In other words, there’s a version of the measuring apparatus that measures the particle going one way, and a version that measures it going the other way.  There’s a version of the scientist that sees the measurement one way, and another version of the scientist that sees it the other way.  When they call their colleague to tell them about the results, the colleague goes into superposition.  When they publish their results, the journal goes into superposition.  When we read the paper, we go into superposition.  The superposition spreads ever farther out into spacetime. We don’t see interference between the branches of superpositions because the waves have decohered, lost their phase with each other.  
Brian Greene in The Hidden Reality points out that it may be possible in principle to measure some remnant interference from the decohered waves, but it would be extremely difficult. Another physicist compared it to trying to measure the effects of Jupiter’s gravity on a satellite orbiting the Earth: possible in principle but beyond the precision of our current instruments. Until that becomes possible, we have to consider each path as its own separate causal framework. Each quantum event expands the overall wave function of the universe, making each one its own separate branch of causality, in essence, its own separate universe or world, which is why this proposition is generally known as the Many Worlds Interpretation. Which interpretation is reality? Obviously there’s a lot more of them than I mentioned here, so this post is unavoidably narrow in its consideration. To me, the (instrumental) Copenhagen Interpretation has the benefit of being epistemically humble. Years ago, I was attracted to the deBroglie-Bohm Interpretation, but it has a lot of problems and is not well regarded by most physicists. The Many Worlds Interpretation seems absurd, but we need to remember that the interpretation itself isn’t so much absurd, but its implications. Criticizing the interpretation because of those implications, as this Quanta Magazine piece does, seems unproductive, akin to criticizing general relativity because we don’t like the relativity of simultaneity, or evolution because we don’t like what it says about humanity’s place in nature. With every experiment that increases the maximally observed size of quantum objects, the more likely it seems to me that the whole universe is essentially quantum, and the more inevitable this interpretation seems. Now, it may be possible that Hugh Everett III, the originator of this interpretation, was right that the wave function never collapses, but that some other factor prevents the unseen parts of the post-measurement wave from actually being real. Referred to as the unreal version of the interpretation, this seems to be the position of a lot of physicists. Since we have no present way of testing the proposition as Brian Greene suggested, we can’t know. From a scientific perspective then, it seems like the most responsible position is agnosticism. But from an emotional perspective, I have to admit that the elegance of spreading superpositions is appealing to me, even if I’m very aware that there’s no way to test the implications. What do you think? Am I missing anything? Are there actual physics problems with the Many Worlds Interpretation that should disqualify it? Or other interpretations that we should be considering? This entry was posted in Science. 55 Responses to Do all quantum trails inevitably lead to Everett? 1. Matthew Ritchie says: There is no such thing as objectivity. Not all physicist dismiss some interaction with consciousness. Some very prominent physicists at least think its a plausible possibility. I love how folks want to dismiss any quantum connections to consciousnesses as mysticism. One has to wonder what would ever be enough evidence to at least get those who don’t want to give credence to at least say its plausible. They so easily accept Hawking’s many worlds and parallel universe hypotheses even though thus far no real method to test them exists either. But suggests that perhaps consciousness is more than the body and your a mystic. I like Penrose’s Biocentrism ideas.
But I like others to include perhaps its an illusion or we live in a real matrix. But I certainly don’t consider myself a mystic. Liked by 1 person • john zande says: Hard not to arrive at that conclusion. If consciousness is a factor, then surely it lends support to a simulation. How, though, to test it? Liked by 1 person • Matthew, I appreciate your comment. What would you say is the evidence for consciousness causing the collapse? • Matthew Ritchie says: Well Im not as learned as you so I would say what is the evidence for it not causing the collapse? Testable verifiable evidence? Just like several hypotheses in quantum Physics testable, repeatable and verifiable evidence is often out of reach. I’m not saying anything is absolutely true but to dismiss a hypothesis out of hand when your preferred solution is just as untestable and unverifiable is not objective and not fair. In the end all of it might be wrong. If consciousness is merely something that fades after the death of the organism so be it, nothing we can do its natural law in that case. But until we know, if we can know, we should at least aknowledge some very smart Physicist do indeed think consciousness may play a role. Roger Penrose is no crazy person nor is Stuart Hammerhoff an uneducated loon. Those are just two people who have a wide variety of hypotheses that say its plausible that consciousness interacts with exotic quantum particles. Many point out the double slit Experiment among other things as an example of what might be. Nobody knows what is as of yet but to call legitimate scientists mystics for saying maybe is just unfair. In the end you and scientists like you might indeed be right but until its proven please give all legitimate scientists the same respect you gave Hawking when he proposed String Theory and all the craziness, parallel worlds, many copies of me on parallel worlds, and all the other things I watched on The Scifi Channel, that come with it. Now some scientists are saying consciousness is an illusion and that’s funny really. When you cant solve it say it doesn’t exist solves the problem only it doesn’t because it does exist only its nature is a mystery 2. Mark Titus says: I think this is a very fine presentation/overview of quantum mechanics and its puzzles. Snappy title to the essay too! Liked by 2 people 3. paultorek says: To the very limited extent that I understand the mathematics of decoherence, it does seem to make Everett the most natural interpretation. Why should orthogonal states just vanish when their effect on us diminishes? “Us” meaning the states of observers whose device registered a particle going through the left slit, for example, and “orthogonal ” meaning approximately orthogonal, to within some rounding error. The fact that decoherence is in principle a smooth process, albeit a fast one, takes a lot of the sting out of the Many Worlds label. It’s kind of a misnomer. It would be equally fair to say there’s one world in Everett, but many superposed states that have extremely weak interactions. A good resource is the wiki article on decoherence. Another is David Wallace, The Emergent Multiverse . Liked by 1 person • Thanks for the references. I agree on the wiki article. I’ll check out the Wallace one. Good point about the label. The main reason I described MWI the way I did was to downplay the new universes thing. Dewitt reportedly used it as a selling tool, but I think it makes too many people dismiss it as outlandish without understanding what’s actually being proposed. 4. 
Matthew Ritchie says: Nobody knows the source or nature of consciousness. There is evidence you remain conscious after the heart stops and blood flow to the brain ceases. For how long is still being examined. Previously this was not thought possible. Now some adjust there position saying activity continues till clinical brain death. No one as of yet can provide evidence consciousness is not affecting quantum particles or the double slit Experiment because nobody knows the nature, origin, components or make up of consciousness. Hell some just give up altogether and say its not real anyway, its an illusion. So all human beings are, what they have acomplished over millions of years of evolution is an illusion. Anyone who matter of factly claims they can prove consciousnesses is not affecting the quantum relm or vis versa know there wrong. Nobody even knows what consciousness is composed of let alone its origins so they can’t say for sure one way or another. They can dismiss it as woo or mysticism, they can belittle those who at least say maybe but, just like those who subjectively hope consciousness doesn’t die, they cant prove anything one way or the other. I wouldn’t be so harsh if people disparage brilliant scientists like Penrose and others by calling it mysticism. No better way to disparage a scientist than to call his or her hypothesis mysticism. Nobody called Hawking a mystic when ge hypothesized String Theory which is a parallel worlds theory with absolutely no direct evidence of it being true. Honestly parallel universes with my double in them sounds pretty darn mystical to me. 5. Matthew Ritchie says: Don’t confuse the scientific method with the actual scientists. Scientists are people, human beings, and like all human beings they are almost incapable of objectivity on there own. If you can pick it up, put it in a beaker, and test it using the Scientific Method thats objective. Supposedly if the math works that is a good sign it could be true but even if tte math works it still can be wrong. If you can’t pick it up and test it it could be wrong. Quantum Physics reaches out into a largely untestable area of science. In fact many well known scientists ponder aloud that maybe we have reached or soon will reach all we are capable of knowing leaving infinite amounts of questions unanswered and unknowable. • Hi Matthew, “…I would say what is the evidence for it not causing the collapse? Testable verifiable evidence? ” I alluded to some in the post: the difficulty in constructing a quantum computer. Quantum computing’s unique value is being able to process possible paths in parallel, which requires maintaining a superposition as long as possible. However, long before any conscious entity becomes aware of what’s happening, the superposition decoheres. This is a serious challenge for QC. If it could be overcome simply by keeping conscious systems from seeing it, it likely would have been solved decades ago. As it is, many QC processors have to operate at near 0 Kelvin to minimize interaction with the environment and even that only keeps the qubit circuits in superposition for a very brief time. “Nobody knows the source or nature of consciousness.” I think neuroscience is making steady progress in understanding it. (See the posts in my Mind and AI category for why.) Of course, many people don’t like what’s being found, so the assertion that science is utterly helpless in this area remains a popular one. 
"Don't confuse the scientific method with the actual scientists." A crucial part of scientific methods (there isn't just one) is guarding against human bias. It's why results must be repeatable, transparent, and subject to peer review. In my experience, the ones that pass this test don't affirm expansive conceptions of consciousness. But as you note, there is no unique evidence for any one interpretation of quantum physics. It's why I said that the responsible position is agnosticism on them. For now.

6. s7hummel says: Maybe a little beside the point… Please forgive me. As someone who could not even bother with elementary school, and for several years has not been able to master English… he claims that scientists do not understand the basic processes of the universe. Well, it can be said, it's just a stupid Pole. But I will not be giving hundreds of examples of scientific indolence. Only one. Just what to think of the state of the scientific mind, when one of the most prominent minds carries out such a thought experiment… whether it was just a joke or just a word of despair: Throw a book into a black hole. The book carries information. Perhaps that information is about physics, perhaps that information is the plot of a romance novel – it could be any kind of information. But as far as anyone knows, the outgoing Hawking radiation is the same no matter what went into the black hole. The information is apparently lost – where did it go? Do we see one of the greatest idiocies of quantum physics? Do we see how even beautiful minds can be stupid? Or maybe a stupid Pole is just dumber than he would seem? Liked by 1 person

• Stan, From what I understand, information lost to a black hole remains a problem that hasn't been solved. I've read some speculation that maybe it's smeared across the event horizon as a sort of hologram, which sounds like it could conceivably affect Hawking radiation, but it all sounds highly speculative. One of the problems with physics today is that too much of the theoretical work happens far outside of testable conditions. On the one hand, this should be fine since we never know when such exploration might turn up something testable. But until it does, we have to be stringent in remembering that it's informed speculation. Liked by 1 person

7. s7hummel says: Only this is not a problem with the information that the object falling into a black hole carries. This applies to the information that the object carries about itself. It is known that information is the basis of the quantum universe. 1. Throw two stones into a black hole. On one we paint the US flag and on the second the flag of Poland. Does such information mean something? 2. Now we will fire two cannonballs towards the black hole. A stone ball from Poland and a ball of uranium from the US. Is this the sense of information for quantum physics? Liked by 1 person

8. s7hummel says: If I didn't believe in your wonderful reasoning… after all, I read your wise statements. If something is to blame, it is my tragic English. Besides, the scientists themselves, although they are so wonderful in quantum physics, admit that they absolutely have no idea why this works. So I disappear… but not on Twitter. Liked by 1 person

9. Wyrd Smythe says: My problem with MWI is the same one many have: where do all those new realities come from? What does it suggest about matter and energy? Tegmarkians can talk about how the square root of 4 is both +2 and -2, and no one worries about where the extra answer came from.
But I don’t believe we live in a Tegmarkian universe. There is also, to me, an issue of reality explosion: Wear a pair of polarizing sunglasses, and each photon that hits them has a chance of passing through or not. So each photon seems to be creating new realities. Billions and billions of new realities. Every instant. MWI fans have said this doesn’t happen, but I’m not clear on why not. I have played with the idea that what happens is that the standing wave of the universe becomes more complex with each possible branch such that all possible paths that could have been taken are part of that wave. But there’s only one actual reality that emerges from that wave. I’ve never found the waveform collapse all that mysterious. A particle in flight is a vibration in the relevant particle field, the energy of that quanta is spread out in the wave. But for that energy to interact with, say, an electron in the wall it hits, that single spread out quanta “drains” into the contact point. The mystery, if I understand it, has to do with what “selects” that contact point, and how does the energy of the wave “drain” into that point? We have no maths for that. I suspect the contact point gets selected per the same mechanism that “selects” which atom of a radioactive sample decays next. Or as how the first bird of a flock decides to take to the air. Maybe it is literally random (which it seems to be). I sure wish someone would discover something new. QFT and GR have been at loggerheads far too long. Liked by 1 person • I have to admit that I wonder about the energy aspect of this as well. If every part of the wave becomes a full particle in its own branch of the superposition, then how is the energy of that wave, and every other wave, not effectively magnified? My understanding is that we still don’t understand at a fundamental level how mass is generated. (The Higgs supposedly only explains a subset of it.) If the non-visible parts of the post-measurement wave aren’t real, then maybe that has something to do with it. What’s interesting about the explosion of superpositions, is virtually all quantum events average out until the macroscopic deterministic world emerges. To me, that implies that most of the “universes” being generated are virtually identical. (There would have been far more divergence in the early instances of the big bang when quantum events generated patterns that later grew into voids and galactic superclusters.) Today, it seems like it would only be the rare case of quantum indeterminancy “bleeding” through that would lead to divergences. It might be that most of the exploding superpositions end up converging back to one reality, or only a few of them. (I have no idea if the mathematics lend any credence whatsoever to just conjecture.) And I’ve read some variances of the interpretation that, instead of proliferating universes, it’s really just interacting ones. That actually isn’t my understanding of what happens. As I understand it, the entire wave instantly disappears, replaced by the particle, even if the wave has been spread around and fragmented over vast distances, that there’s no timeline for it to drain. (Which admittedly also makes “collapse” a questionable word for the phenomenon.) That said, decoherence isn’t supposed to be instantaneous either, just very fast, so who knows. Totally agreed that it would be good to see progress somewhere. 
I remember many physicists hoping the LHC would provide something, anything, unexpected so they’d have something to work with, but other than failing to confirm supersymmetry, most of what they’ve gotten just seemed to reaffirm the Standard Model. Liked by 1 person • Wyrd Smythe says: Yeah, the mass of protons and neutrons, for example, comes mainly from the energy of the quark and gluon interactions, which means most of the mass from matter isn’t due to the Higgs. Which is why I find it easier to think about in terms of energy, although I usually see mass and energy as two faces of the same thing. “To me, that implies that most of the “universes” being generated are virtually identical.” Which I think is how MWI fans respond to the question about sunglasses and photons. My question in return is how identical is “virtually” identical? Remember Bradbury’s famous short story, The Sound of Thunder? Do worldlines converge and merge, or do even quantum differences ultimately diverge and result in separate realities? A lot of MWI fans think Occam and parsimony support their position, but I (so far) see it the opposite. MWI doesn’t sound like the simple explanation, and the explosion problem defies parsimony. But then I’m not sure I truly understand MWI, and I’ve gotten the impression a lot of its fans don’t really understand it, either. Plus, there seem to be multiple versions of the theory since Everett. Greg Egan has a short story, The Infinite Assassin, in his collection, Axiomatic. It’s about an illegal drug that allows users to interact with parallel universes, which turns out to be a Very Bad Thing. What I really liked about the story was the sense of continuum Egan gives to parallel worlds. One can’t help but wonder what makes them distinct. Sean Carroll gave a talk about MWI (which I found unconvincing), and he had an experiment set up remotely that did a photon-half-silver-mirror thing with two detectors. Through a phone app he was able to trigger the experiment and get a (random) result which he used to determine if he should jump to the left or to the right. (The right, in this case, IIRC.) The claim was that this generated two realities accommodating his jumping both ways. Which generated two different audiences (and sets of video viewers) who remember him jumping both ways. Which led to this comment where I recall him jumping right. Presumably the alternate me remembers it differently. But I keep wondering about those sunglasses and all the quantum interactions happening all the time. I’ve just never heard anything from MWI that gets me past this key objection. Yes, agreed. (That’s why I quoted “drains” — best word I could think of but hardly adequate.) I think we’re on the same page here, I’m just trying to imagine an ontology that makes sense of “waveform collapse.” I’ve been thinking about this a bit as I try to wrap my head around some of the strange variations of the two-slit thing. (Have you see the three-slit experiment? Mind-blowing!) In a single photon event, the laser emits a “photon” with no location but a wave (with momentum) that expands from the laser into the surrounding environment. It’s a single quanta of energy causing a vibration in the EM field. Now that energy has to go somewhere, and what we see happening is that waveform somehow interacting with some electron in some atom such that the electron is raised to a new energy level. At that point, the photon does have a location (and presumably we can no longer talk about its momentum). 
That interaction requires the full energy of the quanta, so the energy in the field “goes” (or “drains” or some better word) into that interaction. But this is just me pondering the “waveform collapse” issue and WAG-ing at an ontology. “I remember many physicists hoping the LHC would provide something, anything,” Yeah, and now it’s shut down for two years for an upgrade. You’d think not finding SUSY at all would take the wind out of certain sails, but they just keep redefining the target. Part of the problem is that String Theory seems to need it, so no SUSY threatens ST. There’s also that chart you’ve probably seen showing how the three forces unify at very high energies? Those curves intersect at the same point only if SUSY is true. Without SUSY, they don’t. So it’s a dream that’s hard to kill. There was some hope of seeing something new in very esoteric sectors involving (IIRC) weak decay. I can’t recall what it was exactly, and no one is jumping up and down, so whatever they saw may have not survived more analysis. They were seeing bumps in both CMS and ATLAS, I think, and combining the two bumps gave them a nice sigma, but the data weren’t compatible so combining them didn’t really say anything. Or something like that. Merry Christmas! Liked by 1 person • “My question in return is how identical is “virtually” identical?” My conception is that normal events, such as all the deterministic events we see in nature where the quantum events average out, don’t create deviations. It’s only when we tie a macroscopic event to a specific quantum outcome, that a notable divergence happens. As you note, even a minor “meaningless” macroscopic event (such as which way Carroll jumped) might eventually butterfly into major changes. Of course, we can’t rule out the possibility that quantum indeterminacy doesn’t “bleed” into the macroscopic world outside the precision of our instruments and butterfly all on its own, so the idea of similar universes may not be tenable. There are definitely lots of versions in the Everettian family of interpretations. One I recently heard about on the Rationally Speaking podcast was relational quantum mechanics, which posits that whether a wave has decohered is relative to an observer. In other words, like the relativity of simultaneity in Einstein’s theories, this holds that where you are in the sequence of events determines when you see the collapse. Schrodinger’s cat sees the collapse as soon as the detection device is triggered, but Schrodinger himself doesn’t see it until he opens the box. However, the relational interpretation is reportedly agnostic about the reality of the other outcomes. (It doesn’t seem agnostic to me, but I probably don’t grasp the full idea.) I need to look up that Egan story. It sounds interesting. Ah, ok, I missed the quotes on “drain.” Thanks for the description of the photon. Part of what I find interesting about this is that the electrons are presumably constantly exchanging photons with each other and the nucleus, but despite that exhibit quantum waveness to those of us outside the relationship, which makes me think of the relational interpretation again. I don’t think I knew that uniting all three forces required SUSY. Interesting. I know the weak and electomagnetic one were already shown to be the same. (Which strikes me as an odd pair.) All in all, I think I’m happy I’m not a physicist right now. Merry Christmas! 
Liked by 1 person • Wyrd Smythe says: “It’s only when we tie a macroscopic event to a specific quantum outcome, that a notable divergence happens.” That matches what I’ve heard from MWI fans, but it seems to suffer the same micro/macro issues as many quantum things do. What is a “notable divergence” and what happens? Reality doesn’t diverge at all (why not?), or the diverged lines merge into one (again, why?). That Egan story is good at pointing out how, if we take MWI at face value, our own reality is a fuzzy continuum of indistinguishable nearby realities. At what point am “I” no longer really me? Chaos theory suggests (to me) that even minute differences may result in large changes down the road. What if, butterfly fashion, a photon that did pass through my sunglasses accounts for some minute change that ultimately destroys Saturn? I’ve long wanted to sit down with a working theoretical physisict who’s really into, has really studied, MWI, because I’d like to understand how people like Sean Carroll identify MWI as their preferred interpretation. Some even say it’s the mostly glaringly obvious interpretation! Doesn’t part of that thinking also come up in Copenhagen? The idea that the cat isn’t superposed to itself, but is to the scientist who hasn’t opened the box. Likewise, the science writer standing outside the lab is superposed until the scientist informs them of the result. And millions of readers are superposed until they read the writer’s article. (And everyone in Andromeda remains superposed probably forever.) I’m not sure I believe in the idea of macro objects being superposed. What does it mean to suggest I’m superposed? Can experiments demonstrate it? Or is it just that I lack knowledge? Ugh. We really need some advances in HE physics. We’re just grasping in the dark here. I think at least some of that is accounted for in the difference between virtual photons and actual photons. I’ve seen some physics videos recently emphasizing the difference between them and how you can’t treat virtual photons as real — they’re almost an accounting device, although obviously something physical is going on. Lamb shift and so forth. Same here! Electro-weak theory. (And the weak force is the one many books hand-wave on that “has something to do with radioactive decay” … yeah, and making the sun work, too!) It sure made it seem like unification was a thing though, didn’t it. If two things as seemingly different as EM and weak force are unified, why not the strong force? Again, we need more information! We don’t even really know if gravity is a force! Liked by 1 person • “At what point am “I” no longer really me?” Michael and I discussed this as well somewhere else on this thread. It seems like reality likes ruining our clean little categories, such as what is life or non-life (see prions or viriods), what is the border between species (some members of species A can mate with species B, but others can’t), what is computation, or what is a planet. It won’t surprise me too much if it scrambles our ideas of the self. I told you to stop playing with those glasses Wyrd! Now look at what you’ve done. Who’s going to clean up this mess? We’ve got Saturn all over everything! 🙂 I recently went back and read Sean Carroll’s blog post on the MWI. I’m not sure his instincts on explaining it are the best. He tends to emphasize the multiple universes thing, which I think is a mistake. Paul Torek above recommended David Wallace’s ‘The Emergent Multiverse’, which I’m thinking about picking up. 
It looks pretty good in the preview. My only pause is it’s pricey. Of course I’ve often spent more on neuroscience books. I just have to decide if I’m interested enough and willing to invest the work it would require. I can see why people say the MWI is the most straightforward interpretation though. It does explain a lot. I see it as a candidate for reality. The only question is whether the implications of it in any way falsify it. But as I commented on Carroll’s post, that’s the problem with these interpretations. None of them are uniquely testable. “— they’re almost an accounting device, although obviously something physical is going on. ” Didn’t quantum physics start with Max Planck introducing a quanta purely as an accounting device? There was a similar disclaimer on Copernicus’ book. It seems like a lot of physics starts with someone saying, “Don’t worry, this is only for calculating convenience. It’s not it’s real or anything.” “Again, we need more information! We don’t even really know if gravity is a force!” Totally agreed on needing more information. Although wouldn’t you say we know gravity is a force? Or did you mean if it’s a force like the others in the Standard Model, with bosons (gravitons) and the like? Liked by 1 person • Wyrd Smythe says: “It won’t surprise me too much if it scrambles our ideas of the self.” Yeah. The more I learn and think about “the self” the more complex and puzzling it seems. “[MWI] does explain a lot.” That I do realize. I’m confounded by the whole multiple universes thing; that’s pretty much the entirety craw stick. I vaguely remember reading that Sean Carroll post. Think I’ll go back and re-read it this evening. The Wallace book sounds kinda interesting… once I read about it. The title put me off, because while I’m open-minded-but-skeptical on MWI, I’m disbelieving (and disinterested) in multiverse theories. I found an online review of the Wallace book that sounds like another read for this evening. “Didn’t quantum physics start with Max Planck introducing a quanta purely as an accounting device?” Ha, yes, good point! “Or did you mean if [gravity is] a force like the others in the Standard Model, with bosons (gravitons) and the like?” Exactly. I want GR to be essentially correct with some minor correction to accommodate quantum, and I want QFT to turn out to be essentially epicyles — a theory that matches our instruments but is seriously wrong in some key regard. We know matter/energy is quantized, but the jury is out on time/space. I want them to be smooth (providing yet another duality to reality). And that gravity is due to warped spacetime and there is no such thing as a graviton. My spacetime wishlist. 😀 Liked by 1 person • Wow, that review is 19 pages long. I thought I might sneak a quick read before responding, but I think I’ll just add it to my queue too. Thanks for linking to it! On GR and QM, I don’t really have preferences on which one wins (assuming they both don’t eventually have to be heavily modified). If spacetime does appear to be smooth, I wonder if we could ever be sure it wasn’t quantized at a size below the level of precision of whatever we were using to measure it. And an infinitely divisible spacetime seems like it would come with its own potential multiverses. If the space between elementary particles is infinitely divisible, it allows patterns to exist there below our notice, such as entire micro-universes. 
And entire other universes could have been born, existed, and died in the Planck time at the beginning of the big bang. For that matter, an infinity of universes might have existed during the time you read this reply. (Don’t hit me.) Liked by 1 person • Wyrd Smythe says: I gave up (for now) on that review once I got to the discussion section. They were a little too glowing in their assessment for me to trust, and there was already a bit of a “yelling at the screen” thing going on here on the material they mentioned to that point. The book does sound interesting, though. I found myself wondering if Wallace explains some of the stuff that was making me yell. Continuous spacetime does seem to have the same weird issues the real numbers have. Maybe matter/energy being quantized saves the day? While space might be infinitely small, matter isn’t, so no micro-galaxies hiding in the dust motes. Quantum limits on energy might also affect the minimum time it takes anything to happen (like c limits causality). The question might be whether we can trust scale. Atoms have sizes due to their properties, so maybe certain things can only happen on certain scales. (And we use atomic vibrations to define the second.) Or maybe they’ll find a graviton (or a chronon), and that will end the matter. But until then… well, just say that I look at GR and think, yes, that makes sense, but look at QFT and think, wait, what?! Obviously the universe is under no obligation to fulfill my sense of how it ought to behave (oh, if only). 🙂 Liked by 1 person • “While space might be infinitely small, matter isn’t, so no micro-galaxies hiding in the dust motes.” I actually wasn’t thinking the micro-universe patterns would be made of any matter/energy as we understand it, but something else, something we never see because it exists too far below the scales we can detect. Call it Mini-Me matter which could have it own smaller Mini-me quanta sizes. Of course, between Mini-Me matter might be Mini-mini-Me matter, and so forth and so on. Turtles all the way down. Or if in fact there is only the matter/energy we’re familiar with, that means an infinite emptiness between every occurrence of it, which would itself be profound. Liked by 1 person • Wyrd Smythe says: Yes, as profound as the next real number after zero! Talk about macro objects in superposition… I’m totally superposed on the real numbers being, in fact, real or, as sure seems sometimes, a fabrication of our imagination. The thing is: how real is a circle, its diameter, and their ratio? If they are real, so is pi. Liked by 1 person 10. Callan says: I don’t get the whole ‘measuring changes quantum particles behavior’ thing. And by ‘not get’ it seems like it doesn’t work or is a simplification that lost important details on the way. For example if ‘measuring’ changes the quantum particles, then at what distance can you measure them? Any distance? If so wow, you’ve invented an instantaneous communication device that’s…faster than light. Nice. Or if the distance actually matters, then ‘measure’ is a term that is a heuristic and lacks the actual details like what distances are involved and where does the effect run out? Liked by 1 person • You’re totally right not to get it. “Measurement” or “observation” is a maddeningly vague aspect of this. It reflects the lived experiences of scientists running experiments on quantum phenomena. 
Niels Bohr reportedly insisted that the description of this be limited to “ordinary” language, presumably because any attempt at a more precise description would imply knowledge we don’t really have. It’s called “the measurement problem,” and it’s at the heart of the absurd nature of quantum mechanics. Attempts to solve it have led people down all kinds of bizarre paths. I sometimes think QM represents the limits of our reality, where that reality emerges from some other underlying meta-reality. It might be that any “interpretation” is simply a vain attempt to map that meta-reality back into our little parochial reality. As patterns in and of the parochial reality, we simply may not be equipped to understand the wider meta-reality. Liked by 1 person • Wyrd Smythe says: FWIW, I see “measurement” as anything that resolves superposition. For me, the cat was always (obviously) either alive or dead, because the detector monitoring the radioactive sample is the measurement. There is no superposition; there is only a lack of knowledge about the cat. Liked by 1 person 11. Michael says: Excellent post, Mike. I enjoy mulling these quantum conundrums around. I am left feeling like an extremely poor sommelier of ideas–I get hints of different flavors but… really I have no idea what I’m tasting. It’s just really, really complex and intriguing. My own opinion is that we just don’t really know what we’re studying, and that at some point there will be a breakthrough in our conception of what reality actually is that will assist us in fitting the pieces of the puzzle we’ve found so far into a more insightful framework. As an example, I think our notions of physical and non-physical have pretty much broken down, and we have only vague ideas as to what consciousness might be, most of them extremely myopic, so that we’re in the position of using pretty poor tools for the job. Just as one example, in that Quanta article to which you linked, Brian Greene suggests that each copy of you in the MWI is really you, and that the true you is the sum total of these you’s. Something like that. When a scientist says that a “self” might be a superposition of conscious selves occupying subtly related windows of reality, it’s an interesting idea to some folks and frowned upon by others–while when the classic New Age book Seth Speaks posits the same notion it is deemed woo woo foo foo to that crowd, but accepted by the other. This is, in a sense, what I mean about once clear concepts and divisions breaking down. So my own feeling is everyone’s a little bit right, and the answer is somehow a superposition of a great many ideas out there… 🙂 I don’t suspect a ton of physicists are lining up to endorse Brian Greene’s idea of the self. I have no idea, actually. But it’s always interesting to me when these parallels emerge. I think it’s safe to say whatever “models” or “conceptual frameworks” we use to try and organize our phenomenal observations are all wanting right now. What I dislike about the Copenhagen Interpretation is that it seems like a consequential moment in defining the purpose of science–which accepts setting aside questions about what the universe really is, and accepting as complete descriptions of what it does. For me, science is much less interesting when only one of the two questions remains in play… Happy Holidays, Mike! Liked by 1 person • Thanks Michael, and great hearing from you! Your comments are always thought provoking. 
On Brian Greene’s notion of the self spanning multiple copies, I think, much like the notion of additional selves that originate from the idea of mind uploading, it’s a matter of philosophy, in other words, not a fact of the matter, but a personal choice. In both cases, the issue gets blurred as the copies get farther and farther away from the original. For example, is someone born with my exact genetics, but due to an early quantum branching, lived a radically different life, still me? What about someone who branched away from me before I became a skeptic? Or even before I became interested in science? Or someone who branched away before I broke up with one of my old girlfriends, but instead married her and proceeded to have a large family? My attitude is that these would all be a sort of sibling, albeit in the case of recent copies, far closer to me than any brother or sister. The only way I might be tempted to ever consider them to be me is if we could somehow share memories, but even then I’d expect difference to arise based on the order in which the various copies received the different memories. On the Copenhagen Interpretation, I can understand not liking its inherent instrumentalism. I totally agree it’s a lot more inspirational to think of science as the pursuit of truth. The pursuit of models that accurately predict future observations…just doesn’t have the same inspirational resonance. On the other hand, maybe the idea that the pursuit of truth is anything other than the pursuit of predictive models is an illusion. The real dividing line is whether we want to get into models that make predictions we can’t test. The Copenhagen Interpretation (apparently heavily influenced by the logical positivism in vogue during its formulation), labels that as undesirable. I think by calling these models that go beyond the mathematics of quantum mechanics “interpretations”, physics has found a way to have its cake and eat it too. It allows us to label the predictive aspects of QM as settled science, but keep trying to figure out what it means. Although as I’ve noted to you before, and as I did to Callan above, I sometimes wonder if quantum phenomena isn’t right at the edge of the reality we, as a subset of that reality, have any ability to make sense of. It might be a hole we can navigate around mathematically, but can never enter. (Although I hope we never stop trying.) Happy Holidays to you too Michael! Liked by 1 person 12. I have some strong opinions about this issue, and have been meaning to bring this up with Sabine Hossenfelder over at So far I’ve been too shy however. This is a woman who I absolutely love! She’d like to help “fix” a physics community that seems to have gotten “lost in the math”. Similarly I’d like to help a science community that attempts to function without generally accepted principles of metaphysics, epistemology, and axiology (or the three elements of “philosophy”). Perhaps if I feel that I’m able to develop my QM ideas here well enough, then I’ll become confident enough to speak with her about this over there some time? Well maybe. Rather than get caught up in all sorts of higher speculation initially, I like to begin with QM basics. We humans perceive matter in terms of “particles” and in terms of “waves”. Are such perceptions good enough? Apparently they are not. When we try to pin down the exact state of a particle we’re confounded with wave like characteristics. 
Then when we try to pin down the exact state of a wave we’re confounded by particle like characteristics. So it should instead be better to consider matter to function as both. But apparently we can’t measure matter as some kind of hybrid of the two. Therefore it makes sense to me that we’d witness fundamental uncertainty as expressed by Heisenberg’s uncertainty principle, or an inequality that references Planck’s constant. So to me there isn’t too much to worry about here. If we must measure particles in one way and waves in another way, though matter ultimately functions as neither but both, then we should expect to be confounded by more exacting measurements in either regard. Given the circumstances, is this not logical? For example, let’s say that we find a material that’s similar to both rock and wood. So if we assess it as a kind of rock then the harder we look at it from this perspective, the more confounding this stuff should seem to us. Or the same could be said if we assess it as a kind of wood. So that’s essentially what I’m saying is happening with our assessments of matter. If it’s effectively “particle-wave”, though we can only provide measurements in one way or the other, then we should naturally fail as our measurements become more precise. Thus I’m good with quantum mechanics as I understand it. Apparently we’re too stupid or whatever to understand what’s going on. The controversy however seems to be that most physicists (unlike Einstein) haven’t been content settling for such human epistemic failure. So apparently they’ve decided that no, it’s not that we’re trying to measure something as particle or wave that’s neither. Instead it must be that the uncertainty associated with either variety of measurement reflects an ontological uncertainty which exists in nature itself! So the argument is not that we’re stupid, but rather that nature itself functions outside the bounds of causality, or thus nature functions “stupidly”. It could be that this view is entirely correct, but what irks me here it is that these physicists also refuse to admit that they thus forfeit their naturalism. Apparently they want to call themselves naturalists, but interpreting QM such that nature functions without causality — well that ain’t natural! It’s the borderlands of science, such as here, brain study, and so on, that seem most in need of effective principle of philosophy. For this issue I offer my single principle of metaphysics. It reads: To the extent that causality fails, there’s nothing to figure out anyway. Unless I’m missing something this “Many worlds” interpretation appears in violation. I interpret it as physicists deciding that reality functions without causality (or “magically”), and then attempt to make sense of this anyway by theorizing “many worlds”. The more that we leave the bounds of causality behind, or thus introduce magical function, explanations should grow obsolete. From here reality should just be what it is. So I consider these sorts of interpretations of quantum mechanics to illustrate category error. Liked by 1 person • A lot of your criticism seems aimed at the more ontological versions of the Copenhagen Interpretation, the ones that say that not only are we faced with an epistemic limit, but that there’s nothing else there, that reality isn’t set until the measurement. That’s usually the version of the CI that critics inveigh against, and I agree with that criticism. The ontological versions of the CI seem excessively pessimistic. 
I think Neil Bohr’s version of the CI was closer to your sentiment. Here are the observations, and here are mathematics that can make predictions about those observations, with limitations, but within those limitations predictions are accurate enough to build technologies on top of them, so, “shut up and calculate!” I’ve grown to respect this view more as I’ve continued to learn about quantum physics. It’s not satisfying, but it’s at least epistemically humble. But I think an MWI enthusiast would respond to you that their interpretation does restore determinism. Unfortunately, it’s determinism for reality overall, not a determinism we can observe. Which of course raises the question, if something is deterministic but not deterministic from any observer’s perspective, is that really deterministic? Who is it deterministic for? One question I’d have for you is, how do you define naturalism? Is that definition mutable on new evidence? Myself, if I encounter phenomena that doesn’t meet my understanding of naturalism, I would still want to understand the phenomena as much as I could. But naturalism for me is just a set of working assumptions, ones subject to being adjusted as I learn more. Liked by 1 person • Wyrd Smythe says: If I may interrupt, two quick thoughts: Firstly, I’m also a big fan of Sabine’s blog, been reading it for years. I highly recommend it. (Peter Woit also has a good blog.) Secondly, just as (and I very much agree) physicists benefit from philosophy, philosophers can benefit from looking into some of the math involved. Quantum physics is highly mathematical, and the wave-particle duality confusion is, at least in part, a failure of language. At the math level, the confusion essentially goes away. The way it’s usually put is that matter (as in particles) is something outside our direct experience that has wave-like properties and particle-like properties depending on what aspect of the particle one tests. Liked by 2 people • Wyrd, I was hoping to hear from you most of all! Perhaps on some level I mentioned Sabine because I recall you mentioning her another time? Anyway it was late 2015 that I became interested in her. Massimo Pigliucci had blogged about her position from a Munich physics conference that he attended. On philosophers benefiting from math and physics, I certainly agree. I was initially most interested in philosophy as a university student, but didn’t want to become acclimated to accept no generally accepted agreements in the field. And beyond questions what could they teach me without generally accepted positions? Mental and behavioral sciences were next, though I found them far too speculative for comfort. So I looked for a field that could teach me how to learn. Yes physics! But alas, my own mind would not get me through upper division courses. I eventually earned a degree in economics, which I chose somewhat because it corresponded with my own amoral theory of value. I didn’t mean to imply that modern physicists would improve if they were to become versed in modern philosophy. I actually believe that the field has tremendous problems, though needs improvement in order to better found science. Regarding language, that’s one of my own main themes. So QM interpretations work pretty well mathematically? But I suppose that natural language explanations are needed most. Mathematics is many orders less descriptive than English. Notice that there’s nothing in mathematics which can’t be described in English, and yet much in English can’t be described in mathematics. 
Still the English interpretation of the mathematical QM interpretation that you’ve provided seems pretty close to mine. It’s good to hear that you oppose the ontological version of the Copenhagen Interpretation. Actually I was under the impression that Bohr’s interpretation was more ontological, though perhaps not. Did he ever support Einstein’s “I, at any rate, am convinced that He [God] does not throw dice.”? (Though in practice I support Einstein about that, my own metaphysics is a bit more pragmatic. It’s more like “To the extent that God throws dice, nothing exists to figure out anyway!”) If Many World enthusiasts are truly causal determinists, then tell me this. Do you think their position holds that all of these worlds actually exist? As in ontologically exist? As a solipsist I can stomach all sorts of crazy notions from a supernatural premise. But in a causal sense that position seems utterly ridiculous. Conversely if these many worlders are simply going epistemological with their position, as in “It can be helpful for us to think about QM this way…” then I could give their position some reasonable consideration. Yep Mike, it’s deterministic. Who for? All that exists. Once again, I’m a solipsist. Reality is reality regardless of the human’s various idiotic notions. I define naturalism as a belief that reality functions causally in the end. This definition is a definition, and therefore isn’t mutable to new evidence. Even if I ultimately decide that reality does not function causally, I should still consider this to be a useful definition. Here I’d either be a supernaturalist, or a hypocrite that changes my definition in order to call myself a naturalist. I understand the desire to understand. This seems quite human and adaptive. Even the most faithful god fearing person should need to use reason in his or her life in order to get along. But to the extent that causality fails, as in ontological interpretations of the uncertainty associated with Heisenberg’s principle, things should not exist to figure out anyway. Liked by 1 person • Eric, Bohr very much did not support Einstein in his statement about God not playing dice. His response was along the lines of, “Einstein, don’t tell God what to do.” Honestly, while I think his and Heisenberg’s initial strategy was more epistemic, more instrumental, I do get the impression that they crossed the line in later debates. But it’s the instrumental version that I think remains useful. “Do you think their position holds that all of these worlds actually exist? As in ontologically exist?” It depends on which ones you talk to. Some are agnostic about whether the other wave function branches continue to exist. Others feel they don’t. But the most vocal proponents tend to think they do exist. As I mentioned to Wyrd, it’s an old trick in physics to introduce something but then say, “Don’t panic, this is just a useful accounting gimmick. It’s not like this crazy thing is real or anything.” This has been particularly true for quantum mechanics. Max Planck originally introduced quanta purely to make his calculations work. I suspect some Everettians take this tack to side step the ontological debates. The thing is, many things that are mathematically convenient go on to become ontological necessity. “Reality is reality regardless of the human’s various idiotic notions.” That may be true, but how do we know whether we know reality? I think the only answer is whether our predictions are accurate. 
Of course, QM can’t predict a single quantum event, only the probabilities of certain outcomes. But as the numbers of events climb, those probabilities average out to solid predictions. Given the above, whatever QM is, it has to be isomorphic with the reality in some way, otherwise those predictions would fail. As Wyrd mentioned, this may only be in the sense that epicycles were useful in Ptolemaic cosmology. (Interestingly, epicycles today remain as a useful perspective observational concept, despite the fact that we know they’re an illusion.) Liked by 1 person • Mike, If it’s the case that Bohr and Heisenberg began with a responsible epistemological position for their Copenhagen Interpretation, then why would they escalate it to ontology? Might I suggest a bit of jealousy? Even then Einstein was “the great one”. How wonderful it would feel to up him! But perhaps Einstein should mainly be blamed for selfishly not realizing that a responsible epistemological position had actually been presented, and so he chose to interprete their interpretation ontologically? Notice that “God doesn’t play dice” is an ontological claim. If he used this to counter the CI then he effectively should have goaded them into an irresponsible ontological position. And apparently they not only accepted, but used it to kick his ass! Today in popular media, and even among physicists, it’s thought that Einstein really blew it regarding QM. I account for this incidence through a far larger structural problem. Notice that we’re asking physicists to do physics, though without provide them with any effective rules of metaphysics or epistemology to work from. Thus we should need a community of professionals armed with generally accepted rules from which to guide the function of science. Notice that the field of philosophy today has the flavor of “art and culture” rather than “science” to it. I’m not saying that this needs to change however. I’m saying that a new community of professionals must emerge that has a single mission — to straighten out science by means of its own accepted principles of metaphysics, epistemology, and axiology. And what specifically do I propose to fix this particular mess? I’d mandate that the authors of any given position clearly state whether their proposal is theorized to just be “useful” (epistemology), or to also be “real” (ontology). Then as for those ambitions theorists that insist upon proposing an ontology regarding QM, there would be my single principle of metaphysics to contend with. Theorizing that any given bit of reality is not causally determined to occur exactly as it does occur, takes the theorist beyond the bounds of naturalism. Here there can be nothing to explain because without causal dynamics, no explanation will thus exist. This is the realm of magic. And I’m not saying that this doesn’t effectively occur. I’m saying that the position of Einstein and I, conversely, happens to be “natural”. Well yes today, though once we have a community of professionals that’s able to effectively regulate the function of science through proven principles, there should only be “epistemic necessity”. The only reality that I “know” exists, is that I exist in some form or other. If you’re conscious then you could say the same about yourself. And I consider it quite special to be able to truly know even that. Conversely my computer shouldn’t know that it exists (if it does exist), let along anything else. 
I consider quantum mechanics to mark an incredible human achievement, though epistemologically rather than ontologically. And I do believe that it’s isomorphic with reality. But if any associated dynamic is not causally determined to occur exactly as it does occur, or “ontological uncertainty”, then the theory should effectively describe the function of magic. But wait a minute, as I define it no explanation can exist to describe non-causal function, or magic. Right… So the effectiveness of QM theory suggests that all associated dynamics must be causally determined to occur exactly as they do occur. You’re not going to like that bit of circularity! I’ll remind you however that we’re measuring particles and waves here, though apparently matter functions as something associated but different. Liked by 1 person • Eric, I don’t know if you remember, but I actually think the distinction between instrumentalism and scientific-realism is a false dichotomy. We never have access to reality. We only ever have theories, predictive models about that reality. The “real” is only another more primally felt model. In the end, all we have are the models. (This actually includes our model of self, as counter-intuitive as that sounds. Psychology has shown that access to our own mind is subject to just as many limitations as the information we get from the outside world.) The only real distinction is between predictions that are testable and those that aren’t. The ones that are testable, and which have been demonstrated to have some level of accuracy, are “right” to whatever level they meet. But predictions that haven’t or can’t be tested should be regarded as speculative to varying degrees. An untested or untestable prediction which is tightly bound to a tested prediction has a higher chance of eventually being shown to be accurate. But the more steps beyond observation to get to the prediction, the shakier the ground it rests on. Under this guideline, the successfully tested predictions we have are the evolution of the wave function according to the Schrodinger equation, until information about it leaks into the environment, then we have the more definite state (position of the particle), etc. This is the instrumental Copenhagen Interpretation. Everything else: assertions that the Copenhagen Interpretation is the only reality, pilot waves, spreading superpositions continuing under the Schrodinger equation, etc, have to be viewed as speculation, at least until someone can figure out some way to test them. Still, speculation is fun, and should be fine as long as we acknowledge what we’re doing. Liked by 1 person • Mike, Well it sounds like we’re generally on the same page with that, though I wouldn’t refer to the distinction between instrumentalism and scientific–realism as a false dichotomy. Even if science only ever has models, we of course need words such as “real” which reference what actually exists beyond our models. And if some of these MWI’ers have decided that the lack of certainty in our measurements mandate “many worlds” in truth rather than simply as an accounting heuristic, then this would seem to be a wonderful example of “scientific realism”. This also strikes me as “the tail wagging the dog”. Furthermore I don’t mind going ontological myself in some ways. I happen to believe that “God doesn’t throw dice”, which is to say I believe in absolute causality regardless of what we humans are able to figure out. Perhaps a reasonable name for this position would be “extreme naturalist”? 
So then what shall a person be called who makes the ontological claim that some things under a QM framework aren’t causality determined to occur exactly as they do occur? “Super-naturalist” seems over the top, and even quasi-naturalist”. So I’ll just go with straight “naturalist”, but in addition note that from this distinction “spooky stuff” does ontologically occur in some capacity. Then there is my logical proposition from last time. My metaphysics holds that if something functions without causality, then nothing exists here to even theoretically figure out. Why? Because it’s the causality that would found any ontological explanation for any given event. The causality would be the vital element regardless of any potential understanding — nothing would otherwise exist to even look for. I’m fine with how the QM probability distribution produces a macroscopic world which seems to function causally. But how can it be possible for something that is not perfectly caused to do whatever it does, to in the end become a causal constituent for a causal realm? I see that as a contradiction. Non-causal function, where by definition nothing exists to potentially figure out, should have no potential to produce causal function. (I suspect that there’s a simple way for this to be illustrated mathematically.) Thus if we notice that quantum function does produce causal function, then from here it must only be possible that all elements of quantum function occur causally in the end, and even if things continue to seem random to us humans. Yes speculation is fun! Furthermore once science has better rules from which to work, it should also become more productive than today. (I see you’ve now put up a post on Sean Carrol. Sweet!) Liked by 1 person • Steve Morris says: Eric, causality can arise from non-causal events provided that the number of events is sufficiently large. It’s the Law of Large Numbers, from probability theory. Liked by 1 person • Interesting observation Steve! I’ve noticed a couple of interpretations for the Law of Large Numbers. One is that with enough trials, all sorts of implausible things eventually occur. The other seems more relevant however. It’s that the more times that you run a given experiment, the more statistically verified a given result will be. It’s essentially that all of these “random” results end up building a stronger and stronger case for a given figure. Is that what you meant? I can see how it seems appropriate to apply this principle to quantum mechanics given that we’re discussing probability distributions for matter rather than exact states of being. But then again, my sense is that the LLN was set up to address every day causal events rather than quantum events that are theorized to not function causally. Does it address quantum strangeness as well? Have you found an infinitely better challenge to Einstein than the utterly pathetic “Don’t tell God what to do”? Is this a true answer, as in “God’s dice create order”? This deserves some academic consideration! I’d be surprised if something fully beyond causality in an ontological sense is able to then go on to construct the causal function observed in nature. Causality is kind of my thing. But I’d love for this theory to get out there as a challenge to us causalists. Liked by 1 person • Steve Morris says: While it’s true that a large number of random events will yield some rare outliers as part of the ensemble, when taken as a whole, it leads to highly predictable results. It’s the basis of statistical mechanics. 
Even in classical statistical mechanics, individual particles are assumed to behave randomly, but when the ensemble contains 10^23 particles, the values of pressure, temperature, etc. are entirely deterministic. My statistical mechanics lecturer at university joked that when very large numbers are involved, "it is better to gamble than to count." Causality may be an illusion, as well as ontological fact. Liked by 2 people

• I prefer "emergent" to "illusion", but it's the same concept. Causality may not be the fundamental thing we take it to be. Liked by 2 people

• I agree entirely with your former professor's observations Steve, and indeed, the Law of Large Numbers as I believe it's traditionally been used. This is to say that if you do a single experiment a large number of times, it will continue to validate the same point in the end. And I also agree with that other interpretation. Even though a psychic may get a given prediction right, the LLN shall demonstrate the truth or falsity of this person's powers over time. And why does the LLN remain solid? Because of causality itself. Without an ordered world where cause leads to associated effect and the converse, it might be that the exact same experiment would not generally continue to provide the same sort of result. Or it might be that a human could indeed gain psychic powers and all sorts of "spooky" stuff. Causal order is required in order for the LLN to remain valid. Otherwise we'd need to count rather than to gamble. I suppose that this is why advocates of ontological voids in causality haven't yet tried to use the LLN to argue their case. Thus we instead get pedigreed snake-oil carnival hawkers like Sean Carroll. Apparently people love hearing this sort of thing. (I haven't yet found a mathematical proof that causality can't emerge from non-causality, but perhaps I will.) If it's true that there is a fundamental uncertainty to QM function, then yes, the causality that we observe must emerge from non-causality. Or it could be that there is a causality which we don't grasp here, given that we erroneously perceive existence in terms of particles and waves. Right. But a better way to say this might be that causality may or may not be absolute. Somehow to me your statement implies that we'd still call something "causal" even if it isn't. Or perhaps I'm being pedantic? You wouldn't term something "causal" if it weren't causally mandated to occur in the exact manner that it does, would you? Liked by 2 people

• Eric, If causality is emergent, that is, real but a composite process made up of lower-level processes which are not themselves causal, then I would use it in the same manner I use "temperature", "weather", or "molecule". Each of these things objectively exists, but is composed of things which are not that thing; in other words, they are composite phenomena. The idea that causality is a composite phenomenon is very counter-intuitive, but then so are many things in science. Liked by 3 people

• All true Mike, so apparently I was being pedantic there. If causality emerges from non-causality then it isn't the fundamental thing that we take it for, similar to "molecule" and all the rest. But given our flawed perspectives I do still suspect that it's fundamental in the end. Liked by 1 person
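[Editor's note: Steve's statistical-mechanics point is easy to illustrate numerically. Here is a minimal sketch of my own; the coin-flip model, sample sizes, and variable names are assumptions made for the example. The mean of N independent random events stays near its expected value with a spread that shrinks like 1/sqrt(N), which is why an ensemble of roughly 10^23 randomly behaving particles yields effectively deterministic pressures and temperatures.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "particle" contributes one random value (here a fair coin flip, 0 or 1).
# Individually the flips are unpredictable, but the ensemble mean hugs 0.5 ever
# more tightly as N grows: the fluctuation of the mean falls off like 1/sqrt(N).
for n in [10, 1_000, 100_000, 10_000_000]:
    flips = rng.integers(0, 2, size=n)
    spread = 0.5 / np.sqrt(n)  # standard deviation of the mean for a fair coin
    print(f"N = {n:>12,}  mean = {flips.mean():.5f}  expected spread ~ {spread:.1e}")
```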
13. Steve Morris says: Great post, and a clear summary of the position. I (like most people) have problems with all the proposed solutions, and that is as it should be, since none of them are entirely persuasive. The most unconvincing commentators are those who argue passionately for one particular interpretation. My gut feeling is that we are still missing a fundamental insight, and I hope this will emerge either through some new observation, or else a new theory. My instinct is that entanglement holds the key to unlocking the answer. Disclaimer – it may be that this is wrong, and that it is just me who is lacking the fundamental insight 🙂 Liked by 1 person

• Thanks Steve! In recent decades, decoherence has become the preferred description of what happens when the wave appears to become a particle. Under that description, what actually happens is the wave becomes "entangled" with the environment. So your gut may be on to something! It feels like all physicists can keep doing is testing the boundaries of this stuff until something unexpected comes up. After all, it was the necessity of dealing with bizarre observations that initially forced them to their current understanding of QM, such as it is. The answer probably lies in continuing to pile up those observations until something new emerges from the data, but that might take decades or centuries. Liked by 1 person

14. Pingback: Sean Carroll on the Many Worlds Interpretation of quantum mechanics | SelfAwarePatterns

15. J.S. Pailly says: I hadn't heard of the spreading superposition idea before. I can't give much of an opinion about that except to say that it's a really cool idea. Liked by 2 people

16. Pingback: Sean Carroll's Something Deeply Hidden | SelfAwarePatterns
Reading this question What happens to an electron in a molecule once it has absorbed a photon and transitioned? it occurs the question to me is the ground state say of a hydrogen electron the only one? Why I ask? Because under normal conditions by 24°C the electron is exposed by thermal radiation, means it is influenced by EM radiation all the time. So does the ground state depends from the surrounding temperature and to be exact THE ground state has to be mentioned always with the temperature for which is it meant? The same holds for the gravitational potential? • $\begingroup$ Isn't the ground state always at zero temperature? $\endgroup$ – jinawee Sep 8 '16 at 6:03 • $\begingroup$ @jinawee Perhaps yes but this was not clear to me in the last consequence:-) $\endgroup$ – HolgerFiedler Sep 8 '16 at 6:15 When we talk about the ground state of hydrogen we generally mean the lowest energy eigenfunction of the time independent Schrodinger equation. Strictly speaking no hydrogen atom is ever in that state because time independence means it would have had to be in that state for an infinite time and continue in that state for an infinite time into the future. However under most circumstances this is an unnecessarily pedantic viewpoint. A real hydrogen atom is bathed in a sea of EM radiation - even if floating in space it interacts with the cosmic microwave background. This will indeed perturb the ground state and we can calculate the effects using perturbation theory. However this doesn't have any significant effect unless the energy of the radiation is large enough to stimulate a transition. On average the ground state remains so close to the theoretical ground state as to be indistinguishable. One example of where environmental effects are important is in very strong magnetic fields where the ground state can be significantly altered. We can shift the energies of the eigenfunctions in the lab by applying magnetic fields, and we expect that there are natural examples like neutron stars where the magnetic filds are large enough to have a big effect on the atomic states. It isn't obvious what you mean by gravitational interactions. A hydrogen atom would only be affected by tidal forces, and on the atomic scale these are negligably small unless the poor atom happens to be right next to a black hole singularity. • $\begingroup$ John, about gravitational potential it is clear that in a higher one the electron will be closer to the nucleus and far away from huge masses the atoms radius will be bigger. But have I to ask in a different question how looks the ground state nearby zero temperature and what is the behaviour of the electric field of the electron and what is the behaviour of its magnetic dipole moment? $\endgroup$ – HolgerFiedler Sep 8 '16 at 7:03 • 1 $\begingroup$ @HolgerFiedler: An external gravitational potential does not change the size of the atom. It does not mean, to use your words, that the electron will be closer to the nucleus and far away from huge masses the atoms radius will be bigger. $\endgroup$ – John Rennie Sep 8 '16 at 7:06 • $\begingroup$ I will ask about gravitational potential influence in a different question. It was not well thought of mine to mesh this with the maun question :-) $\endgroup$ – HolgerFiedler Sep 8 '16 at 7:09 • $\begingroup$ Terminology nitpick: Equations don't have eigenfunctions, operators do. The time-independent Schrödinger equation is precisely the equation for eigenfunctions of the Hamiltonian. 
$\endgroup$ – ACuriousMind Sep 8 '16 at 13:54
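A rough numerical check of the accepted answer (standard constants and the 10.2 eV 1s→2p gap; the script itself is only an added illustration, not part of the original exchange). At 24 °C the thermal energy scale is hundreds of times smaller than the first excitation energy, so the Boltzmann factor for exciting the atom thermally is astronomically small.

```python
import math

# Thermal scale at 24 C versus the first excitation energy of hydrogen.
k_B = 8.617e-5            # Boltzmann constant in eV/K
T = 297.15                # 24 degrees Celsius in kelvin
E_gap = 10.2              # eV, the 1s -> 2p (Lyman-alpha) gap

kT = k_B * T
print(kT)                            # about 0.026 eV
print(E_gap / kT)                    # about 400: the gap dwarfs the thermal scale
print(-E_gap / (kT * math.log(10)))  # log10 of the Boltzmann factor, roughly -173
```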
Theoretical Physics 3 (Quantum Mechanics) Module PH0007 [ThPh 3]
Module version of SS 2020 (current); available module versions: SS 2020, SS 2019, SS 2017, SS 2016, SS 2011
Basic Information • Mandatory Modules in Bachelor Programme Physics (4th Semester) • Physics Modules for Students of Education
Total workload: 270 h; contact hours: 120 h; credits (ECTS): 9 CP
Responsible coordinator of the module PH0007 is Björn Garbrecht.
Content, Learning Outcome and Preconditions
1 Particles and Waves 2 States and measurements, entangled states 3 Time Evolution 4 One-dimensional potentials 5 Approximative Methods 6 Angular momentum in QM, Symmetry 7 Schrödinger equation in the central field, H atom 8 Electron in external electromagnetic field, photo effect, time-dependent perturbation theory 9 Spin, two-state systems A Mathematical foundations
Learning Outcome After successful participation, students are able to: 1. understand the implications of Schrödinger's equation and how to describe states with wave functions 2. solve Schrödinger's equation for one-dimensional problems and interpret the solution 3. apply the bra-ket formalism 4. solve the hydrogen atom and other basic problems in three dimensions 5. explain the concept of spin and the Stern-Gerlach experiment 6. solve problems that involve two quantum states 7. solve problems using approximate methods 8. understand the concept of density matrices and quantum entanglement
Preconditions: PH0005, PH0006, MA9201, MA9202, MA9203, MA9204; for students studying bachelor of science education mathematics / physics: PH0005, PH0006, PH0003, MA9937, MA9938, MA9939, MA9940
Courses, Learning and Teaching Methods and Literature
Learning and Teaching Methods Lecture: blackboard presentation. Blackboard or PowerPoint presentation, accompanying information online.
Literature: D.J. GRIFFITHS, Introduction to Quantum Mechanics, Prentice Hall. Good introductory material. F. SCHWABL, Quantenmechanik, Springer. Higher level of detail and good presentation. J.L. BASDEVANT, J. DALIBARD, Quantum Mechanics, 2005. Cleanly worked out; discusses both the mathematical basics as well as conceptual questions. Focuses also on new experiments and applications. R. SHANKAR, Principles of Quantum Mechanics, 2011. Includes a mathematical description. Quite detailed. M. LE BELLAC, Quantum Physics, 2012. Careful presentation, but at quite a high level. Not useful as the only source for a first contact with quantum mechanics. J.J. SAKURAI, J.J. NAPOLITANO, Modern Quantum Mechanics, 2010. Good textbook which is also on a higher level. R.P. FEYNMAN, R.B. LEIGHTON, M. SANDS, Feynman Vorlesungen über Physik III: Quantenmechanik, 1988. Feynman's remarkable style with very detailed explanations. Not as systematic as other books.
Module Exam Description of exams and course work For example, an assignment in the exam might be: • set-up and solution of the Schrödinger equation for a particle in a potential and interpretation of the solutions • interpretation of the physical consequences of a given wave function
Exam Repetition The exam may be repeated at the end of the semester.
Equations of Motion
The most important equations in modern physics are equations of motion. These equations tell us how a system will evolve as time passes. We can derive these equations using symmetry considerations from the corresponding Lagrangian using the Euler-Lagrange Equations.
• Schrödinger Equation: important in Quantum Mechanics and Quantum Field Theory; non-relativistic limit of the Klein-Gordon Equation; describes time evolution; linear.
• Klein-Gordon Equation: important in Quantum Field Theory; equation of motion for particles with spin 0; linear.
• Pauli Equation: important in Quantum Mechanics; non-relativistic limit of the Dirac Equation; equation of motion for particles with spin 1/2; linear.
• Dirac Equation: important in Quantum Field Theory; equation of motion for particles with spin 1/2; linear.
• Maxwell Equations: important in Classical Electrodynamics and Quantum Field Theory; special case of the Yang-Mills equation for a non-abelian gauge theory; equation of motion for particles with spin 1 in abelian gauge theories; linear.
• Einstein Equation: important in General Relativity; describes how spacetime gets curved through energy and matter; non-linear.
• Yang-Mills Equation: important in Quantum Field Theory; equation of motion for particles with spin 1 in non-abelian gauge theories; non-linear.
• The Navier-Stokes Equations: important in Hydrodynamics; describe the flow of fluids; non-linear.
Supplementary equations and boundary conditions: besides the equation of motion itself we need system-specific additions, like the interactions/forces acting on the object in question, together with boundary conditions. The equations of motion are usually not enough to describe a system. Especially in the Newtonian framework, we need additional equations that give us, for example, the correct formulas which describe a force that acts on the object in question. In addition, we always need to specify the Boundary Conditions for the system in question.
Unfortunately, knowing how to write down the equations is not the same as being able to solve them. For example, we know very well the equations of motion describing how a river flows. But as soon as it flows quickly over rough ground, such that it becomes turbulent, we are no longer able to solve the equations. In such cases we are often forced to revert to the simulation methods discussed previously. […] Thus, only for very, very simple theories is it possible to solve these equations exactly. For theories like the standard model, one has to introduce severe approximations (often called truncations) to be able to solve them. If these approximations are made wisely and with insight, they are such that the questions we have for the theory can still be answered correctly. But it often takes very long to understand how to do the approximations right.
Linear vs. Non-Linear Equations
An important distinction is between linear and non-linear equations. Which kinds of solutions we are interested in depends on whether an equation is linear or non-linear: • Linear equations do not permit non-linear solitonic solutions, while non-linear equations do. • While both linear and non-linear equations permit plane wave solutions, such solutions are only really important for linear equations. The thing is that plane wave solutions of a nonlinear equation cannot be superposed to form other solutions. We consider first the sourceless equation in four dimensions, $$ D_\mu F^{\mu\nu} = 0. \tag{2.65}$$ The first issue concerns the existence of regular solutions.
If regular initial data is taken, will the solution evolve in a regular fashion, or will the nonlinearities produce singularities? This question has been answered: regular solutions to (2.65) do exist, and the same is true if one considers a larger system: scalar and spinor fields interacting with gauge fields [20]. However, physicists are not so interested in the general solution which depends on arbitrary initial data, but rather in specific solutions which reflect some physically interesting situation. For example, in the Maxwell theory we are interested in plane wave solutions. Let us note that any Maxwell solution is a solution of the Yang-Mills equation, when one makes the Ansatz that the space and internal symmetry degrees of freedom decouple. If one forms $A_\mu^a(x) = \eta^a A_\mu (x)$ with $\eta^a$ constant and $A_\mu(x)$ satisfying the Maxwell equation, then $A_\mu^a(x)$ is a solution to the Yang-Mills equation, which we shall call "Abelian. Thus it is interesting the see whether there are plane wave solutions in the non-Abelian theory, which are not Abelian. By "plane wave", we shall mean a configuration of finite energy $(0 < \mathcal{e} < \infty$), of constant direction for the Poynting vector $\mathcal{P}(x) = \hat{\mathcal{P}}|\mathcal{P}(x)|$ with $\hat{\mathcal{P}}$ constant, and with magnitude of the Poynting vector equal to the energy density $ \mathcal{e} = \mathcal{P}(x)|$. Such solutions have been constructed [21], but unlike their Maxwell analogs, they do not seem to have any physical significance. certainly, if gauge quanta are confined, one cannot make a coherent superposition of the to construct an observable plane wave. Alternatively, one may view the Maxwell waves as quantum mechanical wave functions for the photon. However, the non-Abelian plane waves solve a nonlinear equation; they cannot be superposed to form other solutions, and it is hard to see how they can be used as wave functions. Another class of solutions, more appropriate to nonlinear field theories, are the celebrated solitons, which do have a quantum meaning - they are the starting point of a semi-classical description of coherently bound quantum states [22]. A soliton should be a static solution, have finite energy, and be stable in the sense that small perturbations do not grow exponentially in time. However, one proves with virial theorems that no such solution exists in the pure Yang-Mills theory in four, three or two dimensions [23]. Another tack that one take is that of symmetry. Recall that the classical Yang-.Mills theory in four dimensions possesses conformal $SO(4,2)$ symmetry. One may seek solutions invariant under the maximal compact subgroup, i.e. $SO(4) \times SO(2)$. This solution has been constructed [24]; it is called a "meron". But again no physical significance has been attached to it, or to its generalization which possesses the smaller compact invariance symmetry group $SO(4)$ [25]. There are many other solutions to (2.65) that have been found [26], and while their discoverers invariably highlight some unique characteristic, no physical application has been given thus far - although doubtlessly they are mathematically interesting. There is one more class of solutions, which I shall describe later. These do not solve the Yang-Mills equations (2.65) in Minkowski space, but rather in Euclidean space, and are called instantons (pseudoparticles). 
In fact instantons solve the self-duality equation $$ ^\star F^{\mu\nu} = \pm F^{\mu\nu} $$ and then (the Euclidean-space analog of) (2.65) follows by the Bianchi identity. […] Of all the solutions, the instantons have interested mathematicians most; for physicists they give a semi-classical understanding of some of the topological effects that are present in Yang-Mills theory. Topological Investigations of Quantized Gauge Theories, by R. Jackiw (1983) Moreover, it's important whether the equation contains spinors or scalars and vectors. If there are spinors in the equation we can't construct macroscopic solutions. The reason for this is the Pauli principle that forbids that particles describe by spinors occupy the same state. [V]ia the Pauli exclusion principle, fermions cannot occupy the same state within the same macro system. So, whereas photons (bosons) can occupy the same state and a lot of them can therefore reinforce one another to produce a macroscopic electromagnetic field, spinors (fermions) cannot do so. In other words, we have no classical macroscopic spinor fields to sense, interact with, and study experimentally. And thus, we have no classical theory of spinors. Student Friendly Quantum Field Theory by Klauber In addition, it is important to note that the solutions of, for example, the Maxwell equations can be interpreted to describe a single photon or also the whole electromagnetic field. Similarly, solutions of the Dirac equation can be interpreted to describe a single electron or the whole electron field. Like the Hamiltonian formalism for classical physics, the Schrödinger equation is not so much a specific equation, but a framework for quantum mechanical equations generally. Once one has obtained the appropriate Hamiltonian, the time evolution of the state according to Schrödinger's equation proceeds rather as though $|\Psi>$ were a classical field subject to some classical field equation such as Maxwell's. In fact, if $|\Psi>$ describes the state of a single photon, then it turns out that Schrodinger's equation actually becomes Maxwell's equations! The equation for a single photon is precisely the same as the equation for an entire electromagnetic field. (However, there is an important difference in the type of solution for the equations that is allowed. Classical Maxwell fields are necessarily real whereas photon states are complex. There is also a so-called 'positive frequency condition that the photon state must satisfy). This fact is responsible for the Maxwell-field-wavelike behaviour and polarization of single photons that we caught glimpses of earlier. As another example, if 11Ji} describes the state of a single electron, then Schröinger's equation becomes Dirac's remarkable wave equation for the electron discovered in 1928 after Dirac had supplied much additional originality and insight The Emperor's New Mind by R. Penrose One popular way to derive the fundamental equations of nature is to use the Lagrangian formalism. 1. The first step is to write down the Lagrangian of the system. We use the Lagrangian to derive the equation of motions. Hence, if we want that our equations of motion are the same in all allowed frames of reference, the Lagrangian must be invariant under all these transformations. This is a powerful constraint that we can use to find the correct Lagrangian. Formulated differently, our Lagrangian must always be invariant under all symmetries of the system. 
In practice this means we write down all possible terms that respect the symmetries of the system but only include the lowest non-trivial order terms. 2. We then put the Lagrangian into the Euler-Lagrange equations. This yields the equations of motion. This procedure is demonstrated explicitly, for example, in the book "Physics from Symmetry" by J. Schwichtenberg. A broad outline of the derivation looks as follows: Most of the fundamental equations can be used either in a particle theory or in a field theory. In a particle theory, the solutions describe particle trajectories: while in a field theory the solutions describe sequences of field configurations: For a nice discussion, see The equations that describe the world by Axel Maas. The Equations of Motion yield the Most Important Paths In the path integral formuation of quantum mechanics, particles do not follow one individual path but instead all of them. Hence there can't be one equation whose solution yields the correct particle trajectory. However, the equations of motion are still important and the path integral formalism tells us why. While we must consider all possible paths, the most important path is still the path that we get by solving the equation of motion. In the path integral formalism, all paths are weighted by their corresponding action. The equations of motion are derived by extremizing the action. Hence, the biggest contribution to the whole path integral comes from those paths with extremal action, which are precisely those that we get by solving the equations of motion. Formulated differently, the thing is that a particle really has some probability to go all possible ways. However, the classical path is the most probable path, because paths close this path infer constructively and hence yield a big probability. In contrast, for other paths far away from the classical path the interference is destructive and hence the probability is tiny. The path with minimal action gives the biggest contribution to the path integral in the classical limit $\hbar \to 0$. This is explained nicely in Section 3 here. Take note that the interpretation also works for field theories. However, in field theory we, of course, do not consider particle paths but sequences of field configurations. The solutions of the field equation then describe the most probably sequence of field configurations. The Equations of Motion are Constraint Equations A massless spin 1 particle has 2 degrees of freedom. However, we usually describe it using four-vectors, which have four components. Hence, somehow we must get rid of the superfluous degrees of freedom. This job is done by the Maxwell equations. “In some sense, Maxwell's equations were a historical accident. Had the discovery of quantum mechanics preceded the unification of electricity and magnetism, Maxwell's equations might not have loomed so large in the history physics. … Since the quantum description has only two independent components associated with each four momentum, there are four dimensions worth of linear combinations of the classical field components that do not describe physically allowed states, for each four momentum. Some mechanism must be derived for annihilating these superpositions. This mechanism is the set of equations discovered by Maxwell. In this sense, Maxwell's equations are an expression of our ignorance.”Gilmore's "Lie Groups, Physics, and Geometry" A simpler but longer description of the same line of thought Analogously, we can interpret the other equations of motion. 
For the Dirac equation, this is discussed nicely on page 444ff of the book "Spin in Particle Physics" by Elliot Leader.
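The two-step recipe described above (write down a symmetry-respecting Lagrangian, then feed it into the Euler-Lagrange equations) can be tried on the simplest possible example. The following is a minimal SymPy sketch for a one-dimensional harmonic oscillator; the specific Lagrangian is just an illustrative choice, not something taken from this page.

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')
xdot = x(t).diff(t)

# Step 1: a Lagrangian respecting the symmetries we want (here simply L = T - V)
L = sp.Rational(1, 2) * m * xdot**2 - sp.Rational(1, 2) * k * x(t)**2

# Step 2: Euler-Lagrange equation  d/dt (dL/d xdot) - dL/dx = 0
eom = sp.Eq(sp.diff(L.diff(xdot), t) - L.diff(x(t)), 0)
print(sp.simplify(eom))   # m*Derivative(x(t), (t, 2)) + k*x(t) = 0
```

The same two steps, with more elaborate Lagrangians, lead to the field equations listed in the table at the top of the page.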
Partial differential equation
A visualisation of a solution to the heat equation on a two dimensional plane
In mathematics, partial differential equations (PDE) are a type of differential equation, i.e., a relation involving an unknown function (or functions) of several independent variables and their partial derivatives with respect to those variables. PDEs are used to formulate, and thus aid the solution of, problems involving functions of several variables. PDEs are for example used to describe the propagation of sound or heat, electrostatics, electrodynamics, fluid flow, and elasticity. These seemingly distinct physical phenomena can be formalized identically (in terms of PDEs), which shows that they are governed by the same underlying dynamic. PDEs find their generalization in stochastic partial differential equations. Just as ordinary differential equations often model dynamical systems, partial differential equations often model multidimensional systems. A partial differential equation (PDE) for the function u(x1,...,xn) is an equation of the form $$F\left(x_1, \ldots, x_n, u, \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_n}, \frac{\partial^2 u}{\partial x_1\,\partial x_1}, \frac{\partial^2 u}{\partial x_1\,\partial x_2}, \ldots\right) = 0.$$ If F is a linear function of u and its derivatives, then the PDE is called linear. Common examples of linear PDEs include the heat equation, the wave equation and Laplace's equation. A relatively simple PDE is $$\frac{\partial}{\partial x}u(x,y)=0.$$ This relation implies that the function u(x,y) is independent of x. Hence the general solution of this equation is $$u(x,y) = f(y),$$ where f is an arbitrary function of y. The analogous ordinary differential equation is $u'(x) = 0$, which has the solution $$u(x) = c,$$ where c is any constant value (independent of x). These two examples illustrate that general solutions of ordinary differential equations (ODEs) involve arbitrary constants, but solutions of PDEs involve arbitrary functions. A solution of a PDE is generally not unique; additional conditions must generally be specified on the boundary of the region where the solution is defined. For instance, in the simple example above, the function f(y) can be determined if u is specified on the line x = 0.
Existence and uniqueness
Although the issue of existence and uniqueness of solutions of ordinary differential equations has a very satisfactory answer with the Picard–Lindelöf theorem, that is far from the case for partial differential equations. There is a general theorem (the Cauchy–Kowalevski theorem) that states that the Cauchy problem for any partial differential equation whose coefficients are analytic in the unknown function and its derivatives, has a locally unique analytic solution. Although this result might appear to settle the existence and uniqueness of solutions, there are examples of linear partial differential equations whose coefficients have derivatives of all orders (which are nevertheless not analytic) but which have no solutions at all: see Lewy (1957). Even if the solution of a partial differential equation exists and is unique, it may nevertheless have undesirable properties. The mathematical study of these questions is usually in the more powerful context of weak solutions.
\frac{\part^2 u}{\partial x^2} + \frac{\part^2 u}{\partial y^2}=0,\, with boundary conditions u(x,0) = 0, \, \frac{\partial u}{\partial y}(x,0) = \frac{\sin n x}{n},\, u(x,y) = \frac{(\sinh ny)(\sin nx)}{n^2}.\, u_x = {\partial u \over \partial x} u_{xy} = {\part^2 u \over \partial y\, \partial x} = {\partial \over \partial y } \left({\partial u \over \partial x}\right). Especially in (mathematical) physics, one often prefers the use of del (which in cartesian coordinates is written \nabla=(\part_x,\part_y,\part_z)\, ) for spatial derivatives and a dot \dot u\, for time derivatives. For example, the wave equation (described below) can be written as \ddot u=c^2\nabla^2u\,   (physics notation), \ddot u=c^2\Delta u\,   (math notation), where Δ is the Laplace operator. Heat equation in one space dimension u_t = \alpha u_{xx} \, u(t,x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} F(\xi) e^{-\alpha \xi^2 t} e^{i \xi x} d\xi, \, where F is an arbitrary function. To satisfy the initial condition, F is given by the Fourier transform of f, that is F(\xi) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) e^{-i \xi x}\, dx. \, F(\xi) = \frac{1}{\sqrt{2\pi}}, \, and the resulting solution of the heat equation is u(t,x) = \frac{1}{2\pi} \int_{-\infty}^{\infty}e^{-\alpha \xi^2 t} e^{i \xi x} d\xi. \, This is a Gaussian integral. It may be evaluated to obtain u(t,x) = \frac{1}{2\sqrt{\pi \alpha t}} \exp\left(-\frac{x^2}{4 \alpha t} \right). \, This result corresponds to the normal probability density for x with mean 0 and variance 2αt. The heat equation and similar diffusion equations are useful tools to study random phenomena. Wave equation in one spatial dimension u_{tt} = c^2 u_{xx}. \, u(0,x) = f(x), \, u_t(0,x) = g(x), \, u(t,x) = \frac{1}{2} \left[f(x-ct) + f(x+ct)\right] + \frac{1}{2c}\int_{x-ct}^{x+ct} g(y)\, dy. \, x - ct = \hbox{constant,} \quad x + ct = \hbox{constant}, \, Spherical waves u_{tt} = c^2 \left[u_{rr} + \frac{2}{r} u_r \right]. \, This is equivalent to (ru)_{tt} = c^2 \left[(ru)_{rr} \right],\, u(t,r) = \frac{1}{r} \left[F(r-ct) + G(r+ct) \right],\, Laplace equation in two dimensions φxx + φyy = 0. Solutions of Laplace's equation are called harmonic functions. Connection with holomorphic functions u_x = v_y, \quad v_x = -u_y,\, and it follows that u_{xx} + u_{yy} = 0, \quad v_{xx} + v_{yy}=0. \, A typical boundary value problem \varphi(r,\theta) = \frac{1}{2\pi} \int_0^{2\pi} \frac{1-r^2}{1 +r^2 -2r\cos (\theta -\theta')} u(\theta')d\theta'.\, Euler–Tricomi equation The Euler–Tricomi equation is used in the investigation of transonic flow. u_{xx} \, =xu_{yy}. Advection equation The advection equation describes the transport of a conserved scalar ψ in a velocity field {\bold u}=(u,v,w). It is: \psi_t+(u\psi)_x+(v\psi)_y+(w\psi)_z \, =0. If the velocity field is solenoidal (that is, \nabla\cdot{\bold u}=0), then the equation may be simplified to \psi_t+u\psi_x+v\psi_y+w\psi_z \, =0. In the one-dimensional case where u is not constant and is equal to ψ, the equation is referred to as Burgers' equation. Ginzburg–Landau equation The Ginzburg–Landau equation is used in modelling superconductivity. It is iu_t+pu_{xx} +q|u|^2u \, =i\gamma u where p,q\in\mathbb{C} and \gamma\in\mathbb{R} are constants and i is the imaginary unit. The Dym equation u_t \, = u^3u_{xxx}. Initial-boundary value problems Many problems of mathematical physics are formulated as initial-boundary value problems. 
Vibrating string u(t,0)=0, \quad u(t,L)=0, \, as well as the initial conditions u(0,x)=f(x), \quad u_t(0,x)=g(x). \, The method of separation of variables for the wave equation u_{tt} = c^2 u_{xx}, \, leads to solutions of the form u(t,x) = T(t) X(x),\, T'' + k^2 c^2 T=0, \quad X'' + k^2 X=0,\, k= \frac{n\pi}{L}, \, X(0) =0, \quad X'(L) = 0.\, The general problem of this type is solved in Sturm–Liouville theory. Vibrating membrane \frac{1}{c^2} u_{tt} = u_{xx} + u_{yy}, \, u(t,x,y) = T(t) v(x,y),\, which in turn must satisfy \frac{1}{c^2}T'' +k^2 T=0, \, v_{xx} + v_{yy} + k^2 v =0.\, The latter equation is called the Helmholtz Equation. The constant k must be determined to allow a non-trivial v to satisfy the boundary condition on C. Such values of k2 are called the eigenvalues of the Laplacian in D, and the associated solutions are the eigenfunctions of the Laplacian in D. The Sturm–Liouville theory may be extended to this elliptic eigenvalue problem (Jost, 2002). Other examples The Schrödinger equation is a PDE at the heart of non-relativistic quantum mechanics. In the WKB approximation it is the Hamilton–Jacobi equation. Except for the Dym equation and the Ginzburg–Landau equation, the above equations are linear in the sense that they can be written in the form Au = f for a given linear operator A and a given function f. Other important non-linear equations include the Navier–Stokes equations describing the flow of fluids, and Einstein's field equations of general relativity. Also see the list of non-linear partial differential equations. Some linear, second-order partial differential equations can be classified as parabolic, hyperbolic or elliptic. Others such as the Euler–Tricomi equation have different types in different regions. The classification provides a guide to appropriate initial and boundary conditions, and to smoothness of the solutions. Equations of first order Equations of second order Assuming uxy = uyx, the general second-order PDE in two independent variables has the form Au_{xx} + 2Bu_{xy} + Cu_{yy} + \cdots = 0, where the coefficients A, B, C etc. may depend upon x and y. If A2 + B2 + C2 > 0 over a region of the xy plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section: Ax^2 + 2Bxy + Cy^2 + \cdots = 0. More precisely, replacing \partial_x by X, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the top degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification. Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant B2 − 4AC, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by B2AC, due to the convention of the xy term being 2B rather than B; formally, the discriminant (of the associated quadratic form) is (2B)2 − 4AC = 4(B2AC), with the factor of 4 dropped for simplicity. 1. B^2 - AC \, < 0 : solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where x<0. 2. 
B^2 - AC = 0\, : equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x=0. 3. B^2 - AC \, > 0  : hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x>0. L u =\sum_{i=1}^n\sum_{j=1}^n a_{i,j} \frac{\part^2 u}{\partial x_i \partial x_j} \quad \hbox{ plus lower order terms} =0. \, 2. Parabolic : The eigenvalues are all positive or all negative, save one that is zero. Systems of first-order equations and characteristic surfaces The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices Aν are m by m matrices for \nu=1, \dots,n. The partial differential equation takes the form Lu = \sum_{\nu=1}^{n} A_\nu \frac{\partial u}{\partial x_\nu} + B=0, \, \varphi(x_1, x_2, \ldots, x_n)=0, \, Q\left(\frac{\part\varphi}{\partial x_1}, \ldots,\frac{\part\varphi}{\partial x_n}\right) =\det\left[\sum_{\nu=1}^nA_\nu \frac{\partial \varphi}{\partial x_\nu}\right]=0.\, Q(\lambda \xi + \eta) =0, \, Equations of mixed type If a PDE has coefficients that are not constant, it is possible that it will not belong to any of these categories but rather be of mixed type. A simple but important example is the Euler–Tricomi equation u_{xx} \, = xu_{yy} Infinite-order PDEs in quantum mechanics Weyl quantization in phase space leads to quantum Hamilton's equations for trajectories of quantum particles. Those equations are infinite-order PDEs. However, in the semiclassical expansion one has a finite system of ODEs at any fixed order of \hbar. The equation of evolution of the Wigner function is infinite-order PDE also. The quantum trajectories are quantum characteristics with the use of which one can calculate the evolution of the Wigner function. Analytical methods to solve PDEs Separation of variables In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ODE if in one variable – these are in turn easier to solve. This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately. This generalizes to the method of characteristics, and is also used in integral transforms. Method of characteristics In special cases, one can find characteristic curves on which the equation reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics. More generally, one may find characteristic surfaces. Integral transform An integral transform may transform the PDE to a simpler one, in particular a separable PDE. This corresponds to diagonalizing an operator. An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves. 
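As a concrete illustration of the last remark (Fourier analysis diagonalizing the heat equation), here is a small NumPy sketch: expand the initial data in the sine eigenbasis and let each mode decay at its own rate. The initial profile and the parameter values are arbitrary illustrative choices, not anything prescribed by the text above.

```python
import numpy as np

# Solve u_t = alpha * u_xx on [0, L] with u(0,t) = u(L,t) = 0 by a sine series:
# each Fourier mode sin(n*pi*x/L) simply decays at its own rate.
alpha, L, N = 1.0, 1.0, 50
x = np.linspace(0.0, L, 400)
dx = x[1] - x[0]
f = x * (L - x)                                        # illustrative initial profile u(0, x)

n = np.arange(1, N + 1)
modes = np.sin(np.outer(x, n) * np.pi / L)             # shape (len(x), N)
b = 2.0 / L * (modes * f[:, None]).sum(axis=0) * dx    # sine coefficients of f

def u(t):
    decay = np.exp(-alpha * (n * np.pi / L) ** 2 * t)
    return modes @ (b * decay)

print(np.abs(u(0.0) - f).max())   # small: the truncated series reproduces the initial data
print(u(0.05).max(), f.max())     # the profile has flattened out at later times
```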
Change of variables \frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0 is reducible to the heat equation \frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2} V(S,t) = K v(x,\tau)\, x = \ln(S/K)\, \tau = \frac{1}{2} \sigma^2 (T - t) v(x,\tau)=\exp(-\alpha x-\beta\tau) u(x,\tau).\, Fundamental solution Inhomogeneous equations can often be solved (for constant coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source), then taking the convolution with the boundary conditions to get the solution. This is analogous in signal processing to understanding a filter by its impulse response. Superposition principle Because any superposition of solutions of a linear, homogeneous PDE is again a solution, the particular solutions may then be combined to obtain more general solutions. Methods for non-linear equations See also the list of nonlinear partial differential equations. There are no generally applicable methods to solve non-linear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis). Computational solution to the nonlinear PDEs, the split-step method, exist for specific equations like nonlinear Schrödinger equation. Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems. Lie Group Methods From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact. A general approach to solve PDE's uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find its Lax pairs, recursion operators, Bäcklund transform and finally finding exact analytic solutions to the PDE. Symmetry methods have been recognized to study differential equations arising in mathematics, physics, engineering, and many other disciplines. Numerical methods to solve PDEs The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM). The FEM has a prominent position among these methods and especially its exceptionally efficient higher-order version hp-FEM. Other versions of FEM include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), etc. 
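Of the three numerical approaches just named, the finite difference method is the quickest to sketch. Below is a minimal, hypothetical explicit scheme for the one-dimensional heat equation; the grid sizes and the initial condition are arbitrary illustrative choices, and a production solver would need more care with stability, boundary conditions, and error control.

```python
import numpy as np

# Explicit finite differences for u_t = alpha * u_xx on [0, 1], with u = 0 at both ends.
alpha = 1.0
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha           # explicit scheme needs alpha*dt/dx**2 <= 1/2
n_steps = 2500

u = np.sin(np.pi * x)              # initial condition with a known exact solution
for _ in range(n_steps):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

t = n_steps * dt
exact = np.exp(-alpha * np.pi**2 * t) * np.sin(np.pi * x)
print(np.abs(u - exact).max())     # small discretisation error
```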
Finite Element Method The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations. The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge-Kutta, etc. Finite Difference Method Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives. Finite Volume Method Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. See also • Courant, R. & Hilbert, D. (1962), Methods of Mathematical Physics, II, New York: Wiley-Interscience . • Evans, L. C. (1998), Partial Differential Equations, Providence: American Mathematical Society, ISBN 0821807722 . • Ibragimov, Nail H (1993), CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1-3, Providence: CRC-Press, ISBN 0849344883 . • John, F. (1982), Partial Differential Equations (4th ed.), New York: Springer-Verlag, ISBN 0387906096 . • Jost, J. (2002), Partial Differential Equations, New York: Springer-Verlag, ISBN 0387954287 . • Lewy, Hans (1957), "An example of a smooth linear partial differential equation without solution", Annals of Mathematics, 2nd Series 66 (1): 155–158 . • Olver, P.J. (1995), Equivalence, Invariants and Symmetry, Cambridge Press . • Petrovskii, I. G. (1967), Partial Differential Equations, Philadelphia: W. B. Saunders Co. . • Pinchover, Y. & Rubinstein, J. (2005), An Introduction to Partial Differential Equations, New York: Cambridge University Press, ISBN 0521848865 . • Polyanin, A. D. (2002), Handbook of Linear Partial Differential Equations for Engineers and Scientists, Boca Raton: Chapman & Hall/CRC Press, ISBN 1584882999 . • Polyanin, A. D. & Zaitsev, V. F. (2004), Handbook of Nonlinear Partial Differential Equations, Boca Raton: Chapman & Hall/CRC Press, ISBN 1584883553 . • Polyanin, A. D.; Zaitsev, V. F. & Moussiaux, A. (2002), Handbook of First Order Partial Differential Equations, London: Taylor & Francis, ISBN 041527267X . • Solin, P. (2005), Partial Differential Equations and the Finite Element Method, Hoboken, NJ: J. Wiley & Sons, ISBN 0471720704 . • Solin, P.; Segeth, K. & Dolezel, I. (2003), Higher-Order Finite Element Methods, Boca Raton: Chapman & Hall/CRC Press, ISBN 158488438X . • Stephani, H. (1989), Differential Equations: Their Solution Using Symmetries. Edited by M. MacCallum, Cambridge University Press . • Zwillinger, D. (1997), Handbook of Differential Equations (3rd ed.), Boston: Academic Press, ISBN 0127843957 .
Sunday, September 24, 2006 ... Deutsch/Español/Related posts from blogosphere Wavefunctions and hydrodynamics: crackpots vs. rational thinking It is no secret that I consider all people whose main scientific focus is a revision of the basic postulates of quantum mechanics - and a return to the classical reasoning - to be crackpots. They just seem too stubborn and dogmatic or too intellectually limited to understand one of the most important results of the 20th century science. Every new prediction based on the assumption that there is a classical theory that underlies the laws of quantum mechanics has been proven wrong. The local hidden variables have first predicted wrong outcomes in the EPR experiments and later they predicted the validity of Bell's inequalities and we know for sure that these inequalities are violated in Nature, just like quantum mechanics implies and quantifies. The non-local hidden variables predict a genuine violation of the Lorentz symmetry. I think that all these theories predict such a brutal violation of the Lorentz symmetry that they are safely ruled out, too. But even if someone managed to reduce the violation of the laws of special relativity in that strange framework, these theories will be ruled out in the future. Their whole philosophy and basic motivation is wrong. The whole political movement to return physics to the pre-quantum era is a manifestation of a highly regressive attitude to science - an even more obvious crackpotism than the attempts to return physics to the era prior to string theory. But among the proposals to undo the 20th century in physics, some of the papers are even more stupid than the average. This is also the case of the recent preprint There are many meaningless words in that paper but let me focus on a section whose content is meant to be very clear and it is very clear, except that it is also totally dumb. The author claims that Timothy Wallstrom was wrong in his criticism of a hydrodynamic approach to the wavefunction. What did Wallstrom point out? He looked at the theories in which • abs(psi)^2 is interpreted as a density of some liquid, while the usual "classical velocity" calculated from the wavefunction - the ratio of the probabilistic current and the probability density - is interpreted as an actual velocity of the same liquid. Wallstrom said that this map might look plausible locally but it is wrong globally. The argument is trivial. Take a generic wavefunction in three dimensions. It will satisfy • psi=0 at a one-dimensional curve - a "cosmic string" - because the one complex or two real conditions above remove two dimensions from space. What happens with "psi" if you make a round trip around this one-dimensional curve? In quantum mechanics, you will return to the same wavefunction: the wavefunctions must be single-valued and the locus where "psi=0" is not too special, after all. Most smart high school students who are really interested in physics know that the wavefunctions are single-valued. This is the fact that underlies the quantization of the orbital angular momentum as well as other observables. It's the ultimate reason why quantum mechanics has "quantum" in its name. On the other hand, in the hydrodynamics toy model, you can pick an arbitrary phase - which is a wrong result. The whole quantum character of quantum mechanics will disappear until you constrain the contour integrals for the velocity by a condition that is equivalent to the quantization condition in Bohr's old quantum theory, as Wallstrom pointed out. 
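The single-valuedness point is easy to see numerically. Here is a tiny NumPy sketch (the particular wavefunction is just an illustrative choice, not taken from any of the papers under discussion): for a single-valued psi, the phase picked up on a loop around a node is forced to be an integer multiple of 2π, so the circulation of the "hydrodynamic velocity" v = (ħ/m)∇(phase) is quantized in units of h/m, exactly the constraint Wallstrom's argument demands.

```python
import numpy as np

# A single-valued wavefunction with a node line along the z-axis:
# psi = (x + i*y) * exp(-(x^2 + y^2)/2); its phase is the polar angle.
theta = np.linspace(0.0, 2.0 * np.pi, 1001)
r = 0.5
xs, ys = r * np.cos(theta), r * np.sin(theta)
psi = (xs + 1j * ys) * np.exp(-(xs**2 + ys**2) / 2.0)

phase = np.unwrap(np.angle(psi))
winding = (phase[-1] - phase[0]) / (2.0 * np.pi)
print(winding)   # ~1.0: the loop integral of grad(phase) is 2*pi times an integer
```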
I claim that this generic lethal flaw of the hydrodynamical model should be comprehensible to every undergraduate student who has registered for the introductory course of quantum mechanics within a few minutes after the sixth class. This argument would certainly be easy for the physicists 80 years ago, but at any rate, it has been published for 12 years. How is it possible that someone who claims to work on these things is unable to get such a simple point at least for 12 years? The author of "Could quantum mechanics..." not only misunderstands the simple argument but he promotes this misunderstanding to a new branch of science. The author indeed admits that the phase monodromy can be arbitrary in the hydrodynamical theory - but he views it as an advantage over the conventional quantum mechanics. In other words, he indeed believes that there is no quantization of things like the angular momentum in the real world - no kidding - despite billions of experiments that say otherwise, and the hydrodynamical theory is apparently claimed to be better because it can transcend the quantization rules of quantum mechanics: the discrete spectrum of many observables is apparently another example of the sexist white male rules and stereotypes that reduce the diversity of ideas in science and that discriminate against the numbers that were not among the "priviliged" eigenvalues so far. ;-) I just can't stand this bigotic approach. I can't stand pompous fools. The paper is an extreme example of stupidity, and no matter how many books will be written about this stupidity - books promoting the people working on similar "problems" as original scientists who are almost as good as the string theorists if not better - and no matter how many thousands of impressionable laymen and idiotic bloggers will become convinced that it is a deep idea, all these ideas will continue to be the very same patently false stupidities. And that's the memo. P.S.: The very idea that the wavefunction should be reparameterized into different variables by a non-linear transformation is a deeply flawed misconception. The linearity of the Hilbert space of the quantum mechanical wavefunctions is one of the key principles that allows quantum mechanics to work. If one thinks about some other variables, the devastating effect of the non-linear reparameterization will become clearest near the points where "psi=0" because this is where the non-linear transformation becomes extremely singular. This is why the places where "psi" is approximately zero could have been used by Wallstrom to show that the hydrodynamical model is wrong. The hydrodynamics toy models always create a singular earthquake near the loci of "psi=0" even though there is obviously nothing too special about these points in quantum mechanics or in reality. But even if you picked any other model that is either redefining the wavefunction in a non-linear way or that is distinguishing priviliged operators on the Hilbert space in which your non-quantum description will be more classical, you will be able to find a proof that the theory is flawed. It's because the linearity of the wavefunction, the philosophical democracy between different observables (such as position and momentum - you can't say that one of them is classical and the other is not), and other postulates of quantum mechanics are not only beautiful and robust pillars of modern physics, but they are also experimentally proven facts. 
It is fundamentally wrong to single out some observables - such as positions - to be more classical than others. In reality just like in quantum mechanics, one can talk about the spectrum of all observables, and which of them behave more classically than others is dynamically determined by the Hamiltonian - by decoherence - not by pre-established dogmas. This fact has been known at least for 20 years and everyone who understood foundations of quantum mechanics has known this fact for 20 years. Did he know? How could have Lee Smolin submitted such a silliness? When you try to think about his wording, it is conceivable that he does not realize that it is silly. He just thinks that the multiply-valued functions are square-integrable, and therefore they should be a part of the Hilbert space. That's of course wrong because while they might be square-integrable, they are not really functions, and therefore they are not elements of the Hilbert space. A person familiar with the mathematical terminology would know that they are not in "L^2". Most physicists would know that because they aware, unlike Smolin, of physical considerations that make them certain that the multiply-valued "functions" are not allowed. It is also impossible to choose one value for each point which would translate multiply-valued functions on a circle to discontinuous functions. It's because the discontinuous functions wouldn't satisfy the Schrödinger equation near the discontinuity. In other words, the velocity of the liquid calculated from a discontinuous wavefunction will have an extra delta-function localized near the discontinuity, and it will thus differ from what Smolin claims to be the same thing. Equivalently, the discontinuity makes the energy diverge while the energy in the liquid picture is finite. The discontinuous wavefunctions are certainly not a part of the physically realizable Hilbert space. Above, I assumed that Lee doesn't realize why his comments are silly. Alternatively, you might imagine that Lee Smolin realizes that what he wrote is crap, but he wants a particular preprint with a preprint number that he will cite whenever someone tells him that the classical models of quantum mechanics are impossible because of Wallstrom's argument, among other things. Lee will tell them "Wallstrom's argument has been invalidated in quant-ph/yymmnnn but unfortunately I don't have enough time now to tell you what's the argument - just read the paper". The people will eventually find out that the preprint is rubbish but Lee will earn his 15 minutes of doubts which may be enough to survive one of his public talks in which he is pumping his silliness into the audience. Incidentally, a Harvard grad student (A.P.) has pointed out a paper by Marcel Reginatto about a very similar topic plus the Fisher information. It seems more serious. On the bottom of page 13, Reginatto also struggles with the Wallstrom's problem. As far as I can say, he also fails although not as miserably as L.S. because Reginatto at least admits that the wavefunction should be single-valued in a correct theory. ;-) Reginatto says that things look nice and simpler with a single-valued function which is not exactly what I call a physical explanation. There might be some interesting mathematical and philosophical idea in the "Fisher information" but I am probably not able to go through all the "epistemilogical" junk that has, as admitted on the bottom of page 8, no physical consequences. ;-) Add to Digg this Add to reddit snail feedback (0) :
Quantum mechanics/Particle in the box
Type classification: this is a lesson resource. We want to solve the time independent Schrödinger equation (Eq. 1) for some specific case. We will consider a few different potential energies V(x) and see what the eigenvalues and eigenfunctions look like. We will also practice with some numerical exercises while making several observations on the behaviour of quantum particles. From now on, we may refer to the time independent Schrödinger equation as just the 'Schrödinger' equation.
Particle in the box
Consider a particle of mass m which can only occupy the position between x=0 and x=L, and cannot escape from this portion of space. This is commonly known as the particle in a one dimensional box. A classical particle would go back and forth between the two boundaries. The potential energy for such a system can be written as V(x) = 0 when 0<x<L; V(x) = +∞ elsewhere (Eq. 2) The wavefunction must be zero for x<0 and x>L. The wavefunction must also be continuous and so it must be that ψ(0) = 0 and ψ(L) = 0 (Eq. 3) These two conditions are known as boundary conditions in the theory of differential equations. Very often you can find many solutions to a differential equation but only a few would satisfy the boundary conditions. If we consider only 0<x<L then the Schrödinger equation looks like −(ħ²/2m) d²ψ(x)/dx² = E ψ(x) or d²ψ(x)/dx² = −(2mE/ħ²) ψ(x) (Eq. 4) We need to find a function whose second derivative is proportional to the function itself and multiplied by a negative constant. As we know that d²cos(ax)/dx² = −a² cos(ax) and d²sin(ax)/dx² = −a² sin(ax) (Eq. 5) it looks like a couple of possible solutions to Equation 1 are ψ(x) = cos(ax) or ψ(x) = sin(ax) where a = √(2mE)/ħ. With differential equations, if you find two solutions, any linear combination of these solutions is still a solution. So the general solution of Equation 4 is ψ(x) = A cos(ax) + B sin(ax) (Eq. 6) where A and B are constants that can take any values. Equation 6 is the solution if we ignore the boundary conditions (Equation 3). However, if we impose that ψ(0) = 0 we immediately see that it must be A = 0. If we impose ψ(L) = 0 we get B sin(aL) = 0 (Eq. 7) At this point, you need to remember that sin(y) = 0 if y = 0, ±π, ±2π,..., ±nπ and so it must be aL = nπ where n = 1, 2,... (Eq. 8) This is only possible if the energy E takes discrete values: E = n²π²ħ²/(2mL²) (Eq. 9) The eigenvalues (energies) of the particle in the box Hamiltonian are therefore: En = n²h²/(8mL²) where n = 1, 2,..., (Eq. 10) The index n is called the quantum number as it is a label of the energy level. The eigenfunctions (wavefunction) of the same Hamiltonian are: ψn(x) = B sin(nπx/L) (Eq. 11) Note that the wavefunctions are also labelled with the quantum number. B is any arbitrary constant. You must be able to draw these wavefunctions and the corresponding energy levels.
Probability and normalization
Any eigenfunction can be multiplied by any constant and it is still the same eigenfunction. It is customary to multiply them by a constant such that the integral of |ψn(x)|² over all space is unity. The wavefunction is said to be normalized if it has the following property: ∫ |ψn(x)|² dx = 1 (integral over all space) (Eq. 12) When a wavefunction is normalized, |ψn(x)|²dx is the probability of finding the particle in the interval between x and x+dx (Equation 2 Lesson 2) In the case of the particle in the box, the normalized wavefunctions are: ψn(x) = √(2/L) sin(nπx/L) (Eq. 13)
Some observations to keep in mind
The regions where the wavefunction is zero are called nodes (there are nodal points in 1D and nodal planes in 3D). Almost always the number of nodes increases with the energy.
For the particle in the box, there are no nodes in the ground state (n = 1), 1 node for n = 2, 2 nodes for n = 3, etc. The energy levels are discrete because of the boundary conditions (without them all values of the energy would be allowed - see question 8 below). The particle in the box is a model system for all quantum mechanical systems. Whenever a particle is confined, discrete levels appear. It is possible to understand qualitatively many phenomena by just considering the particle in the box.
1. Calculate the lowest three energy levels of a particle of mass 10^-26 kg in a box of length L = 10^-9 m.
2. Calculate the lowest two energy levels (in eV) of an electron in a 2 Å long one-dimensional box.
3. Plot \psi_n(x) and its square |\psi_n(x)|^2 for n = 1, 2, 3, 4 for 0<x<L.
4. State for which values of x the probability of finding the particle is maximum (for the one dimensional particle in the box) if the system is in state n=1, n=2, or n=3.
5. Show that if ψn(x) is an eigenfunction of the Hamiltonian, Cψn(x) is also an eigenfunction (where C is any constant).
6. Show that \psi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L) is normalized. Is it always possible to normalize a wavefunction by multiplying it by an appropriate constant?
7. If ψn(x) is normalized, ψn(x)*ψn(x) = |ψn(x)|² is the probability density of finding the particle around x and \int_a^b |\psi_n(x)|^2\,dx is the probability of finding the particle in the region a<x<b (for any one dimensional system). Calculate the probability of finding a particle between x=0 and x=L/4 for a particle in a box in state n.
8. A free particle is a particle without any interactions, with potential energy V(x)=0 everywhere.
• Write the Hamiltonian for this system.
• Show that ψ(x) = e^{ikx} is an eigenfunction of this Hamiltonian.
• Find the eigenvalue corresponding to the eigenfunction e^{ikx}.
• Is the energy of the free particle quantized?
Next: Lesson 4 - Harmonic Oscillator
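As a quick numerical check of exercises 1 and 2 above, here is a minimal Python sketch (an editorial addition, not part of the original lesson; the constants are standard SI values):

# Editorial sketch: numerical check of exercises 1 and 2 using E_n = n^2 h^2 / (8 m L^2).
h = 6.62607015e-34      # Planck constant, J*s
eV = 1.602176634e-19    # joules per electron volt

def box_energy(n, m, L):
    """Energy in joules of level n for a particle of mass m (kg) in a box of length L (m)."""
    return n**2 * h**2 / (8.0 * m * L**2)

# Exercise 1: m = 1e-26 kg, L = 1e-9 m
for n in (1, 2, 3):
    print("E_%d = %.3e J" % (n, box_energy(n, 1e-26, 1e-9)))

# Exercise 2: electron in a 2 Angstrom (2e-10 m) box, energies in eV
m_e = 9.1093837015e-31  # electron mass, kg
for n in (1, 2):
    print("E_%d = %.2f eV" % (n, box_energy(n, m_e, 2e-10) / eV))

The first level of the electron in the 2 Å box comes out near 9 eV, which illustrates why confinement on atomic length scales produces electron-volt-scale level spacings.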
Highly advanced wave pool used to sink tiny toy boat
By Sam Downing
If you had a highly advanced machine that could create controlled waves in a laboratory, you would also totally use it to sink tiny toy boats. The actual purpose of this machine, used by scientists from Aalto University in Finland, is to study rogue waves — unusually large waves on the ocean's surface that seem to appear out of nowhere. For a long time it was thought rogue waves were just a marine legend. However, in the last 50 years or so they've not only been proven to exist, but blamed for the mysterious and sudden disappearances of a number of ocean-going vessels. (The most spectacular example: the 1978 disappearance of the "unsinkable" German super-tanker MS Munchen, believed to have been sunk by rogue waves in a fierce storm. Only a lifeboat was ever found.) Scientists still don't know a lot about rogue waves, but Aalto researchers have now learned to simulate how the waves occur in realistic ocean conditions. According to a university press release: "The birth of rogue waves can be physically explained through the modulation instability of water waves. In mathematical terms, this phenomenon can be described through exact solutions of the nonlinear Schrödinger equation, also referred to as 'breathers'." Well, that all makes perfect sense. (On Reddit, a commenter offers this slightly less complicated explanation of the hard science behind the experiment.) While us regular people are too dumb to understand the physics, we can all appreciate the joy of a toy boat being taken out by an artificially created wave. According to Professor Amin Chabchoub, the research has plenty of practical applications. "This will help us not only to predict oceanic extreme events, but also in the design of safer ships and offshore rigs," he said. "In fact, newly designed vessels and rig model prototypes can be tested to encounter in a small scale, before they are built, realistic extreme ocean waves."
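For the curious, the equation the press release alludes to has a compact standard form. In dimensionless units the focusing nonlinear Schrödinger equation and its simplest "breather", the Peregrine solution, read (an editorial illustration, not taken from the Aalto release):

i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\,\frac{\partial^2 \psi}{\partial x^2} + |\psi|^2 \psi = 0, \qquad \psi_P(x,t) = \left[\,1 - \frac{4(1+2it)}{1+4x^2+4t^2}\,\right] e^{it}.

The Peregrine breather starts as an almost uniform wave train, focuses to three times the background amplitude at x = t = 0, and then vanishes again - which is why breathers are used as prototypes of waves that "appear out of nowhere".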
1861: Quantum
Title text: If you draw a diagonal line from lower left to upper right, that's the ICP 'Miracles' axis.
This explanation may be incomplete or incorrect: Initial explanation of the idea behind the comic, still needs more detail. General relativity not mentioned yet. Seems it is listed as needing as much math as QM but gives less philosophical arguments...?
The comic depicts a relationship between how philosophically exciting the questions in a field of study are, versus how many years are required to understand the answers. For example, special relativity poses very intriguing philosophical questions, such as "can the temporal ordering of spatially separated events depend on the observer?", or "can time run at different rates for different observers?". But it doesn't take a lot of mathematical knowledge to understand the answers - that when objects move very close to the speed of light, time slows down and their lengths contract: the key Lorentz transformations ultimately involve little more than high-school algebra. Hence, Special Relativity is very high up on the y-axis but not very far on the x-axis. Basic physics is not very philosophically interesting but also not very complicated. Fluid dynamics, as captured by the Navier–Stokes equations, is very complicated, but it's concerned with a very specific topic - how water or other fluids flow around - so it doesn't lead to big philosophical questions. The "danger zone" in the top right of the chart is when a field of study is wide-ranging enough to pose broad philosophical questions, and also so complicated that most people can't answer those questions. Quantum mechanics deals with some very strange concepts that readily lend themselves to philosophical questions, such as the idea that merely observing something can change it, or the idea that something can be both a wave and a particle at the same time. However, the explanation for those phenomena is a very complicated piece of math, notably the Schrödinger equation, which means that most people don't have accurate answers to those questions. Randall suggests that this is the reason why so many people have "weird ideas" about quantum mechanics. 1240: Quantum Mechanics also discusses weird ideas that people have about quantum mechanics. General relativity also presupposes considerable mathematical sophistication to understand the Einstein field equations. However, the main contribution of GR – the explanation of gravity in terms of a curved spacetime – does not seem to induce a lot of philosophical novelty beyond that already seen in special relativity, possibly with the exception of black holes. The title text references the Insane Clown Posse (ICP) song "Miracles", made memetic by the lyric "Fucking magnets, how do they work?" An axis is the direction on a graph in which some quantity is increasing or decreasing. So things that are far along the "miracle" axis are presumably more miraculous. As you move from bottom-left to top-right on the graph, items become both more philosophically interesting and harder to understand. It would be fair to describe something that's hard to understand and raises big philosophical questions as a "miracle". The ICP "Miracles" axis would also intersect the topic "magnets" infamously mentioned in the song.
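To make the "little more than high-school algebra" point concrete, the Lorentz transformation for a boost with speed v along the x-axis is (an editorial illustration, not part of the original explanation):

t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad x' = \gamma\,(x - v t), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},

from which time dilation and length contraction follow by straightforward algebra.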
[A chart with the Y-axis titled "How Philosophically Exciting the Questions Are to a Novice Student" and the X-axis titled "How Many Years of Math are Needed to Understand the Answers". The upper-right portion of the chart is labeled "Danger Zone". The following topics are charted as follows: Basic Physics: low excitement, low prerequisites Fluid Dynamics: low excitement, high prerequisites Magnets: medium excitement, medium prerequisites General Relativity: medium excitement, high prerequisites (on the border to the "Danger Zone") Special Relativity: high excitement, low prerequisites Quantum Mechanics: high excitement, high prerequisites (in the "Danger Zone")] [Caption below the panel:] Why so many people have weird ideas about Quantum Mechanics The final paragraph probably should note that Magnets are directly on the ICP "Miracles" axis. JamesCurran (talk) 18:34, 10 July 2017 (UTC) And now I have to listen to "Miracles" again. Thanks explainxkcd. OldCorps (talk) 19:03, 10 July 2017 (UTC) Unless Randall includes Quantum Field Theory in Quantum Mechanics (which is unusual), General Relativity certainly must be on the right of QM, but on the chart they are almost at the same level - why? All physics students learn QM, but only a small minority take a GR course, because mathematically it's much more demanding. If you look closely, General Relativity is slightly to the right of Quantum Mechanics. 20:33, 10 July 2017 (UTC) _I'M_ extremely intrigued by Special Relativity being depicted as requiring not much more math than Basic Physics (the only thing I've studied on this chart - I'm not counting magnets as all I know are the grade school basics), but as being vastly more exciting (I enjoyed the physics courses I took, as far as I remember). :) NiceGuy1 (talk) 04:46, 11 July 2017 (UTC) It's interesting that special relativity is to the left of magnets when you can explain magnetism as a consequence of special relativity: from each charged particle's frame of reference, it's experiencing an electrostatic attraction or repulsion due to length contraction or an altered electric current due to time dilation. 05:11, 11 July 2017 (UTC) That's way more complicated than special relativity, at least to me.--TheSandromatic (talk) 07:55, 11 July 2017 (UTC) The thing with magnets is that they are like lasers; they are easy to get used to, but hard to understand the math behind. 07:19, 6 November 2017 (UTC) He forgot entropy. Maybe around where Special Relativity is? 22:22, 11 July 2017 (UTC) The Maxwell equations are more complicated than the Lorentz transformations. That is why Magnets are to the right of Special Relativity. 08:33, 11 July 2017 (UTC) Now I'm listening to "Highway To The Danger Zone". Thanks, upper-right corner! 13:03, 11 July 2017 (UTC) Every idea anyone has about quantum mechanics is weird. That includes those who can do the math for basic field theory (I have) and beyond. There are no non-weird mental models that fit what the math describes, and experiments validate. 15:02, 12 July 2017 (UTC) The explanation mentions a couple of philosophical questions, but I'm not sure that a novice to the field would even understand the question. I just can't imagine a room full of people getting excited if you said "Let's explore whether the temporal ordering of spatially separated events depends on the observer." Pudder (talk) 08:06, 11 August 2017 (UTC)
Optical Properties of Quasiperiodically Arranged Semiconductor Nanostructures
Bibliographic details
Author: Werchner, Marco
Contributor: Kira, Mackillo (Prof. Dr.) (doctoral advisor)
Format: Dissertation
Published: Philipps-Universität Marburg, 2009
Online access: PDF full text
Abstract: This work consists of two parts which are entitled "One-Dimensional Resonant Fibonacci Quasicrystals" and "Resonant Tunneling of Light in Silicon Nanostructures". A microscopic theory has been applied to investigate the optical properties of the respective semiconductor nanostructures. The studied one-dimensional resonant Fibonacci quasicrystals consist of GaAs quantum wells (QWs) that are separated by either a large spacer L or a small one S. These spacers are arranged according to the Fibonacci sequence LSLLSLSL... The average spacing satisfies a generalized Bragg condition with respect to the 1s-exciton resonance of the QWs. A theory that makes use of the transfer-matrix method and allows for the microscopic description of many-body effects, such as excitation-induced dephasing caused by the Coulomb scattering of carriers, has been applied to compute the optical spectra of such structures. Based on an appropriate single set of fixed sample parameters, the theory provides reflectance spectra that are in excellent agreement with the corresponding measured linear and nonlinear spectra. A pronounced sharp reflectivity minimum is found in the vicinity of the heavy-hole resonance both in the measured as well as in the calculated linear 54-QW spectra. Such sharp spectral features are suitable for application as optical switches or for slow-light effects. Hence, their properties have been studied in detail. Specifically, the influence of the carrier density, of the QW arrangement, of a detuning away from the exact Bragg condition, of the average spacing as well as of the ratio of the optical path lengths of the large and small spacers L and S, respectively, and of the QW number on the optical properties of the samples has been studied. The features of measured spectra could be attributed to different sample properties related to the sample setup. Additionally, self-similarity among reflection spectra corresponding to different QW numbers that exceed a Fibonacci number by one is observed, which identifies certain spectral features as true fingerprints of the Fibonacci spacing. In the second part, resonant tunneling of light in stacked structures consisting of alternating parallel layers of silicon and air has been studied theoretically. While total internal reflection would usually be expected for light incident on a silicon-air interface at an angle larger than the critical angle, light may tunnel through the air barrier due to the existence of evanescent waves inside the air layers if the neighboring silicon layer is close enough. This tunneling of light is in analogy to the well-known tunneling of a quantum particle through a potential barrier. In particular, the wave equation and the stationary Schrödinger equation are of the same form.
Hence, the resonant tunneling of light can be understood in analogy to the resonant tunneling of e.g. electrons as well. The characteristic feature of resonant tunneling is a complete transmission through the barrier at certain resonance energies. The transmission, reflection, and propagation properties of the samples have been determined numerically using a transfer-matrix method. Analytical expressions for the energetic resonance positions have been derived and are in excellent agreement with the numerical simulations. Special attention has been drawn to the lowest resonance out of a series of resonant-tunneling resonances. There, light has been observed to be concentrated within silicon layers the extension of which is smaller than the corresponding wavelength of the light. Specifically, the quality factor is large at the resonance energies, so that the resonant light leaves the sample delayed, which allows for the study of slow light. A detailed investigation of how the sample geometry influences the optical properties of the sample has been presented. In particular, it has been outlined how to design a sample to obtain certain desired optical properties. The optical properties that are related to the resonant tunneling strongly rely on the (mirror-)symmetry of the samples. If asymmetries - especially of the silicon wells inside the air barrier - are present in the sample setup, the resonant-tunneling efficiency is diminished. Such asymmetries are unavoidable in the production of the samples. Therefore, a parameter range has been identified in which reasonable transmission above a transmission probability of 50% can be expected taking typical fluctuations caused by the production process into account. Silicon-based resonant-tunneling structures of a setup proposed by the presented theory have already been fabricated and first experiments are under way. This will allow for theory-experiment comparisons.
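The transmission and reflection calculations described above rest on the standard characteristic-matrix (transfer-matrix) formalism for a layered stack. The following is a minimal editorial sketch for normal incidence only (the thesis treats oblique incidence beyond the critical angle and realistic silicon dispersion); the layer indices and thicknesses below are purely illustrative, not the geometry of the fabricated samples:

# Editorial sketch: normal-incidence transfer-matrix transmission through a
# silicon/air layer stack (illustrative parameters only).
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2.0 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    """layers: list of (refractive_index, thickness) ordered from input to output side."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    m11, m12, m21, m22 = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    denom = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    t = 2.0 * n_in / denom
    return (n_out / n_in) * abs(t) ** 2

# Example stack: alternating silicon (n ~ 3.5) and air layers, thicknesses in nm
stack = [(3.5, 100.0), (1.0, 300.0), (3.5, 100.0), (1.0, 300.0), (3.5, 100.0)]
for wl in (1300.0, 1400.0, 1500.0, 1600.0):
    print(wl, transmittance(stack, wl))

Sharp transmission maxima of such a stack play the role of the resonant-tunneling resonances discussed above; scanning the wavelength (or the layer thicknesses) shows how sensitively their position and height depend on the sample geometry and its symmetry.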
Sunday, June 24, 2012
The trouble with using the aufbau to find electronic configurations
Eric Scerri
Department of Chemistry & Biochemistry
Los Angeles, CA 90095
The Periodic Table and the Aufbau
One of the biggest topics in the teaching and learning of chemistry is the use of the aufbau principle to predict the electronic configurations of atoms and to explain the periodic table of the elements. This method has been taught to many generations of students and is a favorite among instructors and textbooks when it comes to setting questions. In this blog I am going to attempt to blow the lid off the aufbau because it is deeply flawed, or at least the sloppy version of the aufbau is. The flaw is rather subtle and seems to have escaped the attention of nearly all chemistry and physics textbooks and the vast majority of chemistry professors that I have consulted on the subject. The error comes from what may be an innocent attempt to simplify matters or maybe just an understandable slip, as I will try to explain. Whatever the cause, there is no excuse for perpetuating this educational myth.
So what's the problem?
The aufbau method was originally proposed by the great Danish physicist Niels Bohr, who was the first to bring quantum mechanics to the study of atomic structure and one of the first to give a fundamental explanation of the periodic table in terms of arrangements of electrons (electronic configurations). Bohr proposed that we can think of the atoms of the periodic table as being progressively built up starting from the simplest atom of all, that of hydrogen, which contains just one proton and one electron. Each subsequent atom differs from the previous one by the addition of one proton and one electron. Helium has two protons and two electrons, lithium has three of each, beryllium has four of each, all the way to uranium which at that time (1913) was the heaviest known atom, weighing in at 92 protons and 92 electrons. Neutron numbers vary and are quite irrelevant to this story, incidentally. The next ingredient is a knowledge of the atomic orbitals into which the electrons are progressively placed in an attempt to reproduce the natural sequence of electrons in atoms that occur in the real world. Oddly enough these orbitals, at least in their simplest form, nowadays come from solving the Schrödinger equation for the hydrogen atom, but let's not get too sidetracked for the moment.
The orbitals
The different atomic orbitals come in various kinds that are distinguished by labels such as s, p, d and f. Each shell of electrons can be broken down into various orbitals and as we move away from the nucleus each shell contains a progressively larger number of kinds of orbitals. Here is the well-known scheme:
First shell contains: 1s orbital only
Second shell contains: 2s and 2p orbitals
Third shell contains: 3s, 3p and 3d orbitals
Fourth shell contains: 4s, 4p, 4d and 4f orbitals
and so on.
The next part is that one needs to know how many of these orbitals occur in each shell. The answer is provided by the simple formula (2l + 1), where l takes different values depending on whether we are speaking of s, p, d or f orbitals. For s orbitals l = 0, for p orbitals l = 1, for d orbitals l = 2 and so on. As a result there are potentially one s orbital, three p orbitals, five d orbitals, seven f orbitals and so on for each shell. So far so good.
Now comes the magic ingredient which claims to predict the order of filling of these orbitals and here is where the fallacy lurks.  Rather than filling the shells around the nucleus in a simple sequential sequence, where each shell must fill completely before moving onto the next shell, we are told that the correct procedure is more complicated.   But we are also reassured that there is a nice simple pattern that governs the order of shell and consequently of orbitals filling. And this is finally the point at which the aufbau diagram, which I am going to claim lies at the heart of the trouble, is trotted out.  The order of filling is said to be obtained by starting at the top of the diagram and following the arrows pointing downwards and towards the left-hand margin of this diagram.  Following this procedure gives us the order of filling of orbitals with electrons according to this sequence, 1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d … This recipe when combined with a knowledge of how many electrons can be accommodated in each kind of orbital and the number of such available orbital in each shell is now supposed to give us a prediction of the complete electronic configuration of all but about 20 atoms in which further irregularities occur, such as the cases of chromium and copper.  Again I don’t want to get side-tracked and so will concentrate on one of the far more numerous regular configurations. Some examples To see how this simplified and ultimately flawed method works let me consider a few examples.  The atom of magnesium has a total of 12 electrons.  Using the method above this means that we obtain an electronic configuration of, 1s2, 2s2, 2p6, 3s2 in beautiful agreement with experiments which can examine the configuration directly through the spectra of atoms.  Let’s look at another example, an atom of calcium which has 20 electrons.  Following the well-known method gives a configuration of, 1s2, 2s2, 2p6, 3s2, 3p6, 4s2 and once again there is perfect agreement with experiments on the spectrum of calcium atoms.  But now let’s see what happens for the very next atom, namely scandium with its 21 electrons.  According to the time honored aufbau method the configuration should be, 1s2, 2s2, 2p6, 3s2, 3p6, 4s2, 3d1 and indeed it is.  But many books proceed to spoil the whole thing by claiming, not unreasonably perhaps, that the final electron to enter the atom of scandium is a 3d electron when in fact experiments point quite clearly to the fact that the 3d orbital is filled before the 4s orbital.  The correct version can be found in very few textbooks but seems to have been unwittingly forgotten or distorted in many cases by generations of instructors and textbook authors as I mentioned at the outset.  How can such an odd situation arise? Why the mistake occurs But how can such an apparently blatant mistake have occurred and taken such root in chemical education circles?  The answer is as interesting as it is subtle.  First of all there is the fact that the overall configuration is in fact correctly given by following the sloppy approach.  But if one asks questions about the order of filling the sloppy approach gives the wrong answer as I have been pointing out.  But even worse, it has led many teachers and textbooks to invent all kinds of contorted schemes in order to explain why even though the 4s orbital fills preferentially (as it does in the sloppy version) it is also the 4s electron that is preferentially ionized to form an ion of Sc+.  
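To see exactly what the sloppy recipe computes, here is a minimal sketch (an editorial illustration using the n+l "Madelung" ordering behind the aufbau diagram; it is not from the original post). It reproduces the textbook filling order and the overall configurations quoted above, while saying nothing correct about which orbital is actually higher in energy in the transition metals:

# Editorial sketch: the "sloppy aufbau" (n+l rule) and the configurations it predicts.
L_LABELS = "spdf"
CAPACITY = {l: 2 * (2 * l + 1) for l in range(4)}   # 2, 6, 10, 14 electrons per subshell

def madelung_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    # sort by n+l, ties broken by smaller n -- exactly the diagonal-arrow diagram
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def sloppy_configuration(Z):
    config, remaining = [], Z
    for n, l in madelung_order():
        if remaining <= 0:
            break
        electrons = min(CAPACITY[l], remaining)
        config.append("%d%s%d" % (n, L_LABELS[l], electrons))
        remaining -= electrons
    return " ".join(config)

print(sloppy_configuration(12))   # Mg: 1s2 2s2 2p6 3s2
print(sloppy_configuration(20))   # Ca: 1s2 2s2 2p6 3s2 3p6 4s2
print(sloppy_configuration(21))   # Sc: 1s2 2s2 2p6 3s2 3p6 4s2 3d1
print(sloppy_configuration(24))   # Cr: predicts ...4s2 3d4, but the observed configuration is ...4s1 3d5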
Since these contortions are pure inventions I will not waste the reader's time by looking into them. They are quite simply incorrect since, as a matter of fact, the 4s orbital fills last and consequently, as simple logic dictates, is the first orbital to lose an electron on forming a positive ion.
What's the evidence?
But how can I be so confident in claiming that the vast majority of chemistry teachers, professors and textbook authors have erred in presenting the sloppy version? The answer is that one can just consider the experimental evidence on the ions of any particular transition metal atom such as scandium:
Sc3+ (tri-positive ion) 1s2, 2s2, 2p6, 3s2, 3p6, 3d0, 4s0
Sc2+ (di-positive ion) 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s0
Sc1+ (mono-positive ion) 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s1
Sc (neutral atom) 1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s2
On moving from the Sc3+ ion to that of Sc2+ it is plain to see that the additional electron enters a 3d orbital and not a 4s orbital as the sloppy scheme dictates. Similarly, on moving from this ion to the Sc1+ ion the additional electron enters a 4s orbital, as it does in finally arriving at the neutral scandium atom or Sc. Similar patterns and sequences are observed for the subsequent atoms in the periodic table including titanium, vanadium, chromium (with further complications), manganese and so on.
Psychological factors
I have been thinking about what psychological factors contribute to the retention of the sloppy aufbau. As I already mentioned, it does give the correct overall configuration for all but about 20 atoms that show anomalous configurations, such as chromium, copper, molybdenum and many others. Another factor is that it gives chemistry professors the impression that they really can predict the way in which the atom is built up starting from a bare nucleus to which electrons are successively added. Presumably it also gives students the impression that they can make similar predictions and perhaps convinces them of the worthiness of the aufbau and scientific knowledge in general. The fact remains that it is not possible to predict the configuration in any of the transition metals, and indeed the lanthanides, or if it comes down to it even the p-block elements. Let's go back to scandium. Contrary to the sloppy aufbau that is almost invariably taught, the 3d orbitals have a lower energy than 4s starting with this element. If we were to try to predict the way that the electrons fill in scandium we might suppose that the final three electrons after the core argon configuration of 1s2, 2s2, 2p6, 3s2, 3p6 would all enter into some 3d orbitals to give
1s2, 2s2, 2p6, 3s2, 3p6, 3d3
The observed configuration, however, is
1s2, 2s2, 2p6, 3s2, 3p6, 3d1, 4s2
What's really happening?
This amounts to saying that all three of the final electrons enter 3d but two of them are repelled into an energetically less favorable orbital, the 4s, because the overall result is more advantageous for the atom as a whole. But this is not something that can be predicted. Why is it 2 electrons, rather than one or even none? In cases like chromium and copper just one electron is pushed into the 4s orbital. In an analogous case from the second transition series, the palladium atom, the competition occurs between the 5s and 4d orbitals.
In this case none of the electrons are pushed up into the 5s orbital and the resulting configuration has an outer shell of [Kr]4d10. None of this can be predicted in simple terms from a rule of thumb, and so it seems almost worth masking this fact by claiming that the overall configuration can be predicted, at least as far as the cases in which two electrons are pushed up into the relevant s orbital. To those who like to present a rather triumphal image of science it is too much to admit that we cannot make these predictions. The use of the sloppy aufbau seems to avoid this problem since it gives the correct overall configuration and hardly anybody smells a rat.
But why do electrons get pushed up into the relevant s orbital?
Finally, it is natural to now ask why it is that one or two electrons are usually pushed into a higher energy orbital, other than the answer I already gave, which is to say that doing so produces a more stable atom overall. The answer lies in the fact that 3d orbitals are more compact than 4s, to consider the first transition series, and as a result any electrons entering 3d orbitals will experience greater mutual repulsion. The slightly unsettling feature is that although the relevant s orbital can relieve such additional electron-electron repulsion, different atoms do not always choose to make full use of this form of sheltering, because the situation is more complicated than the way in which I have described it. After all, there is the fact that nuclear charge increases as we move through the atoms. At the end of the day there is a complicated set of interactions between the electrons and the nucleus as well as between the electrons themselves. This is what ultimately produces an electronic configuration and, contrary to what some educators would wish for, there is no simple qualitative rule of thumb that can cope with this complicated situation.
Bottom line
There is absolutely no reason for chemistry professors and textbook authors to continue to teach the sloppy version of the aufbau. Not only does it give false predictions regarding the order of electron filling in atoms but it also causes authors and instructors to tell further educational lies. They are forced to invent some elaborate explanations in order to undo the error, in an attempt to explain why 4s is occupied preferentially (which it is not) but is also preferentially ionized (which it is). The sloppy version also implies that the 4s orbital has a lower energy than 3d for all atoms, which is not the case, or that the 5s orbital has a lower energy than 4d, which is not the case for all atoms, and so on. Similar issues arise in the f-block elements. It is high time that the teaching of the aufbau and electronic configurations were carried out properly, in order to reflect the truth of the matter rather than taking a short-cut and compounding it with a further imaginary story.
The following references are among the few that give the correct explanation:
S-G. Wang, W. H. E. Schwarz, Angew. Chem. Int. Ed. 2009, 48 (19), 3404–3415.
S. Glasstone, Textbook of Physical Chemistry, D. Van Nostrand, New York, 1946.
D. W. Oxtoby, H. P. Gillis, A. Campion, Principles of Modern Chemistry, Sixth Edition, Thomson/Brooks Cole, 2007.
General Reference on the Periodic Table
Eric Scerri, A Very Short Introduction to the Periodic Table, Oxford University Press, 2011.
Me: So does Bohm's ontological interpretation. Thanks again for the clarification. -- Jack Sarfatti
Ruth wrote: Subquantum Information and Computation, Antony Valentini. Subjects: Quantum Physics (quant-ph). DOI: 10.1007/s12043-002-0117-1. Report number: Imperial/TP/1-02/15. Cite as: arXiv:quant-ph/0203049 (or arXiv:quant-ph/0203049v2 for this version).
On Jun 20, 2013, at 1:10 AM, Basil Hiley wrote:
On 19 Jun 2013, at 22:52, Ruth Kastner wrote: OK, not sure what the 'yes' was in response to, but I should perhaps note that you probably need to choose between the Bohmian theory or the transactional picture, because they are mutually exclusive. There are no 'beables' in TI. But there is a clear solution to the measurement problem and no discontinuity between the relativistic and non-relativistic domains as there are in the Bohmian theory (which has to abandon particles as beables at the relativistic level).
This last statement is not correct. Bohmian theory can now be applied to the Dirac particle. You do not have to abandon the particle for fermions at the relativistic level. There is a natural progression from Schrödinger → Pauli → Dirac. See Hiley and Callaghan, Clifford Algebras and the Dirac-Bohm Quantum Hamilton-Jacobi Equation, Foundations of Physics, 42 (2012) 192-208. More details will be found in arXiv: 1011.4031 and arXiv: 1011.4033.
• Jack Sarfatti On Jun 21, 2013, at 3:54 AM, Basil Hiley <b.hiley@bbk.ac.uk> wrote:
My work on the ideas that Bohm and I summarised in "The Undivided Universe" has moved on considerably over the last decade. But even in our book, we were suggesting that the particle could have a complex and subtle structure (UU p. 37) which could be represented as a point-like object only above the level of say 10^-8 cm. This comment, taken together with point 2 in our list of key points on p. 29, implies that we are not dealing with 'small billiard balls'. There could be an interesting and subtle structure that we have not explored - indeed we can't explore it with the formalism in common use, i.e. the wave function and the Schrödinger equation. This is my reason for exploring a very different approach based on a process philosophy (see my paper arXiv: 1211.2098). In the case of the electron, we made a partial attempt to discuss the Dirac particle in our book (UU chapter 12). The presentation there (section 12.2) only scratched the surface since we had no place for the quantum potential. However we showed in arXiv: 1011.4033 that if we explored the role of the Clifford algebra more thoroughly, we could provide a more detailed picture which included a quantum potential. We could then provide a relativistic version of what I call the Bohm model or, more recently, Bohmian non-commuting dynamics to distinguish it from a number of other variants of the model. In our approach all fermions could then be treated by one formalism which in the classical limit produced our 'rock-like' point classical particles. Bosons had to be treated differently; after all we do not have a 'rock-like' classical limit of a photon. Rather we have a coherent field. Massive bosons have to be treated in a different way, but I won't go into that here. reference? I have been struggling with that in my dreams. We noted the difference between bosons and fermions in the UU and treated bosons as excited states of a field.
In this case it was the field that became the beable and it was the field that was organised by what we called a 'super quantum potential'. In this picture the energy of say an emitted photon spread into the total field and did not exist as a localised entity. Yes, a rather different view from that usually accepted, but after all that was the way Planck himself pictured the situation. John Bell immediately asked, "What about the photon?" so we put an extra section in the UU (sec. 11.7). The photon concept arises because the level structure of the atom. It is the non-locality and non-linearity of the super quantum potential that sweeps the right amount of energy out of the field to excite the atom. Since the photon is no longer to be thought of as a particle, merely an excitation of the field, there is no difficulty with the coherent state. It is simply the state of the field whose energy does not consist of a definite number of a given hν. A high energy coherent field is the classical limit of the field, so there is no problem there either. All of this is discussed in detail in "The Undivided Universe". Hope this clarifies our take on these questions. • Jack Sarfatti The Brown-Wallace is an interesting paper, but I do not agree with its conclusions. Of course, this is exactly what you would expect me to say! What is needed is a careful response which I don't have time to go into here, so let me be brief. The sentence that rang alarm bells in their paper was "Our concern rather is with the fact that for Bohm it is the entered wave packet that determines the outcome; the role of the hidden variable, or apparatus corpuscle, is merely to pick or select from amongst all the other packets in the configuration space associated with the final state of the joint object-apparatus system." (See top of p. 5 of arXiv:quant-ph/0403094v1). As soon as I saw that sentence, I knew the conclusion they were going to reach. It gives the impression that it is the wave packet that is the essential real feature of the description and there need be nothing else. For us the 'wave packet' was merely short hand which was meant to signify the quantum potential that would be required to describe the subsequent behaviour of the particle. For us it was the quantum Hamilton-Jacobi equation that was THE dynamical equation. The Schrödinger equation was merely an part of an algorithm for calculating the probable outcomes of a given experimental arrangement. ( Yes it's Bohr!) But for us THERE IS an underlying dynamics which is a generalisation of the classical dynamics. Indeed my recent paper (arXiv 1211.2098) shows exactly how the classical HJ equation emerges from the richer quantum dynamics. The term 'wave packet' was merely short hand. There is no wave! This is why we introduced the notion of active information which is universally ignored. On Jun 20, 2013, at 5:21 AM, Ruth Kastner <rekastner@hotmail.com> wrote: Thank you Basil, but what about other particles? E.g. photons and quanta of other fields. -RK On Jun 20, 2013, at 9:19 AM, Ruth Kastner wrote: Well my main concern re photons is coherent states where there isn't a definite number of quanta. Perhaps this has been addressed in the Bohmian picture -- if so I'd be happy to see a reference. However I still think that TI provides a better account of measurement since it gives an exact physical basis for the Born Rule rather than a statistical one, and also the critique of Brown and Wallace that I mentioned earlier is a significant challenge for Bohmian approach. 
What B & W point out is that it is not at all clear that the presence of a particle in one 'channel' of a WF serves as an effective reason for collapse of the WF. From: adastra1@me.com Subject: Re: Reality of possibility Date: Thu, 20 Jun 2013 09:13:10 -0700 To: rekastner Never a problem for boson fields just look at undivided universe book now online Sent from my iPhone Subject: Re: Reality of possibility From: b.hiley Date: Thu, 20 Jun 2013 09:10:39 +0100 CC: adastra1@me.com > Subject: Reality of possibility > From: adastra1@me.com > Date: Wed, 19 Jun 2013 13:14:42 -0700 > To: rekastne > Yes > That's what i mean when I say that Bohm's Q is physically real. > Sent from my iPhone Begin forwarded message: From: Ruth Elinor Kastner <rkastner@umd.edu> Subject: Re: [ExoticPhysics] Basil Hiley's update on current state of work in Bohm's ontological picture of quantum theory Date: November 25, 2012 12:36:53 PM PST To: JACK SARFATTI <sarfatti@pacbell.net>, Exotic Physics <exoticphysics@mail.softcafe.net> In this approach I still don't see a clear answer to the question 'what is a particle,' unless it is that particles are projection operators. In PTI a 'particle' is just a completed (actualized) transaction. PTI deals with both the non-rel and relativistic realms with the same basic model, which testifies to the power of that model. It is straightforwardly realist: quantum states describe subtle (non-classical) physical entities. It seems to me that approaches dealing with conceptual problems in terms of abstract algebras are intrinsically non-realist or even anti-realist. Physics is the study of physical reality. Algebra is purely formal. Unless one wants to say that reality is purely formal,i.e. has no genuine physical content, I don't see how appealing to an abstract algebra as the fundamental content of quantum theory can provide interpretive insight into reality. Put more simply, a physical theory may certainly contain formal elements, but those elements need to be understood as *referring to something in the real world* in order for us to understand what the theory is describing or saying about the physical world. That is, it is the physical world that dictates what the theory's mathematical content and structure should be, because of the contingent features of the physical world. Saying that a theory has a certain mathematical structure or certain formal components does not specify what the theory is saying about reality. I think an interpretation of a theory should be able to provide specific physical insight into what a theory is telling us about the domain it mathematically describes. Begin forwarded message: From: JACK SARFATTI <Sarfatti@PacBell.net> Subject: [Starfleet Command] Basil Hiley's update on current state of work in Bohm's ontological picture of quantum theory Date: November 25, 2012 11:58:26 AM PST To: Exotic Physics <exoticphysics@mail.softcafe.net> Reply-To: SarfattiScienceSeminars@yahoogroups.com On Nov 25, 2012, at 2:55 AM, Basil Hiley <b.hiley@bbk.ac.uk> wrote: As I dig deeper into the mathematical structure that contains the mathematical features that the Bohm uses, Bohm energy, Bohm momentum, quantum potential etc. are essential features, as you imply, of a non-commutative phase space; strictly a symplectic structure with a non-commutative multiplication (the Moyal-star product).  This product combines into two brackets, the Moyal bracket, (a*b-b*a)/hbar and the Baker bracket (a*b+b*a)/2.  
The beauty of these brackets is to order hbar, Moyal becomes the Poisson and Baker becomes the ordinary product ab. Time evolution requires two equations, simply because you have to distinguish between 'left' and 'right' translations.  These two equations are in fact the two Bohm equations produced from the Schrödinger equation under polar decomposition in disguised form.  There is no need to appeal to classical physics at any stage. Nevertheless these two equations reduce in the limit order hbar to the classical Liouville equation and the classical Hamilton-Jacobi equation respectively. This then shows that the quantum potential becomes negligible in the classical limit as we have maintained all along.  There are not two worlds, quantum and classical, there is just one world.  It was by using this algebraic structure that I was able to show that the Bohm model can be extended to the Pauli and Dirac particles, each with their own quantum potential.  However here not only do we have a non-commutative symplectic symmetry, but also a non-commutative orthogonal symmetry, hence my interests in symplectic and orthogonal Clifford algebras. In this algebraic approach the wave function is not taken to be something fundamental, indeed there is no need to introduce the wave function at all!.  What is fundamental are the elements of the algebra, call it what you will, the Moyal algebra or the von Neumann algebra, they are exactly the same thing.  This is algebraic quantum mechanics that Haag discusses in his book "Local Quantum Physics, fields, particles and algebra".  Physicists used to call it matrix mechanics, but then it was unclear how it all hung together.  In the algebraic approach there is no collapse of the wave function, because you don't need the wave function.  All the information contained in the wave function is encoded in the algebra itself, in its left and right ideals which are intrinsic to the algebra itself.  Where are the particles in this approach?  For that we need Eddington's "The Philosophy of Science", a brilliant but neglected work.  Like a point in geometry, what is a particle?  Is it a hazy general brick-like entity out of which the world is constructed, or is it a quasi-local, semi-autonomous feature within the total structure-process?  Notice the change, not things-in-interaction, but structure-process in which any invariant feature takes its form and properties from the structure-process that gives it subsistence. If an algebra is used to describe this structure-process, then what is the element that subsists?  What is the element of existence?  The idempotent E^2=E has eigenvalues 0 or 1: it exists or it doesn't exist.  An entity exists in a structure-process if it continuously turns itself into itself.  The Boolean logic of the classical world turns existence into a permanent order: quantum logic turns existence into a partial order of non-commutative E_i!  Particles can be 'created' or 'annihilated' depending on the total overall process. Here there is an energy threshold, keep the energy low and it is the properties of the entity that are revealed through non-commutativity, these properties becoming commutativity to order hbar.  The Bohm model can be used to complement the standard approach below the creation/annihilation threshold.  Raise this threshold and then the field theoretic properties of the underlying algebras become apparent. All this needs a different debate from the usual one that seems to go round and round in circles, seemingly resolving very little. 
Basil. On 24 Nov 2012, at 19:10, JACK SARFATTI wrote: What is the ontology of "possibility"? In Bohm's picture it is a physical field whose domain is phase space (Wigner density) and whose range is Hilbert space. They are physically real, but not classical material. The basic problem is how can a non-physical something interact with a physical something? This is a contradiction in the informal language. Only like things interact with unlike things. Otherwise, it's "then a miracle happens" and we are back to magick's "collapse". We simply replace one mystery by another in that case. On Nov 24, 2012, at 5:59 AM, Ruth Elinor Kastner <rkastner@umd.edu> wrote: Yes. It serves as a probability distribution because it is an ontological descriptor of possibilities. From: JACK SARFATTI [sarfatti@pacbell.net] Sent: Saturday, November 24, 2012 1:56 AM To: Jack Sarfatti's Workshop in Advanced Physics Subject: Re: [ExoticPhysics] Asher Peres's Bohrian epistemological view of quantum theory opposes Einstein-Bohm's ontological view. Commentary #2 On Nov 23, 2012, at 9:24 PM, Paul Zielinski <iksnileiz@gmail.com<mailto:iksnileiz@gmail.com>> wrote: Did it ever occur to anyone in this field that the quantum wave amplitude plays a dual role, first as an ontological descriptor, and second as probability distribution? This I think is consistent with Bohm's ideas. When there is sub-quantal thermal equilibrium (A. Valentini) the Born probability rule works, but not otherwise. It seems reasonable to suppose that the wave interference phenomena of quantum physics reflect an underlying objective ontology, while the probability distributions derived from such physical wave amplitudes reflect both that and also our state of knowledge of a system. That a classical probability distribution suddenly "collapses" when the information available to us changes is no mystery. The appearance of collapse is explained clearly in Bohm & Hiley's Undivided Universe. See also Mike Towler's Cambridge Lectures. I will provide details later. So the trick here I think is to disentangle the objective ontic components from the subjective state-of-knowledge-of-the-observer components of the wave function and its associated probability density -- to "diagonalize" the conceptual matrix, so to speak. However, other than Bohm it looks like no one in foundations of quantum physics has yet figured out a way to do that. My favorite example is an apple orchard at harvest, the trees having fruit with stems of randomly varying strength. Let's suppose there is an earthquake and a seismic wave propagates along the ground. The amount of shaking of the trees at any given time and place will be proportional to the intensity of the seismic wave, given by the square of the wave amplitude, and therefore the smoothed density of fallen apples left on the ground after the earthquake will naturally be derivable from the square seismic wave amplitude (since that determines the energy available for shaking the trees). However, when we see that a particular apple has fallen, the derived probability density (initially describing *both* the intensity of the seismic wave *and* our state of knowledge about the likelihood of any particular apple falling to the ground) suddenly "collapses", but in this example such "collapse" is purely a function of our state of knowledge about a particular apple, and does not have any bearing on the wave amplitude from which it was initially derived. 
In this example, it is quite clear that the probability distribution applying to any particular apple can "collapse" due to an observation being made of any particular apple, even while the wave amplitude from which it was initially derived is entirely unaffected by the observation of the state of any particular apple. My question is, why is wave mechanics any different? Isn't this also a "Born interpretation" of the seismic wave? On Nov 23, 2012, at 10:25 PM, "Kafatos, Menas" <kafatos@chapman.edu<mailto:kafatos@chapman.edu>> wrote: I disagree, if one insists on just one view (realism) being the only possibility. We have to ask what do we mean by "real"? What kind of "space" does that wave function reside in? What are its units if not in Hilbert space referring to the Born interpretation? There are numerous attempts to ontologize the wave function (see Kafatos and Nadeau, "The Conscious Universe", Springer 2000). The hidden metaphysics is to assume axiomatically that an external reality exists independent of conscious observers. This ultimately leads to an increased number of theoretical constructs without closure of anything (e.g. the multiverse). Moreover, in the matrix mechanics the wave function is not needed. If psi were real, shouldn't it have been discovered long ago? Unless one argues that the theory of QM didn't exist until the 20th century so we couldn't have "discovered" it which case it gets us back to a description of nature dependent on observers! It is OK to ontologize anything but in that case, please follow the hidden metaphysics that is implied. And state this metaphysics. In a practical way to conduct science, we should remember how specific scientific constructs were developed. It didn't happen that somehow scientists like Bohr, Schroedinger, Heisenberg, Born, etc. stumbled on a physical quantity called the wave function psi. It was developed as part of wave mechanics which was complementary to Heisenberg's matrix mechanics. The other ontology is that consciousness is real. This one naturally follows from orthodox quantum theory and leads to a pragmatic view of the cosmos. Two ontologies, take your pick for specific science to do. One leads to many worlds interpretation and ultimately to, perhaps, an infinity of universes, one of a few (or only one?) that happens to be "right" one (including having something called the wave function) to have conscious observers; the other leads to one universe that is self-driven by itself. Can the two views/ontologies be reconciled? Yes, in a generalized complementarity framework, although one would negate the other in specific applications. What is "real" in this view is generalized principles applying at all levels and whatever science one works with. One deals with an objective view of the universe. The other with a subjective view of the universe (which relies on qualia). I won't go any further. See also a series of articles by Chopra, Tanzi and myself in the last several months in Huffington Post and San Francisco Chronicle. Menas Kafatos Sent from my iPhone On Nov 24, 2012, at 1:53 PM, "JACK SARFATTI" <sarfatti@pacbell.net<mailto:sarfatti@pacbell.net><mailto:sarfatti@pacbell.net>> wrote: Yes, I agree with Ruth. I think Peres is fundamentally mistaken. However, there are some important insights in his papers nevertheless. 
On Nov 23, 2012, at 7:22 PM, Ruth Elinor Kastner <rkastner@umd.edu> wrote: Concerning this statement by Peres and Fuchs in what is quoted below: "Here, we must be careful: a quantum jump (also called collapse) is something that happens in our description of the system, not to the system itself." How do they know that? That is just an anti-realist assumption; that is, it presupposes that quantum states and processes do not refer to entities in the world but only to our knowledge (i.e. that quantum states are epistemic). This view has come under increasing criticism (e.g. via the PBR theorem which disproves some types of 'epistemic' interpretations). I present a contrary, realist view in my new book on TI, in which measurements are clearly accounted for in physical terms and quantum states do refer to entities, not just our knowledge. Quantum 'jumps' can certainly be considered real and can be understood as a kind of spontaneous symmetry breaking. Details on that? In my view, quantum theory is not just about knowledge or epistemic probability; it is about the real world. There is no need to give up realism re quantum theory. Prior realist interpretations simply have not been able to solve the measurement problem adequately, because they neglect the relativistic level in which absorption and emission are acknowledged as equally important physical processes.
On Aug 11, 2012, at 1:41 AM, Basil Hiley <b.hiley@bbk.ac.uk> wrote: On 27 Jul 2012, at 07:00, nick herbert wrote: On Jul 26, 2012, at 9:50 AM, nick herbert <quanta@cruzio.com> wrote: 1. The oft-cited remark that non-relativistic Bohmian mechanics gives the same result as conventional QM for all conceivable experiments is plain wrong. The two theories possess radically different ontologies which lead to radically different consequences. BH: How can it be wrong? It uses exactly the same mathematics, without the addition or subtraction of any new mathematical structure. Its predicted expectation values found in all experiments are identical to those found from the conventional rules. If you want to criticise it, why not simply say "It adds no new experimental predictions, so why bother with it?"
Then you can get into arguments about which interpretation is better in your opinion. Then it is a matter of opinion, not experimental science. JS: However, Antony Valentini's extension does add new predictions consistent with my own independent investigations and also Brian Josephson's, which already has observational evidence in its favor (Libet, Radin, Bierman, Puthoff-Targ, Bem). NH: What exists in QM is a wavefunction, spread out in configuration space (and this wavefunction is "real" according to PBR). For a given quantum state all systems represented by that state have the same ontology. BH: The ontology gives meaning to the notion of a "quantum state". What does it mean to say "For a given quantum state all systems represented by that state have the same ontology"? NH: What exists in BM is an actual particle which for S-states has the remarkable property that v=0. In BM all systems represented by the same state are different--their difference (in the S-state case) being the differing positions of the static electron. A Bohmian S-state consists of an ensemble of stationary electrons each in a different position whose position pattern is given by psi squared. It is this v=0 property of BM S-wave electrons that is used to create counterexamples to the contention that BM and QM give the same predictions. 1. Muonic Hydrogen. Like the electron, the muon in the BM picture is stationary. Hence the muon lifetime in BM is just the natural lifetime. However in QM the muon has a velocity distribution so the lifetime is lengthened by relativity. BM and QM predict different lifetimes for the muonic atom. One may object that I have introduced relativity into a non-rel situation. However the QM and BM states are still non-rel. The lifetime of the muon can be seen as a measuring device probing the ontology of the muonic hydrogen. The probe uses a relativity effect to measure a non-rel configuration. BH: I recall having already answered this criticism some time ago. Time dilation is a relativistic phenomenon so you must use the relativistic Dirac theory in this case. JS: Yes, Nick's error here is obvious. He appeals to the wrong equation for the problem. It's a Red Herring. BH: In the past I have been entirely happy with the treatment of the Bohm model of the Dirac equation that we have given. However Bob Callaghan and myself have now obtained a new complete treatment of the Dirac equation with which I am completely happy. It uses the Clifford algebra in a fundamental way, as it must, to link with the known successful spinor structure. See Hiley and Callaghan: Clifford Algebras and the Dirac-Bohm Quantum Hamilton-Jacobi Equation, Foundations of Physics, 42 (2012) 192-208, DOI: 10.1007/s10701-011-9558-z, and in more detail in The Clifford Algebra Approach to Quantum Mechanics B: The Dirac Particle and its relation to the Bohm Approach, (2010) arXiv: 1011.4033. Our work shows that the Bohm charge velocity of the electron is, in fact, given by v = Psi^† alpha Psi, where alpha is the Dirac 4x4 matrix, which is related to the Dirac gamma matrices. (See Bohm and Hiley, The Undivided Universe, p. 272 for our original treatment, which is confirmed by our latest work.) If you now look at the wave function of the ground state of the Dirac hydrogen atom, which you can find in Bjorken and Drell p. 55, you will find the electron is moving in the ground state.
What is interesting is that when you take this expression and go to the non-relativistic limit you find the velocity is zero, exactly the result that the Schrödinger equation gives. Remember the energy levels calculated from the Schrödinger hydrogen atom are only approximations to those calculated using the Dirac hydrogen atom. Do you have a reference to the paper that measures the lifetime of the muon in muonic hydrogen? I can't find a good reference to a clean experiment which shows exactly how to measure the time dilation you mention. I have recently written up the details of the calculation that I have outlined above, but I would like to add a better reference to the actual measurement.

2. Electron Capture decay. Certain radioactive elements (Beryllium 7, for instance) possess an excess positive charge and do not have enough energy to decay by positron emission. Instead they capture the S-state electron, which transforms a nuclear proton into a neutron and neutrino (inverse beta decay). Electron Capture (EC) is a very delicate probe of the ontology of the S-state electron. QM ontology (all electrons the same) predicts a smooth exponential decay. After many half-lives all the Be7 is gone. BM ontology predicts a very different outcome: exponential decay for all electrons located inside the nucleus; infinite life for stationary Bohmian electrons located outside the nucleus.

BH: You must read past the simple Bohm model introduced in chapter three of our book, "The Undivided Universe". The first ten chapters contain a discussion of the non-relativistic Bohm model. There we show that if you want to apply the theory to problems where the particles interact either with other particles or with fields like the electromagnetic field, you must introduce an appropriate interaction Hamiltonian. In sections 5.3 to 5.5 we show how to deal with a very simple example of two-particle interactions. These sections were written simply to illustrate how the mathematics works and how you can explain the results using the Bohm interpretation. NB the interpretation is only applied after we have solved the Schrödinger equation containing the interaction Hamiltonian. You can't solve these equations exactly, so you have to use perturbation theory. Remember the maths is the same as for the standard interpretation. It is the interpretation that is different.

What happens if the interaction Hamiltonian involves the electromagnetic potentials? To discuss interaction with the electromagnetic field you must go to a relativistic theory. This means you must use the Dirac equation. Chapter 12 of our book begins to show you how to do this. The work of Bob Callaghan and myself mentioned above takes this further. What we have done is to discuss the free Dirac electron for simplicity. We simply wanted to show how it worked without introducing more realistic interaction Hamiltonians.

Now let me try to answer your question as to how we deal with electron capture. In order to describe this capture, we have to introduce the appropriate interaction Hamiltonian. What is the appropriate interaction Hamiltonian in this case? To find this we have to go to a review article like "Orbital electron capture by the nucleus" [Rev. Mod. Phys. 49 (1977) 77-221]. You will see that the interaction Hamiltonian is a weak electron current-hadron current interaction. You must now put that into the Dirac equation and calculate away. Well, the calculations are all done in the Rev. Mod. Phys.
paper and all we need to do is to interpret the results according to the Bohm model. Where your analysis goes wrong is that you assume (1) the non-relativistic theory and (2) that there is no interaction between the nucleus and the electron. You can do that to a first approximation to explain the principle of the Bohm model to, say, a first year undergraduate, but you must not say that's all there is. It is not a true reflection of the processes that are involved! There is an interaction between the nucleon and the electron, and you must take this into account even in the Bohm model if you want to understand the physics.

If your message is simply to say that the naive Bohm model based on the Schrödinger equation is inadequate to deal with these problems, then I totally agree with you. Bohm and I have always recognised that the '52 work was just a first step. Let me quote from his Causality and Chance book, p. 118: "It must be emphasized, however, that these criticisms are in no way directed at the logical consistency of the model, or at its ability to explain the essential characteristics of the quantum domain. Rather they are based on broader criteria, which suggest that many features of the model are implausible and, more generally, that the interpretation proposed in section 4 [of the '52 paper] does not go deep enough."

I thought that in our book, "The Undivided Universe", we made it clear that chapter 3 was a first step. All the remaining chapters were to show how the model was to be developed to meet many different actual situations found in nature. Finally in chapter 15, we outlined what was going to be developed in a second volume, which would probe a much deeper structure, but unfortunately Bohm died just as we were finishing the first book.

NH: If these two counter-examples to the QM/BM experimental identity conjecture have been discussed in the literature, I am unaware of it. But they should be.

BH: You are quite right, these points should be discussed in the literature. Unfortunately I have been too involved in developing the ideas outlined in chapter 15, and that means going deeper into what I think really underlies quantum phenomena. You will find some of this work in the latest publications of mine which are accessible on the net. A good place to find a comprehensive review of my latest efforts is in my paper Process, Distinction, Groupoids and Clifford Algebras: an Alternative View of the Quantum Formalism, in New Structures for Physics, ed Coecke, B., Lecture Notes in Physics, vol. 813, pp. 705-750, Springer (2011). Unfortunately I don't think it is available on the net at present, but if you are interested I can send you a copy. Thank you for your interest in our work.

Nick Herbert
Thursday, June 30, 2016

13- Change is the only constant (Heidi Toffler)

a) In memoriam Alvin Toffler
42. The Future Shock was amortized by irrationality.

b) LENR's specific shock(s)
Please complete the details yourself.

Of Mice, Materials and Men
Who is talking about LENR on social media forums?

Do not go gentle into that good night

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.

Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.

Wild men who caught and sang the sun in flight,
And learning, too late, they grieved it on its way,
Do not go gentle into that good night.

Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.

And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.

Space team discovers universe is self-cleaning - it is also about a source of cosmic energy

A good slogan: "Face your fears" (Jeff Wise)

tensions of modern learning

Wednesday, June 29, 2016

The Rhino Principle. A rhino is not a particularly subtle or intelligent creature, yet it has managed to dominate the savanna through sheer determination and aim. It takes initiative when it sees something it wants and puts everything into what it does best: charge!

I've always been suspicious of collective truths. (Eugene Ionesco)

a) Why is IH fabricating so many memes?

More readers have asked why the supporters of IH try to create so many anti-Rossi and, again, so many pro-IH memes. A first aspect showing they are not good in the art of Memetics is the "optimal density" of memes. Exactly as in agriculture, where there is an optimal density of plants in crops, too dense memes start to mutually annihilate each other; just remember the incompatible "the plant is impossible" (people killed by heat not removable) and the other, "the plant does not work at all". It is not clear whether a list of memes is made and used according to a plan, or whether an "everything goes" ineffective mentality rules without control. Why do they do this? Because, very probably, they do not have anything better! If they did, they would not have made another motion to dismiss. The situation is: Rossi wants to go to Court, IH wants to escape. You can guess three causes why this is so... please!

b) The stoppable subtleties of Jed Rothwell's personal memes

Admirably hyperactive for his followers, Jed Rothwell has created his own discussion thread serving simultaneously as a meme generator, a guillotine for everything Andrea Rossi, and a training camp for his very specific subtleties. This it is: I was wrong about Rossi, but what I fear most is that I might be partly right.

Subtle as a rhinoceros, Jed writes here about the very document that has irreversibly convinced him that there was no excess heat in the 1MW experiment - he has received this from Andrea Rossi. Subtle as a rhinoceros, he seemingly suggests here: "Andrea Rossi is a collection of sins and evilness, however even he is able to appreciate a genius, a genuine high class expert, and this must be the reason that this otherwise very secret document has found me.
Unfortunately, contrary to Rossi's expectation, the paper has convinced me quasi-instantly but beyond any doubt that the plant does not work at all, the Test is worse than a disaster." The text is imagined, but this must be the idea.

A possible - but morally impossible! - alternative is that Jed has invented the ghost-document thing as - again - a subtle trap for Rossi, who, desperate at seeing that Jed convinces everybody that there was no excess heat, will come out with his data, and Jed will officially massacre and annihilate them - an easy prey for him! Rossi did not want to comment about Rothwell's calorimetric genius; he knows why?!

I have to confess that I have not searched the rothwellisms thoroughly - surely I will not be one of his biographers - however the following statement, a strong pro-IH meme, has an even higher degree of rhinocerian subtlety; it is text and context, not extract:

"Our enemies will put fraud front and center. This will be another blow against cold fusion, thanks to Rossi. By the grace of God we may still have money from I.H., without which this field would be dead, dead, dead."

Some critics have found it needs a few definitions; then, Jed has never spoken about the grace of God before, as far as I remember. The message is crystal clear: "IH is the savior of LENR!" Isn't it too much to say that without money from IH, LENR would be 3-times dead; simply dead is not sufficient? Money from SKINR, possibly from the ARMY, money outside the US as in Japan, India, Russia, China, the EU etc. will not be able to keep LENR in a half-dead state? For Jed, the idea that LENR needs new ideas and young researchers even more than (micro)-funding is too subtle? He says directly to all LENR researchers: "Be against Rossi, be with IH - otherwise your research will die, die, die!" Further - no comment!

c) Re-read Eugene Ionesco's play - we LENR fighters have to avoid rhinocerization!

There is plenty of absurdity in this process of creating the memes of IH. I well remember this play about people losing their humanity. I am terrified and will not tell more; my readers can decide if the meme-factory has something rhinocerian in it or if, on the contrary, it is a congregation of angels?! However, I will finish in Jed's NEW style, asking: "For God's sake, IH, please go boldly and openly to the Trial!"

1) Is Clueless Jed Rothwell Paid or Played to Slander Penon and the ERV Reports on the MW COP~50 E-Cat Plant?

2) Andrea Rossi answers

Gerard McEk, June 28, 2016 at 8:11 AM: Dear Andrea, You recently said that the light of the QuarkX has given you an idea how the Rossi-effect may work. (In other words: you may have seen the light.) 1. Do you make any progress with the theory and 2. Do you expect it to lead to new patents? In the past you said that you were preparing many patents. 3. Do you expect some of these to be published soon? 4. Is there any progress in the domestic QuarkX or 5. Do you expect the lower temperature Ecat to be the most suitable solution? Thank you for answering our questions. Kind regards, Gerard

Andrea Rossi, June 28, 2016 at 3:55 PM: Gerard McEk: 1. yes 2. yes 3. no 4. yes 5. I do not know yet. Thank you for your attention, Warm Regards,

3) Andrea Rossi does not answer and does not comment: June 28, 2016 at 1:52 PM, Dear Dr Andrea Rossi: sifferkoll link given here. My comment: IH again tries to escape from the litigation.
If 1/100 of the slanders and the lies deposited in the blogs by the mad dogs of IH were true, IH would be eager to go to court… the fact that they are trying to delay and to suffocate the litigation makes clear that they are afraid of it. Evidently they know that you have evidence that will defeat them in Court, where what counts is not the chattering of the mad dogs, but the real evidence. In fact it appears that you are fighting to go to Court; they are trying to run away.

4) Russian language video: "News re LENR and CNF philosophical storm". Seminar "Philosophical Storm", June 28, 2016, presentation of Igor Iurievich Danilov. First part: QuarkX of Andrea Rossi. Second part: Microbes of Tamara Vladimirovna Sahno and Viktor Mihailovich Kurashov.

5) Did Jed Rothwell Admit Being an IH Contracted Spin Doctor with a Freudian Slip?

The correct link to the Calaon paper is this:

Understanding of molecular hydrogen has implications from industry to medicine

Tuesday, June 28, 2016

The rule or domination by a meme or memes, which are cultural practices or ideas that are transmitted verbally or by repeated actions from one person's conceptions to the minds of other people.

My Septoe: "20. We live in memecracies, ideas dominate us."

a) IH's plan seems to be based on memes - killer for Rossi and friendly for them

'Meme', the cultural equivalent of "gene", is a concept and word of vital importance; however, paradoxically, it is not a strong meme itself - it is a bit too intellectual. However, you cannot think well if you do not consider the existence of memes. I have written a lot about them, including in this Blog. If you are not familiar with memes, please read at least:

It is my pleasure to announce that now again memes have helped me to solve a problem I found at first very difficult - in retrospect I was slow and non-creative and rigid in thinking. It is about the enigma of the furious and seemingly senseless character and plant and technology assassination campaign of the IH propagandists led by Jed Rothwell (see a new opus by him below). Why, for Hermes's sake, if they are right and can automatically win the Trial? Why, for Minerva's sake, if they are wrong, does it help when facts speak at the Trial? First, it is obvious that IH manifests a totally negative enthusiasm toward the Trial and tries very hard to escape from it; see the papers at 4) Legal battle. No traces of the noble spirit of "Fiat Justitia, pereat mundus" - justice at any price; but perhaps the cost is too high and the chances to win not so very high.

So what they actually do is clear: stay calm but angry, inventive, efficient, and make MEMES - two types:
A - Killer anti-Rossi and anti-whatever-belongs-to-Rossi memes;
B - Friendly, nice pro-IH memes.
A-memes are cheap, free, but B-memes have a cost and need more fantasy... and money. PLEASE read for that the opinions of Doug Marker.

The plan is to disseminate these memes on the Web and make them contagious; the Press, public opinion and perhaps even the jurors from the Court will be memefied, so the 'obviously good' will increase tremendously its chances to defeat the 'evidently malefic.' We live in memecracies. Indeed?

b) Jed Rothwell's new opus

So dear Jed, there are only technical questions; later you say we have to apply the Scientific Method. Please apply it to Rossi's question regarding persuasion of the investors, OK?

1) Ok, So What Did Really Happen When Industrial Heat F*cked Up the Deal with Leonardo/Rossi? And Why?
2) Jones Day Lawyer Drones on Repeat in Another MTD. However, again Showing the Malicious Intent of IH!

6) The mystery of the irrational withdrawal of the ECAT support

7) TheNewFire - LENR News

8) Yet Another LENR Theory: Electron-mediated Nuclear Reactions (EMNR)

9) Andrea Calaon, Independent Researcher, Monza, Italy

An attempt is made to build an LENR theory that does not contradict any basic principle of physics and gives a relatively simple explanation to the plethora of experimental results. A single unconventional assumption is made, namely that nuclei are kept together by a magnetic attraction mechanism, as proposed in the 1980s by Valerio Dallacasa and Norman Cook. This assumption contradicts a non-proven detail of the standard model, which instead attributes the nuclear force to a residual effect of the strong interaction. The theory is based also on a property of the electron which has been known for long, but has rarely been used: the Zitterbewegung (ZB). This property should allow the magnetic attraction mechanism that binds nucleons together to manifest also between the electron and any isotope of hydrogen, leading to the formation of three neutral pseudo-particles (the component particles remain separate entities), collectively named here Hydronions (or Hyd). These pseudo-particles can then couple with other nuclei and lead to a fusion reaction "inside" the electron. The Coulomb barrier is not overcome kinetically, but through what could be interpreted as a range extension of the nuclear force itself, realized by the electron when some specific conditions are satisfied. The most important of these necessary conditions is that the electron has to "orbit" the hydrogen nucleus at a frequency of 2.055 × 10^16 Hz. This frequency corresponds to photons with an energy of about 85 eV or, equivalently, a wavelength of 14.6 nm in the Extreme Ultra Violet (EUV). So the large quanta of nuclear energy fractionate into EUV photons during the formation of the Hydronions and during the coupling of Hydronions to other nuclei. The formation of Hydronions requires the so-called Nuclear Active Environment (NAE), which is what makes LENR so rare and difficult to reproduce. The numbers suggest that the NAE forms when an unshielded atomic core electron orbital that has an "orbital frequency" near to the coupling frequency is stricken by a naked Hydrogen Nucleus (HNu). This theory therefore implies that the NAE is not inside the metal matrix, but in its immediate neighbourhood. The best candidate atoms for a NAE are listed, based on their ionization energies. The coincidence with the most common LENR materials appears noteworthy. The Electron Mediated Nuclear Reactions (EMNR) theory can explain also very rapid runaway conditions, radio emissions, biological NAE, and the so-called "strange radiation". © 2016 ISCMNS. All rights reserved. ISSN 2227-3123. Keywords: EMNR theory, Extreme ultra violet, Hydronion

10) Electron Deep Orbits of the Hydrogen Atom

J. L. Paillet (1), A. Meulenberg (2); (1) Aix-Marseille University, France; (2) Science for Humanity Trust, Inc., USA

This work continues our previous work [1] (and in a more developed form [2]) on electron deep orbits of the hydrogen atom. An introduction shows the importance of the deep orbits of hydrogen (H or D) for research in the LENR domain, and gives some general considerations on the EDO (Electron Deep Orbits) and on other works about deep orbits.
A first part recalls the known criticism against the EDO and how we face it. On this occasion we highlight the difference in the resolution of these problems between the relativistic Schrödinger equation and the Dirac equation, which leads, for the latter, to considering a modified Coulomb potential with finite value inside the nucleus. In the second part, we consider the specific work of Maly and Va'vra [3], [4] on deep orbits as solutions of the Dirac equation, so-called Deep Dirac Levels (DDLs). As a result of some criticism about the matching conditions at the boundary, we verified their computation, but by using a more complete ansatz for the "inside" solution. We can confirm the approximate size of the mean radii of DDL orbits and that it decreases when the Dirac angular quantum number k increases. This latter finding is a self-consistent result since (as distinct from the atomic-electron orbitals) the binding energy of the DDL electron increases (in absolute value) with k. We observe that the essential element for obtaining deep-orbit solutions is special relativity.

All such questions are valid and deserve answers. Doug Marker

Monday, June 27, 2016

"Citius, Altius, Fortius. Faster, Higher, Stronger." (Olympic quote) In soccer, LENR and Life, Citius always wins!

b) Facts can be understood only in their context

Jed Rothwell: this question, his not mine, can be formulated as: facts have significance only in context. A first fast example: you read; for a successful test.

2) LENR afternoon with Ubaldo Mastromatteo - more videos: Pomeriggio Lenr Ubaldo Mastromatteo (5)

[Vo]: Ukrainian Paper on the active particle of LENR

4) A cold fusion paper in Dutch: the scientists Tamara Sahno and Viktor Kurashov participated at the press conference. Link to the patent for this invention

7) Also see the above info, here:

8) Greg Goble Energy 54+ Black Swans listed by Paul Maher

Umair Haque: "The Art of Awakening" - It is time for a LENR awakening!

Why rudeness at work is contagious and difficult to stop
Open Access

The effects of porosity on optical properties of semiconductor chalcogenide films obtained by the chemical bath deposition

Yuri V Vorobiev (1), Paul P Horley (2), Jorge Hernández-Borja (1), Hilda E Esparza-Ponce (2), Rafael Ramírez-Bon (1), Pavel Vorobiev (1), Claudia Pérez (1) and Jesús González-Hernández (2)

Nanoscale Research Letters 2012, 7:483

Received: 16 April 2012. Accepted: 4 August 2012. Published: 29 August 2012

This paper is dedicated to the study of thin polycrystalline films of semiconductor chalcogenide materials (CdS, CdSe, and PbS) obtained by ammonia-free chemical bath deposition. The obtained material is of polycrystalline nature, with crystallites of a size that, from a general point of view, should not result in any noticeable quantum confinement. Nevertheless, we were able to observe a blueshift of the fundamental absorption edge and a reduced refractive index in comparison with the corresponding bulk materials. Both effects are attributed to the material porosity, which is a typical feature of the chemical bath deposition technique. The blueshift is caused by quantum confinement in pores, whereas the refractive index variation is the evident result of the density reduction. A quantum mechanical description of the nanopores in a semiconductor is given based on the application of even mirror boundary conditions for the solution of the Schrödinger equation; the results of the calculations give a reasonable explanation of the experimental data.

Keywords: polycrystalline films; chalcogenide materials; nanopores; quantum confinement in pores

Chemical bath deposition (CBD) is a cheap and energy-efficient method commonly used for the preparation of semiconductor films for sensors, photodetectors, and solar cells. It was one of the traditional methods to obtain chalcogenide semiconductors including CdS and CdSe [1–6]. However, large-scale CBD deposition of CdS films raises considerable environmental concerns due to the utilization of highly volatile and toxic ammonia. On the other hand, the volatility of ammonia modifies the pH of the reacting solution during the deposition process, causing irreproducibility of thin film properties for the material obtained in different batches [1, 3]. We manufacture CdS, CdSe, and PbS films using the CBD process to minimize the production cost and energy consumption. An ammonia-free CBD process was used to avoid negative environmental impact (see [7], reporting an example of a CBD-made solar cell with structure glass/ITO/CdS/PbS/conductive graphite with a quantum efficiency of 29% and an energy efficiency of 1.6%). All these materials have melting temperatures above 1,000°C, remaining stable during the deposition process. It is also known that PbS is very promising for solar cell applications, confirmed by the recent discovery of multiple exciton generation in its nanocrystals [8].

Chemical bath-deposited films [9] have a particular structure. As a rule, at the initial deposition stages, small (3 to 5 nm) nanocrystals are formed. They exhibit strong quantum confinement leading to a large blueshift of the fundamental absorption edge. Historically, the blueshift was in fact first discovered in CBD-made CdSe films [9, 10]. At later stages, the crystallite size becomes larger, so that the corresponding blueshift decreases. Another feature characteristic of the process is a considerable porosity [3, 9] inherent to the growth mechanism, which takes place ion by ion or cluster by cluster depending on the conditions or solution used (see also [11, 12]).
The degree of porosity decreases for larger deposition times because the film becomes denser. At the initial stage, the porosity can be up to 70% [9], and at the final stages it will be only about 5% to 10%.

In this paper, we present experimental results on the effect of porosity, for relatively large deposition times, upon the optical characteristics of CBD-made semiconductor materials such as CdS, CdSe, and PbS. We show that the nanoporosity can blueshift the absorption edge, leading to the variation observed for material with pronounced nanocrystallinity. For the theoretical study of nanopores in a semiconductor, we use mirror boundary conditions to solve the Schrödinger equation, which were successfully applied to nanostructures of different geometries [13–15]. We show that the same treatment of pores allows us to achieve a good correlation between theoretical and experimental data.

The authors successfully developed an ammonia-free CBD technology for polycrystalline CdS, CdSe, and PbS films, described in detail elsewhere [4–7, 11, 12]. We characterize the obtained structures by composition, microstructure (including average grain size), and morphology using X-ray diffraction, SEM, and EDS measurements. Optical properties were investigated with UV-vis and FTIR spectrometers. All experimental methods are described in the aforementioned references, together with the detailed results of this complex material study. Here, we would like to discuss optical phenomena characteristic of the entire group of semiconductor films studied, skipping the technological details that are given in [4–7, 11, 12].

Results and discussion

For CBD-made materials obtained after a long deposition time (which resulted in dense films with a crystallite size of about 20 nm), we observed a blueshift of the fundamental absorption edge relative to the bulk material data [16] in all cases, with the following shift values: 0.06 eV for CdS [7], 0.15 eV for CdSe [6] (see also Figure 1), and 0.1 to 0.4 eV for different samples of PbS (Figure 2). This effect was accompanied by a reduction of the refractive index n (in comparison with bulk crystal data; see Figure 3 for CdSe and Figure 4 for PbS). This reduction is larger for samples obtained with small deposition times, but it is always present in the films discussed here. We connect both effects with the pronounced porosity of the films obtained by the CBD method. In particular, the blueshift in the dense CBD films is attributed to quantum confinement in the pores.

Figure 1: Transmission spectrum of the 0.5-μm thick CdSe film.

Figure 2: Diagram used to determine the bandgap of a PbS CBD sample with a growth time of 3 h. The value of D corresponds to optical density.

Figure 3: Refractive index of CdSe. Squares indicate the data for the bulk material adapted from [17], and circles correspond to the CBD film.

Figure 4: Optical constants n, k of PbS CBD films with different deposition times.

Figure 1 presents the transmission spectrum of the 0.5-μm-thick CdSe film (deposition time of 4 h) displaying a clear interference pattern, characterized by transmission maxima at 2dn = Nλ and minima at 2dn = (N − 1/2)λ. Here, λ is the wavelength, d is the film thickness, and N is an integer defining the order of the interference pattern. With these expressions, we calculated the spectrum of the refractive index (Figure 3, circles). The squares in the same figure present the data for the bulk material [17], displaying a considerable drop of the refractive index for the film in comparison with the bulk material.
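The interference relation just described can be turned into numbers directly. Below is a minimal sketch in Python; the 0.5-μm thickness matches the CdSe film discussed above, but the extremum wavelengths and order assignments are illustrative placeholders rather than the measured data.

# Sketch: refractive index of a thin film from its transmission interference
# maxima, using 2*d*n = N*lambda as in the text. Wavelengths are illustrative.
d = 0.5e-6  # film thickness in metres (0.5 um)

maxima = {  # interference order N -> wavelength of a transmission maximum (m)
    4: 630e-9,
    5: 510e-9,
    6: 430e-9,
}

for N in sorted(maxima):
    lam = maxima[N]
    n = N * lam / (2 * d)  # refractive index at this wavelength
    print(f"lambda = {lam * 1e9:.0f} nm -> n = {n:.2f}")

With these made-up inputs the sketch returns n between about 2.5 and 2.6, i.e. values of the order reported for the CBD films; the real analysis of course uses the measured extrema of Figure 1.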
Figure 2 presents the diagram for PbS that allows determination of the bandgap via the direct interband transitions observed for all the materials studied, obtained by plotting the squared product of optical density and photon energy as a function of the latter. Similar diagrams for CdS and CdSe were given in [6, 7]. The case of PbS requires more attention. Figure 5 presents the dependence of the crystallite size upon the deposition time. Figure 4 shows the spectra of the optical constants (refractive index n and extinction coefficient k) measured for four PbS films deposited with growth times ranging from 1 to 4 h; in the latter case, the result was a 100-nm-thick film. It is clear that for larger deposition times the film becomes denser, so that the refractive index and extinction coefficient increase. Their spectral behavior follows qualitatively the corresponding curves of the bulk material, but the values are essentially lower, even when the deposited film has a considerable thickness. For example, the refractive index of the film is at most 4 at the wavelength of 450 nm, whereas for the bulk material the corresponding value is 4.3. As for the extinction coefficient k, the maximum of 2.75 is achieved at the wavelength of 350 nm, with the corresponding bulk value being 3.37.

Figure 5: Dependence of the grain size of PbS CBD samples on growth time. The line is given as an eye guide only.

We assume that the pores in a dense CBD film correspond to the spaces between crystallite boundaries. Therefore, in cubic crystals, the pores will most probably be of prismatic shape, defined by the plane boundaries of the individual grains. These prismatic pores will most probably have a length (height) equal to the grain size, with a square or right-triangular cross-section. As pores and crystallites are considered to be of equal height, the question of the volume fraction of pores reduces to two dimensions, being equal to the ratio of the pore cross-sectional area to the total cross-section of the film, assuming that on average there will be one pore per crystallite. The dimensions of the pore will define the blueshift observed, which can be seen from the following theoretical consideration.

Electron confined in pores: quantum mechanical approach

It was proposed (see [13–15]) to treat semiconductor quantum dots (QDs) as 'mirror-wall boxes' confining the particle, resulting in mirror boundary conditions for the analytical solution of the Schrödinger equation in the framework of the effective mass approximation. The basic assumption is that a particle (an electron or a hole) is specularly reflected by a QD boundary, which sets the boundary conditions as the equivalence of the particle's Ψ-function at an arbitrary point r inside the semiconductor (Ψ_r) with the wave function at the image point (Ψ_im). It must be mentioned that the Ψ-function at real and image points can be equated by its absolute value, since the physical meaning is connected with |Ψ|², so that mirror boundary conditions can have even and odd forms (Ψ_r = Ψ_im in the former case, and Ψ_r = −Ψ_im in the latter). The 'odd' case is equivalent to impenetrable boundary conditions and strong confinement, because the Ψ-function vanishes at the boundary. The milder case of even mirror boundary conditions represents weak confinement and occurs when a particle is allowed to have a tunneling probability into the boundary. It is evident that our basic assumption is favorable for the effective mass approximation, as it increases the effective path length of a particle in the semiconductor material.
Besides, in the high-symmetry case, the assumption of mirror boundary conditions forms a periodic structure filling the space. We have shown [15] that the use of even mirror boundary conditions gives the same solution as Born-von Karman boundary conditions applied to a periodic structure. The treatment performed in [13–15] of QDs with different shapes (rectangular prism, sphere and square-base pyramid) yielded energy spectra in good agreement with the published experimental data, achieved without any adjustable parameters.

Let us consider an inverted system: a pore formed by a void surrounded by a semiconductor material. The reflection accompanied by partial tunneling into the QD boundary (for the case of even mirror boundary conditions) can be described as the equivalence of the Ψ-function values at a real point in the vicinity of the boundary and at the reflection point in the mirror boundary. Hence, the solution of the Schrödinger equation for a pore within a semiconductor material will be the same as that for a QD of equal geometry, with an equal expression for the particle's energy spectrum. Table 1 summarizes the expressions for the energy spectra obtained for QDs of several basic shapes with application of even mirror boundary conditions. All spectra have the same character, with a quadratic dependence on the quantum numbers (all integers, or odd numbers for the particular case of a spherical QD [15]) and an inverse quadratic dependence on the QD's dimensions. Besides, the position of the energy levels has an inverse dependence on the effective mass [18, 19].

Table 1: Energy spectra of different QDs

Cube, side a: E = (3/8) h² n² / (m a²)
Prism (square base, side a), height c >> a: E = (1/4) h² n² / (m a²)
Sphere, diameter a: E = (1/8) h² (2n + 1)² / (m a²)
Prism, triangular base (side a), height c >> a: E = h² n² / (2 m a²)

Here, n is a quantum number, and m is the effective mass of the particle.

Comparison with the experiment

In the following discussion, we take into account that typical pores in CBD materials have a characteristic size a of several nanometers [3, 9], being much smaller than the Bohr radius a_B for an exciton, a/2 << a_B, which is especially important for the case of exciton formation under the action of a light beam incident on the semiconductor. The energy difference defines the blueshift of the absorption edge. In all the semiconductors studied, the value of a_B exceeds 15 nm according to the expression below:

a_B = 4π ħ² ε ε₀ / (μ e²), with reduced mass μ = m_e m_h / (m_e + m_h)

Here, m_e,h is the electron/hole effective mass, ε is the dielectric constant of the material, and ε₀ is the vacuum permittivity. Following the argumentation given in [18, 19], we see that one can directly apply the expressions for the energy spectra, because the separation between the quantum levels, proportional to ħ²/(m a²), is large compared to the Coulomb interaction between the carriers, which is proportional to e²/(ε ε₀ a). Therefore, the Coulomb interaction can be neglected, and the energy levels can be found from the quantum confinement effect alone. Accordingly, we shall calculate the emission/absorption photon energy for transitions corresponding to the exciton ground state, which is given by n = 0 for a spherical QD and n = 1 for the other geometries. From Table 1, it follows that the lowest energy value is obtained for a spherical QD, whereas for a prism with a square cross-section the energy value is twice as large. For all other geometries, the energy is of the latter order of magnitude.
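The following paragraph applies the square-base prism spectrum from Table 1 to the measured blueshifts. As a purely numerical sketch of that step, solving ΔE = h²/(4μa²) for the pore size a, with the effective masses and shift values quoted in the surrounding text:

# Numerical sketch: pore size implied by the measured blueshift, using the
# square-base prism level E = h^2 n^2 / (4 m a^2) from Table 1 with n = 1
# and the exciton reduced mass mu in place of m.
from math import sqrt

h = 6.626e-34   # Planck constant, J s
m0 = 9.109e-31  # free electron mass, kg
eV = 1.602e-19  # J per eV

samples = {      # material: (reduced mass in units of m0, blueshift in eV)
    "CdSe": (0.1, 0.15),
    "CdS": (0.134, 0.06),
    "PbS": (0.0425, 0.4),
}

for name, (mu, dE) in samples.items():
    a = h / (2 * sqrt(mu * m0 * dE * eV))  # from dE = h^2 / (4 mu a^2)
    print(f"{name}: pore size a ~ {a * 1e9:.1f} nm")

This gives about 7.1 nm for CdSe and 6.7 nm for PbS, close to the 7 nm and 6.5 nm quoted below; the CdS estimate comes out near 9.7 nm, somewhat above the quoted 8 nm, presumably because slightly different inputs were used there.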
For the estimation of porosity effects, we will use the expression for a prismatic QD with a square base, assuming that the fundamental absorption edge corresponds to the generation of an exciton with the ground state energy

ħω_min = E_g + h² / (4 μ a²),

with E_g the semiconductor bandgap. In the case of CdSe (exciton reduced mass of 0.1 m0), using this expression and the band edge shift ħω_min − E_g = 0.15 eV (1.88 − 1.73), we calculate a pore size of 7 nm. For the average crystallite dimension of 22 nm, the pore fraction would thus be (7/22)² ≈ 10%, which is twice as big as the relative reduction of the refractive index found (Figure 3). To explain the edge shift observed in CdS (exciton reduced mass 0.134 m0 [16]), one obtains a pore size of 8 nm. Here, the crystallite size is 20.1 nm, making the total pore fraction approximately 12%. The observed refractive index is reduced from 2.5 for the bulk material [7, 16] to 2.3 for the 600-nm-thick film, yielding a pore fraction of 9%, which is close to our predictions. The reduced mass for PbS is 0.0425 m0 [16], and the observed edge shift is 0.4 eV, yielding an average pore size of 6.5 nm. With a crystallite size of 20 nm, this gives a pore fraction of 10% (the observed reduction of the refractive index in [7] was 8%, and from Figure 4 we obtain a value of 7.5%).

We see that in all cases the volumetric percentage of pores calculated using the blueshift values renders the correct order of magnitude, as verified from the refractive index reduction. However, the latter value is always smaller, which may mean that the pores' height is about 30% to 40% less than that of the grains.

It should be noted that in the case of PbS, due to the high value of the dielectric constant (17) and the small exciton reduced mass, the Bohr radius for an exciton (21 nm) appears to be of the same order of magnitude as the grain size. This means that the quantum confinement effect can be observed even without taking into account the porosity of the material. This effect was studied experimentally in [20] for PbS spherical quantum dots. It was found that in PbS quantum dots with a diameter of 3.5 nm, a blue band edge shift of 1.05 eV is observed. Taking into account that the blueshift due to quantum confinement is inversely proportional to the square of the dot's diameter, we find that the shift caused by a crystallite size of 20 nm would be equal to 0.03 eV, which is about 10 times smaller than the observed values. We also note that the smaller crystallite sizes observed in our experiments at the early stages of the CBD process (variation from 8 to 18 nm, see Figure 5) do not explain the experimentally observed blueshift either. Thus, we conclude that accounting for the nanopores is mandatory, as it offers improved agreement between theoretical and experimental data.

We report on an ammonia-free CBD method that provides cheap, efficient, and environmentally harmless production of CdS, CdSe, and PbS films. The material porosity inherent to the CBD technique can be used to fine-tune the material bandgap towards the required values, paving promising ways for solar cell applications. The theoretical description of porosity based on the solution of the Schrödinger equation with even mirror boundary conditions provides a good correlation of theoretical and experimental data.

The authors are grateful to Editor Prof. Andres Cantarero for the support and encouragement in the revision of the manuscript. PV and CP wish to thank CONACYT for their scholarships.
Authors' Affiliations

(1) CINVESTAV-IPN Unidad Querétaro, Libramiento Norponiente 2000, Fracc. Real de Juriquilla, Querétaro, México
(2) CIMAV Chihuahua/Monterrey, Avenida Miguel de Cervantes 120, Chihuahua, México

1. Nemec P, Nemec I, Nahalkova P, Nemcova Y, Trojank F, Maly P: Ammonia-free method for preparation of CdS nanocrystals by chemical bath deposition technique. Thin Solid Films 2002, 403–404: 9–12.
2. Nakada T, Mitzutani M, Hagiwara Y, Kunioka A: High-efficiency Cu(In, Ga)Se2 thin film solar cell with a CBD-ZnS buffer layer. Sol Energy Mater Sol Cells 2001, 67: 255–260. 10.1016/S0927-0248(00)00289-0
3. Lokhande CD, Lee EH, Jung KID, Joo QS: Ammonia-free chemical bath method for deposition of microcrystalline cadmium selenide films. Mater Chem Phys 2005, 91: 200–204. 10.1016/j.matchemphys.2004.11.014
4. Ortuño-Lopez MB, Valenzula-Jauregui JJ, Ramírez-Bon R, Prokhorov E, González-Hernández J: Impedance spectroscopy studies on chemically deposited CdS and PbS films. J Phys Chem Solids 2002, 63: 665–668. 10.1016/S0022-3697(01)00210-4
5. Valenzula-Jauregui JJ, Ramírez-Bon R, Mendoza-Galvan A, Sotelo-Lerma M: Optical properties of PbS thin films chemically deposited at different temperatures. Thin Solid Films 2003, 441: 104–110. 10.1016/S0040-6090(03)00908-8
6. Esparza-Ponce H, Hernández-Borja J, Reyes-Rojas A, Cervantes-Sánchez M, Vorobiev YV, Ramírez-Bon R, Pérez-Robles JF, González-Hernández J: Growth technology, X-ray and optical properties of CdSe thin films. Mater Chem Physics 2009, 113: 824–828. 10.1016/j.matchemphys.2008.08.060
7. Hernández-Borja J, Vorobiev YV, Ramírez-Bon R: Thin film solar cells of CdS/PbS chemically deposited by an ammonia-free process. Sol En Mat Solar Cells 2011, 95: 1882–1888. 10.1016/j.solmat.2011.02.012
8. Ellingson RJ, Beard MC, Johnson JC, Yu P, Micic OI, Nozik AJ, Shabaev A, Efros AL: Highly efficient multiple exciton generation in colloidal PbSe and PbS quantum dots. Nano Lett 2005, 5: 865–871. 10.1021/nl0502672
9. Hodes G: Semiconductor and ceramic nanoparticle films deposited by chemical bath deposition. Phys Chem Chem Phys 2007, 9: 2181–2196.
10. Hodes G, Albu-Yaron A, Decker F, Motisuke P: Three-dimensional quantum size effect in chemically deposited cadmium selenide films. Phys Rev B 1987, 36: 4215–4222. 10.1103/PhysRevB.36.4215
11. Sandoval-Paz MG, Sotelo-Lerma M, Mendoza-Galvan A, Ramírez-Bon R: Optical properties and layer microstructure of CdS films obtained from an ammonia-free chemical bath deposition process. Thin Solid Films 2007, 515: 3356–3362. 10.1016/j.tsf.2006.09.024
12. Sandoval-Paz MG, Ramírez-Bon R: Analysis of the early growth mechanisms during the chemical deposition of CdS thin films by spectroscopic ellipsometry. Thin Solid Films 2007, 517: 6747–6752.
13. Vieira VR, Vorobiev YV, Horley PP, Gorley PM: Theoretical description of energy spectra of nanostructures assuming specular reflection of electron from the structure boundary. Phys Stat Sol C 2008, 5: 3802–3805. 10.1002/pssc.200780107
14. Vorobiev YV, Vieira VR, Horley PP, Gorley PN, González-Hernández J: Energy spectrum of an electron confined in the hexagon-shaped quantum well. Science in China Series E: Technological Sciences 2009, 52: 15–18. 10.1007/s11431-008-0348-6
15. Vorobiev YV, Horley PP, Vieira VR: Effect of boundary conditions on the energy spectra of semiconductor quantum dots calculated in the effective mass approximation. Physica E 2010, 42: 2264–2267. 10.1016/j.physe.2010.04.027
16. Singh J: Physics of Semiconductors and Their Heterostructures. McGraw-Hill, New York; 1993.
17. Palik ED (Ed): Handbook of Optical Constants of Solids. Academic Press, San Diego; 1998.
18. Éfros AL, Éfros AL: Interband absorption of light in a semiconductor sphere. Sov Phys Semicond 1982, 16(7): 772–775.
19. Gaponenko SV: Optical Properties of Semiconductor Nanocrystals. Cambridge University Press, Cambridge; 1998.
20. Deng D, Zhang W, Chen X, Liu F, Zhang J, Gu Y, Hong J: Facile synthesis of high-quality, water-soluble, near-infrared-emitting PbS quantum dots. Eur J Inorg Chem 2009, 2009: 3440–3446. 10.1002/ejic.200900227

© Vorobiev et al.; licensee Springer. 2012
Free chemistry software - quantum chemistry

In the previous part of the journey into free chemistry software I wrote about molecular mechanics. The molecular mechanics view of a molecular system is a classical mechanical view, i.e., the atoms move according to Newton's laws. Quantum mechanics revolutionized the view of the atomic world. The Schrödinger equation is a general model, and the number of parameters is limited. Solving the Schrödinger equation for an atomic or molecular system is often referred to as an ab initio calculation. The term ab initio is Latin and can be translated as "from the beginning" or "from first principles".

The Schrödinger equation - in particular the time-independent equation - can be used to calculate many different properties of chemical substances. Properties like enthalpy, entropy, and heat capacity can be calculated, since the partition function can be obtained from the solution. Moreover, it is possible to predict data for various spectra (IR, Raman, UV-VIS, etc.). The problem is that it is not possible to find an analytical solution for any non-trivial case. The solutions to the equation are called wave functions. The Copenhagen interpretation of quantum mechanics is that the wave function represents the probability distribution (in space) of the electron.

In order to solve the (time-independent) Schrödinger equation you must make two important approximations. The first approximation is that the nucleus of the atom is fixed in space, and the Schrödinger equation is reduced to only model the electrons. This approximation is called the Born-Oppenheimer approximation. The rationale behind the approximation is that the nucleus of an atom is much heavier than the electrons, and the nucleus moves much more slowly (this is admittedly a classical mechanical picture). The second approximation is that the wave function is a sum - or linear combination - of other functions. These functions form a basis set. The basis functions are often atomic orbitals (solutions to the Schrödinger equation for a single atom), and they are often selected in such a manner that they require as few CPU cycles to process as possible. A popular family of basis sets is the Slater-type orbitals.

There are many quantum chemistry packages in the world. From a license perspective, they fall into three groups. The first group is completely proprietary software. You buy the package (maybe the source code is included so you can compile it yourself) and can use it on the set of computers you have bought a license for. It is not rare to find special pricing schemes for users in the academic world. Gaussian is a leading software vendor in this field, and a site licence (with UNIX source code) for commercial use is listed at roughly 40,000 USD. This might sound like a high price tag, but you must remember that the development of a new drug can easily run into billions. Compared to that, the software is quite cheap - if you are in big pharma. Gaussian does provide much more than just a quantum chemistry calculation engine - it includes various supporting utilities.

The group of free software packages is another group with a completely different licensing model. A typical example is MPQC (Massively Parallel Quantum Chemistry). Released under the GNU General Public License, it is a truly free software package. The motivation of the developers is to test new algorithms - in particular for parallelization using either compute clusters or multicore computers.
Somewhere in between you find the third group. Software packages in this group are free to download (if you are in academia or using them for personal purposes), but you are not allowed to redistribute the source code or binaries. And often you must include a citation to a particular scientific paper if you publish any results based on the software. In this group you find packages like GAMESS (both the US and UK versions).

The program Gabedit provides a uniform interface to many quantum chemistry packages including GAMESS-US, GAMESS-UK, Gaussian and MPQC. The program exists as a binary package for most Linux distributions including Ubuntu Linux and Debian GNU/Linux, which makes the installation quite simple.

1,3,7-trimethylxanthine is a known (legal) psycho-active chemical. It is found naturally in coffee, tea, and soft drinks. The trivial (non-systematic) name is caffeine. It is a small molecule with an aromatic ring structure.

Setting up a calculation in Gabedit

You can either draw your molecule or load a file. The structures of many molecules can be found at PubChem or similar services. A 3-dimensional structure of caffeine can be found at PubChem. A 3-dimensional structure is nothing more than the coordinates of the nuclei of the atoms in the molecule. Using OpenBabel (discussed in previous blog entries), you can convert the PubChem file format to something that Gabedit can understand. Setting up a calculation is simple, but you are required to know and understand all the parameters and methods. For a casual user of quantum chemistry software, Gabedit might help, but it essentially lets you write the configuration files manually; the program is only a thin wrapper.

Monitoring a calculation

Once you have edited the calculation parameters, you are ready to go. It is possible to monitor the progress of your simulation, but beware that a quantum chemistry calculation might take hours. Gabedit will use the good old UNIX trick called nohup so you can come back later. Moreover, Gabedit lets you perform the calculation on remote machines. This is useful for long calculations. When the calculation has finished, Gabedit can assist you in analyzing the result. Only a limited number of analyses are offered, and if you need more advanced analysis, you will probably write small scripts and programs to help you.

Analyzing the result of a geometry optimization using Gabedit

The really good thing about Gabedit is that it provides a uniform interface to the most common quantum chemistry packages. Other software packages exist, e.g., Ghemical. Ghemical is fairly tightly bound to GNOME, and tries to solve all needs of computational chemistry in one package. The latest version of Ghemical is recent (October 2011).

MPQC is a pure free software project which provides an ab initio package. It is designed to run in multicore or cluster environments, i.e., in massively parallel environments. Ubuntu and Debian packages exist. Currently, two packages exist: the core program and supporting utilities. Even an Emacs mode can be found for editing the configuration files. The main mode of operation is that the user writes a configuration or input file and runs the program from the command line or through a batch system. The OpenBabel conversion utility supports MPQC and can write raw input files from structure file formats. This simplifies the usage a lot in the learning period. In order to test MPQC, I have prepared an input file using caffeine.
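As a rough sketch of the OpenBabel conversion step mentioned above, using OpenBabel's Python bindings (pybel; on newer OpenBabel releases the module is imported as "from openbabel import pybel"): the file names are placeholders, and the "mpqcin" output format code is an assumption, so check the format list of your own installation (for example with "babel -L formats") before relying on it.

# Sketch: turn a PubChem 3-D SDF of caffeine into inputs for Gabedit and MPQC.
# File names are placeholders; the "mpqcin" writer is assumed to be available.
import pybel

mol = next(pybel.readfile("sdf", "caffeine.sdf"))    # PubChem 3-D structure
mol.write("xyz", "caffeine.xyz", overwrite=True)     # plain coordinates, readable by Gabedit
mol.write("mpqcin", "caffeine.in", overwrite=True)   # raw MPQC input file

The generated MPQC input will most likely still need the method, basis set and any thermochemistry options adjusted by hand.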
As MPQC supports the calculation of thermodynamic properties (non-electronic enthalpy and entropy), my test input file will perform this analysis after the geometry optimization, using the STO-6G basis set and a Hartree-Fock method. The Hartree-Fock method only calculates the ground state, so the thermodynamics will be somewhat inaccurate.

MPQC/Linux Speed-up

As MPQC is a parallel calculation program, it is worth analyzing the speed-up, i.e., how well it scales as the number of cores/CPUs increases. MPQC is parallelized using MPI, SysV shared memory or POSIX threads. Running on multiple cores is simple with MPQC. You can specify the number of threads on the command line. For a 4-thread calculation, the command line is:

mpqc -threadgrp "<PthreadThreadGrp>:(num_threads = 4)" -o caffeine.out caffeine.in

where caffeine.in is the input file and the output is found in caffeine.out. As I have access to a hyperthreaded quad-core computer, I have tested the POSIX thread parallelization (please remember that Linux has a highly optimized POSIX thread implementation). The result of my simple benchmark shows that hyperthreading is not a good idea in a heavy computing environment - notice that the speed-up levels off at four cores. The reason is probably that the threads are competing for a limited resource: the floating-point units. A calculation of the caffeine molecule took about 8 hours using 4 POSIX threads, and this only calculated the ground state using the STO-6G basis set.

In general it is possible to carry out quantum chemistry calculations using free software if you can live without a fancy user interface. If you do many calculations, you might be happy to know that you can modify and extend the source code. For a casual computational chemist (maybe an organic chemist predicting spectra), free chemistry software might not be the solution yet.

Saved by Dropbox

I have installed Dropbox on my laptops and my phone. On my laptops I use it for two purposes: synchronization and backup. Initially, I used it for synchronization of my private and my work laptop (both computers are running Linux). The laptop at work is a bit unstable. It crashes probably 3-4 times a week, and I believe it has something to do with the temperature of the processor. It happened to me the other day, and an OpenOffice document was left in a state where the file did not contain anything but zeroes. Luckily, Dropbox keeps track of my revisions of the files. That means that I was able to go one revision back and recover much of my file (I probably lost half a page). Needless to say, I'm pretty happy with the Dropbox service now!

Emacsforum 2011

Peter Toft and I are in the process of preparing Emacsforum 2011 with some help from Troels Henriksen (at DIKU) and Keld Simonsen (from KLID). The program is almost ready for publication, so I will not say too much - but there will be something for scientists and developers. Even our Evil Twin will be represented. The mini-conference takes place 12th November 2011 at DIKU. There is no conference fee - and there will be no benefits. If you are using Emacs (and even XEmacs) and live in the Copenhagen area, Emacsforum is a good place to meet fellow users.
From Wikipedia, the free encyclopedia

In physics, energy is a property of objects which can be transferred to other objects or converted into different forms.[1] The “ability of a system to perform work” is a common description, but it is misleading because energy is not necessarily available to do work.[2] For instance, in SI units, energy is measured in joules, and one joule is defined “mechanically”, being the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton.[note 1] However, there are many other definitions of energy, depending on the context, such as thermal energy, radiant energy, electromagnetic, nuclear, etc., where definitions are derived that are the most convenient.

Common energy forms include the kinetic energy of a moving object, the potential energy stored by an object’s position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object’s temperature. All of the many forms of energy are convertible to other kinds of energy. In Newtonian physics, there is a universal law of conservation of energy which says that energy can be neither created nor destroyed; however, it can change from one form to another. For “closed systems” with no external source or sink of energy, the first law of thermodynamics states that a system’s energy is constant unless energy is transferred in or out by mechanical work or heat, and that no energy is lost in transfer. This means that it is impossible to create or destroy energy. While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations.[3] The total energy of a system can be calculated by adding up all forms of energy in the system.

Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. Mass and energy are closely related. According to the theory of mass–energy equivalence, any object that has mass when stationary in a frame of reference (called rest mass) also has an equivalent amount of energy whose form is called rest energy in that frame, and any additional energy acquired by the object above that rest energy will increase an object’s mass.
For example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth.

Forms of energy

Main article: Forms of energy

Potential energies are often measured as positive or negative depending on whether they are greater or less than the energy of a specified base state or configuration such as two interacting bodies being infinitely far apart. Wave energies (such as radiant or sound energy), kinetic energy, and rest energy are each greater than or equal to zero because they are measured in comparison to a base state of zero energy: "no wave", "no motion", and "no inertia", respectively.

These notions of potential and kinetic energy depend on a notion of length scale. For example, one can speak of macroscopic potential and kinetic energy, which do not include thermal potential and kinetic energy. Also what is called chemical potential energy is a macroscopic notion, and closer examination shows that it is really the sum of the potential and kinetic energy on the atomic and subatomic scale. Similar remarks apply to nuclear "potential" energy and most other forms of energy. This dependence on length scale is non-problematic if the various length scales are decoupled, as is often the case … but confusion can arise when different length scales are coupled, for instance when friction converts macroscopic work into microscopic thermal energy.

Some examples of different kinds of energy:

Forms of energy
- Potential: a category comprising many forms in this list
- Mechanical: the sum of (usually macroscopic) kinetic and potential energies
- Chemical: that contained in molecules
- Electric: that from electric fields
- Magnetic: that from magnetic fields
- Radiant (≥ 0): that of electromagnetic radiation including light
- Nuclear: that of binding nucleons to form the atomic nucleus
- Ionization: that of binding an electron to its atom or molecule
- Gravitational: that from gravitational fields
- Rest (≥ 0): that equivalent to an object's rest mass
- Thermal: a microscopic, disordered equivalent of mechanical energy

Thomas Young – the first to use the term "energy" in the modern sense.

The word energy derives from the Ancient Greek: ἐνέργεια energeia "activity, operation",[4] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[5] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat.
Units of measure

Main article: Units of energy

In 1843 James Prescott Joule independently discovered the mechanical equivalent in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle.

In the International System of Units (SI), the unit of energy is the joule, named after James Prescott Joule. It is a derived unit. It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However energy is also expressed in many other units not part of the SI, such as ergs, calories, British Thermal Units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units.

The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce.

Scientific use

Classical mechanics

Work, a form of energy, is force times distance:

\[ W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{s} \]

This says that the work (\(W\)) is equal to the line integral of the force \(\mathbf{F}\) along a path \(C\).

Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).

Main articles: Bioenergetics and Food energy

Basic overview of energy and human life.

C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O

and some of the energy is used to convert ADP into ATP:

ADP + HPO4^2− → ATP + H2O

The rest of the chemical energy in O2[10] and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[note 2]

gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
Daily food intake of a normal adult: 6–8 MJ

Earth sciences

In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[12] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth.
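To make two of the relations quoted above concrete, namely the work integral \(W=\int_C \mathbf{F}\cdot\mathrm{d}\mathbf{s}\) and the gravitational potential energy \(mgh\) behind the "150 kg weight lifted through 2 metres: 3 kJ" figure, here is a minimal numerical sketch in Python. The straight vertical path, the step count, and g = 9.81 m/s² are assumptions chosen only for this illustration.

```python
import numpy as np

# Work as a line integral W = sum of F . ds, evaluated numerically for a
# constant force along a straight vertical path (an assumed toy example).
m, g, h = 150.0, 9.81, 2.0                 # kg, m/s^2, m
F = np.array([0.0, 0.0, m * g])            # force needed to lift the weight (N)

# Discretise the path C: straight line from the floor up to height h.
n_steps = 1000
path = np.linspace([0.0, 0.0, 0.0], [0.0, 0.0, h], n_steps)
ds = np.diff(path, axis=0)                 # small displacement vectors along C
W = np.sum(ds @ F)                         # accumulate F . ds over the path

print(f"work done lifting the weight: {W/1e3:.2f} kJ")   # about 2.94 kJ
print(f"m*g*h directly:               {m*g*h/1e3:.2f} kJ")
# Both agree, and both are close to the '3 kJ' figure quoted in the text.
```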
Quantum mechanics

Main article: Energy operator

In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation \(E = h\nu\) (where \(h\) is Planck's constant and \(\nu\) the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (work to accelerate a mass from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest mass energy: energy which every mass must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: \(E = mc^2\), where \(m\) is the mass, \(c\) is the speed of light in vacuum, and \(E\) is the rest mass energy.

It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, then mass too has inertia and gravity associated with it.

In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[13] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Transformation

Main article: Energy transformation

A turbo generator transforms the energy of pressurised steam into electrical energy.

There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.

Energy is also transferred from potential energy (\(E_p\)) to kinetic energy (\(E_k\)) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other.
This can be demonstrated by the following: \(E_{pi} + E_{ki} = E_{pF} + E_{kF}\). The equation can then be simplified further since \(E_p = mgh\) (mass times acceleration due to gravity times the height) and \(E_k = \tfrac{1}{2}mv^2\) (half mass times velocity squared). Then the total amount of energy can be found by adding \(E_p + E_k = E_{\mathrm{total}}\).

Conservation of energy and mass in transformation

Matter may be converted to energy (and vice versa), but mass cannot ever be destroyed; rather, mass/energy equivalence remains a constant for both the matter and the energy, during any process when they are converted into each other. However, since \(c^2\) is extremely large in everyday units, the energy equivalent of even a small amount of rest mass is enormous.

Reversible and non-reversible transformations

Conservation of energy

Main article: Conservation of energy

Richard Feynman said during a 1961 lecture:[15]

Most kinds of energy (with gravitational energy being a notable exception)[16] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[3][15]

This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time,[17] a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonical conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle – it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation – rather it provides mathematical limits to which energy can in principle be defined and measured:

\[ \Delta E \, \Delta t \geq \frac{\hbar}{2} \]

Energy transfer

Closed systems

Energy transfer can be considered for the special case of systems which are closed to transfers of matter. The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[note 4] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive transfer of thermal energy.

Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:[note 6]

\[ \Delta E = W + Q \]

where \(\Delta E\) is the amount of energy transferred, \(W\) represents the work done on the system, and \(Q\) represents the heat flow into the system.
As a simplification, the heat term, \(Q\), is sometimes ignored, especially when the thermal efficiency of the transfer is high:

\[ \Delta E = W \]

Open systems

Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). Denoting this energy by \(E\), one may write \(\Delta E = W + Q + E\).

Internal energy

First law of thermodynamics

The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[19] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as

\[ \mathrm{d}E = T\,\mathrm{d}S - P\,\mathrm{d}V, \qquad \mathrm{d}E = \delta Q + \delta W, \]

where \(\delta Q\) is the heat supplied to the system and \(\delta W\) is the work applied to the system.

Equipartition of energy

This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics.

Notes

1. Energy (and its units) are often defined in terms of the work they can do. However, technically this is only an approximation, because the second law of thermodynamics means the work a system can do is always less than the total energy of the system, due to waste heat. See: Robert L. Lehrman (1973). "Energy is not the ability to do work" (PDF). The Physics Teacher.
4. Although heat is "wasted" energy for a specific energy transfer (see: waste heat), it can often be harnessed to do useful work in subsequent interactions. However, the maximum energy that can be "recycled" from such recovery processes is limited by the second law of thermodynamics.
6. There are several sign conventions for this equation. Here, the signs in this equation follow the IUPAC convention.

References

1. Kittel, Charles; Kroemer, Herbert (1980-01-15). Thermal Physics. Macmillan. ISBN 9780716710882.
2. Benno Maurus Nigg; Brian R. MacIntosh; Joachim Mester (2000). Biomechanics and Biology of Movement. Human Kinetics. p. 12. ISBN 9780736003315.
3. The Laws of Thermodynamics, including careful definitions of energy, free energy, et cetera.
4. Harper, Douglas. "Energy". Online Etymology Dictionary. Retrieved May 1, 2007.
6. Lofts, G; O'Keeffe, D; et al. (2004). "11 — Mechanical Interactions". Jacaranda Physics 1 (2 ed.). Milton, Queensland, Australia: John Wiley & Sons Australia Ltd. p. 286. ISBN 0-7016-3777-3.
7. The Hamiltonian. MIT OpenCourseWare website 18.013A, Chapter 16.3. Accessed February 2007.
8. "Retrieved on May-29-09". Retrieved 2010-12-12.
9. Bicycle calculator – speed, weight, wattage etc. [1].
10. Schmidt-Rohr, K. (2015). "Why Combustions Are Always Exothermic, Yielding About 418 kJ per Mole of O2". J. Chem. Educ. 92: 2094–2099. doi:10.1021/acs.jchemed.5b00333.
11. Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model." In Shiyomi, M. et al. (Eds.) Global Environmental Change in the Ocean and on Land. pp. 343–58.
12. "Earth's Energy Budget". Retrieved 2010-12-12.
13. Misner, Thorne, Wheeler (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.
14. Berkeley Physics Course Volume 1. Charles Kittel, Walter D. Knight and Malvin A. Ruderman.
15. Feynman, Richard (1964). The Feynman Lectures on Physics; Volume 1. U.S.A.: Addison Wesley. ISBN 0-201-02115-3.
16. "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws". 1918-07-16. Retrieved 2010-12-12.
17. "Time Invariance". Retrieved 2010-12-12.
18. I. Klotz, R. Rosenberg, Chemical Thermodynamics – Basic Concepts and Methods, 7th ed., Wiley (2008), p. 39.
Chapter 5 | Further Applications of Newton's Laws: Friction, Drag, and Elasticity

concrete. (c) On ice, assuming that μ_s = 0.100, the same as for shoes on ice.

15. Repeat Exercise 5.14 for a car with four-wheel drive.

16. A freight train consists of two 8.00×10⁵-kg engines and 45 cars with average masses of 5.50×10⁵ kg. (a) What force must each engine exert backward on the track to accelerate the train at a rate of 5.00×10⁻² m/s² if the force of friction is 7.50×10⁵ N, assuming the engines exert identical forces? This is not a large frictional force for such a massive system. Rolling friction for trains is small, and consequently trains are very energy-efficient transportation systems. (b) What is the magnitude of the force in the coupling between the 37th and 38th cars (this is the force each exerts on the other), assuming all cars have the same mass and that friction is evenly distributed among all of the cars and engines?

17. Consider the 52.0-kg mountain climber in Figure 5.22. (a) Find the tension in the rope and the force that the mountain climber must exert with her feet on the vertical rock face to remain stationary. Assume that the force is exerted parallel to her legs. Also, assume negligible force exerted by her arms. (b) What is the minimum coefficient of friction between her shoes and the cliff?

Figure 5.22 Part of the climber's weight is supported by her rope and part by friction between her feet and the rock face.

18. A contestant in a winter sporting event pushes a 45.0-kg block of ice across a frozen lake as shown in Figure 5.23(a). (a) Calculate the minimum force he must exert to get the block moving. (b) What is the magnitude of its acceleration once it starts to move, if that force is maintained?

19. Repeat Exercise 5.18 with the contestant pulling the block of ice with a rope over his shoulder at the same angle above the horizontal as shown in Figure 5.23(b).

Figure 5.23 Which method of sliding a block of ice requires less force—(a) pushing or (b) pulling at the same angle above the horizontal?

5.2 Drag Forces

20. The terminal velocity of a person falling in air depends upon the weight and the area of the person facing the fluid. Find the terminal velocity (in meters per second and kilometers per hour) of an 80.0-kg skydiver falling in a pike (headfirst) position with a surface area of 0.140 m².

21. A 60-kg and a 90-kg skydiver jump from an airplane at an altitude of 6000 m, both falling in the pike position. Make some assumption on their frontal areas and calculate their terminal velocities. How long will it take for each skydiver to reach the ground (assuming the time to reach terminal velocity is small)? Assume all values are accurate to three significant digits.

22. A 560-g squirrel with a surface area of 930 cm² falls from a 5.0-m tree to the ground. Estimate its terminal velocity. (Use a drag coefficient for a horizontal skydiver.) What will be the velocity of a 56-kg person hitting the ground, assuming no drag contribution in such a short distance?

23. To maintain a constant speed, the force provided by a car's engine must equal the drag force plus the force of friction of the road (the rolling resistance). (a) What are the magnitudes of drag forces at 70 km/h and 100 km/h for a Toyota Camry? (Drag area is 0.70 m².) (b) What is the magnitude of drag force at 70 km/h and 100 km/h for a Hummer H2? (Drag area is 2.44 m².) Assume all values are accurate to three significant digits.

24. By what factor does the drag force on a car increase as it goes from 65 to 110 km/h?

25. Calculate the speed a spherical rain drop would achieve falling from 5.00 km (a) in the absence of air drag (b) with air drag. Take the size across of the drop to be 4 mm, the density to be 1.00×10³ kg/m³, and the surface area to be πr².
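For problems 20–22, the working relation is the terminal speed at which drag balances weight, \(v_t=\sqrt{2mg/(\rho C A)}\), which follows from setting the drag force \(\tfrac12\rho C A v^2\) equal to \(mg\). The Python sketch below only illustrates that balance; the drag coefficient C = 0.70, air density ρ = 1.21 kg/m³ and g = 9.80 m/s² are assumed reference values, not part of the exercise statements, so the printed numbers are illustrative rather than the textbook answers.

```python
import math

def terminal_velocity(m, A, C=0.70, rho=1.21, g=9.80):
    """Speed at which drag (0.5 * rho * C * A * v^2) balances weight (m * g)."""
    return math.sqrt(2 * m * g / (rho * C * A))

# Problem 20: 80.0-kg skydiver in the pike position, A = 0.140 m^2 (assumed C, rho)
v_skydiver = terminal_velocity(80.0, 0.140)
print(f"skydiver: {v_skydiver:.0f} m/s  ({v_skydiver * 3.6:.0f} km/h)")

# Problem 22: 560-g squirrel, A = 930 cm^2 = 0.0930 m^2
v_squirrel = terminal_velocity(0.560, 0.0930)
print(f"squirrel: {v_squirrel:.1f} m/s")
```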
Beauty-full exotic bound states at the LHC

Article: Beauty-full Tetraquarks
Authors: Yang Bai, Sida Lu, and James Osborn

Good Day Nibblers,

As you probably already know, a single quark in isolation has never been observed in Nature. The Quantum Chromodynamics (QCD) strong force prevents this from happening by what is called 'confinement'. This refers to the fact that when quarks are produced in a collision, for example, instead of each flying off alone to be detected separately, the strong force very quickly forces them to bind into composite states of two or more quarks called hadrons. These multi-quark bound states were first proposed in 1964 by Murray Gell-Mann as a way to explain observations at the time. The quarks are bound together by QCD via the exchange of gluons (e.g. see Figure 1) and there is an energy associated with how strongly they are bound together. This binding energy between the quarks contributes to the 'effective mass' of the composite states and in fact it is what is largely responsible for the mass of ordinary matter (Footnote 1). Most of the theoretical and experimental progress has been in two or three quark bound states, referred to as mesons and baryons respectively. The most familiar examples of quark bound states are the neutron and proton, both of which are baryons composed of three quarks bound together and form the basis for atomic nuclei.

Figure 1: Bound state of four bottom quarks (blue) held together by the QCD strong force which is transmitted via the exchange of gluons (pink).

Of course four and even more quark bound states are possible and some have been observed, but things get much trickier theoretically in these cases. For four quark bound states (called tetraquarks) the theoretical progress has been largely limited to the case where at least one of the quarks is a light quark, like an up or a down quark. The paper highlighted here takes a step towards understanding four quark bound states in the case where all four quarks are heavy. These heavy four-body systems are extra tricky because they cannot be decomposed into pairs of two-body systems which we could solve much more easily. Instead, one must solve the Schrödinger equation for the full four-body system, for which approximation methods are needed.

The example the current authors focus on is the four bottom quark bound state, or 4b state for short (see Figure 1). In this paper they use sophisticated numerical methods to solve the non-relativistic Schrödinger equation for a four-body system bound together by QCD. Specifically they solve for the energy of the ground state, or lowest energy state, of the 4b system. This lowest energy state can effectively be interpreted as the mass of the 4b composite state. In the ground state the four bottom quarks arrange themselves in such a way that the composite system appears as a spin-0 particle. So in effect the authors have computed the mass of a composite spin-0 particle which, as opposed to being an elementary scalar like the Standard Model Higgs boson, is made up of four bottom quarks bound together. They find the ground state energy, and thus the mass of the 4b state, to be about 18.7 GeV. This is a bit below the sum of the masses of the four (elementary) bottom quarks, which means the binding energy between the quarks actually lowers the effective mass of the composite system. The interesting thing about this study is that so far no tetraquark states composed only of heavy quarks (like the bottom and top quarks) have been discovered at colliders.
The prediction of the mass of the 4b resonance is exciting because it means we know where we should look at the LHC and can optimize a search strategy accordingly. This of course increases the prospects of observing a new state of matter when the 4b state decays, which it can potentially do in a number of ways. For instance it can decay as a spin-0 particle (depicted as φ in Figure 2) into two bound states of pairs of b quarks, which themselves are referred to as Υ mesons. These in turn can be observed in their decays to light Standard Model particles, giving many possible signatures at the LHC. As the authors point out, one such signature is the four lepton final state which, as I've discussed before, is a very precisely measured channel with small backgrounds. The light mass of the 4b state also allows for it to potentially be produced at large rates at the LHC via the strong force. This sets up the exciting possibility that a new composite state could be discovered at the LHC before long simply by looking at events with four leptons with total energy around 18–19 GeV.

Figure 2: Production of a four bottom quark bound state (φ) which then decays to two bound states of bottom quark pairs called Υ mesons.

Of course, one could argue this is less exciting than discovering a new elementary particle since if the 4b state is observed it won't be the discovery of a new particle but instead of yet another manifestation of the QCD strong force. At the end of the day though, it is still an exotic state of nature which has never been observed. Furthermore, these exotic states could be interesting testing grounds for beyond the Standard Model theories which include new forces that communicate with the bottom quark. We'll have to wait and see if the QCD strong force can indeed manifest itself as a four bottom quark bound state and if the prediction of its mass made by the authors indeed turns out to be correct. In the meantime, it gives plenty of motivation to experimentalists at the LHC to search for these and other exotic bound states and gives us perhaps some hope for finding physics beyond the Standard Model at the LHC.

Footnote 1: I know what you are thinking, but I thought the Higgs gave mass to matter!? Well yes, but… The Higgs gives mass to the elementary particles of the Standard Model. But most of the matter (that is not dark!) in the universe is not elementary, but instead made up of protons and neutrons which are composed of three quarks bound together. The mass of protons and neutrons is dominated by the binding and kinetic energy of the three-quark systems and therefore it is this that is largely responsible for the mass of normal matter we see in the universe and not the Higgs mechanism.

Other recent studies on heavy quark bound states:

Further reading and video:
1) TASI 2014 has some great introductory lectures and notes on QCD:
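The core technique mentioned above is a numerical ground-state solve of a non-relativistic Schrödinger equation. As a much-simplified, hedged illustration of that general idea (a single particle in one dimension rather than the authors' four-body problem, and a generic confining harmonic potential rather than anything derived from QCD), the sketch below finds a ground-state energy by diagonalizing a finite-difference Hamiltonian. Every choice in it (grid size, potential, units with ħ = m = 1) is an assumption for demonstration only, not the method or numbers of Bai, Lu, and Osborn.

```python
import numpy as np

# Toy illustration: ground state of  -1/2 psi'' + V(x) psi = E psi  (hbar = m = 1)
# on a uniform grid, via finite differences and dense diagonalization.
N, L = 1000, 20.0                          # grid points and box size (assumed)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                             # assumed confining potential (harmonic)

# Kinetic term: -1/2 times the standard three-point second-derivative stencil.
main = np.full(N, -2.0)
off = np.ones(N - 1)
T = -(0.5 / dx**2) * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))

H = T + np.diag(V)
energies = np.linalg.eigvalsh(H)           # sorted eigenvalues of the Hamiltonian

print(f"ground-state energy: {energies[0]:.4f}")   # close to 0.5 for this potential
```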
zbMATH — the first resource for mathematics Orbital stability of the black soliton for the Gross-Pitaevskii equation. (English) Zbl 1171.35012 The authors of this interesting paper consider the one-dimensional Gross-Pitaevskii equation \[ i\Psi_t + \Psi_{xx}=\Psi (|\Psi |^2-1) , \quad (t,x)\in \mathbb R \times \mathbb R , \] which is a version of the defocusing cubic nonlinear Schrödinger equation. The boundary condition is given at infinity \(|\Psi (x,t)|\to 1\), as \(|x|\to +\infty \). The conserved Hamiltonian is a Ginzburg-Landau energy \[ E(\Psi )=(1/2)\int_{\mathbb R }|\Psi '|^2dx + (1/4)\int_{\mathbb R }(1-|\Psi |^2)^2dx . \] The authors establish the orbital stability of the black soliton, or kink solution, that is, \(v_0=\tanh{(x/\sqrt{2})}\), with respect to perturbations in the energy space. 35B35 Stability in context of PDEs 35Q55 NLS equations (nonlinear Schrödinger equations) 35Q40 PDEs in connection with quantum mechanics 35Q51 Soliton equations Full Text: DOI arXiv
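A quick way to see why \(v_0=\tanh(x/\sqrt{2})\) is singled out: for a time-independent, real-valued profile the Gross-Pitaevskii equation above reduces to \(\Psi_{xx}=\Psi(|\Psi|^2-1)\), and the kink satisfies it while approaching the boundary values \(\pm 1\) at infinity with finite Ginzburg-Landau energy. The SymPy check below verifies only that consistency of the stated solution; it is not part of the stability argument in the paper.

```python
import sympy as sp

x = sp.symbols('x', real=True)
v0 = sp.tanh(x / sp.sqrt(2))

# Stationary GP equation for a real profile:  v'' - v*(v**2 - 1) = 0
residual = sp.simplify(sp.diff(v0, x, 2) - v0 * (v0**2 - 1))
print(residual)                              # -> 0, so v0 solves the equation

# Ginzburg-Landau energy density of the kink, integrated over the line:
density = sp.Rational(1, 2) * sp.diff(v0, x)**2 + sp.Rational(1, 4) * (1 - v0**2)**2
E = sp.Integral(density, (x, -sp.oo, sp.oo)).evalf()
print(E)                                     # approx. 0.943 (= 2*sqrt(2)/3), i.e. finite energy
```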
David Tong: Lectures on Theoretical Physics

Classical Mechanics
- Dynamics and Relativity
- Classical Dynamics: A second course on classical mechanics, covering the Lagrangian and Hamiltonian approaches, together with a detailed discussion of rigid body motion.
- Vector Calculus: A course for first year undergraduates, describing integral theorems and all things ∇.

Quantum Mechanics
- Quantum Mechanics: A first course on quantum mechanics, focussing mostly on the Schrödinger equation.
- Topics in Quantum Mechanics: An advanced course on quantum mechanics, aimed at final year undergraduates. It covers atomic physics, scattering theory, quantum foundations and a number of other topics.

Condensed Matter Physics
- Solid State Physics: An introductory course on solid state physics, aimed at final year undergraduates. It covers the basics of band theory, Fermi surfaces, phonons and particles in magnetic fields.
- Quantum Hall Effect

Statistical Physics
- Statistical Physics
- Kinetic Theory
- Statistical Field Theory: An introductory course on phase transitions and critical phenomena, aimed at first year graduate students. The main goal is to get to grips with the renormalisation group.

Gravitation and Cosmology
- General Relativity: An introduction to general relativity, aimed at first year graduate students. It starts with a gentle introduction to geodesics in curved spacetime. The course then describes the basics of differential geometry before turning to more advanced topics in gravitation.
- Cosmology: An introductory course on cosmology, aimed at final year undergraduates. It covers the expanding universe, thermal history, and structure formation.

Quantum Field Theory
- Particle Physics: An elementary course on elementary particles. This is, by some margin, the least mathematically sophisticated of all my lecture notes, requiring little more than high school mathematics. The lectures provide a pop-science, but detailed, account of particle physics and quantum field theory.
- Quantum Field Theory
- Gauge Theory: These notes provide an introduction to the fun bits of quantum field theory, in particular those topics related to topology and strong coupling. They are aimed at beginning graduate students and assume a familiarity with the path integral.

String Theory
- String Theory
Tuesday, August 23, 2011

Once more: gravity is not an entropic force

What motivated him to write another paper was, among other things, an April 2011 preprint, On gravity as an entropic force, by Masud Chaichian, Markku Oksanen, and Anca Tureanu. This paper contains some invalid criticism of Kobakhidze's paper, which is why Kobakhidze decided to explain the flaw in a new preprint.

The authors of the flawed preprint used at least two (but related) invalid arguments in their attempts to resuscitate Erik Verlinde's theory. One of them was the claim that Verlinde's theory produces the "right classical limit". When this classical limit is quantized, one obtains the right quantum theory, including the neutron interference. However, this argument incorrectly assumes that quantum physics is uniquely determined by a classical limit. It's not. If you take the classical limit C of a quantum theory Q and "quantize" C again, you don't necessarily get Q.

In particular, when we talk about the distance-dependent entropy, it's a feature of a physical theory that holds both in the quantum theory Q and in the classical limit C. And in the quantum theory, it automatically destroys the interference patterns because there exists no one-to-one way how to link microstates at different separations (because their numbers differ). So there can't exist any quantum theory that preserves the interference but that still produces a classical limit with a distance-dependent entropy.

Kobakhidze looks at a particular method used by the three authors to mask the error in their paper: they try to hide the distance-dependent character of the number of microstates - which is the very defining feature of Verlinde's proposal - by using different conventions for coarse-graining at different distances. Of course, this is just a trick to fool themselves (as well as insufficiently careful readers of their paper). If one uses a self-consistent description and a fixed definition of the entropy, the entropy simply does depend on the distance between the gravitating bodies (such as the Earth and the neutrons) and the interference pattern disappears.

In my opinion, Kobakhidze also tries to present Verlinde's proposal in a maximally comprehensible, no-nonsense way. Instead of accepting the usual potential force as found by Newton, \[ \vec F = -m \vec \nabla \Phi,\] one must adopt an entropic force that depends on a temperature and an entropy, \[ \vec F = T \vec \nabla S. \] The temperature should be associated with the Unruh temperature which depends on the gravitational acceleration. This assumption is very problematic and probably inconsistent by itself because different pairs of bodies would have different temperatures, which means that they couldn't be in a thermal equilibrium. Moreover, everyone knows that the actual temperature of the Sun - and the temperature of all degrees of freedom on the surface of the Sun - is 6000 °C and has nothing to do with tiny temperatures associated with the Sun by Verlinde.

Fine. Ignore that the temperatures in the real situations would be highly non-constant, which would lead to a huge heat transfer and irreversibility. Kobakhidze writes down what the formula for the entropy \(S = S(r)\) as a function of the distance has to be, \[ S = 2\pi m \log(r/r_{\rm Verlinde}). \] I've integrated his formula for \(\vec \nabla S\).
An unknown distance scale, \(r_{\rm Verlinde}\), really has to be added as well, which is almost certainly another inconsistency, but that's not the way Kobakhidze shows that Verlinde's theory is falsified by the observations.

Lisa Randall's new book on particle physics and philosophy of science, Knocking on Heaven's Door, will be released on September 20th. Pre-order now: I've read it thrice, it's great.

Kobakhidze realizes that the number of microstates depends on the distance. But in agreement with his previous paper, he still assumes that the neutron may be associated with a wave function. If you try to do so, however, the momentum operator in quantum mechanics inevitably contains a non-Hermitian piece which takes care of the separation of the wave function among many microstates when the number of microstates goes up: \[ \hat p = -i \frac{\partial}{\partial r} - 2\pi i m. \] Well, this is too optimistic because the relative phases between all the microstates would be undetermined and would evolve chaotically because of small differences in the energy between the macroscopically indistinguishable microstates - which would eliminate any trace of quantum coherence. But even if you assume that the quantum coherence is preserved, you get totally wrong new and very large terms in the Schrödinger equation that will obviously predict that the neutron interferometry experiments should see something completely different than what they do see. The author describes some wrong predictions of Verlinde's theory for the neutron in the gravitational field in some detail.

At any rate, the conclusion is that the neutron interferometry experiments in the Earth's gravitational field falsify all forms of "gravity as an entropic force" hypotheses. If you think that Verlinde's proposal has not been falsified, just explain the neutron interferometry experiments with his theory! Define some microstates so that the entropy depends on the distance according to Verlinde's description. And then try to derive the usual Schrödinger equation for the neutron in the external potential - which has been experimentally observed to govern the neutrons - from your original Hamiltonian for the neutron-Earth system by defining the state \( |h\rangle \) for the neutron at height \(h\) as a linear combination of your many microstates (whose density depends on the height \(h\)).

In advance, I can assure you that your attempts will fail. The very meaning of the entropic force is that the coherence is lost whenever the force acts, so the information about the relative phases of the neutron's wave function will disappear. Try it and if you fail - and you will surely fail - please stop with the nonsensical suggestions that Verlinde's proposal could still be OK in some way. Its very fundamental assumptions contradict some experimentally established pillars of modern physics. If you couldn't have figured this simple thing out for more than 1 year, it's pretty painful, but if you will fail to do so for 2 or more years, it will be even more painful. ;-)

1. This reminds me of Garrett Lisi's E8 theory to some extent. I noticed that Woit has an entry on his "Not Even Wrong" blog to the effect that Verlinde has been awarded some big grant money to continue his work: "$6.5 Million for Entropic Gravity" http://www.math.columbia.edu/~woit/wordpress/?p=3781 So it seems that even if experts can be confident that he is wrong, Verlinde has a short-term victory.
2. Why in the world would one even suggest to associate the temperatures of the atoms of gravitating objects with the "temperature" of the microscopic degrees of freedom that cause gravity in the case of entropic gravity??? The universe is clearly far, far away from equilibrium with regard to the effective temperature of the real vacuum. By the time it will get to that equilibrium, all notions of space and time will be gone and there will be no observers left to amuse themselves about physics.

As far as Kobakhidze's derivation of the (in his eyes) correct Schroedinger equation is concerned... Schroedinger quantum mechanics is not even a self-consistent theory of quantum systems. It has nothing to say about fields and it does not treat gravity as a field. It treats gravity as a CLASSICAL potential. Kobakhidze's argument is basically nonsense in, nonsense out (in that he pretends that one can say anything about gravitation by using non-relativistic single particle quantum mechanics) and nature tells him so by agreeing completely with Verlinde's hand-waving and disagreeing with his.

Now, if someone would be so nice as to show me a quantum field theoretical derivation of neutron wave functions formulated in a quantum field theory of gravity that incorporates Verlinde's argument and that still predicts the wrong outcome, then, maybe, I would be convinced. Until then I go with the experimental proof, which clearly states that Verlinde's hands wave far more successfully than Kobakhidze's.

And why entropic forces would immediately destroy coherence is also not clear to me. I talk a lot to people, which means that the entropic forces which transport the sound waves from my mouth to their ears work quite well without destroying short term coherence in sound waves. They retain both their amplitude and phase over quite some distance. And anybody who has done atomic spectroscopy knows that random em fields acting on atoms do not automatically destroy line spectra, they merely modify them, leading to line broadening and changes in transition probabilities. How much coherence gets destroyed is all a matter of scales. The effective scale of a neutron scattering experiment is far, far, far away from the scale on which entropic gravity emerges from microscopic degrees of freedom on holographic screens... and by the time the neutrons get to react to them, the collective sum of all those interactions behaves like a Newtonian potential.

3. Sorry, your comment is pure crackpottery. Whenever some degrees of freedom do interact with the rest so that this interaction matters - and it surely matters if one wants to use this behavior to explain huge effects such as the gravitational attraction - then they inevitably do converge to equilibrium. It's pure crackpottery for you to suggest that they don't thermalize and/or that their temperature isn't really temperature and can't be measured. The temperature is always the same thing. The entropy is always the same thing, too. You can't have it both ways. You either try to use these temperatures and entropy to explain something - but then you also have to accept that these crazy new terms in the temperature and entropy also have other consequences that instantly falsify the model - or you preserve your sanity in realizing that there's no extra temperature or entropy coming from a nonzero gravitational potential. I won't even discuss your utterly idiotic comments about Schrodinger's equations being inapplicable or bad or whatever rubbish you are writing.
I wouldn't be doing anything else if I had to respond to every paragraph written by every crank of your type. Schrodinger's equation works perfectly, even in the presence of the gravitational potential as has been tested since the 1970s by neutron interferometry etc.
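For readers who want to reproduce the one-line calculus behind "I've integrated his formula for \(\vec\nabla S\)" in the post body: with the quoted entropy profile \(S(r)=2\pi m\log(r/r_{\rm Verlinde})\), the radial gradient is \(2\pi m/r\). The SymPy sketch below checks only that differentiation/integration step, in the post's units \(\hbar=c=k_B=1\); it takes no position on the physics being debated.

```python
import sympy as sp

r, m, r_V = sp.symbols('r m r_V', positive=True)

# Entropy profile quoted in the post (hbar = c = k_B = 1):
S = 2 * sp.pi * m * sp.log(r / r_V)

dS_dr = sp.diff(S, r)
print(dS_dr)                                     # -> 2*pi*m/r

# Integrating that gradient back gives the same profile up to a constant:
S_back = sp.integrate(2 * sp.pi * m / r, r)      # -> 2*pi*m*log(r)
print(sp.simplify(sp.diff(S_back, r) - dS_dr))   # -> 0, same gradient
```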
In Equation 1, f(x, t, u, ∂u/∂x) is a flux term and s(x, t, u, ∂u/∂x) is a source term. Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their Lax pairs, recursion operators and Bäcklund transforms, and finally to find exact analytic solutions to the PDE. Taylor [3] has published a comprehensive text on these differential equation models of attrition in force-on-force combat, alluding also to various OR methods that have been used historically in the study of military problems. So the Cauchy-Kowalevski theorem is necessarily limited in its scope to analytic functions, and the connection with dimensional analysis is pointed out. (P. R. Garabedian, "Partial Differential Equations", Wiley, 1964.)

An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves. Separable PDEs correspond to diagonal matrices: thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately. The solution for a point source for the heat equation given above is an example of the use of a Fourier integral. If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. If the partial differential equation being considered is the Euler equation for a problem of variational calculus in more dimensions, a variational method is often employed. It is also shown here that Morgan's theorems can be applied to ordinary differential equations. (Applied Partial Differential Equations by R. Haberman, Pearson, 2004.)

The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices A_ν are m by m matrices for ν = 1, 2, …, n. The partial differential equation then takes the form
\[ Lu \;=\; \sum_{\nu=1}^{n} A_\nu \,\frac{\partial u}{\partial x_\nu} \;+\; B \;=\; 0, \]
where the coefficient matrices A_ν and the vector B may depend upon x and u. If the data on S and the differential equation do not determine the normal derivative of u on S, then the surface is characteristic, and the differential equation restricts the data on S: the differential equation is internal to S. Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables.

Some differential equations are not as well-behaved, and show singularities due to a failure to model the problem correctly, or a limitation of the model that was not apparent. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function. Furthermore, there are known examples of linear partial differential equations whose coefficients have derivatives of all orders (which are nevertheless not analytic) but which have no solutions at all: this surprising example was discovered by Hans Lewy in 1957. For this reason, such existence and uniqueness considerations are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate. The energy method is a mathematical procedure that can be used to verify well-posedness of initial-boundary-value problems; note that well-posedness allows for growth in terms of data (initial and boundary).

The second part of this report deals with partial differential equations, in particular the Lanchester differential equation model; these equations predict the time-dependent state of a battle based on attrition. Because systems of nonlinear equations cannot be solved as nicely as linear systems, we use procedures called iterative methods.

More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations. There is only a limited theory for ultrahyperbolic equations (Courant and Hilbert, 1962). There are no generally applicable methods to solve nonlinear PDEs; alternatives are numerical analysis techniques, from simple finite difference schemes to the more mature multigrid and finite element methods, and qualitative solutions are an alternative. In finite volume methods, because the flux entering a given volume is identical to that leaving the adjacent volume, these schemes conserve mass by design. For hyperbolic partial differential equations it is essential to control the dispersion, dissipation, and the propagation of discontinuities. This is easily done by using suitable difference approximations. We also present the convergence analysis of the method.

A partial differential equation, commonly denoted as PDE, is a differential equation containing partial derivatives of the dependent variable (one or more) with respect to more than one independent variable. If A² + B² + C² > 0 over a region of the xy-plane, the PDE is second-order in that region. The thesis commences with a description and classification of partial differential equations and the related matrix and eigenvalue theory.
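Since the passage above repeatedly points to the heat equation, its sinusoidal eigenbasis, and simple finite difference schemes as the entry-level numerical approach, here is a minimal, self-contained sketch of an explicit finite-difference solver for the 1D heat equation \(u_t=\alpha u_{xx}\). The grid sizes, diffusivity and sinusoidal initial condition are illustrative assumptions only; the time step is chosen to respect the usual explicit-scheme stability bound \(\alpha\,\Delta t/\Delta x^2 \le 1/2\).

```python
import numpy as np

# Explicit finite-difference scheme for u_t = alpha * u_xx on [0, 1]
# with u(0, t) = u(1, t) = 0 and a sinusoidal initial profile.
alpha = 1.0                      # assumed diffusivity
nx, nt = 101, 2000               # grid points in x, number of time steps
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha         # satisfies the stability bound alpha*dt/dx^2 <= 1/2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)            # initial condition: an eigenmode of the problem

for _ in range(nt):
    # Central difference in space, forward difference in time.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0           # Dirichlet boundary conditions

# For this eigenmode the exact solution is sin(pi*x) * exp(-alpha*pi^2*t),
# so the numerical answer can be checked directly.
t_final = nt * dt
exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * t_final)
print("max error:", np.max(np.abs(u - exact)))
```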
5 Memorable C-Drama And TW-Drama Confessions To Celebrate 5/20

Happy May 20th! 5/20 is a romantic holiday that's celebrated in China. The individual numbers five, two, and zero sound just like "I love you" in Chinese, hence it's a day to express one's love to another. In celebration of this holiday, there's no better way to celebrate than to share how our heroes and heroines in dramas confess!

Warning: spoilers for the dramas below.

The classic confession from "It Started With A Kiss"

"It Started With A Kiss" is an iconic drama that delivers a classic confession of love! Xiang Qin (Ariel Lin) finally decides to muster up the courage to confess to her long-time crush Zhi Shu (Joe Cheng). She ends up penning a letter to him, introducing herself and her love for him. This is a confession that brings back nostalgic feelings! While this kind of confession is rather simple, it's nonetheless quite a sweet gesture and one that is often seen in dramas.

The nerdy confession from "Put Your Head On My Shoulder"

What might be the result when a genius finally decides to let his crush know how he feels? Well, for Gu Wei Yi (Lin Yi), it turns out disastrously. After mulling over ways to confess to Situ Mo (Xing Fei), he finally decides on his method of choice. He hands her the folded paper in a romantic, snowy setting. Gu Wei Yi is confident that his confession via the Schrödinger equation is the surefire way to woo her, but to his great surprise, Situ Mo mistakes the paper for notes for his class instead. For those unclear where Gu Wei Yi was heading with this confession, he hoped that she would see that just as particles are part of the structure of everything, Situ Mo is the structure of his everything. Also, when the particles are combined with time, they complete a wave function, which in turn is meant to say that she completes his life. Well, geniuses may be experts at understanding difficult concepts in school but not necessarily at ones that relate to professing to a crush! Bonus points for the adorable Doraemon picture though.

Start watching "Put Your Head On My Shoulder": Watch Now

The grand confession from "With You"

This confession is an absolute favorite solely for being extremely over-the-top and downright hilarious. When it comes to checking off the list of the worst ways to confess, Lu Xing He's (Wang Li Xin) grand way of doing so definitely ticks most of the boxes. He starts off by bringing out a megaphone, because a public confession that's heard loud and clear is definitely going to win somebody's heart over. He pours sodium into the fountain after getting inspiration from another student who states that love and chemistry are the same. Lu Xing He's logic with this is to show Geng Geng (Tan Song Yun), his crush, that while they may be different people, they can come together and create sparks just like how sodium and water react… The only problem though is that he decides to dump the entire bottle into a big pool of water, which ultimately creates an enormous explosion. Needless to say, even Lu Xing He is left speechless at his grand event. Predictably, his confession doesn't succeed, but it sure brought on a huge amount of laughter!

Start watching "With You": Watch Now

The direct confession from "Candy Fight"

Sometimes a no-frills and simple confession is just as great, and this one matches the theme of its drama since Xiao Mi (Ivy Shao) is about to tear Sun Hao (Pei Zi Tian) apart with her words and fists. She misunderstands that he likes someone else and refuses to hear him out. This leads him to just directly fess up that he has in fact liked her the whole time, before she vents all of her anger on him. It's a funny confession at first because viewers see he's about to be roasted alive, but once the misunderstanding clears up, it ends on a satisfyingly sweet resolution.

Start watching "Candy Fight": Watch Now

The overdue confession from "Le Coup de Foudre"

Slow and steady wins the race! This confession takes years to actually happen, despite both of them liking each other in high school. Qiao Yi (Wu Qian) finally confesses why she made the decision to come to Beijing, and after witnessing all of the angsty and melodramatic moments they endure, this scene sure does feel gratifying to watch as a viewer. It's also great that Qiao Yi just stops holding it in and lets Yan Mo (Zhang Yu Jian) know exactly how she feels. Her confession is sincere, especially since the words carry the weight of so many years of pent-up emotions. While their romance could have been official at a much earlier time, it's better late than never.

Start watching "Le Coup de Foudre": Watch Now

Hey Soompiers, what are your favorite Chinese or Taiwanese drama confessions?

isms is a long-time drama and variety show fan. Please feel free to drop any show recommendations to watch. Currently watching: "Once Again," "The Love Equations," and "Miss Truth." Looking forward to: "A Murderous Affair in Horizon Tower" and "Unusual Glory."

How does this article make you feel?
The limits of "knowing"

After the renaissance in Europe (14th-17th century) a revolution in science and rational thinking began, which ultimately led to the amazing technology we now use and depend on every day. Seeking more knowledge and developing new technology clearly has many practical advantages; logical thinking has proven to be a very useful tool. Now in the 21st century, we live in the age of science, reason and rationality. Some scientists even think that ultimately everything (about reality) can and will be known, which suggests that knowledge seems to be the ultimate means we have to evolve as human beings. The question this article poses: does the knowing-of-things have limitations?

One aspect is the fact that we are not infinitely smart, and therefore at some point we'll stumble on the limits of the human brain. This already seems to be so for quantum mechanics, since nobody understands what the Schrödinger equation and wavefunction actually "are", although the models and formulae work fine. That's not the point though. A more interesting question is: what does it mean when we say that we know something? When you look deeper you will notice that knowledge always introduces a subject who knows something about an object. This implies that our thought-processes are inherently dualistic (subject-object) and only offer models and ideas about something. The dangerous aspect of "knowing things" is that we start to believe that we know what something actually "is", and that is the great illusion!

When you for instance read the daily news about all that happens in the world, in a way you are made to believe that many of the reports describe situations and things which are real and actually exist. The daily information we are bombarded with suggests that there really is something like the USA, and there really are democrats and republicans, there really are countries or groups of people who are the enemy, there really are conflicts between people of different religions, and so on and so forth… It's absolutely endless, and this process of "news" has been going on for thousands of years. When you actually look deeper, you may wake up to seeing that all of these statements are only based on thoughts about apparent people, things, objects etc. They are just stating concepts and labels. Have you however ever seen "Europe" or "Asia"? Have you ever met a "Chinese"? Do you really think that some people are your political opponents? Sorry folks, it is all one grand illusion; none of it is real, even though many people take these labels very seriously and this kind of information constantly seems to affirm that we are talking about "reality".

It's not hard to see how such thinking leads to division, conflict and misery. When you are convinced there are real "enemies" out there, you are willing to wage war upon them, aren't you? Could it be that an "enemy" is nothing but a label you created in your head? Have you realized that some people may believe that you are their "enemy"? You might ask: what happens when you truly wake up and see the illusion of this? It means you can't take it seriously anymore, nor can you live and act like it's real, right? So what remains then? The simple answer is: Life remains, Existence or Being, whatever you like to call it. Now you see "it all" without the labeling, all the ideas, without all the "knowing-about".
There is beauty in that, there is great freedom and love in that…always fresh and never known. What an amazing discovery and so simple really… So that's the invitation. To dare and look into this…

This gets at something I've been thinking about a lot recently. My hunch is that people create systems of beliefs because post Enlightenment the West has become a very cerebral and cognitive oriented space. But ask anyone on the street what has shaped their life the most and they will convey a story; an experience. I think this is because experiences are tied so heavily to our emotions, which shape individuals much more primarily than thoughts. It's why you can know a lot about God or the universe, but find that you really know nothing when you experience Oneness or awakening.
The Mathematics of Our Universe: "Classical similarities in the Quantum world"
Student ID – 26628961
Faculty of Social and Mathematical Sciences – University of Southampton, 2018

Abstract
In this report, we start by defining key aspects of Classical Lagrangian mechanics, including the principle of least action and how one can use this to derive the Euler-Lagrange equation. Symmetries and conservation laws shall also be introduced, deriving relations between position, momenta and the Lagrangian of our system. Following this, we develop our study of Classical mechanics further using Legendre transforms on the Euler-Lagrange equation and our conservation laws to define Hamiltonian mechanics. In our new notation, we use Poisson brackets when evaluating the rate of change of a classical observable. Next, we cross to Quantum mechanics, giving some definitions which shall be used in later discussion. We then state and prove the Ehrenfest theorem, from which we draw our first correspondence between Classical and Quantum mechanics, most notably between the Poisson bracket and the commutator. Furthermore, the Ehrenfest theorem applied to the operators of position and momentum shows a further correspondence with Classical results. Finally, we take the example of the Simple Harmonic Oscillator, using both Classical and Quantum methods to solve for this system, and comment on the similarities and differences between the results.

1. Introduction

2. Lagrangian Mechanics
We begin by exploring a re-formulation of Newtonian mechanics developed by Joseph-Louis Lagrange, called Lagrangian mechanics. For a given physical system we require equations of motion which contain variables as functions of time, in order to pinpoint the location of an object or particle at any given time. The majority of physical systems are not free, and motion is restricted by properties of the system. These systems are called constrained systems. [2]

Definition 2.1 - A constrained system is a system that is subject to either [3]:
Geometric constraints: factors which impose some limit on the position of an object. [2]
Kinematical constraints: factors which describe how the velocity of a particle behaves. [2]

Definition 2.2 - A function for which the integral can be computed is said to be integrable. [19]

Definition 2.3 - A system is said to be holonomic if it has only geometric or integrable kinematical constraints. [2]

Since the Classical Newtonian equations using Cartesian coordinates do not have these constraints, we must find a new coordinate system to work with.

Definition 2.3 - Let S be a system and q_1, ..., q_n be a set of independent variables. If the position of every particle in S can be written as a function of these variables, we say that q_1, ..., q_n are a set of generalised coordinates for S. The time derivatives \dot{q}_1, ..., \dot{q}_n of these generalised coordinates are called the generalised velocities of S. [2][3]

Definition 2.4 - Let S be a holonomic system. The number of degrees of freedom of S is the number of generalised coordinates q_i required to describe the configuration of S. The number of degrees of freedom of a system is equal to the number of equations of motion needed to find the motion of the system. [2]

Definition 2.5 - Let S be a holonomic system with generalised coordinates q_1, ..., q_n. Then the Lagrangian function L is

L(q, \dot{q}, t) = T - V ,

the difference between the kinetic and potential energies. Here our Lagrangian function is dependent on the set of generalised coordinates q_i, the generalised velocities \dot{q}_i, and time t. [2]

3.
Calculus of Variations
The method of calculus of variations is used to find the stationary values on a path, curve, surface, etc. of a given function with fixed end points by using an integral.

Definition 3.1 - Let S[y] be a real valued functional, which we call an action of the function y(x) for x in [a, b]. We can write this in the form of an integral,

S[y] = \int_a^b F(x, y, y') \, dx .

Definition 3.2 - The correct path of motion of a mechanical system with holonomic constraints and conservative external forces, from time t_1 to t_2, is the stationary solution of the action. The correct path satisfies Lagrange's equations of motion; this is called the principle of least action. [4]

Lemma 3.3 (Euler-Lagrange Lemma) [5] - If g(x) is a continuous function on [a, b], and \int_a^b g(x)\,h(x) \, dx = 0 for all continuously differentiable functions h(x) which satisfy h(a) = h(b) = 0, then g(x) = 0 on [a, b].

Proof. A proof of the Euler-Lagrange Lemma can be found in [5], pg. 189.

Example of F = -dV/dt?

Theorem 3.4 - Suppose the function y(x) minimises the action S[y]; then it must satisfy the following equation on [a, b],

\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0 .

This is called the Euler-Lagrange equation. [2]

Proof. Following similar derivations as in [5] and [9], we start with an action S[y] = \int_a^b F(x, y, y') \, dx, where F is a given function of x, y and y'. We want to find the extremum points of the action in order to find the value of y(x) such that S[y] is the required minimum. We begin by assuming that y(x) is the function that minimises our action and that it satisfies the required boundary conditions on [a, b]. Now, we introduce a continuous twice differentiable function \eta(x) defined on [a, b], fixed at the end points so that it satisfies \eta(a) = \eta(b) = 0. Define

y_\varepsilon(x) = y(x) + \varepsilon \eta(x) ,

where \varepsilon is an arbitrarily small real parameter. We set

S(\varepsilon) = \int_a^b F(x, y_\varepsilon, y_\varepsilon') \, dx .

We want to find the extremum of S(\varepsilon) at \varepsilon = 0; this means that y(x) is a stationary function for S, and for all \eta we require dS/d\varepsilon = 0 at \varepsilon = 0. Differentiating S(\varepsilon) with respect to the parameter \varepsilon and, by a property of calculus, bringing the derivative into the integral, then using the chain rule to evaluate the integrand, gives

\frac{dS}{d\varepsilon} = \int_a^b \left( \frac{\partial F}{\partial y_\varepsilon}\,\frac{\partial y_\varepsilon}{\partial \varepsilon} + \frac{\partial F}{\partial y_\varepsilon'}\,\frac{\partial y_\varepsilon'}{\partial \varepsilon} \right) dx .

Applying our definition of y_\varepsilon, it is clear to see that \partial y_\varepsilon / \partial \varepsilon = \eta and similarly that \partial y_\varepsilon' / \partial \varepsilon = \eta', hence

\frac{dS}{d\varepsilon} = \int_a^b \left( \frac{\partial F}{\partial y_\varepsilon}\,\eta + \frac{\partial F}{\partial y_\varepsilon'}\,\eta' \right) dx .   (3.12)

Integrating the term containing \eta' using the integration by parts formula, we name u = \partial F / \partial y_\varepsilon' and dv = \eta' dx, and our equation (3.12) becomes

\int_a^b \frac{\partial F}{\partial y_\varepsilon'}\,\eta' \, dx = \left[ \frac{\partial F}{\partial y_\varepsilon'}\,\eta \right]_a^b - \int_a^b \eta \, \frac{d}{dx}\frac{\partial F}{\partial y_\varepsilon'} \, dx .   (3.14)

Evaluate the first term of (3.14) using \eta(a) = \eta(b) = 0. Substituting into equation (3.12) leaves

\frac{dS}{d\varepsilon} = \int_a^b \eta \left( \frac{\partial F}{\partial y_\varepsilon} - \frac{d}{dx}\frac{\partial F}{\partial y_\varepsilon'} \right) dx .

By taking \varepsilon = 0, we arrive at y_\varepsilon = y, and by factoring out a (-1) we are left with the integral

\int_a^b \eta \left( \frac{d}{dx}\frac{\partial F}{\partial y'} - \frac{\partial F}{\partial y} \right) dx = 0 .

Finally, applying Lemma 3.3 we see our required result,

\frac{\partial F}{\partial y} - \frac{d}{dx}\frac{\partial F}{\partial y'} = 0 .   (3.18)

This is the Euler-Lagrange equation for F. It can be used to solve our problems involving the least action principle. The reversal of the argument also shows that if y satisfies (3.18) then y is an extremum of S[y]. Hence,

Definition 3.5 (Lagrange's Equations of Motion) - If S is a holonomic system with generalised coordinates q_1, ..., q_n and Lagrangian L, then the equations of motion of the system can be written in the following form, [2]

\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 , \qquad i = 1, ..., n .   (3.20)

The Lagrangian approach to mechanics is to find the extremum (minimum value) of an integral in order to derive the equations of motion for that system.

4. Symmetries and Conservation Laws
Let S be a holonomic system with a set of generalised coordinates q_1, ..., q_n and the Euler-Lagrange equations of motion with n degrees of freedom. The Lagrangian for this system is clearly given by

L = L(q_1, ..., q_n, \dot{q}_1, ..., \dot{q}_n, t) .

Definition 4.1 - If a generalised coordinate q_j of a mechanical system S is not contained in the Lagrangian L, such that

\frac{\partial L}{\partial q_j} = 0 ,

then we call q_j an ignorable coordinate. [6][7]

At an ignorable coordinate q_j the Euler-Lagrange equation states

\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_j} - \frac{\partial L}{\partial q_j} = 0 .

Here, the term \partial L / \partial q_j = 0, because L has no q_j dependence, hence

\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_j} = 0 .

Definition 4.2 - Consider a holonomic system S with Lagrangian L, such that we can define a quantity p = \partial L / \partial \dot{q}, which we call the momentum of a free particle.
Now say S is a system described by generalised coordinates q_1, ..., q_n. One can define quantities p_i as

p_i = \frac{\partial L}{\partial \dot{q}_i} .   (4.7)

This is called the generalised momentum for coordinate q_i. [4] This concept of generalised momenta is useful, because it can be substituted into equation (4.3), giving a further simplified Euler-Lagrange equation such that \dot{p}_j = 0. Therefore, this shows that the generalised momentum for the ignorable coordinate, p_j, is constant. We can also find the time derivative of this generalised momentum simply by using (4.7) in the Euler-Lagrange equation (3.20). Then, using common notation, one can see the result

\dot{p}_i = \frac{\partial L}{\partial q_i} .   (4.9)

Theorem 4.3 - For all ignorable coordinates q_j, the generalised momenta p_j are not time dependent; this is called conserved momentum. [8]

The conservation laws in Lagrangian mechanics are more general than in Newtonian mechanics. Therefore, the Lagrangian can also be used to prove the conservation laws that were proved previously in Newtonian mechanics.

5. Hamiltonian Mechanics
We shall now introduce Hamiltonian mechanics and see how it can be derived from the Lagrangian mechanics that we have already seen. The Hamiltonian formulation adds no new physics to what we have already learnt; however, it does provide us with a pathway to the Hamilton-Jacobi equations and branches of statistical mechanics.

Definition 5.1 - An active variable is one that is transformed by a transformation between two functions. The two functions may also have dependence on other variables that are not part of the transformation; these are called passive variables. [2]

Definition 5.2 - We have the variables v_1, ..., v_n which are functions of the active variables u_1, ..., u_n and passive variables w_1, ..., w_m. Suppose the v_i can be defined by the following formula,

v_i = \frac{\partial F}{\partial u_i} ,

where F is a given function of u_1, ..., u_n and w_1, ..., w_m, with inverse

u_i = \frac{\partial G}{\partial v_i} .

The function G is related to F by the formula

G(v, w) = u \cdot v - F(u, w) ,

where \cdot is the standard vector dot product. Moreover, the derivatives of F and G with respect to the passive variables w_j are related by

\frac{\partial G}{\partial w_j} = - \frac{\partial F}{\partial w_j} .

The relationship between the two functions F and G is symmetric, and each is said to be the Legendre transform of the other. [2]

Let S be a Lagrangian system with n degrees of freedom and generalised coordinates q_1, ..., q_n. Then the Euler-Lagrange equations of motion for S are

\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0 ,

where L is the Lagrangian of the system. We now want to convert this set of n second order ODEs into Hamiltonian form in terms of the unknowns (q, p), where the p_i are the generalised momenta of (4.7). These can be written in vector form,

p = \frac{\partial L}{\partial \dot{q}} .

We want to eliminate the velocities \dot{q} from the Lagrangian. To do this we use the Legendre transform. This leads us to the definition of the Hamiltonian function.

Definition 5.3 - The function H, which is the Legendre transform of the Lagrangian function L, must obey the following equation,

H(q, p, t) = p \cdot \dot{q} - L(q, \dot{q}, t) ,

where H is called the Hamiltonian function of S. We can now use (5.4) to form a relation between H and L with respect to the passive variable q, [2]

\frac{\partial H}{\partial q_i} = - \frac{\partial L}{\partial q_i} .

Using this relation, we can transform the Lagrange equations into Hamilton's equations. Take (4.9), which has the equivalent vector form \dot{p} = \partial L / \partial q, which can be transformed into Hamiltonian notation by using (5.9), giving

\dot{p} = - \frac{\partial H}{\partial q} .

Hence this leaves us with the two transformed Lagrange equations (5.7) and (5.11); these are known as Hamilton's equations, which have the expanded form

\dot{q}_i = \frac{\partial H}{\partial p_i} , \qquad \dot{p}_i = - \frac{\partial H}{\partial q_i} .   (5.12)

Definition 5.4 - Let f and g be two Classical observables. We define the Poisson bracket \{f, g\} as [2]

\{ f, g \} = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i} \right) .   (5.13)

Let S be a system with n degrees of freedom and generalised coordinates q_1, ..., q_n.
In the system, we have an observable f = f(q, p, t); looking at its time derivative we have

\frac{df}{dt} = \sum_{i} \left( \frac{\partial f}{\partial q_i}\dot{q}_i + \frac{\partial f}{\partial p_i}\dot{p}_i \right) + \frac{\partial f}{\partial t} .

Using Hamilton's equations in (5.12) we can replace \dot{q}_i and \dot{p}_i, leaving us with

\frac{df}{dt} = \sum_{i} \left( \frac{\partial f}{\partial q_i}\frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial H}{\partial q_i} \right) + \frac{\partial f}{\partial t} .

Now applying the definition of the Poisson bracket, we can concisely write the first term,

\frac{df}{dt} = \{ f, H \} + \frac{\partial f}{\partial t} .   (5.16)

We shall refer to this result when looking at the Ehrenfest theorem. [18]

Comparison between Lagrangian and Hamiltonian mechanics?

6. Classical Limit and Correspondence Principle [17][18]
Quantum Mechanics is built upon an analogy with Hamiltonian Classical Mechanics. Here we find a clear link between the coordinates of position and momentum and the Quantum observables. Statistical interpretation…

The theory of Quantum Mechanics is built upon a set of postulates. [9] In brief summary, they state that:
- The state of a particle can be represented by a vector |\psi\rangle in the Hilbert space.
- The independent variables x and p from classical interpretations become Hermitian operators \hat{X} and \hat{P}. In general, observables from classical mechanics become operators in quantum mechanics.
- If we study a particle in state |\psi\rangle, a measurement of an observable \Omega will give an eigenvalue \omega, with a probability of yielding this state proportional to |\langle \omega | \psi \rangle|^2.
- The state vector |\psi\rangle obeys the Schrödinger equation: i\hbar \frac{d}{dt}|\psi\rangle = \hat{H}|\psi\rangle, where \hat{H} is the Quantum Hamiltonian operator, equal to the sum of the kinetic and potential energies. [9]

Definition 6.1 - The expectation value of a given observable, represented by an operator \hat{A}, is the average value of the observable over the ensemble. [12] Say every particle is in the state |\psi\rangle; then

\langle \hat{A} \rangle = \langle \psi | \hat{A} | \psi \rangle .

Definition 6.2 - Let \hat{A} be a Quantum operator representing a physical observable. We say \hat{A} is a Hermitian operator if

\hat{A} = \hat{A}^\dagger ,

where \hat{A}^\dagger is the adjoint of the operator (a definition can be found in [12], pg. 22). An example of a Hermitian operator is the Hamiltonian operator. [12]

Definition 6.3 - The commutator of two Quantum operators is defined as

[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} .

If [\hat{A}, \hat{B}] = 0 then we say the operators commute. It is also noted that the order of the operators can change the result, [\hat{A}, \hat{B}] = -[\hat{B}, \hat{A}], and that in general [\hat{A}, \hat{B}] \neq 0. [14]

Theorem 6.4 - For operators \hat{A}, \hat{B}, \hat{C},

[\hat{A}, \hat{B}\hat{C}] = [\hat{A}, \hat{B}]\hat{C} + \hat{B}[\hat{A}, \hat{C}] .

Proof. First, we apply the definition of the commutator (6.4), expand the products, and regroup the terms.

Two commutation relations which we shall use in later discussion are

[\hat{X}, \hat{P}^2] = 2 i \hbar \hat{P} \qquad \text{and} \qquad [\hat{P}, V(\hat{X})] = - i \hbar \frac{dV}{d\hat{X}} .

The proofs for these can be found in [12].

Theorem 6.5 (The Ehrenfest Theorem) - The generalised Ehrenfest theorem for the time derivative of the expectation value of a Quantum operator \hat{A} is

\frac{d\langle \hat{A} \rangle}{dt} = \frac{1}{i\hbar}\langle [\hat{A}, \hat{H}] \rangle + \left\langle \frac{\partial \hat{A}}{\partial t} \right\rangle ,   (6.11)

where \hat{H} is the Hamiltonian operator.

Proof. We start by applying the definition of the expectation value of a general operator, \langle \hat{A} \rangle = \langle \psi | \hat{A} | \psi \rangle. Taking the derivative into the expectation value gives three terms: one from the time derivative of the bra, one from any explicit time dependence of \hat{A}, and one from the time derivative of the ket. We can now simply evaluate the time derivatives of |\psi\rangle in the bras and kets by rearranging the Schrödinger equation (6.1), and similarly for the bra, using the fact that \hat{H} is Hermitian. Using results (6.14) and (6.15) in (6.13), we can combine the first and third terms using the commutation relation (6.4). Finally, we apply the definition of the expectation value (6.2) to both terms in (6.17) and we are left with the Ehrenfest Theorem for a general Quantum operator (6.11).

The Ehrenfest Theorem corresponds structurally to a result in Classical Mechanics. If we take a Classical observable \omega which depends on the set of generalised coordinates q_i and momenta p_i, then calculate its rate of change, we see, as shown for (5.16), that

\frac{d\omega}{dt} = \{ \omega, H \} + \frac{\partial \omega}{\partial t} .

From this we can see an immediate correspondence between the Classical Poisson bracket (5.13) and the Quantum commutator (6.4). What do we learn from this? Now, we look at some key results from the Ehrenfest theorem and how they can help us find further correspondence between Classical and Quantum Mechanics; a compact side-by-side restatement of this correspondence is sketched below, before turning to specific operators.
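For clarity, here is a compact side-by-side sketch of the correspondence just described (this summary is an added illustration using the notation of Sections 5 and 6, not a result derived in the report itself):

\frac{d\omega}{dt} = \{\omega, H\} + \frac{\partial \omega}{\partial t}
\qquad\longleftrightarrow\qquad
\frac{d\langle\hat{A}\rangle}{dt} = \frac{1}{i\hbar}\,\langle[\hat{A},\hat{H}]\rangle + \left\langle\frac{\partial\hat{A}}{\partial t}\right\rangle ,

\{q_i, p_j\} = \delta_{ij}
\qquad\longleftrightarrow\qquad
[\hat{X}_i, \hat{P}_j] = i\hbar\,\delta_{ij} .

In both lines, the Classical Poisson bracket on the left maps onto the Quantum commutator divided by i\hbar on the right.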
Example 6.6 - In this example we shall look at a specific case of the Ehrenfest theorem where we set \hat{A} = \hat{X}, the position operator. [17] For a Hamiltonian

\hat{H} = \frac{\hat{P}^2}{2m} + V(\hat{X}) ,   (6.20)

we begin by substituting \hat{X} into (6.11),

\frac{d\langle \hat{X} \rangle}{dt} = \frac{1}{i\hbar}\langle [\hat{X}, \hat{H}] \rangle + \left\langle \frac{\partial \hat{X}}{\partial t} \right\rangle .

It is clear to see that the second term in this equation disappears, as \hat{X} has no time dependence. We now use our Hamiltonian to expand the commutator. Here [\hat{X}, V(\hat{X})] = 0 (Definition 6.3), so we are only left with the commutator [\hat{X}, \hat{P}^2/2m]. Applying Theorem 6.4, setting \hat{B} = \hat{C} = \hat{P}, the commutator can be expanded, leaving

[\hat{X}, \hat{P}^2] = [\hat{X}, \hat{P}]\hat{P} + \hat{P}[\hat{X}, \hat{P}] .

Utilising the commutator result [\hat{X}, \hat{P}] = i\hbar,

\frac{d\langle \hat{X} \rangle}{dt} = \frac{\langle \hat{P} \rangle}{m} .

This result can be compared with \dot{x} = p/m from Classical Mechanics. It is also possible to translate it into an expression involving the Hamiltonian, but only if it is legal to take the derivative of the Hamiltonian operator with respect to another operator, namely as shown,

\frac{d\langle \hat{X} \rangle}{dt} = \left\langle \frac{\partial \hat{H}}{\partial \hat{P}} \right\rangle .

This clearly shows a correspondence with one of Hamilton's equations seen in (5.12),

\dot{q} = \frac{\partial H}{\partial p} .

Evaluation

Example 6.7 - We now follow a similar route as in [17], using the operator for momentum in the Ehrenfest theorem,

\frac{d\langle \hat{P} \rangle}{dt} = \frac{1}{i\hbar}\langle [\hat{P}, \hat{H}] \rangle + \left\langle \frac{\partial \hat{P}}{\partial t} \right\rangle .

Again, \hat{P} has no time dependence, so the second term disappears. Using the same Hamiltonian (6.20), here \hat{P} commutes with \hat{P}^2/2m, and so we are left with [\hat{P}, V(\hat{X})]. By utilising the result from (6.10) for the commutator, some trivial simplification leaves

\frac{d\langle \hat{P} \rangle}{dt} = - \left\langle \frac{dV}{d\hat{X}} \right\rangle .   (6.32)

In one dimension, we can see that the rate of change of the average momentum is equal to minus the average derivative of the potential V. Again, the behaviour of the average Quantum variables corresponds with the Classical expressions for these observables. In Classical terms (6.32) reduces to \dot{p} = -dV/dx.

Explanations

Again, one sees a resemblance between this Quantum result and the Classical Hamilton's equations (5.12),

\dot{p} = - \frac{\partial H}{\partial q} .

Evaluation of the above results in relation to Classical Mechanics
The main difference between the quantum and classical forms is that the quantum version is a relation between mean values, while the classical version is exact. We can make the correspondence exact provided that it is legal to take the averaging operation inside the derivative and apply it to each occurrence of X and P. That is, is it legal to say that \langle dV/d\hat{X} \rangle = dV(\langle\hat{X}\rangle)/d\langle\hat{X}\rangle? CORRESPONDENCE PRINCIPLE, pg. 253-255, Taylor [18]

7. Simple Harmonic Oscillator
Example 7.1 - Lagrangian Harmonic Oscillator [9]
Consider a system containing the undamped Harmonic Oscillator in 3-D, with displacement coordinate x, which is a generalised coordinate. We first form a Lagrangian relation for this system,

L = \tfrac{1}{2} m (\dot{x}^2 + \dot{y}^2 + \dot{z}^2) - \tfrac{1}{2} k (x^2 + y^2 + z^2) .

Now, we consider the case of the 1-D Harmonic Oscillator (i.e. constraining y and z to both be zero). [4] This leaves us to find the following equations,

\frac{\partial L}{\partial x} = -kx , \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} = m\ddot{x} .

Hence our equation of motion for the system,

m\ddot{x} + kx = 0 .

All that is left is to rearrange this equation and to solve,

x(t) = A\cos(\omega t + \phi) , \qquad \omega = \sqrt{k/m} .

Definition 7.2 - Scaled Quantum operators for position and momentum, \hat{X} and \hat{P}, are defined as

\hat{X} = \sqrt{\frac{m\omega}{\hbar}}\,\hat{x} , \qquad \hat{P} = \frac{\hat{p}}{\sqrt{m\hbar\omega}} .

Hence lowering and raising operators \hat{a}, \hat{a}^\dagger can be defined in the following way,

\hat{a} = \frac{1}{\sqrt{2}}(\hat{X} + i\hat{P}) , \qquad \hat{a}^\dagger = \frac{1}{\sqrt{2}}(\hat{X} - i\hat{P}) .

They have the commutation relation

[\hat{a}, \hat{a}^\dagger] = 1 .

We shall use the ladder operators, or more notably the raising operator, when analysing the Quantum Harmonic Oscillator in Section 8. [12]

Definition 6.7 - Ground State Ket?

Remark 7... - The theory of Quantum Mechanics makes predictions using probabilities for the result of a measurement of an observable \Omega. The probabilities are found by obtaining the real eigenvalues \omega of \hat{\Omega} and using the relation stated in the postulates.

Example 8.2 - Quantum [12]
We start with our scaled operators of position and momentum. For the Quantum Harmonic Oscillator, we need a Hamiltonian operator based on the Classical Simple Harmonic Oscillator.
Replacing the observables x and p with operators, we have

\hat{H} = \frac{\hat{p}^2}{2m} + \tfrac{1}{2} m \omega^2 \hat{x}^2 = \tfrac{1}{2}\hbar\omega(\hat{X}^2 + \hat{P}^2) .

We use the raising and lowering operators \hat{a}, \hat{a}^\dagger defined in (6.12) in order to find the wave function for the Simple Harmonic Oscillator. We have the scaled operators of position and momentum as in (6.11), so we can write \hat{H} in terms of our ladder operators,

\hat{H} = \hbar\omega\left(\hat{a}^\dagger\hat{a} + \tfrac{1}{2}\right) .

The lowering operator \hat{a} can act in our X-space on the ground state ket |0\rangle, such that

\hat{a}|0\rangle = 0 ,

as we cannot lower past the ground state. Applying the definition of the expectation values, \langle x|\hat{a}|0\rangle = 0. Evaluating the two terms inside the bracket, we see that in the position representation this reads

\left( \sqrt{\frac{m\omega}{2\hbar}}\,x + \sqrt{\frac{\hbar}{2m\omega}}\,\frac{d}{dx} \right)\psi_0(x) = 0 .

So, we have equation (8.8) rewritten as a first order differential equation for \psi_0, giving us the solution for our ground state wave function,

\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\exp\!\left(-\frac{m\omega x^2}{2\hbar}\right) .

Now we have our ground state, we can apply the raising operator \hat{a}^\dagger to |0\rangle and, using a similar approach to the above,

\psi_1(x) \propto x\,\exp\!\left(-\frac{m\omega x^2}{2\hbar}\right) .

By repeating this process, at the end of the story we find a generalised form of the normalised wave function,

\psi_n(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}\,H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right)\exp\!\left(-\frac{m\omega x^2}{2\hbar}\right) ,

where the H_n are Hermite polynomials.

We can compare the probability density function of the classical approach with the quantum ground state |\psi_0|^2. It is clear to see that classical mechanics has a minimum at x = 0, where it has maximum kinetic energy, whereas for quantum mechanics |\psi_0|^2 peaks at x = 0 for the ground state. However, as n increases, the quantum wave functions begin to represent a similar distribution to that of classical mechanics, as shown in figure 8.2. For a very large n with macroscopic energies, the classical and quantum curves are indistinguishable, due to limitations of experimental resolution. Chat about measurements... pg. 275 [18]

Conclusion
In this report, we have defined Lagrangian, Hamiltonian and Quantum mechanics.
In further study, one could... [10]
Bohr's correspondence principle [16]
Large values of n

References
[1] Oliveira, A.R.E. (2013) Lagrange as a Historian of Mechanics. Rio de Janeiro, Brazil: Federal University of Rio de Janeiro.
[2] Gregory, R.D. (2006) Classical Mechanics. New York: Cambridge University Press.
[3] Kibble, T.W.B. and Berkshire, F.H. (1996) Classical Mechanics. England: Addison Wesley.
[4] Feynman, R.P., Leighton, R.B. and Sands, M. (2013) The Feynman Lectures on Physics. Date accessed: 21/10/17.
[5] Ruostekoski, J. (2015-16) MATH2008: Introduction to Applied Mathematics. Southampton: University of Southampton.
[6] Goldstein, H., Poole, C. and Safko, J. (2002) Classical Mechanics. Third Edition, International Edition. San Francisco: Addison Wesley.
[7] Chow, T.L. (1995) Classical Mechanics. Canada: John Wiley & Sons, Inc.
[8] Morin, D. (2008) Introduction to Classical Mechanics. New York: Cambridge University Press.
[9] Fowles, G. and Cassiday, G. (2005) Analytical Mechanics.
[10] Feynman, R. and Dirac, P. (2005) Feynman's Thesis: A New Approach to Quantum Theory.
[11] Malham, S. (2016) An Introduction to Lagrangian and Hamiltonian Mechanics.
[12] Akeroyd, A. (2017) PHYS6003: Advanced Quantum Mechanics. Southampton: University of Southampton.
[13] Dirac, P. (1964) Lectures on Quantum Mechanics. New York: Belfer Graduate School of Science.
[14] Sachrajda, C. (2016) Quantum Physics. Southampton: University of Southampton.
[15] Dirac, P. The Principles of Quantum Mechanics.
[16] Bohr, N. (1976) Collected Works. Amsterdam.
[17] Shankar, R. (1994) Principles of Quantum Mechanics, 2nd Edition. New York: Plenum Press.
[18] Taylor. Mechanics: Classical and Quantum.
[19] Weisstein, Eric W. "Integrable." From MathWorld – A Wolfram Web Resource.
Dispersion (water waves)

Frequency dispersion for surface gravity waves
This section is about frequency dispersion for waves on a fluid layer forced by gravity, and according to linear theory. For surface tension effects on frequency dispersion, see surface tension effects in Airy wave theory and capillary wave.

Wave propagation and dispersion
The simplest propagating wave of unchanging form is a sine wave. A sine wave with water surface elevation η( x, t ) is given by:[2]

η( x, t ) = a sin( θ( x, t ) ),

where a is the amplitude (in metres) and θ = θ( x, t ) is the phase function (in radians), depending on the horizontal position ( x , in metres) and time ( t , in seconds):[3]

θ = 2π ( x / λ − t / T ) = k x − ω t,   with   k = 2π / λ   and   ω = 2π / T.

Characteristic phases of a water wave are:
• the upward zero-crossing at θ = 0,
• the wave crest at θ = ½ π,
• the downward zero-crossing at θ = π and
• the wave trough at θ = 1½ π.
A certain phase repeats itself after an integer m multiple of 2π: sin(θ) = sin(θ + m•2π).

Essential for water waves, and other wave phenomena in physics, is that free propagating waves of non-zero amplitude only exist when the angular frequency ω and wavenumber k (or equivalently the wavelength λ and period T ) satisfy a functional relationship: the frequency dispersion relation[4][5]

ω² = Ω²(k).

The dispersion relation has two solutions: ω = +Ω(k) and ω = −Ω(k), corresponding to waves travelling in the positive or negative x–direction. The dispersion relation will in general depend on several other parameters in addition to the wavenumber k. For gravity waves, according to linear theory, these are the acceleration by gravity g and the water depth h. The dispersion relation for these waves is:[6][5]

ω² = g k tanh( k h ),

an implicit equation with tanh denoting the hyperbolic tangent function.

An initial wave phase θ = θ0 propagates as a function of space and time. Its subsequent position is given by:

x = ( ω t + θ0 ) / k.

This shows that the phase moves with the velocity:[2]

cp = ω / k = λ / T,

which is called the phase velocity.

Phase velocity
A sinusoidal wave, of small surface-elevation amplitude and with a constant wavelength, propagates with the phase velocity, also called celerity or phase speed. While the phase velocity is a vector and has an associated direction, celerity or phase speed refer only to the magnitude of the phase velocity. According to linear theory for waves forced by gravity, the phase speed depends on the wavelength and the water depth. For a fixed water depth, long waves (with large wavelength) propagate faster than shorter waves. In the left figure, it can be seen that shallow water waves, with wavelengths λ much larger than the water depth h, travel with the phase velocity[2]

cp = √( g h ),

with g the acceleration by gravity and cp the phase speed. Since this shallow-water phase speed is independent of the wavelength, shallow water waves do not have frequency dispersion.

Using another normalization for the same frequency dispersion relation, the figure on the right shows that for a fixed wavelength λ the phase speed cp increases with increasing water depth.[1] Until, in deep water with water depth h larger than half the wavelength λ (so for h/λ > 0.5), the phase velocity cp is independent of the water depth:[2]

cp = √( g λ / (2π) ) = g T / (2π),

with T the wave period (the reciprocal of the frequency f, T=1/f ). So in deep water the phase speed increases with the wavelength, and with the period.

Since the phase speed satisfies cp = λ/T = λf, wavelength and period (or frequency) are related. For instance in deep water:

λ = g T² / (2π).

The dispersion characteristics for intermediate depth are given below.
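As a numerical illustration of the implicit relation ω² = g k tanh( k h ) given above (this is an added sketch, not part of the original article; the period of 8 s and depth of 5 m are arbitrary example values, and the function name wavenumber is ours):

import math

def wavenumber(T, h, g=9.81, tol=1e-12, max_iter=50):
    """Solve omega^2 = g*k*tanh(k*h) for the wavenumber k, given period T and depth h."""
    omega = 2.0 * math.pi / T
    k = omega * omega / g                      # deep-water initial guess
    for _ in range(max_iter):
        f = g * k * math.tanh(k * h) - omega * omega
        df = g * (math.tanh(k * h) + k * h / math.cosh(k * h) ** 2)
        k_new = k - f / df                     # Newton step
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

T, h = 8.0, 5.0                                # example period (s) and water depth (m)
k = wavenumber(T, h)
c_p = (2.0 * math.pi / T) / k                  # phase speed = omega / k, in m/s
print(f"k = {k:.4f} rad/m, wavelength = {2 * math.pi / k:.1f} m, cp = {c_p:.2f} m/s")

For very large depth the computed phase speed approaches the deep-water value g T / (2π) quoted above, and for very small depth it approaches √( g h ).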
Group velocity
Interference of two sinusoidal waves with slightly different wavelengths, but the same amplitude and propagation direction, results in a beat pattern, called a wave group. As can be seen in the animation, the group moves with a group velocity cg different from the phase velocity cp, due to frequency dispersion. The group velocity is depicted by the red lines (marked B) in the two figures above. In shallow water, the group velocity is equal to the shallow-water phase velocity. This is because shallow water waves are not dispersive. In deep water, the group velocity is equal to half the phase velocity: cg = ½ cp.[7]

The group velocity also turns out to be the energy transport velocity. This is the velocity with which the mean wave energy is transported horizontally in a narrow-band wave field.[8][9]

In the case of a group velocity different from the phase velocity, a consequence is that the number of waves counted in a wave group is different when counted from a snapshot in space at a certain moment, from when counted in time from the measured surface elevation at a fixed position. Consider a wave group of length Λg and group duration of τg. The group velocity is:[10]

cg = Λg / τg.

The number of waves in a wave group, measured in space at a certain moment is: Λg / λ. While measured at a fixed location in time, the number of waves in a group is: τg / T. So the ratio of the number of waves measured in space to those measured in time is:

( Λg / λ ) / ( τg / T ) = cg / cp.

So in deep water, with cg = ½ cp,[11] a wave group has twice as many waves in time as it has in space.[12]

The water surface elevation η(x,t), as a function of horizontal position x and time t, for a bichromatic wave group of full modulation can be mathematically formulated as:[11]

η = a sin( k1 x − ω1 t ) + a sin( k2 x − ω2 t ),

• a the wave amplitude of each frequency component in metres,
• k1 and k2 the wave number of each wave component, in radians per metre, and
• ω1 and ω2 the angular frequency of each wave component, in radians per second.
Both ω1 and k1, as well as ω2 and k2, have to satisfy the dispersion relation:

ω1² = Ω²(k1)   and   ω2² = Ω²(k2).

Using trigonometric identities, the surface elevation is written as:[10]

η = [ 2 a cos( ½ (k1 − k2) x − ½ (ω1 − ω2) t ) ] · sin( ½ (k1 + k2) x − ½ (ω1 + ω2) t ).

The part between square brackets is the slowly varying amplitude of the group, with group wave number ½ ( k1 − k2 ) and group angular frequency ½ ( ω1 − ω2 ). As a result, the group velocity is, for the limit k1 → k2 :[10][11]

cg = dΩ(k) / dk.

Wave groups can only be discerned in case of a narrow-banded signal, with the wave-number difference k1 − k2 small compared to the mean wave number ½ (k1 + k2).

Multi-component wave patterns
The effect of frequency dispersion is that the waves travel as a function of wavelength, so that spatial and temporal phase properties of the propagating wave are constantly changing. For example, under the action of gravity, water waves with a longer wavelength travel faster than those with a shorter wavelength. While two superimposed sinusoidal waves, called a bichromatic wave, have an envelope which travels unchanged, three or more sinusoidal wave components result in a changing pattern of the waves and their envelope. A sea state – that is: real waves on the sea or ocean – can be described as a superposition of many sinusoidal waves with different wavelengths, amplitudes, initial phases and propagation directions. Each of these components travels with its own phase velocity, in accordance with the dispersion relation.
The statistics of such a surface can be described by its power spectrum.[13]

Dispersion relation
In the table below, the dispersion relation ω² = [Ω(k)]² between angular frequency ω = 2π / T and wave number k = 2π / λ is given, as well as the phase and group speeds.[10]

quantity | symbol | units | deep water ( h > ½ λ ) | shallow water ( h < 0.05 λ ) | intermediate depth ( all λ and h )
dispersion relation | ω | rad / s | ω = √( g k ) | ω = k √( g h ) | ω = √( g k tanh( k h ) )
phase velocity | cp = ω / k | m / s | cp = g / ω = g T / (2π) | cp = √( g h ) | cp = √( (g / k) tanh( k h ) )
group velocity | cg = dω / dk | m / s | cg = ½ g / ω = ½ cp | cg = √( g h ) | cg = ½ cp [ 1 + 2 k h / sinh( 2 k h ) ]
ratio | cg / cp | - | ½ | 1 | ½ [ 1 + 2 k h / sinh( 2 k h ) ]
wavelength | λ | m | λ = g T² / (2π) | λ = T √( g h ) | for given period T, the solution of: (2π / T)² = ( 2π g / λ ) tanh( 2π h / λ )

Deep water corresponds with water depths larger than half the wavelength, which is the common situation in the ocean. In deep water, longer period waves propagate faster and transport their energy faster. The deep-water group velocity is half the phase velocity. In shallow water, for wavelengths larger than twenty times the water depth,[14] as found quite often near the coast, the group velocity is equal to the phase velocity.

The full linear dispersion relation was first found by Pierre-Simon Laplace, although there were some errors in his solution for the linear wave problem. The complete theory for linear water waves, including dispersion, was derived by George Biddell Airy and published in about 1840. A similar equation was also found by Philip Kelland at around the same time (but making some mistakes in his derivation of the wave theory).[15] The shallow water (with small h / λ) limit, ω² = g h k², was derived by Joseph Louis Lagrange.

Surface tension effects
In case of gravity–capillary waves, where surface tension affects the waves, the dispersion relation becomes:[5]

ω² = ( g k + (σ / ρ) k³ ) tanh( k h ),

with σ the surface tension (in N/m) and ρ the density of the fluid. For a water–air interface (with σ = 0.074 N/m and ρ = 1000 kg/m³) the waves can be approximated as pure capillary waves – dominated by surface-tension effects – for wavelengths less than 0.4 cm (0.2 in). For wavelengths above 7 cm (3 in) the waves are to good approximation pure surface gravity waves with very little surface-tension effects.[16]

Interfacial waves
For two homogeneous layers of fluids, of mean thickness h below the interface and h′ above – under the action of gravity and bounded above and below by horizontal rigid walls – the dispersion relationship ω² = Ω²(k) for gravity waves is provided by:[17]

ω² = g k ( ρ − ρ′ ) / ( ρ coth( k h ) + ρ′ coth( k h′ ) ),

where again ρ and ρ′ are the densities below and above the interface, while coth is the hyperbolic cotangent function. For the case ρ′ is zero this reduces to the dispersion relation of surface gravity waves on water of finite depth h. When the depth of the two fluid layers becomes very large (h→∞, h′→∞), the hyperbolic cotangents in the above formula approach the value of one. Then:

ω² = g k ( ρ − ρ′ ) / ( ρ + ρ′ ).

Nonlinear effects
Shallow water
Amplitude dispersion effects appear for instance in the solitary wave: a single hump of water traveling with constant velocity in shallow water with a horizontal bed. Note that solitary waves are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind.[18] The single soliton solution of the Korteweg–de Vries equation, of wave height H in water depth h far away from the wave crest, travels with the velocity:

c = √( g ( h + H ) ).

So for this nonlinear gravity wave it is the total water depth under the wave crest that determines the speed, with higher waves traveling faster than lower waves. Note that solitary wave solutions only exist for positive values of H, solitary gravity waves of depression do not exist.
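As a small numerical check of the solitary-wave speed formula above (an added illustration, not from the original article; the depth and wave height are example values):

import math

g, h, H = 9.81, 5.0, 1.0                 # gravity (m/s^2), water depth (m), solitary wave height (m)
c_soliton = math.sqrt(g * (h + H))       # nonlinear solitary-wave speed
c_linear = math.sqrt(g * h)              # linear shallow-water speed, for comparison
print(f"solitary wave: {c_soliton:.2f} m/s vs linear shallow-water: {c_linear:.2f} m/s")
# roughly 7.7 m/s versus 7.0 m/s: the higher wave travels faster, as stated above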
Deep water
The linear dispersion relation – unaffected by wave amplitude – is for nonlinear waves also correct at the second order of the perturbation theory expansion, with the orders in terms of the wave steepness k a (where a is wave amplitude). To the third order, and for deep water, the dispersion relation is[19]

ω² = g k [ 1 + ( k a )² ].

This implies that large waves travel faster than small ones of the same frequency. This is only noticeable when the wave steepness k a is large.

Waves on a mean current: Doppler shift
Water waves on a mean flow (so a wave in a moving medium) experience a Doppler shift. Suppose the dispersion relation for a non-moving medium is:

ω = Ω(k),

with k the wavenumber. Then for a medium with mean velocity vector V, the dispersion relationship with Doppler shift becomes:[20]

ω = Ω(k) + k • V,

where k is the wavenumber vector, related to k as: k = |k|. The dot product k • V is equal to: k • V = k V cos α, with V the length of the mean velocity vector V: V = |V|. And α the angle between the wave propagation direction and the mean flow direction. For waves and current in the same direction, k • V = k V.

See also
Dispersive water-wave models

1. Pond, S.; Pickard, G.L. (1978), Introductory dynamic oceanography, Pergamon Press, pp. 170–174, ISBN 978-0-08-021614-0
2. See Lamb (1994), §229, pp. 366–369.
3. See Whitham (1974), p. 11.
4. This dispersion relation is for a non-moving homogeneous medium, so in case of water waves for a constant water depth and no mean current.
5. See Phillips (1977), p. 37.
6. See e.g. Dingemans (1997), p. 43.
7. See Phillips (1977), p. 25.
8. Reynolds, O. (1877), "On the rate of progression of groups of waves and the rate at which energy is transmitted by waves", Nature, 16 (408): 343–44, Bibcode:1877Natur..16R.341, doi:10.1038/016341c0; Lord Rayleigh (J. W. Strutt) (1877), "On progressive waves", Proceedings of the London Mathematical Society, 9: 21–26, doi:10.1112/plms/s1-9.1.21. Reprinted as Appendix in: Theory of Sound 1, MacMillan, 2nd revised edition, 1894.
9. See Lamb (1994), §237, pp. 382–384.
10. See Dingemans (1997), section 2.1.2, pp. 46–50.
11. See Lamb (1994), §236, pp. 380–382.
12. Henderson, K. L.; Peregrine, D. H.; Dold, J. W. (1999), "Unsteady water wave modulations: fully nonlinear solutions and comparison with the nonlinear Schrödinger equation", Wave Motion, 29 (4): 341–361, doi:10.1016/S0165-2125(98)00045-6
13. See Phillips (1977), p. 102.
14. See Dean and Dalrymple (1991), page 65.
15. See Craik (2004).
16. See Lighthill (1978), pp. 224–225.
17. Turner, J. S. (1979), Buoyancy effects in fluids, Cambridge University Press, p. 18, ISBN 978-0521297264
18. See e.g.: Craig, W.; Guyenne, P.; Hammack, J.; Henderson, D.; Sulem, C. (2006), "Solitary water wave interactions", Physics of Fluids, 18 (57106): 057106, Bibcode:2006PhFl...18e7106C, doi:10.1063/1.2205916
19. See Lamb (1994), §250, pp. 417–420.
20. See Phillips (1977), p. 24.
• Mathematical aspects of dispersive waves are discussed on the Dispersive Wiki.
I am slightly confused about hybridisation and how it relates to molecular and atomic orbitals, despite having pored through many sources online. I was hoping someone could verify whether my current understanding is correct, in particular regarding what hybridisation actually is/does, because I have not read this explicitly but am assuming it is the case from what I have read so far:

• The Schrödinger equation can be solved to give the atomic orbitals (at least, some simplification involving the effective nuclear charge can be used to find the outer (and inner?) atomic orbitals).

• In hybridisation, we first consider the shape of a molecule and then we consider each atom separately and how we could combine the atomic orbitals so that the geometry about each atom is correct.

Now what I am particularly unsure about is: Are the hybridised orbitals entirely a conceptual construct, or are they mathematical solutions to the Schrödinger equation? I was thinking that maybe the hybrid atomic orbitals that are created are a linear combination of the atomic orbitals, and thus this linear combination also solves the Schrödinger equation and can exist, but would have a different energy to the individual atomic orbitals? I am finding it difficult to think how a linear combination of the atomic orbitals could produce something with a completely different shape though (although I suppose the orbitals are orientated and essentially vectorial, so I think it could work). Perhaps the hybrid atomic orbital form could also be, say, the product of the atomic orbital forms? Although then I do not think this would solve the Schrödinger equation anymore. But I am not sure. And anyway, all of these ideas assume that hybridisation has a mathematical basis in the Schrödinger equation, which I am not sure about at all!

• Then molecular orbital theory takes the orbitals of each atom (I think theoretically it takes every orbital in every atom?) in the molecule and combines them to form a molecular orbital for the whole molecule - no longer simply an overlap between two atoms. This certainly also solves the Schrödinger equation. From what I understand, theoretically the calculation to form MOs should involve only the AOs of each atom, but to simplify the situation sometimes hybrid atomic orbitals are used in MO construction? If so, surely the hybrid atomic orbitals must be a linear combination of the atomic orbitals that solve the Schrödinger equation, so that this can be used in molecular orbital theory, which is based in the solution to the Schrödinger equation? Also, where in the calculation are the coefficients for each atomic orbital arising? Is it from the boundary conditions, including things like molecule geometry, or perhaps that we know the energy of a molecule and when we put this particular energy into the Schrödinger equation it gives us the coefficients?

I apologise for the long post and I realise there seem to be many questions within here, however they are all linked and about the mathematical underpinnings of hybridisation/valence bond theory and molecular orbitals, so I thought they belonged in one post.

• When you say hybridization, are you referring to LCAO (linear combination of atomic orbitals)? Or are you referring to the hybridization taught in organic chemistry classes (where they only deal with the valence orbitals)? – CoffeeIsLife May 19 '17 at 19:42

• @QuantumAMERICCINO I think the latter (sp/sp2/sp3 mixing etc) - i.e.
the hybridisation where you want it to match up with the shape of the molecule – Meep May 19 '17 at 20:31

• An important point is that the total electron density described by the hybrid orbitals is the same as the density of the unhybridized orbitals. We're just carving that space up differently in terms of names, like drawing different boundaries on a map. The land isn't changed by the change in names. – Andrew Feb 12 '19 at 2:15

Mathematically, atomic/molecular orbitals are 1-electron wavefunctions (hydrogen-like wavefunctions) that are used as a basis with which the total N-electron wavefunction is expanded. The N-electron wavefunction is a determinant (or a linear combination of determinants) built from these 1-electron wavefunctions: an anti-symmetric (Fermi statistics) linear combination of Hartree products (usually a product of MOs expanded in a basis of AOs).

For simplicity's sake, let's assume we are interested only in 1-determinant wavefunctions. The Hartree-Fock method will (when used where HF is appropriate) result in a variationally-optimal 1-determinant wavefunction that is an energy eigenfunction of the applicable Hamiltonian operator. Not only is the final, self-consistently optimized HF total N-electron wavefunction an energy eigenfunction, but the individual HF canonical MOs are also eigenfunctions of the Hamiltonian/Fock operator, with eigenvalues that are the so-called 'orbital energies' (canonical orbitals are defined as the optimized orbitals which diagonalize the Fock matrix). However, there is nothing special about these canonical orbitals in the context of the total system energy. Any linear, norm-preserving transformation of the occupied orbital space will result in an N-electron wavefunction that is still an eigenfunction of the Fock operator, while the individual MOs are no longer eigenfunctions of the Fock operator (for example, localized MOs, LMOs).

So, once we have a HF wavefunction and optimized MOs, we can linearly transform the MOs however we want. Such a linear transformation might have the effect of producing orbitals that resemble hybridized orbitals. Or maybe we are interested in generating localized MOs. The point is, a linear transformation within the occupied orbital space results in a wavefunction that has the exact same electron density and energy eigenvalue as the wavefunction HF originally provides us in terms of canonical orbitals. Orbitals transformed to look like hybrid orbitals are on the exact same footing as any other choice of orbital.
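To make the "linear, norm-preserving transformation" point concrete, here is a standard textbook illustration (added as a sketch; it is not part of the original answer): the four sp³ hybrids are an orthogonal recombination of one s and three p orbitals on the same atom,

h_1 = \tfrac{1}{2}(s + p_x + p_y + p_z), \qquad h_2 = \tfrac{1}{2}(s + p_x - p_y - p_z),
h_3 = \tfrac{1}{2}(s - p_x + p_y - p_z), \qquad h_4 = \tfrac{1}{2}(s - p_x - p_y + p_z).

Because the 4x4 coefficient matrix is orthogonal, the hybrids remain orthonormal and

\sum_{i=1}^{4} |h_i|^2 = |s|^2 + |p_x|^2 + |p_y|^2 + |p_z|^2 ,

so the total electron density of a filled set is unchanged, which is exactly the sense (echoed in the comment above) in which hybridisation is a change of description rather than a change of the physics.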
Scientists solve half-century-old magnesium dimer mystery

[Figure: This graph shows the team's highly accurate ab initio calculations in red, dotted lines relative to the experimental LIF spectrum of Mg2, marked in black. Credit: Piecuch Lab]

Magnesium dimer (Mg2) is a fragile molecule consisting of two weakly interacting atoms held together by the laws of quantum mechanics. It has recently emerged as a potential probe for understanding fundamental phenomena at the intersection of chemistry and ultracold physics, but its use has been thwarted by a half-century-old enigma—five high-lying vibrational states that hold the key to understanding how the magnesium atoms interact but have eluded detection for 50 years.

The lowest fourteen Mg2 vibrational states were discovered in the 1970s, but both early and recent experiments should have observed a total of nineteen states. Like a quantum cold case, experimental efforts to find the last five failed, and Mg2 was almost forgotten. Until now.

Piotr Piecuch, Michigan State University Distinguished Professor and MSU Foundation Professor of chemistry, along with College of Natural Science Department of Chemistry graduate students Stephen H. Yuwono and Ilias Magoulas, developed new, computationally derived evidence that not only made a quantum leap in first-principles quantum chemistry, but finally solved the 50-year-old Mg2 mystery. Their findings were recently published in the journal Science Advances.

"Our thorough investigation of the magnesium dimer unambiguously confirms the existence of 19 vibrational levels," said Piecuch, whose research group has been active in quantum chemistry and physics for more than 20 years. "By accurately computing the ground- and excited-state potential energy curves, the transition dipole moment function between them and the rovibrational states, we not only reproduced the latest laser-induced fluorescence (LIF) spectra, but we also provided guidance for the future experimental detection of the previously unresolved levels."

So why were Piecuch and his team able to succeed where others had failed for so many years? The persistence of Yuwono and Magoulas certainly revived interest in the Mg2 case, but the answer lies in the team's brilliant demonstration of the predictive power of modern electronic structure methodologies, which came to the rescue when experiments encountered insurmountable difficulties.

"The presence of collisional lines originating from one molecule hitting another and the background noise muddied the experimentally observed LIF spectra," Piecuch explained. "To make matters worse, the elusive high-lying vibrational states of Mg2 that baffled scientists for decades dissipate into thin air when the molecule starts rotating."

[Figure: The missing, high-lying vibrational states of Mg2 are clearly visible here as computationally derived red lines. Experiments were unable to detect these vibrations—a decades-old enigma the MSU team finally solved. Credit: Piecuch Lab]

Instead of running costly experiments, Piecuch and his team developed efficient computational strategies that simulated those experiments, and they did it better than anyone had before. Like the quantized vibrational states of Mg2, in-between approximations were not acceptable. They solved the electronic and nuclear Schrödinger equations, tenets of quantum physics that describe molecular motions, with almost complete accuracy.
“The majority of calculations in our field do not require the high accuracy levels we had to reach in our study and often resort to less expensive computational models, but we provided compelling evidence that this would not work here,” Piecuch said. “We had to consider every conceivable physical effect and understand the consequences of neglecting even the tiniest details when solving the quantum mechanical equations.” Their calculations reproduced the experimentally derived vibrational and rotational motions of Mg2 and the observed LIF spectra with remarkable precision—on the order of 1 cm-1, to be exact. This provided the researchers with confidence that their predictions regarding the magnesium dimer, including the existence of the elusive high-lying vibrational states, were firm. Yuwono and Magoulas were clearly excited about the groundbreaking project, but emphasized they had initial doubts whether the team would be successful. “In the beginning, we were not even sure if we could pull this investigation off, especially considering the number of electrons in the magnesium dimer and the extreme accuracies required by our state-of-the-art computations,” said Magoulas, who has worked in Piecuch’s research group for more than four years and teaches senior level quantum chemistry courses at MSU. “The computational resources we had to throw at the project and the amount of data we had to process were immense—much larger than all of my previous computations combined,” added Yuwono, who also teaches physical chemistry courses at MSU and has worked in Piecuch’s research group since 2017. The case of the high-lying vibrational states of Mg2 that evaded scientists for half a century is finally closed, but the details of the computations that cracked it are completely open and accessible on the Science Advances website. Yuwono, Magoulas, and Piecuch hope that their computations will inspire new experimental studies. “Quantum mechanics is a beautiful mathematical theory with a potential of explaining the intimate details of molecular and other microscopic phenomena,” Piecuch said. “We used the Mg2 mystery as an opportunity to demonstrate that the predictive power of modern computational methodologies based on first-principles quantum mechanics is no longer limited to small, few-electron species.”
Density matrix

A density matrix is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states. This should be contrasted with a single state vector that describes a quantum system in a pure state. The density matrix is the quantum-mechanical analogue to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics.

Explicitly, suppose a quantum system may be found in state | \psi_1 \rangle with probability p1, or it may be found in state | \psi_2 \rangle with probability p2, or it may be found in state | \psi_3 \rangle with probability p3, and so on. The density operator for this system is[1]

\hat\rho = \sum_i p_i |\psi_i \rangle \langle \psi_i|,

where \{|\psi_i\rangle\} need not be orthogonal and \sum_i p_i=1. By choosing an orthonormal basis \{|u_m\rangle\}, one may resolve the density operator into the density matrix, whose elements are[1]

\rho_{mn} = \sum_i p_i \langle u_m | \psi_i \rangle \langle \psi_i | u_n \rangle = \langle u_m |\hat \rho | u_n \rangle.

The density operator can also be defined in terms of the density matrix,

\hat\rho = \sum_{mn} |u_m\rangle \rho_{mn} \langle u_n| .

For an operator \hat A (which describes an observable A of the system), the expectation value \langle A \rangle is given by[1]

\langle A \rangle = \sum_i p_i \langle \psi_i | \hat{A} | \psi_i \rangle = \sum_{mn} \langle u_m | \hat\rho | u_n \rangle \langle u_n | \hat{A} | u_m \rangle = \sum_{mn} \rho_{mn} A_{nm} = \operatorname{tr}(\rho A).

In words, the expectation value of A for the mixed state is the sum of the expectation values of A for each of the pure states |\psi_i\rangle weighted by the probabilities pi, and can be computed as the trace of the product of the density matrix with the matrix representation of A in the same basis.

Mixed states arise in situations where the experimenter does not know which particular states are being manipulated. Examples include a system in thermal equilibrium (or additionally chemical equilibrium) or a system with an uncertain or randomly varying preparation history (so one does not know which pure state the system is in). Also, if a quantum system has two or more subsystems that are entangled, then each subsystem must be treated as a mixed state even if the complete system is in a pure state.[2] The density matrix is also a crucial tool in quantum decoherence theory.

The density matrix is a representation of a linear operator called the density operator. The close relationship between matrices and operators is a basic concept in linear algebra. In practice, the terms density matrix and density operator are often used interchangeably. Both matrix and operator are self-adjoint (or Hermitian), positive semi-definite, of trace one, and may be infinite-dimensional.[3] The formalism was introduced by John von Neumann[4] in 1927 and independently, but less systematically, by Lev Landau[5] and Felix Bloch[6] in 1927 and 1946 respectively.

Contents
1 Pure and mixed states
1.1 Example: Light polarization
1.2 Mathematical description
2 Formulation
3 Measurement
4 Entropy
5 The Von Neumann equation for time evolution
6 "Quantum Liouville", Moyal's equation
7 Composite Systems
8 C*-algebraic formulation of states
9 See also
10 Notes and references

Pure and mixed states
For a pure state, tr(\rho^2) = 1, while for a mixed state tr(\rho^2) < 1.
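As a minimal numerical sketch of the purity criterion just stated (an added illustration, not part of the original article; the two states below are arbitrary examples):

import numpy as np

# Pure state |psi> = (|0> + |1>)/sqrt(2) versus an equal classical mixture of |0> and |1>.
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(psi, psi.conj())                              # |psi><psi|
rho_mixed = 0.5 * np.outer([1, 0], [1, 0]) + 0.5 * np.outer([0, 1], [0, 1])

for name, rho in (("pure", rho_pure), ("mixed", rho_mixed)):
    print(name, "tr(rho) =", round(np.trace(rho).real, 3),
          "tr(rho^2) =", round(np.trace(rho @ rho).real, 3))
# pure:  tr(rho) = 1.0, tr(rho^2) = 1.0
# mixed: tr(rho) = 1.0, tr(rho^2) = 0.5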
Example: Light polarization
The incandescent light bulb (1) emits completely randomly polarized photons (2) with mixed state density matrix

\begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \\ \end{bmatrix} .

After passing through a vertical plane polarizer (3), the remaining photons are all vertically polarized (4) and have the pure state density matrix

\begin{bmatrix} 1 & 0 \\ 0 & 0 \\ \end{bmatrix} .

Mathematical description
The state vector | \psi \rangle of a pure state completely determines the statistical behavior of a measurement. For concreteness, take an observable quantity, and let A be the associated observable operator that has a representation on the Hilbert space \mathcal{H} of the quantum system. For any real-valued, analytical function F defined on the real numbers,[7] suppose that F(A) is the result of applying F to the outcome of a measurement. The expectation value of F(A) is

\operatorname{tr}(\rho \, F(A)),

where the operator ρ is the density operator of the mixed system. A simple calculation shows that the operator ρ for the above discussion is given by

\rho = \sum_i p_i |\psi_i \rangle \langle \psi_i|.

For the above example of unpolarized light, the density operator is

\rho = \tfrac{1}{2} | R \rangle \langle R | + \tfrac{1}{2} | L \rangle \langle L |.

For a finite-dimensional function space, the most general density operator is of the form

\rho = U D U^\dagger,

i.e., U is unitary and D is diagonal with non-negative entries that sum to one.

In operator language, a density operator is a positive semidefinite, hermitian operator of trace 1 acting on the state space.[8] A density operator describes a pure state if it is a rank one projection. Equivalently, a density operator ρ is a pure state if and only if \; \rho = \rho^2, i.e. the state is idempotent. This is true regardless of whether H is finite-dimensional or not.

Geometrically, when the state is not expressible as a convex combination of other states, it is a pure state.[9] The family of mixed states is a convex set and a state is pure if it is an extremal point of that set. It follows from the spectral theorem for compact self-adjoint operators that every mixed state is a finite convex combination of pure states. This representation is not unique. Furthermore, a theorem of Andrew Gleason states that certain functions defined on the family of projections and taking values in [0,1] (which can be regarded as quantum analogues of probability measures) are determined by unique mixed states. See quantum logic for more details.

For a measured observable A,

\lang A \rang = \sum_j p_j \lang \psi_j|A|\psi_j \rang = \sum_j p_j \operatorname{tr}\left(|\psi_j \rang \lang \psi_j|A \right) = \sum_j \operatorname{tr}\left(p_j |\psi_j \rang \lang \psi_j|A\right) = \operatorname{tr}\left(\sum_j p_j |\psi_j \rang \lang \psi_j|A\right) = \operatorname{tr}(\rho A),

where \operatorname{tr} denotes trace. Moreover, if A has spectral resolution A = \sum_i a_i P_i, the state after a non-selective projective measurement is

\; \rho ' = \sum_i P_i \rho P_i.

This is true assuming that \textstyle |a_i\rang is the only eigenket (up to phase) with eigenvalue ai; more generally, Pi in this expression would be replaced by the projection operator into the eigenspace corresponding to eigenvalue ai.

The von Neumann entropy S of a mixture can be expressed in terms of the eigenvalues of \rho or in terms of the trace and logarithm of the density operator \rho. Since \rho is a positive semi-definite operator, it has a spectral decomposition such that \rho= \sum_i \lambda_i |\varphi_i\rangle\langle\varphi_i| where |\varphi_i\rangle are orthonormal vectors, \lambda_i> 0 and \sum \lambda_i = 1.
Then the entropy of a quantum system with density matrix \rho is

S = -\sum_i \lambda_i \ln \lambda_i = -\operatorname{tr}(\rho \ln \rho).

Also it can be shown that projective measurements cannot decrease the entropy, i.e. S\left(\sum_i P_i \rho P_i\right) \ge S(\rho).[10]

The Von Neumann equation for time evolution

Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville-von Neumann equation) describes how a density operator evolves in time (in fact, the two equations are equivalent, in the sense that either can be derived from the other). The von Neumann equation dictates that[12][13]

i \hbar \frac{\partial \rho}{\partial t} = [H, \rho] ,

where the brackets denote a commutator. (Compare the Heisenberg-picture equation of motion for an operator, which carries the opposite sign:

i \hbar \frac{dA^{(H)}}{dt}=-[H,A^{(H)}] ~. )

"Quantum Liouville", Moyal's equation

Composite Systems

C*-algebraic formulation of states

It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable.[14][15] For this reason, observables are identified with elements of an abstract C*-algebra A (that is, one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces which realize A as a subalgebra of operators. The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.

See also

Notes and references

1. ^ a b c Sakurai, J., Modern Quantum Mechanics (2nd ed.), p. 181.
2. ^ Hall, B. C. (2013), Quantum Theory for Mathematicians, p. 419.
3. ^
4. ^
5. ^ Schlüter, Michael and Lu Jeu Sham (1982), "Density functional theory", Physics Today 35 (2): 36.
6. ^ Ugo Fano (June 1995), "Density matrices as polarization vectors", Rendiconti Lincei 6 (2): 123–130.
7. ^ Technically, F must be a Borel function.
8. ^ Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer, p. 423.
9. ^ Hall, B. C. (2013), Quantum Theory for Mathematicians, Springer, p. 439.
10. ^ Nielsen, Michael; Chuang, Isaac (2000), Quantum Computation and Quantum Information. Chapter 11: Entropy and information, Theorem 11.9, "Projective measurements cannot decrease entropy".
11. ^
12. ^ Breuer, Heinz; Petruccione, Francesco (2002), The Theory of Open Quantum Systems, p. 110.
13. ^ Schwabl, Franz (2002), Statistical Mechanics, p. 16.
14. ^ See appendix.
15. ^ Emch, Gerard G. (1972), Algebraic Methods in Statistical Mechanics and Quantum Field Theory.
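The entropy formula, the measurement rule from the previous section, and the von Neumann evolution can all be checked numerically. Below is a small illustrative numpy/scipy sketch (not part of the original article; the two-level state, observable, and Hamiltonian are made up). It computes S from the eigenvalues of ρ and from -tr(ρ ln ρ), verifies that a projective measurement ρ' = Σ_i P_i ρ P_i does not decrease the entropy, and confirms that unitary evolution generated by iħ ∂ρ/∂t = [H, ρ] leaves the entropy unchanged.

import numpy as np
from scipy.linalg import expm, logm

hbar = 1.0                                      # natural units for this illustration

rho = np.array([[0.65, 0.35],
                [0.35, 0.35]], dtype=complex)   # a made-up mixed state

def entropy(rho):
    """von Neumann entropy S = -sum_i lambda_i ln(lambda_i) = -tr(rho ln rho)."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                      # 0 ln 0 is taken as 0
    return float(-np.sum(lam * np.log(lam)))

print(entropy(rho))                             # from the eigenvalues of rho
print(float(-np.trace(rho @ logm(rho)).real))   # from -tr(rho ln rho), same number

# Projective measurement of a made-up observable: rho' = sum_i P_i rho P_i
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
_, vecs = np.linalg.eigh(A)
projectors = [np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(2)]
rho_meas = sum(P @ rho @ P for P in projectors)
print(entropy(rho_meas) >= entropy(rho) - 1e-12)    # True: S cannot decrease

# Unitary (von Neumann) evolution rho(t) = U rho U^dagger with U = exp(-i H t / hbar)
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)
U = expm(-1j * H * 0.7 / hbar)
rho_t = U @ rho @ U.conj().T
print(abs(entropy(rho_t) - entropy(rho)) < 1e-10)   # True: S is conserved under unitaries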
First Principles in expressions Physicists seek to discover first principles of the manifest universe. Why? These principles, if identified, would reflect the fundamental nature of reality and represent the origins of natural law and process. Relativity theory takes us to an essential singularity where matter is crushed into a point of infinite curvature and gravity—which produced the Big Bang. Quantum theory takes us to a state where the universe exists as a non-local foamy cloud of mere “tendencies to exist.” String theory takes us back to a multidimensional state where nothing exists but energetic strings, membranes and blobs (of something or other). Religion teaches that fundamental reality takes us to an Infinite God. Scientist/theologian Emanuel Swedenborg supported the latter, but applied scientific reasoning to his theistic position. Like some of today’s cutting edge thinkers he anticipated that the concept of causality could have its basis in a reality freed-up from involvement with time and locality. Swedenborg’s model of reality embraced a dynamical nexus between Divine (God’s) order and temporal order. In this model, various states of God’s love flow into (descend) into boundary conditions with increased constraints until finally finding expression in the spatio-temporal arena. This means that the physical universe and its laws must be expressions or analogs of spiritual laws and God’s essential character. Swedenborg called these causal links between God and physical nature the science of correspondences. To give you a simple example of this top-down causal relationship between spiritual (non-local) and physical terms, we can look at our mundane everyday expressions and language. The word “seeing” has its mental analog in the word “understanding” which in turn has its Divine analog in God’s Infinite “foresight” and “providence.” In each case the expression is self-similar (corresponds) but becomes less local and physical and more universal as it moves up the hierarchical ladder. This self-similarity allows linkage for God to act in the world. In a top-down causal scheme of reality, all God’s qualities represent first principles—and that which is responsible for the patterning principles and dynamics of the whole multi-tiered system that follows. Heaven’s angels live in a non-physical realm and are cognitive of the first principles that are contained within all human expression. Every idea or concept that comes to an angelic being’s perception is immediately transformed into its “higher” equivalent or corresponding mental and spiritual quality. The significance of this is that angels (and specially enlightened humans) perceive deeper levels of meaning within the narratives of Holy Scripture. According to Swedenborg, not only were angels able to apply new degrees of freedom to language but from this loftier viewpoint they could also identify patterns of lawful process and order within the sacred scaffolding and architecture of Scripture. In other words, God’s Holy Word could be studied as a multidimensional and scientific document with the potential of leading physicists to formulate a causal theory from a non-physical (pre-geometric but holy) matrix. This is a game-changer! In order to explain these ideas in greater detail, I have just completed a book entitled Proving God. It is now available on Amazon. 
Using Swedenborg to Understand the Quantum World III: Thoughts and Forms Swedenborg Foundation By Ian Thompson, PhD, Nuclear Physicist at the Lawrence Livermore National Laboratory In this series of posts, Swedenborg’s theory of correspondences has been shown to have interesting applications for helping us to better understand the quantum world. In part I, we learned that our mental processes occur at variable finite intervals and that they consist of desire, or love, acting by means of thoughts and intentions to produce physical effects. We in turn came to see the correspondential relationship between these mental events and such physical events that occur on a quantum level: in both cases, there will be time gaps between the events leading up to the physical outcome. So since we find that physical events occur in finite steps rather than continuously, we are led to expect a quantum world rather than a world described by classical physics. In part II, we saw that the main similarity between desire (mental) and energy (physical) is that they both persist between events, which means that they are substances and therefore have the capability, or disposition, for action or interaction within the time gaps between those events. Now we come to the question of how it is that these substances persist during the intervals between events. The events are the actual selection of what happens, so after the causing event and before the resultant effect, what occurs is the exploring of “possibilities for what might happen.” With regard to our mental processes, this exploration of possibilities is what we recognize as thinking. Swedenborg explains in detail how this very process of thinking is the way love gets ready to do things (rather than love being a byproduct of the thinking process, as Descartes would require): Everyone sees that discernment is the vessel of wisdom, but not many see that volition is the vessel of love. This is because our volition does nothing by itself, but acts through our discernment. It first branches off into a desire and vanishes in doing so, and a desire is noticeable only through a kind of unconscious pleasure in thinking, talking, and acting. We can still see that love is the source because we all intend what we love and do not intend what we do not love. (Divine Love and Wisdom §364) When we realize we want something, the next step is to work out how to do it. We first think of the specific objective and then of all the intermediate steps to be taken in order to achieve it. We may also think about alternative steps and the pros and cons of following those different routes. In short, thinking is the exploration of “possibilities for action.” As all of this thinking speaks very clearly to the specific objective at hand, it can be seen as supporting our motivating love, which is one of the primary functions of thought. A focused thinking process such as this can be seen, simplified, in many kinds of animal activities. With humans, however, thinking goes beyond that tight role of supporting love and develops a scope of its own. Not only do our thoughts explore possibilities for action, but they also explore the more abstract “possibilities for those possibilities.” Not only do we think about how to get a drink, but we also, for example, think about the size of the container, how much liquid it contains, and how far it is from where we are at that moment! 
When we get into such details as volume and distance, we discover that mathematics is the exploration of “possibilities of all kinds,” whether they are possibilities for action or not. So taken as a whole, thought is the exploration of all the many possibilities in the world, whether or not they are for action and even whether or not they are for actual things. For physical things (material objects), this exploration of possibilities is spreading over the possible places and times for interactions or selections. Here, quantum physics has done a whole lot of work already. Physicists have discovered that the possibilities for physical interactions are best described by the wave function of quantum mechanics. The wave function describes all the events that are possible, as well as all the propensities and probabilities for those events to happen. According to German physicist Max Born, the probability of an event in a particular region can be determined by an integral property of the wave function over that region. Energy is the substance that persists between physical events, and all physical processes are driven by energy. In quantum mechanics, this energy is what is responsible for making the wave function change through time, as formulated by the Schrödinger equation, which is the fundamental equation of quantum physics.[1] Returning now to Swedenborg’s theory of correspondences, we recognize that the something physical like thoughts in the mind are the shapes of wave functions in quantum physics. In Swedenborg’s own words: When I have been thinking, the material ideas in my thought have presented themselves so to speak in the middle of a wave-like motion. I have noticed that the wave was made up of nothing other than such ideas as had become attached to the particular matter in my memory that I was thinking about, and that a person’s entire thought is seen by spirits in this way. But nothing else enters that person’s awareness then apart from what is in the middle which has presented itself as a material idea. I have likened that wave round about to spiritual wings which serve to raise the particular matter the person is thinking about up out of his memory. And in this way the person becomes aware of that matter. The surrounding material in which the wave-like motion takes place contained countless things that harmonized with the matter I was thinking about. (Arcana Coelestia §6200)[2] Many people who have tried to understand the significance of quantum physics have noted that the wave function could be described as behaving like a non-spatial realm of consciousness. Some of these people have even wanted to say that the quantum wave function is a realm of consciousness, that physics has revealed the role of consciousness in the world, or that physics has discovered quantum consciousness.[3] However, using Swedenborg’s ideas to guide us, we can see that the wave function in physics corresponds to the thoughts in our consciousness. They have similar roles in the making of events: both thoughts and wave functions explore the “possibilities, propensities, and probabilities for action.” They are not the same, but they instead follow similar patterns and have similar functions within their respective realms. Thoughts are the way that desire explores the possibilities for the making of intentions and their related physical outcomes, and wave functions are the way that energy explores the possibilities for the making of physical events on a quantum level. 
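To make the physics side of this correspondence concrete: Born's rule says that the probability of finding a particle in a region is obtained by integrating |ψ|² over that region. The short Python sketch below is purely illustrative (it is not part of Thompson's post; the Gaussian wave packet is an invented example) and simply evaluates such an integral numerically.

import numpy as np

# A made-up one-dimensional Gaussian wave packet, psi(x) ~ exp(-x^2 / (4 sigma^2))
sigma = 1.0
x = np.linspace(-10, 10, 4001)
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2))

# Born's rule: the probability of finding the particle in a region is the
# integral of |psi|^2 over that region.
prob_density = np.abs(psi) ** 2
total = np.trapz(prob_density, x)                                   # ~ 1 (normalization)
region = (x > 0) & (x < 2)
in_region = np.trapz(prob_density[region], x[region])
print(total, in_region)   # about 1.0 and about 0.477 for this packet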
The philosophers of physics have been puzzled for a long time about the substance of physical things,[4] especially that of things in the quantum realm. From our discussion here, we see that energy (or propensity) is also the substance of physical things in the quantum realm and that the wave function, then, is the form that such a quantum substance takes. The wave function describes the shape of energy (or propensity) in space and time. We can recognize, as Aristotle first did, that a substantial change has occurred when a substance comes into existence by virtue of the matter of that substance acquiring some form.[5] That still applies to quantum mechanics, we now find, even though many philosophers have been desperately constructing more extreme ideas to try to understand quantum objects, such as relationalism[6] or the many-worlds interpretation.[7] So what, then, is this matter of energy (desire, or love)? Is it from the Divine? Swedenborg would say as much: It is because the very essence of the Divine is love and wisdom that we have two abilities of life. From the one we get our discernment, and from the other volition. Our discernment is supplied entirely by an inflow of wisdom from God, while our volition is supplied entirely by an inflow of love from God. Our failures to be appropriately wise and appropriately loving do not take these abilities away from us. They only close them off; and as long as they do, while we may call our discernment “discernment” and our volition “volition,” essentially they are not. So if these abilities really were taken away from us, everything human about us would be destroyed—our thinking and the speech that results from thought, and our purposing and the actions that result from purpose. We can see from this that the divine nature within us dwells in these two abilities, in our ability to be wise and our ability to love. (Divine Love and Wisdom §30) When seeing things as made from substance—from the energy (or desire) that endures between events and thereby creates further events—we note that people will tend to speculate about “pure love” or “pure energy”: a love or energy without form that has no particular objective but can be used for anything. But this cannot be. In physics, there never exists any such pure energy but only energy in specific forms, such as the quantum particles described by a wave function. Any existing physical energy must be the propensity for specific kinds of interactions, since it must exist in some form. Similarly, there never exists a thing called “pure love.” The expression “pure love” makes sense only with respect to the idea of innocent, or undefiled, love, not to love without an object. Remember that “our volition [which is the vessel of love] does nothing by itself, but acts through our discernment.” Ian Thompson is also the author of Starting Science from Godas well as Nuclear Reactions in Astrophysics (Univ. of Cambridge Press) and more than two hundred refereed professional articles in nuclear physics. [1] Wikipedia,ödinger_equation. [2] Secrets of Heaven is the New Century Edition translation of Swedenborg’s Arcana Coelestia. [3] See, for example, [4] Howard Robinson, “Substance,” Stanford Encyclopedia of Philosophy, [5] Thomas Ainsworth, “Form vs. Matter,” Stanford Encyclopedia of Philosophy, [6] Michael Epperson, “Quantum Mechanics and Relational Realism: Logical Causality and Wave Function Collapse,” Process Studies 38.2 (2009): 339–366. [7] J. A. Barrett, “Quantum Worlds,” Principia 20.1 (2016): 45–60. 
Read more posts from the Scholars on Swedenborg series > The title for this blog post is actually the title I use for chapter four in my upcoming book Proving God. The purpose of this adventuresome book is to unify science and theology—in a way that will offer new insights to both the New Paradigm science (relativity theory, quantum theory and string theory) and biblical interpretation (exegesis). My aim is to offer novel and rational ideas that can be applied toward measurable social transformation. The basic material for this book comes from my 35-year study of the remarkable ideas of scientist/theologian Emanuel Swedenborg. His ideas are remarkable because he claimed that LOVE was fundamental reality (esse) and the a priori law-giving universal substance by which creation comes forth through orderly causal process. Of course, such a premise—that the physics ruling the universe on the fundamental level is something we usually associate with a human emotion such as romance or a value like empathy—would obviously have dubious merit among the proponents of the natural sciences. So to add potency to my book I decided to apply this psychical dynamic to the toughest problems facing physicists today. One of the toughest challenges facing today's scientists is QUANTUM GRAVITY. Without a solution, scientists will be unable to unify the laws of the universe. String theory attempts to offer a solution but it never ventures beyond physical explanations and remains unproven. Is there any precedence for a physicist to suspect that non-material values, such as justice, ethics, morality, or empathy should enter into the equation of describing fundamental reality? Yes! But this is not your father's (patriarchal) physics and would equally embrace the feminine worldview. Not long ago someone (who remains unknown) came upon my blog from a link to a most interesting site. I "clicked" on the link and an article entitled Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity appeared. The article was written (in 1994) by Professor of Physics, Alan D. Sokal, at the Department of Physics, New York University (NYU). Not too shabby! You can read it here: Sokal described his paper as a "subversive undertaking" because it challenged the scientific community that the very foundation of their worldview must be rebuilt on the principle of social ideology—otherwise it could not be considered legitimately postmodern. He offered no final answers but simply as an "idea starter," suggested that a final science would have to be in line with an "emancipatory" and "ecological" (holistic) perspective capable of transforming society in positive ways. That is precisely what my book attempts—to show that the ultimate laws of nature are the same as the laws of mutual love! Mutual love is the essence of social ideology and will redefine the content of science—even leading us to a plausible scientific theory of Quantum Gravity. So Swedenborg was way ahead of his time! And my book will present all the startling evidence. P.S. I have just been informed by a physicist friend that I fell for a hoax concerning the Sokal article. (See the Sokal affair.) Rather than remove the post I will keep it as is. My book is not a hoax and even if I fell into a trap (because I trust people) the Sokal article is based on a real premise – a premise that it attempts to make fun of – that love is the ultimate science. So perhaps the laugh is on me for now.
It doesn’t hurt my relationship with my readers that I can laugh at myself and embrace a little humility. But I am quite amused that the hoax itself is based on the real direction science must ultimately take – fundamental reality is psychical not physical! Certainly it is no hoax that the issue of Quantum Gravity has not been solved. And, my book “Proving God” indeed offers novel ways of approaching this elusive topic that are anything but superficial. The Sokal article offers no solutions to quantum gravity anyway. My book does! I was only fooled into thinking a scientist was interested in expanding science to include VALUES. There are many, many serious scientists attempting such a challenge. This has been a hoot for me and I am going to enjoy it! So please have a laugh on me as well. Posted onby Order and Disorder New Christian Bible StudyNew Christian Bible Study By Mr. Joseph S. David ← Previous   Next → If you look at the universe, containing so many stars and galaxies, extending so far, and containing so much sustaining energy, you can sense the overwhelming order that enables it to provide for solar systems and planets that surely provide other places for life to thrive. If you learn a little about the physics of matter, where scientists are continuing to postulate building blocks of larger particles and the various forces that bind them at close distances or operate over astronomic distances, you can see not only a mind-boggling complexity, but also a serene order that keeps everything operating as it should and has been doing so for billions of years. This order comes from God, the creator. His infinite Divine love, as it gets finited and sublimated, is the only real thing, the only thing that exists in itself. Everything else, time and space, elements and physical matter, living creatures and human consciousness, all natural and spiritual entities come from Him. This flowing down, so to speak, is the order that exists, and it is dominant. The Lord’s order is not only concerned with natural creation but with mankind’s spiritual welfare as well. Mankind is different from all other living creatures because all other living creatures can live only in the order in which they are created, whereas people can deviate from their order because they are free. The short form of the order for mankind is to “love the Lord with all the heart… and thy neighbor as thyself”. Spiritually speaking, people did so until, as told in the parable of the garden of Eden, we ate of the forbidden tree. From that time we took upon ourselves the decisions for what is good and what is evil, and forgot what the Lord said. As these decisions multiplied, we turned more and more to ourselves and away from the Lord, and our heredity was passed on generation after generation, slowly getting worse. Order was replaced by disorder. Eventually our heredity was turned upside down so that the order of our lives in our heredity was completely backwards, and we are now born with the will to love ourselves rather than the Lord and our neighbor. However, because we have rational minds, we can be taught and trained to turn our minds right side up again, putting a love to the Lord and the neighbor ahead of loves of self and the world. A written Word or Sacred Scripture was, in time, given to guide us. The Lord is also always sending love and wisdom into all of us, to influence and help us, but we are all free to reject that help if we choose. 
People who read the Word, listen to its teaching, and accept the Lord’s help, are trying to live according to Divine order. (References: Arcana Coelestia 1055, 1475, 8513, 9736; Arcana Coelestia 10659 [4]; Arcana Coelestia 4839 [2]; Brief Exposition of Doctrine 52; Divine Providence 279 [5]; Heaven and Hell 315, 523; True Christian Religion 53, 54) Starting Science From God My photo New book: Starting Science from God. Image may contain: sky, outdoor, nature and text Site Map An integration of science and religious theism into a science of theism (theistic science), in which both sides keep their strengths, and are firmly and logically linked together. starting science from God Unique explanatory advantages of this book: • Principles in more detail: • Describes an honest, welcoming and living theism • Prediction of relations between the mental and the physical • That some formal modeling is possible within this scientific theism About Me Online Course Rational Scientific Theories from Theism Approaches through Physics, Biology, Psychology, Philosophy, Spirituality, Religion , Theology to Beginning to see Basic Principles with Consequences for Nature,   Evolution,   Mind,   Bible Meanings  and  Dualism. Blind faith – Science or religion? Spiritual Questions & Answers Discovering inner health and transformation Blind faith of scientists who deny a purposive life source Blind faith of creationists (Helen Brown Do spiritual symbols mean anything today?) Blind faith in scientific theories limited by naturalistic assumptions I notice that likewise some scientists claim that random processes created human Blind faith due to arrogance Copyright 2011 Stephen Russell-Lacy Author of  Heart, Head & Hands  Swedenborg’s perspective on emotional problems Using Swedenborg to Understand the Quantum World I: Events Swedenborg Foundation By Ian Thompson, PhD, Nuclear Physicist at the Lawrence Livermore National LaboratoryFor the last hundred years, physicists have been using the quantum theory about the universe, but they still do not properly understand of what the quantum world is made. The previous physics (referred to as “classical” and started by Isaac Newton) used ideas of “waves” and “particles” to picture what makes up the physical world. But now we find that every object in the quantum world sometimes behaves as a particle and sometimes behaves as a wave! Which is it? In quantum physics, objects behave most of the time like waves spreading out as they travel along, but sometimes measurements show objects to be particles with a definite location: not spread out at all. Why is that? It is as though their size and location suddenly change in measurement events. This is quite unlike classical physics, where particles exist continuously with the same fixed shape. In quantum physics, by contrast, objects have fixed locations only intermittently, such as when they are observed.  So they only offer us a discrete series of events that can be measured, not a continuous trajectory. Quantum objects, then, are alternately continuous and discontinuous. Why would we ever expect such a fickle world? Emanuel Swedenborg (1688–1772) has some ideas that might help us. He describes how all physical processes are produced by something mental, or spiritual, and this can be confirmed by reason of the similarity in patterns between the physical processes and their mental causes. 
In Swedenborg’s words, there are correspondences between the physical and the mental—that they have similar structures and functions, even though mind and matter are quite distinct. I need to state what correspondence is. The whole natural world is responsive to the spiritual world—the natural world not just in general, but in detail. So whatever arises in the natural world out of the spiritual one is called “something that corresponds.” It needs to be realized that the natural world arises from and is sustained in being by the spiritual world . . . (Heaven and Hell §89) Although these ideas are not part of present-day science, I still hope to show below that they may have some implications for how science could usefully develop. Swedenborg’s theory of mind is easy to begin to understand. He talks about how all mental processes have three common elements: desire, thought, and action. The desire is what persists and motivates what will happen. The thought is the exploration of possibilities for actions and the making of an intention. The action is the determined intention, the product of desire and thought that results in an actual physical event. The [actions] themselves are in the mind’s enjoyments and their thoughts when the delights are of the will and the thoughts are of the understanding therefrom, thus when there is complete agreement in the mind. The [actions] then belong to the spirit, and even if they do not enter into bodily act still they are as if in the act when there is agreement. (Divine Providence §108) All of the three spiritual elements are essential. Without desire (love), or ends, nothing would be motivated to occur. Without thought, that love would be blind and mostly fail to cause what it wants. Without determined intention, both the love and thought would be frustrated and fruitless, with no effect achieved at all. In everyday life, this intention is commonly called will, but it is always produced by some desire driving everything that happens. Here is the pattern:       Spiritual                                                                   Natural Desire + Thought Mental Action (Intention)  Physical Action, or Event, in the World Swedenborg summarizes the relationship between these elements as follows: All activities in the universe proceed from ends through causes into effects. These three elements are in themselves indivisible, although they appear as distinct in idea and thought. Still, even then, unless the effect that is intended is seen at the same time, the end is not anything; nor is either of these anything without a cause to sustain, foster and conjoin them. Such a sequence is engraved on every person, in general and in every particular, just as will, intellect, and action is. Every end there has to do with the will, every cause with the intellect, and every effect with action. (Conjugial Love §400:1–2) Now consider Swedenborg’s theory of correspondences mentioned above. He says that there is a similar pattern between the details of the effects and the details of the causes. ”As above, so below,” others have said. So if mental action produces some effect in the physical world, then, by correspondence, we would expect a similar pattern between that physical effect and each of the three elements common to all mental processes. We would expect something physical like desire, then something physical like thought, and finally something physical like mental action. Do we recognize these patterns in physics? 
And if so, do we recognize them better in classical physics or in quantum physics? I claim we do recognize them in physics: • We recognize the “something physical like desire” as energy or propensity. These are what persist physically and produce the result, just like desire does in the mind. They are in both classical and quantum physics. • We recognize the “something physical like thought” as the wave function in quantum physics. This describes all the possibilities, propensities, and probabilities for physical events, just like thought does in the mind. • We recognize the “something physical like mental action” as the actual specific physical outcome, a selection of just one of the possibilities to be made actual. This is a measurement event in quantum physics, the product of energy or propensity and the wave function, just like the product of desire and thought is the mental action. We will discuss energy and wave functions in later posts, focusing here on the final step of mental actions and physical events. According to Swedenborg’s ideas, the structure of mental processes and the structure of physical events should be similar. So, too, the function of mental processes and the function of physical events should be similar. Can we tell from this whether we should expect a classical world or a quantum world? One feature of thought and mental action with which we should be familiar is time. That is, we always need time to think! Without any time gap between desiring and intending, we would be acting instinctively and impulsively. Sometimes that works but not always (at least in my experience!). Most often, there has to be some delay, even some procrastination, between having a desire and fulfilling it. That delay gives us time to deliberate and decide on the best action to select. And, most importantly, if it is we who decide when to act, we feel that we act in some freedom. It feels better. If the physical world corresponds to those mental processes, according to Swedenborg, what hypothesis do we reach about physics? It is that there will be corresponding time gaps between the beginning of some persisting energy or propensity and the selection of physical outcome. Remember that quantum objects are selected and definite only intermittently—when measured, or observed—while classical objects are continuously definite with no gaps. All this leads us to expect that physical events should not be continuous; that is, we should expect a quantum world rather than a classical world. Continue with Part II: Desire and Energy> Ian Thompson is also the author of Starting Science from God, as well as Nuclear Reactions in Astrophysics (Univ. of Cambridge Press) and more than two hundred refereed professional articles in nuclear physics. Read more posts from the Scholars on Swedenborg series > %d bloggers like this:
The potential well for a single atom (figure omitted: a potential well diagram with the wave functions drawn at successive energy levels $n$) is basically a graphical representation of the possible electron positions at various energy levels. The horizontal axis represents distance from the nucleus of the atom. For higher energy levels (increasing $n$), we generally expect an electron within that energy level to be further from the nucleus than an electron at a lower energy level. The square of the magnitude of the wave function $\psi(x,t)$ (from the Schrödinger equation) gives the probability density associated with finding an electron at a certain position $x$ from the nucleus. This is also sometimes plotted on the potential well (but for the above picture $\psi(x,t)$ is shown because the square of the function should be symmetrical about the central vertical line). We can observe that an electron at a particular energy level (e.g. $n = 1$) can, at one instant, be closer to, or further from, the nucleus than if we were to measure the distance at another instant, and this is based on $\psi(x,t)$ for that particular energy level. An electron at a higher energy level (e.g. $n = 2$) will have an increased "maximum" distance from the nucleus (represented on the potential well diagram), but can sometimes be closer to the nucleus than an electron at a lower energy level; the probability is again based on the wave functions $\psi(x,t)$ for the two energy levels. More likely than not, an electron at a higher energy level will be further away than an electron at a lower energy level. In reality, the energy levels are split into subshells and we represent 4 different subshells with the letters s, p, d, and f. In a Cartesian coordinate system, we can use 3 different potential well diagrams to represent each spatial dimension. The atomic orbital for any given subshell is a region of space that an electron of that subshell has a 90% chance of occupying at any time. The atomic orbital for the s subshell is spherical (or dome shaped) because the wave function is the same for each spatial dimension, but, for other subshells such as p, the wave function is not the same for every dimension, so their atomic orbitals appear to extend in particular directions. So basically my question is: based on my explanation, am I right in my understanding, or am I way off? If there is anything that wasn't quite right, please feel free to correct me. I also appreciate additional explanatory comments in general. Right now, I don't really know much about the Schrödinger equation. As far as I am aware $|\psi(x,t)|^2$ gives the probability density of finding an electron at a certain position, but I don't know how it came about, how to interpret the Schrödinger equation, and how to solve for $\psi(x,t)$. So I plan to do more digging on this. However it would also be great if anyone can provide a dummy explanation to help me understand the Schrödinger equation. • $\begingroup$ There's no maximum distance from the nucleus in quantum theory of atoms. It can be anywhere. You need to discard all classical ideas of orbits. $\endgroup$ – StephenG Sep 22 '17 at 20:47 • $\begingroup$ Oh I see, thanks. What do the "walls" represent when we plot a Potential Well? $\endgroup$ – Royalrange Sep 22 '17 at 20:53 • $\begingroup$ They're not walls in the quantum world. They're just the value of the potential field at given radii. And that's all they are. As quantum tunneling shows, potentials are not walls. 
$\endgroup$ – StephenG Sep 22 '17 at 21:23 • $\begingroup$ If an electron were a circular standing wave around the nucleus, the length of the orbit would contain a whole number of wavelengths. This view is incorrect, because electrons are clouds and there are no orbits, but it gives you an intuitive feeling of why the wave mechanics produces discrete levels. $\endgroup$ – safesphere Sep 22 '17 at 22:00 • 2 $\begingroup$ Please read our FAQ on writing good question titles. $\endgroup$ – DanielSank Sep 22 '17 at 22:46 I've read your explanation, and I agree with it. As someone who is formally studying physics, I hope I can help. Think of the electrons creating a cloud. You can't make out exactly where they are, but they are in this region. The cloud becomes thicker the closer to the center you get. The higher energy electron spend more time further away from the nucleus. In this sense, they could be considered further away. There are two forms of the Schrodinger equation, time dependent and time independent. You wrote |ψ(x,t)|2, which is time dependent. However, orbitals are an example of time independent states. The relation given in the graph you posted is strictly spatial, meaning there is no change with time of this system. Since you said you are new to this, I imagine you are actually studying time independent states. Time dependent states can become much more complicated. I believe I didn't study them until I was in graduate level classes. ψ represents the particle. In the Copenhagen interpretation, this is the most complete description of a physical system. It describes how a particles wave equation (also called state function) changes with time and space. To find this, you solve the Schrodinger equation. The Schrodinger equation is like the F=ma of quantum mechanics. For the most part, you will just plug in what you need the equation and solving it will yield ψ. The Schrodinger equation came about when more evidence was being presented about wave-particle duality. Schrodinger thought that if particles could behave as waves, there should be an equation that describes them as such. Using other equations for inspiration, Schrodinger was able to come up with a description of particles using a wave equation. Schrodinger was able to calculate the spectral lines of hydrogen using this equation. The Schrodinger equation isn't the only way to calculate the states of quantum systems, but it is a very useful one. | cite | improve this answer | | • $\begingroup$ "Using other equations for inspiration, Schrodinger was able to come up with a description of particles using a wave equation" - It may be worth mentioning that the Schrodinger equation comes from quantizing the classical Hamilton principle of least action by replacing energy with the energy operator. Therefore the Schrodinger equation is simply the least action principle restated in the language of quantum mechanics. $\endgroup$ – safesphere Sep 22 '17 at 21:10 • $\begingroup$ @safesphere I agree with that statement. The user said he was an electrical engineer, so I didn't think that information would really help his understanding of the situation. He did ask for a "dummy explanation". Or perhaps it is helpful to him. $\endgroup$ – Hugger Sep 22 '17 at 21:16 Actually the plots are rather misleading as they only show the part of the wave functions that are in the classical region, where the solutions are oscillatory in nature. Upon reaching classical turning points of motion the solutions decay exponentially. 
Also, the small $r/r_0$ the behaviour of the wave function for the hydrogen atom is $R(r)\sim (r/r_0)^{\ell}$ so the probabilty distribution "close" to the nucleus is largely determined by the angular quantum number $\ell$ rather than the principal quantum number $n$. Classically this is because the effective potential contains a centrifugal part which pushes the particle away from the origin when $\ell\ne 0$. In the Coulomb problem, this part is dominant at small distances. For the hydrogen atom, the tail of the wave functions depends on the energy and is of the type $\sim e^{-r/(nr_0)}$, with $r_0$ the Bohr radius. Schrodinger was apparently inspired a mixture of optics and classical mechanics. In its simplest incarnation the Hamilton-Jacobi equation of classical mechanics is given by $$ H\left(q,\frac{\partial S}{\partial q}\right)=E $$ Schrodinger replaced the function $S$ by $k\log\psi$ to obtain $$ H\left(q,\frac{k}{\psi}\frac{\partial \psi}{\partial q}\right)=E $$ which, after straightforward manipulations and appropriate identification, lead to variational problem common in optics and mechanics and also used in the old Sommerfeld theory: $$ \delta J=\delta \int d^3 r\,\left((\nabla \psi)^2 -\frac{2m}{k^2}(E-V)\psi^2\right) $$ and (if I did not make any error) to a differential equation $\psi$ that is the time-independent Schrodinger equation. The final key step was to show that the spectrum of hydrogen was recovered by forcing $\psi$ to satisfy some boundary conditions, most notably at $\infty$. Fully time-dependent solutions are of the form $\Psi_n(r,t)=\psi(r)e^{-itE_n/\hbar}$. A good historical account can be found in the book by Max Jammer The conceptual development of quantum mechanics. The hydrogen atom is an exceptional case in that multiple values of $\ell$ can produce the same energies. This energy degeneracy is actually good as it produced the correct number of states of a given energy, something the "old" Bohr model did not predict correctly. The values of $\ell$ predicted by Schrodinger for each energy level are in agreement with experiment. (The harmonic oscillator in $n>1$ dimensions is another notable potential with the property that energy levels do not depend on $\ell$.) In general however, you can expect the energies to depend on both $n$ and $\ell$. If one applies Schrodinger's prescription to an atom with $2$ electrons, the solution contains $6$ spatial coordinates ($3$ for each electrons). Thus, $\Psi(\vec r_1,\vec r_2,t)$ cannot be a wave in ordinary $\mathbb{R}^3$ space. Moreover, even for a single electron, $\Psi(\vec r,t)$ will be generally complex. Born proposed that $\vert \Psi\vert^2$ - which is necessarily real and non-negative - be interpreted and treated as a probability density. This removes the two aforementioned problems in understanding "what" is $\psi(r)$ or $\Psi(\vec r,t)$. Edit: As pointed out by @dmckee there is also the problem that the solutions are not $0$ at the turning points. Finally, for good measure, the number of nodes does not necessarily increase with energy. For hydrogen, the number of nodes is $n-\ell-1$: for a given $n$, the highest allowed $\ell$ is $\ell_{max}=n-1$ so in particular for $\ell_{max}$ the solution has $n-n+1-1=0$ nodes. In fact, for $\ell_{max}$ the probability density has a maximum at the value of $r$ predicted by the Bohr model. 
The wavefunctions illustrated here are more appropriate for the 1d infinite well than for the hydrogen atom, although the information and the form of the potential strongly suggest a Coulomb-type potential.
• $\begingroup$ "Actually the plots are kinda misleading [...]" They also show the amplitude going to zero at the classical limit of motion which is very wrong. $\endgroup$ – dmckee --- ex-moderator kitten Sep 23 '17 at 16:10
• $\begingroup$ @dmckee good point and completely correct. $\endgroup$ – ZeroTheHero Sep 23 '17 at 16:14
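To see the hydrogen-atom behaviour described in this answer explicitly (the $\sim r^\ell$ behaviour near the origin, the $e^{-r/(n r_0)}$ tails, the $n-\ell-1$ radial nodes, and the $\ell = \ell_{max}$ density peaking at the Bohr-model radius), here is a small illustrative Python sketch. It hard-codes the standard textbook radial functions $R_{n\ell}$ for $n = 1, 2$ in units of the Bohr radius; nothing in it comes from the thread itself.

import numpy as np
import matplotlib.pyplot as plt

# Standard hydrogen radial wave functions R_{nl}(r), r in units of the Bohr radius r0
def R(n, l, r):
    if (n, l) == (1, 0):
        return 2.0 * np.exp(-r)
    if (n, l) == (2, 0):
        return (1.0 / (2.0 * np.sqrt(2.0))) * (2.0 - r) * np.exp(-r / 2.0)
    if (n, l) == (2, 1):
        return (1.0 / (2.0 * np.sqrt(6.0))) * r * np.exp(-r / 2.0)
    raise ValueError("only n = 1, 2 implemented in this sketch")

r = np.linspace(0, 15, 1500)
for n, l in [(1, 0), (2, 0), (2, 1)]:
    P = r**2 * R(n, l, r)**2                      # radial probability density
    plt.plot(r, P, label=rf"$n={n},\ \ell={l}$ ({n - l - 1} radial node(s))")
    print(n, l, np.trapz(P, r))                   # each integrates to ~1

# For l = l_max = n - 1 the density peaks at the Bohr value r = n^2 r0:
P21 = r**2 * R(2, 1, r)**2
print("peak of (n,l)=(2,1) at r =", r[np.argmax(P21)], "(Bohr model: 4)")

plt.xlabel(r"$r/r_0$")
plt.ylabel(r"$r^2 |R_{n\ell}(r)|^2$")
plt.legend()
plt.show()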
Center manifold From Scholarpedia Jack Carr (2006), Scholarpedia, 1(12):1826. doi:10.4249/scholarpedia.1826 revision #126955 [link to/cite this article] Jump to: navigation, search Post-publication activity Curator: Jack Carr Figure 1: The centre manifold \(y=h(x)\) and stable manifold \(W^s\ .\) One of the main methods of simplifying dynamical systems is to reduce the dimension of the system. Centre manifold theory is a rigorous mathematical technique that makes this reduction possible, at least near equilibria. An Example We first look at a simple example. Consider \[\tag{1} x' =ax^3 \,, \qquad y' =-y + y^2 \] where \(a\) is a constant. Since the equations are uncoupled, we see that the stationary solution \( x=y =0\) of (1) is asymptotically stable if and only if \(a < 0\ .\) Suppose now that \[\tag{2} x' =ax^3 + xy - xy^2\,, \qquad y' =-y + bx^2 +x^2 y \] Since the equations are coupled we cannot immediately decide if the stationary solution \( x=y =0\) of (2) is asymptotically stable. The key is an abstraction of the idea of uncoupled equations. A curve \(y =h(x)\ ,\) defined for \(|x|\) small, is said to be an invariant manifold for the system \[\tag{3} x' =f(x,y)\,, \qquad y' = g(x,y) \] if the solution of (3) with \(x(0) =x_0\ ,\) \(y(0) = h(x_0)\) lies on the curve \(y =h(x)\) as long as \(x(t)\) remains small. For the system (1), \(y=0\) is an invariant manifold. Note that in deciding upon the stability of the stationary solution of (1), the only important equation is \(x' = ax^3\ ,\) that is, we need only study a first order equation on a particular invariant manifold. Center manifold theory tells us that (2) has an invariant manifold \(y =h(x) = \mbox{O}(x^2)\) for small \(x\ .\) Furthermore, the local behaviour of solutions of the two dimensional system (2) can be determined by studying the scalar equation \[\tag{4} u' = au^3 + uh(u) -uh^2(u) \] The theory also tells us how to compute approximations to the invariant manifold \(y = h(x)\ .\) For (2) we have that \(h(x) = bx^2 + \mbox{O}(x^4)\) and using this information in (4) gives \[\tag{5} u' =(a+b)u^3 + \mbox{O}(u^5) \] Hence the stationary solution of (2) is asymptotically stable if \(a+b < 0\) and unstable if \(a+b>0\ .\) If \(a+b = 0\) we need a better approximation to the invariant manifold in order to decide on the stability. Centre Manifolds Consider the system \[\tag{6} x' =Ax + f(x,y)\,, \qquad y' = By+g(x,y)\,, \qquad (x,y ) \in \R^n \times \R^m \] where all the eigenvalues of the matrix \(A\) have zero real parts and all the eigenvalues of the matrix \(B\) have negative real parts. The functions \(f\) and \(g\) are sufficiently smooth and \[ f(0,0) =0\,, \qquad Df(0,0) =0\,, \qquad g(0,0) =0\,, \qquad Dg(0,0) = 0 \] where \(Df\) is the Jacobian matrix of \(f\ .\) If \(f\) and \(g\) are identically zero then (6) has the two obvious invariant manifolds \(x=0\) and \(y=0\ .\) The invariant manifold \(x=0\) is called the stable manifold, and on the stable manifold all solutions decay to zero exponentially fast. The invariant manifold \(y=0\) is called the centre manifold. In general, an invariant manifold \(y = h(x)\) for (6) defined for small \(|x|\) with \(h(0)=0\) and \(Dh(0)=0\) is called a centre manifold. In more physical terms, the dynamics of y follows the dynamics of x and one may say that x enslaves the variable y. This interpretation has been called slaving principle. 
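A quick numerical illustration of this enslaving, using example (2) above with arbitrarily chosen coefficients \(a=-1\) and \(b=0.5\) (so that \(a+b<0\)) and scipy's initial value solver: the transverse coordinate \(y(t)\) collapses onto the centre manifold \(y = h(x) \approx b x^2\) on an O(1) time scale, long before \(x(t)\) itself has relaxed. This sketch is not part of the original article; the residual \(y - b x^2\) settles to the \(\mbox{O}(x^4)\) correction to \(h\) rather than to zero, consistent with \(h(x) = bx^2 + \mbox{O}(x^4)\).

import numpy as np
from scipy.integrate import solve_ivp

# Example (2) with made-up coefficients a = -1, b = 0.5 (so a + b < 0)
a, b = -1.0, 0.5

def rhs(t, z):
    x, y = z
    return [a * x**3 + x * y - x * y**2,
            -y + b * x**2 + x**2 * y]

sol = solve_ivp(rhs, (0.0, 20.0), [0.4, 0.3], dense_output=True, rtol=1e-9, atol=1e-12)

for t in [0.0, 2.0, 5.0, 10.0, 20.0]:
    x, y = sol.sol(t)
    # y is rapidly "enslaved" to the centre manifold y = h(x) = b x^2 + O(x^4),
    # while x itself relaxes only slowly, since x' ~ (a + b) x^3 on the manifold.
    print(f"t = {t:5.1f}:  x = {x:+.5f},  y - b*x^2 = {y - b * x**2:+.2e}")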
Main Results The general theory states that there exists a centre manifold \(y =h(x)\) for (6) and that the equation on the centre manifold \[\tag{7} u' =Au + f(u,h(u))\,, \qquad u \in \R^n \] determines the dynamics of (6) near \((x, y) =(0,0)\ .\) In particular, if the stationary solution \(u=0\) of (7) is stable, we can represent small solutions of (6) as \(t \rightarrow \infty\) by \[ x(t) =u(t) + \mbox{O}(e^{-\gamma t} )\,, \qquad y(t) =h(u(t)) + \mbox{O}(e^{-\gamma t}) \] where \(\gamma > 0\) is a constant. To use the above theory, we need to have enough information about the centre manifold \(y = h(x)\) in order to determine the local dynamics of (7). If we substitute \(y(t) = h(x(t))\) into the second equation in (6) we obtain \[\tag{8} N(h(x)) =h'(x)\left[ Ax +f(x,h(x)) \right] - Bh(x) -g(x,h(x)) = 0 \] The general theory tells us that the solution \(h\) of (8) can be approximated by a polynomial in \(x\ ,\) that is, if \(N(\phi(x)) = \mbox{O}(|x|^q)\) as \(x \rightarrow 0\) then \(h(x) =\phi (x) + \mbox{O}(|x|^q)\ .\) There is also an \(m\) dimensional invariant manifold \(W^s\) tangential to the y-axis called the stable manifold. On the stable manifold all solutions decay to zero exponentially fast. Figure 1 illustrates the local dynamics for equation (6). The details of the flow on the centre manifold \(y = h(x)\) depend on the higher order terms in equation (7) and we cannot assign directions to the flow without further information. We have assumed that all of the eigenvalues of the matrix B in (6) have negative real parts. The theory can be extended to the case in which the matrix B has in addition some eigenvalues with positive real parts. In this case the stationary solution \(x=0, y=0\) of (6) is unstable due to the unstable eigenvalues. There exists a centre manifold for (6) which captures the behaviour of small bounded solutions. In particular, this gives a method of studying all sufficiently small equilibria, periodic orbits and heteroclinic orbits. Local Bifurcations Centre manifold reduction is central to the development of bifurcation theory. We illustrate this by means of a simple example. Consider \[\tag{9} x' =\epsilon x -x^3 +xy\,, \qquad y' =-y + y^2 -x^2 \] where \(\epsilon\) is a small scalar parameter. The goal is to study small solutions of (9). The linearised problem about the zero equilibrium has eigenvalues \(-1\) and \(\epsilon\) so the theory does not directly apply. We can write the equations in the equivalent form \[\tag{10} x' =\epsilon x -x^3 +xy\,, \qquad y' = -y + y^2 -x^2 \,, \qquad \epsilon' = 0 \ .\] When considered as an equation on \(\R^3\) the \(\epsilon x\) term in (10) is nonlinear and the system has an equilibrium at \((x,y,\epsilon) = (0,0,0)\ .\) The linearisation about this equilibrium has eigenvalues \(-1, 0, 0\ ,\) that is, it has two zero eigenvalues and one negative eigenvalue. The theory now applies so that the extended system (10) has a two dimensional centre manifold \(y =h(x,\epsilon)\) that can be approximated by a polynomial in \(x\) and \(\epsilon\ .\) The equation on the centre manifold is two dimensional and may be written in terms of the scalar variables \(u\) and \(\epsilon\) as \[ u' =\epsilon u - 2u^3 + \mbox{higher order terms} \,, \qquad \epsilon' = 0 \] and the local dynamics of (10) can be deduced from this equation. 
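The order-by-order solution of the invariance equation (8) is easy to automate. The following sympy sketch is illustrative only (the polynomial ansatz, symbol names, and truncation order are choices made here, not part of the article); it recovers \(h(x) = bx^2 + \mbox{O}(x^4)\) for example (2) and the reduced equation \(u' = (a+b)u^3 + \mbox{O}(u^5)\) quoted in (5).

import sympy as sp

x, a, b = sp.symbols('x a b')
c2, c4 = sp.symbols('c2 c4')

# Polynomial ansatz for the centre manifold of example (2): y = h(x) = c2 x^2 + c4 x^4
h = c2 * x**2 + c4 * x**4

f = a * x**3 + x * h - x * h**2          # x' = f(x, h(x))
g = -h + b * x**2 + x**2 * h             # y' evaluated on y = h(x)

# Invariance equation (8): N(h) = h'(x) * f - g = 0, solved order by order in x
N = sp.expand(sp.diff(h, x) * f - g)
eqs = [sp.Eq(N.coeff(x, k), 0) for k in (2, 4)]
coeffs = sp.solve(eqs, [c2, c4], dict=True)[0]
print(coeffs)                            # c2 = b, c4 = b - 2*a*b - 2*b**2

# Reduced equation (4) on the manifold: u' = a u^3 + u h(u) - u h(u)^2
u = sp.symbols('u')
h_u = h.subs(x, u).subs(coeffs)
reduced = sp.expand(a * u**3 + u * h_u - u * h_u**2)
print(sp.series(reduced, u, 0, 6))       # (a + b) u^3 + O(u^5), as in (5)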
Notes and Further Reading

The ideas for centre manifolds in finite dimensions have been around for a long time and have been developed by Carr (1981), Guckenheimer and Holmes (1983), Kelly (1967), Vanderbauwhede (1989) and others. For recent developments in the approximation of centre manifolds see Jolly and Rosa (2005). Pages 1-5 of the book by Li and Wiggins (1997) give an extensive list of the applications of centre manifold theory to infinite dimensional problems. Mielke (1996) has developed centre manifold theory for elliptic partial differential equations and has applied the theory to elasticity and hydrodynamical problems. Applications to phase transitions in biological, chemical and physical systems have been investigated by Haken (2004). In addition, it is interesting to note that there is a stochastic extension of the center manifold theorem, which has been introduced by Boxler (1989). In this case, for instance, the centre and stable manifolds may fluctuate randomly.

J. Carr (1981), Applications of Centre Manifold Theory, Springer-Verlag.
J. Guckenheimer and P. Holmes (1983), Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer-Verlag.
M. S. Jolly and R. Rosa (2005), Computation of non-smooth local centre manifolds, IMA Journal of Numerical Analysis, 25, no. 4, 698-725.
A. Kelly (1967), The stable, center-stable, center, center-unstable and unstable manifolds, J. Diff. Eqns, 3, 546-570.
Li and S. Wiggins (1997), Invariant Manifolds and Fibrations for Perturbed Nonlinear Schrödinger Equations, Springer-Verlag.
A. Mielke (1996), Dynamics of Nonlinear Waves in Dissipative Systems: Reduction, Bifurcation and Stability, Pitman Research Notes in Mathematics Series, 352, Longman.
A. Vanderbauwhede (1989), Center Manifolds, Normal Forms and Elementary Bifurcations, in Dynamics Reported, Vol. 2, Wiley.
H. Haken (2004), Synergetics: Introduction and Advanced Topics, Springer, Berlin.
P. Boxler (1989), A stochastic version of center manifold theory, Probability Theory and Related Fields, 83(4), 509-545.

See Also: Attractor, Bifurcations, Normal Hyperbolicity, Stability, Synergetics.
Solving the Schrödinger equation numerically by expansion in eigenstates Computational physics example – Quantum Mechanics By Jonas Tjemsland, Andreas Krogen og Jon Andreas Støvneng. Last edited: March 12th 2016 In this notebook we will be solving the one-dimensional Schrödinger equation, $$i\hbar\frac{\partial\Psi(x, t)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi( x, t)}{\partial x^2}+V(x)\Psi( x, t) $$ numerically for an arbitrary initial condition $\Psi(x, 0)$. The eigenstates $\psi_n(x)$ and the eigenenergies $E_n$ of the system are found by solving the time-independent Schrödinger equation $$-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_n(x)}{\partial x^2}+V(x)\psi_n(x) = E_n\psi_n(x),$$ and normalizing the result. The inital condition $\Psi(x, 0)$ is expanded in terms of $\psi_n(x)$: $$\Psi(x,0) = \sum_{i}\alpha_i\psi_i(x).$$ In turn, the solution at time $t$, $\Psi(x, t)$, is given by $$\Psi(x, t) = \sum_n\alpha_n\psi_n(x)\exp\left(-i\frac{E_n}{\hbar}t\right).$$ As an example, we will be propagating an electron given by a gaussian wave packet towards a potential barrier. A similar example is studied in our notebook on One-Dimensional Wave Packet Propagation, but with a quite different approach. The numerical scheme that is used is developed and explained in detail in the appendix at the end of this notebook. The reader is adviced to read through this before reviewing the notebook. We start by importing packages, setting common figure parameters and defining physical parameters. In [1]: import matplotlib.pyplot as plt from scipy.linalg.lapack import ssbevd import numpy as np from matplotlib import animation newparams = {'axes.labelsize': 25, 'axes.linewidth': 1, 'savefig.dpi': 200, 'lines.linewidth': 3, 'figure.figsize': (20, 10), 'ytick.labelsize': 25, 'xtick.labelsize': 25, 'ytick.major.pad': 5, 'xtick.major.pad': 5, 'figure.titlesize': 25, 'legend.fontsize': 25, 'legend.frameon': True, 'legend.handlelength': 1.5, 'axes.titlesize': 25, 'mathtext.fontset': 'stix', '': 'STIXGeneral'} hbar = 1.05E-34 # J⋅s. Reduced Plank's constant m = 9.11E-31 # kg. Electron mass As mentioned in the introduction, we will be propagating an electron towards a potential barrier in one dimension. We will be considering a domain $x\in[0,L]$. Let us use $\Delta x = 1\text Å$, which is a typical diameter of an atom. In turn, the width of the barrier is being decided by the number of discretization points it consists of. We want each side of the potential barrier to be large, so that the electron is not influenced by the barrier or the edges at $t=0$. We choose $N=10$ discretization points for the barrier, and 50 times that for each of the sides. The barrier has a height $V_0=1.5\cdot 1.6\cdot 10^{-19}J = 1.5\text{eV}$. Play around with other parameters and potential barriers. The code in this notebook works even for arbitrary potentials! In [2]: V0 = 1.5*1.6E-19 # J. Potential height dx = 1e-10 # m. Discretization step N = 10 # #. Number of discretization points in the barrier N_sides = 100*N # #. 
Number of discretization points on each side of the barrier Ntot = N + 2*N_sides # Total number of discretization points x = np.linspace(0, dx*Ntot, Ntot) # x-axis # Potential V = np.array([0]*N_sides + [V0]*N + [0]*N_sides) Wave packet We will be representing the initial electron as a gaussian wavepacket, $$\Psi(x,0)=C\exp\left(-\frac{(x-x_0)^2}{4\sigma^2}+i\frac{p_0x}{\hbar}\right),$$ where $p_0=\sqrt{2mE_0}$ is the momentum of the wave packet, $E_0$ the energy of the electron, $x_0$ is the initial expectation value, and $\sigma$ is some parameter specifying the width of the wave packet. It will not be unreasonable to choose $E_0\sim V_0$. As we will see, this will give a good visualization of transmission and reflection. We start by choosing an energy a bit higher than the potential height, $E_0=1.39V_0$. $x_0$ to be in the middle of the left part of the domain and $\sigma$ (one standard deviation) to 1/8 of the left part. Play around with different parameters! In [3]: x0 = 0.5*dx*N_sides k0 = np.sqrt(2.0*m*E0)/hbar sigma = dx*N_sides/8. A = (2*np.pi*sigma**2)**(-0.25) Psi_0 = A * np.exp(-(x-x0)**2/(4*sigma**2)) * np.exp(1j*k0*x) # Check if the wave function is normalized print("Normalization:", dx*np.sum(np.abs(Psi_0)**2)) Normalization: 0.999471364542 We now visualize the initial wave packet and the potential (with a suitable scaling). In [4]: plt.plot(x, .75*V*np.max(np.abs(Psi_0)**2)/max(1e-30,np.max(V)), '--') plt.plot(x, np.abs(Psi_0)**2) plt.title('Initial probability distribution and potential') plt.xlabel('$x$ [m]') Solving the eigenvalue problem (Schrödinger equation) Now that all the parameters are settled, we can finally solve the Schrödinger equation. This is done by solving an eigenvalue problem. We are using a real symmetric band matrix solver (You could of course also use numpy.linalg.eigh, but this requires the initialization of the whole matrix, mostly consisting of zeros). We thus need to initialize the diagonal and the sub- and superdiagonal. This is explained in detail in the appendices. Note that this is the most computationally demanding part of these computations. In [5]: diag = hbar**2/(m*dx**2) + V # Diagonal sup_diag = np.ones(Ntot)*(-hbar**2/(2*m*dx**2)) # Superdiagonal In [6]: E, psi_n, _ = ssbevd([sup_diag, diag]) # Call solver Let us visualize some of the eigenstates and eigenenergies! In [7]: for i in [0, 1, 3]: plt.plot(x, psi_n[:,i], label=r"$\psi_{%.0f}(x)$"%(i)) plt.plot(x, .75*V*np.max(psi_n[1])/max(1e-30,np.max(V)), '--', label="Potential") plt.title("Eigenmodes for the given potential") plt.xlabel("$x$ [m]") In [8]: plt.ylabel('Energy (eV)') Here is a quick question for the reader: why is $\psi_0(x)$ and $\psi_1{x}$ almost equal in the right part of the domain? Would be expect the same result for other pairs $(\psi_n, \psi_{n+1})$? Hint: nearly degenerate. Finding the expansion coefficients We now calculate the expansion coefficents as explained in the introduction and in the appendices. In [9]: psi_n = psi_n.astype(complex) c = np.zeros(Ntot, dtype=complex) for n in range(Ntot): c[n] = np.vdot(psi_n[:,n], Psi_0) Computing $\Psi(x,t)$ Now, everything is set to compute the wave function at some arbitrary time $t$ given the inital condition and potential. To do this, we create a function performing the calculation as explained in the introduction and in the appendices. In [10]: def Psi(t, c, psi_n, E): """ Calculate the wave function at some time t given the expansion coefficients c, eigenstates psi_n and eigenenergies E. t : float. 
    c     : 1d array-like complex, len Ntot. Expansion coefficients
    psi_n : 2d array-like, shape (Ntot, Ntot). Eigenstates (one per column)
    E     : 1d array-like float, len Ntot. Eigenenergies

    Returns: numpy array, len Ntot. Wave function at time t
    """
    return psi_n @ (c*np.exp(-1j*E*t/hbar))

Finding a suitable time step - Ehrenfest's theorem

To find a suitable time step $\Delta t$, we will be using Ehrenfest's theorem. That is, the quantum mechanical expectation values obey the classical equations of motion. For zero potential, (the expectation value of) the particle will thus have a velocity
$$v = \frac{p_0}{m} = \sqrt{\frac{2E_0}{m}}.$$
We will thus use $\Delta t \sim \sqrt{m/(2E_0)}\,\Delta x$. Let us plot the result for some $t$'s!

In [11]:
dt = 250*dx*(m/(2*E0))**.5
nt = 5
for t in np.arange(0, nt*dt, dt):
    plt.plot(x, np.abs(Psi(t, c, psi_n, E))**2, label=r"$t=%.1e$ s"%(t))
plt.title("Wave function for different $t$")
plt.xlabel("$x$ [m]")
plt.ylabel("$|\Psi(x, t)|^2$")
plt.legend()

Tunneling, reflection and transmission

There are many things one can learn from this simple exercise. For example, note that we have used an energy that is higher than the potential barrier, $E_0>V_0$. In classical mechanics we would expect total transmission, but from the plot above we see that there is a probability for reflection! On the other hand, if $E_0<V_0$ we would classically expect total reflection, but there is some probability for transmission (test for yourself)! This is called tunneling. These concepts are explained in more detail in our notebook on One-Dimensional Wave Packet Propagation, where the different probabilities are explicitly calculated. Note how the wave function has a high peak at the barrier. This is again due to reflection and transmission. In quantum mechanics we will have some reflection both when the potential is lowered and when it is raised (check for yourself with a potential well!). The peak is thus due to constructive interference between different parts of the wave function being reflected repeatedly.

Exercises and further work

Investigate the problem further by yourself!
• What are the advantages and disadvantages of using this method (as opposed to the more direct method used in our notebook on One-Dimensional Wave Packet Propagation)?
• Compute numerically the transmission and reflection coefficient for different barrier widths and different barriers (a starting point is sketched after the animation below).
• Implement periodic boundary conditions. (Hint: Take a look at the matrix in the appendices and consider the boundary condition at the edges. We need to add two new non-zero matrix elements. These are located in the upper right and lower left corner. What are they? Note that we also need to use a sparse matrix or general eigenvalue solver, e.g. numpy.linalg.eigh.)
• Explain why we have dispersion of the wave packet (it is spreading out).
• Calculate (you can make approximations if necessary) how long it takes for the electron to pass the barrier, reflect off the right boundary, pass the barrier again and return to its initial position. Verify your calculations using the Python codes in this notebook.
• Generalize the method to two dimensions. (Hint: Use the same finite difference method as in the appendix on the two-dimensional Schrödinger equation. For simplicity, use $\Delta x = \Delta y = h$. To write the resulting approximation as a matrix, use the reindexing $i,j\to i + (j-1)N$. Treat the boundaries carefully! The easiest boundary condition is probably the Dirichlet boundary condition.)

Let us make an animation to visualize the propagating electron!
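Before doing so, here is a small optional sanity check (an editorial addition, not part of the original notebook), motivated by the Ehrenfest argument above: before the wave packet reaches the barrier, its position expectation value $\langle x\rangle = \Delta x\sum_j x_j|\Psi(x_j,t)|^2$ should follow the classical trajectory $x_0 + vt$ with $v=\sqrt{2E_0/m}$. The sample times below are an arbitrary choice; any times well before the packet hits the barrier will do.

# Sanity check: <x> should follow the classical trajectory before the barrier is reached
v = np.sqrt(2*E0/m)
for t in [0, 0.5*dt, dt]:
    prob = np.abs(Psi(t, c, psi_n, E))**2
    x_mean = dx*np.sum(x*prob)
    print("t = %.2e s:  <x> = %.3e m,  x0 + v*t = %.3e m" % (t, x_mean, x0 + v*t))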
It may also be instructive to calculate the probabilities for the particle to be in the different parts of the domain. When the particle has propagated through the barrier, the probability that the particle is on the right side of the barrier should be approximately equal to the transmission coefficient.

In [12]:
from matplotlib import animation
from IPython.display import HTML

plt.rcParams.update({'animation.html': 'html5', 'savefig.dpi': 50})

def init_anim():
    """ Initialises the animation. """
    global ax, line, textbox
    line, = ax.plot([], [])
    ax.set_xlim([0, dx*Ntot])
    ax.set_ylim([0, 4*np.max(np.abs(Psi_0)**2)])
    ax.set_title('Numerical simulation')
    props = dict(boxstyle='round', facecolor='wheat', alpha=0.5)
    # A text box that will display the probability for different parts of the domain
    textbox = ax.text(0.05, 0.95, '', transform=ax.transAxes, fontsize=25,
                      verticalalignment='top', bbox=props)
    return line, textbox

def animate(i):
    """ Animation function. Being called repeatedly. """
    global ax, line, textbox
    prob = np.abs(Psi(i*dt, c, psi_n, E))**2
    line.set_data(x, prob)
    left_text = "Left side: %.4f\n"%(dx*np.sum(prob[0:N_sides]))
    barrier_text = "Barrier: %.4f\n"%(dx*np.sum(prob[N_sides:N_sides+N]))
    norm_text = "Normalization: %.4f\n"%(dx*np.sum(prob))
    right_text = "Right side: %.4f\n"%(dx*np.sum(prob[-N_sides:]))
    # Update the text box with the probabilities for the different parts of the domain
    textbox.set_text(left_text + barrier_text + right_text + norm_text)
    return line, textbox

# Run the simulation and visualize the system as an animation.
fig, ax = plt.subplots()
h_anim = animation.FuncAnimation(fig, animate, init_func=init_anim,
                                 frames=1000, interval=20, blit=True)
h_anim  # The 'animation.html' setting above renders the animation as an HTML5 video
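As a starting point for the transmission and reflection exercise above, here is a minimal sketch (an editorial addition, not part of the original notebook) that estimates the two coefficients from $|\Psi(x,t)|^2$ at a single late time. The choice t_final = 4*dt is an assumption tied to the parameters used here: it should be large enough that the packet has split into a transmitted and a reflected part, but small enough that neither part has reached the edges of the domain.

# Estimate transmission and reflection coefficients from the late-time probability density
t_final = 4*dt
prob = np.abs(Psi(t_final, c, psi_n, E))**2
T = dx*np.sum(prob[N_sides+N:])   # Probability of being to the right of the barrier
R = dx*np.sum(prob[:N_sides])     # Probability of being to the left of the barrier
print("Transmission: %.3f, Reflection: %.3f, Sum: %.3f" % (T, R, T + R))

Note that T + R differs from 1 only by the (small) probability of still being inside the barrier at t_final.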
The Imaginary Part of Quantum Mechanics Really Exists! (Quantum) For almost a century, physicists have been intrigued by the fundamental question: why are complex numbers so important in quantum mechanics, that is, numbers containing a component with the imaginary number i? Usually, it was assumed that they are only a mathematical trick to facilitate the description of phenomena, and only results expressed in real numbers have a physical meaning. However, a Polish-Chinese-Canadian team of researchers has proved that the imaginary part of quantum mechanics can be observed in action in the real world. We need to significantly reconstruct our naive ideas about the ability of numbers to describe the physical world. Until now, it seemed that only real numbers were related to measurable physical quantities. However, research conducted by the team of Dr. Alexander Streltsov from the Centre for Quantum Optical Technologies (QOT) at the University of Warsaw with the participation of scientists from the University of Science and Technology of China (USTC) in Hefei and the University of Calgary, found quantum states of entangled photons that cannot be distinguished without resorting to complex numbers. Moreover, the researchers also conducted an experiment confirming the importance of complex numbers for quantum mechanics. Articles describing the theory and measurements have just appeared in the journals Physical Review Letters and Physical Review A. “In physics, complex numbers were considered to be purely mathematical in nature. It is true that although they play a basic role in quantum mechanics equations, they were treated simply as a tool, something to facilitate calculations for physicists. Now, we have theoretically and experimentally proved that there are quantum states that can only be distinguished when the calculations are performed with the indispensable participation of complex numbers,” explains Dr. Streltsov. Complex numbers are made up of two components, real and imaginary. They have the form a + bi, where the numbers a and b are real. The bi component is responsible for the specific features of complex numbers. The key role here is played by the imaginary number i, i.e. the square root of -1. There is nothing in the physical world that can be directly related to the number i. If there are 2 or 3 apples on a table, this is natural. When we take one apple away, we can speak of a physical deficiency and describe it with the negative integer -1. We can cut the apple into two or three sections, obtaining the physical equivalents of the rational numbers 1/2 or 1/3. If the table is a perfect square, its diagonal will be the (irrational) square root of 2 multiplied by the length of the side. At the same time, with the best will in the world, it is still impossible to put i apples on the table. The surprising career of complex numbers in physics is related to the fact that they can be used to describe all sorts of oscillations much more conveniently than with the use of popular trigonometric functions. Calculations are therefore carried out using complex numbers, and then at the end only the real numbers in them are taken into account. Compared to other physical theories, quantum mechanics is special because it has to describe objects that can behave like particles under some conditions, and like waves in others. The basic equation of this theory, taken as a postulate, is the Schrödinger equation. 
It describes changes in time of a certain function, called the wave function, which is related to the probability distribution of finding a system in a specific state. However, the imaginary number i openly appears next to the wave function in the Schrödinger equation. The photon source used to produce quantum states requiring description by complex numbers. Source: USTC “For decades, there has been a debate as to whether one can create coherent and complete quantum mechanics with real numbers alone. So, we decided to find quantum states that could be distinguished from each other only by using complex numbers. The decisive moment was the experiment where we created these states and physically checked whether they were distinguishable or not,” says Dr. Streltsov, whose research was funded by the Foundation for Polish Science. The experiment verifying the role of complex numbers in quantum mechanics can be presented in the form of a game played by Alice and Bob with the participation of a master conducting the game. Using a device with lasers and crystals, the game master binds two photons into one of two quantum states, absolutely requiring the use of complex numbers to distinguish between them. Then, one photon is sent to Alice and the other to Bob. Each of them measures their photon and then communicates with the other to establish any existing correlations. “Let’s assume Alice and Bob’s measurement results can only take on the values of 0 or 1. Alice sees a nonsensical sequence of 0s and 1s, as does Bob. However, if they communicate, they can establish links between the relevant measurements. If the game master sends them a correlated state, when one sees a result of 0, so will the other. If they receive an anti-correlated state, when Alice measures 0, Bob will have 1. By mutual agreement, Alice and Bob could distinguish our states, but only if their quantum nature was fundamentally complex,” says Dr. Streltsov. An approach known as quantum resource theory was used for the theoretical description. The experiment itself with local discrimination between entangled two-photon states was carried out in the laboratory at Hefei using linear optics techniques. The quantum states prepared by the researchers turned out to be distinguishable, which proves that complex numbers are an integral, indelible part of quantum mechanics. The achievement of the Polish-Chinese-Canadian team of researchers is of fundamental importance, but it is so profound that it may translate into new quantum technologies. In particular, research into the role of complex numbers in quantum mechanics can help to better understand the sources of the efficiency of quantum computers, qualitatively new computing machines capable of solving some problems at speeds unattainable by classical computers. The Centre for Quantum Optical Technologies at the University of Warsaw (UW) is a unit of the International Research Agendas program implemented by the Foundation for Polish Science from the funds of the Intelligent Development Operational Programme. The seat of the unit is the Centre of New Technologies at the University of Warsaw. The unit conducts research on the use of quantum phenomena such as quantum superposition or entanglement in optical technologies. These phenomena have potential applications in communications, where they can ensure the security of data transmission, in imaging, where they help to improve resolution, and in metrology to increase the accuracy of measurements. 
The Centre for Quantum Optical Technologies at the University of Warsaw is actively looking for opportunities to cooperate with external entities in order to use the research results in practice.

Featured image: Photons can be so entangled that within quantum mechanics their states cannot be described without using complex numbers. Source: QOT/jch

“Operational Resource Theory of Imaginarity”, K.-D. Wu, T. V. Kondra, S. Rana, C. M. Scandolo, G.-Y. Xiang, Ch.-F. Li, G.-C. Guo, A. Streltsov, Physical Review Letters 126, 090401 (2021), DOI: 10.1103/PhysRevLett.126.090401

“Resource theory of imaginarity: Quantification and state conversion”, Physical Review A 103, 032401 (2021), DOI: 10.1103/PhysRevA.103.032401

Provided by Faculty of Physics, University of Warsaw
The Sun in a laboratory container In addition to quantum physics, I also have of course other interests and fascinations. And sometimes some other than a quantum physics subject is so impressive and important that I want to say something about it on this website, even though it’s not about quantum physics. SAFIRE project It’s about the SAFIRE project. The acronym means: Stellar Athmospheric Function in Regulation Experiment. It was started by a group of plasma physicists, astrophysicists and electrical engineers who wanted to test an idea differing from mainstream physics about the forces that play an important role within our solar system and also in interstellar space. This group is called out by RationalWiki as a bunch of garden-variety physicists or pseudo-physicists. Well, they have answered the challenge and started the SAFIRE project. They have implemented their model of how they think the sun works in a laboratory container, a three-year project, to see if their model can be falsified. Click on the image to download the SAFIRE report as pdf Their result is truly amazing. View the film they produced, read their 72 page report and think for yourself. Either they are completely fraudulent, or they have discovered something particularly important (and that option is my firm impression) that can have enormous implications for: • Our knowledge about the real processes that take place in a star, especially in our own nearby sun. • Insights about the origin of the elements heavier than hydrogen and helium. • Free energy production: a revolutionary way in which energy can be generated. It seems nuclear fusion is happening, because heavy elements appear to be produced, without any adverse side effects and without the need for an incredibly expensive and complex fusion reactor, which has to enclose the hot plasma in extremely strong magnetic fields. • Safe processing of radioactive waste. Energy by transmutation of light elements If this is true, then this is incredibly good news, especially in the context of our current problems with regard to our global energy needs. Confirmation by replication When watching the film and reading their report, I am reminded of the facilities that are available on the most universities, to replicate this and to test it. It is not beyond the capabilities of an academic technician with adequate resources. Physics students, accept the challenge. Beyond Weird & The Quantum Handshake To keep up to date with the subjects on my website I have to read quite a bit. And a lot of highly interesting material on quantum physics is being written and published. But occasionally I come across something that impresses me particularly and seems worth of special attention. Especially when it considerably broadens or clarifies my view on quantum physics and its interpretations. Therefore highly recommended stuff for visitors of my website. So, I’ll discuss two books here. The first one I want to discuss is: “Beyond Weird – Why Everything You Thought About Quantum Physics is .. different” by Philip Ball. Beyond Weird I am grateful to the student who put this book in my hands. Philip Ball is a science journalist who has been writing about this topic in Nature for many years. You don’t need to be able to solve exotic Schrödinger equations to follow his fascinating and utterly clear explanation of the quantum world and the riddles it presents. Also, he clears some misunderstandings up about this subject. 
Such as the word quantum, which is actually not the fundamental thing in quantum physics but rather an emerging phenomenon. The state wave is not quantized but fundamentally very continuous. He desctibes how quantum physics in its character and history deviates from all previous physical theories. It is a theory that is not built by extrapolation on the older theories. You can’t imagine what happens in the quantum world as you can do with, for example, gravity, electric currents, gas molecules, etc. The mathematical basis of quantum physics, quantum mechanics was not created by starting from fundamental principles but was the result of particularly happy intuitions that worked well but whose creators could not fundamentally explain what they were based on. Examples are: The matrix mechanics of Heisenberg, the Schrödinger equation, the idea of ​​Born that the state function gives you the probability of finding the particle at a certain place when measured. It was all inspired intuitive guesswork that laid the foundation for an incredibly successful theory we still don’t really understand how and why it works. Ball makes presents a good case for the idea that quantum mechanics seems to be about information. It is a pity, in my opinion, that he ultimately appears to adhere to the decoherence hypothesis. That is the point in his book where the critical reader will notice that what was until then comparably good to follow step by step suddenly loses its strict consistency and that from there one has to do with imperfect metaphors. His account remains interesting but isn’t that convincing anymore. Despite that, the book is highly recommended for anyone who wants to understand more about the quantum world and especially about quantum computers. The Quantum Handshake A completely different type of book is “The Quantum Handshake – Entanglement, Nonlocality and Transactions” by John Cramer. His interpretation of quantum physics seems, in my opinion incorrectly, not to be placed on the long list of serious quantum interpretations. Not a big group of supporters. In any case, I had never heard of his interpretation until it was brought forward by someone at a presentation about consilience I attended a short time ago. The subject made me curious because the state wave seems to stretch out backward and forward in time as I see it. Cramers’ hypothesis is that the state wave can also travel back in time, creating a kind of ‘handshake’ between the primary departing state wave and the secondary backwards in time reflected state wave. The reflected state wave traveling back in time arrives at the source thus exactly at the time of departure of the primary wave. This handshake between both waves effects the transfer of energy without the need for the so-called quantum collapse. The measurement problem where the continuous state wave instantaneously changes into an energy-matter transfer would then be explained as the result of a energy transfer by the handshaking state waves. However, in order to finally be able to complete that energy-matter transfer from source to measurement device, Cramer has to assume that the state wave is “somewhat” material-physical. This ephemeral quality of the state wave is considered as a severe weakness in his interpretation. 
Nevertheless, the book provides worthwhile reading for those who want to delve into the various interpretations of quantum physics, especially because of Cramer's discussion of a large number of experiments with amazing implications, such as quantum erasers and delayed-choice experiments where retrocausality appears to occur. His idea of a state wave that travels back in time – which is not forbidden in the formulations of quantum mechanics – remains a fascinating possibility.
Information: what do you mean?
On the formative element of our universe
Dirk K. F. Meijer*

ABSTRACT
Information is considered as a fundamental building block of reality, along with matter and energy. Yet the word information is often employed as a container term that represents many different modalities, ranging from information constituting a physical parameter to the daily transmission of the news in human culture. Information is particularly known from the description of nature at its micro-level and from computer science (bits and qubits), but is also essential in understanding the evolution of macrostructures in the Universe. The interactions of subatomic waves/particles subsequent to the Big Bang, guided by feedback loops and backward causation, created a dynamic network of quantum information, that finally enabled the formation of highly complex macromolecular structures and first life. Parallel innovations in biophysical complexity occurred, expressed in quantum states that can be brought in superposition, after an “intelligent” search and selection process in nature, aiming at a future path. Therefore, both the becoming and the future of the Universe can be viewed as an unfolding as well as a continuous measurement and creation of basic information. A collective memory of nature, or a universal consciousness, is considered as a prerequisite for the origin of life and the further evolution of intelligence. Current information theory describes information in two opposing ways: as an entropic element, in which the impact of information is inversely related to the probability that it will occur, versus the concept that information reflects the certainty of a message and is directly related to its probability and meaning. This dual aspect of information reflects the perspectives of sender and receiver in the transmission process. It is shown that basic information is largely hidden from us, due to observation-induced perturbation of this intrinsic information. Information may be transmitted in very different ways and at very different levels. In the living cell this may constitute chemical and electrical signals, but also specific spatial perturbations, for instance in the 3-dimensional structure of proteins. At the level of human communication, vibration patterns can be expressed in electromagnetic waves in the form of light, sound and music, as well as in images and stories (transmitted by radio, telephone, internet and TV, for example). Such information is transferred to the brain through specifically tailored sensory organs that accommodate complex patterns of wave activity, which are subsequently converted to neural activities in a cyclic workspace of the nervous system. The emergence of human information, knowledge and understanding, in itself, can be seen as a creative force in the physical universe, which can influence the generation of complexity in all domains. A new information paradigm has been proposed that represents a new integral science of information, on a physical and metaphysical basis: it seems easier to describe matter and energy in terms of information than vice versa.
Consequently, information can be used as a common language across scientific disciplines.

Contents:
1. Introduction: interrelation of Matter, Energy and Information
2. The fundamental character of information
3. Information according to the opposing theories of Wiener and Shannon
4. Information is partly hidden: we can only observe a part of reality
5. Information, entropy, neg-entropy and syntropy: order or chaos?
6. Why a science philosophy of information?
7. Information from a quantum perspective
8. The Universe created through unfolding of information
9. Information as self-organized complexity in the evolution of life
10. Generalization of Information and physics of gravity
11. Information represented in a universal knowledge field
12. Information transfer in the human cultural evolution
13. References

*Em. Professor of Pharmacology and Therapeutics, University of Groningen, The Netherlands. Mail: [email protected]

“A basic idea in communication theory is that information can be treated very much like a physical quantity such as mass or energy” – Claude Shannon
“Without matter, there is nothing; without energy matter is inert; and without information, matter and energy are disorganized, hence useless” – Anthony Oettinger
“I believe that consciousness is, essentially, the way information feels when being processed.” – Max Tegmark

1. Introduction: Interrelation of Matter, Energy and Information

Our world, as we sense and experience it, can be perceived as consisting of three building blocks: matter (in all its modalities), energy (in all its forms) and information (in all its variants), see Fig. 1. Information is particularly known from the description of nature at its micro-level of elementary particles and from computer science (bits and qubits), but is also essential in understanding the higher complexity of living organisms as well as macrostructures of the Universe such as planets and galaxies. For instance, the so-called "Big Bang" and the events that followed appear to constitute a fine-tuned expansion process, in the framework of a very specific set of interrelated physical laws and constants, as it has been unveiled by humans 13.5 billion years later (see for excellent reviews: Davies, 2007, Greene, 2004, Linde, 2004, Görnitz, 2012). In this sense the evolution of our universe can be seen as a dynamic flow of unfolding information and creation of new information.

Fig. 1: The fundamental triad of energy/matter/information

This essay is based on the thesis that information is as fundamental as matter and energy in the fabric of reality, in other words: information is physical. Information may even represent a modality of physics that preceded the manifestation of matter (Wheeler 1994, Zeilinger, 2003). But how are these three building blocks interrelated? Can information be reduced to energy and, vice versa, can energy and matter be defined as modalities of information? Matter and energy were once considered two separate and distinct elements, until Einstein proved they were inter-convertible, as indicated in the equation $E = mc^2$. One may wonder where information resides in this famous equation (see Fig. 2). Some see information as a modality of energy and, interestingly, the constant c in the equation, according to Einstein, was not only the maximal speed of light but at the same time was meant as the maximal speed of information transfer (see Seife, 2006).
Umpleby, 2004, published a paper titled Physical Relationships among Matter, Energy and Information, which attempts to connect the three concepts. Using Einstein’s established mass-energy equivalence formula, the relationship between the frequency of photon energy, which is observed in the photoelectric effect, displayed a maximum rate at which any system can compute, being 2×1047 bits/second/gram. Reversely, it has been recently experimentally shown that information can be converted to free energy, using non-equilibrium feedback manipulation of Brownian particles on the basis of information on its location. Such a particle can be driven to a higher energy level by the information gained by measurement of its location (Toyabe et al, 2010). Görnitz et al, 2012, derived an extension of the mass/energy equation in which quantum information in Qbits can be directly related to mass and energy. Quantum information is defined here as absolute information , in principle, being free of meaning, (so called protyposis), and it was proposed that matter is formed, condensed or can be designed from such abstract information. Information as being more fundamental than matter and energy was earlier proposed by John Wheeler, 1994 (it from a bit !), and by Anton Zeilinger, 2000/2003. The latter author demonstrated that sending of complete information on an elementary particle over a large distance (teleportation) results in the formation of that particular particle in material form and concluded that information is a primary element in nature Fig. 2: Relating energy to matter, but where is information ? Information and evolution The interactions of subatomic waves/particles subsequent to the so called Big Bang, created a dynamic network of quantum information, that finally also enabled the formation of highly complex macromolecular structures. The history of these particular wave/particle interactions, are supposed to be stored in an all pervading quantum field, as it was inherited from the initial information matrix (Zizzi, 2006). Each step in the unfolding evolution implied an inherent potential for change and, ultimately, also the ability to generate biological life. The creation of first life was facilitated by processes such as self-organization and autocatalysis, as well as synergistic and symbiotic processes (see Kauffman, 1993, Margoulus, 1998), providing an ever growing, novel, information framework. Further complexity and sophistication in biological evolution was partly realized by genetic mutation, and chromosomal reorganization, combined with the selection pressure of the environment. Most of the abovementioned evolutionary phenomena have in common that they are based on "copying of information”. Obvious time constraints in the creation of complex cellular components such as membranes, organelles and functional proteins including the renders it likely that in the evolution process nature employed quantum mechanical principles such as superposition and entanglement of wave information as well as backward causation in the process of creating higher complexity towards proto-conscious first life and self-supporting and replicating life forms, see section 8). In the ongoing process of higher complexification, the humanoid brain evolved, among others, leading to self-consciousness and social awareness (Fig. 3), as central elements in the cultural evolution in the past 5000 years. 
Our present world with its mass media and Internet is almost dominated by an overwhelming flow of information, that for some means inspiration and for others is rather threatening, since the control of information quality as well as the freedom of its distribution is of great concern. Therefore, we should always consider the following questions: where is the real information in the data, where is the hard knowledge in the information, but also: where is the very wisdom in the presented knowledge ! (see Fig. 3). The reader will notice that in the foregoing the term information is used in very different contexts (information in physical, biological and cultural settings), that requires a precise definition of what we mean by it, what forms it can take and even how it can be manipulated in our world of societal and scientific metaphors. Fig. 3: Evolution of the universe as a progressive unfolding of information from the micro - to macro level (left) and from the “Big Bang” to living organisms (right). 2. The fundamental character of information What Is Information? It arises through interaction ! Extending the notion of environment or the external world, the following notions of information were given by Gershenson (2010): Notion 1: Information is anything that an agent can sense, detect, observe, perceive, infer or anticipate. This notion is in accordance with Wiener (see later), where information is seen as a just-so arrangement, a defined structure, as opposed to randomness and it can be measured in bits. Notion 2: An agent is a description of an entity that acts on its environment. Note that agents and their environments are also information, as the can be perceived by other agents. An agent can be an electron, an atom, a molecule, a cell, a human, a computer program, a market, an institution, a society, a city, a country or a planet. Each of these can be described as acting on their environment, simply because they interact with it. Notion 3: The environment of an agent consists of all the information interacting with it. Notion 4: The ratio of living information of an agent is the amount of active information produced by itself over the amount of active information produced by its environment. Information is relative to the agent perceiving it. Information can exist in theory “out there”, independently of an agent. Yet for practical purposes, it can be only spoken about once an agent perceives / interacts with it. The meaning of the information will be given by the agent perceiving and evaluating the particular information, and, among others determines how the agent responds to it. Thus note that perceived information is different from the meaning that an agent gives to it. Consequently, meaning is an active product of the interaction between information and the agent perceiving it. Information is Physical In physics, physical information refers generally to the information that is contained in a physical system. Its usage in quantum mechanics (i.e. quantum information) is important, for example in the concept of quantum entanglement to describe effectively direct or causal relationships between apparently distinct or spatially separated particles. Fig. 4: The physical information of the Universe. Quantum theory states that the energy levels that electrons can occupy in an atom are “quantized”, so that the energy that is absorbed or emitted during transitions between these different energy levels can only take on (discrete) multiples of some minimal value. 
So energy is not absorbed or emitted on a continuous sliding scale, but can only change in discrete steps. Particles at the atomic level should therefore not be seen as refractory entities, but rather as elements that are able to exchange energy in an ongoing process of quantum communication, albeit in discrete steps. In this communication process, light waves (photons) are crucially important. On the scale of the micro-universe, the resulting network of elementary particles has the ability to store information through perturbation of modalities such as position, spin, charge and polarization, on the basis of an incredible number of possible combinations of these parameters. If information is constantly produced due to particle encounters, where exactly is this information localized? It seems implicitly enclosed in the particles themselves! According to the famous physicists Maxwell, Boltzmann and later on Gibbs, entropy is proportional to the number of bits of information registered by atoms in motion, in the form of polarization, spin and momentum. This seems to imply that individual elementary particles such as electrons and the related atom constructions are not entirely identical, but rather contain implicit information related to their individual history of physical encounters. Such a collection of information can perhaps be more easily envisioned by realizing that particles can be represented in wave form and may undergo superposition, meaning that they can integrate wave information. Nielsen and Chuang (2011) even concluded that atoms and elementary particles can be programmed to perform digital calculations. In line with that, Lloyd sees the universe as a physical system that, at its microscopic level, is programmed to perform digital computations (Fig. 4). Implicit in this concept is that, through entanglement and resonance, these wave/particles form an active information matrix that constantly unfolds and creates new information, collected in an all-pervading data/knowledge field (see section 8), that from the very beginning of our universe acted as a dynamic source of change and experience (the latter used here as a metaphor, as proposed by Whitehead as the most fundamental creative element of the Universe). In a very useful book in this regard with the title “Decoding the Universe”, Charles Seife (2006) explains: “What is it that gathers information about the atom and disseminates it into the surrounding environment: it is nature itself that is constantly making measurements on everything. The particles of light and air are nature's probes or measuring devices. By observing an object you are simply receiving the information that has already been deposited on those particles. Even if you would remove the earth's atmosphere and our sun, photons from distant stars would still be bombarding our planet. The universe is teeming with cosmic rays that are composed of photons that were born shortly after the big bang. Even without any of those photons, nature would be able to collect information since it creates its own particles at every point of space: on the smallest scales, particles are constantly winking in and out of existence in the quantum vacuum or zero-point energy field. They appear, gather information, disseminate it into the environment and disappear into nothingness from whence they came. These evanescent particles are the so-called vacuum fluctuations occurring throughout the universe, and make it impossible to shield an object completely from Nature's measurements”.
The latter vacuum domain is also called the zero-point energy field (ZPE) and is considered a likely candidate for collective memory of nature, by some interpreted as a sort of universal consciousness (see for instance Laszlo, 2011, Mitchell, Meijer 2012, and 2013, (see also section 9). According to quantum theory, such interactions between particles (for example, between photons and electrons) can be also described as wave interferences, producing novel vibration patterns in the form of superpositions. In even greater detail: matter, at its most basic level, may exists as a system of vibration patterns in a web of mutual relations, as more recently hypothesized in the String/M theories. The different faces of information Information itself may be loosely defined as "that which can distinguish one thing from another. It was earlier defined also as “the difference that makes the difference”. The information embodied by a thing can thus be said to be the identity of the particular thing itself, that is, all of its properties, all that makes it distinct from other (real or potential) things. It is a complete description of the thing, but in a sense that can be separated from any particular language. When clarifying the subject of information, care should be taken to distinguish between the following specific cases (taken from Wikipedia, 2011):  The phrase instance of information refers to the specific instantiation of information (identity, form, essence) that is associated with the being of a particular example of a thing. (This allows for the reference to separate instances of information that happen to share identical patterns.)  A holder of information is a variable or mutable instance that can have different forms at different times (or in different situations).  A piece of information is a particular fact about a thing's identity or properties, i.e., a portion of its instance.  A pattern of information (or form) is the pattern or content of an instance or piece of information. Many separate pieces of information may share the same form. We can say that those pieces are perfectly correlated or say that they are copies of each other, as in copies of a book.  An embodiment of information is the thing whose essence is a given instance of information.  A representation of information is an encoding of some pattern of information within some other pattern or instance.  An interpretation of information is a decoding of a pattern of information as being a representation of another specific pattern or fact.  A subject of information is the thing that is identified or described by a given instance or piece of information. (Most generally, a thing that is a subject of information could be either abstract or concrete; either mathematical or physical.)  An amount of information is a quantification of how large a given instance, piece, or pattern of information is, or how much of a given system's information content (its instance) has a given attribute, such as being known or unknown. Amounts of information are most naturally characterized in logarithmic units. The above usages are clearly all conceptually distinct from each other. However, many people insist on overloading the word "information" (by itself) to denote (or connote) several of these concepts simultaneously. The way the word information is commonly used can refer to both the "facts" in themselves and the transmission of the “facts”, as is treated in the following. 3. 
Information according to the opposing theories of Wiener and Shannon

Information according to Wiener

The double notions of information as both facts and communication are inherent in one of the foundations of information theory: cybernetics, introduced by Norbert Wiener (1948). The cybernetic theory was derived from the new findings in the 1930s and 1940s regarding the role of bioelectric signals in biological systems, including the human being. The full title was: Cybernetics or Control and Communication in the Animal and the Machine. Cybernetics was thus attached to biology from the beginning.

Norbert Wiener

Wiener introduced the concepts of amount of information, entropy, feedback and background noise as essential characteristics of how the human brain functions.

Fig. 5: The concepts of entropy and neg-entropy in relation to information. S = entropy; k = Boltzmann constant; W = number of microstates (indicating disorder); I = information content or impact; p = probability that information arises

From Wiener (1948): “The notion of the amount of information attaches itself very naturally to a classical notion in statistical mechanics: that of entropy. Just as the amount of information in a system is a measure of its degree of organization, so the entropy of a system is a measure of its degree of disorganization” (Fig. 5). Wiener coined the label of a whole new science: We have decided to call the entire field of control and communication theory, whether in machine or animal, by the name Cybernetics, which we form from the Greek word for steersman. He also declared his philosophical heritage: If I were to choose a patron for cybernetics... I should have to choose Leibnitz. What is information and how is it measured? Wiener defines it as a probability: One of the simplest, most unitary forms of information is the recording of a choice between two equally probable simple alternatives, one or the other of which is bound to happen - a choice, for example, between heads and tails in the tossing of a coin. We shall call a single choice of this sort a decision. If we then ask for the amount of information in the perfectly precise measurement of a quantity known to lie between A and B, which may with uniform a priori probability lie anywhere in this range, we shall see that if we put A = 0 and B = 1, and represent the quantity in the binary scale (0 or 1), then the number of choices made and the consequent amount of information is infinite. Wiener described the amount of information mathematically as an integral, i.e. an area of probability measurements:

(1) I = log p, in which p = probability.

Wiener says the formula means: The quantity that we here define as amount of information is the negative of the quantity usually defined as entropy in similar situations. Wiener's view of information is thus explicitly that it contains a structure that has a meaning. It was also called formative information (see Gregersen, in Davies and Gregersen, 2010). It will be seen that, according to Wiener, the processes which lose information are, as we should expect, closely analogous to the processes which gain entropy (disorder). With Wiener, the concept of information is, from its very conception, attached to issues of decisions, communication and control. System theorists build further on this concept and see information as something that is used by a mechanism or organism (a system which is seen as a "black box") for steering the system towards a predefined goal.
The goal is compared with the actual performance and signals are sent back to the sender if the performance deviates from the norm. This concept of negative feedback has proven to be a powerful tool in most control mechanisms, relays etc. Interestingly, the quantum physicist Schrödinger (1959), in his influential book “What is life”, earlier coined this type of information as neg-entropy.

Information according to Shannon

Claude Shannon

The other scientist connected with information theory is Claude Shannon. He was a contemporary of Wiener and, as an AT&T mathematician, he was primarily interested in the limitations of a channel in transferring signals and the cost of information transfer via a telephone line. He developed a mathematical theory for such communication in The Mathematical Theory of Communication (Shannon & Weaver 1959). Shannon defines information as a purely quantitative measure of communicative exchanges (see Fig. 6). Weaver (in Shannon & Weaver 1959) links Shannon's mathematical theory to the second law of thermodynamics and states that it is the entropy of the underlying stochastic process in the information source that determines the rate of information generation: The quantity which uniquely meets the natural requirements that one sets up for "information" turns out to be exactly that which is known in thermodynamics as entropy. This concept of information was later substantiated by Bekenstein, 2003 and Hawking, 2010. Shannon defined the amount of information as the negative of the logarithm of a sum of probabilities: the impact of information is inversely proportional to the probability that the information arises.

Equation (2): I = log 1/p, in which I = information content and p = probability.

Note that 1/p in fact stands for uncertainty or disorder: the larger the probability, the smaller the uncertainty and hence the smaller the amount of information. It resembles the well-known Boltzmann equation for entropy, S = k log W, in which S = amount of entropy (disorder), k = the Boltzmann constant and W = the number of microstates (a measure of disorder). The formula (2) for I is in fact the opposite of Wiener's equation (1). It is there because the amount of information according to Wiener is equal to neg-entropy (order) and that of Shannon to the amount of entropy (disorder).

Fig. 6: Scheme from the famous article of Shannon

For an information theorist based on Shannon it does not matter whether we are communicating a fact, a judgement or just nonsense. Everything we transmit over a telephone line is "information". The message "I feel fine" is information, but "ff eeI efni" is an equal amount of information. Shannon is said to have been unhappy with the word "information" in his theory. He was advised to use the word "entropy" instead, but entropy was a concept too difficult to communicate so he remained with the word. Since his theory concerns only transmission of signals, Langefors (1968) suggested that a better term for Shannon's information theory would therefore perhaps be "signal transmission theory". But Shannon's "information" is not even a signal: If one is confronted with a very elementary situation where he has to choose one of two alternative messages, then it is arbitrarily said that the information, associated with this situation, is unity. Note that it is misleading (although often convenient) to say that one or the other message conveys unit information.
The concept of information applies not to the individual messages (as the concept of meaning would), but rather to the situation as a whole, the unit information indicating that in this situation one has a freedom of choice, in selecting a message, which it is convenient to regard as a standard or unit amount. The contradictions in the current information theories further explained Weaver, explaining Shannon`s theory in the same book: Information is a measure of ones freedom of choice in selecting a message. The greater this freedom of choice, the greater the information, the greater is the uncertainty that the message actually selected, is some particular one. Greater freedom of choice, greater uncertainty greater information go hand in hand. Thus there is one large, and often confusing, difference between Shannon and Wiener. Whereas Wiener sees information as negative entropy, i.e. a "structured piece of the world", Shannon`s information is the same as (positive) entropy. As pointed out above, this makes Shannon’s "information" the opposite of Wiener`s definition of "information". How can something be interpreted as both positive entropy and negative entropy at the same time? The confusion is unfortunately fuelled by other authors. The systems theorist James G. Miller, 1978, writes in Living Systems: It was noted by Wiener and by Shannon that the statistical measure for the negative of entropy is the same as that for information….. Yet, as pointed out by Seife (2006): “Information is flowing from the sender to the recipient of a message and each has a different role in the transaction. It is really ”good” for a source of message to have high entropy since it means that the source is unpredictable and it is uncertain what the message is going to say ahead of time. If you already knew it would not give you any new information! But once the message is received it is essential to reduce the uncertainty and derive meaning. Sometimes you will hear people say that information is negative entropy. This arises because people are accustomed to analyze different things. Some are looking at the sender and the unpredictability of a potential message and others are looking at the receiver and the uncertainties about the answer to the question. In truth, both are looking at the same thing: sender and receiver are just two sides of the coin”. In conclusion, Shannon’s information deals only with the technical aspect of the transmission of information and not with its meaning, i.e. it neglects the semantic aspect of communication. The amount of information required to describe a process, system, object or agent determines its complexity. According to our current knowledge, during the evolution of our universe there has been a shift from simple information towards more complex information (the information of an atom is less complex than that of a molecule, than that of a cell, than that of a multi-cellular organism, etc.).Interestingly, this “arrow of complexity in evolution can guide us to explore general laws of nature. Of note, since information is relative to the agents perceiving it, information will potentially be transformed as different agents perceive it. Another way of stating this law is the following: information will potentially be transformed by interacting with other information. Information may propagate as fast as possible. However, only some information manages to propagate at all. 
In other words, we can assume that different information has a different “ability” to propagate, also depending on its environment. The “fitter” information, i.e. that which manages to persist and propagate faster and more effectively, will prevail over other information. Note the similarity with the meme concept (a meme is an infectious piece of information, see Heylighen, 2011, Meijer, 2007). In relation to information, there is no agreed notion of life, which reflects the difficulty of defining this concept. Gregersen, 2010 explains: ”Many researchers have put forward properties that characterize important aspects of life. Autopoiesis is perhaps the most salient one, which notes that living systems are self-producing. Yet, it has been argued that autopoiesis is a necessary but not sufficient property for life. The relevance of autonomy and individuality for life have also been highlighted . These approaches are not unproblematic, since no living system is completely autonomous. This follows from the fact that all living systems are open. For example, we have some degree of autonomy, but we are still dependent on food, water, oxygen, sunlight, bacteria living in our gut, etc. This does not mean that we should abandon the notion of autonomy in life. However, we need to abandon the sharp distinction between life and non-life, as different degrees of autonomy escalate gradually, from the systems we considered as non-living to the ones we consider as living. In other words, life has to be a fuzzy concept. Under the present framework, living and non-living systems are both information. Rather than a yes/no definition, we can speak about a “life ratio” 4. Information is partly hidden: we can only observe a limited part of reality The verb “to inform”, as employed in the common daily language, is originally related to the expression “to model according to a form”. As mentioned earlier, “to inform” derives from the Latin term “ in-formare”, that indeed means “to give a form”. Aristotle wrote: "Information" (translated in current terminology) is a truly more primitive fundamental activity than energy and matter. Thus he seemed to imply that information does not have an immediate meaning, such as the world “knowledge”, but rather it encompasses a modality that precedes every physical form (Meijer, 2012). Once there is a form, the potential information can become expressed through one of its possible manifestations. The totality of all forms can then be regarded as (the) space, and can be viewed upon as a “know-dimension”. A form is intrinsically capable of movement (and hence of re- and de-formation as well as recombination). Series of such events may have created first manifestations of life and subsequently a spectrum of different life forms. The ability of a life form to control its own abilities can be defined as (proto) consciousness. This “awareness” of the surroundings enabled life forms to probe the immediate environment and also to experience time (according to sequence and relative motion of forms). Such data were crucial in the framework of maintenance, security and survival (see also section 9). The interpretation of a shapes in the environment, or forms of sensed energy, can be envisioned as individual information that provided primitive entities with such (proto)consciousness. 
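To make the quantitative side of the Wiener and Shannon discussion above concrete, here is a minimal numeric sketch (an editorial addition, not part of the original essay). It evaluates Shannon's measure from equation (2), I = log2(1/p), in bits: the more improbable a message, the larger its information content, and a choice between two equally probable alternatives carries exactly one bit, as in Wiener's coin-toss example.

import numpy as np

# Information content of a single message with probability p (equation (2), in bits)
for p in [0.5, 0.25, 0.01]:
    print("p = %.2f  ->  I = log2(1/p) = %.2f bits" % (p, np.log2(1.0/p)))

# The average information of a source is the Shannon entropy H = sum_i p_i*log2(1/p_i);
# for a fair coin (two equally probable alternatives) this is exactly 1 bit.
probs = np.array([0.5, 0.5])
print("Entropy of a fair coin: %.1f bit" % np.sum(probs*np.log2(1.0/probs)))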
Perhaps the most important of all this is that consciousness, in more sophisticated forms, colored perceptions and directed manifestation of organisms by actively generating and selecting meaningful representations of the outer world. This in turn created self-awareness of the own life form, in relation to both the external and bodily environment. Information concepts have been examined, apart from the earlier mentioned Wiener (1948) and Shannon (1959), also by von Neumann (1963) in well known contributions and, more recently, by Frieden (2004). This generated useful theories to physics, to computation and to communications technologies. Information is hypothesized to be comprised of dual aspects, similar to the dual aspects of light: wave and particle. Wheeler (1990) stated that information is truly fundamental and exhibits two basic aspects: physical and phenomenal. Both aspects seem essential in the further understanding of consciousness. According to Frieden, (2004): “In information theory, a clear difference should be made between intrinsic (bound) information [B I] and observed information [O I]. Intrinsic information is defined as the most complete information that describes an object known as Fisher information. In the process of observation.. for instance by a human being, an incomplete picture is obtained due to the inherent limitation of the observer (Fig. 6, for example, remember the uncertainty principle in quantum physics). Fig. 7: The veiled reality pictured as a firewall In the process of observation, photons play a crucial role: they probe (illuminate) the object (the source of information) and act as a communication carrier in the chosen communication channel or information flowing route (for instance a telescope or a microscope, see Fig. 7). Observation of such information can subsequently lead to mathematical description and finally to the formulation of laws of nature. Important in this aspect is the role of the particular probe particle (for instance a photon). In the process of probing the object, the probe particle interacts with the object and perturbs its intrinsic information. Nature therefore seems to play a “knowledge acquisition game” in which it adds a certain level of random noise to the data (this was called “the information demon”). According to Frieden: [B I] minus [O I] varies between infinite and zero, depending of the quality of the information (transmission) channel between object and observer as well as on that of the “measurement” with regard to sensory detection, perception and interpretation by the observer. This difference also indicates the ability to communicate the perceived observation, in a consistent form, to the external world (for example to the scientific community). Measurements are in principle imperfect” ([B I] – [O I] > 0). This difference can also be seen as a measure of complexity or, from the standpoint of the observer, as the relative inability to know perfectly. Thermodynamically bound information is a measure of disorder: [B I] has the intrinsic tendency to decrease (called entropy) and to spread across larger space (dissipation). 
[B I] is also a measure of value in the sense that it can be expressed in bits or in qubits (see Frieden, 2004). As treated above, if the information is fully observed and transmitted, it may be compared with the result of teleporting a particle: by sending complete information on the particular particle over a long distance, a real particle (in material form) is created at the given distance (Zeilinger, 1999 and 2000). This shows the fundamental property of information: it precedes matter or, in other words, information [B I] produces matter. This concept of intrinsic information [B I] has earlier been called "Fisher information" (see the review of Frieden, 2004). [B I] may also be used to envision the phenomenon of entanglement of paired particles with opposite quantum mechanical spin over large distances: a measurement of the spin of one of the single particles immediately influences the spin of the other paired particle, irrespective of the distance between them (Bell, 1966). This is due to the fact that they share some form of intrinsic information that for observers represents a hidden variable, instead of being due to classical signal transduction between the particles. Thus, the observed particle contains what Bohm calls "active information" about the unobserved particle (Bohm, 1980, Bohm and Hiley, 1987). In other words, humans, with their evolutionarily developed brain, can only see "reality" through a "firewall" that only permits a sort of selective filtration of information, see Jahn and Dunne (1997 and 2004). Therefore it is of utmost importance to identify the nature of these inborn filters, to develop mental abilities to inactivate them at least to some extent, and/or to create technology to potentially circumvent them. This may eventually be a feasible task: Spinoza (1677/1995) claimed that intelligence will ultimately learn to fully comprehend reality! Although the truth at the micro-level may be directly hidden for us, it can, in principle, be inferred despite this "information demon": we may in various ways penetrate into this intrinsic information level. In this sense the universe will cooperate with intelligence. The goal in this cooperation is survival and, consequently, to reverse the destructive effect of the second law of thermodynamics. Science is concerned with the ability to make predictions that can be tested empirically. For example, observing an interaction of particles by studying interference patterns in quantum systems yields relevant information. Particle interactions can be seen as a form of information propagation and, in fact, each particle is basically a bundle of information fully describing its current state; in other words, the wave function of a particle contains various modalities of information about the particle. The spatial part of the wave function contains information about the probability of localizing the particle in a given spot. The spin part has information about the probability of identifying it pointing one way or another; clearly, the property of spin should be seen as a category of information rather than one of energy. This is also true for potential entanglement, which provides information about a paired particle, irrespective of the distance. Quantum information, however, is different from classical information since it cannot be established without its state becoming the measured value. Such states measured as a qubit are known as basis states or basis vectors."
5. Information as related to Neg-entropy and Syntropy: Order or Chaos?

Neg-entropy

Ordered systems, such as our universe in the beginning, are supposed to evolve into less ordered systems (disorder, or entropy, increases according to the principles of thermodynamics). A greater disorder implies that more information is needed to describe the system. An increase in entropy, consequently, means an implicit increase in information. Yet, in our part of the universe, contrary to the second law of thermodynamics, a decrease in entropy is also seen. This produces an increase of ordered complexity such as life forms, as was already described by Schrödinger, 1959. As mentioned earlier, this so-called neg-entropy is associated with a virtual reduction of information of category B, and partly of type C, since, in a systematic manner, information is compressed (formative information). The compression of information involved in formulating the laws of nature can be seen as an example of such a neg-entropic process. Presently, more and more of such information is generated and shared in the non-local information store that we call the internet. In this global process, interestingly, information density is increasing, despite the much larger area over which it is distributed.

Syntropy

The famous mathematician Fantappié already in 1944 formulated the Unitary Theory of the Physical and Biological World, and started from the consideration that half of the solutions of the fundamental equations of the universe had been rejected by physicists. Vannini and Di Corpo, 2011 explained it starting from the Klein-Gordon energy/momentum/mass equation of special relativity: E² = m²c⁴ + p²c². In this equation E is energy, m mass, c the speed of light and p the momentum. This equation is quadratic and has two solutions, one positive (+E) and one negative (-E). Physicists had always rejected the negative solution since the variable p involves time, and in the negative solution time flows backward, from the future to the past. Einstein proposed to put p = 0, since the speed of bodies, compared to the speed of light, is very low and can be neglected. In this way the energy/momentum/mass equation simplifies into the famous E = mc². Fantappié however believed that mathematics has a principle of reality, and that we cannot take into consideration only the part of the formulas that suits us. Fantappié decided to study the properties of both solutions, the positive and the negative one, and he found that the first solution describes energy that diverges from a point, a source, as for example the light from a light bulb, whereas the negative solution describes energy that diverges from a point backwards in time, i.e. that converges towards a point when seen forwards in time. Consequently, in our observable world we appear to move forward in time, but, in connection with a domain hidden for us, we experience the negative solution as converging forces. Fantappié named this tendency syntropy (from Greek syn = converging, tropos = tendency), in order to distinguish it from the law of entropy, which is derived from the abovementioned positive solution.
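For reference, the two energy roots discussed above follow from a standard rearrangement of the relativistic energy/momentum/mass relation (ordinary algebra, not specific to Fantappié's own derivation):

```latex
E^{2} = m^{2}c^{4} + p^{2}c^{2}
\quad\Longrightarrow\quad
E = \pm\sqrt{m^{2}c^{4} + p^{2}c^{2}}
% for a body at rest (p = 0) this reduces to E = +mc^2 and E = -mc^2,
% the "entropic" and "syntropic" solutions in the reading described above
```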
Starting from this past-future duality, another mathematician, the New Zealander Chris King, has developed a model of consciousness in which free will would arise from our being immersed in a dual stream of information travelling in opposite directions of time: on the one hand information from the past in the form of memories and experiences, on the other hand information from the future in the form of emotions (King, 2011). The syntropy model strikingly coincides with the ideas of Teilhard de Chardin: life, rather than being caused, would be guided by attractors which already exist in the future. His theory assumes that, in the future, there is a human-attractor towards which we are converging and evolving. This is in contrast to a random biological evolution as dictated by Darwin's classical theory. The formation of new complex structures would thus be driven by attractors that guide macro-evolutionary processes towards more advanced complex structures, attractors that may retroact from the future. In 1928 Paul Dirac tried to get rid of the unwanted negative solution by applying the energy/momentum/mass equation to the study of electrons, turning them into relativistic objects. But, also in this case, the dual solution emerged in the form of electrons (e-) and antiparticles (e+). The antiparticle of the electron, initially named negelectron, was experimentally observed in 1932 by Carl Anderson in cosmic rays and named positron. Anderson became the first person to prove empirically the existence of the negative energy solution. Consequently, the negative solution was no longer an impossible mathematical absurdity, but an empirically shown phenomenon. Dirac's equation predicts a universe made of matter which moves forwards in time and antimatter which moves backwards in time. According to Wheeler's and Feynman's electrodynamics, emitters coincide with retarded fields, which propagate into the future, while absorbers coincide with advanced fields, which propagate backward in time. This time-symmetric model leads to predictions identical with those of conventional electrodynamics. For this reason it is impossible to distinguish between time-symmetric results and conventional results (Wheeler & Feynman, 1949). In the 1970s Szent-Gyorgyi (Nobel prize, 1937) concluded that in living systems there was wide evidence of the existence of the law of syntropy, even though he never managed to infer it from the laws of physics. While entropy seems a universal law which leads towards the disintegration of all types of organization, syntropy reflects an opposite law that attracts living systems towards a more harmonic organization (Szent-Gyorgyi, 1977). Ilya Prigogine, winner in 1977 of the Nobel prize for chemistry, introduced in his book "The New Alliance" a new type of thermodynamics, the "thermodynamics of dissipative systems", typical of living systems. Prigogine stated that this new type of thermodynamics cannot be reduced to the rules of classical thermodynamics (Prigogine, 1979). Recently, Henry-Couannier, 2012, in a paper on "Negative Energies and Time Reversal in Quantum Field Theory", reviewed the theoretical and phenomenological status of negative energies in Quantum Field Theory, concluding that their rehabilitation might hopefully be completed in a modified general relativistic model.
Thus, the concept of syntropy, at first sight, seems related to neg-entropy (the absence of entropy), but as treated above it is rather based on a reversed flow of information from the future, by which a converging information process is obtained that opposes the information-diverging processes of entropy: increased chaos is compensated for by an ordered, life-conferring information flow (Vannini and Di Corpo, 2011).

How to express physical information

What is the basic entity to describe information? Entropy, if considered as information, is measured in bits. The total quantity of bits is related to the total degrees of freedom of matter/energy. For a given energy in a given volume, there is an upper limit to the density of information (the so-called Bekenstein bound), suggesting that matter itself cannot be subdivided infinitely many times and that there must be an ultimate level of fundamental particles. Bekenstein, 2011, in his overview "A Tale of Two Entropies", highlighted a connection between the world of information theory and classical physics. As mentioned above, this connection was first described by the earlier mentioned Shannon (1959), who introduced a measure of information content, known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has obtained a central position; for example, the design of modern communication instruments and data storage devices is based on it. As mentioned above, Shannon entropy deals with an intuitive link between measures of uncertainty and information: the greater our uncertainty about the outcome of an experiment, the more one may gain from actually performing it. In fact, Shannon information represents a parameter indicating the expected information gain, even before we perform an experiment, and also an average gain following multiple repetitions. In this concept, the higher the deviation from uniform probabilities, the more information is available. The central idea in this context is that information is a physical entity (it is encoded into configurations of energy and matter). Consequently physics, in fact, consists of information, for instance by statistically indicating the amount of information imparted by prior conditions ("prior knowledge") at a given measurement. Modern physics now considers the bit (binary digit), the binary choice, as the ultimate fundamental entity. John Wheeler (1988) expressed this idea as "it from bit", implying that the basis of the physical universe, the "it" of an atom or subatomic particle, is not matter, nor energy, but a bit of information. Consequently, the entire universe should be seen as a cosmic processor of information. If elementary particles interact, they are exchanging bits or, in other words, they transmit quantum states. The universe can thereby also compute its own destiny. For instance, Lloyd (2006) postulated that there are content-holding structures in the universe that possess "content" of whether they are "here or there". At the same time, there are other cosmic structures that can read that content and may identify it to be non-random. They then use this information to recognize patterns and quantify how much information is in a particular channel. It is important to note here that information does not exist by itself, because it depends on an intrinsic system that is able to decode the "message" and can register the "sender" and "receiver".
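As a minimal numerical sketch of these two notions (the Shannon entropy of a probability distribution, and the Bekenstein bound in its commonly quoted form I ≤ 2πRE/(ħc ln 2) bits; the example values are purely illustrative):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))     # fair coin: 1.0 bit of uncertainty / expected gain
print(shannon_entropy([0.99, 0.01]))   # nearly certain outcome: ~0.08 bits
print(shannon_entropy([0.25] * 4))     # uniform over 4 outcomes: 2.0 bits (maximal uncertainty)

def bekenstein_bound_bits(radius_m, energy_j):
    """Upper limit on the information content of a region of radius R containing energy E:
    I <= 2*pi*R*E / (hbar * c * ln 2)."""
    hbar, c = 1.054571817e-34, 2.99792458e8
    return 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

# Illustrative only: bound for ~1 kg of mass-energy confined within a 0.1 m radius.
print(f"{bekenstein_bound_bits(0.1, 1.0 * (2.99792458e8) ** 2):.2e} bits")
```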
Fig. 8: Schematic representation of a black hole (upper left) and the holographic universe (upper right), in which the integral information, expressed in bits, is holographically projected on a virtual screen, also enabling an entropic description of gravity, as opposed to the classical deformation of space/time (lower right)

Shannon's efforts to find a way to quantify the information contained in transmitted messages led, as mentioned above, to a formula with the same form as that of Boltzmann. In his article "Information in the Holographic Universe" (Fig. 8, see also section …), Bekenstein, 2003, concluded that: "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement of matter and energy." At first sight there seems to be a clear difference between the thermodynamic entropy and Shannon's entropy of information: the former is expressed in units of energy divided by temperature, the latter is essentially expressed in dimensionless "bits" of information, but this apparent difference is entirely a matter of convention. Conclusion: the expanding Universe can, in this view, be considered as the outcome of an entropic force which in its turn gives rise to the accumulation of information that provided biological evolution with a life-conferring potential. Closely related to this is an intrinsic property of this system: the universe, in spite of the ongoing entropic processes, is at the same time increasing order in relation to the creation and further development of intelligence. This aspect is not only inevitably connected to its ultimate destination on the cosmic scale (see Barrow and Tipler, 1986), but it is also fundamental for the organization of life on the micro-level. As stated earlier, this phenomenon was called neg-entropy, and can be viewed as the compression of active information, such as the formulation of the laws of nature or the coding of information for the proteome in DNA/RNA.

6. Why a Science Philosophy of Information?

The philosophy of information (PI) is the area of research that studies conceptual issues arising at the intersection of computer science, information science, information technology, and philosophy. According to Floridi (2011b) this discipline includes: 1. the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilization and sciences; 2. the elaboration and application of information-theoretic and computational methodologies to philosophical problems. The philosophy of information (PI) has evolved from the philosophy of Artificial Intelligence, the logic of information, cybernetics, social theory, ethics and the study of language and information. Just what is information? According to Beavers (2012): The term is undoubtedly vague and still an important part of the modern linguistic landscape. We live in the "information age," we read "information" in the papers, we can gather "information" on, say, the salt gradients of the currents in the Pacific Ocean, and we can talk about the amount of "information" that can be delivered over a wireless connection. Yet, as several philosophers have pointed out, we can scarcely say precisely what the term means.
Given that it is also used differently across different fields of study (biology, communications, computer science, economics, mathematics, etc.), it is a hallmark of the philosophy of information to undertake this clarifying task, if the term "information" is to be informative at all. The expression "philosophy of information" was coined in the 1990s by the abovementioned Luciano Floridi, who elaborated a unified and coherent conceptual frame for the whole field. Floridi (2010a) identified five different kinds of information: mathematical, semantic, physical, biological and economic, but this list is obviously not definitive. According to Floridi, four kinds of mutually compatible phenomena are commonly referred to as "information":

• Information about something (e.g. a train timetable)
• Information as something (e.g. DNA, or fingerprints)
• Information for something (e.g. algorithms or instructions)
• Information in something (e.g. a pattern or a constraint).

The author stipulated: "the word "information" is commonly used so metaphorically or so abstractly that the meaning is quite unclear. Information is in fact a polymorphic phenomenon and a poly-semantic concept, and it can be associated with several explanations, depending on the level of abstraction adopted and the cluster of requirements and desiderata orienting a theory. The abovementioned Claude E. Shannon, for instance, was very cautious: "The word 'information' has been given different meanings by various writers in the general field of information theory. It is likely that at least a number of these will prove sufficiently useful in certain applications to deserve further study and permanent recognition. It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field. (italics added)" (Shannon, 1993). Thus, following Shannon, Weaver, 1949, supported a tripartite analysis of information in terms of (1) technical problems concerning the quantification of information, dealt with by Shannon's theory; (2) semantic problems relating to meaning and truth; and (3) what he called "influential" problems concerning the impact and effectiveness of information on human behavior, which he thought had to play an equally important role. And these are only two early examples of the problems raised by any analysis of information." Floridi also mentioned eighteen problems in information science that are in need of solution, thereby setting the agenda for future development in this research area while connecting it to previous work. The questions are discussed in Floridi, 2011a and are worth citing here: 1. What is information? 2. What are the dynamics of information? 3. Is a grand unified theory of information possible? 4. How can data acquire their meaning? 5. How can meaningful data acquire their truth value? 6. Can information explain truth? 7. Can information explain meaning? 8. Can (forms of) cognition be fully and satisfactorily analyzed in terms of (forms of) information processing at some level of abstraction? 9. Can (forms of) natural intelligence be fully and satisfactorily analyzed in terms of (forms of) information processing at some level of abstraction? 10. Can (forms of) natural intelligence be fully and satisfactorily implemented non-biologically? 11. Can an informational approach solve the mind-body problem? 12. How can information be assessed?
If information cannot be transcended but can only be checked against further information … what does this tell us about our knowledge of the world? 13. Could epistemology be based on a theory of information? 14. Is science reducible to information modelling? 15. What is the ontological status of information? 16. Can information be naturalized? 17. Can nature be informationalized? 18. Does computer ethics have a philosophical foundation?

Information-theoretic and computational methods, concepts, tools and techniques have already been developed and applied in many philosophical areas:
• to extend our understanding of the cognitive and linguistic abilities of humans and animals and the possibility of artificial forms of intelligence (e.g. in the philosophy of AI; in information-theoretic semantics; in information-theoretic epistemology and in dynamic semantics);
• to analyze inferential and computational processes (e.g. in the philosophy of computing; in the philosophy of computer science; in information-flow logic; in situation logic; in dynamic logic and in various modal logics);
• to explain the organizational principles of life and agency (e.g. in the philosophy of artificial life; in cybernetics and in the philosophy of automata; in decision and game theory);
• to devise new approaches to modeling physical and conceptual systems (e.g. in formal ontology; in the theory of information systems; in the philosophy of virtual reality);
• to formulate the methodology of scientific knowledge (e.g. in model-based philosophy of science; in computational methodologies in philosophy of science);
• to investigate ethical problems (in computer and information ethics and in artificial ethics), aesthetic issues (in digital multimedia/hypermedia theory, in hypertext theory and in literary criticism) as well as psychological, anthropological and social phenomena characterizing the information society and human behavior in digital environments (cyber-philosophy).

Fig. 9: A representation of a multi-layered organization structure of the human brain, encompassing top-down and bottom-up information transfer on the basis of an evolved personal universe (world view), in recurrent interaction with the environment and universal mind.

Modeling of physical and conceptual systems in relation to organizational principles of life was recently performed for cognitive brain function on the basis of an iso-energetic and bi-cyclic information model of mind-brain relationships. Figure 9 shows the levels of organization of the brain, largely based on the concepts of Searle and Kauffman. The lowest levels of complexity range from elementary particles (photons, quanta) via atoms to protein molecules, whereas the higher levels are composed of individual neurons, neuronal networks, and the individual or personal brain. According to QM mind theories the (individual) mind is directly connected to the whole universe, via quantum fields interacting with the "personal universe". This spatial-temporal organization is created through bottom-up ontological processes (bottom-up causation) and is also subject to top-down causation (Bohm, 1987), through interaction with a supposed general knowledge field (Meijer and Korf, 2013).

Current Paradoxes of Information

Beavers (2012) illustrated various kinds of philosophical problems that the philosophy of information confronts by examining three paradoxes that have received much attention in the literature.
I cite: "The inverse relationship principle, that the informativeness of a piece of information increases as its probability decreases, may seem intuitive at first glance, but as it stands it leads to three problems with counter-intuitive outcomes (see the short numerical sketch after this passage). The first was framed by Hintikka (1970), who named it the "scandal of deduction." The second was identified by Bar-Hillel and Carnap (1952) and is accordingly called the Bar-Hillel-Carnap Paradox. The third involves Wiener's (1950) conflation of meaning with information and appears in Dretske (1981). Consider again the inverse relationship principle. The probability that a given (correct) conclusion or answer will follow from a logic or math problem defined in a formal language is 100 percent. It is therefore, according to the inverse relationship principle, maximally uninformative. Yet, as Hintikka notes, "in what other sense, then, does deductive reasoning give us new information? Isn't it perfectly obvious there is some such sense, for what point would there otherwise be to logic and mathematics?" The Bar-Hillel-Carnap Paradox notes that, since the less probable a piece of information is the more informative it is, and since contradictions are maximally improbable, contradictions come out as the most informative, leading to another counter-intuitive conclusion. Appealing to Norbert Wiener's equation of "amounts of meaning" with "amounts of information" (Wiener, 1950), Dretske noted a similar issue that challenges the inverse relationship principle. Any adequate theory of semantic information must somehow account for these paradoxes. Dretske does so by sharply distinguishing between meaning and information, which offers some help with the last paradox. Floridi (2011a) suggested that the absence of truth as a criterion for informativeness in standard theories of semantic information lies at the root of the problem. He suggests "a theory of strongly semantic information," which provides the definition of semantic information as "well-formed, meaningful, and truthful data," mentioned above. This seems to deal adequately with the first and second paradox, since taking truth into account means that "semantic information about a situation presents an actual possibility that is inconsistent with at least one but not all other possibilities". This view renders a contradiction impossible where truth is concerned and a tautology vacuous because it eliminates any possibility of falsehood. Thus, both are uninformative (see for references Beavers, 2012)."

Preference for using information as a fundamental parameter

Another benefit of using information as a basic descriptor for our world is that the concept is well studied, formal methods have already been developed, and its philosophical implications have been discussed. Thus, there is no need to develop a new formalism, since information theory is well established. One can borrow this formalism and interpret it in a new way. Finally, information can be used to describe other formalisms: not only particles and waves, but also systems, networks, agents, automata, and computers can be seen as information. In other words, it can contain other descriptions of the world, potentially exploiting their own formalisms. Information, therefore, is an inclusive formalism. This is not to suggest that describing the world as information is more suitable than physics to describe physical phenomena, or better than chemistry to describe chemical phenomena.
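As flagged above, a minimal numerical sketch of the inverse relationship principle in terms of self-information ("surprisal", I(x) = -log2 p(x)); the probability values are purely illustrative:

```python
import math

def surprisal_bits(p):
    """Self-information of an outcome with probability p: I = -log2(p), in bits."""
    if p == 0:
        return float("inf")   # a contradiction (probability 0) comes out as "maximally informative"
    return -math.log2(p)

print(surprisal_bits(0.5))   # 1 bit: an ordinary coin-flip outcome
print(surprisal_bits(0.01))  # ~6.6 bits: a rare, highly "informative" outcome
print(surprisal_bits(1.0))   # 0 bits: a certain, deductive conclusion (the "scandal of deduction")
print(surprisal_bits(0.0))   # inf: the Bar-Hillel-Carnap paradox in numerical form
```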
It would be redundant to describe particles as information if we are studying only particles. Rather, the suggested approach is meant only for the cases when the physical approach is not sufficient, i.e. across scales, constituting an alternative worth exploring to describe evolution.

7. Information from a quantum physics perspective

Classical versus Quantum information

According to Wikipedia, 2012, "The instance of information that is contained in a physical system is generally considered to specify that system's "true" state. In many practical situations, a system's true state may be largely unknown, but a realist would insist that a physical system regardless always has, in principle, a true state of some sort—whether classical or quantum. When discussing the information that is contained in physical systems according to modern quantum physics, we must distinguish between classical information and quantum information. Quantum information specifies the complete quantum state vector (or equivalently, wave function) of a system, whereas classical information, roughly speaking, only picks out a definite (pure) quantum state if we are already given a pre-specified set of distinguishable (orthogonal) quantum states to choose from; such a set forms a basis for the vector space of all the possible pure quantum states (see pure state). Quantum information could thus be expressed by providing (1) a choice of a basis such that the actual quantum state (see Fig. 10) is equal to one of the basis vectors, together with (2) the classical information specifying which of these basis vectors is the actual one. However, the quantum information by itself does not include a specification of the basis; indeed, an uncountable number of different bases will include any given state vector. Note that the amount of classical information in a quantum system gives the maximum amount of information that can actually be measured and extracted from that quantum system for use by external classical (decoherent) systems, since only basis states are operationally distinguishable from each other. The impossibility of differentiating between non-orthogonal states is a fundamental principle of quantum mechanics, equivalent to Heisenberg's uncertainty principle.

Fig. 10: The choice aspect of consciousness as the resultant of two induced quantum states in the quantum brain as realized by wave resonance, superposition, coherence and entanglement, enabling memorizing the past and prehension of future events.

As indicated above, digital information is, in general, expressed in bits which may have the value "0" or "1" (yes/no). In computers, bits are represented as the physical states of a certain physical system. It is obvious that a physical bit can only be either "0" or "1", that is, if it is represented by a classical system. Yet, with the possibility of characterizing individual quantum particles in much greater detail, the question arises which new phenomena may occur when we use such quantum systems to represent information, assuming that their propagation and processing is determined by the laws of quantum physics. One interesting aspect comes up when we consider the qubit or quantum bit. In contrast to a classical bit, a qubit is not restricted to the states "0" or "1", but it can also be in a superposition of "0" and "1". This means that the value of the bit is not exactly defined. If it is measured, one gets randomly the answer "0" or "1" (a minimal simulation is sketched below).
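A minimal simulation of this measurement randomness (assuming the standard Born rule; the particular amplitudes below are illustrative only):

```python
import random

# A qubit state a|0> + b|1> with |a|^2 + |b|^2 = 1 (here an equal superposition).
a, b = 2 ** -0.5, 2 ** -0.5

def measure(a, b):
    """Collapse the superposition: return 0 with probability |a|^2, otherwise 1."""
    return 0 if random.random() < abs(a) ** 2 else 1

outcomes = [measure(a, b) for _ in range(10_000)]
print(outcomes[:20])                   # individual results are an unpredictable mix of 0s and 1s
print(sum(outcomes) / len(outcomes))   # ...but the relative frequency approaches |b|^2 = 0.5
```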
Although in this way certainty is lost, a major advantage of a qubit is that the superposition can exist in many different forms, and consequently a qubit has the potential to represent much more information than the classical bit and enables extremely high calculation capacity. Von Neumann (1963), who introduced an "ontological" quantum physics, approached information theory as a knowledge-based discipline, which brought in the role of the observer and the measurement instrument in the operation of the system. Stapp, 2003 described Von Neumann's view of quantum theory through a simple definition: "the state of the universe is an objective compendium of subjective knowings". This statement implies that the state of the universe can be seen as represented by a wave function which is a superposition of all the wave functions that conscious beings can collapse through observations. In this sense it is a sum of subjective acts, although collectively an objective one. Thus the physical aspect of Nature (the Schrödinger equation) can be viewed as a compendium of subjective knowledge. Of note: the conscious act of asking questions on the very nature of reality may drive the actual transition from one state to another, i.e. the evolution of the universe.

Quantum information in a final context

Potentially, all information that is ultimately available about the state of the universe could in the far future be collected and compressed by an advanced intelligence, producing a final knowledge register to be used as a recipe for the construction of the next version of our universe. In other words: this ultimate syntropic type of information could be made suitable for black hole-mediated transmission into an adjacent space/time domain, in order to create a follow-up universe of our present one (the so-called cyclic model of our universe; see for cosmological projections Vidal (2010), Vaas (2004), Heylighen (2010), Zizzi (2006) and Penrose, 2010). In this sense, intelligent life may be inevitable for the future evolution of our type of universe (cf. the "Strong Anthropic Principle", see Linde, 2004, and for the final destiny of intelligence, Barrow and Tipler, 1986 and Tipler, 1995): "all events in nature belong to a particular form of different codified energy transformations, so that the total energy cannot be created or destroyed".
Alternatively, scientific observations made throughout the history of our universe may, through a process of backward causation, lead to adaptation of the fundamental laws of nature, which were often assumed to be fixed from the beginning (see Wheeler, 1987). Observations made throughout the entire duration of the universe, in this way, can contribute to fashioning the form of the laws in the first split second after the Big Bang, when they were still significantly malleable. Thus the potential for future life acted like an attractor, drawing the emerging laws towards a bio-friendly region in the parameter space (see Davies, 2003 and 2007).

8. The Universe as created through a process of unfolding information

From the birth of our universe to its supposed end, information will continuously flow, for example in the process of biological and cultural/scientific evolution. This is not a time-linear event but should rather be seen as a chain of feed-back loops, in which existing information is processed and new information is generated and integrated. With regard to evolution, feed-back of information on the state of the whole of life forms, including that of the stable intermediates, is required to create and functionally integrate the particular building blocks of the entities that constitute the ongoing processes of higher complexification. This feed-back leads to perturbation of these basic processes that in turn can, but not always will, result in a higher level of functionality of the whole. This cyclic flow of information can, for example, lead to efficient adaptation of cells and organisms in evolutionary processes.

Fig. 11: An example of including a universal information field in an integrated scheme, depicting our Universe as a circular flow of information with its material (right part of the figure) and mental (left) aspects, in which for these aspects a non-dual and complementary matter/mind modality is assumed.

This concept assumes a holographic quantum information field (universal consciousness), which is regarded as a fundamental component of our universe and gradually develops further, among others through feed-back processes and interaction with individual consciousness, in which humans and other intelligent life forms play crucial roles in observation of and participation in the cosmos. A circular model of the universe is proposed (more extensive treatment in Meijer, 2012). Yet, a basic perception of nature as a whole is only possible if a collective memory is available, which argues for some kind of universal knowledge domain. In principle, consciousness can be perceived as processing of information. Since consciousness is observed in literally all the aspects of evolution, consciousness should have a universal character and must be present at each level of the cosmos. Three aspects should be differentiated here: the gradual unfolding of the primary information that was present a priori at the start of our universe; along with that, new information that arises in the ongoing process of universal entropy; and converging information, potentially induced by attractors. The interference of the latter two modalities of information can be viewed as a holographic process, in which these two types of information, interacting in a two-dimensional space, are converted to a three-dimensional image.
The universe as a holographic projection was earlier proposed by David Bohm (1980) and later worked out by Bekenstein (2003), 't Hooft (2001) and Hawking (2010), among others. In a hologram, each sub-part contains the information of the total holographic picture. It is the unfolding (a priori) information that is the basis of this holistic aspect and forms the fundamental binding element in the universal information matrix (Fig. 11). This may give rise to evolution: the creation of form both in its material and mental modalities (as exemplified in Fig. 11). In our brain the latter aspect is reflected in our thoughts, sensory and extra-sensory percepts, memes, metaphors, concepts, models etc. One central feature of quantum mechanics is the existence of informational but non-causal relations between elements of systems. These relations are non-causal insofar as they are modulated instantaneously over any distance and do not involve the transfer of energy between the parts of the system. In conclusion: the above mentioned information matrix pervades all non-living and living elements of the universe and can be called a knowledge domain or a universal information field that may be structured as a holographic system (Meijer, 2012). But how is information organized and integrated in nature? Although a reductionist scheme of the dynamic flow of information in nature from the micro- to the macro-scale, as pictured in Fig. 12, seems intellectually satisfactory, such a scheme evidently lacks the aspect of integration and consistency that enables nature to act as a whole at the different levels indicated. The unfolding and creation of information, as well as the processing of it, can be pictured as an act of composing symphonic music: in addition to the interpretation by the maker and the musicians, it obtains significance through the subjective emotion of the listener. Unfolding can also be pictured as the growth of a huge tree from an extremely small seed (a priori information) that unfolds during maturing. During the growth of the tree, intrinsic (morphogenetic) information is used and new information is collected from the environment, resulting in steadily rising complexity as well as modulation of the basic recipe, resulting in the manifestation of life and survival.

Fig. 12: The dynamic flow of information in our Universe is pictured as starting with the vibration of strings (lower part, inset left) in a 10-dimensional space, leading to elementary particles and atoms that form molecules as a basis for living cells such as neurons that in their turn, with other cell types, form our brain as a part of the human organism. Humans inhabit planet Earth, as a part of our galaxy and the universe, in a process of participation by natural and artificial intelligence.

This phenomenon is often "explained" by so-called "emergent" processes, in which completely new properties are claimed to arise spontaneously from building blocks that themselves do not display that particular property. However, as mentioned above, the physical background and validity of the emergence concept is presently debated. Alternatively, the induction of novel complexity in time can be seen as a process of "backward causation".
Two different mechanisms may play a role here. Firstly, such a time-reversed causation may entail a feed-back of information from a future condition of higher complexity (see also section 7). This can be related to the observer effect in quantum physics, in which the wave information collapses to particle information by the act of conscious observation, but only after the observer chooses to read and interpret the interference pattern (see the delayed choice model of Wheeler, 1990). An observer effect can even be envisioned to occur in observing the boundaries of our Universe through a telescope and thereby looking back in time to the Universe in its starting conditions. The observer may, in this manner, even perturb events at the time of birth of our universe (see Wheeler, 1990). Thus the present observation may influence the past in a retro-causal manner. In information terminology, one could say that such backward causation amounts to making a time-reversed copy of a future event. Backward causation may also be understood in relation to the so-called transactional interpretation of quantum physics: collapse of the wave function and the experimentally observed time delays and "feeling the future" aspects may be due to the sending of an offer wave (into the future) and simultaneously a confirmation (advanced) wave (into the past), which are then accommodated by the best fitting future and past events (the "handshake" effect; see Cramer, 1998 and 2005). The produced answer waves, subsequently, are returned to the present and mixed in order to create the state of the particular wave function. Each quantum event in the present time thereby entails specific but not directly observable information from the future (Fig. 10). Aharonov's team and various collaborating groups (see Aharonov, 2010) used sophisticated quantum physics technology to study whether future events can influence the past. Aharonov concluded that a particle's past does not contain enough information to fully predict its fate, but he wondered: if the information is not in its past, where could it be? After all, something must regulate the particle's behavior. In 1964, Aharonov, then in New York, proposed a new framework called time-symmetric quantum mechanics. A recent series of quantum experiments in about 15 other laboratories around the world seems to confirm the notion that the future can influence results obtained before those measurements were even made.

Fig. 13: Experiment supporting "time-symmetric" quantum physics

Generally the protocol included three steps: a "pre-selection" measurement carried out on a group of particles; an intermediate measurement; and a final, "post-selection" step, in which researchers picked out a subset of those particles on which to perform a third, related measurement. To find evidence of backward causality, information flowing from the future to the past, the effects of so-called weak measurements were studied. Weak measurements involve the same equipment and techniques as traditional ones but do not disturb the quantum properties in play. Usual (strong) measurements would immediately collapse the wave functions in superposition to a definite state. The results in the various cooperating groups were amazing: repeated post-selection measurement of the weak type changed the pre-selection state, clearly revealing an aspect of non-locality (Fig. 13).
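For orientation, the quantity extracted in such pre- and post-selected weak measurements is usually the "weak value" A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩ of Aharonov, Albert and Vaidman; a minimal sketch with purely illustrative states (not the actual laboratory settings):

```python
import numpy as np

sigma_z = np.array([[1, 0], [0, -1]])     # the observable A (Pauli z), eigenvalues +1 and -1

def weak_value(pre, post, A):
    """Weak value A_w = <post|A|pre> / <post|pre> for pre- and post-selected states."""
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

a, b = np.deg2rad(80), np.deg2rad(12)
pre = np.array([np.cos(a), np.sin(a)])    # pre-selected state |psi>
post = np.array([np.cos(b), -np.sin(b)])  # post-selected state |phi>, nearly orthogonal to |psi>

# For nearly orthogonal pre- and post-selections the weak value can lie far outside
# the eigenvalue range [-1, +1], which is what makes these experiments so striking.
print(weak_value(pre, post, sigma_z))     # roughly -10.7 for these illustrative angles
```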
Thus, according to Aharonov and associated research teams, it appears that the universe might have a destiny that interacts with the past, in order to bring the present into view, in line with the earlier mentioned theories of Wheeler and Fantappié.

Fig. 14: Elementary particles (below left) may also behave as waves (middle) or, even smaller, string elements that take different forms (upper part right) and are supposed to vibrate in a >5-dimensional space (depicted in the cartoon upper left), according to M-theory.

This highlights the notion that matter should be seen as a "frozen" (collapsed) aspect of wave information: each particle is in fact material point information in a quantum field, and living organisms are complex compositions of billions of such wave/particle modalities. This idea is very much in line with the idea that, on the deepest micro-level, nature can be described as vibrational energy, in the sense that each specific type of elementary particle should be seen as one modality of vibration of strings. As mentioned earlier, strings might represent the most fundamental supposed building blocks of the universe according to the string and M-theories (Fig. 14). Yet, as treated above, it should be realized that we cannot really detect what an electron or even an atom is really like: we will only see their shadows as representations on the background (as exemplified in the metaphor of Plato's cave). All of the above mentioned micro-events cannot be observed directly by humans since, as treated before, the measuring instruments and the act of observation intrinsically disturb the bound information. Such events can only be indirectly inferred by postulating theories, designing models and verifying these models by experimentation. On the human level, this feeling-of-the-future aspect may be a brain process that occurs in the unconscious domain, which is proposed to represent 90% of the brain's workspace and has also been related to aspects of clairvoyance and telepathy (see Radin, 1997, 2006, Grof, 1987 and Griffin, 1997). What is the underlying basis for all of these processes in nature? There are now attempts to develop a "theory of everything" (abbreviated TOE) on the basis of string theories (see Green, 2004). Such a theory should be valid both on the micro (quantum) level and the macro (cosmological) level, including an adequate concept of gravity (Fig. 14, see also section 10). Another candidate to describe the deep structure of reality, therefore, is the so-called loop quantum gravity theory (see Smolin, 2004), in which matter exists at the nodes of an extremely fine spin network. Interestingly, attempts have been made to describe consciousness as being produced by the network of spin movements of elementary particles that make up our brain (see Penrose, 2004; Hu and Wu, 2010; and also Meijer and Korf, 2013). Several authors have proposed that the entire universe can be calculated (Lloyd, 2006) and may have a mathematically defined structure (see Tegmark, …, Fig. 15), as also earlier implied by Wigner, 1960. Of note, it should be stressed that such theories can never fully describe reality without taking into account the phenomenon of consciousness and self-awareness, as essential parts of an information-generating as well as information-processing system in our world.

Fig. 15: Biological information may have a mathematical foundation (see references Wigner, Lloyd and Tegmark).

Anyway, a consistent "theory of everything" (TOE) should also contain an
explanation for itself (Vedral, 2010). Charles Seife (2012) recently postulated a natural teleology for the efficient causation in biological evolution, meaning that "the universe is rationally governed in more than one way, not only through the universal quantitative laws of physics that underlie this efficient causation, but also through principles which imply that things happen because they are on a path that leads to certain outcomes, notably the existence of living, and ultimately of conscious organisms". He further argued that "not only the emergence of life from a lifeless universe of reproducing organisms but also consciousness should be included in a TOE, including the development of consciousness into an instrument of transcendence that can grasp objective reality and objective value. The universe has become not only conscious and aware of itself, but capable in some respects of choosing a path for the future".

9. Information as self-organized complexity in the evolution of life

Information and life processes

According to Gershenson, 2010: "In the last decades there has been great interest in the relationship between energy, matter, and information. One of the main reasons for this is that this relationship plays a central role in the definition of life: Hopfield, 1989 suggests that the difference between biological and physical systems is given by the meaningful information content of the former ones. This does not imply that information is not present in physical systems, but, as Roederer, 2005 puts it, information is passive in physics and active in biology. However, this requires a complicated concept in which information is expressed in terms of the physical laws of matter and energy. In the particular paper, the inverse approach was proposed: let us describe matter and energy in terms of information! If atoms, molecules and cells are described as modalities of information, there is no need of a qualitative shift (from non-living to living matter) while describing the origin and evolution of life. In this sense rather a quantitative shift (from less complex to more complex information) is at stake. In Living Systems, Miller, 1978 provided a detailed look at a number of systems in order of increasing size, and identified his subsystems in each. By definition, living systems are open, self-organizing systems that have the special characteristics of life and interact with their environment. This takes place by means of information and material-energy exchanges. Essential subsystems process information for the coordination, guidance and control of the system. The twenty subsystems and processes of all living systems are arranged by information input-throughput-output processes. Information is brought into the system by an input transducer, while an ingestor brings material-energy into the system. Processes which take place in the system's throughput stage are information processes: an internal transducer receives and converts information brought into the system; a channel and so-called "net" distributes information throughout the system; a decoder prepares information for use by the system; a timer maintains the appropriate spatial/temporal relationships; an associator maintains the appropriate relationships between information sources; a memory stores information for system use; a decider makes decisions about various system operations; and an encoder converts information to a needed and usable form.
A reproducer handles this information and carries on the reproductive function, and a boundary, together with information, protects the system from outside influences. Processes which take place in the system's output stage make use of an output transducer that handles the information output of the system. The drawback with the physics-based approach to the studies of life and cognition is that it requires a new category, which in the best situations can be referred to as "emergent". Emergence can, in some cases, be a useful concept, but it is clearly not explanatory: obviously it is always an explanation in retrospect and as such does not contain predictive power. No physical model has been developed that can predict an emergent phenomenon. Moreover, the concept of emergence stealthily introduces a dualist view of the world: if we cannot properly relate matter and energy with processes such as life and cognition, we are forced to see these as separate categories. Once this step is made, there is no clear way of studying or understanding how systems with life and cognition evolved from those without it. However, if we see matter and energy as particular, simple cases of information, the dualist trap is avoided by following a continuum of information processing in the evolution of the universe. Physical laws are suitable for describing phenomena at the physical scale. The tentative laws of information, presented here, aim at being suitable for describing phenomena at any scale. Certainly, there are other approaches to describe phenomena at multiple scales, such as general systems theory and dynamical systems theory. These approaches are not exclusive, since one can use several of them, including information, to describe different aspects of the same phenomena. A unified concept of information, as a form of self-organized complexity, a model that may be equally applicable to the physical, biological and human/social domains, was earlier proposed by Bawden (2007). We cite the following section from this excellent article: "The seemingly empty space around us is seething with information. Much of it we cannot be aware of because our senses do not respond to it. Much of it we ignore because we have more interesting things to attend to. But we cannot ignore it if we are seeking a general theory of information. As mentioned above, Stonier (1990, 1992, 1997) made one of the first detailed attempts to unify the concept of information in the physical, biological and human domains. Starting from the concept of information as a fundamental constituent of the physical world, Stonier proposed relations between information and the basic physical quantities of energy and entropy, and suggested that a general theory of information may be possible, based on the idea that the universe is organized into a hierarchy of information levels. Stonier identified self-organizing information processing systems as the "physical roots of intelligence", based on his conception of information as a basic property of the universe". Madden (2004) focused on the biological domain in his evolutionary treatment of information, examining information processing as a fundamental characteristic of most forms of life. He argued that Lamarckian evolution, the idea that characteristics acquired by a biological organism during its lifetime can be passed on to its descendants, while discredited in general biology, may be appropriate for understanding the evolution of human societies, including their information behavior.
Madden proposed that insights from the information sciences may be valuable to the supposedly more 'basic' sciences, in this case the biological sciences, because of the commonality of the "information" concept. Bates (2005), like Stonier seeking to reconcile the physical, biological and human forms of information, took the general definition that "information is the pattern of organization of everything". All information is "natural information", existing in the physical universe of matter and energy. "Represented information" is either "encoded" (having symbolic, linguistic or signal-based patterns of organization) or "embodied" (encoded information expressed in physical form), and can only be found in association with living creatures. Beyond this, Bates defined three further forms of information: information type 1, the pattern of organization of matter and energy; information type 2, some pattern of organization of matter and energy given meaning by a living being (or its constituent parts); as well as information type 3, knowledge: information given meaning and integrated with other contents of understanding". Self-organizing systems are not only a topic of relatively recent interest, but are of importance in a variety of areas in the physical sciences (Davies, 1987, 1998). The interest in them comes from two perspectives. The ubiquitousness of self-organization has led some scientists to propose that there may be 'laws of complexity', such that the universe has an 'in-built' propensity to organize itself in this way; this view is far from generally accepted, but is gaining support. On the small scale, it may be observed that simple physical and chemical systems show a propensity to 'self-organize': to spontaneously move towards a mode which is both organized and also highly complex (Kauffman, 1993, 2008). Both the evolution of species and individual ontogeny have a common principle: relatively simple entities evolve into more complicated organisms. However, the Darwinian type of evolution does not solely lead to biological structures with higher complexity, but also to entirely new structures that cannot be predicted or deduced from the properties of precursor components. The properties of the constituting elements somehow enable an integrated and interacting network that is largely unpredictable from the properties of these precursor elements. In other words, a cell is more than a collection of molecules such as proteins, lipids and nucleic acids. Rather, it is a well-organized entity that, for instance, entertains a correct replication process and gains survival based on an adequate adaptive response to environmental challenges (Kauffman, 2012). On the large scale, science must account for the emergence of highly complex organized structures - stars, galaxies, clusters of galaxies, and so on - in a universe which theorists assure us was entirely uniform and homogeneous immediately after its creation. It is still not clear what the origins of this complexity are; it is generally assumed to come from gravitational effects, acting on very small inhomogeneities (Davies 1998). Gravity in the early universe can therefore be seen as "the fountainhead of all cosmic organization, triggering a cascade of self-organizing processes" (Davies 1987; Fig. 16).

Fig. 16: The Fabric of Reality on the Basis of Information

With the increasing emphasis on the understanding of genetic information comes the tendency to describe life itself as an informational phenomenon.
Rather than defining living things, and their differences from non-living things, in terms of arrangements of matter and energy, and of life processes, metabolism, reproduction, etc., it is increasingly usual to refer to information concepts. Life, thought of in these terms, is the example of self-organized complexity par excellence. But with life comes a change from the organized complexity in the physical universe: with life we find the emergence of meaning and context. The genetic code, for example, allows a particular triplet of DNA bases to mean that a particular amino acid is to be added to a protein under construction; but only in the context of the cell nucleus. It has also become clear that the origin of life itself may best be viewed as an "information event": the crucial aspect is not the arrangement of materials to form the anatomy of a living creature, nor the beginning of metabolic processes; rather it is the initiation of information storage and communication between generations which marks the origin of life (Davies 1998, Floridi, 2005). Floridi, an exponent of a new interest in the 'philosophy of information' within the discipline of philosophy itself, recasts the idea of knowledge as "justified, true belief" into the idea that information is "well-formed, meaningful and truthful data". This seems more suitable for the needs of information science, but does not reflect the rather muddled reality of the human record. Perhaps the most interesting philosophical approach is that of Kvanvig (2003), who argues that we should replace 'knowledge' with 'understanding' as a focus for interest. Understanding, for Kvanvig, requires "the grasping of explanatory and other coherence-making relationships in a large and comprehensive body of information". The linking thread, and the unifying concept of information here, is self-organized but guided complexity. The crucial events which allow the emergence of new properties are: the origin of the universe, which spawned organized complexity itself; the origin of life, which allowed meaning-in-context to emerge; and the origin of consciousness, which allows self-reflection, and the emergence of understanding, at least partly occasioned when the self reflects on the recorded knowledge created by other selves. If, therefore, we understood these three origins fully, we would, presumably, understand information itself equally fully, and the ways in which its various forms emerged. Sadly, the beginnings of the universe, of life, and of consciousness are among the deepest and most difficult problems for science (Gleiser, 2004). As mentioned before, there has been a trend in science, following the so-called "strong anthropic principle", to conjecture that the emergence of life and consciousness may, in some ill-understood way, have an effect of backward causation, so as to affect the nature of the universe which gave rise to it. The analogy for our purposes would be to allow the possibility that the emergence of human information, knowledge and understanding is in itself a force in the physical universe, which can influence the generation of complexity in all domains (see Fig. 17). This is an intriguing speculation, but it is not necessary to accept it in order to believe that these studies may have some value for the understanding of self-organization and complexity in other domains".
Fig. 17: Compressed universal information as the recipe for life. Davies (2003) explains: the missing concepts that prevented the earliest investigators of life and consciousness from succeeding in their quest were: 1) a generalized theory of information; 2) a deeper understanding of quantum science itself, with its associated phenomena of non-locality/entanglement and quantum holography; 3) an adequate theory of chaotic processes, which is necessary to understand the nonlinear evolutionary processes that caused consciousness to evolve toward the self-consciousness experienced by humans. As mentioned before, on the basis of these concepts, consciousness now seems an essential and integral modality in the manifestation of the material world. One blind spot in evolutionary theory is the possible influence of non-local quantum information transfer in the bio-construction of the first primitive cells (Fig. 18 and 20), in which information processing and replicating abilities are at stake rather than complexity per se. According to Davies (2003), and the same author in Abbott (2008), quantum mechanics provides a way to drastically shorten the trajectory from matter to life, by exploiting the parallel processing properties of superpositions and wave interference. It is quite likely that bio-systems selected potential life components from a great number of non-living states through wave superposition. The transition from non-life to life can, in this manner, be considered as a quantum-mediated process in which the environment served as a sort of measuring device that enabled the material expression of the particular wave patterns. These dynamic conditions also enable top-down causation (Fig. 19) by information control, which is likely to play a central role in evolution and comprises aspects that are basic for any information acquisition process, namely mutual information and information selection (see also Patel, 2001, and McFadden, 2001). But what is information control in this framework really? It is not Shannon's (1959) theory of communication (framed in the context of controlled transmission), which is a theory of signal transmission rather than a general theory of information. As mentioned earlier, that theory is centered on signal/noise discrimination; the message is already selected and well defined from the start: the selection among several alternative states already occurred at the level of the input or sender. The crucial item there is only to reliably transmit the sequence of bits that has been selected, in the presence of potential disturbances. In contrast, a real information theory, for instance that of Wiener (1948), starts with an input as a source of variety and has the selection only at the end of the information processing. Thus, a message here is rather the message selected by the receiver. It goes without saying that any information reception will be subject to the initial variety, in addition to the influences of disturbance and dispersion, and any use of this information, at the most elementary level, already constitutes information selection. Fig. 18: Biological evolution theory: potentials and problems. This is of major relevance for biological systems, since they are confronted with an environment that includes sources of uncertainty, and for this reason such systems do not have control from the start over the information that has been sent. Even inside a single cell (Fig. 20) there is such a problem, due to the modularization of the different subsystems.
Consequently, in this case, the control must somehow be exerted while having only a limited pool of resources. An important question remains: if a sort of "recipe for life" was present non-locally in the context of a bidirectional time concept and/or potential backward causation, how did this information influence evolutionary processes such as self-assembly and auto-catalysis? (see Paul Davies in Abbott et al., 2008). According to traditional information theory, the main item is reliability, understood as the matching between input and output. However, in biological phenomena one has a condition in which the receiver does not have full control over the input and therefore is forced to "guess" the nature of the input by taking the received partial information as a sign of it. As mentioned above, at any biological level the receiver is in general flooded with incoming data, and has to separate background data (important but constant) and noise (irrelevant data) from relevant information: data that are needed for some purpose and may be expressed in algorithmic terms (see Fig. 19). Therefore, information control consists in information selection, often involving a sort of guess from a certain point of view, and this represents the goal of the system. For instance, a bacterium searching for an energy source may use a specific temperature gradient (the received information) as a sign of this source. In this framework it is necessary to state how goals and feedback control are linked (see Murphy, 2011, Fig. 19). Information control via feedback is not the only way to have control via information, yet it plays a fundamental role in living systems, being involved in any homeostatic process. In conclusion, in any information exchange we have selection at the end, not at the start. That is, if the output selection is saying something about the input, the receiver starts a new information process aiming at the source, thereby somehow inverting the ordinary flow of information from the source to the receiver, and in other words enters the process of backward causation. An attractive hypothesis is that quantum mechanical mechanisms, acting upon primordial information, were instrumental in the origin of the first life and the construction of the first homeostatic and replicating cells. In this framework one should realize the extremely complex structure of the cellular machinery (see Fig. 20): not only its versatile plasma (outer) membrane, which is equipped with transport, channel and signal-transducing proteins (Meijer et al., 1999), but also, within the cell, the various organelles, for instance those involved in the production of energy and the storage of genes. The genome is read out to produce at least 100,000 functional proteins in the cytoplasm, involved in a network of metabolic and repair processes as well as in intracellular movements (see, for quantum mechanical mechanisms in genetics, Patel, 2001, McFadden, 2001, Schempp, 2003). Fig. 19: Backward (downward) causation in (neuro)-biological processes on the basis of (see A1.b) a space of possibilities, at an intermediate level, in a circular mode (upper left inset). Feedback control is shown as a comparator that determines the difference between a system state and a goal and provides an error signal activating the controller to correct the particular error, a mechanism that operates for example in the DNA/RNA protein synthesis machinery (lower part).
B1: Input information as a source of variety initiates an information process that is finalized when information selection is accomplished, that is then taken again as informative for the further input relation, by which a new information round is started (above right, modified from Auletta et al.). Seife (2012) asked a rightful question: “is it likely that only the process of natural selection generated creatures with the capacity to discover by reason the truth about reality that extends vastly beyond the initial appearances as we continue to do and is it credible that selection of fitness in the prehistoric past should have fixed capacities that are effective in theoretical pursuits that were unimaginable at that time?” If one further realizes that survival implies a fine-tuned homeostatic effort, we may conclude that it is highly unlikely that all of this was coming together by pure chance, even if it took billions of years to let these chains of events be developed into a coordinated cellular network. Only by a primordial recipe or through backward causation that enabled a “feeling of a future” this amazing becoming can be imagined. Of note: a collective memory of the whole nature was therefore a prerequisite for the origin of life. On the basis of a combination of these elements, in a concerted action, it was possible that, as treated above, parallel innovations in biophysical complexity occurred. Only expressed in wave functions such quantum states could be brought in superposition, yet only after an intelligent search and selection process in nature. One prominent example was the construction of whole series of individual proteins, each having an exact spatial structure, in order to render them functional as enzymes or to be instrumental in the cooperation of large series of functionally related proteins in regulatory or cellprotective processes in the whole organism. Davies (2004) further explains: “Quantum mechanics provides an explanation for the shapes of molecules, crucial to the templating functions of nucleic acids and the specificity of proteins. The Pauli exclusion principle ensures that atoms and molecules possess definite sizes, which in turn determines not just templating, but differential diffusion rates, membrane properties and many other important biological functions. Quantum mechanics also accounts for the strengths of molecular bonds that hold the machinery of life together and permit metabolism. But these examples are not quite what Schrödinger and Bohr were hinting at. They merely serve to determine the structure, stereochemistry and chemical properties of molecules, which may thereafter be treated as essentially classical. This leads to the ball-and-rod view of life, which is the one routinely adopted by biochemists and molecular biologists, according to which all essential biological functions may be understood in terms of the arrangements and rearrangements of classical molecular units of various shapes, sizes and stickiness. But there are fundamental aspects of quantum mechanics that go beyond this description, such as: - Superpositions and various aspects of quantum phases, such as resonances. - Entanglement. - Tunneling. - Aspects of environmental interactions, such as the watchdog and inverse watchdog effects. 
- Abstract quantum properties such as supersymmetry. Davies distinguished three possibilities of potential interest: - Quantum mechanics played a significant role in the emergence of life from nonliving chemical systems in the first place, but ceased to be a significant factor when life got going. - Quantum information processing may have played a key role in the emergence of life, and a sporadic or subsidiary role in its subsequent development; there may be relics of ancient quantum information processing systems in extant organisms, just as there are biochemical remnants that give clues about ancient biological, or even pre-biological, processes. - Life started out as a classical complex system, but later evolved some form of quantum behavior as a refinement. For example, if biological systems can process information quantum mechanically, they would gain a distinct advantage in speed and power, so it might be expected that natural selection would discover and amplify such capabilities, if they are possible. This is an extension of the dictum that whatever technology humans invent, nature normally gets there first. Both experimental and theoretical work offers circumstantial evidence that non-trivial quantum mechanical processes might be at work in biological systems. Some examples are: 1. Mutations. Ever since Crick and Watson elucidated the structure of DNA, the possibility has been seriously entertained that mutations might occur as a result of quantum fluctuations, which would serve as a source of random biological information. Proton tunneling can indeed spontaneously alter the structure of nucleotide bases, leading to incorrect pair bonding. McFadden and Al-Khalili … have suggested that, in some circumstances, the genetic code should be regarded as a quantum code, so that superpositions of coding states might occur, leading to spontaneous errors in base pairing. 2. Enzyme action. Enzymes are proteins that catalyze biochemical reactions, but their hugely accelerated reaction rates, enhanced by many orders of magnitude, are difficult to account for by conventional catalytic mechanisms. Evidence that quantum tunneling plays an essential role has been obtained for many enzyme-driven reactions, and it is likely that tunneling is an important factor contributing to the extraordinary efficiency of enzyme catalysis. 3. Genetic code. The origin of the genetic code is a major area of study. Patel has argued that the code contains evidence for optimization of a quantum search algorithm. Living systems form a very special subset among the set of all complex systems. Biological complexity is distinguished by being information-based complexity, and a fundamental challenge to science is to provide an account of how this unique information content and processing machinery of life came into existence. Most commentators agree that the subset of living systems represents an extremely small fraction of the total space of complex systems. For example, the fraction of peptide chains that have biological efficacy is exponentially small among the set of all possible sequences. Viewed this way, the origin of life is a type of search problem. Given a soup of classical molecular building blocks, how did this mixture "discover" the appropriate, extremely improbable combination by chance in a reasonable period of time? A simple calculation shows that it would take much longer than the age of the universe.
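The "simple calculation" alluded to can be sketched numerically. The chain length and the classical sampling rate below are illustrative assumptions, not values taken from the text; the point is only that the sequence space dwarfs any classical random search.

```python
import math

n_amino_acids = 20   # proteinogenic amino acids
chain_length = 100   # a modest protein length, chosen for illustration

# Size of the sequence space for a single 100-residue peptide chain
sequence_space = n_amino_acids ** chain_length
print(f"possible sequences: ~10^{math.log10(sequence_space):.0f}")

# Even an absurdly generous classical search barely samples this space
trials_per_second = 1e20        # hypothetical sampling rate (assumption)
age_of_universe_s = 4.35e17     # ~13.8 billion years in seconds
trials_total = trials_per_second * age_of_universe_s
print(f"sequences tried since the Big Bang: ~10^{math.log10(trials_total):.0f}")
```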
Since quantum systems can exist in superpositions of states, searches of sequence space or configuration space may proceed much faster. In effect, a quantum system may "feel out" a vast array of alternatives simultaneously. In some cases, this speed-up factor is exponential. Could it be that living systems exploit quantum information processing in some way, either to kick-start life, or to assist in its more efficient running?" Fig. 20: Diagram of the cell with its membranes and organelles, and the nucleus with gene information encoded in DNA (see Meijer, 1999). Right: Mutations in DNA may involve quantum effects. According to Conrad, 1989: "Biological systems have a vertical architecture that allows them to exploit microphysical dynamics for information processing and related macroscopic functions. Macroscopic sensory information in this vertical picture is transduced to increasingly microscopic forms within biological cells and then back to macroscopic form. Processing of information can occur at any level of organization, but much of the most powerful processing occurs at the molecular and submolecular level. The open process of Darwinian evolution plays an important role, since this provides the mechanism for harnessing physical dynamics for coherent function. The vertical architecture is analogous to a quantum measurement system, with transduction of input signals corresponding to state preparation and amplification of the microstate corresponding to measurement. The key point is that the microphysical dynamics is not classically picturable, whereas the macroscopic actions are definite and picturable. If this analogy is taken seriously, it becomes necessary to suppose that irreversible projection processes occur in organisms, despite the fact that the standard equations of motion are reversible. We construct a model that embeds such irreversible measurement interactions into interactions that mediate the conventional forces. The idea is that the forces between observable particles depend on the density of negative energy particles in a surrounding Dirac type vacuum. In systems that are not overly macroscopic or overly far from equilibrium this dependence is hidden, and as a consequence the force appears conservative. The model suggests that the irreversible aspect of the dynamics played an important role in the early universe, but became masked in ordinary laboratory situations as the structure of the vacuum and the distribution of mass and charge equilibrated. Organisms are particularly effective at unmasking the underlying irreversibility due to their sensitive amplification mechanisms. Unifying measurement and force type interactions makes it possible for physical models to fit more naturally to models of cognition". 12. Information transfer in the human cultural evolution. It should be stressed that, in using the term "transmission of information", several aspects should be distinguished: the level at which information transfer takes place (in the atom, in the cell, in the brain), the actual content of the information, the type of information (vibration pattern, sequence of nucleotides, spatial forms of a protein, etc.), the density of information (the data content per unit of space), as well as the impact of the particular information, for instance in evolutionary processes or in a cultural setting. As treated above, with regard to the latter aspect, it has been proposed earlier (see Shannon, 1949) that the impact of information is inversely proportional to the probability that the information arises.
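A minimal sketch of the standard formalization of that idea: in Shannon's framework the "impact" (surprisal, or self-information) of an event grows as its probability shrinks, although the dependence is logarithmic rather than strictly inverse. The identification of "impact" with surprisal is an interpretive assumption here, not something the text states.

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon self-information (surprisal) of an event with probability p."""
    if not 0.0 < p <= 1.0:
        raise ValueError("probability must lie in (0, 1]")
    return -math.log2(p)

# Rarer events carry more information than common ones
for p in (0.5, 0.1, 0.01, 0.001):
    print(f"p = {p:<6} -> {surprisal_bits(p):6.2f} bits")
```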
Nature preferentially detects anomalies and deviations from normal patterns of common reality and this may also hold for human culture! (see also Vedral, 2010). Generally speaking, the concept of information in our world seems closely related to notions of news, constraint, communication, control of data, form, instruction, knowledge, meaning, mental stimulus, repeating patterns, perception of experience, as well as representation of observations and pattern recognition. Since “Information”, is often used as a “container term”, it seems important to differentiate information in its daily use and in its very nature into, at least, four interrelated layers: A. Intrinsic information, such as the micro-physical properties of the constituent elementary particles B. Shaping information, which is the neg-intropic or syntropic information that gives form to matter/energy and, for instance, is expressed the basic genetic information of living organisms. C. Information containing meaning: the type of information that is produced in our brain and represents explicit information that was obtained through interaction with the environment and subsequently translated, stored as scientific and/or cultural representations, percepts, concepts and/or models that have meaning for us. D. Sub-numinous information (mostly non-conscious), that extends to feeling the future, qualia, intuition serendipity, synchronicity, channeling, telepathy, clairvoyance and other subjective human experiences. In the biological and cultural evolution, with their ever-increasing complexity, information is a key aspect and in particular the mechanisms of information transfer deserve further attention. Information may be transmitted in very different ways and at very different levels (see Fig. 18). In the living cell this may constitute chemical and electrical signals, but also specific spatial perturbations, for instance, in the 3-dimemsional structure of proteins as well as in specific sequences of nucleotide building blocks of DNA in the genes (belonging to the earlier mentioned category A, or intrinsic information). Fig. 25: The different forms of information and information processing in living organisms and human culture, represented as a circular process of pattern recognition and signal processing through detection by senses, leading to activation of neurons. Neuronal storage (short and long term memory) takes the form of neuronal firing patterns, leading to representations of thoughts, ideas, percepts and concepts. Metaphors and memes (Meijer, 2007) are forms of information (units of information, compiled from various information components). They are willingly or unwillingly combined with culture-specific features by the individual, so that the whole is suitable for transmission to other individuals, for example through the media. In this circular process of information processing, these information units obtain cultural significance. Information transfer is therefore based on sequential steps of information detection, perception (interpretation), representation and cultural transmission. Information is extracted from the environment by observation, and can also be derived through extra-sensory perception from knowledge fields that store quantum information (ESP). At the level of human communication, vibration patterns can be expressed in electromagnetic waves in the form of light, sound, music, as well as in images and stories (transmitted by radio, telephone, internet and TV, for example). 
Such information is transferred into the brain through specifically tailored sensory organs that accommodate complex patterns of wave activity, that subsequently are converted to neural activities in the nervous system.(Meijer, 2007). Information type B gets significance only after reception, perception and representation (see Fig. 18). An important question here is how the diverse information that reaches our brains through our senses (sensory or potential extrasensory signals) is selected, stored, retrieved and then exported from the individual to, for example, the public domain. These processes are obviously crucial for the generation and processing of knowledge and also the transfer of cultural knowledge in society (see Heylighen, 2012; Meijer, 2007). In a recent study on quantum modeling of the mental state (Meijer and Korf, 2013) it was put forward that taking into account the constituting elements of the human brain, such as neuronal networks, individual neurons, transmembrane ion-fluxes and energy producing cellular metabolism as well as other molecules that promote neural activity, there is clear consensus that the present knowledge of the brain, collectively, is insufficient to explain higher mental processes such as (self)consciousness, qualia, intuition, meditative states, transpersonal experiences as well as functional binding between distant parts of the brain. The authors argue that super-causal mechanisms are required to optimally integrate the above mentioned building blocks of brain function, also enabling the brain to amplify minimal perturbations for proper anticipation and action. We propose that such a super-causal structure may function as an interface between molecular transitions and the particular higher mental functions. As attractive bridging principles, the isoenergetic brain model and the physical-mathematical hypotheses denoted as quantum brain theories are treated. It is acknowledged that elementary quantum processes are likely to be essential for higher brain functions, as well as behavior and cognitive processing, since our central nervous system forms an integral part of a dynamic universe as a non-local information processing modality. In addition the authors conclude that quantum concepts may, at least, serve as a useful probability model and/or metaphor for human cognition. Yet, versatile brain function may require complementary information processing mechanisms at the classical and quantum (macro- and micro-) levels, both enabling bottom up and top down information processing. Concerted action of isoenergetic and quantum physics-based cognitive mechanisms in the human brain, requires a nested organization of fine-tuned neural micro-sites that enable decoherence-protected information transfer. For a rapid and causally effective flux of information, as well as a continuous updating of meaningful information, a supercausal field model is required. This neural structure wass conceived as a “bi-cyclic” mental workspace, housing interacting and entangled wave/particle modalities that are integral parts of an a-temporal and universal knowledge domain (Meijer and Korf, 2013). Quantum information may be detected by our brain and interchanged with the so-called quantum vacuum field, scientifically identified as the non-local "zero-point energy field". This is a field with fluctuating energy, in which symmetric pairs of particle/anti-particles are continuously created and disappearing. 
Some consider it, by its nature, to represent a permanent storage medium for wave information, and as such it can be seen as the physical basis for an assumed universal consciousness (see László, 2007). The latter domain may also incorporate information from category D, as mentioned above. Although many definitions for information have been proposed, the present author favors that of David Deutsch (1997). He stated: "Information is that which is encoded in the structure of discernible patterns, where the discerner of the patterns is an experiential process. Hence information is a subjective measure that depends on the observer's capacity to receive and the fidelity of his/her interpretation. A field of discernible difference is an information medium that comprises an information space. Data exists encoded within an information space, i.e. data are not things in themselves, they are just discernible features of the information space. To the extent that it is capable, each system is resolving and refining an internal mirror of itself and its world, thereby gaining in knowledge. As self-knowledge leads to general knowledge of the nature of reality, this reality-model is a novel instance of the computational process within the universe, which is a new level of creation and manifestation. This self-realization creates a resonance, where the veil of virtual appearances is subtly penetrated and the system apprehends the computational nature of reality and comes to know itself as reality in action." Fig. 26: The central position of information science. In conclusion: in modern physics, quantum mechanics is an essential instrument. It is basically a theory about the representation and manipulation as well as the communication of information, and as such should be regarded as a new physical primitive that may explain a deep nature of physical reality (Fig. 26). Zeilinger (2000), agreeing with earlier statements of Wheeler (1987), even proposed that information should be seen as more fundamental than matter/energy, since it fully determines what we can say about reality. Information was therefore also pictured as "the missing link in current concepts on the architecture of reality" (Meijer, 2012). It is no wonder that, recently, a new information paradigm (see Fig. 26) was proposed that represents a new integral science of information, on a physical and metaphysical basis (see DeWitt Doucette, 2012). 13. References Abbott D [and others] (2008). Quantum aspects of life. London: Imperial College Press. Allen & Selander (1985): What is Information ? Avery J (2003), Information theory and evolution, Singapore: World Scientific Publishing Barrow J D, Davies P C W and Harper C L (eds.) (2004), Science and ultimate reality, Cambridge: Cambridge University Press Barrow J D and Tipler F J (1986). The Anthropic Cosmological Principle. Oxford University Press Bates M J (2005), Information and knowledge: an evolutionary framework, Information Research, 10(4), paper 239, available from Bawden D (2007). Information as self-organized complexity: a unifying viewpoint. Beavers A F (2012) A Brief Introduction to the Philosophy of Information Bekenstein J (2003). Information in the holographic universe. Sci. Am. 289, 58–65. Bell J S (1966). On the problem of hidden variables in quantum theory, Reviews of Modern Physics, 38, p. 447. Belkin N (1990). The cognitive viewpoint in information science, Journal of Information Science, 16(1), 11-15 Bohm D and Hiley BJ (1987).
An ontological basis for the quantum theory, Physics Reports, 144, pp. 323-348 Bohm D (1980). Wholeness and the implicate order, London: Routledge & Kegan Paul Brookes B C (1980), The foundations of information science: Part 1: Philosophical aspects, Journal of Information Science, 2(3/4), 125-133 Buckland M (1991), Information as thing, Journal of the American Society for Information Science, 42(5), 351360 Chaim Z (2006), Redefining information science: from "information science" to "knowledge science", Journal of Documentation, 62(4), 447-461 Conrad M (1997). PRINCIPLE OF PHILOSOPHICAL RELATIVITY. Brain & Consciousness, Proc. ECPD Workshop, pp. 157-169 Belgrade, Yugoslavia Lj. Rakić, G. Kostopoulos, D. Raković, and Dj. Koruga, eds. Conrad M (1989). Physics and Biology: towards a unified model. In: Applied Mathematics and Computation pp 75–102 Cramer J (1988). An Overview of the Transactional Interpretation. International Journal of Theoretical Physics 27, 227. Davies P and Gregersen N H (2010). Information and the Nature of Reality: From Physics to Metaphysics. Cambridge:Cambridge University Press. Davies P C W (2003). The Origin of Life. Penguin, London (previous title: The Fifth Miracle. Penguin, London and Simon&Schuster, New York, 1998). Davies P C W (2004). Quantum fluctuations and life. Available on: arXiv:quant-ph/0403017 Deutsch D (1997). The Fabric of Reality, London: Allen Lane DiCorpo U and Vannini A (2011). Doucette D (2012). Establishing a New Information Paradigm, Duncan W L( 2010). The Quantum Universe: An Information Systems Perspective. Ellis G (2011). Does the Multiverse Really Excist ? Sci. Am. July Floridi, L. (2010a) Information: A Very Short Introduction, Oxford, UK: Oxford UniversityPress. Floridi, L. (ed.) (2010b) The Cambridge Handbook of Information and Computer Ethics, Cam- bridge, UK: Cambridge University Press. Floridi L (2005). Is semantic information meaningful data ?, Philosophy and Phenomenological Research, 70(2), 351-370 [available from] Frieden B R (2004). Physics from Fisher Information, Cambridge University Press Gatlin L L (1972). Information theory and the living system, New York NY: Columbia University Press Germine M (2007). The Holographic Principle Theory of Mind. Gershenson C (2010). The world as evolving information. In Proceedings of Inter-national Conference on Complex Systems Y. Bar-Yam (Ed.). arXiv:0704.0304v3 [cs.IT] 13 Oct 2010. URL: Gleiser M (2004), The three origins: cosmos, life, and mind, in Science and ultimate reality, J.D. Barrow, P.C.W. Davies, and C.L. Harper (eds.), Cambridge: Cambridge University Press, pages 637-653. Görnitz T (2012). Quantum Theory as Universal Theory of Structures – Essentially from Cosmos to Consciousness, Advances in Quantum Theory, Prof. Ion Cotaescu (Ed.), ISBN: 978-953-51-0087-4, InTech, Available from: Greene B (2004). The Fabric of the Cosmos. About the Search for the Theory of Everything, The Spectrum, Utrecht. Griffin D R (1997). Parapsychology, Philosophy, and Spirituality: A Postmodern Exploration, (SUNY Series in Constructive Postmodern Thought), State University of New York Press. Grof S, (1987). Beyond the Brain; Birth, Death and Transcendence in Psychotherapy, New York: State University of New York Press. Hawking S W and Mlodinov, L. (2010). The Grand Design. New York: Bantam Press. Hameroff S (2003). Consciousness, Whitehead computation and quantum in the brain: Panprotopsychism meets the physics of fundamental spacetime geometry. in Whitehead Process Network Compendium, ed M Weber. 
Henry-Couannier F (2012). Negative Energies and Time Reversal in Quantum Field Theory. Global Journal of Science Frontier Research Mathematics and Decision Sciences, Vol. 12 Heylighen F( 2010). The Self-organization of Time and Causality: steps towards understanding the ultimate origin. Heylighen F & Chielens, K (2006). Cultural Evolution and Memetics. Encyclopedia of Complexity and System Science Hopfield, J J (1982)."Neural networks and physical systems with emergent collective computational abilities", Proceedings of the National Academy of Sciences of the USA, vol. 79 no. 8 pp. 2554–2558 Hu H and Wu M (2010). Current landscape and future direction of theoretical and experimental quantum brain/mind/consciousness research, J. Consc. Exploitation & Research 1, 888-897. Jacobson, T, ( 1995). "Thermodynamics of Spacetime: The Einstein Equation of State". Phys.Rev.Lett. 75 (7): 1260–1263. arXiv:gr-qc/9504004. Bibcode 1995PhRvL..75.1260J. doi:10.1103/PhysRevLett.75.1260. Jahn RG and Dunne BJ (2004). Sensors, Filters, and the Source of Reality. Journal of Scientific Exploration, 18(4): 547–570. Jahn RG and Dunne BJ (2007). A modular model of mind/matter manifestations. Explore, 3:311-24, reprinted from J. Scientific. Exploration, 2001 Kauffman SA (1993). Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press. Kauffman SA (2008). Reinventing the Sacred: A New View of Science, Reason, and Religion. New York: Basic Books. Kauffman SA (2012). Is there a “”poised realm between quantum and classical worlds? Kvanvig, J L (2003), The value of knowledge and the pursuit of understanding, Cambridge: Cambridge University Press Leff, H S and Rex A F (1990), Maxwell's demon: entropy, information, computing, Bristol: Adam Hilger Leff, H S. and Rex, A F (2003), Maxwell's demon 2: entropy, classical and quantum information, computing, Bristol: Institute of Physics Publishing Langefors B (1977): Information systems theory. Inf. Syst. 2(4): 207-219 László, E. (2007).The Akashic Field. New York: Dutton Lee JW (2012). Physics from information. Lee JW, Kim HC and Lee J (2010). Gravity as Quantum Entanglement Force. . Lee JW, Kim HC and Lee J (2010). Gravity from Quantum Information. High Energy Physics - Theory (hep-th); arXiv:1001.5445v2 [hep-th]. Linde A (2003). Inflation, Quantum Cosmology, and the Anthropic Principle, in Science and Ultimate Reality: From Quantum to Cosmos, honoring John Wheeler’s 90th birthday. Barrow JD, Davies PCW and Harper CL eds. Cambridge University Press. Lloyd S (2011). Quantum coherence in biological systems. J. Phys.: Conf. Ser. 302, 012037 Journal of Physics: Conference Series 302 (2011) 012037 doi:10.1088/1742-6596/302/1/012037 Loyd S (2006). Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos, Knopf Doubleday Publishing Group (Random House). Madden, A.D, (2004), Evolution and information, Journal of Documentation, 60(1), 9-23 Margoulus N and Levitin LB (1998). The maximum speed of dynamical evolution. Physica D. 120: 188–195. Matzke D J (1999) Quantum Information Research Supports Consciousness as Information Matzke D J. Information is Protophysical McFadden J (2001). Quantum Biology. Norton, New York Meadow, C.T. and Yuan, W., (1997), Measuring the impact of information: defining Meijer D KF (2012). The Information Universe. On the missing link in concepts on the architecture of reality. Syntropy Journal, 1, pp 1-64 Meijer D K F (2013). Quantum modeling of the mental state: the concept of a cyclic mental workspace. 
Syntropy Journal, (1), pp 1-41 Meijer DKF, Jansen PLM, Groothuis GMM (1999). Hepatobiliary disposition and targeting of Drugs and Genes. Oxford Textbook of of Clinical Hepatology, sect 1-13. vol. 1, Oxford University Press, 87-144. Meijer DKF (2007). Van Meme tot Medicijn: over Boodschap en Beeldvorming in Cultuur en Cognitie, ed. van Baak J (Damon) 99-119 (in Dutch). Miller JG (1978). The Living Systems Theory of James Grier Miller See: Murphy N (2011). Avoiding Neurobiological Reductionism: the role of downward causation in complex systems, in Moral Behavior and Free Will. A Neurological and Philosophical Approach, eds Juan José Sanguineti, Ariberto Acerbi, José Angel Lombo. Nagel T (2012). Minds and Cosmos. Why the materialist neo-darwinian conception of nature is almost certainly false. Oxford Univ. Press. New York. Patel A (2001). Why genetic information processing could have a quantum basis. J. Biosci. 26: 145–151. Penrose R (2004). The Road to Reality. A Complete Guide to the Laws of the Universe, Jonathan Cape Penrose R (2010). Cycles of Time. An Extraordinary New View of the Universe. London: Bodley Head. Popper, K.R., (1979), Objective Knowledge: an evolutionary approach (revised edition), Oxford: Oxford University Press Prigogine I (1997). The end of certaintly: time, chaos. and the new laws of nature. The Free Press New York. Radin DI and Nelson R (2006). Entangled Minds. Extrasensory experiences in the quantum reality. New York: Simon & Schuster Radin, Dean. (1997). The Conscious Universe. The Scientific Truth of Psychic Phenomena. New York: HarperEdge. Roederer J G (2005). Information and its Role in Nature, Springer-Verlag Heidelberg Schempp W (2003). Replication and transcription processes in the molecular biology of gene expressions: control paradigms of the DNA quantum holographic information channel in nanobiotechnology. BioSystems 68: 119–145. Schrödinger E (1959). Mind and Matter. Cambridge: University Press. Seife C (2006). Decoding the Universe. How the new science of information is explaining everything in the cosmos, from our brains to black holes. Penquin books, New York. Shannon, C E. (1948). "A Mathematical Theory of Communication", Bell System Technical Journal, 27, pp. 379–423 & 623–656, July & October, 1948 Shannon, CE & Weaver. (1959). The Mathematical Theory of Communication. Univ.of Illinois Press. Shimony, A. (1997) On mentality, quantum mechanics and the actualization of potentialities, pp. 144-160, Smolin L (2004). Atoms of Space and Time. Scientific. Am. Febr: 43-52. Spinoza (1677/1995). Ethics, in The Collected Works of Spinoza, Princeton: Princeton. Stapp HP (2009). Mind, Matter and Quantum Mechanics, Berlin-Heidelberg: Springer-Verlag. Stapp, HP (2012). Reply to a critic: Mind efforts, quantum zeno effect and environemental decoherence. NeuroQuantology,10: 601-605 Stonier, T., (1990). Information and the internal structure of the universe: an exploration into information physics, London: Springer-Verlag Stonier, T., (1992). Beyond information: the natural history of intelligence, London: Springer-Verlag Stonier, T., (1997). Information and meaning: an evolutionary perspective, London: Springer-Verlag Talja, S., Tuominen, K., and Savolainen, R., (2006), "Isms" in information science: constructivism, collectivism and constructionism. Journal of Documentation, 61(1), 79-101 ‘t Hooft G (2001). The Holographic Principle. Basics and Highlights in Fundamental Physics The Subnuclear Series, Vol. 37; Zuchichi, A., Ed.; World Scientific Singapore; pp 72–100. 
Toyabe S, Sagawa T, Ueda M, Muneyuki E and Sano M (2010). Experimental demonstration of information-toenergy conversion and validation of the generalized Jarzynski equality. Nature Physics, vol.6, pp 988-992, DOI: 10.1038/NPHYS1821 Tegmark M (2008). The mathematical universe. Found. Phys. 38:101-150, arXiv:0704.0646 [gr-qc] Tipler F (1995). The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. New York: Anchor Ultimate Reality, Cambridge: Cambridge University Pres. Umpleby, S (2004). Physical relationships among matter, energy and information", Cybernetics and Systems (Vienna, ) (R. Trappl ed.), vol. 1, Austrian Society for Cybernetic Studies Vaas R. (2004) Time before Time. Classifications of universes in contemporary cosmology, and how to avoid the antinomy of the beginning and eternity of the world. arXiv:physics/0408111 Vannini A (2008). Quantum Models of Consciousness. Quantum Biosystems; 1(2): 165-184. Vedral V (2010). Decoding Reality, University Oxford Press, Oxford, U.K. Vedral V ( 2012). Information and physics. Information, 3, 219-223 Verlinde EP, (2011): On the Origin of Gravity and the Laws of Newton. JHEP 04, 29 Von Neumann J (1963). Collected Works of John von Neumann, Taub, A. H., ed., Pergamon Press Von Baeyer, C., (2004), Information: the new language of science, Harvard MA: Harvard University Press Von Bertalanffy L, (1950). An Outline of General System Theory, British Journal for the Philosophy of Science 1, pp. 139-164 Wheeler J.A. (1990). Information, physics, quantum: the search for links. Complexity, Entropy and the Physics of Information. Zurek, W.H., Ed.; Addison-Wesley, Redwood City, 3–28. Wiener N (1948). Cybernetics. MIT Technology Press. Wigner, E. P. (1960). "The unreasonable effectiveness of mathematics in the natural sciences. Richard Courant lecture in mathematical sciences delivered at New York University, May 11, 1959". Communications on Pure and Applied Mathematics 13: 1–14. Yockey, H.P., (2005), Information theory, evolution and the origin of life, Cambridge: Cambridge University Press Zeilinger A (1999). A Foundational Principle for Quantum Mechanics. Foundations of Physics 29 (4), 63-143. Zeilinger A (2000). Quantum Teleportation. Scientific Am. Febr. 8-16. Update 2003: Zizzi P (2006). Consciousness and Logic in a Quantum-Computing Universe. In: The Emerging Physics of Consciousness. The Frontiers Collection, pp 457-481, chapter 14, DOI: 10.1007/3-540-36723-3_14, Springer, Berlin Books on Information Theory and Quantum physics: Barrow J D, (2004). The Artful Universe expanded. Oxford University Express. Bokulich, A and G Jaeger (2010). Philosophy of Quantum Information and Entanglement. Cambridge: Cambridge University Press. Beauregard, M, (2008). The Spiritual Brain, Ten Have. Bohm D, (1987): Wholeness and the implicate order. Lemniscaat. Boyd B, (2010). Evolution, Literature and Film, a reader. Columbia Univ Press. Chown M (2007). Quantum Theory Cannot Hurt You : A Guide to the Universe. London: Faber & Faber Limited Cover T M., and Thomas J A (2006). Elements of Information Theory, Wiley- Interscience Cox B and Forshaw J (2012). The Quantum Universe: Everything that can happen does happen.. New York: Penguin Books. Close, F. (2011) : The Infinity Puzzle. Quantum Field Theory and the Hunt for an Orderly Universe, Basic books. Davies, P and Gregersen, N H (2010). Information and the Nature of Reality, from Physics to Metaphysics. Cambridge Univ. Press Desurvire E (2009). Classic and Quantum Information Theory. 
Cambridge: Cambridge University Press De Quincey C, (2002): Radical Nature, The Soul of Matter, Park Street.. Ferguson N, (2011): Civilization. The six killer apps of Western Power. Penguin Books. Floridi, L (2010): Information. A very short introduction. Oxford: Oxford University Press Gleick, J (2011) : The Information: A History, a Theory, a Flood. Pantheon Books, New York Greene, B (2011): De Verborgen Realiteit, Parallelle Universums en de Diepere Wetten van de Kosmos, Spectrum. Hall S, (2010): “Wisdom”, Random House. Hofkirchner W (1999). A Quest for a Unified Theory of Information. Overseas Publisher Association Gordon and Breach Publishers, The Netherlands. Kelly K, (2010). What technology wants. Viking. Nielsen M A and Chuang I L (2011). Quantum Computation and Quantum Information: 10th Edition. Cambridge: Cambridge University Press. Nixon T (2011). Quantum information. In: Ramage, M. & D. Chapman (eds.), Perspectives on Information. New York: Routledge Radin DI and Nelson R (2006). Entangled Minds. Extrasensory experiences in the quantum reality. New York: Simon & Schuster Vedral, V (2010): Decoding Reality, The Universe as Quantum Information, Oxford Univ. Press Vidal C (2013). The Beginning and the End: The Meaning of Life in a Cosmological Perspective arXiv:1301.1648 [physics.gen-ph] Watzlawick P, J H Beavin, D D. Jackson (1972): Pragmatics of human communications. New York: Norton 1967. Whitehead, A.N., (1929) Process of Reality. Macmillan, London. Whitehead, A.N. (1933) Adventure of Ideas, Macmillan, London. Wilson E O, (1998): Consilience: the Unity of Knowledge. Harvard University Press
December 28, 2020 : Morning Edition I am brainstorming ideas during NPQG Breakthrough Days. Enjoy. Overnight, it occurred to me that the Planck 'constants' apply to a free Tau dipole, not to a collection of Tau dipoles reacting with one another in a supermassive black hole core. At the Planck energy, a Tau dipole isolated in absolute Euclidean space could still rotate, be it ever so slowly, at one cycle per Planck time. Theoretically, an electrino and a positrino, each with the Planck energy stored (almost entirely) kinetically, could be on a path towards each other that is the perfect path for Coulomb's law and classical mechanics to operate together so that they enter into this tiniest of orbits, where they appear to be rotating while adjacent and the energy is stored almost entirely electromagnetically. The implications of this insight are: • The Planck length Lp may not be the ultimate radius of the dipole rotation when adjacent. • Instead, Lp may be the path length of the closest orbit at adjacency. • If so, then r at adjacency is Lp/τ = Lp/2π. I'll think on that for a while, and if it becomes clear that this is how nature works, I'll adjust the model for this new insight on the natural basis of Lp as the circumference of a planar tau dipole orbit at adjacency. Another reason this makes sense is that if you think of using Lp as an absolute ruler, how would you fix one end of the ruler at the origin inside the point charge? Maybe it makes more sense to measure the circumference, which is accessible. One full orbit of adjacent point charges is theoretically observable (though not practically) in nature, so that suggests Lp is a better fit as that metric. If this line of thinking is correct, the state of zero energy for a dipole is hard to fathom, as is the state of the Planck energy plus one Planck's constant h J⋅s unit of energy. If we examine the electromagnetic field energy in concentric spherical surface layers, then the sum of that field energy in each surface layer would correspond to one Planck's constant h J⋅s unit of energy. For a Tau dipole orbiting at a frequency of 1 cycle per absolute second, the interior of the shell contains one Planck's constant h J⋅s unit of energy and the entire volume of the void background space outside the shell contains one Planck's constant h J⋅s unit of energy. This seems like it would involve mathematical integration. This makes sense, since there are two point charges and a total of two Planck's constant h J⋅s units of energy for a one Hz Tau dipole. Is it always the case that the field energy inside a Tau dipole equals the field energy outside the Tau dipole, i.e., that the total energy of a dipole is always equally split between interior and exterior? Again, let's go back to the adjacent dipole with the Planck energy per point charge. That would suggest that the field inside the spherical layer from Lp/τ to 2Lp/τ contains one Planck's constant h J⋅s unit of energy, and the field energy outside the 2Lp/τ sphere, i.e., in the remainder of the universe, is also one Planck's constant h J⋅s unit of energy. These are interesting thoughts and I need to noodle on them for a while to see if they hold. The next step is to gather all the formulas for the Planck constants and see what they reveal for the adjacent point charge case. We also need to think about what the experiences of the local observer and the Euclidean observer would be.
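For orientation, here is a quick numerical sketch of the Planck-scale quantities mentioned above, computed from CODATA-style SI constants. The interpretation of Lp as the circumference of the adjacent dipole orbit (and hence the radius Lp/2π) is the conjecture made in this post, not established physics.

```python
import math

# CODATA-style constants (SI units)
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8        # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)  # Planck length
t_planck = math.sqrt(hbar * G / c**5)  # Planck time
f_planck = 1.0 / t_planck              # "one cycle per Planck time"

# Conjecture from the post: Lp is an orbital circumference, so the radius is Lp/(2*pi)
r_adjacent = l_planck / (2.0 * math.pi)

print(f"Planck length       Lp = {l_planck:.3e} m")
print(f"Planck time         tp = {t_planck:.3e} s")
print(f"Planck frequency  1/tp = {f_planck:.3e} Hz")
print(f"Conjectured radius   r = {r_adjacent:.3e} m")
```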
In the Euclidean frame we see that the Tau dipole has a frequency of 1 Hz at its lowest per-point-charge energy level of one Planck's constant h J⋅s, as well as at the maximum Planck energy. That is odd to think about. For every energy level of the tau dipole, is the energy divided equally between kinetic and electromagnetic forms? Is this leading towards permittivity and permeability being where the Lorentz factor comes into play? This is promising. Stay tuned. I have been thinking about how the shells of standard matter might be architected. • Is it one shell after another nested all the way down to the payload? • Is it possible that the payload can be trapped between layers or distributed between layers? • Can a shell have a branching factor of sorts and hold 2 or more shells at the same level, much as atoms hold multiple protons and neutrons in containment? • Can a shell have any number of point charge pairs at the same radius? What is stable? How do they behave? • A general architectural pattern is branching with distributed payload at various shell levels. • Perhaps we can prune this pattern and the design will be what remains. Let's keep an open mind about the possibilities for the shell architecture and go looking for clues and patterns that might help us reverse engineer what in tarnation is going on down there. We already know about the Koide formula and have linked it up to NPQG. • Koide formulas appear to describe three orthogonal layers as containment shells for the electron(muon(tau)) and the field interactions between them. • They suggest that the muon and tau are actually contained within the outer shell of the electron. • They suggest that a shell layer can mask or shield the energy, in whole or in part, from the internal layers. Recently there was a breakthrough in the use of a deep neural network to solve the electronic Schrödinger equation. • The preprint is open access on arXiv : Deep neural network solution of the electronic Schrödinger equation. • Named PauliNet, it can scale to medium size molecular systems. • Their research, and especially the wave equations, might provide insight. • Perhaps it would be better to start with wave equations with far less superposition from all the point charges. Wave equations for less complex particle structures may be available. Perhaps by studying them insight can be gained about the shell architecture. The NPQG decoding of stable standard matter in order of total point charges is: • Neutrino : 3/3 • Photon : 6/6 • Electron : 9/3 • Neutron : 18/18 • Proton : 15/21 In 1951 Friedrich Lenz identified an interesting case of numerology potentially related to the ratio of the proton and electron masses (a quick numerical check of both the Koide ratio and Lenz's number is sketched further below). • Lenz wrote what is considered the shortest paper ever published in Physical Review. • The number 6π⁵ is potentially a clue, since we know fermion generations increase in mass by many multiples each generation. • Also the π is promising, since we are considering orbits related to circles and spheres. • It is thought that the proton mass consists primarily of the gluons and quarks. • There is a hint in NPQG that gluons may be tau dipoles, more specifically captured tau neutrinos. • I'm imagining the quarks yielding some or all of their shells, with some of those shells forming the outer containment for the proton and others becoming tau dipoles trapped along with the at least partially de-shelled quarks. • The proton is a 15/21 particle, so that gives up to 15 tau dipoles to distribute as shells or trapped neutrinos.
• The gluon has only two polarization states. Does that indicate two tau dipoles per gluon? • If there are three colors and three anti-colors, that could correspond to a 3 shell with three orthogonal dipoles. What is the implementation of "anti-ness"? (Wikipedia image of a spinor.) • It could be the point charges executing the wave equation forwards vs. backwards. I thought a dipole would have that symmetry though: why would it matter which way it is rotating if it is free? • "If it is free" — is that a clue? Maybe there is more structure than I realized, and that makes a difference to which way each dipole is rotating. • The PauliNet pre-print, discussing a one-dimensional scan of the wave function and its local energy for the LiH molecule, says: "from left the Li nucleus, a spin-up electron, a spin-down electron, and the H nucleus." Is spin-up and spin-down an artifact of the structure? Is it saying that there is a difference based on which way the point charges execute the wave equation? • Only certain structures are stable for any significant length of time. So the wave equations for the point charges in a structure must be just right to maintain stability. • Another idea is that each dipole has a plus and a minus point charge, so perhaps the observations and theory are picking up on that somehow. • If dipoles are cycling in synchronicity, perhaps a conflict arises if some dipoles are at 1/2 frequency, which can happen since it is a spin 1/2 particle. At a 1/2 frequency step the point charges are in the opposite position compared to at integer frequencies. • "QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinors in Nf flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3)." – Wikipedia • Let's examine the image of a spinor to see if we can imagine a point charge / dipole construct that has this behaviour. • Imagine a very high energy tau neutrino, with reduced radius and moving very slowly due to the Euclidean speed of fields dropping quickly at those energies. • Now introduce another tau neutrino with the same energy passing by at close range, but where its slowly moving point charges happen to be oriented 180 degrees away. • Can they form a double dipole of sorts, where we have a tau particle chasing another tau particle but it's really just Coulomb's law at different energies? Can low energy spacetime aether particles exist inside of a composite matter shell? Do they pass through? Does composite matter pass right through the orbits of low energy spacetime aether particles, with their relaxed radii? Remember, spacetime aether particles are extremely non-interactive, but they do have a tiny apparent energy. Yet, close to an energetic particle the aether is gaining energy radiated by the apparent energy (i.e., mass) of the matter. Clearly the aether must be right alongside if not passing right through. Is a photon a neutrino–anti-neutrino pair? I just came across the "Neutrino Theory of Light", which is not mainstream physics. My current decoding of a neutrino is 3/3 and of a photon is 6/6, which can be six dipoles or two groups of three. Most recently I was looking at groups of three orthogonal dipoles, since that may be a repeating pattern. A neutrino is Ψ while an anti-neutrino is . I had coded a photon as ΨΨ but perhaps it is Ψ which is a tau particle containing an anti-tau particle. I don't understand anti-ness yet, but perhaps it is stable inside a regular shell.
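Since both the Koide relation and Lenz's 6π⁵ observation are invoked above, here is the promised numerical check of each, using approximate PDG-style masses. It only reproduces the arithmetic behind the two coincidences; it does not test any NPQG-specific claim.

```python
import math

# Charged-lepton masses in MeV/c^2 (approximate PDG values)
m_e, m_mu, m_tau = 0.5109989, 105.65837, 1776.86

# Koide ratio Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"Koide ratio Q = {Q:.6f}   (compare 2/3 = {2/3:.6f})")

# Lenz (1951): 6*pi^5 versus the measured proton/electron mass ratio
mass_ratio = 1836.152673  # m_p / m_e, CODATA-style value
print(f"6*pi^5    = {6 * math.pi ** 5:.4f}")
print(f"m_p / m_e = {mass_ratio:.4f}")
```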
Einstein is noted for the concept of energy–mass equivalence and the popular equation \mathbf{E=mc^{2}} for an object at rest. A better description is : mass is a measure of how your apparent electromagnetic energy couples with spacetime aether. Your inertial energy is your apparent energy paced by permittivity and permeability. It's that simple. We know \mathbf{c^{2}=\frac{1}{\epsilon \mu }}. We can express mass as \mathbf{m = E \epsilon \mu }. Makes a lot more sense, right? Now consider the electric and magnetic fields expressing your energy. The electric field expression is paced by permittivity \mathbf{\epsilon} and the magnetic field expression is paced by the permeability \mathbf{\mu}. Rest mass is a simple concept and Einstein made it complicated.

In physics, the energy–momentum relation is the relativistic equation relating any object's rest (intrinsic) mass, total energy, and momentum:

\mathbf{E^{2} = (pc)^{2} + (m_{0}c^{2})^{2}}

It holds for a particle having intrinsic rest mass m0, total energy E, and a momentum of magnitude p, where the constant c is the speed of light, assuming the special relativity case of flat spacetime.

What is the energy–momentum equation telling us? Notice the form \mathbf{a^{2} + b^{2} = c^{2}}, which is the form of the Pythagorean theorem for the relationship of the three sides of a right triangle. So the relation is really telling us that these two forms of energy, your rest energy coupling and your moving energy coupling, are orthogonal. They are vector components. You add them as you add vectors. It is as if your energy has a direction.

Your apparent energy at rest is the lossless AC coupling of your composite particles to spacetime aether. If a force is applied to accelerate you, it is causing dipoles in your shells to gain energy. If, after doing work W, the force is removed, you now have momentum and your shells plateau in energy. Why is that not simple addition? Oh, I see, this starts as a very acute triangle. You are not really adding all that much energy to your shells until you start getting fairly near c. Then the energy spikes. This is Einstein's curvy-stretchy spacetime biting us in the ass!

What is happening from our Euclidean observer's point of view as you get close to local c? They are seeing you slow down and shrink, and they notice you have to apply ever increasing amounts of energy for your shells to continue to speed up. Why? Two reasons: Coulomb's law, and the density of the electromagnetic fields causing permittivity and permeability to change. It's like it is saying — your fields are no good here. And that is exactly what is happening, because of the maximum field strength defined by nature. The final step is when the electrino and positrino are adjacent and it essentially looks like they are rolling on each other's immutable spherical surface boundaries. Interestingly, were you able to conduct this experiment in empty Euclidean void space, the dipole would actually appear locally to have a Planck length/tau radius, Planck frequency, Planck mass, and the other Planck constants at the high energy end of the scale. It is only in a Planck core that the final step occurs, which has not been considered before. Once you pack those Planck energy dipoles together, they can not rotate relative to each other, although possibly the entire core could rotate as a whole. What does all this mean? I think it means that in the Euclidean world your apparent energy and your momentum energy add linearly, as would seem natural. We will know soon.
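For reference, the "acute triangle" picture can be checked against the standard special-relativity bookkeeping. The sketch below is a minimal Python illustration under assumed inputs of my own (c set to 1 and an electron rest energy of 0.511 MeV, not values from the post); it evaluates the energy–momentum relation and shows how little kinetic energy accumulates at low speed and how sharply it grows near c.

```python
# Numerical illustration of E^2 = (pc)^2 + (m0 c^2)^2 and the Lorentz factor.
# Units: c = 1, rest energy m0*c^2 = 0.511 MeV (illustrative assumptions).
from math import sqrt

m0c2 = 0.511  # rest energy in MeV

for beta in (0.01, 0.1, 0.5, 0.9, 0.99, 0.999):
    gamma = 1.0 / sqrt(1.0 - beta ** 2)   # Lorentz factor
    pc = gamma * m0c2 * beta              # momentum times c
    E = sqrt(pc ** 2 + m0c2 ** 2)         # energy-momentum relation (equals gamma * m0c2)
    T = E - m0c2                          # kinetic energy added by the applied force
    print(f"beta = {beta:5.3f}  gamma = {gamma:8.3f}  E = {E:8.3f} MeV  T = {T:8.3f} MeV")
```

At β = 0.1 the kinetic term is still below one percent of the rest energy, while at β = 0.999 it is over twenty times the rest energy; that is the acute triangle turning steep.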
J Mark Morris : San Diego : California : December 28, 2020
12 Events That Changed The Course of Quantum Information Science

A Most Intelligent Photo

For many people, the story of quantum computing (QC) and quantum information science (QIS) really began way back in 1927 at the Solvay Conference, in Belgium. Formally the fifth conference ever organized, the topic on the agenda was 'Electrons and Photons'. All the biggest names in physics then — as well as now — were in attendance: Albert Einstein, Niels Bohr, Werner Heisenberg, Paul Dirac, and Erwin Schrödinger, and the photo taken of the event has been dubbed 'the most intelligent picture ever taken'.

[Photo: Solvay Conference 1927]

True to form, some great discussions (though I wasn't there) went on, and the theory of quantum mechanics was developed further until it was properly formalized in the 1930s by the groundbreaking work of Paul Dirac, David Hilbert and John von Neumann. Now, this is not to say that what happened prior to 1927 was unimportant, because it was truly important: starting with the early 19th century's 'double-slit-like' experiment by Thomas Young, Michael Faraday's 1838 discovery of cathode rays, Gustav Kirchhoff's black-body radiation dilemma a generation later, and Ludwig Boltzmann's work, until we come to Max Planck's seminal quantum hypothesis in 1900, the foundations were laid for what the geniuses at Solvay discussed in the late 1920s.

All this, it turned out, would be very good for the development — starting some fifty years later — of quantum information theory. Before that, though, quantum mechanics would have a darker side, being the science that finally produced the atomic bomb that flattened Hiroshima and Nagasaki, thankfully ending the Second World War but killing thousands of innocents in the process. Yet, out of the bad came the good again: a new theory, a way of manipulating the smallest particles known to man for our own benefit, quantum information science (QIS) and quantum computing (QC).

I will soon list, in no particular order of importance, 12 of the most pioneering discoveries/theories/events in the history of QIS. Although I list these twelve, there are many that — for the sake of time — I have unfortunately left out. Since the mid-2010s, there has been an exponential speed-up in the rate of discoveries within the space. From Stephen Wiesner's work in the late 1960s up to about 2006, you would have been lucky to get three or four important events a year. Now — there seem to be one or two 'incredible breakthroughs' a week. In the grand scheme of things, though they may seem important at the time (and TQD will continue delivering important breaking news and research work done), they probably aren't as earth-shattering to the development of the industry as the ones about to be listed. They say hindsight is a clever man, and TQD is sure the best discoveries in QIS are still to come. For now, though, we only have the grand past to work with.

1. Alexander Holevo's paper (1973)

Let's start off in the early days, shall we, with Russian mathematician and one of the pioneers of QIS, Alexander Holevo. A member of the Steklov Mathematical Institute since the late 1960s and a prolific author, he is now a professor at Moscow State University and the Moscow Institute of Physics and Technology. Holevo is famous for his eponymous theorem, Holevo's theorem (or Holevo's bound), which shows that n qubits are able to carry more than n classical bits of information, but at most n classical bits are accessible.
The paper, published in 1973, proved coding theorems in quantum information theory while also revealing the structure of quantum Markov semigroups and measurement processes.

2. Lov Grover invents the quantum database search algorithm while at Bell Labs (1996)

Indian-American computer scientist Lov Grover is the bright mind who came up with the database search algorithm used in quantum computing that is named after him. The quadratic speedup it offers is not as dramatic as the speedup for factoring, discrete logs, or physics simulations, yet a quantum computer has many advantages over a classical computer, one of which is its superior speed when searching databases. Grover's algorithm proves this and can be applied to problems that can only be solved by random searches, ones that can take advantage of this quadratic speedup (a toy simulation of this speedup appears after item 4 below).

3. The founding of D-Wave Systems (1999)

The company may have had its critics in the past with its quantum annealing approach to the core IP technology where optimization problems abound, but you've got to give it to the Burnaby, British Columbia company for starting a trend. Being the Apple of your industry — and all the kudos it can bring — hasn't gone to their heads: twenty-two years after its incorporation, D-Wave Systems has released four iterations of its quantum computer, the last one coming in 2017, the D-Wave 2000Q, and has such prestigious customers as Lockheed Martin, Los Alamos National Laboratory, Google/NASA/USRA, and Volkswagen to back up the claim they are a reliable company with a viable product. The company's next-generation Advantage quantum computer, powered by the Pegasus P16 quantum processor chip, has over 5000 qubits with impressive 15-way qubit connectivity. Led these days by CEO Alan Baratz, whatever the outcome, founders Haig Farris, Geordie Rose, Bob Wiens, and Alexandre Zagoskin have written D-Wave Systems and themselves into the QC history books.

4. Google's quantum computer research team's Quantum Supremacy claim (2019)

'We trust the media too much these days,' says a journalist working at TQD, yet with all the cries of Twitter, Facebook, Linkedin and other social media sites — plus the input from the more veritable tech publications like TechCrunch, TNW, PC Mag, Tech Republic, Venture Beat and my favourite Wired — the screams create a hullabaloo of such intensity that the tremors ripple unabated, not always earned, unfortunately. Google's claim to attaining Quantum Supremacy last year had many geeks excited; others, less so. The announcement was preceded by Google's director of engineering and distinguished scientist Hartmut Neven's prophetic proclamation only a few months before that suggested quantum supremacy could occur sometime in 2019. It did. And the result? Neven and his law will live forever. Whether it's true or not, what Google did was force the other global corporations like IBM and Microsoft to take note and raise their game — because it worked: things are heading to the next level as far as QC is concerned. Quantum Supremacy could mean one thing to you and another to me, but as the German philosopher Hegel would have it, the dialectic runs problem → reaction → solution.
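Returning to item 2 for a moment: Grover's quadratic speedup is easy to see in a toy statevector simulation. The sketch below is a minimal NumPy illustration under assumptions of my own (three qubits, a "database" of N = 8 entries, and an arbitrarily chosen marked index); it is not code from the article. Roughly π/4·√N ≈ 2 oracle calls push the probability of reading out the marked entry above 94%, where a classical random search would expect about N/2 = 4 lookups.

```python
# Toy statevector simulation of Grover's search over N = 8 database entries.
import numpy as np

n = 3                               # number of qubits
N = 2 ** n                          # size of the "database"
marked = 5                          # index of the entry we are searching for (arbitrary)

psi = np.full(N, 1 / np.sqrt(N))    # uniform superposition over all entries

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ pi/4 * sqrt(N) oracle queries
for _ in range(iterations):
    psi[marked] *= -1               # oracle: flip the sign of the marked amplitude
    psi = 2 * psi.mean() - psi      # diffusion: reflect every amplitude about the mean

print(f"{iterations} Grover iterations -> P(marked) = {abs(psi[marked]) ** 2:.3f}")
```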
5. Roman Stanisław Ingarden's 'Quantum Information Theory' paper (1976)

While Apple's founding fathers Steve Jobs and Steve Wozniak were forming the groundwork for the personal computer in a Los Altos, California garage, a fifty-six-year-old Polish professor obsessed with Japan, Roman Stanisław Ingarden, had published the groundbreaking (though little known) paper Quantum Information Theory. In it, he argued that the mathematical theory of communication developed by Claude Shannon, the father of information theory, does not hold when talking about quantum information.

[Photo: Roman Stanisław Ingarden]

Like for like, then. Although Ingarden was a trained physicist, I don't suppose he realized at the time of writing that his theory would go a long way in changing the future.

6. Paul Benioff's description of the quantum mechanical model of a computer and Yuri Manin's contribution to the field (1980)

I've mentioned two events here in one year, as both of them were seminal in the development of quantum information science and QC. If one had to choose, then 1980 would have to be quantum information theory's annus mirabilis, for the simple reason that two groundbreaking events took place over a 365-day period. By the 1970s, American physicist Paul Benioff had begun research into the theoretical possibilities of a viable working quantum computer. From that research came an important paper, The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines, published in the Journal of Statistical Physics. In it, he details how his machine model could work under the laws of quantum mechanics by giving a Schrödinger equation description of Turing machines. Benioff's work came at roughly the same time as that of Soviet physicist Yuri Manin, who, in his influential book Computable and Uncomputable (first published in Russian), described his own ideas about a working quantum computer.

7. The First Conference on the Physics of Computation at MIT (1981)

If the Solvay Conference photo of 1927 was the most intelligent picture ever taken, then the group shot photographed at Endicott House, in Dedham, Massachusetts, in the late spring of 1981, has to come in second. MIT was the place to be that year if quantum computation was your thing: it was where the First Conference on the Physics of Computation took place. Names like Freeman Dyson, Tom Toffoli, John Wheeler, Frederick Kantor, Konrad Zuse and other luminaries were in attendance — so were physics geniuses Richard Feynman (Wheeler's student) and Paul Benioff (again). Maybe what went on there over those three days in 1981 had something to do with the ambience of Endicott House, so aptly put by Jan Wampler, MIT Architecture Professor Emeritus: "For many years, I assigned a project to my undergraduate students in architecture to design and build a structure to keep the rain, cold and wind out while sleeping over for the night. Endicott was the perfect place to do this. They slept the night in their structures and early in the morning had breakfast. This taught them that design is important, but testing with the elements was most important. Of all the exercises I gave, this one has always been a highlight of the semester. Without the staff of Endicott and the beautiful setting, I would not have been able to give out this exciting assignment.
Additionally, I have been at conferences of my department hosted at Endicott, the perfect place to get away from the urban setting of MIT and experience the beautiful setting of Endicott. This is true either in fall, winter or spring. Endicott has always been a special place for conferences."

Whatever Wampler's opinion, the conference was to change the face of quantum information theory forever and lead to, some three decades later, a technological revolution. Well, anyway, during the conference Benioff and Feynman gave individual talks on quantum computing. Based on his paper from the year before, Quantum mechanical Hamiltonian models of discrete processes that erase their own histories: application to Turing machines, Benioff discussed how a computer could operate under the laws of quantum mechanics. When Feynman — the brash, charismatic Caltech professor who had worked on the Manhattan project — stood up to talk, he argued that it seemed nigh on impossible to efficiently simulate the evolution of a quantum system on a classical computer, and his solution was to propose a basic model for a quantum computer instead. These two speeches amount to a watershed moment in the history of quantum computers, there's no denying that.

8. The National Quantum Initiative Act (2018)

Love him or loathe him, outgoing President Donald Trump may have caused political havoc in the US and globally for many, but this sole policy act in late 2018 gets the thumbs up from TQD at least. The National Quantum Initiative Act is a sure sign that the United States — the world's leading quantum power — is serious about the technology and wants to compete with its nearest rival: the People's Republic of China. A competitive beast by nature, let's hope Trump's actions will be continued by his successor, the Democrat Joe Biden. Under the bill, the National Quantum Initiative Act calls for:

[…] the Subcommittee on Quantum Information Science of the NSTC to: coordinate the QIST research and education activities and programs of the federal agencies; recommend federal infrastructure needs to support the program; and evaluate opportunities for international cooperation with strategic allies.

The initial ten-year plan is a counter to China's already developing quantum information science technology (QIST) sector, but will also open the eyes of other regions, namely Europe and Russia.

9. Peter Shor formulates his important algorithm (1994)

American mathematician Peter Shor, while working at AT&T's Bell Labs in New Jersey (what's in the water there?), discovered one of the most important algorithms in quantum computing. The magical polynomial-time algorithm lets a quantum computer calculate integer factorization (given an integer n, find its prime factors) very quickly; a toy sketch of the classical post-processing that turns a quantum period-finding result into factors follows this item. To date, IBM's circuit-based Q System One quantum computer factored the largest number (35) using Shor's algorithm, though larger numbers have been factored by quantum computers implementing algorithms based on the quantum annealing subset. While Shor's efforts may seem unimportant to the uninitiated, it is noteworthy as it means that public-key cryptography based on the Rivest–Shamir–Adleman (RSA) algorithm could be broken, leading to a new cryptographic paradigm.
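The heavy lifting in Shor's algorithm is quantum order finding; once the order r of a number a modulo N is known, the factors fall out of simple number theory. The sketch below is a minimal Python illustration with the textbook example N = 15 and a = 7 (my choice of inputs, not something from the article); the order is found here by brute force purely to stand in for the quantum subroutine.

```python
# Classical post-processing of Shor's algorithm, with the order found by brute force.
from math import gcd

N, a = 15, 7                       # textbook example; gcd(a, N) must be 1

r = 1
while pow(a, r, N) != 1:           # order finding: the step a quantum computer speeds up
    r += 1

# For even r with a^(r/2) != -1 (mod N), the factors follow from two gcd computations.
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"order r = {r}; {N} = {p} x {q}")   # prints: order r = 4; 15 = 3 x 5
```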
David Deutsch’s account of a universal quantum computer (1985)  In his 1997 book The Fabric of Reality, University of Oxford physicist David Deutsch claims: “Quantum computation is … nothing less than a distinctly new way of harnessing nature … It will be the first technology that allows useful tasks to be performed in collaboration between parallel universes, and then sharing the results.” That journey, though, started more than a decade before in 1985, when this pioneer of quantum computation first detailed his thoughts on the subject in his paper Quantum theory, the Church–Turing principle and the universal quantum computer. In it, Deutsch put forward that the Church–Turing hypothesis of ‘every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means’, while also proposing the adoption of entangled states and Bell’s theorem for quantum key distribution (QKD). David Deutsch Famous for his multiverse theory inline with the many-worlds interpretation (MWI) of Hugh Everett III and some other scientific hypotheses many experts criticize as being ‘wacky’, nobody today can deny Deutsch’s importance in the field of QIT. 11. The founding of 1QBit, the world’s first dedicated quantum computing software company (2012) Every hardware company needs its software equivalent, and 1QBit — a Canadian quantum hardware-agnostic company founded in 2012 — complies with that detail. Founders Andrew Fursman and Landon Downs ‘recognized the possibilities quantum computers were about to unlock and embarked on building the expertise and technology required to connect industry problems to this new hardware.’ Nine years after that realization, 1QBit has gone from strength to strength and is now one of the most respected in the industry. 1QBit, the Computer Usage Company (CUC) of the quantum age. 12. Formulation of the Deutsch–Jozsa algorithm (1992) Like his esteemed colleague Paul Benioff, David Deutsch has managed to sneak himself on to two events that have shaped the history of quantum information theory. In 1992, along with Australian mathematician Richard Jozsa, the pair came up with the Deutsch–Jozsa algorithm, one designed that was ‘exponentially faster than any possible deterministic classical algorithm’. With Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca making improvements to it in 1998, it is rather impractical these days but still goes to show the power of men’s minds and how, even back in the late 1990s, the idea of quantum computers and their potential was at the forefront of a handful of great thinkers’ minds. Parting Words I hope you liked TQD’s list, and as I mentioned earlier, there were many events for which I had no room: Stephen Wiesner’s conjugate coding, for example. Alexei Kitaev’s work and Shor’s algorithm execution for the first time at IBM’s Almaden Research Center. Artur Ekert’s efforts. The opening of the Institute for Quantum Computing at the University of Waterloo in the early 2000s and two dozen more. Shunning groundbreaking work is hard, but if I were to mention them all, I’d have written an article with the word count of Atlas Shrugged, and who’d have wanted that? James Dargan James Dargan James Dargan is a contributor at The Quantum Daily. His focus is on the QC startup ecosystem and he writes articles on the space that have a tone accessible to the average reader Related articles The Quantum Daily
TY - JOUR AB - High-throughput live-cell screens are intricate elements of systems biology studies and drug discovery pipelines. Here, we demonstrate an optogenetics-assisted method that avoids the need for chemical activators and reporters, reduces the number of operational steps and increases information content in a cell-based small-molecule screen against human protein kinases, including an orphan receptor tyrosine kinase. This blueprint for all-optical screening can be adapted to many drug targets and cellular processes. AU - Inglés Prieto, Álvaro AU - Gschaider-Reichhart, Eva AU - Muellner, Markus AU - Nowak, Matthias AU - Nijman, Sebastian AU - Grusch, Michael AU - Janovjak, Harald L ID - 1678 IS - 12 JF - Nature Chemical Biology TI - Light-assisted small-molecule screening against protein kinases VL - 11 ER - TY - JOUR AU - Lemoult, Grégoire M AU - Maier, Philipp AU - Hof, Björn ID - 1679 IS - 9 JF - Physics of Fluids TI - Taylor's Forest VL - 27 ER - TY - JOUR AB - We consider the satisfiability problem for modal logic over first-order definable classes of frames.We confirm the conjecture from Hemaspaandra and Schnoor [2008] that modal logic is decidable over classes definable by universal Horn formulae. We provide a full classification of Horn formulae with respect to the complexity of the corresponding satisfiability problem. It turns out, that except for the trivial case of inconsistent formulae, local satisfiability is eitherNP-complete or PSPACE-complete, and global satisfiability is NP-complete, PSPACE-complete, or ExpTime-complete. We also show that the finite satisfiability problem for modal logic over Horn definable classes of frames is decidable. On the negative side, we show undecidability of two related problems. First, we exhibit a simple universal three-variable formula defining the class of frames over which modal logic is undecidable. Second, we consider the satisfiability problem of bimodal logic over Horn definable classes of frames, and also present a formula leading to undecidability. AU - Michaliszyn, Jakub AU - Otop, Jan AU - Kieroňski, Emanuel ID - 1680 IS - 1 JF - ACM Transactions on Computational Logic TI - On the decidability of elementary modal logics VL - 17 ER - TY - JOUR AB - In many social situations, individuals endeavor to find the single best possible partner, but are constrained to evaluate the candidates in sequence. Examples include the search for mates, economic partnerships, or any other long-term ties where the choice to interact involves two parties. Surprisingly, however, previous theoretical work on mutual choice problems focuses on finding equilibrium solutions, while ignoring the evolutionary dynamics of decisions. Empirically, this may be of high importance, as some equilibrium solutions can never be reached unless the population undergoes radical changes and a sufficient number of individuals change their decisions simultaneously. To address this question, we apply a mutual choice sequential search problem in an evolutionary game-theoretical model that allows one to find solutions that are favored by evolution. As an example, we study the influence of sequential search on the evolutionary dynamics of cooperation. For this, we focus on the classic snowdrift game and the prisoner’s dilemma game. 
AU - Priklopil, Tadeas AU - Chatterjee, Krishnendu ID - 1681 IS - 4 JF - Games TI - Evolution of decisions in population games with sequentially searching individuals VL - 6 ER - TY - JOUR AB - We study the problem of robust satisfiability of systems of nonlinear equations, namely, whether for a given continuous function f:K→ ℝn on a finite simplicial complex K and α > 0, it holds that each function g: K → ℝn such that ||g - f || ∞ < α, has a root in K. Via a reduction to the extension problem of maps into a sphere, we particularly show that this problem is decidable in polynomial time for every fixed n, assuming dimK ≤ 2n - 3. This is a substantial extension of previous computational applications of topological degree and related concepts in numerical and interval analysis. Via a reverse reduction, we prove that the problem is undecidable when dim K > 2n - 2, where the threshold comes from the stable range in homotopy theory. For the lucidity of our exposition, we focus on the setting when f is simplexwise linear. Such functions can approximate general continuous functions, and thus we get approximation schemes and undecidability of the robust satisfiability in other possible settings. AU - Franek, Peter AU - Krcál, Marek ID - 1682 IS - 4 JF - Journal of the ACM TI - Robust satisfiability of systems of equations VL - 62 ER - TY - JOUR AB - The 1 MDa, 45-subunit proton-pumping NADH-ubiquinone oxidoreductase (complex I) is the largest complex of the mitochondrial electron transport chain. The molecular mechanism of complex I is central to the metabolism of cells, but has yet to be fully characterized. The last two years have seen steady progress towards this goal with the first atomic-resolution structure of the entire bacterial complex I, a 5 Å cryo-electron microscopy map of bovine mitochondrial complex I and a ∼3.8 Å resolution X-ray crystallographic study of mitochondrial complex I from yeast Yarrowia lipotytica. In this review we will discuss what we have learned from these studies and what remains to be elucidated. AU - Letts, Jame A AU - Sazanov, Leonid A ID - 1683 IS - 8 JF - Current Opinion in Structural Biology TI - Gaining mass: The structure of respiratory complex I-from bacterial towards mitochondrial versions VL - 33 ER - TY - JOUR AB - Many species groups, including mammals and many insects, determine sex using heteromorphic sex chromosomes. Diptera flies, which include the model Drosophila melanogaster, generally have XY sex chromosomes and a conserved karyotype consisting of six chromosomal arms (five large rods and a small dot), but superficially similar karyotypes may conceal the true extent of sex chromosome variation. Here, we use whole-genome analysis in 37 fly species belonging to 22 different families of Diptera and uncover tremendous hidden diversity in sex chromosome karyotypes among flies. We identify over a dozen different sex chromosome configurations, and the small dot chromosome is repeatedly used as the sex chromosome, which presumably reflects the ancestral karyotype of higher Diptera. However, we identify species with undifferentiated sex chromosomes, others in which a different chromosome replaced the dot as a sex chromosome or in which up to three chromosomal elements became incorporated into the sex chromosomes, and others yet with female heterogamety (ZW sex chromosomes). Transcriptome analysis shows that dosage compensation has evolved multiple times in flies, consistently through up-regulation of the single X in males. 
However, X chromosomes generally show a deficiency of genes with male-biased expression, possibly reflecting sex-specific selective pressures. These species thus provide a rich resource to study sex chromosome biology in a comparative manner and show that similar selective forces have shaped the unique evolution of sex chromosomes in diverse fly taxa. AU - Vicoso, Beatriz AU - Bachtrog, Doris ID - 1684 IS - 4 JF - PLoS Biology TI - Numerous transitions of sex chromosomes in Diptera VL - 13 ER - TY - CONF AB - Given a graph G cellularly embedded on a surface Σ of genus g, a cut graph is a subgraph of G such that cutting Σ along G yields a topological disk. We provide a fixed parameter tractable approximation scheme for the problem of computing the shortest cut graph, that is, for any ε > 0, we show how to compute a (1 + ε) approximation of the shortest cut graph in time f(ε, g)n3. Our techniques first rely on the computation of a spanner for the problem using the technique of brick decompositions, to reduce the problem to the case of bounded tree-width. Then, to solve the bounded tree-width case, we introduce a variant of the surface-cut decomposition of Rué, Sau and Thilikos, which may be of independent interest. AU - Cohen Addad, Vincent AU - De Mesmay, Arnaud N ID - 1685 TI - A fixed parameter tractable approximation scheme for the optimal cut graph of a surface VL - 9294 ER - TY - JOUR AB - Circumferential skin creases Kunze type (CSC-KT) is a specific congenital entity with an unknown genetic cause. The disease phenotype comprises characteristic circumferential skin creases accompanied by intellectual disability, a cleft palate, short stature, and dysmorphic features. Here, we report that mutations in either MAPRE2 or TUBB underlie the genetic origin of this syndrome. MAPRE2 encodes a member of the microtubule end-binding family of proteins that bind to the guanosine triphosphate cap at growing microtubule plus ends, and TUBB encodes a β-tubulin isotype that is expressed abundantly in the developing brain. Functional analyses of the TUBB mutants show multiple defects in the chaperone-dependent tubulin heterodimer folding and assembly pathway that leads to a compromised yield of native heterodimers. The TUBB mutations also have an impact on microtubule dynamics. For MAPRE2, we show that the mutations result in enhanced MAPRE2 binding to microtubules, implying an increased dwell time at microtubule plus ends. Further, in vivo analysis of MAPRE2 mutations in a zebrafish model of craniofacial development shows that the variants most likely perturb the patterning of branchial arches, either through excessive activity (under a recessive paradigm) or through haploinsufficiency (dominant de novo paradigm). Taken together, our data add CSC-KT to the growing list of tubulinopathies and highlight how multiple inheritance paradigms can affect dosage-sensitive biological systems so as to result in the same clinical defect. 
AU - Isrie, Mala AU - Breuss, Martin AU - Tian, Guoling AU - Hansen, Andi H AU - Cristofoli, Francesca AU - Morandell, Jasmin AU - Kupchinsky, Zachari A AU - Sifrim, Alejandro AU - Rodriguez Rodriguez, Celia AU - Dapena, Elena P AU - Doonanco, Kurston AU - Leonard, Norma AU - Tinsa, Faten AU - Moortgat, Stéphanie AU - Ulucan, Hakan AU - Koparir, Erkan AU - Karaca, Ender AU - Katsanis, Nicholas AU - Marton, Valeria AU - Vermeesch, Joris R AU - Davis, Erica E AU - Cowan, Nicholas J AU - Keays, David AU - Van Esch, Hilde ID - 1106 IS - 6 JF - The American Journal of Human Genetics TI - Mutations in either TUBB or MAPRE2 cause circumferential skin creases Kunze type VL - 97 ER - TY - JOUR AB - Clustering of fine particles is of crucial importance in settings ranging from the early stages of planet formation to the coagulation of industrial powders and airborne pollutants. Models of such clustering typically focus on inelastic deformation and cohesion. However, even in charge-neutral particle systems comprising grains of the same dielectric material, tribocharging can generate large amounts of net positive or negative charge on individual particles, resulting in long-range electrostatic forces. The effects of such forces on cluster formation are not well understood and have so far not been studied in situ. Here we report the first observations of individual collide-and-capture events between charged submillimetre particles, including Kepler-like orbits. Charged particles can become trapped in their mutual electrostatic energy well and aggregate via multiple bounces. This enables the initiation of clustering at relative velocities much larger than the upper limit for sticking after a head-on collision, a long-standing issue known from pre-planetary dust aggregation. Moreover, Coulomb interactions together with dielectric polarization are found to stabilize characteristic molecule-like configurations, providing new insights for the modelling of clustering dynamics in a wide range of microscopic dielectric systems, such as charged polarizable ions, biomolecules and colloids. AU - Lee, Victor AU - Waitukaitis, Scott R AU - Miskin, Marc AU - Jaeger, Heinrich ID - 120 IS - 9 JF - Nature Physics TI - Direct observation of particle interactions and clustering in charged granular streams VL - 11 ER - TY - JOUR AB - We show that the simplest building blocks of origami-based materials - rigid, degree-four vertices - are generically multistable. The existence of two distinct branches of folding motion emerging from the flat state suggests at least bistability, but we show how nonlinearities in the folding motions allow generic vertex geometries to have as many as five stable states. In special geometries with collinear folds and symmetry, more branches emerge leading to as many as six stable states. Tuning the fold energy parameters, we show how monostability is also possible. Finally, we show how to program the stability features of a single vertex into a periodic fold tessellation. The resulting metasheets provide a previously unanticipated functionality - tunable and switchable shape and size via multistability. AU - Waitukaitis, Scott R AU - Menaut, Rémi AU - Chen, Bryan AU - Van Hecke, Martin ID - 121 IS - 5 JF - APS Physics, Physical Review Letters TI - Origami multistability: From single vertices to metasheets VL - 114 ER - TY - JOUR AB - The factors that determine the tempo and mode of protein evolution continue to be a central question in molecular evolution. 
Traditionally, studies of protein evolution focused on the rates of amino acid substitutions. More recently, with the availability of sequence data and advanced experimental techniques, the focus of attention has shifted toward the study of evolutionary trajectories and the overall layout of protein fitness landscapes. In this review we describe the effect of epistasis on the topology of evolutionary pathways that are likely to be found in fitness landscapes and develop a simple theory to connect the number of maladapted genotypes to the topology of fitness landscapes with epistatic interactions. Finally, we review recent studies that have probed the extent of epistatic interactions and have begun to chart the fitness landscapes in protein sequence space. AU - Kondrashov, Dmitry A AU - Fyodor Kondrashov ID - 886 IS - 1 JF - Trends in Genetics TI - Topological features of rugged fitness landscapes in sequence space VL - 31 ER - TY - JOUR AB - MCM2 is a subunit of the replicative helicase machinery shown to interact with histones H3 and H4 during the replication process through its N-terminal domain. During replication, this interaction has been proposed to assist disassembly and assembly of nucleosomes on DNA. However, how this interaction participates in crosstalk with histone chaperones at the replication fork remains to be elucidated. Here, we solved the crystal structure of the ternary complex between the histone-binding domain of Mcm2 and the histones H3-H4 at 2.9 Å resolution. Histones H3 and H4 assemble as a tetramer in the crystal structure, but MCM2 interacts only with a single molecule of H3-H4. The latter interaction exploits binding surfaces that contact either DNA or H2B when H3-H4 dimers are incorporated in the nucleosome core particle. Upon binding of the ternary complex with the histone chaperone ASF1, the histone tetramer dissociates and both MCM2 and ASF1 interact simultaneously with the histones forming a 1:1:1:1 heteromeric complex. Thermodynamic analysis of the quaternary complex together with structural modeling support that ASF1 and MCM2 could form a chaperoning module for histones H3 and H4 protecting them from promiscuous interactions. This suggests an additional function for MCM2 outside its helicase function as a proper histone chaperone connected to the replication pathway. AU - Richet, Nicolas AU - Liu, Danni AU - Legrand, Pierre AU - Velours, Christophe AU - Corpet, Armelle AU - Gaubert, Albane AU - Bakail, May M AU - Moal-Raisin, Gwenaelle AU - Guerois, Raphael AU - Compper, Christel AU - Besle, Arthur AU - Guichard, Berengère AU - Almouzni, Genevieve AU - Ochsenbein, Françoise ID - 9017 IS - 3 JF - Nucleic Acids Research SN - 1362-4962 TI - Structural insight into how the human helicase subunit MCM2 may act as a histone chaperone together with ASF1 at the replication fork VL - 43 ER - TY - JOUR AB - Motility is a basic feature of living microorganisms, and how it works is often determined by environmental cues. Recent efforts have focused on developing artificial systems that can mimic microorganisms, in particular their self-propulsion. We report on the design and characterization of synthetic self-propelled particles that migrate upstream, known as positive rheotaxis. This phenomenon results from a purely physical mechanism involving the interplay between the polarity of the particles and their alignment by a viscous torque. We show quantitative agreement between experimental data and a simple model of an overdamped Brownian pendulum. 
The model notably predicts the existence of a stagnation point in a diverging flow. We take advantage of this property to demonstrate that our active particles can sense and predictably organize in an imposed flow. Our colloidal system represents an important step toward the realization of biomimetic microsystems with the ability to sense and respond to environmental changes. AU - Palacci, Jérémie A AU - Sacanna, Stefano AU - Abramian, Anaïs AU - Barral, Jérémie AU - Hanson, Kasey AU - Grosberg, Alexander Y. AU - Pine, David J. AU - Chaikin, Paul M. ID - 9057 IS - 4 JF - Science Advances SN - 2375-2548 TI - Artificial rheotaxis VL - 1 ER - TY - JOUR AB - The origin and evolution of novel biochemical functions remains one of the key questions in molecular evolution. We study recently emerged methacrylate reductase function that is thought to have emerged in the last century and reported in Geobacter sulfurreducens strain AM-1. We report the sequence and study the evolution of the operon coding for the flavin-containing methacrylate reductase (Mrd) and tetraheme cytochrome (Mcc) in the genome of G. sulfurreducens AM-1. Different types of signal peptides in functionally interlinked proteins Mrd and Mcc suggest a possible complex mechanism of biogenesis for chromoproteids of the methacrylate redox system. The homologs of the Mrd and Mcc sequence found in δ-Proteobacteria and Deferribacteres are also organized into an operon and their phylogenetic distribution suggested that these two genes tend to be horizontally transferred together. Specifically, the mrd and mcc genes from G. sulfurreducens AM-1 are not monophyletic with any of the homologs found in other Geobacter genomes. The acquisition of methacrylate reductase function by G. sulfurreducens AM-1 appears linked to a horizontal gene transfer event. However, the new function of the products of mrd and mcc may have evolved either prior or subsequent to their acquisition by G. sulfurreducens AM-1. AU - Arkhipova, Oksana V AU - Meer, Margarita V AU - Mikoulinskaia, Galina V AU - Zakharova, Marina V AU - Galushko, Alexander S AU - Akimenko, Vasilii K AU - Fyodor Kondrashov ID - 906 IS - 5 JF - PLoS One TI - Recent origin of the methacrylate redox system in Geobacter sulfurreducens AM-1 through horizontal gene transfer VL - 10 ER - TY - JOUR AB - The breaking of internal tides is believed to provide a large part of the power needed to mix the abyssal ocean and sustain the meridional overturning circulation. Both the fraction of internal tide energy that is dissipated locally and the resulting vertical mixing distribution are crucial for the ocean state, but remain poorly quantified. Here we present a first worldwide estimate of mixing due to internal tides generated at small‐scale abyssal hills. Our estimate is based on linear wave theory, a nonlinear parameterization for wave breaking and uses quasi‐global small‐scale abyssal hill bathymetry, stratification, and tidal data. We show that a large fraction of abyssal‐hill generated internal tide energy is locally dissipated over mid‐ocean ridges in the Southern Hemisphere. Significant dissipation occurs above ridge crests, and, upon rescaling by the local stratification, follows a monotonic exponential decay with height off the bottom, with a nonuniform decay scale. We however show that a substantial part of the dissipation occurs over the smoother flanks of mid‐ocean ridges, and exhibits a middepth maximum due to the interplay of wave amplitude with stratification. 
We link the three‐dimensional map of dissipation to abyssal hills characteristics, ocean stratification, and tidal forcing, and discuss its potential implementation in time‐evolving parameterizations for global climate models. Current tidal parameterizations only account for waves generated at large‐scale satellite‐resolved bathymetry. Our results suggest that the presence of small‐scale, mostly unresolved abyssal hills could significantly enhance the spatial inhomogeneity of tidal mixing, particularly above mid‐ocean ridges in the Southern Hemisphere. AU - Lefauve, Adrien AU - MULLER, Caroline J AU - Melet, Angélique ID - 9141 IS - 7 JF - Journal of Geophysical Research: Oceans SN - 2169-9275 TI - A three-dimensional map of tidal dissipation over abyssal hills VL - 120 ER - TY - JOUR AB - This paper presents a numerical study of a Capillary Pumped Loop evaporator. A two-dimensional unsteady mathematical model of a flat evaporator is developed to simulate heat and mass transfer in unsaturated porous wick with phase change. The liquid-vapor phase change inside the porous wick is described by Langmuir's law. The governing equations are solved by the Finite Element Method. The results are presented then for a sintered nickel wick and methanol as a working fluid. The heat flux required to the transition from the all-liquid wick to the vapor-liquid wick is calculated. The dynamic and thermodynamic behavior of the working fluid in the capillary structure are discussed in this paper. AU - Boubaker, Riadh AU - Platel, Vincent AU - Bergès, Alexis AU - Bancelin, Mathieu AU - Hannezo, Edouard B ID - 924 JF - Applied Thermal Engineering TI - Dynamic model of heat and mass transfer in an unsaturated porous wick of capillary pumped loop VL - 76 ER - TY - JOUR AB - The actomyosin cytoskeleton is a primary force-generating mechanism in morphogenesis, thus a robust spatial control of cytoskeletal positioning is essential. In this report, we demonstrate that actomyosin contractility and planar cell polarity (PCP) interact in post-mitotic Ciona notochord cells to self-assemble and reposition actomyosin rings, which play an essential role for cell elongation. Intriguingly, rings always form at the cells′ anterior edge before migrating towards the center as contractility increases, reflecting a novel dynamical property of the cortex. Our drug and genetic manipulations uncover a tug-of-war between contractility, which localizes cortical flows toward the equator and PCP, which tries to reposition them. We develop a simple model of the physical forces underlying this tug-of-war, which quantitatively reproduces our results. We thus propose a quantitative framework for dissecting the relative contribution of contractility and PCP to the self-assembly and repositioning of cytoskeletal structures, which should be applicable to other morphogenetic events. AU - Sehring, Ivonne AU - Recho, Pierre AU - Denker, Elsa AU - Kourakis, Matthew AU - Mathiesen, Birthe AU - Hannezo, Edouard B AU - Dong, Bo AU - Jiang, Di ID - 928 JF - eLife TI - Assembly and positioning of actomyosin rings by contractility and planar cell polarity VL - 4 ER - TY - JOUR AB - An essential question of morphogenesis is how patterns arise without preexisting positional information, as inspired by Turing. In the past few years, cytoskeletal flows in the cell cortex have been identified as a key mechanism of molecular patterning at the subcellular level. 
Theoretical and in vitro studies have suggested that biological polymers such as actomyosin gels have the property to self-organize, but the applicability of this concept in an in vivo setting remains unclear. Here, we report that the regular spacing pattern of supracellular actin rings in the Drosophila tracheal tubule is governed by a self-organizing principle. We propose a simple biophysical model where pattern formation arises from the interplay of myosin contractility and actin turnover. We validate the hypotheses of the model using photobleaching experiments and report that the formation of actin rings is contractility dependent. Moreover, genetic and pharmacological perturbations of the physical properties of the actomyosin gel modify the spacing of the pattern, as the model predicted. In addition, our model posited a role of cortical friction in stabilizing the spacing pattern of actin rings. Consistently, genetic depletion of apical extracellular matrix caused strikingly dynamic movements of actin rings, mirroring our model prediction of a transition from steady to chaotic actin patterns at low cortical friction. Our results therefore demonstrate quantitatively that a hydrodynamical instability of the actin cortex can trigger regular pattern formation and drive morphogenesis in an in vivo setting. AU - Hannezo, Edouard B AU - Dong, Bo AU - Recho, Pierre AU - Joanny, Jean AU - Hayashi, Shigeo ID - 929 IS - 28 JF - PNAS TI - Cortical instability drives periodic supracellular actin pattern formation in epithelial tubes VL - 112 ER - TY - JOUR AB - Although collective cell motion plays an important role, for example during wound healing, embryogenesis, or cancer progression, the fundamental rules governing this motion are still not well understood, in particular at high cell density. We study here the motion of human bronchial epithelial cells within a monolayer, over long times. We observe that, as the monolayer ages, the cells slow down monotonously, while the velocity correlation length first increases as the cells slow down but eventually decreases at the slowest motions. By comparing experiments, analytic model, and detailed particle-based simulations, we shed light on this biological amorphous solidification process, demonstrating that the observed dynamics can be explained as a consequence of the combined maturation and strengthening of cell-cell and cell-substrate adhesions. Surprisingly, the increase of cell surface density due to proliferation is only secondary in this process. This analysis is confirmed with two other cell types. The very general relations between the mean cell velocity and velocity correlation lengths, which apply for aggregates of self-propelled particles, as well as motile cells, can possibly be used to discriminate between various parameter changes in vivo, from noninvasive microscopy data. AU - García, Simón AU - Hannezo, Edouard B AU - Elgeti, Jens AU - Joanny, Jean AU - Silberzan, Pascal AU - Gov, Nir ID - 933 IS - 50 JF - PNAS TI - Physics of active jamming during collective cellular motion in a monolayer VL - 112 ER - TY - JOUR AB - The tunability of topological surface states and controllable opening of the Dirac gap are of fundamental and practical interest in the field of topological materials. In the newly discovered topological crystalline insulators (TCIs), theory predicts that the Dirac node is protected by a crystalline symmetry and that the surface state electrons can acquire a mass if this symmetry is broken. 
Recent studies have detected signatures of a spontaneously generated Dirac gap in TCIs; however, the mechanism of mass formation remains elusive. In this work, we present scanning tunnelling microscopy (STM) measurements of the TCI Pb1-xSnxSe for a wide range of alloy compositions spanning the topological and non-topological regimes. The STM topographies reveal a symmetry-breaking distortion on the surface, which imparts mass to the otherwise massless Dirac electrons, a mechanism analogous to the long sought-after Higgs mechanism in particle physics. Interestingly, the measured Dirac gap decreases on approaching the trivial phase, whereas the magnitude of the distortion remains nearly constant. Our data and calculations reveal that the penetration depth of Dirac surface states controls the magnitude of the Dirac mass. At the limit of the critical composition, the penetration depth is predicted to go to infinity, resulting in zero mass, consistent with our measurements. Finally, we discover the existence of surface states in the non-topological regime, which have the characteristics of gapped, double-branched Dirac fermions and could be exploited in realizing superconductivity in these materials. AU - Zeljkovic, Ilija AU - Okada, Yoshinori AU - Serbyn, Maksym AU - Sankar, Raman AU - Walkup, Daniel AU - Zhou, Wenwen AU - Liu, Junwei AU - Chang, Guoqing AU - Wang, Yungjui AU - Hasan, Md Z AU - Chou, Fangcheng AU - Lin, Hsin AU - Bansil, Arun AU - Fu, Liang AU - Madhavan, Vidya ID - 981 IS - 3 JF - Nature Materials TI - Dirac mass generation from crystal symmetry breaking on the surfaces of topological crystalline insulators VL - 14 ER - TY - JOUR AB - We propose a new approach to probing ergodicity and its breakdown in one-dimensional quantum many-body systems based on their response to a local perturbation. We study the distribution of matrix elements of a local operator between the system's eigenstates, finding a qualitatively different behavior in the many-body localized (MBL) and ergodic phases. To characterize how strongly a local perturbation modifies the eigenstates, we introduce the parameter g(L) = ⟨ln(Vnm/δ)⟩, which represents the disorder-averaged ratio of a typical matrix element of a local operator V to the energy level spacing δ; this parameter is reminiscent of the Thouless conductance in single-particle localization. We show that the parameter g(L) decreases with system size L in the MBL phase and grows in the ergodic phase. We surmise that the delocalization transition occurs when g(L) is independent of system size, g(L) = gc ~ 1. We illustrate our approach by studying the many-body localization transition and resolving the many-body mobility edge in a disordered one-dimensional XXZ spin-1/2 chain using exact diagonalization and time-evolving block-decimation methods. Our criterion for the MBL transition gives insights into microscopic details of the transition. Its direct physical consequences, in particular, logarithmically slow transport at the transition and extensive entanglement entropy of the eigenstates, are consistent with recent renormalization-group predictions.
8.7: Spin-Orbitals and Electron Configurations

The wavefunctions obtained by solving the hydrogen atom Schrödinger equation are associated with orbital angular motion and are often called spatial wavefunctions, to differentiate them from the spin wavefunctions. The complete wavefunction for an electron in a hydrogen atom must contain both the spatial and spin components. We refer to the complete one-electron orbital as a spin-orbital, and a general form for this orbital is

\[ | \varphi _{n,l,m_l , m_s} \rangle = | \psi _{n,l,m_l} (r, \theta , \phi ) \rangle | \sigma ^{m_s}_s \rangle \label {8.7.1}\]

A spin-orbital for an electron in the \(2p_z\) orbital with \(m_s = + \frac {1}{2} \), for example, could be written as

\[ | \psi _{2p_z \alpha} \rangle = | \psi _{2,1,0} (r, \theta , \phi ) \rangle \, | \alpha \rangle \label{8.7.2}\]

A common method of depicting electrons in spin-orbitals arranged by energy is shown in Figure \(\PageIndex{1}\), which gives one representation of the ground-state electron configuration of the hydrogen atom.

Figure \(\PageIndex{1}\): Electron configuration of a ground-state hydrogen atom depicted on an energy-level diagram. The electron is represented by an arrow in the 1s orbital.

On the energy-level diagram in Figure \(\PageIndex{1}\), the horizontal lines labeled 1s, 2s, 2p, etc. denote the spatial parts of the orbitals, and an arrow pointing up for spin \(\alpha\) or down for spin \(\beta\) denotes the spin part of the wavefunction. An alternative shorthand notation for electron configuration is the familiar form \(1s^1\), which denotes an electron in the 1s orbital. Note that this shorthand version contains information only about the spatial wavefunction; information about spin is implied: two electrons in the same orbital have spins \(\alpha\) and \(\beta\), e.g. \(1s^2\), and a single electron in an orbital is assumed to have spin \(\alpha\). Hydrogen atoms can absorb energy and the electron can be promoted to higher-energy spin-orbitals. Examples of such excited-state configurations are \(2p^1\), \(3d^1\), etc.
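As a quick numerical companion to the spin-orbital notation above, the sketch below constructs the spatial factor \(\psi_{2,1,0}\) of the \(2p_z\) spin-orbital of Equation \ref{8.7.2} and checks that it is normalized. It assumes that SymPy's hydrogen module provides Psi_nlm with the argument order shown, and works in atomic units with \(a_0 = 1\); the spin factor \(| \alpha \rangle\) simply multiplies this function and plays no role in the spatial integral.

```python
from sympy import symbols, integrate, conjugate, sin, oo, pi, simplify
from sympy.physics.hydrogen import Psi_nlm

# Spherical coordinates: r (radial), phi (azimuthal), theta (polar); atomic units, Z = 1.
r, phi, theta = symbols("r phi theta", positive=True)

# Spatial part of the 2p_z spin-orbital: n = 2, l = 1, m_l = 0.
psi_2pz = Psi_nlm(2, 1, 0, r, phi, theta)

# Normalization check: the integral of |psi|^2 over all space should equal 1.
norm = integrate(conjugate(psi_2pz) * psi_2pz * r**2 * sin(theta),
                 (r, 0, oo), (theta, 0, pi), (phi, 0, 2 * pi))
print(simplify(norm))  # -> 1
```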
57 Important Facts About Hydrogen That You Should Know

Hydrogen is a gas that naturally exists in the universe. It is the first element of the periodic table and occurs on Earth in vast quantities of water in the ocean, the ice packs, rivers and lakes. With these 57 facts about hydrogen, let us learn more about it.

Characteristics of Hydrogen

1. Hydrogen is the most abundant element in the universe, three times more abundant than helium (the second most widely occurring element). On Earth, hydrogen ranks ninth among the elements in abundance.[11,12]
2. Hydrogen’s atomic number is 1. It is the lightest element on the periodic table, with a standard atomic weight of 1.008.[1,12]
3. The hydrogen (H) atom has a nucleus consisting of one proton with one unit of positive electrical charge, and one electron with one unit of negative electrical charge.[12]
4. A molecule of hydrogen is the simplest of all molecules, being composed of two protons and two electrons.[15]

Periodic table with all elements. Image credit – Ptable.com

5. The earliest known chemical property of hydrogen is that it burns with oxygen to form water (H2O).[1]
6. Hydrogen is transparent to visible and infrared light, and to ultraviolet light at certain wavelengths.[15]
7. Hydrogen is colorless, odorless, tasteless and nontoxic. It is highly flammable but does not ignite unless an oxidizer and ignition source are present.[1,2]
8. Hydrogen is present in all vegetable and animal tissue plus petroleum, as part of countless carbon compounds. About 10 percent of any living organism’s weight is hydrogen, mainly in proteins, fat, and water.[12]
9. Hydrogen is estimated to make up more than 90 percent of all atoms and 75 percent of the mass of the universe.[12,13]
10. Hydrogen has the lowest density of all gases.[1]
11. Hydrogen is approximately 14 times lighter than air. It is the lightest chemical element. It is so light that Earth’s gravity cannot hold it in the atmosphere, and few “free” hydrogen atoms are found on Earth.[14]
12. Hydrogen is the only element without neutrons. Therefore it is not a part of any family or group on the periodic table. It has unique properties not shared by other elements.[23]
13. Hydrogen has the greatest heat conductivity of all elements. Kinetic energy is distributed faster through it than any other gas.[12]

Natural Occurrences of Hydrogen

14. Hydrogen is found in huge amounts in stars and giant gas planets. Molecular clouds of H2 are associated with star formation. Hydrogen produces the light from the stars and the sun. Hydrogen is found in abundance in Jupiter.[9,13]
15. Its charged particles are highly influenced by magnetic and electrical fields. As solar winds they interact with the magnetosphere of the Earth, creating the aurora and Birkeland currents.[22]
16. Hydrogen is the third most abundant element on the surface of the Earth, found especially in chemical compounds like hydrocarbons and water.[22]
17. Hydrogen gas is produced by algae and by certain bacteria, and is a natural component of flatus, as is methane.[17]

History of Hydrogen Science

18. The production of hydrogen had been going on for years before it was actually discovered and named as an element. In the 16th century Paracelsus noted that a flammable gas was given off when iron was dissolved in sulfuric acid, but confused it with other flammable gases.[1]
19.
In 1671 Robert Boyle discovered the reaction of acids and iron filings which resulted in producing hydrogen.[1]
20. In 1766 Henry Cavendish showed that this “flammable air” (hydrogen) was distinct from other combustible gases due to its density and confirmed that water was formed when hydrogen burned. He is credited with discovering hydrogen as a discrete substance.[1]
21. In 1783 Antoine-Laurent Lavoisier, the father of modern chemistry, coined the French equivalent of the word hydrogen, which became its official name. It comes from the Greek and means “maker of water”.[18]
22. In the same year, the first hydrogen-filled balloon was invented and flown, powered by lift from a mixture of hydrogen and oxygen.[2]
23. In 1806 the first hydrogen-fueled internal combustion engine was built. Henri Giffard invented the first hydrogen-lifted airship in 1852.[3]
24. James Dewar liquefied hydrogen for the first time in 1898. He produced solid hydrogen the following year.[4]
25. In 1900 Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen. The first Zeppelin was flown that year.[5]
26. The first chain reaction discovered, in 1913, was a chemical one, not a nuclear one. It was observed that a mixture of hydrogen and chlorine gases explodes when triggered by light. By 1918 a full explanation of the mechanism of this chain reaction was developed by Walther Nernst.[6]
27. The lifting power of 1 cubic foot of hydrogen gas is about 0.07 lb at 0 °C and 760 mm pressure.[13]
28. In 1929 it was discovered that ordinary hydrogen is actually a mixture of two kinds of molecules: ortho- and para-hydrogen. In 1931 and 1934 respectively, the deuterium and tritium isotopes were discovered.[13,19,20]
29. On May 6, 1937 the Hindenburg airship was destroyed by fire. The fire was eventually determined to be caused by the ignition of the aluminized fabric coating by static electricity, but irreparable damage had been done to hydrogen’s reputation as a lifting gas.[24]
30. In 1937 the first hydrogen-cooled turbogenerator went into service.[7]
31. The first hydrogen bomb was tested on November 1, 1952.[8]
32. In 1977 the first nickel-hydrogen battery was used aboard satellites. Mars Odyssey and Mars Global Surveyor were equipped with nickel-hydrogen batteries, as was the Hubble Space Telescope, which relies on them during the dark side of its orbit.[10]
33. The hydrogen fuel cells for automobiles being developed today produce no harmful emissions, only giving off water vapor and warm air. They have the potential to revolutionize transportation.[14]

Physical and Chemical Properties, Isotopes and Reactivity

34. Hydrogen’s molecular weight is lower than that of all other gases. Its molecules have a higher velocity than those of any other gas at a given temperature, and it diffuses faster than any other gas.[28,29]
35. Hydrogen has three known isotopes. Their different names illustrate the significant differences in their properties. The most abundant hydrogen isotope is the mass 1 isotope (¹H), also called protium. It has no neutrons, making hydrogen the only element that can exist without them.[8]
36. The mass 2 isotope is known as heavy hydrogen or deuterium (²H or D). It has one proton and one neutron.[8]
37. The mass 3 isotope, known as tritium (³H or T), has one proton and two neutrons in each atom’s nucleus.[8]
38. Hydrogen forms both positive and negative ions, and does this more readily than all other elements. It is the only atom for which the Schrödinger equation has an exact solution.[16]
39. Two types of molecular hydrogen are known to exist.
They differ in the magnetic interactions of the protons due to their spinning motions. They are regarded as two distinct modifications of hydrogen, and conversions between them don’t usually occur.[21]
40. In ortho-hydrogen, the two protons’ spins are aligned in the same direction: they are parallel. In para-hydrogen the protons’ spins are aligned in opposite directions: they are anti-parallel. The spin alignments’ relationships determine the atoms’ magnetic properties.[21]
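Several of the facts above (10, 11, 27 and 34) trace back to hydrogen's very low gas density. As a rough cross-check of fact 27, here is a back-of-the-envelope buoyancy calculation assuming ideal-gas behaviour at 0 °C and 760 mm pressure; the molar masses and constants below are standard values, not numbers taken from the text.

```python
# Ideal-gas densities of air and H2 at 0 degC and 760 mmHg, and the resulting lift.
M_AIR = 28.97e-3   # kg/mol, mean molar mass of dry air
M_H2 = 2.016e-3    # kg/mol, molar mass of H2
R = 8.314          # J/(mol K)
T = 273.15         # K (0 degC)
P = 101325.0       # Pa (760 mmHg)

rho_air = P * M_AIR / (R * T)   # ~1.29 kg/m^3
rho_h2 = P * M_H2 / (R * T)     # ~0.09 kg/m^3

lift_per_m3 = rho_air - rho_h2                     # kg of lift per cubic metre
lift_per_ft3 = lift_per_m3 * 0.0283168 * 2.20462   # convert to lb per cubic foot
print(f"{lift_per_ft3:.3f} lb per cubic foot")     # ~0.075 lb, close to the quoted 0.07 lb
```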
At the Heart of the Hydrogen Atom…

A photograph showing a hydrogen atom visually captured for the first time using the technique of quantum microscopy.

The Humble Hydrogen Atom

Back in May 2013, scientists announced that they had managed to capture a photo of an electron’s whizzing orbit within a hydrogen atom, using a unique new technology of ‘quantum’ microscopy. Ladies and gentlemen, let’s take a short trip into the infinitesimally small! Here is the first photograph of a hydrogen atom! According to NASA’s Astrophysics Dictionary, atomic hydrogen H (“monatomic” hydrogen) constitutes about 75% of the elemental mass of the Universe. (Although it is worth noting that most of the Universe’s mass is not actually in the form of chemical elements, or “baryonic” matter. Food for thought, and another story to be told!)

On Earth, it is extremely rare to come across isolated hydrogen atoms outside experimental settings. Hydrogen usually combines with other atoms into compounds, or with itself to form ordinary (diatomic) hydrogen gas, H2. The hydrogen atom H contains a single positively charged proton p and a single negatively charged electron e, bound to the nucleus by the Coulomb force. It is electrically neutral. The hydrogen atom is unique because it has only one electron. And the diameter of a hydrogen atom is no bigger than about twice…

A classic representation of the Bohr atom.

The Bohr Radius

As early as 1913, Niels Bohr proposed what is now called the Bohr model of the atom, and suggested that electrons could only have certain classical motions. The model describes the atom as a small, positively charged nucleus, surrounded by electrons travelling in circular orbits around it. In a way, the concept is similar in structure to the Solar system, but with attraction provided by electrostatic forces, rather than by gravity. Although the Bohr model is now obsolete, the quantum theory at the heart of it is still regarded as valid. The Bohr radius for the hydrogen atom remains an important physical constant. The Bohr radius \(a_0 = 5.29 \times 10^{-11}\,\mathrm{m}\) corresponds to the radius of the lowest-energy electron orbit predicted by the Bohr model of the atom. The radius of an atom is over 10,000 times the radius of its nucleus, and less than 1/1000th of the wavelength of visible light. The Bohr model only applies to atoms and ions with a single electron, such as singly ionized helium He II, positronium Ps, and of course hydrogen H. So, the size of a hydrogen atom in its ‘ground state’ is of order \(2a_0 \approx 10^{-10}\) metre. That’s MIGHTY small!!

The Trouble with Mighty Small Things

Observing the tiniest building blocks of matter has always been tricky. Not merely because of the infinitesimal size of an atom… You see, mighty small things operate in mighty strange ways! At the atomic scale, Nature’s behaviour seems so absurd that particle interactions can only be explained by a special branch of physics. Electrons have neither definite orbits, nor sharply defined ranges. Instead, their positions must be described by probability distributions that taper off gradually as one moves away from the region of the nucleus, without any sharp cut-off.

Mighty small things operate in mighty strange ways!

The development of Quantum Mechanics in the early part of the 20th century has had a profound influence on the way that scientists now understand the world. At the centre of it is the concept of a wave function that satisfies the time-dependent Schrödinger equation.
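For readers who want to reproduce the number quoted above, here is a short check of the Bohr radius from the standard expression a_0 = 4πε₀ℏ²/(m_e e²), using CODATA values from scipy.constants.

```python
from scipy.constants import hbar, m_e, e, epsilon_0, pi

a0 = 4 * pi * epsilon_0 * hbar**2 / (m_e * e**2)
print(f"a_0   = {a0:.3e} m")      # -> about 5.292e-11 m, as quoted above
print(f"2*a_0 = {2 * a0:.1e} m")  # -> about 1e-10 m, the size of a ground-state H atom
```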
Here we encounter another very real difficulty. Things get even weirder. The basic act of observing such infinitesimal particles seems to affect their very existence! Getting around such a reality-bending concept as the Uncertainty Principle, scientists have relied upon quantum theory to define the behaviour of particles in time and space, with complex equations that predict the probabilities of finding electrons, at any particular moment or in any particular location of their orbit around an atom’s densely packed nucleus. The Schrödinger equation governs the atomic structure, describing it as a wave function. But so far, the actual observation of that structure has seemed to inevitably destroy it…

The new “quantum microscope” invented by Aneta Stodolna and her colleagues at the FOM Institute for Atomic and Molecular Physics (AMOLF) in the Netherlands uses the process of photoionization and an electrostatic magnifying lens to observe directly the electron orbital paths of an excited hydrogen atom.

Atomic Energy Levels and Transitions… of Mighty Small Atoms

Unlike classical particles, which can have any energy, a quantum mechanical system, or ‘bound’ particle, can only take on certain discrete values of energy. These discrete values are called energy levels. The term is used in the context of the energy levels of electrons in atoms or molecules, bound by the electric field of the atomic nucleus. The energy spectrum of a system with such discrete energy levels is said to be ‘quantized’.

A diagram explaining the energy transitions of the electron within a hydrogen atom.

Energy is always conserved. This implies that if an atom absorbs a photon with a given energy, the energy of that particular atom must inevitably increase by the exact same amount. By the same token, if an atom emits a photon, the energy of the atom must decrease by a fixed amount of energy (a ‘quantum’), corresponding to that of the emitted photon. So the energy of the photon, \(E_{ph} = h \nu\), equals the change in the energy of the atom: \(E_{ph} = \Delta E_{atom}\).

If an atom absorbs a photon, the energy of the atom must increase by a fixed amount. If an atom emits a photon, the energy of the atom must decrease by a fixed amount.

A New Look at the Hydrogen Atom Wave Function

As described in the journal Physical Review Letters, Stodolna et al. 2013’s experiment imaged the wave function of a hydrogen atom. Hydrogen is uniquely suited for the new photography technique because the first element in the periodic chart contains just a single electron. Initially proposed over 30 years ago, the experiment provides a unique look at one of the few atomic systems that has an analytical solution to the Schrödinger equation.

Experimental diagram and photographs taken inside the hydrogen atom. Source: Physical Review Letters.

The hydrogen atom is zapped with laser pulses, thereby forcing the ionised electron to escape from the hydrogen atom along direct and indirect trajectories. Stodolna and her team first fired two lasers at hydrogen atoms inside a special chamber, thus ejecting electrons from the atoms at speeds and directions dependent on their underlying wave functions. A strong electric field inside the chamber guided the electrons through a lens and onto a detector, which displayed the electron distribution as light and dark rings on a phosphorescent screen. The phase difference between the trajectories leads to an interference pattern, which Stodolna et al.
2013 magnified with an electrostatic lens and managed to capture. This interference pattern was photographed using a high-resolution digital camera.  This is the first ever such photo taken.
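To attach a concrete number to the energy-level bookkeeping E_ph = ΔE_atom discussed above, the short sketch below uses the textbook hydrogen levels E_n = -13.6 eV / n² and computes the photon emitted in the n = 2 → 1 (Lyman-alpha) transition; the choice of transition is ours, purely for illustration.

```python
from scipy.constants import h, c, e

RY_EV = 13.605693  # Rydberg energy in eV

def level(n: int) -> float:
    """Bound-state energy of the hydrogen level n, in eV."""
    return -RY_EV / n**2

dE = level(2) - level(1)   # energy released by the atom, in eV
lam = h * c / (dE * e)     # photon wavelength from E_ph = h*nu = h*c/lambda
print(f"Delta E = {dE:.2f} eV, lambda = {lam * 1e9:.1f} nm")  # ~10.2 eV, ~121.6 nm
```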
Classification of topological phonons in linear mechanical metamaterials Roman Süsstrunk    Sebastian D. Huber Institute for Theoretical Physics, ETH Zurich, 8093 Zürich, Switzerland Topological phononic crystals, alike their electronic counterparts, are characterized by a bulk-edge correspondence where the interior of a material dictates the existence of stable surface or boundary modes. In the mechanical setup, such surface modes can be used for various applications such as wave-guiding, vibration isolation, or the design of static properties such as stable floppy modes where parts of a system move freely. Here, we provide a classification scheme of topological phonons based on local symmetries. We import and adapt the classification of non-interacting electron systems and embed it into the mechanical setup. Moreover, we provide an extensive set of examples that illustrate our scheme and can be used to generate new models in unexplored symmetry classes. Our works unifies the vast recent literature on topological phonons and paves the way to future applications of topological surface modes in mechanical metamaterials. Mechanical metamaterials derive their properties not from their microscopic composition but rather through a clever engineering of their structure at larger scales Cummer et al. (2016). Various design principles have been put forward and successfully applied in the past. Examples range from periodic modifications leading to band-gaps via Bragg scattering Kushwaha et al. (1993) to the use of local resonances Liu et al. (2000) to achieve sub-wavelength functionalities. Recently, the concept of “band-topology” emerged as a new design principle for mechanical metamaterials Prodan and Prodan (2009); Kane and Lubensky (2013); Chen et al. (2014, 2015); Paulose et al. (2015a, b); Xiao et al. (2015a); Süsstrunk and Huber (2015); Nash et al. (2015); He et al. (2015); Meeussen et al. (2016). Colloquially speaking, a system with a topological phonon band-structure will posses mechanical modes bound to surfaces or lattice defects that are immune to a large class of perturbations. If the targeted purpose of a metamaterial is encoded in such a topologically protected mode, its functioning will be largely independent of production imperfections or environmental influences. The introduction of topology to the field of mechanical metamaterials was largely motivated by its successful application to the description of electrons in solids Hasan and Kane (2010). One of the key elements in the understanding of these electronic systems was the classification of different topological phases according to their symmetry properties Kitaev (2009); Ryu et al. (2010). While over the last years numerous proposals Prodan and Prodan (2009); Berg et al. (2011); Kane and Lubensky (2013); Po et al. (2014); Yang et al. (2015); Kariyado and Hatsugai (2015a, b); Yang and Zhang (2016); Vitelli et al. (2014); Wang et al. (2015a); Peano et al. (2015); Rocklin et al. (2015a, b); Sussman et al. (2015); Lubensky et al. (2015); Pal et al. (2016); Salerno et al. (2016); Khanikaev et al. (2015); Mousavi et al. (2015); Xiao et al. (2015b); Ni et al. (2015); Wang et al. (2015b) and several experiments Chen et al. (2014, 2015); Paulose et al. (2015a, b); Xiao et al. (2015a); Süsstrunk and Huber (2015); Nash et al. (2015); He et al. (2015); Meeussen et al. (2016) were put forward promoting mechanical topological metamaterials, a complete classification of linear topological phonons is missing to date. 
In this report we intend to fill in this gap. At first sight, the dynamics in classical mechanics seems to be rather different from quantum-mechanical electron systems. Our approach is therefore to map the first to the second problem Kariyado and Hatsugai (2015a). This, in principle, allows us to import the classification Kitaev (2009); Ryu et al. (2010) from the description of electronic systems. However, a bare import of this classification is not doing justice to the rich structure mechanical systems posses by themselves. We can categorize mechanical metamaterials by two independent properties. First, the targeted functionality can either be at zero or at finite frequencies. Zero-frequency modes define structural properties such as mechanisms where parts of a material move freely Kane and Lubensky (2013); Paulose et al. (2015a). The dual partners of freely moving parts are states of self stress Lubensky et al. (2015), where external loads on a material can be absorbed in the region of a topological boundary mode. Defining such details of the load bearing properties of a material are relevant both for smart adaptive materials Paulose et al. (2015b) as well as for civil engineering applications. The design of finite-frequency properties, on the other hand, constitutes a quite different field of research. Here, the goals are to control the propagation, reflection or absorption of mechanical vibrations. This includes, e.g., wave-guiding, acoustic cloaking, or vibration isolation ranging from the seismic all the way to the radio-frequency scale. A second important separation into two distinct classes of materials arises from the presence or absence of non-reciprocal elements Fleury et al. (2014). Generically, non-dissipative mechanical properties are invariant under the reversal of the arrow of time. Non-reciprocal elements, however, transmit waves asymmetrically between different points in space. The absence of time-reversal symmetry allows for a topological invariant, the Chern number, which encodes chiral, or uni-directional wave-propagation. We will see that these two attributes: static vs. dynamic and reciprocal vs. non-reciprocal will be key to understand how the electronic classification is naturally modified for mechanical systems. Before we embark on the development of the framework needed for our classification let us state our goals more precisely. Our aim is to import and adapt the classification of non-interacting electron systems according to their local symmetries , , and , i.e., time-reversal, charge-conjugation, and their combination, respectively Kitaev (2009); Ryu et al. (2010). Clearly, we will have to specify the role of these symmetries in mechanical systems. Moreover, we only cover the “strong” indices which do not rely on any spatial symmetries. The extension to weak indices, arising from a stacking of lower-dimensional systems carrying strong indices, is straight-forward Kane and Lubensky (2013). Finally, there are many recent developments dealing with topological phases stabilized by spatial properties Teo et al. (2008); Fu (2011); Hsieh et al. (2012); Xu et al. (2012); Liu et al. (2014) such as point group symmetries. While such spatial symmetries are more easily broken by disorder, the required ingredients might be very well tailored to the mechanical setup Alexandradinata et al. (2014); Alexandradinata and Bernevig (2015). 
The remainder of this paper is organized as follows: We start by developing a framework to map classical problems to an equation that formally looks like a Schrödinger equation of a quantum mechanical problem. We then introduce the three symmetries , , and and discuss their appearance in mechanical problems before we provide the sought classification. Finally, an extensive example section serves two purposes: We illustrate and apply our approach. Moreover, we show a way how to construct new symmetry classes from generic building blocks. Models and theoretical framework In this manuscript we aim at characterizing discrete systems of undamped, linear mechanical oscillators. While this setup is directly relevant for simple mass-spring systems Süsstrunk and Huber (2015) or magnetically coupled gyroscopes Nash et al. (2015), the scope here is actually considerably broader. Any system that can be reliably reduced to a discrete linear model is amenable to our treatment. This includes one Xiao et al. (2015b), two Khanikaev et al. (2015); Ni et al. (2015); Mousavi et al. (2015); Yang et al. (2015), or three-dimensional Xiao et al. (2015b) systems made from continuous media, where a targeted micro-structuring enables the description in terms of a discrete model. Once we deal with a discrete model, we have a direct way to import the methods known from electronic topological insulators. In order to establish this bridge we now introduce a formal mapping of a classical system of coupled oscillators to a tight-binding hopping problem of electrons in solids. We start with the equations of motion of a generalized mass-spring model given by Here, denotes time, one of the independent displacements, and its time derivative. The mass terms are absorbed into the real and constant coupling elements and . The entries can be thought of as springs coupling different degrees of freedom, and arise from velocity dependent forces. Note, that a non-zero implies terms formally equivalent to the Lorentz force of charged particles in a magnetic field and hence arise only in metamaterials with non-reciprocal elements. In addition to constant coupling elements in 1, one can also consider periodically driven systems. Such driven system can be cast into our framework by a suitable (Magnus) expansion of the corresponding Floquet operator Floquet (1883); Lindner et al. (2011); Salerno et al. (2016). We aim at rewriting equation 1, in the form of a Schrödinger equation, or rather as a hermitian eigenvalue problem. Therefore, we need the system to be conservative (non-dissipative). This is achieved by requiring to be symmetric positive-definite and to be skew-symmetric.111 Such matrices can, e.g., be obtained from a system with Lagrangian An eigenvalue problem emerges from equation 1 via the ansatz : where we gathered the indices in a vector notation . Energy conservation requires all eigenvalues to be real, but the ansatz renders the problem complex. However, a suitable superposition of complex eigensolutions always allows to create real solutions with . While equation 2 contains all the information about the eigensolutions, for the topological classification it is advantageous to transform it into a hermitian form. To this end, we apply the transformation to . The square root of the matrix is defined through its spectral decomposition, where the positive branch of the square root of the eigenvalues is chosen. 
With this we arrive at As is symmetric positive-definite and is skew-symmetric, the matrix is hermitian and the differential equation for has the sought after form of a Schrödinger equation. The formulation in equation 4 is reminiscent of a single-particle tight-binding problem in quantum mechanics. Therefore, the discussion of topological properties of the eigenvectors can be directly carried over. Remember that the topological classification is based on the spatial dimensionality of the problem as well as properties of special local symmetries alone. In particular, the topological properties do not rely on translational symmetries. However, their discussion and definitions are most conveniently introduced for translationally symmetric systems. In this case, the and matrices are periodic and a spatial Fourier transform block-diagonalizes them (one block for each wave vector ). It follows that becomes block diagonal as well, because it shares its eigenvectors with . Hence we will discuss families of Hamiltonians of the form 4. Before turning our attention to the topological classification, we comment on two more points: (i) the influence of damping and (ii) the possibility for alternative hermitian forms. Every real system is prone to damping, which in turn affects the eigensolutions in two ways. The eigenvalues acquire an imaginary part and the form of the eigenvectors may change. While a slight change of the eigenvalues does not influence the subsequent discussion, the difference in the eigenvectors may alter the results. Whether or not it obstructs the use or observation of a given topological effect depends on the details of the specific system. Now to the second point. The transformation leading to equation 4 is not the only way to introduce a hermitian problem. In fact, any decomposition of the form , will allow us to achieve this goal. By introducing the auxiliary variables , we may express equation 1 as The particular choice has the advantage that (i) has the same eigenvectors as , (ii) it uses only as many auxiliary degrees of freedom as needed, (iii) it allows to directly block-diagonalize the problem in absence of , and (iv) it offers a canonical way how to choose . Nevertheless, this is not the only useful choice of . The starting point for our choice was a given and , originating from an effective model. In certain cases however, there is a natural choice of along with a physical meaning. Such cases have been considered by Kane and Lubensky Kane and Lubensky (2013); Lubensky et al. (2015). In their setup the matrix corresponds to the equilibrium matrix of a mass-spring model, where relates spring tensions to displacements of the attached masses. This allows for a beautiful discussion of (topological) states of self stress in isostatic lattices Kane and Lubensky (2013); Lubensky et al. (2015). While such states of self stress elude our description, the formulation of Kane and Lubensky is only applicable to the restricted set of isostatic models, which makes it not the favourite choice for the purpose of our discussion. Visualization of Figure 1: Visualization of -, - and -symmetry by three prototypical band-structures. The presence of a symmetry implies a certain symmetry in the band-structure (but not the other way around), see text. As mentioned before, the classification of electronic systems is based on three symmetries: time-reversal symmetry , particle-hole symmetry , and chiral symmetry . 
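To make the construction above concrete, here is a minimal numerical sketch (our own illustration, not code from the paper): we draw a small random mass-spring system with a symmetric positive-definite stiffness matrix K and a skew-symmetric velocity coupling Gamma, assemble a hermitian matrix with the block form H = i [[0, sqrt(K)], [-sqrt(K), Gamma]] (one explicit realization of the square-root transformation described in the text, assumed here rather than quoted), and check that H is hermitian and reproduces the eigenfrequencies of the original second-order equations of motion.

```python
import numpy as np
from scipy.linalg import sqrtm, eigvals, eigvalsh

rng = np.random.default_rng(0)
n = 4

# Random symmetric positive-definite K ("springs") and skew-symmetric Gamma ("Lorentz-like" terms).
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)
G = rng.standard_normal((n, n))
Gamma = 0.5 * (G - G.T)

sqrtK = sqrtm(K).real
H = 1j * np.block([[np.zeros((n, n)), sqrtK],
                   [-sqrtK,           Gamma]])
assert np.allclose(H, H.conj().T)   # H is hermitian, as claimed in the text

# Classical problem x'' = -K x + Gamma x': linearize with y = (x, x') and compare spectra.
L = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               Gamma]])
w_classical = np.sort(np.round((-1j * eigvals(L)).real, 8))   # eigenvalues of L come as i*omega
w_hermitian = np.sort(np.round(eigvalsh(H), 8))               # real spectrum of hermitian H
assert np.allclose(w_classical, w_hermitian, atol=1e-6)
print(w_hermitian)  # frequencies come in +/- pairs, reflecting the built-in particle-hole symmetry
```

For a translationally invariant lattice, the same matrices can be Fourier-transformed block by block, which yields the family of Bloch Hamiltonians referred to in the text.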
In the quantum mechanical case, these symmetries are represented by (anti-)unitary operators on the single-particle Hilbert space. For the present context of classical mechanical systems, it is important to note that these symmetries are merely a set of constraints on the Bloch Hamiltonians Ryu et al. (2010). We state here the form of these constraints and discuss their relation to natural symmetries of mechanical systems below. We call a system -symmetric if for some anti-unitary which represents . For the particle-hole symmetry , the respective criterion is with anti-unitary. Finally, the chiral symmetry we demand for a unitary , cf. Fig. 1. For a generic Hilbert space there are no additional restrictions on the representations , but here the “Hilbert space” has additional structure. Any eigenvector is of the form . Hence, after fixing the first half of the entries of the remaining half is known as well. It follows that any (anti-)unitary mapping can be written as with and (anti-)unitary. Let us have a closer look at the three symmetries 6-8 within this framework. From the definitions, it follows that has -symmetry if and only if we can find , such that with . We refer to it as -symmetry, instead of “time-reversal”, because in the setting of classical mechanics it does no longer correspond to the reversal of time. In case that , there is a generic -symmetry where is the complex conjugation operator. Note that even though has the potential to break -symmetry, does not imply the absence of it. For -symmetry, the conditions to be satisfied are with . Therefore, for any and we can find a particle hole symmetry The existence of this omnipresent particle-hole symmetry is nothing but the statement that for every eigensolution, its complex conjugate is also an eigensolution. Its presence is based on and being real. In case we have - and particle-hole symmetry, we can combine the two to obtain a unitary operator . This unitary operator represents a chiral symmetry. We can therefore conclude, that if we always have a chiral symmetry This symmetry is nothing but classical time-reversal symmetry, as every eigenvector is mapped to itself and the corresponding eigenvalue becomes minus itself. So far, particle-hole and chiral symmetries were defined with respect to , meaning that an eigensolution is related to an eigensolution with , cf. Fig. 1. However, for the purpose of topological indices, we can weaken this requirement. A potentially -dependent shift in does not change the form of the eigenvectors. Hence, it is sufficient to require the right-hand side of equations 7 and 8 to equal to instead of zero. Furthermore, particle-hole and chiral symmetries can also exist only on parts of the band-structure. Which means that it is possible to have this symmetries on a subspace of all the solutions only. These two generalizations of - and -symmetries arise naturally in the setting of mass-spring models.222Note, that these generalization are not the only ones possible. However, they emerge naturally in our present discussion. Assume that , which is a real, symmetric matrix and therefore hermitian, has a particle-hole symmetry with respect to , and that . Then, all the eigenvectors of with positive eigenvalue have a particle hole symmetry with respect to some , while all the eigenvectors of with negative eigenvalue have one with respect to . The matrix can be made block-diagonal with the two blocks and each corresponding subspace of solutions has a particle-hole (and chiral) symmetry with respect to . 
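Concretely, these three conditions can be checked numerically for a given family of Bloch Hamiltonians. The sketch below does this for a one-dimensional Brillouin zone; the unitary parts U_T, U_C and U_S of the symmetry representations are assumed to be supplied by the user, and the grid resolution is arbitrary.

```python
import numpy as np

KGRID = np.linspace(-np.pi, np.pi, 101)   # sample points in the Brillouin zone

def is_T_symmetric(H, UT, tol=1e-8):
    """T-symmetry: UT H(k)* UT^dagger = H(-k), with T = UT K anti-unitary."""
    return all(np.allclose(UT @ H(k).conj() @ UT.conj().T, H(-k), atol=tol)
               for k in KGRID)

def is_C_symmetric(H, UC, tol=1e-8):
    """Particle-hole symmetry: UC H(k)* UC^dagger = -H(-k)."""
    return all(np.allclose(UC @ H(k).conj() @ UC.conj().T, -H(-k), atol=tol)
               for k in KGRID)

def is_S_symmetric(H, US, tol=1e-8):
    """Chiral symmetry: US H(k) US^dagger = -H(k), with S unitary."""
    return all(np.allclose(US @ H(k) @ US.conj().T, -H(k), atol=tol)
               for k in KGRID)
```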
After discussing the above symmetries, we have all the elements we need to establish a topological classification of generic mechanical systems. Symmetries Dimensions Class 1 2 3 A 0 0 0 0 0 AIII 0 0 1 0 AI 0 0 0 0 0 BDI 1 0 0 D 0 0 0 AII 0 0 0 CII 1 0 C 0 0 0 0 CI 1 0 0 Table 1: The tenfold way. The color code is explained in the main text. This table also applies to the high-frequency problem of non-reciprocal metamaterials. With the mapping of the equations of motion to a hermitian eigenvalue problem we can in principle directly use the classification scheme of non-interacting electron systems Kitaev (2009); Ryu et al. (2010). However, the specific properties of the local symmetries discussed above warrant a more careful discussion. To make further progress, we highlight the most important concepts behind the electronic classification. For a more detailed review we refer the reader to the excellent recent review by Chiu et al. Chiu et al. (2015). A reader not interested in the details of the derivation might jump straight to tables 1-4 for a reference of possible topological phonon systems and the example section for an illustration of theses tables. For non-interacting electrons, the ground-state is given by a Slater determinant of all states below the chemical potential. The topological properties are then encoded in the projector onto the filled bands. Moreover, one can simplify the discussion by introducing a “flattened Hamiltonian” which assumes the eigenvalues for filled (empty) bands Chiu et al. (2015). The topological indices are now encoded in the mappings from the Brillouin-zone to an appropriate target space induced by . In the absence of any symmetries the target space are the set of complex Grassmanians. In even dimensions, these mappings are characterized by Chern numbers that lie in (marked in blue in Tab. 1). In case that the chiral symmetry is present, the matrices have additional structure. This structure can be used to block-off-diagonlize them Ryu et al. (2010); Chiu et al. (2015) and to obtain a mapping from the Brillouin zone to the space of unitary matrices. In odd dimensions the homotopy group of these maps is described by a winding number (marked in red in Tab. 1). These two types of indices are called the primary indices. Additional indices can be derived from the primary ones when more symmetries are present. By constructing families of and -dimensional systems whose interpolation constitute a -dimensional Hamiltonian with a primary index, one can establish topologically distinct families of such lower dimensional band-structures through descendant indices. They are marked in light-blue (light-red) for descendents of the Chern (winding) numbers. Moreover, certain symmetries restrict the primary indices to even values denoted by in Tab. 1. Concrete formulas for the Chern and winding numbers are given in the Appendix. For general formulas for the descendent indices we refer to Chiu et al. (2015) and references therein. This overview concludes our discussion of the electronic classification which is summarized in Tab. 1. For mechanical systems a few characteristics deserve special attention. First, in a mechanical system, no Pauli principle is available to give a band as a whole a thermodynamic relevance. However, it is clear that the projector to a given number of isolated bands encodes the topological properties of the (high-frequency)333“High” is to be understood as “non-zero”. gap above these bands. 
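Because the topological data of an isolated group of bands is carried by the projector onto those bands, the associated Chern number can be evaluated directly from numerically obtained eigenvectors. The sketch below uses the standard lattice (Fukui-Hatsugai-Suzuki) discretization; the method and the assumption that H(kx, ky) is periodic on [0, 2π)² are choices of this sketch, not taken from the text.

```python
import numpy as np

def chern_number(H, n_bands, N=40):
    """Chern number of the lowest n_bands of a Bloch Hamiltonian H(kx, ky)
    via the lattice field-strength (Fukui-Hatsugai-Suzuki) formula."""
    ks = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

    def lowest(kx, ky):
        _, vecs = np.linalg.eigh(H(kx, ky))
        return vecs[:, :n_bands]              # columns span the chosen bands

    total_flux = 0.0
    for i in range(N):
        for j in range(N):
            u1 = lowest(ks[i], ks[j])
            u2 = lowest(ks[(i + 1) % N], ks[j])
            u3 = lowest(ks[(i + 1) % N], ks[(j + 1) % N])
            u4 = lowest(ks[i], ks[(j + 1) % N])
            # U(1) link variables around one plaquette of the k-grid
            loop = (np.linalg.det(u1.conj().T @ u2)
                    * np.linalg.det(u2.conj().T @ u3)
                    * np.linalg.det(u3.conj().T @ u4)
                    * np.linalg.det(u4.conj().T @ u1))
            total_flux += np.angle(loop)       # Berry flux through the plaquette
    return int(round(total_flux / (2.0 * np.pi)))
```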
For engineering applications in the respective frequency range this is good enough. Second, before we apply the topological classification of Tab. 1 blindly to a generic mechanical system it is beneficial to first structure the problems at hand by “non-topological” considerations. There are two natural properties which divide the mechanical problems into four different classes: (i) A mechanical system can either be made from “passive” building blocks, or it can incorporate non-reciprocal elements. In our formulation they distinguish themselves by the absence or presence of a -term in the Hamiltonian 4. (ii) The formulation of topological indices is rather different for the case where we target the gap around (relevant for thermodynamic or ground state properties) or a gap at finite frequencies. In the following we discuss the different combinations of finite versus zero-frequency and reciprocal versus non-reciprocal materials separately. High-frequency non-reciprocal metamaterials. The presence of puts the high-frequency problem of non-reciprocal metamaterials on equal footing with the electronic problem. Therefore, no further constraints are imposed and the full Tab. 1 is explorable. Symmetries Dimensions Class 1 2 3 BDI 1 0 0 DIII 1 0 AII 0 0 0 CII 1 0 0 Table 2: Indices for high-frequency reciprocal metamaterials with . There is always a -symmetry squaring to , which can be augmented to squaring to . High-frequency reciprocal metamaterials. For reciprocal high-frequency problems, one can in principle apply the classification scheme to rather than , as already is a hermitian matrix.444Remeber that and share the same eigenvectors and we continue with to keep the discussion unified. The reality of ensures the presence of a -symmetry that squares to . One can augment this symmetry to an anti-unitary symmetry that squares to via an appropriate unitary symmetry However, it is important to note that the simultaneous presence of both a - and -symmetry will force certain indices to vanish. A careful but straight-forward analysis [cf. App.] of the indices results in Tab. 2 relevant for reciprocal high-frequency problems. Symmetries Dimensions Class 1 2 3 BDI 1 0 0 DIII 1 0 CII 1 0 0 Table 3: Indices for low-frequency reciprocal metamaterials with . Both the and symmetry need to be augmented to reach classes where these symmetries square to . Symmetries Dimensions Class 1 2 3 BDI 1 0 0 D 0 0 0 CII 1 0 0 C 0 0 0 0 Table 4: Indices for low-frequency non-reciprocal metamaterials with . Here, only the symmetry needs to be augmented as no generic symmetry is present. Low-frequency reciprocal metamaterials. Topological band-structures with non-trivial gaps around zero frequency are relevant for floppy modes in static problems Paulose et al. (2015a) or thermodynamic properties Sussman et al. (2015) of jammed granular media. As argued above, the structure of the equations of motion imply a -symmetry around . In the absence of , an additional symmetry is present. Both this built-in symmetries canonically square to . As in the case of high-frequency reciprocal materials, one can augment these symmetries by unitary symmetries to reach classes where the augmented ones square to . Tab. 3 summarizes the resulting possibilities for topological indices in this setup. Low-frequency non-reciprocal metamaterials. Similarly to the high-frequency non-reciprocal metamaterials, the generic -symmetry is absent here. 
Hence, there can arise effective symmetries that either square to or without the need to augment the generically present one in order to reach classes where . Given that we deal with the gap at , however, guarantees the generic -symmetry which in turn can be enriched to one that squares to . The resulting possible topologies are shown in Tab. 4. For the case of zero-frequency indices, the construction of with the help of necessary leads to trivial phases, cf. App. However, in Refs. Kane and Lubensky (2013); Lubensky et al. (2015) it was shown how a decomposition allowing for non-trivial -indices in class BDI can be constructed for Maxwell frames. How one can construct similar formulations for the other symmetry classes shown in Tabs. 3 and 4 is an interesting open problem. To clarify and reinforce our approach we provide a set of examples. We directly consider discrete models. An example on how to extract a discrete description of a continuum model is provided in the Appendix. The degrees of freedom are assumed to be ideal one or two dimensional pendula. The desired -matrix can be obtained by coupling the different pendula by springs. To encode negative coupling elements, or in case of geometrical obstructions, it might be required to replace a spring coupling by a more involved coupling composed of springs and deflection levers, as e.g. in Ref. Süsstrunk and Huber (2015). Note, that while we consider pendula as our local oscillator, all examples are generic and can be applied to any set of mechanical modes. The last ingredient we need is a . One option is to engage the Lorentz force, which directly provides such a coupling. Another possibility is to use spinning tops, or gyroscopes as in Refs. Nash et al. (2015); Wang et al. (2015a). We consider a symmetric gyroscope with a fixed point (different from the center of mass) about which it can rotate, cf. Fig. 2. For our considerations, there will be no external moment along the principal axis passing through the center of mass, rendering this rotation a conserved quantity. Hence, there are only two degrees of freedom left. Coordinate system for a spinning gyroscope. Figure 2: Coordinate system for a spinning gyroscope. In a constant gravitational field, we can use the direction of the field to define a -axis. The potential energy of the gyroscope has a minimum and one can linearize the problem about this minimum. The resulting problem has two effective degrees of freedom which we choose to be displacements along the and direction. The equation of motion for the linearized system is then of the form, cf. App., where is proportional to the spinning speed of the gyroscope and are external moments coupling to it. Such moments arise, e.g., from the couplings to neighboring degrees of freedom. For multiple gyroscopes, this allows us to obtain These are all the elements we need to discuss the following examples. While every model has a high frequency and a low frequency symmetry part, we are only looking at the former, where the generic particle-hole symmetry is irrelevant. For a detailed discussion of certain low-frequency models we refer to Refs. Kane and Lubensky (2013); Lubensky et al. (2015). We start with the simplest possible one dimensional model with a non-trivial index below. After its discussion, we show how one can combine several copies of such simple building blocks to reach a number of other symmetry classes in one dimension. 
Finally, we provide each an example of a two-dimensional system with a non-vanishing Chern number and a model where we employ the idea of symmetry-enrichment. Class BDI in 1D Probably the simplest model available is the analogue of the Su-Schrieffer-Heeger (SSH) model Su et al. (1979). It can be realized through a chain of one dimensional pendula, coupled through springs with alternating spring constants and . Its dynamics is governed by and for positive definitness. The model has a -symmetry (chiral symmetry), which can already be seen on the level of the matrix: The symmetry translates into a -symmetry of , which are the two blocks of after block-diagonalizing it: In addition, the model has -symmetry and therefore -symmetry as well, which puts it into symmetry class BDI. This class features a winding number through its matrix with . The matrix is already in block-off-diagonal form and hence, the winding number is given by, cf. App., The band-structure of the periodic system is shown in the left part of Fig. 3, and the eigenfrequencies of the open system are given in the middle part of the figure at the point (see below). Up to here, we were free to discuss the problem in terms of instead of . However, this is no longer possible once , as considered in the next example. Class AIII in 1D The above model is now supplemented by a non-vanishing matrix. This breaks the - and the particle-hole symmetry, but the chiral symmetries on the two subspaces (positive / negative eigenfrequencies) are left invariant. In the case that , all spectral gaps remain open and hence the topological index will not change. The evolution of the gap as well as of the edge mode (which stays invariant) for increasing is shown in the middle part of Figure 3. An exemplary band-structure for can be seen in right part of Figure 3. Breaking - and -symmetry of the BDI model did not change the topological index, because the index relies on chiral symmetry only. Class D in 1D Spectra of examples belonging to classes BDI ( Figure 3: Spectra of examples belonging to classes BDI () and AIII (). Left: The band-structure of the BDI model given in equation 18. Middle: Spectrum of an open AIII chain as a function of . Blue lines denote edge modes, whereas the gray areas represent the bulk modes. Right: The bandstructure of the AIII model for . Parameters chosen to obtain the figures are: , and . To break the -symmetry while keeping -symmetry we need to add further degrees of freedom. Starting point are two copies, and , of the above BDI model We assume that both share the same . For , the model belongs to BDI and the winding number of the lowest two bands is given by By choosing or , and turning on , we break all the symmetries except for the high-frequency -symmetry This puts the model into symmetry class D. The index gets reduced to a index, the parity of the winding number. In the case that , the breaking of the -symmetry makes the cases and equivalent, as the two edge modes can hybridize and disappear from the gap, see Fig. 4. In case that , the single edge mode from the BDI model remains as displayed in Fig. 5. Spectra of examples belonging to classes BDI ( Figure 4: Spectra of examples belonging to classes BDI () and D (trivial) (). Left: The band-structure of the periodic BDI model given in equation 21. Middle: Spectrum of the open D model. The parity of the winding number is even, therefore the topological edge modes are not protected upon turning on . Right: The bandstructure of the D model for . 
Parameters chosen to obtain the figures are: , , , and . Spectra of examples belonging to classes BDI ( Figure 5: Spectra of examples belonging to classes BDI () and D (non-trivial) (). Here, the parity of the winding number is odd and hence the topological edge mode is protected even when . Parameters chosen to obtain the figures are: , , , and . Class A in 2D The topology of the discussed one-dimensional models relied on the presence of a - (-) symmetry. The next model we look at does not rely on any symmetries at all and the topological index will be the Chern number. To obtain a non-vanishing Chern number, we need to break -symmetry by choosing and therefore we need to have at least two degrees of freedom per unit cell. The matrix can only take the form from equation 17 which leaves us with finding a suitable matrix. To this end, it is helpful to transform , as in equation 19, to and define By varying , we can continuously deform into a model with two decoupled blocks . If the bulk-gaps remain opened during the interpolation from to , the Chern number of any subspace will not change, and we can focus on constructing non-trivial subblocks of . We now focus on the block characterized by . This matrix is hermitian and can be written as where are the Pauli matrices and a vector with real coefficients. The vector contains all the information about the eigensolutions of the problem and therefore also about the Chern numbers of the bands. In case that for all and , we can define . Upon varying and through the Brillouin zone traces out a closed surface in . It can be shown, that the number of net encircling of the origin by this surfaces corresponds to the Chern number of the lower band Bernevig and Hughes (2013). A possible choice of coefficients giving rise to a non-trivial band-structure is Owing to the fact that and share the same eigenvalues, it is easy to see that the dynamical matrix is parameterized by for some suitable and . The approximative argument at is supported by a numerical calculation for , which confirms the presence of a non-zero Chern number. In addition, we show the spectrum of a semi-infite cylinder in Figure 6, revealing the existence of an edge mode within the bulk-gap. The presented model is a minimal model in the sense of required degrees of freedom. However, it is probably not the simplest model for an actual implementation. For such a purpose, a simpler model can be found in Ref. Nash et al. (2015). Spectrum of the class A model on a semi-infinite cylinder as a function of the wave vector around the cylinder. Gray areas represent a continuum of bulk modes, whereas blue lines denote the chiral surface modes. Parameters chosen to obtain the figure are: Class AII in 2D Up to here, all examples we looked at where based on symmetries which square to . However, we can also supplement symmetries to obtain new symmetries which can square to . As an example we discuss the quantum spin hall like system presented in Ref. Süsstrunk and Huber (2015). It mimics a Hofstadter model Hofstadter (1976) at flux plus its time-reversed copy. Its dynamical matrix is and . Spectrum of the class AII model on a semi-infinite cylinder as a function of the wave vector around the cylinder. Parameters chosen to obtain the figure are: Figure 7: Spectrum of the class AII model on a semi-infinite cylinder as a function of the wave vector around the cylinder. Parameters chosen to obtain the figure are: and . 
The structure of carries a -symmetry whose anti-unitary representation is just given by the complex conjugation , i.e., . Therefore this symmetry squares to . However, there is an additional structure which allows to generate an augmented symmetry which gets liftet to a symmetry of . Otherwise there are no relevant symmetries present away from , which puts the problem into symmetry class AII. Repeating the calculation from the previous model results in Figure 7. For further details on this model we refer directly to Ref. Süsstrunk and Huber (2015). In summary, we have developed a framework to map the equations of motion of a set of coupled linear mechanical oscillators to a quantum mechanical tight-binding problem. Using this mapping we showed how one can import the topological classification of non-interacting electron systems to the realm of classical mechanical metamaterials. Using the presence or absence of non-reciprocal elements as a key aspect of metamaterials, we further adapted the electronic classification to mechanical problems. With our work we provide the stage for the development of potentially new classes of materials, where topological boundary modes can be used to provide a specific functionality. Moreover, we help to clarify the recent literature in the field, where topological phonon modes have been predicted without an overarching framework. We hope, that with the extensive example section we provided the reader with the tools and concepts to construct more topological phonon models using simple building blocks. Many new directions in the field of topological mechanical metamaterials are still unexplored. Obvious problems to be solved are the presentation of a topological surface mode in a two or three dimensional continuous material or the miniaturization of the effects observed at the centimeter scale down to micron scale. Moreover, examples of materials in many of the possible symmetry classes characterized in this report have neither been theoretically proposed nor experimentally implemented. We hope that with this work, we stimulate research in this direction. Moreover, our results also provide the framework to import ideas based on crystalline symmetries. Finally, the efficient characterization of model-materials according to their topological properties is an important open problem. In electronic systems the search for topological band-structures is now routinely done using high-throughput ab-initio calculations in combination with advanced numerical tools z2p (2016) to determine the topological indices. Our framework should provide the basis for a similar approach in mechanical metamaterials and therefore open the route to various applications based on topological boundary modes. We acknowledge fruitful discussions with O.R. Bilal, T. Bzdušek, C. Daraio, and A. Soluyanov. This work was supported by the Swiss National Science Foundation. Example on how to obtain topological indices As promised we provide formulas for the primary indices in one, two, and three dimensions. The Chern number in two dimension as a functional of is given by Chiu et al. (2015) The winding number in odd dimensions is given by Ryu et al. (2010) in one dimension and in three dimensions. These two expressions can be unified using the winding number density Using these expressions for the winding number, we now argue how a combined presence of - (-) symmetries squaring to and leads to the vanishing of certain indices. 
Let us introduce the notation for the “signature” of the symmetry class , with denoting that the respective symmetries square to . Different signatures dictate different constraints on the winding number density. Let us start with . Here, we have Ryu et al. (2010); Schnyder et al. (2008) giving rise to the constraint for the winding number density. This leads to a vanishing winding number in one dimension. For one finds Ryu et al. (2010); Schnyder et al. (2008) which in turn leads to This results in a vanishing winding number in three dimensions. For cases where we have two signatures, e.g., and due to the enrichment of the -symmetry by a unitary symmetry to a -symmetry (see main text), both constraints apply and one loses additional indices. Low-frequency physics in class BDI Materials in class BDI carry a index as topological classification. For the low-frequency physics, this index can, e.g., be relevant for states of self stress Kane and Lubensky (2013); Lubensky et al. (2015). However, if we choose these states are not accessible, and we find that the index is always trivial, which can be seen from the following explicit calculation. To proof this claim we start from . In this case, the -matrix is given by and the eigenfunctions can be found to be The projector onto the negative bands is therefore given by and the -matrix is obtained to be This -matrix has a trivial index which concludes the proof in case . The general statement for arbitrary follows from the fact that cannot close the spectral gap at , and therefore cannot change the index, as shown next. We show below that cannot induce a gap-closing at , i.e., if there is a gap for some value of , there will be a gap for any . A gap-closing at requires at least one eigensolution of with an eigenvalue equal zero, or equivalently, we need together with we find where the second equality follows from Laplace’s formula. This shows that the determinant in A38 is independent of , which proofs the statement. Linearized equations of motion of a gyroscope We intend to describe the motion of the gyroscope close to its potential minimum, such that the motion of the tip can be described by two cartesian coordinates and . To give a general expression for the Lagrangian of the gyroscope, we start from a description in Euler angles , , such that and describes the rotation with respect to the axis through the center of mass, cf. Figure A8. In terms of the principal axis of the gyroscope, its Lagrangian is given by where denotes potential energy due to gravity and couplings to neighboring gyroscopes. are the angular velocities with respect to the principle axes and the corresponding moments of inertia. The gyroscope is assumed to be symmetric, such that and is associated with the rotation with respect to the axis through the center of mass. In Euler angles the Lagrangian takes the form In case that , which is the case we consider, the Euler-Lagrange equation for reads and we find the conserved quantity . Coordinate systems for a spinning gyroscope. Figure A8: Coordinate systems for a spinning gyroscope. The remaining Euler Lagrange equations are, for the variable , and for In these two equations, we change variables to and , make use of , and make a lowest-order expansion in and . 
As a result, the equation for becomes while for the one of we find We can further simplify these equations by adding times equation A48 to times equation A49 to obtain Similarly, by subtracting times equation A48 from times equation A49 we find These last two equations can be summed up as which is the desired form as used in the main part of the paper. Example for the simplification of a continuum model We motivated our approach to be relevant for more than only discrete systems. In the following we give a simple example on how it can be applied to a continuum model. The example is based on Ref. Xiao et al. (2015a), where such a reduction was performed, and we discuss how it can be cast into our classification. The system consists of a chain of dumb-bell-shaped elements, arranged in a periodic array along the axis of the dumb bells. For the details of the setup, please directly consult Ref. Xiao et al. (2015a). The collective behaviour of the full system can be understood from a constructional point of view. Each unit cell has its eigenmodes, which get coupled to the eigenmodes of the neighboring unit cell. In general, couplings between all neighboring modes need to be considered to understand the collective behaviour of the full system such as its bandstructure. However, parts of the bandstructure can typically already be understood by taking only a few local modes into account. By an apt choice of the geometry of the unit cell Xiao et al. (2015a), one can obtain a bandstructure which effectively has only two bands at around kHz, and these two bands originate from two local modes only. Hence, if we are only interested in these two bands, we can reduce the full problem to a discrete model with only two modes per unit cell. The most general form of the -matrix is then given by where is the vector of Pauli matrices. The exact coefficients depend on the details of the implementation. Nevertheless the structure of them can already be deduced from the symmetry properties of the modes and their couplings. As it turns out, one of the two modes is symmetric along the axis, while the other mode is anti-symmetric. The two modes have different eigenfrequencies and every symmetric (anti-symmetric) mode couples to its two adjacent symmetric (anti-symmetric) modes with the same coefficient due to periodicity. This implies that and . Within a unit cell the two modes do not couple (they are eigenmodes after all), but the symmetric mode on one site couples to the anti-symmetric mode on the next site. As they have different symmetries, the coupling carries an alternating sign. From this follows that and . We therefore find that The matrix has the standard -symmetry and it has a high-frequency particle-hole and chiral symmetry Because , the symmetries of get lifted to symmetries of and we find that the high-frequency part of the spectrum belongs to class BDI. The topological index is given by the winding number Chiu et al. (2015) where . By connecting two chains with distinct topological coefficients, a localized mode must exist at the interface. Such a configuration has been built and the localized mode was experimentally observed in Ref. Xiao et al. (2015a). • Cummer et al. (2016) S. A. Cummer, J. Christensen, and A. Alù, Controlling sound with acoustic metamaterials, Nature Reviews Mat. 1, 16001 (2016), URL. • Kushwaha et al. (1993) M. Kushwaha, P. Halevi, L. Dobrzysnki, and B. Djafari-Rouhani, Acoustic band structure of periodic elastic composites, Phys. Rev. Lett. 71, 2022 (1993), URL. • Liu et al. 
(2000) Z. Liu, X. Zhang, Y. Mao, Y. Y. Zhu, Z. Yang, C. T. Chan, and P. Sheng, Locally Resonant Sonic Materials, Science 289, 1734 (2000), URL. • Prodan and Prodan (2009) E. Prodan and C. Prodan, Topological Phonon Modes and Their Role in Dynamic Instability of Microtubules, Phys. Rev. Lett. 103, 248101 (2009), URL. • Kane and Lubensky (2013) C. L. Kane and T. C. Lubensky, Topological boundary modes in isostatic lattices, Nature Phys. 10, 39 (2013), URL. • Chen et al. (2014) B. G. Chen, N. Upadhyaya, and V. Vitelli, Nonlinear conduction via solitons in a topological mechanical insulator, Proc. Natl. Acad. Sci. USA 111, 13004 (2014), URL. • Chen et al. (2015) B. G. Chen, B. Liu, A. A. Evans, J. Paulose, I. Cohen, V. Vitelli, and C. D. Santangelo, Topological mechanics of origami and kirigami, arXiv:1508.00795 (2015), URL. • Paulose et al. (2015a) J. Paulose, B. G. Chen, and V. Vitelli, Topological modes bound to dislocations in mechanical metamaterials, Nature Phys. 11, 153 (2015a), URL. • Paulose et al. (2015b) J. Paulose, A. S. Meeussen, and V. Vitelli, Selective buckling via states of self-stress in topological metamaterials, Proc. Natl. Acad. Sci. USA 112, 7639 (2015b), URL. • Xiao et al. (2015a) M. Xiao, G. Ma, Z. Yang, P. Sheng, Z. Q. Zhang, and C. T. Chan, Geometric phase and band inversion in periodic acoustic systems, Nature Phys. 11, 240 (2015a), URL. • Süsstrunk and Huber (2015) R. Süsstrunk and S. D. Huber, Observation of phononic helical edge states in a mechanical topological insulator, Science 349, 47 (2015), URL. • Nash et al. (2015) L. M. Nash, D. Kleckner, A. Read, V. Vitelli, A. M. Turner, and W. T. M. Irvine, Topological mechanics of gyroscopic metamaterials, Proc. Natl. Acad. Sci. USA 112, 14495 (2015), URL. • He et al. (2015) C. He, X. Ni, H. Ge, X.-C. Sun, Y.-B. Chen, M.-H. Lu, X.-P. Liu, L. Feng, and Y.-F. Chen, Acoustic topological insulator and robust one-way sound transport, arXiv:1512.03273 (2015), URL. • Meeussen et al. (2016) A. S. Meeussen, J. Paulose, and V. Vitelli, Topological design of geared metamaterials, arXiv:1602.08769 (2016), URL. • Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010), URL. • Kitaev (2009) A. Kitaev, Periodic table for topological insulators and superconductors, AIP Conf. Proc. 1134, 22 (2009), URL. • Ryu et al. (2010) S. Ryu, A. P. Schnyder, A. Furusaki, and A. W. W. Ludwig, Topological insulators and superconductors: tenfold way and dimensional hierarchy, New J. Phys. 12, 065010 (2010), URL. • Schnyder et al. (2008) A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Classification of topological insulators and superconductors in three spatial dimensions, Phys. Rev. B 78, 195125 (2008), URL. • Berg et al. (2011) N. Berg, K. Joel, M. Koolyk, and E. Prodan, Topological phonon modes in filamentary structures, Phys. Rev. E 83, 021913 (2011), URL. • Po et al. (2014) H. C. Po, Y. Bahri, and A. Vishwanath, Phonon analogue of topological nodal semimetals, arXiv:1410.1320 (2014), URL. • Yang et al. (2015) F. Yang, F. Gao, X. Shi, X. Lin, Z. Gao, Y. Chong, and B. Zhang, Topological Acoustics, Phys. Rev. Lett. 114, 114301 (2015), URL. • Kariyado and Hatsugai (2015a) T. Kariyado and Y. Hatsugai, Hannay Angle: Yet Another Symmetry Protected Topological Order Parameter in Classical Mechanics, arXiv:1508.06946 (2015a), URL. • Kariyado and Hatsugai (2015b) T. Kariyado and Y. Hatsugai, Manipulation of Dirac Cones in Mechanical Graphene, S. Rep. 5, 18107 (2015b), URL. 
• Yang and Zhang (2016) Z. Yang and B. Zhang, Acoustic Weyl nodes from stacking dimerized chains, arXiv:1601.07966 (2016), URL. • Vitelli et al. (2014) V. Vitelli, N. Upadhyaya, and B. G. Chen, Topological mechanisms as classical spinor fields, arXiv:1407.2890 (2014), URL. • Wang et al. (2015a) P. Wang, L. Lu, and K. Bertoldi, Topological Phononic Crystals with One-Way Elastic Edge Waves, Phys. Rev. Lett. 115, 104302 (2015a), URL. • Peano et al. (2015) V. Peano, C. Brendel, M. Schmidt, and F. Marquardt, Topological Phases of Sound and Light, Phys. Rev. X 5, 031011 (2015), URL. • Rocklin et al. (2015a) D. Z. Rocklin, B. G. Chen, M. Falk, V. Vitelli, and T. C. Lubensky, Mechanical Weyl Modes in Topological Maxwell Lattices, arXiv:1510.04970 (2015a), URL. • Rocklin et al. (2015b) D. Z. Rocklin, S. Zhou, K. Sun, and X. Mao, Transformable topological mechanical metamaterials, arXiv:1510.06389 (2015b), URL. • Sussman et al. (2015) D. M. Sussman, O. Stenull, and T. C. Lubensky, Topological boundary modes in jammed matter, arXiv:1512.04480 (2015), URL. • Lubensky et al. (2015) T. C. Lubensky, C. L. Kane, X. Mao, A. Souslov, and K. Sun, Phonons and elasticity in critically coordinated lattices, Rep. Prog. Phys. 78, 109501 (2015), URL. • Pal et al. (2016) R. K. Pal, M. Schaeffer, and M. Ruzzene, Helical edge states and topological phase transitions in phononic systems using bi-layered lattices, J. Appl. Phys. 119, 084305 (2016), URL. • Salerno et al. (2016) G. Salerno, T. Ozawa, H. M. Price, and I. Carusotto, Floquet topological system based on frequency-modulated classical coupled harmonic oscillators, Phys. Rev. B 93, 085105 (2016), URL. • Khanikaev et al. (2015) A. B. Khanikaev, R. Fleury, S. H. Mousavi, and A. Alù, Topologically robust sound propagation in an angular-momentum-biased graphene-like resonator lattice, Nature Comm. 6, 8260 (2015), URL. • Mousavi et al. (2015) S. H. Mousavi, A. B. Khanikaev, and Z. Wang, Topologically protected elastic waves in phononic metamaterials, Nature Comm. 6, 8682 (2015), URL. • Xiao et al. (2015b) M. Xiao, W.-J. Chen, W.-Y. He, and C. T. Chan, Synthetic gauge flux and Weyl points in acoustic systems, Nature Phys. 11, 920 (2015b), URL. • Ni et al. (2015) X. Ni, C. He, X.-C. Sun, X.-p. Liu, M.-H. Lu, L. Feng, and Y.-F. Chen, Topologically protected one-way edge mode in networks of acoustic resonators with circulating air flow, New J. Phys. 17, 053016 (2015), URL. • Wang et al. (2015b) Y.-T. Wang, P.-G. Luan, and S. Zhang, Coriolis force induced topological order for classical mechanical vibrations, New J. Phys. 17, 073031 (2015b), URL. • Fleury et al. (2014) R. Fleury, D. L. Sounas, C. F. Sieck, M. R. Haberman, and Alù, Sound Isolation and Giant Linear Nonreciprocity in a Compact Acoustic Circulator, Science 343, 516 (2014), URL. • Teo et al. (2008) J. C. Y. Teo, L. Fu, and C. L. Kane, Surface states and topological invariants in three-dimensional topological insulators: Application to BiSb, Phys. Rev. B 78, 045426 (2008), URL. • Fu (2011) L. Fu, Topological Crystalline Insulators, Phys. Rev. Lett. 106, 106802 (2011), URL. • Hsieh et al. (2012) T. H. Hsieh, H. Lin, J. Liu, W. Duan, A. Bansil, and L. Fu, Topological crystalline insulators in the SnTe material class, Nature Comm. 3, 982 (2012), URL. • Xu et al. (2012) S.-Y. Xu, C. Liu, N. Alidoust, M. Neupane, D. Qian, I. Belopolski, J. D. Denlinger, Y. J. Wang, H. Lin, L. A. Wray, et al., Observation of a topological crystalline insulator phase and topological phase transition in PbSnTe, Nature Comm. 
3, 1192 (2012), URL. • Liu et al. (2014) C.-X. Liu, R.-X. Zhang, and B. K. VanLeeuwen, Topological nonsymmorphic crystalline insulators, Phys. Rev. B 90, 085304 (2014), URL. • Alexandradinata et al. (2014) A. Alexandradinata, C. Fang, M. J. Gilbert, and B. A. Bernevig, Spin-Orbit-Free Topological Insulators without Time-Reversal Symmetry, Phys. Rev. Lett. 113, 116403 (2014), URL. • Alexandradinata and Bernevig (2015) A. Alexandradinata and B. A. Bernevig, Spin-orbit-free topological insulators, Physica Scripta T164, 014013 (2015), URL. • Floquet (1883) G. Floquet, Sur les équations différentielles linéaires à coefficients périodiques, Ann. l'Écol. Norm. Sup. 12, 47 (1883), URL. • Lindner et al. (2011) N. H. Lindner, G. Refael, and V. Galitski, Floquet topological insulator in semiconductor quantum wells, Nature Phys. 7, 490 (2011), URL. • Chiu et al. (2015) C. K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Classification of topological quantum matter with symmetries, arXiv:1505.03535 (2015), URL. • Su et al. (1979) W. P. Su, J. R. Schrieffer, and A. J. Heeger, Solitons in Polyacetylene, Phys. Rev. Lett. 42, 1698 (1979), URL. • Bernevig and Hughes (2013) B. A. Bernevig and T. L. Hughes, Topological insulators and superconductors (Princeton University Press, 2013). • Hofstadter (1976) D. R. Hofstadter, Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields, Phys. Rev. B 14, 2239 (1976), URL. • z2p (2016) (2016), URL.
U.Va.'s Pfister Accomplishes Breakthrough Toward Quantum Computing
A sort of Holy Grail for physicists and information scientists is the quantum computer. Such a computer, operating on the highly complex principles of quantum mechanics, would be capable of performing specific calculations with capabilities far beyond even the most advanced modern supercomputers. It could be used for breaking computer security codes as well as for incredibly detailed, data-heavy simulations of quantum systems. It could be used for applying precise principles of physics to understanding the minute details of the interactions of molecules in biological systems. It could also help physicists unravel some of the biggest mysteries of the workings of the universe by providing a way to possibly test quantum mechanics. Such a computer exists in theory, but it does not exist in practicality – yet – as it would need to operate with circuitry at the scale of single atoms, which is still a daunting challenge, even to state-of-the-art experimental quantum science. To build a quantum computer, one needs to create and precisely control individual quantum memory units, called qubits, for information processing. Qubits are similar to the regular memory "bits" in current digital computers, but far more fragile, as they are microscopic constituents of matter and extremely difficult to separate from their environment. The challenge is to increase the number of qubits to a practical-size quantum register. In particular, qubits need to be created into sets with precise, nonlocal physical correlations, called entangled states. Olivier Pfister, a professor of physics in the University of Virginia's College of Arts & Sciences, has just published findings in the journal Physical Review Letters demonstrating a breakthrough in the creation of massive numbers of entangled qubits, more precisely a multilevel variant thereof called Qmodes. Entanglement dwells outside our day-to-day experience; imagine that two people, each tossing a coin on their own and keeping a record of the results, compared this data after a few coin tosses and found that they always had identical outcomes, even though each result, heads or tails, would still occur randomly from one toss to the next. Such correlations are now routinely observed between quantum systems in physics labs and form the operating core of a quantum computing processor. Pfister and researchers in his lab used sophisticated lasers to engineer 15 groups of four entangled Qmodes each, for a total of 60 measurable Qmodes, the most ever created. They believe they may have created as many as 150 groups, or 600 Qmodes, but could measure only 60 with the techniques they used. Each Qmode is a sharply defined color of the electromagnetic field. In lieu of a coin toss measurement, the Qmode measurement outcomes are the number of quantum particles of light (photons) present in the field. Hundreds to thousands of Qmodes would be needed to create a quantum computer, depending on the task. "With this result, we hope to move from this multitude of small-size quantum processors to a single, massively entangled quantum processor, a prerequisite for any quantum computer," Pfister said. Pfister's group used an exotic laser called an optical parametric oscillator, which emitted entangled quantum electromagnetic fields (the Qmodes) over a rainbow of equally spaced colors called an "optical frequency comb." 
Ultrastable lasers emitting over an optical frequency comb have revolutionized the science of precision measurements, called metrology, and paved the way to multiple technological breakthroughs. The inventors of the optical frequency comb, physicists John Hall of the National Institute of Standards and Technology and Theodor Hänsch of the Max-Planck Institute for Quantum Optics, were awarded half of the 2005 Nobel Prize in Physics for their achievement. (The other half went to Roy Glauber, one of the founding fathers of quantum optics.) With their experiments, Pfister's group completed a major step to confirm an earlier theoretical proof by Pfister and his collaborators that the quantum version of the optical frequency comb could be used to create a quantum computer. "Some mathematical problems, such as factoring integers and solving the Schrödinger equation to model quantum physical systems, can be extremely hard to solve," Pfister said. "In some cases the difficulty is exponential, meaning that computation time doubles for every finite increase of the size of the integer, or of the system." However, he said, this only holds for classical computing. Quantum computing was discovered to hold the revolutionary promise of exponentially speeding up such tasks, thereby making them easy computations. "This would have tremendous societal implications, such as making current data encryption methods obsolete, and also major scientific implications, by dramatically opening up the possibilities of first-principle calculations to extremely complex systems such as biological molecules," Pfister said. Quantum computing can be summarized by qubit processing; computing with single elementary systems, such as atoms or monochromatic light waves, as memory units. Because qubits are inherently quantum systems, they obey the laws of quantum physics, which are more subtle than those of classical physics. Randomness plays a greater role in quantum evolution than in classical evolution, Pfister said. Randomness is not an obstacle to deterministic predictions and control of quantum systems, but it does limit the way information can be encoded and read from qubits. "As quantum information became better understood, these limits were circumvented by the use of entanglement, deterministic quantum correlations between systems that behave randomly, individually," he said. "As far as we know, entanglement is actually the 'engine' of the exponential speed up in quantum computing."
Fariss Samarrai | EurekAlert!
Next Article in Journal Next Article in Special Issue Previous Article in Journal Previous Article in Special Issue Article Menu Export Article A Greatly Under-Appreciated Fundamental Principle of Physical Organic Chemistry Robin A. Cox Present address: 16 Guild Hall Drive, Scarborough, ON, M1R 3Z8, Canada; Tel.: +1-416-759-9625. Received: 19 October 2011; in revised form: 10 November 2011 / Accepted: 14 November 2011 / Published: 28 November 2011 reaction mechanism; intermediate; lifetimes; excess acidity correlations 1. Introduction In recent years, the study of the mechanisms of organic reactions has been considerably enhanced by the study of putative reaction intermediates [1], often under conditions in which the species are stable enough for spectroscopic examination. For instance, carbocations and other species have been studied extensively in superacid media by Olah and his colleagues [24]. However, if a species is to be a reaction intermediate, it has to be stable enough to have a lifetime of at least a few molecular vibrations under the reaction conditions, say greater than 10−13–10−14 s [5]. Jencks pointed this out a number of years ago now [6], as the concept of an “enforced mechanism”; if a species cannot exist under the reaction conditions a mechanism involving it is impossible, and an alternate one is “enforced”. At the time Jencks wrote his review [6] not a lot was known about the lifetimes of putative reaction intermediates. However, more is known now, and although it is still not easy to apply, the author believes that much more attention has to be paid to what I might call the “Jencks Principle”. For instance, it is certain that primary carbocations cannot exist in a primarily aqueous medium [7], although mechanisms involving them are still occasionally proposed [8]. It is now apparent that this is true of secondary carbocations too [9,10]. In some (but not all) textbooks one still sees mention of “mixed SN1 and SN2” mechanisms involving secondary substrates [11], due primarily to the early work of the Hughes and Ingold school [12,13], which has since been discredited [13]. It is now well established that secondary substrates react by an SN2 process [14], for instance as shown in Scheme I, although for the example shown [15,16] the specific mechanism given is still speculative. The scheme is drawn this way in consequence of the observation that hydroxide ion does not add to carbonyl groups directly, but instead attacks a water molecule which does the actual addition [1719]. Enough kinetic evidence to prove or disprove this probably exists [15,16], and work to do this is underway [20]. Hydroxide ion is not very reactive. It is less solvated, and hence much more reactive, in alcohol solvents, and in pure DMSO its reactivity is increased by some twelve orders of magnitude [21]. For the mechanisms of reactions in aqueous media, far more important is the observation that species such as H3O+ (usually called the Eigen cation [22]), H5O2+ (usually called the Zundel cation [23,24], although also strongly preferred by the school of Vinnik and Librovich at the Institute of Physical Chemistry in Moscow [25]), H9O4+ (first postulated by Bell [26], but often (mistakenly) also called the Eigen cation) and the many others which have been proposed [27] (not that there has ever been any believable experimental evidence for any of them [28,29]) do not have lifetimes long enough to exist. 
Although far less work has been done, recent studies show that HO cannot exist as such in water either [3032]. Recent very high-quality IR measurements on acid solutions [33,34] show that the only structure that has any kind of real existence in them is the proposed H13O6+ [35], shown in Scheme II [34], but even this has a very short lifetime; the authors state [36]: “The lifetime of the five central protons is close to the time of their vibrational transitions. In ~70% of these cations it is shorter than the time of normal vibrations and the IR spectrum degenerates to a continuum absorption”. In addition, in several modern theoretical calculations on proton clusters containing many water molecules it is found not to be possible to isolate the positive charge, it is simply “on the cluster” as a whole [37]. Consequently, we may only speak of “Haq+” and “HOaq” as being reactants [2834]. The Grotthuss chain transfer process along hydrogen bonds in water simply ensures that a proton or a hydroxide ion is available instantaneously where or when it is needed. (This is such a widely accepted transport mechanism in water that specific references to it are difficult to find. The original is [38]). This has all kinds of consequences for reaction mechanisms in predominantly aqueous acidic and basic media. For instance, we can no longer speak of “general” and “specific” acid and base catalysis of reactions. Far better to speak of “pre-equilibrium proton transfer”, in the case of reactions that involve the formation of a stable ionized intermediate (usually by resonance), and of “proton transfer as part of the rate-determining step”, in the other cases. Several examples follow. The highly structured nature of liquid water [39] also ensures that reaction mechanisms involving several water molecules acting in concert are also favored. The entropy involved in bringing water molecules into the right positions is not a concern as the structure is already there, and the Grotthuss process ensures that all proton transfers are essentially instantaneous. Several examples of reactions of this type will be given as well. 2. Results and Discussion 2.1. General Acid Catalysis in Strong Acid Media As far as the common strong acids HCl, HClO4 and H2SO4 are concerned, the only acid species present is “Haq+” under normal conditions, and reactions in all of them therefore ought to proceed at the same rate at the same acid concentration [40]. Sulfuric acid is the only one that can be used from 0 wt% to 100 wt%, the dilute solution containing Haq+. Above the 1:1 H2O:H2SO4 molecular ratio (84.48 wt%) there is, of course, no free water present, but the solution now contains catalytically active undissociated sulfuric acid molecules. Above 99.5 wt% autoprotolysis becomes important, with the very strong acid species H3SO4+ present as a possible catalyst as well [41]. I found catalysis by both of the latter species as far back as 1974 in the Wallach rearrangement of azoxybenzene, Scheme III [4143]. This reaction has been extensively reviewed [44,45], so I will not say much about it here. The species which are stable enough to exist in the reaction solution are indicated in the Scheme; interestingly, both of them have been observed experimentally under stable ion conditions [4]. Theoretical calculations have shown the dicationic species to have the structure shown, with little communication between the two halves of the molecule [42]. 
Interestingly Haq+ is not a strong enough acid species to catalyze the reaction, only catalysis by H2SO4 and by H3SO4+ being observed [41,44]. The reaction does not work in HClO4, a stronger acid system in H0 terms but only containing Haq+ with no undissociated HClO4 molecules present [45,46]. It does go in pure FSO3H and ClSO3H, both being quite strong acid species [46]. Another case of general acid catalysis was observed in the hydrolysis of several ethyl thiolbenzoates in sulfuric acid at concentrations above 60 wt%, where catalysis by Haq+ was observed, catalysis by undissociated H2SO4 molecules taking over above 80 wt% in concentration [47], Scheme IV. 2.2. Ether Hydrolyses The hydrolyses of trioxane and similar molecules in dilute acid have been taken by many authors (even by myself [48]) to be typical A1 processes, protonation followed by rate-determining breakup of the protonated intermediate. However, if H3O+ cannot exist in water, other species with positive charge on oxygen which is not resonance-stabilized are not going to be capable of existence either. This means that the mechanism of the hydrolysis of trioxane is going to be that given in Scheme V. (Scheme V shows the breakup to three formaldehyde molecules taking place all at once, but a similar stepwise breakup is of course also possible.) There is plenty of kinetic data on this reaction in several different acid media available for analysis [49]. The preferable method to use for this is the excess acidity correlation analysis [48], which is used here. The applicable rate equation is shown as Equation 2.

$$k_\psi C_{\mathrm{S}} = k_0\, a_{\mathrm{S}}\, a_{\mathrm{H_2O}}\, a_{\mathrm{H^+_{aq}}} / f_{\ddagger} = k_0\, C_{\mathrm{S}}\, a_{\mathrm{H_2O}}\, C_{\mathrm{H^+_{aq}}}\, f_{\mathrm{S}}\, f_{\mathrm{H^+_{aq}}} / f_{\ddagger} \qquad (1)$$

$$\log k_\psi - \log C_{\mathrm{H^+_{aq}}} - \log a_{\mathrm{H_2O}} = \log k_0 + m^{\ddagger} m^{*} X \qquad (2)$$

Here the observed rate constants are kψ [49], the medium-independent rate constant (i.e., the rate constant in the aqueous standard state) is k0, the proton concentration is CHaq+, the water activity is aH2O and the excess acidity is X, all available data for all three acid systems [48]. The slope parameters m* and m‡ describe the behavior of the protonated substrate and the transition state as the acidity changes, necessarily combined here [48]. Plots according to Equation 2 are given in Figure 1. As can be seen, the plots for all three acids are accurately linear. For illustration purposes a thick line is given for all of the data combined, slope 1.333 ± 0.022, intercept –9.198 ± 0.018, correlation coefficient 0.993 over 54 points. However, the points for the three individual acids fall (very accurately, correlation coefficients 0.9990 in HCl, 0.9994 in HClO4, 0.9994 in H2SO4) on slightly different lines, which undoubtedly reflects the fact that the water activities for the three acids are not known equally well. Water activities in the aqueous sulfuric acid medium [50] are very accurately known [51], but this is not the case for HCl [52-54] and, particularly, HClO4 [55-58]. All of the plots fit the appropriate lines more closely than was previously found by treating the process as a traditional A1 reaction [48]. If this process is really a case of general acid catalysis, rates measured in aqueous buffer systems should show this. Trioxane hydrolysis is too slow a reaction to have been studied in this way, but the closely related hydrolysis of paraldehyde (the acetaldehyde trimer) is much faster [48], and evidence for general acid catalysis has indeed been found [59,60], although this fact does not seem to be widely known (or has been ignored). 
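Operationally, Equation 2 is a linear correlation: the left-hand side is plotted against the excess acidity X, the slope giving m‡m* and the intercept log k0. A minimal sketch of such a fit is given below; the numbers are illustrative placeholders, not the trioxane data of Figure 1.

```python
import numpy as np

# Placeholder values standing in for measured rate constants and tabulated
# acidity data; real analyses use kpsi, CH+, aH2O and X for each acid medium.
X        = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])               # excess acidity
log_kpsi = np.array([-7.8, -7.1, -6.4, -5.7, -5.0, -4.3])         # log observed rate constant
log_CH   = np.array([0.55, 0.75, 0.90, 1.00, 1.10, 1.18])         # log proton concentration
log_aw   = np.array([-0.03, -0.08, -0.15, -0.25, -0.38, -0.55])   # log water activity

y = log_kpsi - log_CH - log_aw          # left-hand side of Equation 2
slope, intercept = np.polyfit(X, y, 1)  # slope = m‡ m*, intercept = log k0
print(f"m‡m* = {slope:.3f}, log k0 = {intercept:.3f}")
```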
A plot like Figure 1 can also be drawn for paraldehyde, but the kinetics cover a much smaller acidity range, and the scatter is bad. Another ether system for which kinetic results are available [61] is the hydrolysis of diethyl ether at high temperatures and high acidities in aqueous sulfuric acid. The mechanism proposed here is shown in Scheme VI. This is essentially the same mechanism as that shown in Scheme V, and the same excess acidity rate equation, Equation 2, applies. In sulfuric acid this mechanism is only going to apply as long as there is free water available, i.e., not above a concentration of 85.48 wt%. Above this acidity another well-characterized mechanism takes over [61], involving a much faster direct reaction between the diethyl ether and SO3, which is available for reaction above this acidity. Thus in an excess acidity plot one would expect linearity below 85.48 wt%, and an upward deviation above this point. This is exactly what is observed, as Figure 2 illustrates. The topmost point in Figure 2 is at an acidity of 90 wt%, and deviates upwards as expected. (In the original paper [61] a plot of log rate constant against acidity curves downward over the acidity region which gives linearity here.) The m*m slope is 0.949 ± 0.015, and as different temperatures are available, the activation parameters for the reaction can be calculated: ΔH = 32.8 ± 1.4 kcal·mol−1; ΔS = −12.4 ± 4.7 cal·deg−1·mol−1, both perfectly reasonable numbers. (They only concern the substrate, as X, log CHaq+and log aH2O have all been corrected to the reaction temperature, as before [48].) The correlation coefficient is 0.9993. Figures 1 and 2 constitute strong evidence in favor of the mechanisms given here. Interestingly, it does not matter whether the substrate can be considered to be primarily protonated at the acidity of the reaction or not; oxygen-protonated species in which the charge cannot be delocalized are not going to be reaction intermediates as their lifetimes are too short! When the charge can be delocalized, intermediate lifetimes are much longer. For instance, the methoxymethyl cation, where the charge is delocalized over carbon and oxygen, is calculated to have a lifetime of about 1 ps [62], which, although short, is quite long enough for it to be a reaction intermediate. 2.3. Amide Hydrolyses Benzamides, and presumably other suitable amides, have two hydrolysis mechanisms [63]. In weakly acidic aqueous H2SO4 media, a pre-equilibrium proton transfer gives a stable delocalized protonated amide intermediate, to which water adds; see Scheme VII. From this a neutral tetrahedral intermediate is formed directly; charged ones cannot exist in an aqueous medium. (Log rate constants, corrected for incomplete amide protonation, are linear in the log water activity, slope two. Molarity-based water activities must be used for consistency with the other species concentrations, rather than the listed mole-fraction-based ones [48].) In more strongly acidic media the mechanism changes [63]; the kinetics show a second, concerted, proton transfer taking place, giving an acylium ion which is stable under the reaction conditions, and that two water molecules are involved [63]. This mechanism is a bit tricky to draw, but I have made an attempt in Scheme VIII. Since an acylium ion is involved, this mechanism would only occur for those amides capable of giving stable ones, primarily benzamides. 
For other types of amide evidence is lacking; amides are particularly stable and their acid hydrolysis is very slow and quite difficult to study. The catalyzing acid is given as Haq+; presumably in H2SO4 media stronger than ~85 wt% the catalyst would be undissociated H2SO4, see above [63]. 2.4. Ester Hydrolyses At acidities below ~85 wt% the mechanisms of these processes are similar to those for benzamides [63] (and benzimidates [64]) as shown in Scheme IX [64], which differs from Scheme VII for amides in that the neutral tetrahedral intermediate does not contain a nitrogen atom, and so it is susceptible to 18O-exchange, which is observed [65]; it is essentially not found in amide hydrolysis [66]. In the strong acid region, above ~85 wt% H2SO4, other mechanisms take over. If the substrate contains a group capable of forming a stable carbocation, e.g., a benzylic or a tertiary group, this can leave directly from the protonated ester, and this can be the preferred mechanism at acidities much lower than 85 wt% H2SO4 [67,68]. This is shown in Scheme X. For other esters in strong acid an additional proton transfer is probably involved, to give an acylium ion; the previously proposed [67,68] “proton switch” mechanism is probably wrong. This again is quite difficult to draw, but I have made an attempt in Scheme XI. This mechanism is not yet established, but work is underway to do this [20]. In basic media, it is becoming increasingly apparent that hydroxide ions do not themselves add directly to carbonyl groups, but that HOaq removes a proton from a water molecule which then adds to the carbonyl, the result being a neutral tetrahedral intermediate [1719]. Heavy-atom isotope effect studies make this appear even more likely [69]. Since the process is reversible, extensive oxygen exchange into the substrate is observed as well [70,71]. The most probable mechanism is given here as Scheme XII. Formation of a neutral intermediate ensures that the negative charge is dispersed into the solvent. Electronegative oxygen is certainly more able to support a negative charge than a positive one, but the principle of having any charge, positive or negative, dispersed as widely as possible ensures that all tetrahedral intermediates formed in either acidic or basic processes would be neutral. Species that are represented by various authors as T+, T, T± and, especially, T2− do not exist in aqueous media. 2.5. Mechanisms Involving Chains of Water Molecules There are quite a number of these known now. The principles seem to be that if a reaction can be achieved without any charge transfer taking place it is favored, and that reactions involving chains of water molecules are favorable because the structure necessary for reaction essentially already exists; water molecules do not have to be moved into position, which is unfavorable entropically. For instance, acylimidazoles hydrolyze by forming a tetrahedral intermediate directly, Scheme XIII [72]. Incidentally, this work showed that the excess acidity correlation analysis works well even for reactions that are not acid-catalyzed [72]. I proposed a mechanism for the hydrolysis of nitramide in neutral water on the basis of nothing but its elegance [73], and was gratified that detailed modern theoretical calculations, in the gas-phase and also in solution [74], showed that it was in fact correct. This is shown in Scheme XIV. The hydrolyses of acid chlorides and acid anhydrides are fast reactions which have not received a lot of attention. 
Several mechanisms have been proposed [7578], but the latest research would indicate that the actual mechanism may well be a simple cycle involving water as well, Scheme XV [78]. 3. Conclusions • If a species does not have a finite lifetime in the solution in which the reaction is performed it cannot be a reaction intermediate. No primary or secondary carbocations in aqueous media; only T0, no T+, T, T± or T2− tetrahedral intermediates. • Positive or negative charge, if present, will be as delocalized as possible during the reaction, especially in reaction intermediates, often into the aqueous solvent. A highly electronegative atom like oxygen is simply not going to support a positive charge all by itself. O+ is almost as unlikely as F+! • Also, reactions will be unimolecular, as far as possible, for entropic reasons (SN1 favored over SN2); however, mechanisms involving chains of water molecules are favored in aqueous media thanks to the highly structured nature of water and the Grotthuss process. There are a number of philosophical implications. Many years ago chemists weaned themselves from using “H+” as a reactant, once it was pointed out that free protons are only stable in a hard vacuum. Now we are going to have to wean ourselves from using “H3O+” or “HO” as reactants in aqueous solution as well. Of course these species do exist, under special circumstances. In sulfuric acid above the 1:1 mole ratio point (~85 wt%) all the remaining water is present in the form H3O+. The perchloric acid hydrate sold as a solid in glass vials is H3O+·ClO4 (and is pretty dangerous stuff!). The terms to use are “Haq+” and “HOaq”. We are going to have to cease using the terms “general” and “specific” acid and base catalysis. Much to be preferred, I think, is to refer to “pre-equilibrium proton transfer” when an intermediate that is stable under the reaction conditions is formed in a first fast step, and to “concerted with proton transfer”, or something similar, when the proton transfer is involved in the rate-determining step, as in many of the examples discussed above. Very recently some common organic reactions have begun to be studied in liquid ammonia as a solvent, rather than in water [79,80]. It is going to be very interesting to compare the mechanisms of the same reaction in the two different solvents. Valuable correspondence with Tina Amyes, Bill Bentley, John Marlier, Howard Maskill, Chris Reed and Evgenii Stoyanov is gratefully acknowledged, and I thank those who saw the poster I presented on this subject at a recent conference for their (mostly!) useful comments. • Conflict of InterestThe author declares no conflict of interest. References and Notes 1. For instance, the many reviews given in the series Reviews of Chemical Intermediates; Elsevier: Amsterdam, The Netherlands., commencing in1981. 2. Olah, G.A.; White, A.M. Stable carbonium ions. XCI. Carbon-13 nuclear magnetic resonance spectroscopic study of carbonium ions. J. Am. Chem. Soc 1969, 91, 5801–5810. [Google Scholar] 3. Olah, G.A. My search for carbocations and their role in chemistry. Angew. Chem. Int. Ed. Engl 1995, 34, 1393–1405, and references therein. [Google Scholar] 4. Olah, G.A.; Dunne, K.; Kelly, D.P.; Mo, Y.K. Stable carbocations. CXXIX. Mechanism of the Benzidine and Wallach rearrangements based on direct observation of dicationic reaction intermediates and related model compounds. J. Am. Chem. Soc 1972, 94, 7438–7447. [Google Scholar] 5. Moore, W.J. 
Physical Chemistry, 4th ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; p. 769. [Google Scholar] 6. Jencks, W.P. When is an intermediate not an intermediate? Enforced mechanisms of general acid-base catalyzed, carbocation, carbanion, and ligand exchange reactions. Acc. Chem. Res 1980, 13, 161–169. [Google Scholar] 7. Richard, J.P.; Amyes, T.L.; Toteva, M.M. Formation and stability of carbocations and carbanions in water and intrinsic barriers to their reactions. Acc. Chem. Res 2001, 34, 981–988, and references therein. [Google Scholar] 8. Vorob’eva, E.N.; Kuznetsov, L.L.; Gidaspov, B.V. Kinetics of decomposition of primary aliphatic N-nitroamines in aqueous sulfuric acid. Zh. Org. Khim 1983, 19, 698–704. [Google Scholar]Russ. J. Org. Chem 1983, 19, 615–620. 9. Dietze, P.E.; Jencks, W.P. Oxygen exchange into 2-butanol and hydration of 1-butene do not proceed through a common carbocation intermediate. J. Am. Chem. Soc 1987, 109, 2057–2062. [Google Scholar] 10. Dietze, P.E.; Wojciechowski, M. Oxygen scrambling and stereochemistry during the trifluoroethanolysis of optically active 2-butyl 4-bromobenzenesulfonate. J. Am. Chem. Soc 1990, 112, 5240–5244. [Google Scholar] 11. Bruice, P.Y. Organic Chemistry, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2001; pp. 380–381. [Google Scholar] 12. Ingold, C.K. Structure and Mechanism in Organic Chemistry, 2nd ed.; Cornell University Press: Ithaca, NY, USA, 1969; p. 430. [Google Scholar] 13. Murphy, T.J. Absence of SN1 involvement in the solvolysis of secondary alkyl compounds. J. Chem. Educ 2009, 86, 519–524. [Google Scholar] 14. Bentley, T.W.; Schleyer, P.v.R. The SN2-SN1 spectrum. 1. Role of nucleophilic solvent assistance and nucleophilically solvated ion pair intermediates in solvolyses of primary and secondary arenesulfonates. J. Am. Chem. Soc 1976, 98, 7658–7666. [Google Scholar] 15. Bunton, C.A.; Konasiewicz, A.; Llewellyn, D.R. Oxygen exchange and the Walden inversion in sec-butyl alcohol. J. Chem. Soc 1955, 604–607. [Google Scholar] 16. Bunton, C.A.; Llewellyn, D. R. Tracer studies on alcohols. Part II. The exchange of oxygen-18 between sec-butyl alcohol and water. J. Chem. Soc 1957, 3402–3407. [Google Scholar] 17. Mata-Segreda, J.F. Hydroxide as a general base in the saponification of ethyl acetate. J. Am. Chem. Soc 2002, 124, 2259–2262. [Google Scholar] 18. Haeffner, F.; Hu, C.-H.; Brinck, T.; Norin, T. The catalytic effect of water in basic hydrolysis of methyl acetate: A theoretical study. J. Mol. Struct. (Theochem.) 1999, 459, 85–93. [Google Scholar] 19. Hori, K.; Hashitani, Y.; Kaku, Y.; Ohkubo, K. Theoretical study on oxygen exchange accompanying alkaline hydrolysis of esters and amides. J. Mol. Struct. (Theochem.) 1999, 461–462, 589–596. [Google Scholar] 20. Cox, R.A. Scarborough, ON, Canada, Unpublished work; 2011. 21. Dolman, D.; Stewart, R. Strongly basic systems. VIII. The H function for dimethyl sulfoxide-water-tetramethylammonium hydroxide. Can. J. Chem 1967, 45, 911–924. [Google Scholar] 22. Eigen, M. Proton transfer, acid-base catalysis, and enzymatic hydrolysis. Part 1. elementary processes. Angew. Chem. Int. Ed. Engl 1964, 3, 1–19. [Google Scholar] 23. Zundel, G. Hydrogen bonds with large proton polarizability and proton transfer processes in electrochemistry and biology. Adv. Chem. Phys 2000, 111, 1–217, and many earlier papers. [Google Scholar] 24. Niedner-Schatteburg, G. Infrared spectroscopy and ab initio theory of isolated H5O2+: from buckets of water to the Schrödinger equation and back. Angew. Chem. Int. 
Ed. Engl 2008, 47, 1008–1011. [Google Scholar] 25. Librovich, N.B.; Maiorov, V.D.; Savel’ev, V.A. The H5O2+ ion in the vibrational spectra of aqueous solutions of strong acids. 26. Bascombe, K.N.; Bell, R.P. Properties of concentrated acid solutions. Discuss. Faraday Soc 1957, 24, 158–161. [Google Scholar] 27. Robertson, E.B.; Dunford, H.B. The state of the proton in aqueous sulfuric acid. J. Am. Chem. Soc 1964, 86, 5080–5089. [Google Scholar] 28. Ault, A. Telling it like it is: Teaching mechanisms in organic chemistry. J. Chem. Educ 2010, 87, 922–923. [Google Scholar] 29. Silverstein, T.P. The solvated proton is NOT H3O+! J. Chem. Educ 2011, 88, 875. [Google Scholar] 30. Roberts, S.T.; Ramasesha, K.; Petersen, P.B.; Mandal, A.; Tokmakoff, A. Proton transfer in concentrated aqueous hydroxide visualized using ultrafast infrared spectroscopy. J. Phys. Chem. A 2011, 115, 3957–3972. [Google Scholar] 31. Marx, D.; Chandra, A.; Tuckerman, M.E. Aqueous basic solutions: hydroxide solvation, structural diffusion, and comparison to the hydrated proton. Chem. Rev 2010, 110, 2174–2216. [Google Scholar] 32. Tuckerman, M.E.; Chandra, A.; Marx, D. Structure and dynamics of HOaq. Acc. Chem. Res 2006, 39, 151–158. [Google Scholar] 33. Stoyanov, E.S.; Stoyanova, I.V.; Reed, C.A. The structure of the hydrogen ion (Haq+) in water. J. Am. Chem. Soc 2010, 132, 1484–1486. [Google Scholar] 34. Stoyanov, E.S.; Stoyanova, I.V.; Reed, C.A. The unique nature of H+ in water. Chem. Sci 2011, 2, 462–472. [Google Scholar] 35. Jiang, J.-C.; Wang, Y.-S.; Chang, H.-C.; Lin, S.H.; Lee, Y.T.; Niedner-Schatteburg, G.; Chang, H.-C. Infrared spectra of H+(H2O)5–8 clusters: evidence for symmetric proton hydration. J. Am. Chem. Soc 2000, 122, 1398–1410. [Google Scholar] 36. Stoyanov, E.S.; Reed, C.A. Private communication, Department of Chemistry, University of California: CA, USA, 2011. 37. Shevkunov, S.V. Computer simulation of molecular complexes H3O+(H2O)n under conditions of thermal fluctuation. II. Work of formation and structure. Zh. Obshch. Khim 2004, 74, 1585–1592. [Google Scholar]Russ. J. Gen. Chem 2004, 74, 1471–1477. 38. Grotthuss, C.J.T. Sur la décomposition de l'eau et des corps qu'elle tient en dissolution à l'aide de l'électricité galvanique. Ann. Chim 1806, LVIII, 54–74. [Google Scholar] 39. Marcus, Y. Effect of ions on the structure of water: structure making and breaking. Chem. Rev 2009, 109, 1346–1370, and references therein. [Google Scholar] 40. Aqueous HCl is only usable up to about 38 wt%, when the water is saturated with gaseous HCl, and aqueous perchloric acid only up to 78 wt% or so, when the solution solidifies at 25 °C. Nitric acid has problems and is not normally used; it is considerably weaker, it is an oxidizing agent, as is strong perchloric acid, and it can give NO2+ and related species at higher concentrations. Aqueous HF is not often used; it is very weak at high dilution, and if concentrated it can dissolve glassware. Trifluoromethanesulfonic acid would probably be useful, but it is very expensive. Methanesulfonic acid is not used much. Trifluoroacetic and the other carboxylic acid variants are too weak to be useful. 41. Cox, R.A. Mechanistic studies in strong acids. I. General considerations. Catalysis by individual acid species in sulfuric acid. J. Am. Chem. Soc 1974, 96, 1059–1063. [Google Scholar] 42. Cox, R.A.; Fung, D.Y.K.; Csizmadia, I.G.; Buncel, E. An ab initio molecular orbital study of the geometry of the dicationic Wallach rearrangement intermediate. Can. J. Chem 2003, 81, 535–541. 
[Google Scholar] 43. Buncel, E.; Keum, S.-R.; Rajagopal, S.; Cox, R.A. Rearrangement mechanisms for azoxypyridines and axoxypyridine N-oxides in the 100% H2SO4 region—the Wallach rearrangement story comes full circle. Can. J. Chem 2009, 87, 1127–1134. [Google Scholar] 44. Cox, R.A.; Buncel, E. Rearrangements of Hydrazo, Azoxy and Azo CompoundsThe Chemistry of the Hydrazo, Azo and Azoxy Groups; Patai, S., Ed.; Wiley: London, UK, 1975; Volume 1, pp. 775–859. [Google Scholar] 45. Cox, R.A.; Buncel, E. Rearrangements of Hydrazo, Azoxy and Azo Compounds: Kinetic, Product and Isotope StudiesThe Chemistry of the Hydrazo, Azo and Azoxy Groups; Patai, S., Ed.; Wiley: London, UK, 1997; Volume 2, pp. 569–602. [Google Scholar] 46. Cox, R.A.; Buncel, E.; Bolduc, R. Department of Chemistry, Queen’s University: Kingston, Canada, Unpublished observations; 1971. 47. Cox, R.A.; Yates, K. Mechanistic studies in strong acids. VIII. Hydrolysis mechanisms for some thiobenzoic acids and esters in aqueous sulfuric acid, determined using the excess acidity method. Can. J. Chem 1982, 60, 3061–3070. [Google Scholar] 48. Cox, R.A. Excess acidities. Adv. Phys. Org. Chem 2000, 35, 1–66. [Google Scholar] 49. Bell, R.P.; Bascombe, K.N.; McCoubrey, J.C. Kinetics of the depolymerization of trioxane in aqueous acids, and the acidic properties of aqueous hydrogen fluoride. J. Chem. Soc 1956, 1286–1291. [Google Scholar] 50. Giauque, W.F.; Hornung, E.W.; Kunzler, J.E.; Rubin, T.R. The thermodynamic properties of aqueous sulfuric acid solutions and hydrates from 15 to 300 K. J. Am. Chem. Soc 1960, 82, 62–70. [Google Scholar] 51. Zeleznik, F.J. Thermodynamic properties of the aqueous sulfuric acid system to 350 K. J. Phys. Chem. Ref. Data 1991, 20, 1157–1200. [Google Scholar] 52. Randall, M.; Young, L.E. The calomel and silver chloride electrodes in acid and neutral solutions. The activity coefficient of aqueous hydrochloric acid and the single potential of the deci-molal calomel electrode. J. Am. Chem. Soc 1928, 50, 989–1004. [Google Scholar] 53. Åkerlöf, G.; Teare, J.W. Thermodynamics of concentrated aqueous solutions of hydrochloric acid. J. Am. Chem. Soc 1937, 59, 1855–1868. [Google Scholar] 54. Liu, Y.; Grén, U.; Theliander, H.; Rasmuson, A. Simultaneous correlation of activity coefficient and partial thermal properties for electrolyte solutions using a model with ion-specific parameters. Fluid Phase Equilibria 1993, 83, 243–251. [Google Scholar] 55. Pearce, J.N.; Nelson, A.F. The vapor pressures and activity coefficients of aqueous solutions of perchloric acid at 25°. J. Am. Chem. Soc 1933, 55, 3075–3081. [Google Scholar] 56. Robinson, R.A.; Baker, O.J. The vapor pressures of perchloric acid solutions at 25°. Trans. Proc. R. Soc. N. Z 1946, 76, 250–254. [Google Scholar] 57. Wai, H.; Yates, K. Determination of the activity of water in highly concentrated perchloric acid solutions. Can. J. Chem 1969, 47, 2326–2328. [Google Scholar] 58. Bidinosti, D.R.; Biermann, W.J. A redetermination of the relative enthalpies of aqueous perchloric acid solutions from 1 to 24 molal. Can. J. Chem 1956, 34, 1591–1595. [Google Scholar] 59. Bell, R.P.; Brown, A.H. Kinetics of the depolymerization of paraldehyde in aqueous solution. J. Chem. Soc 1954, 774–778. [Google Scholar] 60. Hamer, D.; Leslie, J. The Hammett acidity function in reactions catalyzed by carboxylic acids. The hydrolysis of methylal and the depolymerization of trioxane. J. Chem. Soc 1960, 4198–4202. [Google Scholar] 61. Jaques, D.; Leisten, J.A. Acid-catalysed ether fission. 
Part II. Diethyl ether in aqueous acids. J. Chem. Soc 1964, 2683–2689. [Google Scholar] 62. Ruiz Pernía, J.J.; Tuñón, I.; Williams, I.H. Computational simulation of the lifetime of methoxymethyl cation in water. A simple model for a glycosyl cation: When is an intermediate an intermediate? J. Phys. Chem. B 2010, 114, 5769–5774. [Google Scholar] 63. Cox, R.A. Benzamide hydrolysis in strong acids—the last word. Can. J. Chem 2008, 86, 290–297. [Google Scholar] 64. Cox, R.A. A comparison of the mechanism of hydrolysis of benzimidates, esters, and amides in sulfuric acid media. Can. J. Chem 2005, 83, 1391–1399. [Google Scholar] 65. Bender, M.L. Oxygen exchange as evidence for the existence of an intermediate in ester hydrolysis. J. Am. Chem. Soc 1951, 73, 1626–1629. [Google Scholar] 66. McClelland, R.A. Benzamide oxygen exchange concurrent with acid hydrolysis. J. Am. Chem. Soc 1975, 97, 5281. [Google Scholar] 67. Yates, K.; McClelland, R.A. Mechanisms of ester hydrolysis in aqueous sulfuric acids. J. Am. Chem. Soc 1967, 89, 2686–2692. [Google Scholar] 68. Yates, K. Kinetics of ester hydrolysis in concentrated acid. Acc. Chem. Res 1971, 4, 136–144. [Google Scholar] 69. Marlier, J.F. Heavy-atom isotope effects on the alkaline hydrolysis of methyl formate. The role of hydroxide ion in ester hydrolysis. J. Am. Chem. Soc 1993, 115, 5953–5956. [Google Scholar] 70. Bender, M.L.; Ginger, R.D.; Unik, J.P. Activation energies of the hydrolysis of esters and amides involving carbonyl oxygen exchange. J. Am. Chem. Soc 1958, 80, 1044–1048. [Google Scholar] 71. Shain, S.A.; Kirsch, J.F. Absence of carbonyl oxygen exchange concurrent with the alkaline hydrolysis of substituted methyl benzoates. J. Am. Chem. Soc 1968, 90, 5848–5854. [Google Scholar] 72. Cox, R.A. The mechanism of the hydrolysis of acylimidazoles in aqueous mineral acids. The excess acidity method for reactions that are not acid catalyzed. Can. J. Chem 1997, 75, 1093–1098. [Google Scholar] 73. Cox, R.A. The acid catalyzed decomposition of nitramide. Can. J. Chem 1996, 74, 1779–1783. [Google Scholar] 74. Eckert-Maksic, M.; Maskill, H.; Zrinski, I. Acidic and basic properties of nitramide, and the catalyzed decomposition of nitramide and related compounds; an ab initio theoretical investigation. J. Chem. Soc. Perkin Trans 2001, 2, 2147–2154. [Google Scholar] 75. Bentley, T.W.; Harris, H.C. Solvolyses of para-substituted benzoyl chlorides in trifluoroethanol and in highly aqueous media. J. Chem. Soc. Perkin Trans 1986, 2, 619–624. [Google Scholar] 76. Williams, A. Concerted mechanisms of acyl group transfer reactions in solution. Acc. Chem. Res 1989, 22, 387–392. [Google Scholar] 77. Bentley, T.W.; Ebdon, D.N.; Kim, E.-J.; Koo, I.S. Solvent polarity and organic reactivity in mixed solvents: Evidence using a reactive molecular probe to assess the role of preferential solvation in aqueous alcohols. J. Org. Chem 2005, 70, 1647–1653. [Google Scholar] 78. Ruff, F.; Farkas, Ö. Concerted SN2 mechanism for the hydrolysis of acid chlorides: comparisons of reactivities calculated by the density functional theory with experimental data. J. Phys. Org. Chem. 2011, 24, 480–491. [Google Scholar] 79. Ji, P.; Atherton, J.; Page, M.I. Liquid ammonia as a dipolar aprotic solvent for aliphatic nucleophilic substitution reactions. J. Org. Chem 2011, 76, 1425–1435. [Google Scholar] 80. Ji, P.; Atherton, J.H.; Page, M.I. The kinetics and mechanisms of aromatic nuclear substitution reactions in liquid ammonia. J. Org. Chem 2011, 76, 3286–3295. [Google Scholar] Figure 1. 
Excess acidity plot for trioxane hydrolysis in dilute H2SO4, HCl and HClO4.
Figure 2. Excess acidity plot for the hydrolysis of diethyl ether in relatively concentrated H2SO4, at several temperatures.
Scheme I. SN2 substitution of a secondary alkyl halide by hydroxide ion.
Scheme II. Structure of the only solvated proton species detected in water.
Scheme III. Wallach rearrangement of azoxybenzene in sulfuric acid.
Scheme IV. Hydrolysis of ethyl thiolbenzoates in sulfuric acid.
Scheme V. Hydrolysis of trioxane in dilute acid.
Scheme VI. Acid hydrolysis of diethyl ether.
Scheme VII. Acid hydrolysis of benzamides in <60 wt% H2SO4.
Scheme VIII. Acid hydrolysis of benzamides in >60 wt% H2SO4.
Scheme IX. Acid hydrolysis of esters in <85 wt% H2SO4.
Scheme X. Acid hydrolysis of esters capable of forming carbocations.
Scheme XI. Acid hydrolysis of other esters in >85 wt% H2SO4.
Scheme XII. Basic ester hydrolysis.
Scheme XIII. The mechanism of hydrolysis of acylimidazoles in water.
Scheme XIV. Nitramide hydrolysis in neutral water.
Scheme XV. A possible mechanism for acid chloride hydrolysis in water.
Particles can also be called wave packets. There is some probability function that determines which part of the wave packet the mass of the particle is in. The tail of this probability function can extend into a separate neighboring object, during which time the particle could decide to jump to that other place and therefore reshape its probability distribution. An example would be a scanning tunneling microscope. It has a tiny probe-tip of conducting wire mounted on a piezoelectric arm, which enables the tip to be scanned over the sample surface at an atomic distance. If a small voltage is applied across the tip and sample, some electrons will quantum tunnel from the tip across the gap to the sample, thus creating a measurable current. As the tip scans the atoms, the current changes, and a graphical representation of that change can be created.

Consider a small metal ball bearing put in a bowl. The ball bearing has an equilibrium position at the bottom of the bowl. Now if you were to push it a bit it would climb up the walls of the bowl, and fall back again, oscillate about the bottom and come to rest. If you were to push it hard enough, however, the ball would get out of the bowl. This is described by saying that the wall of the bowl acts as a potential barrier. The ball is in a potential well. For it to get out you must give it enough kinetic energy (push it hard enough) to get out. However, for very small objects things are not so simple. If the ball had been an electron and the bowl had been a quantum bowl then the ball could have got out without having enough energy to cross the potential barrier. So it is possible for the ball to simply materialize on the other side of the wall (even when it does not have enough energy to cross it) without the wall breaking or rupturing. This is a very naive explanation of course, but I hope it explains the principle behind Quantum Mechanical tunneling.

Consider a particle with energy E moving towards a potential barrier of height U0 and width a:

                   _________
                  |         |
      E -->       |   U0    |
    ______________|         |______________  x
                  |<-- a -->|

Using Schrödinger's (time-independent, one-dimensional) Equation, we can solve for the wave function of the particle (using h for h-bar):

   -(h²/2m) * d²ψ/dx² + U(x)ψ = Eψ

The potential U(x) is divided into three parts:

   U(x) = { 0  : x < 0,
            U0 : 0 < x < a,
            0  : x > a }

In order to solve for ψ, the wave function of the particle, we also divide it into three parts: ψ0 for x < 0, ψ1 for 0 < x < a, and ψ2 for x > a. Astute readers will notice at this point that the potential is the same for ψ0 and ψ2 -- these two wave functions ought, then, to look at least somewhat similar. As we shall see, they will have the same wavelength but different amplitudes. Since U = 0 for both ψ0 and ψ2, they each take the same form as the wave function for a free particle with energy E, or:

   ψ(x) = A*e^(i*k0*x) + B*e^(-i*k0*x)   (where k0 = √(2*m*E/h²))

The first portion of this equation corresponds to a wave moving rightwards while the second portion corresponds to a wave moving to the left. Or they would, had we folded in time-dependence (see note at the bottom). In order to make our lives easier, it is necessary to think a little bit about what is actually physically happening in this system. Our particle is approaching the potential barrier from the left, moving rightwards. When it hits the potential barrier, common sense says that at least some of the time, the particle will bounce off the barrier and begin moving leftwards.
From this, we know that ψ0 contains both the leftward (reflected particle) and rightward (incident particle) portions of the wave function. As the other nodes in this writeup explain, when the particle hits the potential barrier, in addition to bouncing off some of the time, some of the time it will pass through. So we know that ψ2 has at least the rightward-moving component. But there is nothing in the experimental setup that would cause the particle to begin moving towards the left once it has passed through the potential barrier, so we can deduce that the leftward-moving component of ψ2 has an amplitude of zero.

Now, to deal with the particle while it is inside the barrier. Common sense would suggest that the particle can never actually exist within the barrier (let alone cross over it). Physically, however, we know for sure that a particle can, in certain circumstances, pass through the barrier, so common sense would suggest that if it exists on both sides of the barrier, it must also exist within the barrier. But how on earth are we supposed to observe a particle while it is inside a potential barrier? The answer is that while we can't observe the particle inside the potential barrier, the mathematical properties of the wave function suggest that it does in fact exist while it is inside the barrier. Since the only thing that matters in physics is relative potential, we can pretend that the particle, while it is inside the potential barrier, isn't in a potential of U0, but rather simply has an energy of E − U0 = −(U0 − E) (since U0 > E). As before, then, the wave function takes the free-particle form, with a wave number (k) of √(2*m*E/h²), except that E is replaced by E − U0. In this case, however, the particle has negative energy ('tis a very good thing we can't physically observe the particle while it is inside the barrier, since negative energies can't exist), so it has an imaginary wave number, k1 = i√(2*m*(U0−E)/h²).

We now know enough to write out all three parts of the wave function:

   ψ(x) = { A*e^(i*k0*x) + B*e^(-i*k0*x)  : x < 0
            C*e^(-i*k1*x) + D*e^(i*k1*x)  : 0 < x < a
            E*e^(i*k0*x)                  : x > a }

(the amplitude E in the last line is just another constant, not to be confused with the energy E). The wave function and its first derivative have to be continuous over all x ∈ R. We can use these boundary conditions to get four relationships among the constants (ψ0(0) = ψ1(0), ψ0'(0) = ψ1'(0), ψ1(a) = ψ2(a), and ψ1'(a) = ψ2'(a)). Actually solving for the constants is impossible given just these conditions (five unknowns but only four equations), but we can find the probability that the particle reflects off the barrier, and the probability that it tunnels through the barrier. Recall that the probability function of a particle with wave function ψ is

   P(x) = |ψ(x)|²

Since we know that the first portion of ψ0 (with amplitude A) represents the incident particle, and the second portion (with amplitude B) represents the reflected particle, the ratio of the two wave functions |B|²/|A|² is the fraction of the time that the incident particle will reflect off the barrier. Similarly, the ratio |E|²/|A|² is the fraction of the time that the particle will tunnel through the barrier. After a bit of extraordinarily ugly algebra (don't try this at home), we find that:

   |E|²/|A|² = 1 / ( 1 + (1/4) * U0² * sinh²(κ*a) / (E*(U0 − E)) ),   where κ = |k1| = √(2*m*(U0−E)/h²)

It shouldn't be too hard to convince yourself that since the particle has to do something after hitting the barrier, the probability that it will reflect off is just 1 − |E|²/|A|².
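Because the closed form comes out of "extraordinarily ugly algebra", a quick numerical cross-check is reassuring. The sketch below (my own illustration, not taken from the sources cited in this writeup) evaluates the sinh expression and, independently, solves the four boundary-condition equations directly; the two agree. Units with h-bar = 1 and m = 1 are an assumption made purely for convenience, and the transmitted amplitude is called F in the code only to avoid a name clash with the energy E. Running it for several widths also shows how quickly the probability falls as a grows, which anticipates the note that follows.

```python
# Numerical check of the tunneling probability |E|^2/|A|^2 derived above
# (illustrative parameters; hbar = 1 and m = 1 assumed for convenience).
import numpy as np

def T_closed_form(E, U0, a, m=1.0, hbar=1.0):
    """Transmission probability from the sinh formula (valid for E < U0)."""
    kappa = np.sqrt(2 * m * (U0 - E)) / hbar              # kappa = |k1|
    return 1.0 / (1.0 + (U0**2 * np.sinh(kappa * a)**2) / (4 * E * (U0 - E)))

def T_from_matching(E, U0, a, m=1.0, hbar=1.0):
    """Same quantity, obtained by solving the four continuity equations with A = 1."""
    k0 = np.sqrt(2 * m * E) / hbar
    kappa = np.sqrt(2 * m * (U0 - E)) / hbar
    # Unknowns: B (reflected), C, D (inside barrier), F (transmitted).
    M = np.array([
        [1, -1, -1, 0],                                                   # psi continuous at x = 0
        [-1j * k0, -kappa, kappa, 0],                                     # psi' continuous at x = 0
        [0, np.exp(kappa * a), np.exp(-kappa * a), -np.exp(1j * k0 * a)], # psi continuous at x = a
        [0, kappa * np.exp(kappa * a), -kappa * np.exp(-kappa * a),
         -1j * k0 * np.exp(1j * k0 * a)],                                 # psi' continuous at x = a
    ], dtype=complex)
    rhs = np.array([-1, -1j * k0, 0, 0], dtype=complex)
    B, C, D, F = np.linalg.solve(M, rhs)
    return abs(F)**2                                       # |F|^2 / |A|^2 with A = 1

E, U0 = 0.5, 1.0
for a in [1.0, 2.0, 4.0, 8.0]:
    print(f"a = {a:4.1f}   T(formula) = {T_closed_form(E, U0, a):.3e}   "
          f"T(matching) = {T_from_matching(E, U0, a):.3e}")
```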
This probability decreases exponentially with a (since sinh(x) = (e^x − e^(-x))/2), so the largest factor in determining tunneling probability is the width of the potential barrier. tdent notes that since the probability also depends exponentially on k1, there's a large dependence on the difference between the barrier height and the energy of the particle, but since the dependence on (U0 − E) is under a square root, this still has less of an effect than a.

Note: For time-independent potentials (∂U/∂t = 0), the time-dependent solution to the Schrödinger equation is just ψ(x)*e^(-iωt), where ψ is the time-independent wave function and ω = E/h. So, the time-dependent form of the solution ends up looking like:

   Ψ(x,t) = A*e^(i*(k0*x − ω*t)) + B*e^(-i*(k0*x + ω*t))

As t increases, then, for the first part of the function to remain constant x must increase and for the second part to remain constant x must decrease. So the first portion of the equation represents a wave travelling towards increasing x (the right), and the second portion represents a wave travelling towards decreasing x (the left).

From personal notes, Modern Physics by Kenneth Krane, and (for the solution to |E|²/|A|²).

Potential Barriers and Quantum Tunneling - A Layman's Introduction

Note: This is a layman's introduction to quantum tunneling only. For a general introduction to quantum mechanics, please see Mauler's Layman's Guide to Quantum Mechanics.

Quantum tunneling is a concept from quantum mechanics, a branch of modern physics. The concept is explained using the following anecdote. Suppose there is a hill, a real-world hill which you might walk up, if you were so inclined (no pun intended). Also suppose that three identical balls are rolling at different speeds towards the hill*. Due to this speed difference, each ball has a different energy of motion from the others. As the balls begin to roll up the hill, they also begin to slow down. The slowest ball does not have enough energy of motion to make it up the hill. It slows and slows, and eventually stops somewhere below the top for an instant in time, before rolling back down the hill. The second ball has enough energy to make it to the top of the hill, but no more. It comes to a stop on top of the hill. The last ball has more energy of motion than it actually needs to make it to the top of the hill. So when it makes it to the top, it still has some motion energy, and it rolls over the top, and down the other side. This is all perfectly normal behaviour for balls on hills - nothing new there. However, scientists (more specifically quantum physicists) discovered early last century that when the balls are very very small, something very strange happens. In the world of the very very small, balls usually behave in the same well-known manner described in the anecdote above. However, sometimes they don't. Sometimes balls which DO have enough energy to roll right up that hill and keep going down the other side, don't make it up the hill. That's weird. Imagine taking a bowling ball, and hurling it with all your might up a gentle hill. You know it's got enough energy to go over the top, but you blink, and when you open your eyes again, the bowling ball is rolling back down the hill towards you. What's even stranger though, is that in this world of the very very small (and it is the REAL world, inhabited by you and me), sometimes balls which DON'T have enough energy to get up the hill, still do so (and continue down the other side).
So it's like your bowling ball comes back out of the return chute, and you take it and roll it ever so gently up that same hill. You know it doesn't have enough energy to make it to the top, but then you blink, and when you open your eyes, there it is, rolling down the other side. This puzzling behaviour has actually been observed to happen, many many times, by scientists. The phenomenon has been given the name "tunneling", for it is as if the ball (or 'particle' as we call it) digs a tunnel through that hill, to get to the other side. In such quantum experiments, scientists fire very small bullets at very small walls, and sometimes those bullets which do not have enough energy to break through the wall are observed a short time later, on the other side (where, it would seem, they have no right to be!). Regarding this strange behaviour, I stress that THIS IS A REAL PHENOMENON. It actually applies to everything in the universe, but the chance of it happening to something as large as an elephant, or even a baseball, or a marble, is very small indeed. So small in fact, that it will probably never be seen to happen by a human on this planet. The smaller a thing is, the greater the chance of quantum tunneling occurring to it. Things that you can see with the naked eye are far too big. The kinds of particles to which tunneling commonly occurs can only be seen with special microscopes**. As a final point, please note that it is probably a good thing that quantum tunneling is almost never observed to happen to everyday objects. It would not be too much fun if that butcher's knife you just placed safely on the table suddenly tunneled through and found its way into the top of your foot. Of course it might tunnel through your foot as well, but.....well......if you ever see that happen, please let me know.

* In quantum physics, the hill is known as a 'potential barrier'

** The kind of microscopes necessary to see the particles to which tunneling routinely occurs are known as Scanning Tunneling Microscopes (S.T.M.). In an ironic twist, the technology which drives the S.T.M. itself relies on the principle of quantum tunneling to operate.
Friday, April 28, 2006 Leaving Cert messing up our schools the failing standard of Honours MathsEducation in Ireland is becoming Machiavellian activity. In a piece in today’s Irish Independent about the falling standards in Honours Maths this line jumped out at me. There is also a decline in the capacity of candidates to engage with problems that are not of a well-rehearsed type. Now I will admit I am a bit of a Maths nerd and to me this the time independent Schrödinger equation is quiet beautiful. However this rant is not merely about how maths is so vitally important. But about how the education system and in particular grind schools are messing up the system. The leaving cert is not a test of skill it is a test of memory. Also the points system is not a measure of intelligence but popularity. This is a concept that not many people get. Indeed Physics is one of the lowest points courses in the CAO system. But the reason is not that it is easy but because it is perceived to be very hard. Due to the fact that the Leaving Cert is a system of recurring questions and patterns people predict the papers, people prepare for the system. This has been championed by the Grind Schools preparing model answers that the students regurgitate on the day. And who is to blame the kids doing this, they are trying to get the best possible outcome and by learning and not understanding they will achieve the best results. But this should not be the job of the schools the aim of the schools is to educate children not teach them. You can teach the method of how do a sum but you have to educate them why the method is. Due to the high pressure on students to produce high points they don't care to understand. The schools are relenting to the pressure and teaching the kids not educating them. This is leading to the above quote. The Exam papers nowadays ask questions that do not challenge anyone. They learn off the question and plant it down on the paper without thinking. While this might seem harmless to some it is disastrous to the country. If people coming out of the schools do not understand what they have learnt them this countries prospects are pretty dim. My solution change the format of the papers that removes the predictability to the paper. The papers are far too formulaic and leaves its self open to predication and prepared answers. Changed the format so kids have to understand the material not learn it off. It will be better off for them in the long run. Kevin Breathnach said... Changing the format, away from that of the forumliac style in existance, would be brilliant; not only because that students would understand rather than regurgitate, but because it would, I think, relieve students unnecessary pressure and many hours of, often pointless, study. Which is exactly what I'd like most. However, I suppose that by forcing students to learn a lot of things off, CAO and the respective college get a fair idea as to how serious a prospective student will take their studies come September. Kevin Breathnach said... However, I don't think there is anything inherently wrong about Grinds schools. At least, not much in this sense. I attended a few week long grinds over the Easter periods, and while I received a few Past Paper solutions, the best thing I took from it was a understanding of Photosynthesis and Biochemistry - which, with my normal teacher, I couldn't get my head around. Of course, that is not to say that others gained an understanding, rather than a few solutions! winds said... 
And yet, despite that, standards in higher level maths are falling? Even with the syllabus having been "streamlined" or "simplified" on at least one occasion in the past 10 or 15 years? The overwhelming impression I have been getting regards education in this country is not that it is not challenging (that's an excuse) or that it leads to rote mentality (that's an excuse as well) but that it's being perceived as a consumer item. That's why you have sixteen year olds believing that they should drive what is being taught to them, and not them learning what is given to them. It's also seen as a currency - with which you buy your way into university. It's not the education system itself which is doing this though - it is those going through the system. Personally - I'm going to come across all old here - I think the Leaving Cert has dumbed down a little bit. I'm not in tune with the idea of projects, because there are serious issues in the UK regarding who is actually doing the project and how much input responsible adults are having into continuous assessment projects there. Possibly one way to get people to refocus on what an education system is all about is to limit the number of third level places - there seems to be a thousand different colleges in the country now. The issues with education in this country are not, to my mind, limited to the Leaving Certificate - but to the mentality that you don't do what interests you, you do what will earn you loads of money. This is where the problem lies with physics, medicine and law. This attitude is killing young minds, I think. But then, possibly, so too are games consoles. I think I'll go back to pre-aging and wittering on about the youth of today. Kevin Breathnach said... From what I can gather, it certainly has been dumbed down. The one case I know of is English. Ten or twenty years ago, apparently, poetry or drama questions would zone in very specifically on one aspect of a poet. Today, most questions on studied material will be vague - like, "Give a speech to 5th year students on the poetry of Thomas Hardy" or some such. Clearly, it encourages students to learn off one generic essay, which - with little effort - they can twist to suit the question. My English teacher spins this adaptation, saying that it rewards not only the bright students, but the hard working students. Take what you will from that. Concerning languages, I'm not sure what level of fluency was needed in times gone by. Nowadays though, I could learn off three different letters and five 90 word essays and still be fairly certain of a high B in the written examine. Equally, most people treat the oral exam as little more than a recital. I'll sign-off here, for I must ponder the merits of limiting third-level places - which currently stands at 40,000 per year. Simon said... Limiting the places will do little for the education system neither will it do anything for the country. I know people who came into collage with high points and barely passed the course. I also know people who came in with little points and came out with top marks. Limiting places is going to leave the high points low degree person in and low points high degree person out. That is simply not going to work. I think the points system should be weighted. i.e. if you get an A in physics and an A in English and apply for physics you should get 200 points for Physics and 50 for English. The people with high points often get high points by picking the easier subjects like ag science and geography. 
That needs to be changed. By the way hope the study is going well Kevin winds said... Okay, if there are just 60000 people taking the Leaving Certificate and there are 40,000 third level places...I'd like to know what the 40,000 consists of. Does it include apprenticeships, for example? If it doesn't, I have got to say that that seems to be excessive. I agree about the weighting - I don't think it's a bad idea per se. I'd also like to see interviews required for medicine, nursing and law. I think it's already required for teaching (certainly at postgrad level I think anyway). But that's just me. Simon said... I would not agree with interviews. The greatest thing of our points system is its annoyminity. Interviews open the situation up to much "pull". Ireland is a small place it would never be fair. I have no problem with the number going in. As long as the exam standard is kept the same year on year it makes no difference. If there is 60,000 people on one year that are deserving of a year they should get there degrees Anonymous said... The weighting is a great idea alright. Splitting the LC into 2 parts seems sensible as well and that's going to happen very soon. Also, I think there are actually going to be interviews for medicine (and lower points requirements) brought in soon. winds said... I'm just not sure at the moment that the country's interests are served by people with high points doing medicine who are not really interested in it, likewise law. At least if you interview, you stand some chance of identifying the ones who are actually interested in the subject, rather than just the potential financial returns. The problem with the points system is that it's not so much that it's anonymous any more. It is, however, weighted in such a way as the amount of money you fling at your secondary education - for example via grinds schools - may place certain elements of society at an advantage. That isn't, strictly speaking, fair either. Simon said... In fairness winds there is little you can do about that. Would you suggest giving extra points for being poor.? Changing the papers away from a formulaic pattern to a truly chalanging paper might help Frank said... How you feel about mathematical equations is similar to how I feel about nicely drafted statutes - beautiful constructions. anthony c said... I think that the article, while eloquent and insightful, seems to play down student motivation to learn. For example it's safe to say that the problems listed exist, but it's equally safe to say that there is a fair contingent of students out there that genuinely want to learn! What are their perspectives? I don't agree with weighting certain subjects like the sciences and mathematics. I see this as a disincentive to learning by reinforcing the need to take subjects for the points gained. An argument well documented in this forum. Overall, frmo my own experience with transition year students is that the key issue withg learning anythnig at school is the ease of getting a job with that subject. Law, Forensics,Medicine and IT are very popular right now because they're perceived as 'safe' jobs, and the probably are. Perhaps if there was more transparency in the subject about possible employment, the student may adopt the subject more readily. It's just a theory.
Korteweg-de Vries equation From Encyclopedia of Mathematics Jump to: navigation, search The equation It was proposed by D. Korteweg and G. de Vries [1] to describe wave propagation on the surface of shallow water. It can be interpreted using the inverse-scattering method, which is based on presenting the KdV-equation in the form where is the one-dimensional Schrödinger operator and The Cauchy problem for the KdV-equation is uniquely solvable in the class of rapidly decreasing functions with the initial condition: (where is the Schwartz space). Let be the scattering data for the Schrödinger operator with potential , the discrete spectrum, the continuous spectrum and normalization coefficients of the eigen functions. Then are the scattering data for the Schrödinger equation with potential , and the solution is determined from the scattering data using a certain integral equation. If , the latter equation can be solved explicitly; the potentials thus obtained are known as reflection-free, and the corresponding solutions of the KdV-equation are known as -solitons (see Soliton). The KdV-equation may be written in Hamiltonian form here the phase space is and the Poisson brackets are defined by the bilinear form of the operator . The mapping is a canonical transformation to variables of action-angle type. In terms of these new variables the Hamilton equations can be integrated explicitly. The KdV-equation possesses infinitely many integrals of motion: All these integrals of motion are integrals in involution, and the Hamiltonian systems that they generate (known as higher Korteweg–de Vries equations) are completely integrable. Using the integral equations of the inverse problem, one can also find the solution of the Cauchy problem for step-type initial data: As in a neighbourhood of the front, the solution decomposes into non-interacting solitons — this is the process of step disintegration. In the case of the Cauchy problem with periodic initial data , , the analogue of reflection-free potentials is provided by potentials for which the Schrödinger operator has finitely many forbidden zones — finite-gap potentials. Periodic and almost-periodic finite-gap potentials are stationary solutions of the higher KdV-equations; the latter constitute completely-integrable finite-dimensional Hamiltonian systems. Any periodic potential can be approximated by a finite-gap potential. Let , if (), be the edges of the bands, and the hyperelliptic curve over the field . Then real-valued almost-periodic potentials with the above band edges, as well as solutions of the Cauchy problem, are expressible in terms of -functions on the Jacobi variety of the curve . Subject to certain conditions on the edges, the resulting solutions will be periodic. If one drops the conditions , one obtains complex-valued solutions of the KdV-equation (possibly with poles), which are also called finite-gap potentials. [1] D. Korteweg, G. de Vries, "On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves" Phil Mag. , 39 (1895) pp. 422–443 [2] C.S. Gardner, J.M. Greene, M.D. [M.D. Krushkal'] Kruskal, R.M. Miura, "Method for solving the Korteweg–de Vries equation" Phys. Rev. Letters , 19 (1967) pp. 1095–1097 [3] V.E. Zakharov, L.D. Faddeev, "Korteweg–de Vries equation, a completely integrable Hamiltonian system" Funct. Anal. Appl. , 5 (1971) pp. 280–287 Funkt. Anal. Prilozhen. , 5 : 4 (1971) pp. 18–27 [4] V.A. 
Marchenko, "Spectral theory of Sturm–Liouville operators" , Kiev (1972) [5] B.A. Dubrovin, V.B. Matveev, S.P. Novikov, "Nonlinear equations of Korteweg–de Vries type, finite-zone linear operators and Abelian varieties" Russian Math. Surveys , 31 : 1 (1976) pp. 59–146 Uspekhi Mat. Nauk , 31 : 1 (1976) pp. 55–136 [6] I.A. Kunin, "Theory of elastic media with a microstructure" , Moscow (1975) (In Russian) The Poisson bracket is explicitly given as this being the "bilinear form of the operator / x" referred to above. In more detail, the scattering data of the Schrödinger operator consist of: i) a finite number of discrete eigen values , ; ii) normalization coefficients for each element of the discrete spectrum, defined by the requirement that the eigen function belonging to satisfies as ; and iii) a normalization coefficient for each element of the continuous spectrum , defined by the requirement that the corresponding eigen functions behave like as and as . The coefficient is called a reflection coefficient (and is the corresponding transmission coefficient). (This terminology, as well as the phrase "scattering data" , comes from the "physical picture" where one considers a plane wave coming from as being scattered by the potential ; part of the wave is reflected, part transmitted; and, indeed, .) Now if evolves according to the KdV-equation, the spectrum of remains constant, i.e. the KdV-equation is an isospectral equation and defines an isospectral flow. This follows readily from the Lax representation of the KdV-equation. The other parts of the spectral data, as evolves according to the KdV-equation, evolve as indicated above. The (non-linear) mapping which assigns to a potential its spectral data is known as the spectral transform. Recovering the potential from its scattering data, by the inverse spectral transform or inverse scattering transform, is done by means of the Gel'fand–Levitan–Marchenko equation (or Gel'fand–Levitan equation): Then is found by . This whole procedure of solving the KdV-equation is known as the inverse spectral-transform method (IST-method, inverse-scattering method), and it can be seen as a non-linear analogue of the Fourier-transform method for solving linear partial differential equations with constant coefficients. In fact, the Fourier transform can be seen as a limit of the spectral transform. The modified Korteweg–de Vries equation or mKdV-equation is It can also be integrated by means of the IST-method, this time using a two-dimensional "L operator" . The two equations are connected by the Miura transformation . The mKdV-equation is also a member of a hierarchy of completely-integrable equations and there are corresponding Miura transformations between the higher mKdV- and higher KdV-equations. More generally there is a hierarchy of mKdV-like equations associated to each Kac–Moody Lie algebra (cf. Kac–Moody algebra) , and then for each simple root of there is an associated hierarchy of KdV-equations together with a corresponding Miura transformation, [a5]. These equations are sometimes called Drinfel'd–Sokolov equations. The usual mKdV- and KdV-hierarchies correspond to the Kac–Moody Lie algebra . To the simple Lie algebras one also associates another family of completely-integrable systems: the two-dimensional Toda lattices, also sometimes called Leznov–Saveliev systems. The simplest (associated to ) is the sine-Gordon equation or . There is a "duality" between the mKdV-like equations and the corresponding Toda systems, as follows. 
If as a function of satisfies the Toda lattice equation for and as a function of evolves according to the corresponding mKdV-like equation, then satisfies the Toda lattice equation for all , and vice versa. There is a quantum analogue of the inverse-scattering method, called quantum inverse scattering [a6], [a7]. The (quantum) Yang–Baxter equation plays an important role in this method. [a1] M.J. Ablowitz, H. Segur, "Solitons and the inverse scattering transform" , SIAM (1981) [a2] G.L. Lamb, "Elements of soliton theory" , Wiley (1980) [a3] A.C. Newell, "Solitons in mathematics and physics" , SIAM (1985) [a4] F. Caligero, A. Degasperis, "Spectral transform and solitons" , 1 , North-Holland (1982) [a5] V.G. Drinfel'd, V.V. Sokolov, "Lie algebras and equations of Korteweg–de Vries type" J. Soviet Math. , 30 (1985) pp. 1975–2005 Itogi Nauk. i Tekhn. Sovrem. Probl. Mat. , 24 (1984) pp. 81–180 [a6] L.A. [L.A. Takhtayan] Takhtajan, "Hamiltonian methods in the theory of solitons" , Springer (1987) (Translated from Russian) [a7] L.A. [L.A. Takhtayan] Takhtajan, "Integrable models in classical and quantum field theory" , Proc. Internat. Congress Mathematicians (Warszawa, 1983) , PWN & Elsevier (1984) pp. 1331–1346 [a8] M. Toda, "Nonlinear waves and solitons" , Kluwer (1989) [a9] V.A. Marchenko, "Nonlinear equations and operator algebras" , Reidel (1988) [a10] S. Novikov, S.V. Manakov, L.P. Pitaevskii, V.E. Zakharov, "Theory of solitons" , Consultants Bureau (1984) (Translated from Russian) How to Cite This Entry: Korteweg–de Vries equation. Encyclopedia of Mathematics. URL:
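Since the displayed formulas of the entry above were lost in extraction, the following small sketch (my own illustration, not part of the original entry) verifies two standard facts symbolically: that the one-soliton profile solves the KdV equation, and that the Miura map u = v² + v_x sends any solution of the mKdV equation to a solution of KdV. The normalization u_t − 6 u u_x + u_xxx = 0, the soliton speed 4κ², and the mKdV sign convention are assumptions made here; other common normalizations differ only by rescalings of u, x and t.

```python
# Symbolic checks of two facts about the KdV equation, in the assumed
# normalization u_t - 6*u*u_x + u_xxx = 0 (conventions vary between sources).
import sympy as sp

x, t, kappa = sp.symbols('x t kappa', real=True)

# 1) The one-soliton profile, travelling with speed 4*kappa**2, solves KdV.
u = -2 * kappa**2 / sp.cosh(kappa * (x - 4 * kappa**2 * t))**2
kdv = sp.diff(u, t) - 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(kdv.rewrite(sp.exp)))   # -> 0

# 2) The Miura map u = v**2 + v_x factors the KdV expression through the
#    mKdV expression, so mKdV solutions are carried to KdV solutions.
v = sp.Function('v')(x, t)
u_miura = v**2 + sp.diff(v, x)
mkdv = sp.diff(v, t) - 6 * v**2 * sp.diff(v, x) + sp.diff(v, x, 3)
kdv_of_u = sp.diff(u_miura, t) - 6 * u_miura * sp.diff(u_miura, x) + sp.diff(u_miura, x, 3)
print(sp.expand(kdv_of_u - (2 * v * mkdv + sp.diff(mkdv, x))))   # -> 0
```

The second printout is the operator identity behind the Miura transformation: the KdV expression in u equals (2v + d/dx) applied to the mKdV expression in v, so it vanishes whenever v solves mKdV.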
While investigating the EPR Paradox, it seems like only two options are given, when there could be a third that is not mentioned - Heisenberg's Uncertainty Principle being given up. The setup is this (in the wikipedia article): given two entangled particles, separated by a large distance, if one is measured then some additional information is known about the other; the example is that Alice measures along the z-axis and Bob measures along the x-axis, but to preserve the uncertainty principle it's thought that either information is transmitted instantaneously (faster than light, violating the special theory of relativity) or information is pre-determined in hidden variables, which looks not to be the case. What I'm wondering is: why is the HUP not questioned? Why don't we investigate whether a situation like this does indeed violate it, instead of no mention of its possibility? Has the HUP been verified experimentally to the point where it is foolish to question it (like gravity, perhaps)? It seems that all the answers are not addressing my question, but addressing waveforms/commutation relations/fourier transforms. I am not arguing against commutation relations or fourier transforms. Is not QM the theory that says particles can be represented as these fourier transforms/commutation relations? What I'm asking is this: is it conceivable that QM is wrong about this in certain instances, for example a zero state energy, or at absolute zero, or in some area of the universe or under certain conditions we haven't explored? As in: Is the claim then that if momentum and position of a particle were ever to be known somehow under any circumstance, Quantum Mechanics would have to be completely tossed out? Or could we say QM doesn't represent particles at {absolute zero or some other bizarre condition} the same way we say Newtonian Physics is pretty close but doesn't represent objects moving at a decent fraction of the speed of light? EPR Paradox: "It considered two entangled particles, referred to as A and B, and pointed out that measuring a quantity of a particle A will cause the conjugated quantity of particle B to become undetermined, even if there was no contact, no classical disturbance." "According to EPR there were two possible explanations. Either there was some interaction between the particles, even though they were separated, or the information about the outcome of all possible measurements was already present in both particles." These are from the wikipedia article on the EPR Paradox. This seems to me to be a false dichotomy; the third option being: we could measure the momentum of one entangled particle, the position of the other simultaneously, and just know both momentum and position and beat the HUP. However, this is just 'not an option,' apparently. I'm not disputing that, as a mathematical construct, two quantities that are fourier transforms of each other are non-commuting / cannot both be known simultaneously. Nor am I arguing that the HUP is indeed false. I'm looking for justification not just that subatomic particles can be modeled as waveforms under certain conditions (Earth-like ones, notably), but that a waveform is the only thing that can possibly represent them, and any other representation is wrong. You can verify the positive all day long, that still doesn't disprove the negative. It is POSSIBLE that waveforms do not correctly model particles in all cases at all times.
This wouldn't automatically mean all of QM is false, either - just that QM isn't the best model under certain conditions. Why is this not discussed? share|improve this question I +1d to get rid of the downvote you had. It's the last line that did it for me. –  Olly Price Aug 13 '12 at 22:37 Anyone who is downvoting care to elaborate on where my question is unclear, unuseful or shows no effort? I'd be glad to improve it if I can. –  Ehryk Aug 13 '12 at 23:40 Try Bohmian mechanics. –  MBN Sep 6 '12 at 11:02 @Ehryk: Not my downvote, but this question is a waste of time. You misunderstood what EPR is all about. The EPR effects have nothing to do with HUP, and you can show that they are inconsistent with local variables determining experimental outcomes without doing quantum mechanics, just from the experimental outcomes themselves. This means the weirdness is not due to the formalism, but really there in nature. –  Ron Maimon Sep 11 '12 at 6:15 So in a universe without the commutative relation/HUP, where the commutative relation was sometimes zero / position and momentum could both be known, where's the paradox with EPR? You could just determine the values of both entangled particles, no paradox necessary. –  Ehryk Sep 11 '12 at 7:24 12 Answers 12 up vote 5 down vote accepted In precise terms, the Heisenberg uncertainty relation states that the product of the expected uncertainties in position and in momentum of the same object is bounded away from zero. Your entanglement example at the end of your edit does not fit this, as you measure only once, hence have no means to evaluate expectations. You may claim to know something but you have no way to check it. In other entanglement experiments, you can compare statistics on both sides, and see that they conform to the predictions of QM. In your case, there is nothing to compare, so the alleged knowledge is void. The reason why the Heisenberg uncertainty relation is undoubted is that it is a simple algebraic consequence of the formalism of quantum mechanics and the fundamental relation $[x,p]=i\hbar$ that stood at the beginning of an immensely successful development. Its invalidity would therefore imply the invalidity of most of current physics. Bell inequalities are also a simple algebraic consequence of the formalism of quantum mechanics but already in a more complex set-up. They were tested experimentally mainly because they shed light on the problem of hidden variables, not because they are believed to be violated. The Heisenberg uncertainty relation is mainly checked for consistency using Gedanken experiments, which show that it is very difficult to come up with a feasible way of defeating it. In the past, there have been numerous Gedanken experiments along various lines, including intuitive and less intuitive settings, and none could even come close to establishing a potential violation of the HUP. Edit: One reaches experimental limitations long before the HUP requires it. Nobody has found a Gedankenexperiment for how to do defeat the HUP, even in principle. We don't know of any mechanism to stop an electron, thereby bringing it to rest. It is not enough to pretend such a mechanism exists; one must show a way how to achieve it in principle. For example, electron traps only confine an electron to a small region a few atoms wide, where it will roam with a large and unpredictable momentum, due to the confinement. Thus until QM is proven false, the HUP is considered true. 
Any invalidation of the foundations of QM (and this includes the HUP) would shake the world of physicists, and nobody expects it to happen. share|improve this answer Why wouldn't it just invalidate it under certain conditions? For example: by some means, we completely arrest an electron. Position = center of device, momentum = 0. Both known simultaneously. Couldn't we just say QM is 'not a valid model for arrested particles but works for moving ones' without invalidating most of current physics? –  Ehryk Sep 7 '12 at 21:43 Any invalidation of the foundations of QM would shake the world of physicists. - But the center of a device is usually poorly definable, and an electron cannot be arrested completely, neither in position nor in momentum. One reaches experimental limitations long before the HUP requires it. - In the past, there have been numerous Gedanken experiments along similar and many other lines, and none could even come close to establishing a violation of the HUP. –  Arnold Neumaier Sep 9 '12 at 13:12 In practice, I get this. I'm not claiming that we can make such a machine now, or soon. But the HUP seems to say such a machine cannot exist, now or ever, with more advanced races or technologies or anything. Are you saying an electron cannot be arrested completely - anywhere, ever, inside a black hole, at absolute zero - under no conditions ever? –  Ehryk Sep 9 '12 at 19:40 until QM is proven false, the HUP is true. –  Arnold Neumaier Sep 10 '12 at 11:53 @Ehryk: here's why you're seeming nonsensical to everyone here: at small length scales, an electron looks very, very much like a wave. You get interference patterns and everything. Now, you want to 'stop' it. Well, a "slower" electron has a longer wavelength than a "faster" one, but this longer wavelength is going to spread it out farther. By the time you get to your limit of a 'stopped' electron, the electron will be spread out over all of space. –  Jerry Schirmer Sep 14 '12 at 23:13 In quantum mechanics, two observables that cannot be simultaneously determined are said to be non-commuting. This means that if you write down the commutation relation for them, it turns out to be non-zero. A commutation relation for any two operators $A$ and $B$ is just the following $$[A, B] = AB - BA$$ If they commute, it's equal to zero. For position and momentum, it is easy to calculate the commutation relation for the position and momentum operators. It turns out to be $$[\hat x ,\hat p] = \hat x \hat p - \hat p \hat x = i \hbar$$ As mentioned, it will always be some non-zero number for non-commuting observables. So, what does that mean physically? It means that no state can exist that has both a perfectly defined momentum and a perfectly defined position (since $ |\psi \rangle$ would be both a right eigenstate of momentum and of position, so the commutator would become zero. And we see that it isn't.). So, if the uncertainty principle was false, so would the commutation relations. And therefore the rest of quantum mechanics. Considering the mountains of evidence for quantum mechanics, this isn't a possibility. I think I should clarify the difference between the HUP and the classical observer effect. In classical physics, you also can't determine the position and momentum of a particle. Firstly, knowing the position to perfect accuracy would require you to use a light of infinite frequency (I said wavelength in my comment, that's a mistake), which is impossible. See Heisenberg's microscope. 
Also, determining the position of a particle to better accuracy requires you use higher frequencies, which means higher energy photons. These will disturb the velocity of the particle. So, knowing the position better means knowing the momentum less. The uncertainty principle is different than this. Not only does it say you can't determine both, but that the particle literally doesn't have a well defined momentum to be measured if you know the position to a high accuracy. This is a part of the more general fact in quantum mechanics that it is meaningless to speak of the physical properties of a particle before you take measurements on them. So, the EPR paradox is as follows - if the particles don't have well-defined properties (such as spin in the case of EPR), then observing them will 'collapse' the wavefunction to a more precise value. Since the two particles are entangled, this would seem to transfer information FTL, violating special relativity. However, it certainly doesn't. Even if you now know the state of the other particle, you need to use slower than light transfer of information to do anything with it. Also, Bell's theorem, and Aspect's tests based off of it, show that quantum mechanics is correct, not local realism. share|improve this answer So how do we know that all particles have a non-commuting relationship, always and forever, under all conditions, even the ones we aren't able to measure or with technology or knowledge we don't yet possess? –  Ehryk Sep 6 '12 at 10:47 What if you define position and momentum as the two real numbers that you measure at time t from experiment? (That's what most people consider "position" and "momentum" to be anyway.) What is this "new" definition of position and momentum? –  Nick Sep 8 '12 at 23:11 Let me add this: I've taken QM and done those calculations for the commutation plenty of times to figure out what sets of compatible observables there are. But I could give someone a random formula for some random integral and divide by 6.3 and say "look, this always comes out to a real value -- thus position and momentum can't be simultaneously well-defined!" and that makes no sense whatsoever. Yeah, I know the whole spiel about eigenvalues and eigenstates and identical preparations of quantum systems, but what kind of physical experiment demonstrates this limit? –  Nick Sep 8 '12 at 23:15 Noncommutativity of operators nicely explains emission spectra, which I believe were the subject of Heisenberg's (?) initial ponderings. There's a nice bit of this history explained at page 40 of this book by Alain Connes alainconnes.org/docs/book94bigpdf.pdf (there is probably a more focused reference for this history, but I don't know of one) –  Ryan Thorngren Sep 9 '12 at 0:51 The Heisenberg's relation is not tied to quantum mechanics. It is a relation between the width of a function and the width of its fourier transform. The only way to get rid of it is to say that x and p are not a pair of fourier transform: ie to get rid of QM. share|improve this answer So if by any means at all (entanglement, future machines, or divine powers) one could measure both position and momentum simultaneously, then all of quantum mechanics is false? There could be no QM in a universe in which this is possible? –  Ehryk Aug 14 '12 at 9:35 You necessarily need to change the relationship between position and momemtum. It is mathematically impossible if they just form a fourier transform pair. 
But considering the huge amount of datas validating QM, one can try to extend QM by adding a small term in the pair or by using a fractional commutator (with fractional derivative) for instance. –  Shaktyai Aug 14 '12 at 9:43 How about saying x and p are a pair of fourier transforms USUALLY, but not in certain circumstances such as {inside a black hole, at absolute zero, under certain entanglement experiments, in a zero rest energy universe, etc.} How do we know that because QM is right USUALLY or from what we can observe, that it is right ALWAYS and FOREVER? –  Ehryk Sep 6 '12 at 10:43 That is to say: QM as we know it is not valid in these cases. There is no possible objections to such a statement, but for it to get accepted by the physicists, you need to prove that you can explain things in a simpler way and that you can predict something measurable. –  Shaktyai Sep 6 '12 at 10:46 Because there is no proof whatsoever that QM fails. The day it fails we shall reconsider the question. However, there are many theorists working on alternative theories, so you have your chances. –  Shaktyai Sep 6 '12 at 15:09 The wave formulation has in its seed the uncertainty relation. Let me be precise what is meant by the wave formulation: the amplitude over space points will give information about localization on space, while amplitude over momenta will give information about localization in momentum space. But for a function, the amplitude over momenta is nothing else but the Fourier transform of the space amplitude. The following is jut a mathematical fact, not up to physical discussion: the standard deviation, or the spread of the space amplitude, multiplied by the spread of the momenta amplitude (given by the Fourier transform of the former) will be bounded from below by one. So, it should be pretty clear that, as long as we stick to a wave formulation for matter fields, we are bound mathematically by the uncertainty relation. No work around over that. Why we stick to a wave formulation? because it works pretty nicely. The only way someone is going to seriously doubt that is the right description is to either: 1) find an alternate description that at least explains everything that the wave formulation describes, and hopefully some extra phenomena not predicted by wave formulation alone. 2) find an inconsistency in the wave formulation. In fact, if someone ever manages to measure both momenta and position for some electron below the Planck factor, it would be definitely an inconsistency in the wave formulation. It would mean we would have to tweak the De Broglie ansatz or something equally fundamental about it. Needless to say, nothing like that has happened share|improve this answer It's a mathematical fact IF the particle can indeed be wholly represented by that specific function, right? So in the entanglement experiment, perhaps that function does not represent the state of TWO entangled particles? Maybe we have entanglement wrong, or maybe that function does not represent particles in certain conditions? Why are these possibilities not even discussed? –  Ehryk Sep 6 '12 at 18:05 @Ehryk, because scientists, as all humans, tend to do the least amount of effort that will get the job done, it really does not make economical sense to do otherwise. As i said, there would be something to discuss if something in the experiment would not turn out as expected, but it does. 
If you want to do your life's mission to prove false the wave representation, then you need to build an experiment that will either confirm it or disprove it. then, people will likely start seriously discussing other possibilities. –  lurscher Sep 6 '12 at 18:13 We can't prove Zeus doesn't exist, yet we don't accept his existence because of this. An idea shouldn't have to be 'debunked' to have a healthy amount of doubt in it, yet the wave formulation representing all particles, everywhere, at all times and locations seems to be presented 'beyond doubt' - so why is it stated with such certainty about unknowability and when challenged, the opposition gives in without so much as a mention? –  Ehryk Sep 6 '12 at 18:26 (I'm not trying to prove it wrong, or stating that it is, I'm asking if it can be false and if so, why it's not treated as such) –  Ehryk Sep 6 '12 at 18:28 @Ehryk, suppose someone starts asking why physicists assume that we only have one time dimension, and why we don't try to debunk that. We would reply the same thing; we have no reason to devote resources to debunk something that seems to fit so nicely with existing phenomena, so the ball is in the court of the person that insist that, say, two-dimensional time makes great deal of sense for X or Y experiment. Then, if the experiment sounds like something that has not been tested, and is under budget to implement, maybe some experimentalists will try to do it. That is how science works –  lurscher Sep 6 '12 at 18:30 If we want the position and the momentum to be well-defined at each moment of time, the particle has to be classical. We inherited these notions from classical mechanics, where they apply successfully. Also they apply at macroscopic level. So, it is a natural question to ask if we can keep their good behavior in QM. Frankly, there is nothing to stop us to do this. We can conceive a world in which the particles are pointlike all the time, and move along definite trajectories, and this will "beat HUP". This was the first solution to be looked for. Einstein and de Broglie tried it, and not only them. Even Bohr, in his model, envisioned electrons as moving along definite trajectories in the atom (before QM). David Bohm was able to develop a model which has at a sub-quantum level this property, and in the meantime behaves like QM at quantum level. The price to be paid is to allow interactions which "beat the speed of light", and to adjust the model whenever something incompatible with QM was found. IMHO, this process of adjustments still continues today, and this looks very much like adding epicycles in the heliocentric model. But I don't want to be unfair with Bohm and others: it is possible to emulate QM like this, and if we learn QM facts which contradict it, it will always be possible to find such a model which behaves like QM, but also has a subquantum level which consists of classical-like point particles with definite positions and momenta. At this time, these examples prove that what you want is possible. One may argue that they are unaesthetic, because they are indeed more complicated than QM. But this doesn't mean that they are not true. Also, at this time they don't offer anything testable which QM can't offer. So, while QM describes what we observe, the additional features of hidden variable theories are not observable, more complicated, and violate special relativity. 
Or, if they don't violate special relativity, they contradict what QM predicts and we observed in experiments of entanglement like that of Alan Aspect. If EPR presents us with two alternatives, (1) spooky action at distance, (2) QM is incomplete, and that you propose, (3) HUP is false, let's not forget that Aspect's experiment and many others confirmed the alternative (1). Now, it would be much better for such models if they would stop adjusting themselves to mimic QM, and predict something new, like a violation of HUP. This would really be something. In conclusion, yes, you are right and in principle it is possible to beat HUP. The reason why most physicists don't care too much about this, is that the known ways to beat HUP are ugly, have hidden elements, violate other principles. But others consider them beautiful and useful, and if you are interested, start with Bohm's theory and the more recent developments of this. Synopsis: The Certainty of Uncertainty Violation of Heisenberg’s Measurement-Disturbance Relationship by Weak Measurements (arXiv link) share|improve this answer This was rather helpful, so I appreciate it. I'm still just having difficulty wrestling with unknowability in relation to this; for example if we ever found a way to arrest a particle completely; we'd know it's position and momentum (0) both at the same time, and while it violated HUP, it could just be said 'this particle cannot be represented by a wavefunction.' The reach of the HUP seems to include this though, with no provisions, and just be accepted so OBVIOUSLY you can't stop a particle. Would we just say the particle is classical in that instance? –  Ehryk Sep 6 '12 at 18:20 @Cristi I see (and generally have no objections to) your argument, but that conclusion seems misleading. Yes, it's possible to beat HUP (by discarding quantum mechanics) in the same sort of sense that it's possible to create a macroscopic stable wormhole: not strictly ruled out, but there is no evidence to support it. So I think it's misleading to be saying that this is possible. –  David Z Sep 6 '12 at 18:45 @David Zaslavsky: Thanks. To make clear my conclusion, and less misleading, I wrote the first, rather lengthy, paragraph. This contains for instance the statement "while QM describes what we observe, the additional features of hidden variable theories are not observable, more complicated, and violate special relativity." Anyway, I considered it would be more misleading to claim that one knows HUP can't be violated no matter what. –  Cristi Stoica Sep 6 '12 at 19:31 @Ehryk: "What happened to particle-wave duality?". Particles are represented as wavefunctions. They are defined on the full space, but may have a small support (bump functions). At limit, when concentrated at a point, bump becomes Dirac's $\delta$ function. Then, it has definite position $x$, but indefinite wave vector, so it spreads immediately (this corresponds to HUP). Its "dual" is a pure wave (with definite wave vector $k_x$, hence momentum $p_x$). The "particle-wave" duality refers to these two extreme cases. But most of the times the "wavicles" are somewhere between these two extremes. –  Cristi Stoica Sep 6 '12 at 20:41 @Ehryk: "How does a wave have mass?" They have momentum and energy: multiply wave 4-vector with $\hbar$ and obtain $4$-momentum, so yes, they have mass. Interesting thing: the rest mass $m_0$ is the same, even though in general the wave 4-vector is undetermined. 
By "undetermined" you can understand that the wavefunction is a superposition of a large (usually infinite) number of pure wavefunctions. Pure wavefunctions have definite wave vector (hence momentum), but totally undetermined position. –  Cristi Stoica Sep 6 '12 at 20:50 You are asking if a more complete theory might show that HUP is wrong and that position and momentum do exist simultaneously. But a more complete theory has to explain all the observations that QM already explains, and those observations already show that position and momentum cannot have definite values simultaneously. This is known because when particles such as photon, electrons, or even molecules are sent through a pair of slits one at a time, an interference pattern on the detector plate appears that shows that the probability of the measured location and time follows a specific mathematical relationship. The fact that certain regions have zero probability shows that before measurement, the particles exist in a superposition of possible states, such that the wave function for those states can cancel out with other states resulting in areas of low probability of observation. The observed relationships through increasingly complex experiments rules out possibilities other than what is described by QM. The only way that QM could be superseded by a new theory is for new observations to be made that violate QM, but the new theory would still result in the same predictions as QM in the circumstances that QM has already been tested. Since HUP results directly from QM, HUP would also follow from a new theory with the only possible exception in conditions such as super high energy conditions such when a single particle is nearly a black hole. Basically you have to get used to the idea that particles are really quantized fluctuations in a field and that the field exists in a superposition of states. Any better theory will simply provide additional details about why the field behaves in that way. share|improve this answer "Accept it as true until it's debunked" is not scientific. "When a particle can be perfectly represented by a waveform and ONLY a waveform, then it cannot have definite momentum and position" is acceptable. Asserting the "When" is "Always and Forever" is not. –  Ehryk Sep 15 '12 at 0:05 if can help Open timelike curves violate Heisenberg's uncertainty principle ...and show that the Heisenberg uncertainty principle between canonical variables, such as position and momentum, can be violated in the presence of interaction-free CTCs.... Foundations of Physics, March 2012, Volume 42, Issue 3, pp 341-361 ...considering that a D-CTC-assisted quantum computer can violate both the uncertainty principle... Phys Rev Lett, 102(21):210402, May 2009. arxiv 0811.1209 ...show how a party with access to CTCs, or a "CTC-assisted" party, can perfectly distin- guish among a set of non-orthogonal quantum states.... Phys. Rev. A 82, 062330 2010. arxiv 1003.1987v2 ...and can be interacted with in the way described by this simple model, our results confirm those of Brun et al that non-orthogonal states can be discriminated... ...Our work supports the conclusions of Brun et al that an observer can use interactions with a CTC to allow them to discriminate unknown, non-orthogonal quantum states – in contradiction of the uncertainty principle... share|improve this answer The only way to make Heisenberg's principle irrelevant is to measure the speed and the position (to make it simple) of a fundamental particle. 
In other words, you would have to observe a particle, without having it collide with a photon or reacting to a magnetic force, or without interacting with it. There might be an other way, which would be to find a very general law (but not statistical) which describes the characteristics (spin, speed, position etc) of an elemental particle in an absolute way.... share|improve this answer I think that's just the observer effect, described in another answer, and I can beat that by hypothesyzing a future race that has developed a gravitational particle-position-and-momentum sensor machine, which does not use photons or interact with the particle in any way that would change the position or momentum (a read only sensor). Even in this case, the HUP says they CANNOT be known simultaneously. –  Ehryk Sep 6 '12 at 10:56 I want to know what evidence there is to support this, even in the case of such a hypothetical machine. –  Ehryk Sep 6 '12 at 10:57 In this case you interact using the gravitationnal interaction, so that's almost the same. –  Yves Sep 6 '12 at 11:21 Not really. Bombarding it with photons are distinct events; surrounding it by a machine that is sensitive to the gravitation inside of it would only exert the same gravity that any other matter around it would, and if done as stated in my hypothetical, would not alter the position or momentum in any way once the particle has settled inside the machine. –  Ehryk Sep 6 '12 at 11:24 Very interesting, and it would be possible if such a machine existed (my first point). But how would you measure something else than a change in the surronding gravitationnal field (which would imply an interaction with the particule) and how would you measure a spin ? It sounds like your method is equivalent to trying to measure an absolute quantity of energy, or to "forcing" the position or momentum of your particle, a case which doesn't fall under Heisenberg's principle. This reasoning might end up as a Ouroboros.. –  Yves Sep 6 '12 at 11:33 "Heisenberg uncertainty principle" is a school term that is used in popular literature. It simply does not matter. What matters is the wavefunction and Schroedinger equation. The EPR paradox experiment never used any explicit "uncertainty principle" in the proof. share|improve this answer As @MarkM pointed out above, what I meant but wasn't able to espouse was a 'non-commutation' property (a term I've not heard of in this context), or the claim that the exact position and momentum of a particle cannot be known simultaneously. I thought this was semantically equivalent to the Heisenberg Uncertainty Principle, which I guess it is not. –  Ehryk Aug 13 '12 at 23:30 Also, from wikipedia: "The uncertainty principle is a fundamental concept in quantum physics." (from the disambiguation page, main article here: en.wikipedia.org/wiki/Uncertainty_principle ). Could you explain or give sources for it 'not matter'ing? Further, the wiki article on the EPR Paradox explicitly uses the Heisenberg Uncertainty Principle - I'm not claiming WP is any authority, but it would be the source of my confusion. –  Ehryk Aug 13 '12 at 23:38 @Annix This isn't true. Firstly, Heisenberg's matrix mechanics is an equally valid formulation of QM as wave mechanics, see Zettlli page 3. Second, the uncertainty principle is a part of wave mechanics. As you say, you can easily derive it from the Schrodinger equation. I find it odd that you say that this somehow makes the uncertainty principle irrelevant. 
You can't simultaneously know position and momentum to perfect accuracy, since localizing the position of the particle involves adding plane waves, which then makes the momentum uncertain. –  Mark M Aug 13 '12 at 23:58 @Anixx If you claim that you may derive the HUP from the Schrödinger equation, you should show it. I actually think it is not possible, but I'm curious. One usually derive the HUP from the commutation relations and later one shows it is preserved by the unitary evolution. The Schr. equation tells us how the states evolve in time, while the HUP must be verified even in the initial state so I'm very skeptical about your derivation. In any case, the HUP is at least as fundamental as the Schr. equation and it is a term very often used in technical papers and seminars. –  drake Aug 14 '12 at 0:28 @drake You can't derive it from the SE, but from the wave mechanics formulation (which is what I guess Annix means). See the'Proof of Kennard Inequality using Wave Mechanics' sub-section here: en.wikipedia.org/wiki/… However, I agree with you that the HUP is fundamental (see my above post.). –  Mark M Aug 14 '12 at 1:32 Without gravity: The uncertainty principle is not really a principle because it is a derivable statement, it is not postulated. It is derivable and proven mathematically. Once you prove something you cannot unprove it. That means it cannot turn out to be false. For experimental verifications, see for example this article by Zeilinger et al and the references inside. Zeilinger is a world expert on quantum phenomena and it is expected that he will get Nobel prize in the future. With gravity, (and that matters only at extremely high energy, as high as the Planck scale): Intuitively you can use the uncertainty principle to give an estimate about the energy needed to resolve tiny region of space. For sufficiently small region in space you will create a black hole. So there is a limit on the spacial resolution one can achieve, because of gravity. If you try to use higher energy you will create a bigger black hole. Bottom line is, uncertainty principle does not make sense in this case because space loses its meaning and it cannot be defined operationally. share|improve this answer Things can be unproven if one of the axioms or postulates they are based on is proven false. HUP may be true if <x, y and z> are true, but it certainly is based on foundations (waveforms representing matter, for one) that are not infallible. –  Ehryk Aug 14 '12 at 11:37 @Ehryk You cannot unprove something by changing the postulates, because then you are talking about totally different problem. You can compare only 2 situations giving the same postulates/axioms. The axioms are true and not false in the sense that the coherent structure coming out of those postulates leads to predictions that are consistent with experimental observations. The world is quantum mechanical. –  Revo Aug 14 '12 at 16:03 You cannot unprove it as a model of how things could work, no, but you could show that it is just not the most accurate model of the world we live it - just like we can theorize about hyperbolic geometry as a model, though it's unlikely to be the model of reality. Is it the case that you could not have a variant of something like QM that produces similar results while in some instances allowing precise position and momentum values, in the same way newton's laws were 'good enough' for the values we had measured at non relativistic speeds up until that point? –  Ehryk Aug 15 '12 at 1:43 @Ehryk No. 
You could not have had something similar to Newtonian meachanics that underlies Quantum Mechanics. What you are thinking of has been thought of for long time ago, it is unknown as hidden variables theories. It has been proven experimentally that something like Newtonian mechanics or any deterministic theory cannot be the basis of Quantum Mechanics. May be you should also keep in mind the following main point: QM is more general than CM, hence it is more fundamental. Since QM is more general than CM, one should understand how CM emerges from QM, not the other way around. –  Revo Aug 15 '12 at 1:50 @Ehryk One should understand CM in terms of QM not QM in terms of CM. –  Revo Aug 15 '12 at 1:52 The way I see it, HUP cannot be disproven "at absolute zero", because absolute zero cannot be physically reached, er... due to HUP... is circular reasoning good enough? Let's try something else. Maybe try to imagine what would happen if HUP was to be violated? For one, I guess the proton - electron charges would cause one or two electrons to fall down into the nucleus, as HUP normally prevents that (if the electron fell down on nucleus we'd know it's position with great precision, requiring it to have indeterminate but large momentum, so it kind of settles for orbiting around nucleus). If you know more about the stuff than I do, try to imagine what else would happen, and how likely is that effect. For example, if HUP violation would imply violation of 2nd law of thermodynamics, this would render HUP violation pretty unlikely. That much from a layman. share|improve this answer But then why can't we just say 'HUP is only for particles not at absolute zero'? It seems like violating it is 'not an option', even as above - so an electron falls into the nucleus. It has a measurable position and momentum. Why does HUP have to hold so strongly that we instead are comfortable with 'that particle must always have energy'? –  Ehryk Sep 6 '12 at 18:31 The way I see it "absolute zero" is purely theoretical concept. Look up Bose-Einstein condensate, get a feeling for what happens at extremely low temperatures and then try to project that further to zero. Doesn't click. So saying "HUP is only for particles at absolute zero" is like saying "HUP is for all particles", for absolute zero can't be reached. –  pafau k. Sep 6 '12 at 18:54 Do you have evidence or citations that nothing can be absolute zero? Or are you just asserting it? Note that saying 'we can't get to absolute zero' is different than 'no particle anywhere, at any time, can be at absolute zero.' –  Ehryk Sep 6 '12 at 19:10 Let me quote the beginning of Wikipedia entry on absolute zero :) "Absolute zero is the theoretical temperature at which entropy reaches its minimum value", note the word theoretical. Temperature always flows from + to -, so the simple explanation is: you'd have to have something below absolute zero to cool something else to absolute zero. (this would violate laws of thermodynamics). –  pafau k. Sep 6 '12 at 19:59 Transfer heat from hot to hotter? Decrease the volume of the container. Cool matter? Increase the volume of the container. In both cases, heat is not 'transferred', but temperature (average kinetic energy) has been changed without the interaction of other matter, either hotter or colder. –  Ehryk Sep 10 '12 at 11:18 The Heisenberg uncertainty principle forms one of the most important pillars in physics. It can't be proven wrong because too many experimentally determined phenomena are a result of the uncertainty principle. 
However, something may be discovered in the future that can make a modification to the uncertainty principle - in a similar way that Newton's laws were modified by Einstein's special relativity. Saying that the uncertainty principle is wrong is like saying that Newton's law is wrong. In reply to the comments, I'm not saying that it can be falsified. It can't. In a classical sense, it will always be correct, in a similar way that Newton's law will always be correct. However, it can be modified. Until the day that all the open questions in physics have been resolved, how can you claim that the uncertainty principle can't be modified further? Do we know everything about extra dimensions? Do we know everything about string theory and physics at the Planck scale? By the way, it has already been modified. Please check this link. The uncertainty principle will always be correct. However, it can and has been modified. In its current formalism and interpretation, it could represent a special case of a larger underlying theory. The claim that the current formalism and limitations to the uncertainty principle are absolute and can never be modified under any circumstance in the universe, is a claim that does not obey the uncertainty principle itself. share|improve this answer The uncertainty principle is a lot closer to an uncertainty law than your answer lets on. It's not really about measurement so much as it's about a Fourier Transform. –  Brandon Enright Jan 26 '14 at 23:47 The Heisenberg Uncertainty Principle is an unfalsifiable claim? All of (good) science is falsifiable. See the first paragraph: en.wikipedia.org/wiki/Falsifiability –  Ehryk Jan 28 '14 at 6:12
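Several of the answers above reduce the position–momentum uncertainty relation to a statement about a function and its Fourier transform. The short numerical sketch below is only an illustration of that point (units with ħ = 1; the grid, packet widths, and shapes are arbitrary choices, not tied to any experiment). It builds a few wave packets on a grid, obtains their momentum-space amplitudes with an FFT, and checks that the product of the two spreads never drops below 1/2, with the bound approached only by Gaussian packets.

```python
# Illustration (hbar = 1): for any wave packet, the spread in x and the spread
# of its Fourier transform in p satisfy sigma_x * sigma_p >= 1/2.
import numpy as np

def spreads(psi, x):
    """Return (sigma_x, sigma_p) for a wavefunction sampled on a uniform grid."""
    dx = x[1] - x[0]
    px = np.abs(psi)**2
    px /= px.sum() * dx                              # normalize |psi(x)|^2
    mx = np.sum(x * px) * dx
    sigma_x = np.sqrt(np.sum((x - mx)**2 * px) * dx)

    p = 2*np.pi * np.fft.fftfreq(len(x), d=dx)       # momentum grid (hbar = 1)
    dp = p[1] - p[0]
    pp = np.abs(np.fft.fft(psi))**2
    pp /= pp.sum() * dp                              # normalize |phi(p)|^2
    mp = np.sum(p * pp) * dp
    sigma_p = np.sqrt(np.sum((p - mp)**2 * pp) * dp)
    return sigma_x, sigma_p

x = np.linspace(-60, 60, 8192)
packets = {
    "narrow Gaussian": np.exp(-x**2 / (4 * 0.5**2)),
    "wide Gaussian":   np.exp(-x**2 / (4 * 3.0**2)),
    "two-humped":      np.exp(-(x - 5)**2 / 4) + np.exp(-(x + 5)**2 / 4),
}
for name, psi in packets.items():
    sx, sp = spreads(psi, x)
    print(f"{name:16s}  sigma_x*sigma_p = {sx*sp:.3f}   (lower bound 0.5)")
```

The Gaussian packets give products very close to 0.5 regardless of their width, while the two-humped packet gives a noticeably larger product, which is the content of the Kennard form of the uncertainty relation cited in the answers.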
Development of atomic theory

The concept of the atom that Western scientists accepted in broad outline from the 1600s until about 1900 originated with Greek philosophers in the 5th century BC. Their speculation about a hard, indivisible fundamental particle of nature was replaced slowly by a scientific theory supported by experiment and mathematical deduction. It was 2,000 years before modern physicists realized that the atom is indeed divisible and that it is not hard, solid, or immutable.

The atomic philosophy of the early Greeks

Leucippus of Miletus (5th century BC) is thought to have originated the atomic philosophy. His famous disciple, Democritus of Abdera, named the building blocks of matter atomos, meaning literally "indivisible," about 430 BC. Democritus believed that atoms were uniform, solid, hard, incompressible, and indestructible and that they moved in infinite numbers through empty space until stopped. Differences in atomic shape and size determined the various properties of matter. In Democritus's philosophy, atoms existed not only for matter but also for such qualities as perception and the human soul. For example, sourness was caused by needle-shaped atoms, while the colour white was composed of smooth-surfaced atoms. The atoms of the soul were considered to be particularly fine. Democritus developed his atomic philosophy as a middle ground between two opposing Greek theories about reality and the illusion of change. He argued that matter was subdivided into indivisible and immutable particles that created the appearance of change when they joined and separated from others. The philosopher Epicurus of Samos (341–270 BC) used Democritus's ideas to try to quiet the fears of superstitious Greeks. According to Epicurus's materialistic philosophy, the entire universe was composed exclusively of atoms and void, and so even the gods were subject to natural laws. Most of what is known about the atomic philosophy of the early Greeks comes from Aristotle's attacks on it and from a long poem, De rerum natura ("On the Nature of Things"), which the Latin poet and philosopher Titus Lucretius Carus (c. 95–55 BC) wrote to popularize its ideas. The Greek atomic theory is significant historically and philosophically, but it has no scientific value. It was not based on observations of nature, measurements, tests, or experiments. Instead, the Greeks used mathematics and reason almost exclusively when they wrote about physics. Like the later theologians of the Middle Ages, they wanted an all-encompassing theory to explain the universe, not merely a detailed experimental view of a tiny portion of it. Science constituted only one aspect of their broad philosophical system. Thus, Plato and Aristotle attacked Democritus's atomic theory on philosophical grounds rather than on scientific ones.
Plato valued abstract ideas more than the physical world and rejected the notion that attributes such as goodness and beauty were "mechanical manifestations of material atoms." Where Democritus believed that matter could not move through space without a vacuum and that light was the rapid movement of particles through a void, Aristotle rejected the existence of vacuums because he could not conceive of bodies falling equally fast through a void. Aristotle's conception prevailed in medieval Christian Europe; its science was based on revelation and reason, and the Roman Catholic theologians rejected Democritus as materialistic and atheistic.

The emergence of experimental science

De rerum natura, which was rediscovered in the 15th century, helped fuel a 17th-century debate between orthodox Aristotelian views and the new experimental science. The poem was printed in 1649 and popularized by Pierre Gassendi, a French priest who tried to separate Epicurus's atomism from its materialistic background by arguing that God created atoms. Soon after the Italian scientist Galileo Galilei expressed his belief that vacuums can exist (1638), scientists began studying the properties of air and partial vacuums to test the relative merits of Aristotelian orthodoxy and the atomic theory. The experimental evidence about air was only gradually separated from this philosophical controversy. The Anglo-Irish chemist Robert Boyle began his systematic study of air in 1658 after he learned that Otto von Guericke, a German physicist and engineer, had invented an improved air pump four years earlier. In 1662 Boyle published the first physical law expressed in the form of an equation that describes the functional dependence of two variable quantities. This formulation became known as Boyle's law. From the beginning, Boyle wanted to analyze the elasticity of air quantitatively, not just qualitatively, and to separate the particular experimental problem about air's "spring" from the surrounding philosophical issues. Pouring mercury into the open end of a closed J-shaped tube, Boyle forced the air in the short side of the tube to contract under the pressure of the mercury on top. By doubling the height of the mercury column, he roughly doubled the pressure and halved the volume of air. By tripling the pressure, he cut the volume of air to a third, and so on. This behaviour can be formulated mathematically in the relation PV = P′V′, where P and V are the pressure and volume under one set of conditions and P′ and V′ represent them under different conditions. Boyle's law says that pressure and volume are inversely related for a given quantity of gas. Although it is only approximately true for real gases, Boyle's law is an extremely useful idealization that played an important role in the development of atomic theory. Soon after his air-pressure experiments, Boyle wrote that all matter is composed of solid particles arranged into molecules to give material its different properties. He explained that all things are made of one Catholick Matter common to them all, and…differ but in the shape, size, motion or rest, and texture of the small parts they consist of. In France Boyle's law is called Mariotte's law after the physicist Edme Mariotte, who discovered the empirical relationship independently in 1676. Mariotte realized that the law holds true only under constant temperatures; otherwise, the volume of gas expands when heated or contracts when cooled.
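A small worked illustration of the relation just described may help; the numbers below are invented for illustration and are not Boyle's measurements.

```python
# Boyle's law: at fixed temperature, P*V stays constant for a fixed amount of gas.
# Illustrative numbers only (not Boyle's actual data).
p1, v1 = 1.0, 24.0                  # starting pressure (atm) and volume (arbitrary units)

for factor in (2, 3, 4):            # double, triple, quadruple the pressure
    p2 = factor * p1
    v2 = p1 * v1 / p2               # volume predicted by P*V = P'*V'
    print(f"P' = {p2:.0f} atm  ->  V' = {v2:.1f}  (P'*V' = {p2 * v2:.1f})")
```

Doubling the pressure halves the volume and tripling it cuts the volume to a third, exactly the pattern Boyle observed in the J-tube.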
Forty years later Isaac Newton expressed a typical 18th-century view of the atom that was similar to that of Democritus, Gassendi, and Boyle. In the last query in his book Opticks (1704), Newton stated: By the end of the 18th century, chemists were just beginning to learn how chemicals combine. In 1794 Joseph-Louis Proust of France published his law of definite proportions (also known as Proust’s law). He stated that the components of chemical compounds always combine in the same proportions by weight. For example, Proust found that no matter where he got his samples of the compound copper carbonate, they were composed by weight of five parts copper, four parts oxygen, and one part carbon. The beginnings of modern atomic theory Experimental foundation of atomic chemistry The English chemist and physicist John Dalton extended Proust’s work and converted the atomic philosophy of the Greeks into a scientific theory between 1803 and 1808. His book A New System of Chemical Philosophy (Part I, 1808; Part II, 1810) was the first application of atomic theory to chemistry. It provided a physical picture of how elements combine to form compounds and a phenomenological reason for believing that atoms exist. His work, together with that of Joseph-Louis Gay-Lussac of France and Amedeo Avogadro of Italy, provided the experimental foundation of atomic chemistry. On the basis of the law of definite proportions, Dalton deduced the law of multiple proportions, which stated that when two elements form more than one compound by combining in more than one proportion by weight, the weight of one element in one of the compounds is in simple, integer ratios to its weights in the other compounds. For example, Dalton knew that oxygen and carbon can combine to form two different compounds and that carbon dioxide (CO2) contains twice as much oxygen by weight as carbon monoxide (CO). In this case the ratio of oxygen in one compound to the amount of oxygen in the other is the simple integer ratio 2:1. Although Dalton called his theory “modern” to differentiate it from Democritus’s philosophy, he retained the Greek term atom to honour the ancients. Dalton had begun his atomic studies by wondering why the different gases in the atmosphere do not separate, with the heaviest on the bottom and the lightest on the top. He decided that atoms are not infinite in variety as had been supposed and that they are limited to one of a kind for each element. Proposing that all the atoms of a given element have the same fixed mass, he concluded that elements react in definite proportions to form compounds because their constituent atoms react in definite proportion to produce compounds. He then tried to figure out the masses for well-known compounds. To do so, Dalton made a faulty but understandable assumption that the simplest hypothesis about atomic combinations was true. He maintained that the molecules of an element would always be single atoms. Thus, if two elements form only one compound, he believed that one atom of one element combined with one atom of another element. For example, describing the formation of water, he said that one atom of hydrogen and one of oxygen would combine to form HO instead of H2O. Dalton’s mistaken belief that atoms join together by attractive forces was accepted and formed the basis of most of 19th-century chemistry. 
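Dalton's carbon–oxygen example can be checked against modern atomic masses, which of course were not available to him. The short calculation below, a sketch using present-day values, recovers the simple 2:1 ratio of oxygen weights between carbon dioxide and carbon monoxide.

```python
# Law of multiple proportions: mass of oxygen that combines with a fixed mass of
# carbon in CO versus CO2, using modern atomic masses (g/mol) for the check.
m_C, m_O = 12.011, 15.999

o_per_g_c_in_co  = (1 * m_O) / m_C      # one O atom per C atom
o_per_g_c_in_co2 = (2 * m_O) / m_C      # two O atoms per C atom

print(f"CO : {o_per_g_c_in_co:.3f} g of O per g of C")
print(f"CO2: {o_per_g_c_in_co2:.3f} g of O per g of C")
print(f"ratio = {o_per_g_c_in_co2 / o_per_g_c_in_co:.1f}")   # -> 2.0
```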
As long as scientists worked with masses as ratios, a consistent chemistry could be developed because they did not need to know whether the atoms were separate or joined together as molecules. Gay-Lussac soon took the relationship between chemical masses implied by Dalton’s atomic theory and expanded it to volumetric relationships of gases. In 1809 he published two observations about gases that have come to be known as Gay-Lussac’s law of combining gases. The first part of the law says that when gases combine chemically, they do so in numerically simple volume ratios. Gay-Lussac illustrated this part of his law with three oxides of nitrogen. The compound NO has equal parts of nitrogen and oxygen by volume. Similarly, in the compound N2O the two parts by volume of nitrogen combine with one part of oxygen. He found corresponding volumes of nitrogen and oxygen in NO2. Thus, Gay-Lussac’s law relates volumes of the chemical constituents within a compound, unlike Dalton’s law of multiple proportions, which relates only one constituent of a compound with the same constituent in other compounds. The second part of Gay-Lussac’s law states that if gases combine to form gases, the volumes of the products are also in simple numerical ratios to the volume of the original gases. This part of the law was illustrated by the combination of carbon monoxide and oxygen to form carbon dioxide. Gay-Lussac noted that the volume of the carbon dioxide is equal to the volume of carbon monoxide and is twice the volume of oxygen. He did not realize, however, that the reason that only half as much oxygen is needed is because the oxygen molecule splits in two to give a single atom to each molecule of carbon monoxide. In his Mémoire sur la combinaison des substances gazeuses, les unes avec les autres (1809; “Memoir on the Combination of Gaseous Substances with Each Other”), Gay-Lussac wrote: Thus it appears evident to me that gases always combine in the simplest proportions when they act on one another; and we have seen in reality in all the preceding examples that the ratio of combination is 1 to 1, 1 to 2 or 1 to 3.…Gases…in whatever proportions they may combine, always give rise to compounds whose elements by volume are multiples of each other.…Not only, however, do gases combine in very simple proportions, as we have just seen, but the apparent contraction of volume which they experience on combination has also a simple relation to the volume of the gases, or at least to one of them. Gay-Lussac’s work raised the question of whether atoms differ from molecules and, if so, how many atoms and molecules are in a volume of gas. Amedeo Avogadro, building on Dalton’s efforts, solved the puzzle, but his work was ignored for 50 years. In 1811 Avogadro proposed two hypotheses: (1) The atoms of elemental gases may be joined together in molecules rather than existing as separate atoms, as Dalton believed. (2) Equal volumes of gases contain equal numbers of molecules. These hypotheses explained why only half a volume of oxygen is necessary to combine with a volume of carbon monoxide to form carbon dioxide. Each oxygen molecule has two atoms, and each atom of oxygen joins one molecule of carbon monoxide. Until the early 1860s, however, the allegiance of chemists to another concept espoused by the eminent Swedish chemist Jöns Jacob Berzelius blocked acceptance of Avogadro’s ideas. (Berzelius was influential among chemists because he had determined the atomic weights of many elements extremely accurately.) 
Berzelius contended incorrectly that all atoms of a similar element repel each other because they have the same electric charge. He thought that only atoms with opposite charges could combine to form molecules. Because early chemists did not know how many atoms were in a molecule, their chemical notation systems were in a state of chaos by the mid-19th century. Berzelius and his followers, for example, used the general formula MO for the chief metallic oxides, while others assigned the formula used today, M2O. A single formula stood for different substances, depending on the chemist: H2O2 was water or hydrogen peroxide; C2H4 was methane or ethylene. Proponents of the system used today based their chemical notation on an empirical law formulated in 1819 by the French scientists Pierre-Louis Dulong and Alexis-Thérèse Petit concerning the specific heat of elements. According to the Dulong-Petit law, the specific heat of all elements is the same on a per atom basis. This law, however, was found to have many exceptions and was not fully understood until the development of quantum theory in the 20th century. To resolve such problems of chemical notation, the Sicilian chemist Stanislao Cannizzaro revived Avogadro’s ideas in 1858 and expounded them at the First International Chemical Congress, which met in Karlsruhe, Germany, in 1860. Lothar Meyer, a noted German chemistry professor, wrote later that when he heard Avogadro’s theory at the congress, “It was as though scales fell from my eyes, doubt vanished, and was replaced by a feeling of peaceful certainty.” Within a few years, Avogadro’s hypotheses were widely accepted in the world of chemistry. Atomic weights and the periodic table As more and more elements were discovered during the 19th century, scientists began to wonder how the physical properties of the elements were related to their atomic weights. During the 1860s several schemes were suggested. The Russian chemist Dmitry Ivanovich Mendeleyev based his system (see photograph) on the atomic weights of the elements as determined by Avogadro’s theory of diatomic molecules. In his paper of 1869 introducing the periodic law, he credited Cannizzaro for using “unshakeable and indubitable” methods to determine atomic weights. The elements, if arranged according to their atomic weights, show a distinct periodicity of their properties.…Elements exhibiting similarities in their chemical behavior have atomic weights which are approximately equal (as in the case of Pt, Ir, Os) or they possess atomic weights which increase in a uniform manner (as in the case of K, Rb, Cs). Skipping hydrogen because it is anomalous, Mendeleyev arranged the 63 elements known to exist at the time into six groups according to valence (see figure). Valence, which is the combining power of an element, determines the proportions of the elements in a compound. For example, H2O combines oxygen with a valence of 2 and hydrogen with a valence of 1. Recognizing that chemical qualities change gradually as atomic weight increases, Mendeleyev predicted that a new element must exist wherever there was a gap in atomic weights between adjacent elements. His system was thus a research tool and not merely a system of classification. Mendeleyev’s periodic table raised an important question, however, for future atomic theory to answer: Where does the pattern of atomic weights come from? 
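The Dulong–Petit rule mentioned above could also be used in reverse: since the heat capacity per mole of atoms is roughly the same for all solid elements (about 3R, or roughly 25 J/(mol·K)), a measured specific heat gives a rough atomic weight. The sketch below uses approximate modern room-temperature specific heats chosen only for illustration; the estimates are crude, as the exceptions noted in the text would lead one to expect.

```python
# Dulong-Petit estimate of atomic weight: the molar heat capacity of a solid
# element is roughly 3R, so atomic mass ~ 3R / (specific heat per gram).
R = 8.314                                   # J/(mol K)
specific_heats = {                          # J/(g K), approximate room-temperature values
    "copper": 0.385,
    "iron":   0.449,
    "silver": 0.235,
    "lead":   0.128,
}
for element, c in specific_heats.items():
    estimate = 3 * R / c
    print(f"{element:6s}: estimated atomic weight ~ {estimate:5.1f}")
```

The estimates land near 65 for copper, 56 for iron, 106 for silver, and about 195 for lead, close enough to the accepted values (63.5, 55.8, 107.9, and 207.2) to help 19th-century chemists cross-check atomic weights, but clearly not exact.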
Kinetic theory of gases Whereas Avogadro’s theory of diatomic molecules was ignored for 50 years, the kinetic theory of gases was rejected for more than a century. The kinetic theory relates the independent motion of molecules to the mechanical and thermal properties of gases—namely, their pressure, volume, temperature, viscosity, and heat conductivity. Three men—Daniel Bernoulli in 1738, John Herapath in 1820, and John James Waterston in 1845—independently developed the theory. The kinetic theory of gases, like the theory of diatomic molecules, was a simple physical idea that chemists ignored in favour of an elaborate explanation of the properties of gases. Bernoulli, a Swiss mathematician and scientist, worked out the first quantitative mathematical treatment of the kinetic theory in 1738 by picturing gases as consisting of an enormous number of particles in very fast, chaotic motion. He derived Boyle’s law by assuming that gas pressure is caused by the direct impact of particles on the walls of their container. He understood the difference between heat and temperature, realizing that heat makes gas particles move faster and that temperature merely measures the propensity of heat to flow from one body to another. In spite of its accuracy, Bernoulli’s theory remained virtually unknown during the 18th century and early 19th century for several reasons. First, chemistry was more popular than physics among scientists of the day, and Bernoulli’s theory involved mathematics. Second, Newton’s reputation ensured the success of his more-comprehensible theory that gas atoms repel one another. Finally, Joseph Black, another noted British scientist, developed the caloric theory of heat, which proposed that heat was an invisible substance permeating matter. At the time, the fact that heat could be transmitted by light seemed a persuasive argument that heat and motion had nothing to do with each other. Waterston’s efforts met with a similar fate. Waterston was a Scottish civil engineer and amateur physicist who could not even get his work published by the scientific community, which had become increasingly professional throughout the 19th century. Nevertheless, Waterston made the first statement of the law of equipartition of energy, according to which all kinds of particles have equal amounts of thermal energy. He derived practically all the consequences of the fact that pressure exerted by a gas is related to the number of molecules per cubic centimetre, their mass, and their mean squared velocity. He derived the basic equation of kinetic theory, which reads P = ⅓NMV². Here P is the pressure of a volume of gas, N is the number of molecules per unit volume, M is the mass of the molecule, and V² is the average velocity squared of the molecules. Recognizing that the kinetic energy of a molecule is proportional to MV² and that the heat energy of a gas is proportional to the temperature, Waterston expressed the law as PV/T = a constant. During the late 1850s, a decade after Waterston had formulated his law, the scientific community was finally ready to accept a kinetic theory of gases. The studies of heat undertaken by the English physicist James Prescott Joule during the 1840s had shown that heat is a form of energy. This work, together with the law of the conservation of energy that he helped to establish, had persuaded scientists to discard the caloric theory by the mid-1850s.
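In modern notation Waterston’s relation reads P = ⅓NMV², and a few lines of arithmetic show that it is consistent with ordinary laboratory conditions. A minimal sketch, using standard textbook values for nitrogen rather than anything Waterston had available:

```python
import math

# Kinetic theory: P = (1/3) * N * M * <v^2>, with
#   N     = number of molecules per unit volume,
#   M     = mass of one molecule,
#   <v^2> = mean of the squared molecular speed.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 293.0                 # room temperature, K
P_atm = 101325.0          # one atmosphere, Pa

M = 28.0e-3 / 6.022e23    # mass of one N2 molecule, kg
N = P_atm / (k_B * T)     # number density of an ideal gas at P_atm and T
v_sq = 3.0 * k_B * T / M  # mean squared speed from equipartition

print(f"rms molecular speed : {math.sqrt(v_sq):.0f} m/s")    # ~ 510 m/s for N2
print(f"recovered pressure  : {N * M * v_sq / 3.0:.0f} Pa")  # ~ 101325 Pa
# For a fixed amount of gas, PV/T = (total number of molecules) * k_B = constant.
```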
The caloric theory had required that a substance contain a definite amount of caloric (i.e., a hypothetical weightless fluid) to be turned into heat; however, experiments showed that any amount of heat can be generated in a substance by putting enough energy into it. Thus, there was no point to hypothesizing such a special fluid as caloric. At first, after the collapse of the caloric theory, physicists had nothing with which to replace it. Joule, however, discovered Herapath’s kinetic theory and used it in 1851 to calculate the velocity of hydrogen molecules. Then the German physicist Rudolf Clausius developed the kinetic theory mathematically in 1857, and the scientific world took note. Clausius and two other physicists, the Scot James Clerk Maxwell and the Austrian Ludwig Eduard Boltzmann (who developed the kinetic theory of gases in the 1860s), introduced sophisticated mathematics into physics for the first time since Newton. In his 1860 paper Illustrations of the Dynamical Theory of Gases, Maxwell used probability theory to produce his famous distribution function for the velocities of gas molecules. Employing Newtonian laws of mechanics, he also provided a mathematical basis for Avogadro’s theory. Maxwell, Clausius, and Boltzmann assumed that gas particles were in constant motion, that they were tiny compared with the space between them, and that their interactions were very brief. They then related the motion of the particles to pressure, volume, and temperature. Interestingly, none of the three committed himself on the nature of the particles. Studies of the properties of atoms Size of atoms The first modern estimates of the size of atoms and the numbers of atoms in a given volume were made by the Austrian chemist Joseph Loschmidt in 1865. Loschmidt used the results of kinetic theory and some rough estimates to do his calculation. The size of the atoms and the distance between them in the gaseous state are related both to the contraction of gas upon liquefaction and to the mean free path traveled by molecules in a gas. The mean free path, in turn, can be found from the thermal conductivity and diffusion rates in the gas. Loschmidt calculated the size of the atom and the spacing between atoms by finding a solution common to these relationships. His result for Avogadro’s number is remarkably close to the present accepted value of about 6.022 × 10²³. The precise definition of Avogadro’s number is the number of atoms in 12 grams of the carbon isotope C-12. Loschmidt’s result for the diameter of an atom was approximately 10⁻⁸ cm. Much later, in 1908, the French physicist Jean Perrin used Brownian motion to determine Avogadro’s number. Brownian motion, first observed in 1827 by the Scottish botanist Robert Brown, is the continuous movement of tiny particles suspended in water. Their movement is caused by the thermal motion of water molecules bumping into the particles. Perrin’s argument for determining Avogadro’s number makes an analogy between particles in the liquid and molecules in the atmosphere. The thinning of air at high altitudes depends on the balance between the gravitational force pulling the molecules down and their thermal motion forcing them up. The relationship between the weight of the particles and the height of the atmosphere would be the same for Brownian particles suspended in water. Perrin counted particles of gum mastic at different heights in his water sample and inferred the mass of atoms from the rate of decrease.
He then divided the result into the molar weight of atoms to determine Avogadro’s number. After Perrin, few scientists could disbelieve the existence of atoms. Electric properties of atoms While atomic theory was set back by the failure of scientists to accept simple physical ideas like the diatomic molecule and the kinetic theory of gases, it was also delayed by the preoccupation of physicists with mechanics for almost 200 years, from Newton to the 20th century. Nevertheless, several 19th-century investigators, working in the relatively ignored fields of electricity, magnetism, and optics, provided important clues about the interior of the atom. The studies in electrodynamics made by the English physicist Michael Faraday and those of Maxwell indicated for the first time that something existed apart from palpable matter, and data obtained by Gustav Robert Kirchhoff of Germany about elemental spectral lines raised questions that would be answered only in the 20th century by quantum mechanics. Until Faraday’s electrolysis experiments, scientists had no conception of the nature of the forces binding atoms together in a molecule. Faraday concluded that electrical forces existed inside the molecule after he had produced an electric current and a chemical reaction in a solution with the electrodes of a voltaic cell. No matter what solution or electrode material he used, a fixed quantity of current sent through an electrolyte always caused a specific amount of material to form on an electrode of the electrolytic cell. Faraday concluded that each ion of a given chemical compound has exactly the same charge. Later he discovered that the ionic charges are integral multiples of a single unit of charge, never fractions. On the practical level, Faraday did for charge what Dalton had done for the chemical combination of atomic masses. That is to say, Faraday demonstrated that it takes a definite amount of charge to convert an ion of an element into an atom of the element and that the amount of charge depends on the element used. The unit of charge that releases one gram-equivalent weight of a simple ion is called the faraday in his honour. For example, one faraday of charge passing through water releases one gram of hydrogen and eight grams of oxygen. In this manner, Faraday gave scientists a rather precise value for the ratios of the masses of atoms to the electric charges of ions. The ratio of the mass of the hydrogen atom to the charge of the electron was found to be 1.035 × 10⁻⁸ kilogram per coulomb. Faraday did not know the size of his electrolytic unit of charge in units such as coulombs any more than Dalton knew the magnitude of his unit of atomic weight in grams. Nevertheless, scientists could determine the ratio of these units easily. More significantly, Faraday’s work was the first to imply the electrical nature of matter and the existence of subatomic particles and a fundamental unit of charge. Faraday wrote: The atoms of matter are in some way endowed or associated with electrical powers, to which they owe their most striking qualities, and amongst them their mutual chemical affinity. Faraday did not, however, conclude that atoms cause electricity. Light and spectral lines In 1865 Maxwell unified the laws of electricity and magnetism in his publication A Dynamical Theory of the Electromagnetic Field. In this paper he concluded that light is an electromagnetic wave. His theory was confirmed by the German physicist Heinrich Hertz, who produced radio waves with sparks in 1887.
With light understood as an electromagnetic wave, Maxwell’s theory could be applied to the emission of light from atoms. The theory failed, however, to describe spectral lines and the fact that atoms do not lose all their energy when they radiate light. The problem was not with Maxwell’s theory of light itself but rather with its description of the oscillating electron currents generating light. Only quantum mechanics could explain this behaviour (see below The laws of quantum mechanics). By far the richest clues about the structure of the atom came from spectral line series. Mounting a particularly fine prism on a telescope, the German physicist and optician Joseph von Fraunhofer had discovered between 1814 and 1824 hundreds of dark lines in the spectrum of the Sun. He labeled the most prominent of these lines with the letters A through G. Together they are now called Fraunhofer lines (see figure). A generation later Kirchhoff heated different elements to incandescence in order to study the different coloured vapours emitted. Observing the vapours through a spectroscope, he discovered that each element has a unique and characteristic pattern of spectral lines. Each element produces the same set of identifying lines, even when it is combined chemically with other elements. In 1859 Kirchhoff and the German chemist Robert Wilhelm Bunsen discovered two new elements—cesium and rubidium—by first observing their spectral lines. Johann Jakob Balmer, a Swiss secondary-school teacher with a penchant for numerology, studied hydrogen’s spectral lines (see photograph) and found a constant relationship between the wavelengths of the element’s four visible lines. In 1885 he published a generalized mathematical formula for all the lines of hydrogen. The Swedish physicist Johannes Rydberg extended Balmer’s work in 1890 and found a general rule applicable to many elements. Soon more series were discovered elsewhere in the spectrum of hydrogen and in the spectra of other elements as well. Stated in terms of the frequency of the light rather than its wavelength, the formula may be expressed as ν = R(1/m² − 1/n²). Here ν is the frequency of the light, n and m are integers (with n larger than m), and R is the Rydberg constant. In the Balmer lines m is equal to 2 and n takes on the values 3, 4, 5, and 6. Discovery of electrons During the 1880s and ’90s scientists searched cathode rays for the carrier of the electrical properties in matter. Their work culminated in the discovery by English physicist J.J. Thomson of the electron in 1897. The existence of the electron showed that the 2,000-year-old conception of the atom as a homogeneous particle was wrong and that in fact the atom has a complex structure. Cathode-ray studies began in 1854 when Heinrich Geissler, a glassblower and technical assistant to the German physicist Julius Plücker, improved the vacuum tube. Plücker discovered cathode rays in 1858 by sealing two electrodes inside the tube, evacuating the air, and forcing electric current between the electrodes. He found a green glow on the wall of his glass tube and attributed it to rays emanating from the cathode. In 1869, with better vacuums, Plücker’s pupil Johann W. Hittorf saw a shadow cast by an object placed in front of the cathode. The shadow proved that the cathode rays originated from the cathode. The English physicist and chemist William Crookes investigated cathode rays in 1879 and found that they were bent by a magnetic field; the direction of deflection suggested that they were negatively charged particles.
As the luminescence did not depend on what gas had been in the vacuum or what metal the electrodes were made of, he surmised that the rays were a property of the electric current itself. As a result of Crookes’s work, cathode rays were widely studied, and the tubes came to be called Crookes tubes. Although Crookes believed that the rays were electrically charged particles, his work did not settle the issue of whether cathode rays were particles or radiation similar to light. By the late 1880s the controversy over the nature of cathode rays had divided the physics community into two camps. Most French and British physicists, influenced by Crookes, thought that cathode rays were electrically charged particles because they were affected by magnets. Most German physicists, on the other hand, believed that the rays were waves because they traveled in straight lines and were unaffected by gravity. A crucial test of the nature of the cathode rays was how they would be affected by electric fields. Heinrich Hertz, the aforementioned German physicist, reported that the cathode rays were not deflected when they passed between two oppositely charged plates in an 1892 experiment. In England J.J. Thomson thought Hertz’s vacuum might have been faulty and that residual gas might have reduced the effect of the electric field on the cathode rays. Thomson repeated Hertz’s experiment with a better vacuum in 1897. He directed the cathode rays between two parallel aluminum plates to the end of a tube where they were observed as luminescence on the glass. When the top aluminum plate was negative, the rays moved down; when the upper plate was positive, the rays moved up. The deflection was proportional to the difference in potential between the plates. With both magnetic and electric deflections observed, it was clear that cathode rays were negatively charged particles. Thomson’s discovery established the particulate nature of electricity. Accordingly, he called his particles electrons. From the magnitude of the electrical and magnetic deflections, Thomson could calculate the ratio of mass to charge for the electrons. This ratio was known for atoms from electrochemical studies. Measuring this ratio for the electron and comparing it with the value for an atom, he discovered that the mass of the electron was very small, merely 1/1,836 that of a hydrogen ion. When scientists realized that an electron was nearly 2,000 times lighter than the smallest atom, they understood how cathode rays could penetrate metal sheets and how electric current could flow through copper wires. In deriving the mass-to-charge ratio, Thomson had calculated the electron’s velocity. It was about 1/10 the speed of light, thus amounting to roughly 30,000 km (18,000 miles) per second. Thomson emphasized that we have in the cathode rays matter in a new state, a state in which the subdivision of matter is carried very much further than in the ordinary gaseous state; a state in which all matter, that is, matter derived from different sources such as hydrogen, oxygen, etc., is of one and the same kind; this matter being the substance from which all the chemical elements are built up. Thus, the electron was the first subatomic particle identified, the smallest and the fastest bit of matter known at the time. In 1909 the American physicist Robert Andrews Millikan greatly improved a method employed by Thomson for measuring the electron charge directly.
In Millikan’s oil-drop experiment, he produced microscopic oil droplets and observed them falling in the space between two electrically charged plates. Some of the droplets became charged and could be suspended by a delicate adjustment of the electric field. Millikan knew the weight of the droplets from their rate of fall when the electric field was turned off. From the balance of the gravitational and electrical forces, he could determine the charge on the droplets. All the measured charges were integral multiples of a quantity that in contemporary units is 1.602 × 10⁻¹⁹ coulomb. Millikan’s electron-charge experiment was the first to detect and measure the effect of an individual subatomic particle. Besides confirming the particulate nature of electricity, his experiment also supported previous determinations of Avogadro’s number. Avogadro’s number times the unit of charge gives Faraday’s constant, the amount of charge required to electrolyze one mole of a singly charged chemical ion. Discovery of radioactivity Like Thomson’s discovery of the electron, the discovery of radioactivity in uranium by the French physicist Henri Becquerel in 1896 forced scientists to radically change their ideas about atomic structure. Radioactivity demonstrated that the atom was neither indivisible nor immutable. Instead of serving merely as an inert matrix for electrons, the atom could change form and emit an enormous amount of energy. Furthermore, radioactivity itself became an important tool for revealing the interior of the atom. The German physicist Wilhelm Conrad Röntgen had discovered X-rays in 1895, and Becquerel thought they might be related to fluorescence and phosphorescence, processes in which substances absorb and emit energy as light. In the course of his investigations, Becquerel stored some photographic plates and uranium salts in a desk drawer. Expecting to find the plates only lightly fogged, he developed them and was surprised to find sharp images of the salts. He then began experiments that showed that uranium salts emit a penetrating radiation independent of external influences. Becquerel also demonstrated that the radiation could discharge electrified bodies. In this case discharge means the removal of electric charge, and it is now understood that the radiation, by ionizing molecules of air, allows the air to conduct an electric current. Early studies of radioactivity relied on measuring ionization power (see figure) or on observing the effects of radiation on photographic plates. In 1898 the French physicists Pierre and Marie Curie discovered the strongly radioactive elements polonium and radium, which occur naturally in uranium minerals. Marie coined the term radioactivity for the spontaneous emission of ionizing, penetrating rays by certain atoms. Experiments conducted by the British physicist Ernest Rutherford in 1899 showed that radioactive substances emit more than one kind of radiation. It was determined that part of the radiation is 100 times more penetrating than the rest and can pass through aluminum foil one-fiftieth of a millimetre thick. Rutherford named the less-penetrating emanations alpha rays and the more-powerful ones beta rays, after the first two letters of the Greek alphabet. Investigators who in 1899 found that beta rays were deflected by a magnetic field concluded that they are negatively charged particles similar to cathode rays.
In 1903 Rutherford found that alpha rays were deflected slightly in the opposite direction, showing that they are massive, positively charged particles. Much later Rutherford proved that alpha rays are nuclei of helium atoms by collecting the rays in an evacuated tube and detecting the buildup of helium gas over several days. A third kind of radiation was identified by the French chemist Paul Villard in 1900. Designated as the gamma ray, it is not deflected by magnets and is much more penetrating than alpha particles. Gamma rays were later shown to be a form of electromagnetic radiation, like light or X-rays, but with much shorter wavelengths. Because of these shorter wavelengths, gamma rays have higher frequencies and are even more penetrating than X-rays. In 1902, while studying the radioactivity of thorium, Rutherford and the English chemist Frederick Soddy discovered that radioactivity was associated with changes inside the atom that transformed thorium into a different element. They found that thorium continually generates a chemically different substance that is intensely radioactive. The radioactivity eventually makes the new element disappear. Watching the process, Rutherford and Soddy formulated the exponential decay law (see decay constant), which states that a fixed fraction of the element will decay in each unit of time. For example, half of the thorium product decays in four days, half the remaining sample in the next four days, and so on. Until the 20th century, physicists had studied subjects, such as mechanics, heat, and electromagnetism, that they could understand by applying common sense or by extrapolating from everyday experiences. The discoveries of the electron and radioactivity, however, showed that classical Newtonian mechanics could not explain phenomena at atomic and subatomic levels. As the primacy of classical mechanics crumbled during the early 20th century, quantum mechanics was developed to replace it. Since then experiments and theories have led physicists into a world that is often extremely abstract and seemingly contradictory. Models of atomic structure J.J. Thomson’s discovery of the negatively charged electron had raised theoretical problems for physicists as early as 1897, because atoms as a whole are electrically neutral. Where was the neutralizing positive charge and what held it in place? Between 1903 and 1907 Thomson tried to solve the mystery by adapting an atomic model that had been first proposed by the Scottish scientist William Thomson (Lord Kelvin) in 1902. According to the Thomson atomic model, often referred to as the “plum-pudding” model, the atom is a sphere of uniformly distributed positive charge about one angstrom in diameter (see figure). Electrons are embedded in a regular pattern, like raisins in a plum pudding, to neutralize the positive charge. The advantage of the Thomson atom was that it was inherently stable: if the electrons were displaced, they would attempt to return to their original positions. In another contemporary model, the atom resembled the solar system or the planet Saturn, with rings of electrons surrounding a concentrated positive charge. The Japanese physicist Nagaoka Hantaro in particular developed the “Saturnian” system in 1904. The atom, as postulated in this model, was inherently unstable because, by radiating continuously, the electron would gradually lose energy and spiral into the nucleus. No electron could thus remain in any particular orbit indefinitely. 
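Before turning to Rutherford’s scattering experiments, the Rutherford-Soddy decay law quoted above is easy to restate in modern terms: if the half-life is four days, the fraction of the sample remaining after time t is (1/2)^(t/4 days). A brief sketch with illustrative numbers:

```python
import math

# Rutherford-Soddy decay law: N(t) = N0 * exp(-lambda * t), lambda = ln 2 / half-life.
half_life_days = 4.0                        # half-life used in the example above
decay_constant = math.log(2) / half_life_days

N0 = 1_000_000                              # starting number of atoms (illustrative)
for day in range(0, 17, 4):
    fraction_left = math.exp(-decay_constant * day)
    print(f"day {day:2d}: {fraction_left:.4f} of the sample remains "
          f"({N0 * fraction_left:,.0f} atoms)")
# day 0: 1.0000, day 4: 0.5000, day 8: 0.2500, day 12: 0.1250, day 16: 0.0625
```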
Rutherford’s nuclear model Rutherford overturned Thomson’s model in 1911 with his famous gold-foil experiment, in which he demonstrated that the atom has a tiny, massive nucleus (see figure). Five years earlier Rutherford had noticed that alpha particles beamed through a hole onto a photographic plate would make a sharp-edged picture, while alpha particles beamed through a sheet of mica only 20 micrometres (or about 0.002 cm) thick would make an impression with blurry edges. For some particles the blurring corresponded to a two-degree deflection. Remembering those results, Rutherford had his postdoctoral fellow, Hans Geiger, and an undergraduate student, Ernest Marsden, refine the experiment. The young physicists beamed alpha particles through gold foil and detected them as flashes of light or scintillations on a screen. The gold foil was only 0.00004 cm thick. Most of the alpha particles went straight through the foil, but some were deflected by the foil and hit a spot on a screen placed off to one side. Geiger and Marsden found that about one in 20,000 alpha particles had been deflected 45° or more. Rutherford asked why so many alpha particles passed through the gold foil while a few were deflected so greatly. “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper, and it came back to hit you,” Rutherford said later. Many physicists distrusted the Rutherford atomic model because it was difficult to reconcile with the chemical behaviour of atoms (see figure). The model suggested that the charge on the nucleus was the most important characteristic of the atom, determining its structure. On the other hand, Mendeleyev’s periodic table of the elements had been organized according to the atomic masses of the elements, implying that the mass was responsible for the structure and chemical behaviour of atoms. Moseley’s X-ray studies Henry Gwyn Jeffreys Moseley, a young English physicist killed in World War I, confirmed that the positive charge on the nucleus revealed more about the fundamental structure of the atom than Mendeleyev’s atomic mass. Moseley studied the spectral lines emitted by heavy elements in the X-ray region of the electromagnetic spectrum. He built on the work done by several other British physicists—Charles Glover Barkla, who had studied X-rays produced by the impact of electrons on metal plates, and William Bragg and his son Lawrence, who had developed a precise method of using crystals to reflect X-rays and measure their wavelength by diffraction. Moseley applied their method systematically to measure the spectra of X-rays produced by many elements. Moseley found that each element radiates X-rays of a different and characteristic wavelength. The wavelength and frequency vary in a regular pattern according to the charge on the nucleus. He called this charge the atomic number. In his first experiments, conducted in 1913, Moseley used what was called the K series of X-rays to study the elements up to zinc. The following year he extended this work using another series of X-rays, the L series. Moseley was conducting his research at the same time that the Danish theoretical physicist Niels Bohr was developing his quantum shell model of the atom. The two conferred and shared data as their work progressed, and Moseley framed his equation in terms of Bohr’s theory by identifying the K series of X-rays with the most-bound shell in Bohr’s theory, the N = 1 shell, and identifying the L series of X-rays with the next shell, N = 2. 
Moseley presented formulas for the X-ray frequencies that were closely related to Bohr’s formulas for the spectral lines in a hydrogen atom. Moseley showed that the frequency of a line in the X-ray spectrum is proportional to the square of the charge on the nucleus. The constant of proportionality depends on whether the X-ray is in the K or L series. This is the same relationship that Bohr used in his formula applied to the Lyman and Balmer series of spectral lines. The regularity of the differences in X-ray frequencies allowed Moseley to order the elements by atomic number from aluminum to gold. He observed that, in some cases, the order by atomic weights was incorrect. For example, cobalt has a larger atomic mass than nickel, but Moseley found that it has atomic number 27 while nickel has 28. When Mendeleyev constructed the periodic table, he based his system on the atomic masses of the elements and had to put cobalt and nickel out of order to make the chemical properties fit better. In a few places where Moseley found more than one integer between elements, he predicted correctly that a new element would be discovered. Because there is just one element for each atomic number, scientists could be confident for the first time of the completeness of the periodic table; no unexpected new elements would be discovered. Bohr’s shell model With his model, Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta (see figure). For example, if an electron jumps one orbit closer to the nucleus, it must emit energy equal to the difference of the energies of the two orbits. Conversely, when the electron jumps to a larger orbit, it must absorb a quantum of light equal in energy to the difference in orbits. Bohr’s theory had major drawbacks, however. Except for the spectra of X-rays in the K and L series, it could not explain properties of atoms having more than one electron. The binding energy of the helium atom, which has two electrons, was not understood until the development of quantum mechanics. Several features of the spectrum were inexplicable even in the hydrogen atom (see figure). High-resolution spectroscopy shows that the individual spectral lines of hydrogen are divided into several closely spaced fine lines. In a magnetic field the lines split even farther apart. The German physicist Arnold Sommerfeld modified Bohr’s theory by quantizing the shapes and orientations of orbits to introduce additional energy levels corresponding to the fine spectral lines. The laws of quantum mechanics Within a few short years scientists developed a consistent theory of the atom that explained its fundamental structure and its interactions. Crucial to the development of the theory was new evidence indicating that light and matter have both wave and particle characteristics at the atomic and subatomic levels. Theoreticians had objected to the fact that Bohr had used an ad hoc hybrid of classical Newtonian dynamics for the orbits and some quantum postulates to arrive at the energy levels of atomic electrons. The new theory ignored the fact that electrons are particles and treated them as waves. By 1926 physicists had developed the laws of quantum mechanics, also called wave mechanics, to explain atomic and subatomic phenomena. The duality between the wave and particle nature of light was highlighted by the American physicist Arthur Holly Compton in an X-ray scattering experiment conducted in 1922. 
Compton sent a beam of X-rays through a target material and observed that a small part of the beam was deflected off to the sides at various angles. He found that the scattered X-rays had longer wavelengths than the original beam; the change could be explained only by assuming that the X-rays scattered from the electrons in the target as if the X-rays were particles with discrete amounts of energy and momentum (see figure). When X-rays are scattered, their momentum is partially transferred to the electrons. The recoil electron takes some energy from an X-ray, and as a result the X-ray frequency is shifted. Both the discrete amount of momentum and the frequency shift of the light scattering are completely at variance with classical electromagnetic theory, but they are explained by Einstein’s quantum formula. Louis-Victor de Broglie, a French physicist, proposed in his 1923 doctoral thesis that all matter and radiations have both particle- and wavelike characteristics. Until the emergence of the quantum theory, physicists had assumed that matter was strictly particulate. In his quantum theory of light, Einstein proposed that radiation has characteristics of both waves and particles. Believing in the symmetry of nature, Broglie postulated that ordinary particles such as electrons may also have wave characteristics. Using the old-fashioned word corpuscles for particles, Broglie wrote, For both matter and radiations, light in particular, it is necessary to introduce the corpuscle concept and the wave concept at the same time. In other words, the existence of corpuscles accompanied by waves has to be assumed in all cases. Broglie’s conception was an inspired one, but at the time it had no empirical or theoretical foundation. The Austrian physicist Erwin Schrödinger had to supply the theory. Schrödinger’s wave equation In 1926 the Schrödinger equation, essentially a mathematical wave equation, established quantum mechanics in widely applicable form. In order to understand how a wave equation is used, it is helpful to think of an analogy with the vibrations of a bell, violin string, or drumhead. These vibrations are governed by a wave equation, since the motion can propagate as a wave from one side of the object to the other. Certain vibrations in these objects are simple modes that are easily excited and have definite frequencies. For example, the motion of the lowest vibrational mode in a drumhead is in phase all over the drumhead with a pattern that is uniform around it; the highest amplitude of the vibratory motion occurs in the middle of the drumhead. In more-complicated, higher-frequency modes, the motion on different parts of the vibrating drumhead are out of phase, with inward motion on one part at the same time that there is outward motion on another. Schrödinger postulated that the electrons in an atom should be treated like the waves on the drumhead. The different energy levels of atoms are identified with the simple vibrational modes of the wave equation. The equation is solved to find these modes, and then the energy of an electron is obtained from the frequency of the mode and from Einstein’s quantum formula, E = hν. Schrödinger’s wave equation gives the same energies as Bohr’s original formula but with a much more-precise description of an electron in an atom (see figure). The lowest energy level of the hydrogen atom, called the ground state, is analogous to the motion in the lowest vibrational mode of the drumhead. 
In the atom the electron wave is uniform in all directions from the nucleus, is peaked at the centre of the atom, and has the same phase everywhere. Higher energy levels in the atom have waves that are peaked at greater distances from the nucleus. Like the vibrations in the drumhead, the waves have peaks and nodes that may form a complex shape. The different shapes of the wave pattern are related to the quantum numbers of the energy levels, including the quantum numbers for angular momentum and its orientation. The year before Schrödinger produced his wave theory, the German physicist Werner Heisenberg published a mathematically equivalent system to describe energy levels and their transitions. In Heisenberg’s method, properties of atoms are described by arrays of numbers called matrices, which are combined with special rules of multiplication. Today physicists use both wave functions and matrices, depending on the application. Schrödinger’s picture is more useful for describing continuous electron distributions because the wave function can be more easily visualized. Matrix methods are more useful for numerical analysis calculations with computers and for systems that can be described in terms of a finite number of states, such as the spin states of the electron. Antiparticles and the electron’s spin The English physicist Paul Dirac introduced a new equation for the electron in 1928. Because the Schrödinger equation does not satisfy the principles of relativity, it can be used to describe only those phenomena in which the particles move much more slowly than the velocity of light. In order to satisfy the conditions of relativity, Dirac was forced to postulate that the electron would have a particular form of wave function with four independent components, some of which describe the electron’s spin. Thus, from the very beginning, the Dirac theory incorporated the electron’s spin properties. The remaining components allowed additional states of the electron that had not yet been observed. Dirac interpreted them as antiparticles, with a charge opposite to that of electrons (see animation). The discovery of the positron in 1932 by the American physicist Carl David Anderson proved the existence of antiparticles and was a triumph for Dirac’s theory. After Anderson’s discovery, subatomic particles could no longer be considered immutable. Electrons and positrons can be created out of the vacuum, given a source of energy such as a high-energy X-ray or a collision (see photograph). They also can annihilate each other and disappear into some other form of energy. From this point, much of the history of subatomic physics has been the story of finding new kinds of particles, many of which exist for only fractions of a second after they have been created. Advances in nuclear and subatomic physics The 1920s witnessed further advances in nuclear physics with Rutherford’s discovery of artificially induced nuclear reactions: bombardment of light nuclei by alpha particles could transmute them into nuclei of other elements. In 1928 the Russian-born American physicist George Gamow explained the lifetimes in alpha radioactivity using the Schrödinger equation. His explanation used a property of quantum mechanics that allows particles to “tunnel” through regions where classical physics would forbid them to be. Structure of the nucleus The constitution of the nucleus was poorly understood at the time because the only known particles were the electron and the proton.
It had been established that nuclei are typically about twice as heavy as can be accounted for by protons alone. A consistent theory was impossible until the English physicist James Chadwick discovered the neutron in 1932. He found that alpha particles reacted with beryllium nuclei to eject neutral particles with nearly the same mass as protons. Almost all nuclear phenomena can be understood in terms of a nucleus composed of neutrons and protons. Surprisingly, the neutrons and protons in the nucleus move to a large extent in orbitals as though their wave functions were independent of one another. Each neutron or proton orbital is described by a stationary wave pattern with peaks and nodes and angular momentum quantum numbers. The theory of the nucleus based on these orbitals is called the shell nuclear model. It was introduced independently in 1948 by Maria Goeppert Mayer of the United States and Johannes Hans Daniel Jensen of West Germany, and it developed in succeeding decades into a comprehensive theory of the nucleus. The interactions of neutrons with nuclei had been studied during the mid-1930s by the Italian-born American physicist Enrico Fermi and others. Nuclei readily capture neutrons, which, unlike protons or alpha particles, are not repelled from the nucleus by a positive charge. When a neutron is captured, the new nucleus has one higher unit of atomic mass. If a nearby isotope of that atomic mass is more stable, the new nucleus will be radioactive, convert the neutron to a proton, and assume the more-stable form. Nuclear fission was discovered by the German chemists Otto Hahn and Fritz Strassmann in 1938 during the course of experiments initiated and explained by Austrian physicist Lise Meitner. In fission a uranium nucleus captures a neutron and gains enough energy to trigger the inherent instability of the nucleus, which splits into two lighter nuclei of roughly equal size. The fission process releases more neutrons, which can be used to produce further fissions. The first nuclear reactor, a device designed to permit controlled fission chain reactions, was constructed at the University of Chicago under Fermi’s direction, and the first self-sustaining chain reaction was achieved in this reactor in 1942. In 1945 American scientists produced the first fission bomb, also called an atomic bomb, which used uncontrolled fission reactions in either uranium or the artificial element plutonium. In 1952 American scientists used a fission explosion to ignite a fusion reaction in which isotopes of hydrogen combined thermally into heavier helium nuclei. This was the first thermonuclear bomb, also called an H-bomb, a weapon that can release hundreds or thousands of times more energy than a fission bomb. Quantum field theory and the standard model Dirac not only proposed the relativistic equation for the electron but also initiated the relativistic treatment of interactions between particles known as quantum field theory. The theory allows particles to be created and destroyed and requires only the presence of suitable interactions carrying sufficient energy. Quantum field theory also stipulates that the interactions can extend over a distance only if there is a particle, or field quantum, to carry the force. The electromagnetic force, which can operate over long distances, is carried by the photon, the quantum of light. Because the theory allows particles to interact with their own field quanta, mathematical difficulties arose in applying the theory. 
The theoretical impasse was broken as a result of a measurement carried out in 1946 and 1947 by the American physicist Willis Eugene Lamb, Jr. Using microwave techniques developed during World War II, he showed that the hydrogen spectrum is actually about one-tenth of one percent different from Dirac’s theoretical picture. Later the German-born American physicist Polykarp Kusch found a similar anomaly in the size of the magnetic moment of the electron. Lamb’s results were announced at a famous Shelter Island Conference held in the United States in 1947; the German-born American physicist Hans Bethe and others realized that the so-called Lamb shift was probably caused by electrons and field quanta that may be created from the vacuum. The previous mathematical difficulties were overcome by Richard Feynman, Julian Schwinger, and Tomonaga Shin’ichirō, who shared the 1965 Nobel Prize for Physics, and Freeman Dyson, who showed that their various approaches were mathematically identical. The new theory, called quantum electrodynamics, was found to explain all the measurements to very high precision. Apparently, quantum electrodynamics provides a complete theory of how electrons behave under electromagnetism. Beginning in the 1960s, similarities were found between the weak force and electromagnetism. Sheldon Glashow, Abdus Salam, and Steven Weinberg combined the two forces in the electroweak theory, for which they shared the Nobel Prize for Physics in 1979. In addition to the photon, three field quanta were predicted as additional carriers of the force—the W particle, the Z particle, and the Higgs particle. The discoveries of the W and Z particles in 1983, with correctly predicted masses, established the validity of the electroweak theory. Physicists are still searching for the much heavier Higgs particle, whose exact mass is not specified by the theory. In all, hundreds of subatomic particles have been discovered since the first unstable particle, the muon, was identified in cosmic rays in the 1930s. By the 1960s patterns emerged in the properties and relationships among subatomic particles that led to the quark theory. Combining the electroweak theory and the quark theory, a theoretical framework called the Standard Model was constructed; it includes all known particles and field quanta. In the Standard Model there are two broad categories of particles, the leptons and the quarks. Leptons include electrons, muons, and neutrinos, and, aside from gravity, they interact only with the electroweak force. The quarks are subject to the strong force, and they combine in various ways to make bound states. The bound quark states, called hadrons, include the neutron and the proton. Three quarks combine to form a proton, a neutron, or any of the massive hadrons known as baryons. A quark combines with an antiquark to form mesons such as the pion. Quarks have never been observed, and physicists do not expect to find one. The strength of the strong force is so great that quarks cannot be separated from each other outside hadrons. The existence of quarks has been confirmed indirectly in several ways, however. In experiments conducted with high-energy electron accelerators starting in 1967, physicists observed that some of the electrons bombarded onto proton targets were deflected at large angles. As in Rutherford’s gold-foil experiment, the large-angle deflection implies that hadrons have an internal structure containing very small charged objects. The small objects are presumed to be quarks. 
To accommodate quarks and their peculiar properties, physicists developed a new quantum field theory, known as quantum chromodynamics, during the mid-1970s. This theory explains qualitatively the confinement of quarks to hadrons. Physicists believe that the theory should explain all aspects of hadrons. However, mathematical difficulties in dealing with the strong interactions in quantum chromodynamics are more severe than those of quantum electrodynamics, and rigorous calculations of hadron properties have not been possible. Nevertheless, numerical calculations using the largest computers seem to confirm the validity of the theory.
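The quark assignments above can be checked with simple charge arithmetic: up quarks carry charge +2/3 and down quarks −1/3 in units of the proton charge, and antiquarks carry the opposite sign. A minimal sketch of that bookkeeping:

```python
from fractions import Fraction

# Quark electric charges in units of the proton charge; antiquarks carry the opposite sign.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quarks, antiquarks=""):
    """Total charge of a bound state of quarks and antiquarks."""
    return sum(charge[q] for q in quarks) - sum(charge[q] for q in antiquarks)

print("proton  uud          :", hadron_charge("uud"))     # 1
print("neutron udd          :", hadron_charge("udd"))     # 0
print("pion pi+ (u + anti-d):", hadron_charge("u", "d"))  # 1
```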
Bonding to Hydrogen The simplest molecule, made for connection Roald Hoffmann My first encounter with H2 was typical for a boy in the age of chemistry sets that had some zing to them. My set, made by A. C. Gilbert Co., contained some powdered zinc. It had no acids, but it taught you to generate them from chemicals it included (for instance HCl from NaHSO4 and NH4Cl), or—the manual said—you could buy a small quantity from your local apothecary. Perhaps I got it there, asking politely for the acid in my best accented English a year or so after coming to Brooklyn from Europe. I poured some of the dilute acid on the zinc in a test tube, watched it bubble away, lit (with some fear) a match and heard that distinct pop. Flammable Air Next, I encountered the gas, Henry Cavendish’s inflammable air, in a high school electrolysis experiment. We ran a current through water with a little salt dissolved in it, collected the unequal volumes of gases formed, each trapped in an inverted tube. Both gases gave small pyrotechnic pleasures—one, hydrogen, with that satisfying pop when a newly extinguished splint came near it; the other, oxygen, revived exuberantly the flame of the same splint. Primo Levi, in an early chapter in his marvelous The Periodic Table, describes an initiation into chemistry that features the same experiment, with more fearsome results: I carefully lifted the cathode jar and holding it with its open end down, lit a match and brought it close. There was an explosion, small but sharp and angry, the jar burst into splinters (luckily, I was holding it level with my chest and not higher) and there remained in my hand, as a sarcastic symbol, the glass ring of the bottom.… It was indeed hydrogen, therefore: the same element that burns in the sun and stars, and from whose condensations the universes are formed in eternal silence. In my high school lab I had no idea that I was reliving, with different methods, part of the experiment Antoine Laurent Lavoisier thought important enough over two days in February 1785 to invite a select group of luminaries of French science to witness. In a tour de force of the big science of his day, using some remarkable instruments he had constructed at his own expense, he decomposed water into its constituent hydrogen and oxygen, and followed that by a recombination of the elemental gases thus generated into water. Henry Cavendish had proved that water is formed in the combustion of hydrogen some years before; Lavoisier not only decomposed water, but determined that the cycle of its decomposition and reformation proceeded with conservation of mass. Not everyone was convinced—they should have been—yet with this experiment, a new chemical age dawned. Water and air, those seemingly homogeneous elements of the Greeks, were shown to be a compound and a mixture, respectively. A Diatomic Molecule Chemistry and I progressed; it took chemistry a good 75 years from Lavoisier’s time to have the macroscopic compounds—there at the beginning, with us today—be joined by a realization of an underlying microscopic reality, imagined well before it was proven, that of molecules. And it took another 65 years (now we’re circa 1925) for the new quantum mechanics to be created, explaining the why and wherefore of the molecules of dihydrogen (a nomenclature I will use when I need to distinguish hydrogen molecules from hydrogen atoms). In my education, I made that transition from compounds to molecules, much as chemistry did.
Except I did it in three years instead of 140. I encountered the molecule, more precisely the quantum mechanical treatment of H2, in a class George Fraenkel taught, and beautifully so, in my last year at Columbia College. Fraenkel took us through the first calculation on H2 by Heitler and London, in 1927, a calculation parlayed by Linus Pauling into a general theory of covalent bonding. By this time the dissociation energy of H2 (the strength of the bond, the energy needed to take it apart into two hydrogen atoms) was known. It was 4.48 electron volts (eV) per molecule, 104 kilocalories per mole (kcal/mol). If that doesn’t touch you, let’s begin with the fact that a mole of H2 (roughly 22 liters of it in gaseous form at room temperature) has a mass of 2.0 grams. Not much, that’s why it was used in airships. Kcal/mol? To heat a liter (about a quart, 1.057 quarts to be exact) of water from room temperature to boiling (a real-life operation most of us, even men, have done) takes about 80 kcal. That should help—to knock 2 grams of hydrogen molecules into hydrogen atoms takes about the same energy as to heat one and a quarter liters of water to boiling. Except, don’t try it on your stove—remember the Hindenburg airship. The energy of the hydrogen molecule as a function of distance is described by a “potential energy curve” shown in the figure below, a graphical depiction of how the chemical potential energy of the molecule varies with separation of the hydrogen atoms (actually their nuclei) in the molecule from each other. The depth of the well relative to the separated atoms is the dissociation energy I described above. But any molecule is a quantum mechanical entity; so the molecule, in a way as a consequence of Heisenberg’s uncertainty principle, does not sit still at the minimum of the potential energy curve. The molecule vibrates, the vibrations of the molecule are quantized—and in its lowest energy state the hydrogen nuclei retain some motion (in a way like a pendulum but less deterministically so) around the “equilibrium distance.” Sometimes they are a little closer, sometimes a little farther apart, on the average they are ~0.74 × 10⁻⁸ centimeter, 0.74 Ångström (Å) from each other. We call that the bond distance. The bond distance in the H2 molecule and its dissociation energy were known by the time the new quantum mechanics came. Heitler and London got a dissociation energy of 3.14 eV, an equilibrium distance of 0.87 Å. Not too great (compared with experiment) but a remarkable result: For the first time quantum mechanics “explained” the existence of a molecule. Which classical mechanics coupled with electrostatics, try as it might, couldn’t. One could not solve the Schrödinger equation, the wave equation that describes all matter, exactly for H2, but the path down a road of increasingly accurate approximations to the exact solution seemed beautifully logical and enticing to this young apprentice. Fraenkel took us through it first of all by another method, called the molecular orbital (MO) method, pioneered by Friedrich Hund and Robert S. Mulliken. A molecular orbital is a combination of atomic orbitals, an approximate way to describe the location of electrons in a molecule—I will show you one soon. This method eventually dominated chemical thinking from the 1950s through today, but initially gave a poorer description of the H-H bond in H2.
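For readers who want to verify the arithmetic, the conversion between the units quoted above (electron volts per molecule, kilocalories per mole, liters of water brought to the boil) goes as follows; the 80 kcal figure is the one used in the text, and the conversion factors are standard:

```python
# Converting the H2 dissociation energy between the units quoted in the text.
eV_per_molecule = 4.48
avogadro = 6.022e23            # molecules per mole
joules_per_eV = 1.602e-19
kcal_per_joule = 1.0 / 4184.0

kcal_per_mol = eV_per_molecule * joules_per_eV * avogadro * kcal_per_joule
print(f"{eV_per_molecule} eV/molecule ~ {kcal_per_mol:.0f} kcal/mol")     # ~ 103 kcal/mol

# Heating 1 liter (~1000 g) of water from ~20 C to boiling takes
# roughly 1000 g * 1 cal/(g K) * 80 K = 80 kcal.
kcal_to_boil_one_liter = 1000.0 * 1.0 * 80.0 / 1000.0
print(f"equivalent liters of water brought to the boil: "
      f"{kcal_per_mol / kcal_to_boil_one_liter:.2f}")                     # ~ 1.3
```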
Yet both the MO and the Heitler-London methods (expanded into Pauling’s “valence bond” [VB] approach) could be systematically, logically improved. We followed and understood that path in our class, culminating in a remarkable 1933 calculation by H. M. James and A. S. Coolidge, using hand-cranked mechanical calculators, that matched experiment. I would like to show you the molecular orbitals of H2, because (a) they’re important, and (b) I can’t escape them; they bring to me new chemistry at roughly 25-year intervals. The two 1s orbitals of the individual atoms combine in in-phase and out-of-phase fashion to give molecular orbitals called σg and σu*, shown in the figure at the top of the next page. The σ and the subscripts and superscripts on it are labels, symmetry labels; what matters is that σg has no node between the nuclei, while σu* does. That puts σg low in energy, σu* high. And, importantly, σg is a “bonding” orbital, if occupied (as it is in H2), the electrons in it bring the atoms together, whereas σu* is an antibonding orbital, any electrons in it (there are none in an unperturbed H2 molecule, at least in the simplest analysis) pushing the nuclei apart. Interesting that the big guys, the massive nuclei, move where the small electrons tell them to move. Treat Me Right I had a small, almost disastrous encounter with the molecule again, right after earning my Ph.D. Well, spiritually, not materially. An approximate molecular orbital method I and some fellow theoreticians working with W. N. Lipscomb had devised, called the “extended Hückel” theory, did well on some larger, organic molecules, giving reasonable geometries and relative energies. But when I tried it on dihydrogen, the molecule collapsed—the calculated internuclear distance going to zero. That was a shock. It took some courage to go on with a method that could not get right (for good reasons, as we found out) the simplest molecule in the world. Or, just maybe, this small apparent disaster helped. For it made me and my students rely less on numbers than on understanding. On we did go, and got a lot of chemistry with this deficient method. A Lousy Acid, a Lousy Base Molecular hydrogen is pretty unreactive, as is methane. Hydrogen burns, of course (with a flame that is nearly colorless but very, very hot). But to get it to burn you need a match, even though the reaction to form water, Cavendish’s and Lavoisier’s reaction, gives off ~68 kcal/mol of dihydrogen burned. That’s chemistry: Things that should spontaneously proceed by the dictates of thermodynamics (like hydrogen burning) actually encountering substantial barriers to doing so. Chemical reactivity is predominantly that of acids and bases—that is why we spend so much time in introductory chemistry on this property of molecules. A base (ammonia, for example) is a good donor of electrons; in MO terms it has an energetically high-lying filled molecular orbital. An acid (the hydronium ion, the aquated proton, H3O+) is a good acceptor of electrons, as it has a low-energy empty MO. Hydrogen has an occupied MO, just one; you’ve seen it—it’s the σg in the MO picture of the molecule (see figure at left top). That MO lies low in energy; H2’s ionization potential, a measure of the energy of that MO, is large, 15.4 eV. And H2’s lowest unoccupied MO, σu*, is relatively high lying—to promote an electron from the filled MO to the unfilled one takes ~11 eV. Put into plain English, the hydrogen molecule is a lousy base and a lousy acid.
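The picture just sketched, a low-lying filled σg and a high-lying empty σu*, is captured by the simplest possible two-orbital model: two 1s orbitals of equal energy α coupled by a negative interaction β give a bonding level at α + β and an antibonding level at α − β. A minimal sketch, with α and β chosen for illustration rather than fitted to experiment:

```python
import numpy as np

alpha = -13.6   # energy of an isolated H 1s orbital, eV (illustrative)
beta = -4.0     # interaction between the two 1s orbitals, eV (illustrative)

# Two-orbital model in the basis {1s on atom A, 1s on atom B}; overlap is neglected.
H = np.array([[alpha, beta],
              [beta,  alpha]])

energies, coeffs = np.linalg.eigh(H)
for E, c in zip(energies, coeffs.T):
    kind = "in phase (bonding, sigma_g)" if c[0] * c[1] > 0 else "out of phase (antibonding, sigma_u*)"
    print(f"E = {E:6.1f} eV   coefficients = {np.round(c, 3)}   {kind}")
# The two electrons of H2 both occupy the lower, bonding combination.
```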
The molecule is then relatively unreactive, even as it burns giving off a good bit of heat. Other molecules lack a good handhold, so to speak, on H2. Döbereiner’s Feuerzeug Yet hydrogen was known to react all along with some metal surfaces. In another column (American Scientist 86:326–329 [1998]) I recounted how Johann Wolfgang Döbereiner discovered in 1823 that hydrogen burned on a platinum surface. This is the first well-characterized catalytic reaction. Döbereiner did not know that there were molecules in his hydrogen gas (generated the same way I did as a boy, from Zn plus an acid, sulfuric acid in his case). And, of course, he did not know in atomistic detail how those H2 molecules fell apart on his Pt surface, and how they combined with oxygen from the atmosphere. Döbereiner made a Feuerzeug, a source of fire based on hydrogen, that became a household firelighting tool for half a century. Molecular Complexes of Dihydrogen By the 1980s there emerged evidence for weak “complexation” (binding) of hydrogen with various metal atoms. And surface scientists were piecing together the mechanism of Döbereiner’s seeming magic. Elsewhere, organometallic chemists found some reactions in which hydrogen molecules added to a metal center, the two hydrogen atoms split apart in the process. Experimentalists and theorists began to view the seeming chemical inertness of dihydrogen as a challenge rather than dogma. In 1984 Jean-Yves Saillard, a French postdoctoral associate (now at the University of Rennes), and I did a careful study of the interactions of hydrogen and methane with discrete transition metal centers with associated ligands. These MLn (M is a metal atom, L a ligand, say CO or PH3, n the variable number of such ligands) fragments, if carefully chosen to be good bases and acids at the same time, could, in our approximate calculations, bind dihydrogen. The molecular orbital essence of our argument is shown in the figure at lower left; a similar picture and interpretation is there in earlier work of three Alains—Dedieu, Strich and Sevin. A small interlude here on so-called interaction diagrams, which is what you see in the figure at lower left on the previous page. These diagrams, my professional bread and butter, show the interaction of the important orbitals of two pieces of a molecule (when it can be taken apart into pieces). That’s the way we build understanding, putting together, in LEGO style, the orbitals of a more complex molecule from simpler pieces. The L5M(H2) molecule in the middle (at that time unknown, at least to us) is built from two simpler pieces—an ML5 fragment at left, and my old friend H2 at right. The orbitals of H2 are easy—you’ve seen them above, the σg MO, with both of the 1s orbitals of the component H atoms in-phase, at low energy; the σu* MO, unfilled by electrons, at high energy. On the other side are orbitals of the ML5 fragment, mostly on the metal. They are more complicated (the metal has important 3d orbitals), but the essential feature is that there are orbitals on the metal filled with electrons and some that are empty, and these match in symmetry and overlap reasonably well with the orbitals on the H2. The dashed lines in the figure guide us to just these stabilizing interactions.
Here’s what happens in this theoretical analysis: The acid function of the ML5 fragment (its empty orbital, called dz2) interacts with the base σg of H2, the base function of ML5 (a filled dxz orbital) interacts with the σu*, the acid function of H2. (Did I not say that there is a reason for all that seeming torture on acids and bases in first-year chemistry?) Importantly, there are consequences to the strength and length of the H2 as a function of the interaction: As a result of the mixing of MOs of ML5 with those of H2, some electrons are transferred from the σg orbital of H2, depleting its bonding density. And some electrons are transferred in the opposite direction, from ML5 to the H2 σu* orbital. Both actions—decreasing bonding, increasing antibonding—will stretch the H-H distance, even as they overall bind H2 to MLn. The figure is for ML5, but the reasoning extends to other numbers of ligands bound to the metal. 2012-09MargHoffmannFE.jpgClick to Enlarge ImageSaillard and I made no prediction of specific molecules. What we did not know when we did our work is that the first such “complex” had just been made. Greg Kubas at Los Alamos had synthesized (and with no nuclear reactions involved), the molecule shown in the figure above. It was followed over the years by a significant group of dihydrogen complexes, even ones in which the metal held more than hydrogen molecule. In time the H-H distance in these molecules was determined accurately (one needs neutron diffraction for that; metric information also comes from nuclear magnetic resonance studies). Kubas understood very well what was going on—his qualitative thinking about what bound H2 in his molecules, quite independently conceived, was similar to ours. But what fun for us! A theoretical idea about how a molecule could bind—and not just any molecule, but normally inert hydrogen—translated into reality! We were happy. And Kubas deserves all the credit, because science is ultimately about the reality of a compound in hand—theories come and go, the molecule is there. The First Element under Pressure In the past few years, my colleague Neil Ashcroft and I have had a fruitful collaboration on the response of molecules and extended structures to extreme pressure. Three years ago we returned to a first love of Neil’s, hydrogen. In this we were joined by a talented French postdoc, Vanessa Labet. Experimentally, one can learn much about matter under pressure (see “The Squeeze Is On,” American Scientist 97:108 [2009]) from studies in diamond anvil cells, where in a small reaction volume, between two tough diamonds and enveloped by (one hopes) an unreactive metal, a sample of matter is compressed. At what pressure solid, cold hydrogen (yes, hydrogen freezes, at 14 degrees kelvin) metallizes is the subject of hot, current dispute. But some things people agree on—solid hydrogen retains molecular diatomic units up to pressures such as those at the center of the Earth (3.5 million atmospheres). And from a spectroscopic measurement one can even deduce the internuclear distance in the confined diatomic. As the pressure rises, the H-H equilibrium separation contracts a little, then begins to stretch. The magnitude of the excursion is small, less than 2 percent of the 0.74 Å separation. There are places in physics and chemistry where theory can afford a clearer picture of a phenomenon, and matter at extreme conditions is one such place. 
If one can trust the theory… Vanessa Labet had at her disposal a numerical laboratory of the best structures calculated for compressed H2 by Chris Pickard and Richard Needs. We used that laboratory to get physical insight, to reason out why hydrogen did what it did. The figure above shows the small dance the calculated shortest, intramolecular H-H distance does with pressure—it goes down a little, up for a while, down again, up, down. The discontinuities, the jags in the curve, are understandable—they are the consequence of abrupt changes from one preferred form to another, so-called phase transitions. The calculations matched experimental findings pretty well. But what was behind the small dance steps?

We first thought about the effect of confinement, one hydrogen molecule simply squeezed by other hydrogen molecules in that tense space. Now a model for that was already there in earlier work of Dudley Herschbach and Richard LeSar. They looked at the energy levels of H2 confined in a rigid spheroidal box, as the dimensions of the box decreased. As one might expect, the internuclear separation responded by decreasing. Labet probed confinement by a slightly softer box, a hydrogen molecule imprisoned between two helium atoms, the most ungiving chemical walls we could think of. The earlier results were confirmed—such confinement only made the H2 distance contract. What else could it do? But that's not what our numerical laboratory and experiment showed; in a real and modeled crystal of H2, the hydrogen molecule shrank, expanded, expanded some more, shrank. By just a little. What could possibly make it grow longer? As it was squeezed? At this point I remembered Kubas's wonderful organometallic complexes. In them the coordinated hydrogen molecules expanded to 0.82–0.89 Å in length. And from the work Saillard and I did, we knew why! The metal fragment provided electrons to populate hydrogen σu*, depopulate σg, both weakening the H-H bond. In compressed hydrogen, at pressures approaching those at the center of the Earth, there were no metals in sight. But under these extreme conditions, could other hydrogen molecules around a given H2 possibly play that role? We looked at the population of the molecular orbitals of a given molecule, and sure enough the effect was there. Model calculations confirmed that the little dance of H-H separations with pressure that experiment and theory observe in dense, cold H2 was the outcome of two competing effects: simple physical confinement, and the chemical effect of the molecular orbitals of confined and confining molecules interacting, mixing, transferring electrons, stretching that bond. I love it—the same bonding that occurs in discrete transition metal organometallic molecules is there in a highly compressed crystal of pure H2.

One World

The first element, the simplest diatomic molecule there is—what could be simpler? Hold on—an H2 molecule in solid H2 under pressure, an H2 molecule approaching a Pt surface in Döbereiner's firelighter, the H2 bubbling out of the solution of a 13-year-old boy playing with slightly dangerous chemicals in a Brooklyn apartment, the H2 in transition metal complexes Greg Kubas saw for the first time in the world—of course, each is different, peculiar, set apart by its conditions of generation and preservation. But there can't be different rules of nature operating for one H2 and not the other. The joy is in seeing the connections.
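The competition between confinement and charge transfer described above can be caricatured in a one-dimensional toy model. This sketch is entirely my own construction, not the Labet–Hoffmann–Ashcroft calculation: a harmonic H–H bond whose preferred length grows with "pressure" (standing in for σg depletion and σu* population by neighbors) competes with a confinement term that squeezes the molecule. Every number below is made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy model, illustrative only. Charge transfer with neighbors is mimicked by
# letting the preferred bond length grow with pressure P; physical confinement
# is a penalty ~ P * d**2 that favors shorter bonds.
k_bond, d0 = 16.0, 0.74            # hypothetical force constant (eV/A^2) and bond length (A)

def energy(d, P, ct=0.003, k_conf=0.5):
    d_pref = d0 * (1.0 + ct * P**2)              # charge transfer lengthens the preferred bond
    return 0.5 * k_bond * (d - d_pref)**2 + 0.5 * k_conf * P * d**2

for P in [0, 2, 5, 10]:                          # arbitrary "pressure" units
    res = minimize_scalar(energy, bounds=(0.4, 1.2), args=(P,), method="bounded")
    print(f"P = {P:2d}: equilibrium H-H distance = {res.x:.3f} A")
```

With these invented constants the minimum first contracts (confinement wins) and then stretches (charge transfer wins), qualitatively echoing the "small dance" of the H-H separation described in the essay.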
I am grateful to Neil Ashcroft and Vanessa Labet for making me think about something I never thought I'd be working on again, or that I could imagine there was something left to learn about. As there is.

• Kubas, G. J., R. R. Ryan, B. I. Swanson, P. J. Vergamini and H. J. Wasserman. 1984. Characterization of the first examples of isolable molecular hydrogen complexes, M(CO)3(PR3)2(H2) (M = molybdenum or tungsten; R = Cy or isopropyl). Evidence for a side-on bonded dihydrogen ligand. Journal of the American Chemical Society 106:451–452.
• Labet, V., R. Hoffmann and N. W. Ashcroft. 2012. A fresh look at dense hydrogen under pressure: 3. Two competing effects and the resulting intramolecular H-H separation in solid hydrogen under pressure. Journal of Chemical Physics 136:074503.
• Levi, P. 1984. The Periodic Table, trans. R. Rosenthal. Schocken Books: New York.
• Saillard, J.-Y., and R. Hoffmann. 1984. C-H and H-H activation in transition metal complexes and on surfaces. Journal of the American Chemical Society 106:2006–2026.
• Salem, L. 1987. Marvels of the Molecule (Molécule, la merveilleuse. English), trans. James D. Wuest. VCH: New York.
Quantum mechanics | Mathematical formulations

Quantum mechanics (QM – also known as quantum physics, or quantum theory) is a branch of physics which deals with physical phenomena at microscopic scales, where the action is on the order of the Planck constant. Quantum mechanics departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. Quantum mechanics is the non-relativistic limit of quantum field theory (QFT), a later theory that combined quantum mechanics with special relativity. In advanced topics of quantum mechanics, some of these behaviors are macroscopic and emerge only at extreme (i.e., very low or very high) energies or temperatures. The name quantum mechanics derives from the observation that some physical quantities can change only in discrete amounts (Latin quanta), and not in a continuous (cf. analog) way. For example, the angular momentum of an electron bound to an atom or molecule is quantized. In the context of quantum mechanics, the wave–particle duality of energy and matter and the uncertainty principle provide a unified view of the behavior of photons, electrons, and other atomic-scale objects. The mathematical formulations of quantum mechanics are abstract. A mathematical function known as the wavefunction provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Mathematical manipulations of the wavefunction usually involve the bra-ket notation, which requires an understanding of complex numbers and linear functionals. In many standard problems the object is treated as a quantum harmonic oscillator, and the mathematics is akin to that describing acoustic resonance. Many of the results of quantum mechanics are not easily visualized in terms of classical mechanics—for instance, the ground state in a quantum mechanical model is a non-zero energy state that is the lowest permitted energy state of a system, as opposed to a more "traditional" system that is thought of as simply being at rest, with zero kinetic energy. Instead of a traditional static, unchanging zero state, quantum mechanics allows for far more dynamic, chaotic possibilities, according to John Wheeler. The earliest versions of quantum mechanics were formulated in the first decade of the 20th century. At around the same time, the atomic theory and the corpuscular theory of light (as updated by Einstein) first came to be widely accepted as scientific fact; these latter theories can be viewed as quantum theories of matter and electromagnetic radiation, respectively. Early quantum theory was significantly reformulated in the mid-1920s by Werner Heisenberg, Max Born and Pascual Jordan, who created matrix mechanics; Louis de Broglie and Erwin Schrödinger (wave mechanics); and Wolfgang Pauli and Satyendra Nath Bose (statistics of subatomic particles). And the Copenhagen interpretation of Niels Bohr became widely accepted. By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann, with a greater emphasis placed on measurement in quantum mechanics, the statistical nature of our knowledge of reality, and philosophical speculation about the role of the observer.
Quantum mechanics has since branched out into almost every aspect of 20th century physics and other disciplines, such as quantum chemistry, quantum electronics, quantum optics, and quantum information science. Much 19th century physics has been re-evaluated as the "classical limit" of quantum mechanics, and its more advanced developments in terms of quantum field theory, string theory, and speculative quantum gravity theories. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper entitled "On the nature of light and colours". This experiment played a major role in the general acceptance of the wave theory of light. These studies were followed by the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck. Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" (or "energy elements") precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies, and underestimated the radiance at low frequencies. Later, Max Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics. Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert A. Millikan studied the photoelectric effect experimentally and Albert Einstein developed a theory for it. At the same time Niels Bohr developed his theory of the atomic structure, which was later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld. This phase is known as the old quantum theory. According to Planck, each energy element E is proportional to its frequency ν: E = hν, where h is Planck's constant. (Planck is regarded as the father of quantum theory.) Planck (cautiously) insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a fundamental discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material.
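As a concrete instance of E = hν applied to the photoelectric effect, here is a back-of-the-envelope check. The work function value used for comparison (roughly that of sodium) is illustrative, not taken from the text above.

```python
# Photon energy E = h * nu, and why frequency (not intensity) decides whether
# electrons are ejected in the photoelectric effect.
h  = 6.626e-34          # Planck's constant, J*s
eV = 1.602e-19          # joules per electronvolt

for label, nu in [("red light, ~450 THz", 4.5e14),
                  ("violet light, ~750 THz", 7.5e14),
                  ("ultraviolet, ~1500 THz", 1.5e15)]:
    E = h * nu
    print(f"{label:>22}: E = {E/eV:.2f} eV")

# A typical metal work function is a few eV (about 2.3 eV for sodium, used
# here only as an illustrative number). Photons below the threshold frequency
# cannot eject electrons no matter how intense the light is.
W = 2.3 * eV
print(f"threshold frequency for W = 2.3 eV: {W / h:.2e} Hz")
```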
The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien, Satyendra Nath Bose, Arnold Sommerfeld and others. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the "Old Quantum Theory". Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.  The other exemplar that led to quantum mechanics was the study of electromagnetic waves, such as visible and ultraviolet light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or "quanta", Albert Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon) with a discrete quantum of energy that was dependent on its frequency. As a matter of fact, Einstein was able to use the photon theory of light to explain the photoelectric effect, for which he won the Nobel Prize in 1921. This led to a theory of unity between subatomic particles and electromagnetic waves, called wave–particle duality, in which particles and waves were neither one nor the other, but had certain properties of both. Thus coined the term wave-particle duality. While quantum mechanics traditionally described the world of the very small, it is also needed to explain certain recently investigated macroscopic systems such as superconductors, superfluids, and larger organic molecules. The word quantum derives from the Latin, meaning "how great" or "how much". In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and sub-atomic systems which is today called quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics. Some fundamental aspects of the theory are still actively studied. Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. In addition, if classical mechanics truly governed the workings of an atom, electrons would really 'orbit' the nucleus. Since bodies in circular motion accelerate, they must emit radiation and collide with the nucleus in the process. This clearly contradicts the existence of stable atoms. However, in the natural world electrons normally remain in an uncertain, non-deterministic, "smeared", probabilistic wave–particle wavefunction orbital path around (or through) the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism. 
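The "electrons would spiral into the nucleus" argument above can be made quantitative. In classical electrodynamics an electron on a circular orbit radiates (Larmor formula), and the standard textbook estimate for an electron started at the Bohr radius gives a collapse time t = a0³/(4 r0² c), on the order of 10⁻¹¹ seconds, a sketch of which follows.

```python
# Classical estimate of how long a radiating electron, started at the Bohr
# radius, would take to spiral into the proton: t = a0**3 / (4 * r0**2 * c).
a0 = 5.29e-11       # Bohr radius, m
r0 = 2.82e-15       # classical electron radius, m
c  = 3.0e8          # speed of light, m/s

t_collapse = a0**3 / (4 * r0**2 * c)
print(f"classical collapse time ~ {t_collapse:.1e} s")   # roughly 1.6e-11 s
```

The absurdly short lifetime is the quantitative version of "this clearly contradicts the existence of stable atoms."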
Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter. Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:
• The quantization of certain physical properties
• Wave–particle duality
• The uncertainty principle
• Quantum entanglement

Mathematical formulations

In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors"). Formally, these reside in a complex separable Hilbert space - variously called the "state space" or the "associated Hilbert space" of the system - that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system - for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues. In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as a state vector in a complex vector space. This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, with arbitrary accuracy. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum. According to one interpretation, as the result of a measurement the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable — which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator.
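A small numerical illustration of the preceding paragraphs, using spin-1/2 along the x-axis as the observable and an arbitrary made-up state vector: the observable is a Hermitian matrix, its eigenvalues are the possible measurement results, and the probabilities follow from the spectral decomposition (Born rule).

```python
import numpy as np

# Observable: spin along x for a spin-1/2 particle (in units of hbar/2),
# represented by a Hermitian (self-adjoint) matrix on a 2-dimensional Hilbert space.
Sx = np.array([[0, 1],
               [1, 0]], dtype=complex)

# An arbitrary normalized state vector (illustrative, not from the text).
psi = np.array([1.0, 0.5j])
psi = psi / np.linalg.norm(psi)

eigvals, eigvecs = np.linalg.eigh(Sx)           # eigenvalues = possible outcomes
for lam, v in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(v, psi))**2              # Born rule: |<eigenstate|psi>|^2
    print(f"outcome {lam:+.0f}: probability {prob:.3f}")

# Expectation value <psi|Sx|psi> agrees with sum(outcome * probability).
print(f"expectation value <Sx> = {np.vdot(psi, Sx @ psi).real:.3f}")
```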
Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wavefunction collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wavefunctions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics. Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are distorted by many causes, such as dense probability clouds. Probability clouds are approximate, but better than the Bohr model: the electron's location is described by a probability function derived from the wave function, with the probability given by the squared modulus of the complex amplitude. Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic"). In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it provides only a range of probabilities for where the particle might be found and what momentum it might have. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). Usually, a system will not be in an eigenstate of the observable we are interested in. However, if one measures the observable, the wavefunction will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wavefunction collapse, a controversial and much-debated process that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wavefunction collapsing into each of the possible eigenstates. For example, a free particle will usually have a wavefunction that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum).
When one measures the position of the particle, it is impossible to predict with certainty the result. It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x. The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that - given a wavefunction at an initial time - it makes a definite prediction of what the wavefunction will be at any later time. During a measurement, on the other hand, the change of the initial wavefunction into another, later wavefunction is not deterministic; it is unpredictable (i.e., random). Wave functions change as time progresses. The Schrödinger equation describes how wavefunctions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.
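The spreading just described has a closed form for a free Gaussian wave packet: its width obeys σ(t) = σ0·sqrt(1 + (ħt/(2mσ0²))²). A short check of that standard formula for an electron, with an arbitrary initial width of one angstrom:

```python
import numpy as np

# Width of a free Gaussian wave packet versus time:
# sigma(t) = sigma0 * sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
hbar   = 1.055e-34      # J*s
m      = 9.109e-31      # electron mass, kg
sigma0 = 1e-10          # arbitrary initial width: 1 Angstrom

def sigma(t):
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2 * m * sigma0**2))**2)

for t in [0.0, 1e-16, 1e-15, 1e-14]:     # seconds
    print(f"t = {t:7.1e} s: width = {sigma(t):.3e} m")
```

Within a few femtoseconds the packet is already tens of times wider than it started, which is why "position becomes more uncertain with time" for a free particle.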
Sunday, November 17, 2013

Constant torque as a manner to force a phase transition increasing the value of Planck constant

The challenge is to identify physical mechanisms forcing the increase of the effective Planck constant heff (whether to call it effective or not is to some extent a matter of taste). The work with certain potential applications of TGD led to a discovery of a new mechanism possibly achieving this. The method would be simple: apply a constant torque to a rotating system. I will leave it for the reader to rediscover how this can be achieved. It turns out that the considerations lead to considerable insights about how large heff phases are generated in living matter.

Could constant torque force the increase of heff?

Consider a rigid body allowed to rotate around some axis so that its state is characterized by a rotation angle φ. Assume that a constant torque τ is applied to the system.

1. The classical equation of motion is I d²φ/dt² = τ. This is true in an idealization as a point particle characterized by its moment of inertia around the axis of rotation. The equations of motion are obtained from the variational principle S = ∫ L dt, L = I(dφ/dt)²/2 − V(φ), V(φ) = τφ. Here φ denotes the rotational angle. The mathematical problem is that the potential function V(φ) is either many-valued or discontinuous at φ = 2π.

2. Quantum mechanically the system corresponds to a Schrödinger equation −(ħ²/2I) ∂²Ψ/∂φ² + τφ Ψ = −i ∂Ψ/∂t. In the stationary situation one has −(ħ²/2I) ∂²Ψ/∂φ² + τφ Ψ = EΨ.

3. The wave function is expected to be continuous at φ = 2π. The discontinuity of the potential at φ = φ0 poses further strong conditions on the solutions: Ψ should vanish in a region containing the point φ0. Note that the value of φ0 can be chosen freely.

The intuitive picture is that the solutions correspond to strongly localized wave packets in accelerating motion. The wave packet can for some time vanish in the region containing the point φ0. What happens when this condition does not hold anymore?

• Dissipation is present in the system and therefore also state function reductions. Could a state function reduction occur when the wave packet contains the point where V(φ) is discontinuous?

• Or are the solutions well-defined only in a space-time region with finite temporal extent T? In zero energy ontology (ZEO) this option is automatically realized since space-time sheets are restricted inside causal diamonds (CDs). Wave functions need to be well-defined only inside the CD involved and would vanish at φ0. Therefore the mathematical problems related to the representation of accelerating wave packets in non-compact degrees of freedom could serve as a motivation for both CDs and ZEO.

There is however still a problem. The wave packet cannot be in accelerating motion even for a single full turn. More turns are wanted. Should one give up the assumption that the wave function is continuous at φ = φ0 + 2π, and should one allow wave functions to be multivalued and satisfy the continuity condition Ψ(φ0) = Ψ(φ0 + n2π), where n is some sufficiently large integer? This would mean the replacement of the configuration space (now a circle) with its n-fold covering. The introduction of the n-fold covering leads naturally to the hierarchy of Planck constants.

1. A natural question is whether the constant torque τ could affect the system so that φ = 0 and φ = 2π do not represent physically equivalent configurations anymore. Could it however happen that φ = 0 and φ = n2π for some value of n are still equivalent?
One would have the analogy of a many-sheeted Riemann surface.

2. In the TGD framework 3-surfaces can indeed be analogous to n-sheeted Riemann surfaces. In other words, a rotation of 2π does not produce the original surface, but one needs an n2π rotation to achieve this. In fact, heff/h = n corresponds to this situation geometrically! Space-time itself becomes an n-sheeted covering of itself: this property must be distinguished from many-sheetedness. Could constant torque provide a manner to force a situation making space-time n-sheeted and thus to create phases with a large value of heff?

3. The Schrödinger amplitude representing the accelerated wave packet as a wavefunction in the n-fold covering would be n-valued in the ordinary Minkowski coordinates and would satisfy the boundary condition Ψ(φ) = Ψ(φ + n2π). Since V(φ) is not rotationally invariant this condition is too strong for stationary solutions.

4. This condition would mean Fourier analysis using the exponentials exp(imφ/n) with time-dependent coefficients cm(t) whose time evolution is dictated by the Schrödinger equation. For the ordinary Planck constant this would mean fractional values of angular momentum, Lz = (m/n)ħ. If one has heff = nħ, the spectrum of Lz is not affected.

It would seem that constant torque forces the generation of a phase with a large value of heff! From an estimate of how many turns the system rotates one can estimate the value of heff.

What about stationary solutions?

Giving up stationarity seems the only option on the basis of classical intuition. One can however ask whether also stationary solutions could make sense mathematically and could make possible completely new quantum phenomena.

1. In the stationary situation the boundary condition must be weakened to Ψ(φ0) = Ψ(φ0 + n2π). Here the choice of φ0 characterizes the solution. This condition quantizes the energy. Normally only the value n = 1 is possible.

2. The many-valuedness/discontinuity of V(φ) does not produce problems if the condition Ψ(φ0, t) = Ψ(φ0 + n2π, t) = 0, 0 < t < T, is satisfied. The Schrödinger equation would be continuous at φ = φ0 + n2π. The values of φ0 would correspond to a continuous state basis.

3. One would have two boundary conditions expected to fix the solution completely for given values of n and φ0. The solutions corresponding to different values of φ0 are not related by a rotation since V(φ) is not invariant under rotations. One obtains an infinite number of continuous solution families labelled by n, and they correspond to different phases if heff is different for them.

The connection with the WKB approximation and Airy functions

The stationary Schrödinger equation with a constant force appears in the WKB approximation and follows from a linearization of the potential function at a non-stationary point. A good example is the Schrödinger equation for a particle in the gravitational field of the Earth. The solutions of this equation are Airy functions, which appear also in the electrodynamical model for the rainbow.

1. The standard form of the Schrödinger equation in the stationary case is obtained using the following change of variables: u + e = kφ, k³ = 2τI/ħ², e = 2IE/(ħ²k²). One obtains the Airy equation d²Ψ/du² − uΨ = 0. The eigenvalue of the energy does not appear explicitly in the equation. The boundary conditions transform to Ψ(u0 + n2πk) = Ψ(u0) = 0.

2. In the non-stationary case the change of variables is u = kφ, k³ = 2τI/ħ², v = (ħ²k²/2I)t. One obtains d²Ψ/du² − uΨ = i ∂Ψ/∂v. The boundary conditions are Ψ(u + kn2π, v) = Ψ(u, v), 0 ≤ v ≤ (ħ²k²/2I)T.

An interesting question is what heff = n×h means.
Should one replace h with heff = nh, as the condition that the spectrum of angular momentum remains unchanged requires? One would have k ∝ n^(−2/3) and e ∝ n^(4/3). One would obtain boundary conditions non-linear with respect to n.

Connection with living matter

A constant torque - or more generally a non-oscillatory generalized force in some compact degrees of freedom - requires a continual energy feed to the system. Continual energy feed serves as a basic condition for self-organization and for the evolution of states studied in non-equilibrium thermodynamics. Biology represents a fundamental example of this kind of situation. The energy fed to the system represents metabolic energy, and the ADP-ATP process loads this energy into ATP molecules. Also now a constant torque is involved: the ATP synthase molecule contains the analog of a generator having a rotating shaft. Since metabolism and the generation of large heff phases are very closely related in the TGD Universe, the natural proposal is that the rotating shaft forces the generation of large heff phases.

For details and background see the chapter "Macroscopic quantum coherence and quantum metabolism as different sides of the same coin: part II" of "Biosystems as Conscious Holograms". (A small numerical sketch of the rotor-with-constant-torque setup is appended after the comments below.)

Addition: The old homepage address has ceased to work again. As I have told, I learned too late that the web hotel owner is a criminal. It is quite possible that he receives "encouragement" from some Finnish academic people who have done during these 35 years all they can to silence me. It turned out impossible to get any contact with this fellow to get the right to forward the visitors from the old address to the new one (which by the way differs from the old one only by replacement of ".com" with ".fi"). The situation should change in January. I am sorry for the inconvenience. Thinking in a novel way in Finland is really a dangerous activity!

At 8:52 PM, Blogger Hamed said...
Dear Matti, I am not sure that I understand your purpose. Does that mean that if we provide a system in which a rigid body is rotating around some axis and we wait, the space-time sheet of the rigid body (in reality the quantum state of it) splits into n space-time sheets, each one with a different Planck constant? Or does that mean a phase transition of the Planck constant? Hence when an object is rotating, there is an evolution with respect to Planck constant? (it seems magic!) Or maybe this is not really a phase transition but rather just the hierarchies of allowed phase transitions. Also, as I understand, in TGD this evolution appears in the rotating magnetic systems that lead to long-ranged weak magnetic fields with a large Planck constant.
It can propagate at most one turn. This is however not consistent with the physical picture. We want many turns! The solution is that the configuration space - now a circle - is replaced with its n-fold covering so that the system can make n turns in accelerating motion. In TGD, ZEO and CDs force just this replacement. n, the maximal number of turns, corresponds to the value of the dark Planck constant, which indeed corresponds to n-fold coverings of the space-time surface for which M^4 is covered n times. The magic is that all the new elements of TGD follow automatically from the construction of a quantum description of open systems, which represents a missing chapter in the textbooks of quantum theory! We are indeed considering systems which are not closed: there is a feed of energy and angular momentum, and living matter is a fundamental example of this kind of system, as are all self-organising systems discussed in non-equilibrium thermodynamics. And amazingly: ATP synthase, the basic molecule of metabolism, contains a generator with a rotating shaft!

At 10:04 PM, Anonymous Matti Pitkanen said...
Dear Hamed, still a little comment. In ZEO ontology one sees the time evolution of a 3-D pattern at the space-time level as a 4-D pattern, and quantum jumps recreate the quantum superpositions of these patterns. Therefore one need not say that time evolution for a rotating system *in accelerating motion* would increase h_eff steadily. Rather, the period of accelerated rotation lasts for some finite time and corresponds to a space-time sheet with a minimal value of h_eff dictated by the number of turns. The change of heff would mean the addition of new sheets to the existing n-fold covering, and also this is of course possible. It would correspond to a transition increasing Planck constant. A possible selection rule is that every sheet suffers the same phase transition becoming n_1-sheeted. The final number of sheets would be n_f = n*n_1: n would divide n_f. Prime values of n would represent "irreducible" Planck constants just as Hilbert spaces with prime dimension are primes for Hilbert spaces under the tensor product operation.

At 2:00 PM, Blogger Ulla said...
What happens with Planck's constant in quantum tunnelling (is the tunnel enlarged or compressed?), seen in the light that it vanishes in classical physics. How is that happening, btw.? One answer I got: The conventional mechanism is just uncertainty... basically the uncertainty relation holds for energy-time in the same way it does for position-momentum, so the particle can basically 'borrow' energy because its energy is uncertain. Since we require E·t <= hbar, the Planck limit defines a constraint on how much energy the particle can borrow for how long. It can get away with this trick because the particle can't be observed while it's actually tunneling... If we're willing to go beyond conventional ideas, though, there's actually a much simpler mechanism... First, consider that special relativity doesn't actually specifically forbid things from traveling faster than light... what it actually says is that IF anything travels faster than light, the result will be a violation of both energy conservation and classical causality... The reasoning is quite straightforward... if a particle, such as a tachyon, travels faster than light, in terms of the way simultaneity (the 'now') is defined in relativity, it will arrive at its destination before it leaves its source - i.e. it travels 'backwards in time' and since the 'effect' thus precedes the 'cause', causality is violated...
if the particle carries energy, as it would have to, while its traveling there'll be more energy in the system than there should be, so energy conservation is violated... Now consider, that a tunneling particle, say an electron, also violates energy conservation, it 'borrows' energy that isn't there... the idea that it also violates causality is somewhat more abstract... but relates to the simple question, how does it 'know' it can tunnel through the barrier? Putting it all together is actually quite direct... we just need an elementary 'time paradox'... Having tunneled through the barrier, the tunneling particle emits a tachyon, which arrives to be 'absorbed' by the particle (as in, its prior self) just as it's about to tunnel, giving it both the energy and the 'foreknowledge' it actually needs to tunnel through the barrier... At 7:31 PM, Anonymous Matti Pitkanen said... To Ulla: The hierarchy of Planck constants leaves all existing quantum theory remains intact as far predictions are considered. This applies also to tunnelling. The new quantum physics is forced by the situation like the one considered here and is especially relevant to open systems, in particular living matter. All new physics elements in TGD solve some problem of existing physics: this applies also to the hierarchy of Planck constants. The simplest picture about tunnelling is based on Schrodinger equation in standard QM. Everything is well-defined and simple mathematically. The wave nature of the particle implies realises uncertainty principle and justifies the argument that you represent. Lubos would say that interpretational problems relate to taking classical picture too far and add something nasty about anti quantum zealots;-). Lubos cannot however silence me or whoever it is I am listening;-). What quantum classical correspondence in TGD sense could give to the understanding of tunnelling? There should exist the analog of classical orbit through the wall. A purely classical process but system defined more generally in terms of space-time sheets. What could this mean? Could ZEO provide insights? Could particle emit negative energy photon (say) describable as space-time sheet received by another system and get the needed energy to overcome the barrier as a recoil (note that here I would assume energy conservation unlike the uncertainty principle inspired argument based on wave nature). The exchanged virtual photon is tachyon but in TGD framework can be said to consist of fundamental fermions which are on mass shell and massless but possibly with negative energy. This would resemble the Feynman diagrammatic description but in TGD framework the altered arrow of geometric time could bring in something new, maybe only at microscopic level. About "how does it "know" how to tunnel through the barrier?". "Knowing" happens in subjective time and requires quantum jumps. In Schrodinger equation itself there is nothing about consciousness, it is law obeyed with respect to geometric time and wave nature alone explains tunnelling. Only the measurement telling at what side of the barrier the particle is, gives rise to conscious experience, perhaps that of "knowing". At 8:00 PM, Blogger Hamed said... Dear Matti, Thanks, it is more clear now. I write my understanding: my body that is not dark hasn’t any fold covering but there is hierarchy of (dark) magnetic bodies associated with my body. They correspond to hierarchy of Planck constants. Suppose a magnetic body with hbar =n*hbar_0. 
This magnetic body is really an n-fold covering of M^4×CP_2. That is, n = n_a*n_b space-time sheets common in M^2 and S^2. The spinors of the 3-surfaces of WCW at a given CD are induced from the CP_2 spinor connection. Corresponding to this CD at the moduli measurement resolution, there are two kinds of coset algebra, G/H and N/N: the first one acts on the arguments of the spinor fields, which are 3-surfaces, and the other one acts on the spinor fields themselves. At the measurement resolution, one must approximate 3-surfaces as points, and G/H means the algebra corresponding to diffeomorphisms of one-dimensional paths. Now spinor fields are averaged on the points of 3-surfaces and give us one averaged spinor. Is there anything incorrect? How do anti-commutation relations for fermionic oscillator operators correspond to anti-commutation relations for the gamma matrices of the configuration space? Really, how does second quantization appear in quantum TGD in the transition from classical TGD?

At 3:15 AM, Anonymous Matti Pitkanen said...
It seems that your view about WCW spinor fields is essentially correct. WCW gamma matrices satisfy anticommutation relations just like ordinary gamma matrices. The Kähler structure of WCW makes the gamma matrices analogous to fermionic oscillator operators. These properties are obtained if the gammas are linear combinations of fermionic oscillator operators for the second quantized spinor fields at space-time surfaces. This gives a deep geometric meaning for the fermionic anti-commutation relations. My belief is that fermionic oscillator operators relate to Boolean cognition: the oscillator operator basis corresponds to an infinite-D Boolean algebra. One has induced spinor fields and a modified Dirac operator defining their dynamics. One performs second quantisation for these spinor fields. What this exactly means is far from trivial! I have pondered this a lot and proposed formulas but I am not sure whether I have the final answer. Finite measurement resolution suggests strongly that the number of fermionic oscillator operators is actually finite (finite Boolean resolution), corresponding to a finite number of braid strands effectively replacing the orbit of the partonic 2-surface and carrying fermion number. On the other hand, the stringy picture suggests that stringy degrees of freedom give an infinite number of indices just as in string models. Very probably a cut also in stringy modes emerges from finite measurement resolution.
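As promised in the post above, here is a minimal numerical sketch of the rotor-with-constant-torque setup. It is my own toy illustration, not a TGD calculation: units with ħ = 1, an arbitrary moment of inertia and torque, the Hamiltonian H = −(1/2I) d²/dφ² + τφ discretized on a grid covering n full turns (the "n-fold covering"), and a localized wave packet propagated with the exact matrix exponential. The packet's mean angle sweeps through well over one full turn, which is the behavior that a single 2π circle with a discontinuous potential cannot accommodate.

```python
import numpy as np
from scipy.linalg import expm

# Toy quantum rotor with constant torque (hbar = 1, arbitrary I_rot and tau):
#   H = -(1/(2*I_rot)) d^2/dphi^2 + tau * phi
# on a grid spanning n full turns, phi in [0, 2*pi*n), periodic only after n turns.
n, I_rot, tau = 3, 10.0, 1.5
N = 600
phi = np.linspace(0.0, 2 * np.pi * n, N, endpoint=False)
dphi = phi[1] - phi[0]

# Periodic finite-difference Laplacian on the covering (coarse, qualitative only).
lap = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
       + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / dphi**2
H = -lap / (2 * I_rot) + np.diag(tau * phi)

# Initial state: a narrow wave packet started near 2.7 turns, away from the seam at phi = 0.
psi = np.exp(-((phi - 2 * np.pi * 2.7)**2) / (2 * 0.3**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dphi)

dt = 0.2
U = expm(-1j * H * dt)                 # one-step unitary propagator
for block in range(5):
    mean_phi = np.sum(phi * np.abs(psi)**2) * dphi
    print(f"t = {block * 15 * dt:5.1f}: <phi>/2pi = {mean_phi / (2 * np.pi):.2f} turns")
    for _ in range(15):
        psi = U @ psi
```

The printed mean angle slides steadily down the linear potential by nearly two turns, the quantum analogue of the classically accelerating rotation that, in the post's argument, motivates replacing the circle by its n-fold covering.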
Editor's Note: Reprinted from How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival by David Kaiser. Copyright (c) 2011 by David Kaiser. Used with permission of the publisher, W.W. Norton & Company, Inc. Click here to see a Scientific American video that explains quantum entanglement.  [from Chapter 2, pp. 25-38:] The iconoclastic Irish physicist John S. Bell had long nursed a private disquietude with quantum mechanics. His physics teachers—first at Queen's University in his native Belfast during the late 1940s, and later at Birmingham University, where he pursued doctoral work in the mid-1950s—had shunned matters of interpretation. The "ask no questions" attitude frustrated Bell, who remained unconvinced that Niels Bohr had really vanquished the last of Einstein's critiques long ago and that there was nothing left to worry about. At one point in his undergraduate studies, his red shock of hair blazing, he even engaged in a shouting match with a beleaguered professor, calling him "dishonest" for trying to paper over genuine mysteries in the foundations, such as how to interpret the uncertainty principle. Certainly, Bell would grant, quantum mechanics worked impeccably "for all practical purposes," a phrase he found himself using so often that he coined the acronym, "FAPP." But wasn't there more to physics than FAPP? At the end of the day, after all the wavefunctions had been calculated and probabilities plotted, shouldn't quantum mechanics have something coherent to say about nature? In the years following his impetuous shouting matches, Bell tried to keep these doubts to himself. At the tender age of twenty-one he realized that if he continued to indulge these philosophical speculations, they might well scuttle his physics career before it could even begin. He dove into mainstream topics, working on nuclear and particle physics at Harwell, Britain's civilian atomic energy research center. Still, his mind continued to wander. He wondered whether there were some way to push beyond the probabilities offered by quantum theory, to account for motion in the atomic realm more like the way Newton's physics treated the motion of everyday objects. In Newton's physics, the behavior of an apple or a planet was completely determined by its initial state—variables like position (where it was) and momentum (where it was going)—and the forces acting upon it; no probabilities in sight. Bell wondered whether there might exist some set of variables that could be added to the quantum-mechanical description to make it more like Newton's system, even if some of those new variables remained hidden from view in any given experiment. Bell avidly read a popular account of quantum theory by one of its chief architects, Max Born's Natural Philosophy of Cause and Chance (1949), in which he learned that some of Born's contemporaries had likewise tried to invent such "hidden variables" schemes back in the late 1920s. But Bell also read in Born's book that another great of the interwar generation, the Hungarian mathematician and physicist John von Neumann, had published a proof as early as 1932 demonstrating that hidden variables could not be made compatible with quantum mechanics. Bell, who could not read German, did not dig up von Neumann's recondite proof. The say-so of a leader (and soon-to-be Nobel laureate) like Born seemed like reason enough to drop the idea. 
Imagine Bell's surprise, therefore, when a year or two later he read a pair of articles in the Physical Review by the American physicist David Bohm. Bohm had submitted the papers from his teaching post at Princeton University in July 1951; by the time they appeared in print six months later, he had landed in São Paolo, Brazil, following his hounding by the House Un-American Activities Committee. Bohm had been a graduate student under J. Robert Oppenheimer at Berkeley in the late 1930s and early 1940s. Along with several like-minded friends, he had participated in free-wheeling discussion groups about politics, worldly affairs, and local issues like whether workers at the university's laboratory should be unionized. He even joined the local branch of the Communist Party out of curiosity, but he found the discussions so boring and ineffectual that he quit a short time later. Such discussions might have seemed innocuous during ordinary times, but investigators from the Military Intelligence Division thought otherwise once the United States entered World War II, and Bohm and his discussion buddies started working on the earliest phases of the Manhattan Project to build an atomic bomb. Military intelligence officers kept the discussion groups under top-secret surveillance, and in the investigators' eyes the line between curious discussion group and Communist cell tended to blur. When later called to testify before HUAC, Bohm pleaded the Fifth Amendment rather than name names. Over the physics department's objections, Princeton's administration let his tenure-track contract lapse rather than reappoint him. At the center of a whirling media spectacle, Bohm found all other domestic options closed off. Reluctantly, he decamped for Brazil. In the midst of the Sturm und Drang, Bohm crafted his own hidden variables interpretation of quantum mechanics. As Bell later reminisced, he had "seen the impossible done" in these papers by Bohm. Starting from the usual Schrödinger equation, but rewriting it in a novel way, Bohm demonstrated that the formalism need not be interpreted only in terms of probabilities. An electron, for example, might behave much like a bullet or billiard ball, following a path through space and time with well-defined values of position and momentum every step of the way. Given the electron's initial position and momentum and the forces acting on it, its future behavior would be fully determined, just like the case of the trusty billiard ball—although Bohm did have to introduce a new "quantum potential" or force field that had no analogue in classical physics. In Bohm's model, the quantum weirdness that had so captivated Bohr, Heisenberg, and the rest—and that had so upset young Bell, when parroted by his teachers—arose because certain variables, such as the electron's initial position, could never be specified precisely: efforts to measure the initial position would inevitably disturb the system. Thus physicists could not glean sufficient knowledge of all the relevant variables required to calculate a quantum object's path. The troubling probabilities of quantum mechanics, Bohm posited, sprang from averaging over the real-but-hidden variables. Where Bohr and his acolytes had claimed that electrons simply did not possess complete sets of definite properties, Bohm argued that they did—but, as a practical matter, some remained hidden from view. Bohm's papers fired Bell's imagination. Soon after discovering them, Bell gave a talk on Bohm's papers to the Theory Division at Harwell. 
Most of his listeners sat in stunned (or perhaps just bored) silence: why was this young physicist wasting their time on such philosophical drivel? Didn't he have any real work to do? One member of the audience, however, grew animated: Austrian émigré Franz Mandl. Mandl, who knew both German and von Neumann's classic study, interrupted several times; the two continued their intense arguments well after the seminar had ended. Together they began to reexamine von Neumann's no-hidden-variables proof, on and off when time allowed, until they each went their separate ways. Mandl left Harwell in 1958; Bell, dissatisfied with the direction in which the laboratory seemed to be heading, left two years later. Bell and his wife Mary, also a physicist, moved to CERN, Europe's multinational high-energy physics laboratory that had recently been established in Geneva. Once again he pursued cutting-edge research in particle physics. And once again, despite his best efforts, he found himself pulled to his hobby: thinking hard about the foundations of quantum mechanics. Once settled in Geneva, he acquired a new sparring partner in Josef Jauch. Like Mandl, Jauch had grown up in the Continental tradition and was well versed in the finer points of Einstein's, Bohr's, and von Neumann's work. In fact, when Bell arrived in town Jauch was busy trying to strengthen von Neumann's proof that hidden-variables theories were irreconcilable with the successful predictions of quantum mechanics. To Bell, Jauch's intervention was like waving a red flag in front of a bull: it only intensified his resolve to demonstrate that hidden variables had not yet been ruled out. Spurred by these discussions, Bell wrote a review article on the topic of hidden variables, in which he isolated a logical flaw in von Neumann's famous proof. At the close of the paper, he noted that "the first ideas of this paper were conceived in 1952"—fourteen years before the paper was published—and thanked Mandl and Jauch for all of the "intensive discussion" they had shared over that long period. Still Bell kept pushing, wondering whether a certain type of hidden variables theory, distinct from Bohm's version, might be compatible with ordinary quantum mechanics. His thoughts returned to the famous thought experiment introduced by Einstein and his junior colleagues Boris Podolsky and Nathan Rosen in 1935, known from the start by the authors' initials, "EPR." Einstein and company had argued that quantum mechanics must be incomplete: at least in some situations, definite values for pairs of variables could be determined at the same time, even though quantum mechanics had no way to account for or represent such values. The EPR authors described a source, such as a radioactive nucleus, that shot out pairs of particles with the same speed but in opposite directions. Call the left-moving particle, "A," and the right-moving particle, "B." A physicist could measure A's position at a given moment, and thereby deduce the value of B's position. Meanwhile, the physicist could measure B's momentum at that same moment, thus capturing knowledge of B's momentum and simultaneous position to any desired accuracy. Yet Heisenberg's uncertainty principle dictated that precise values for certain pairs of variables, such as position and momentum, could never be known simultaneously. Fundamental to Einstein and company's reasoning was that quantum objects carried with them—on their backs, as it were—complete sets of definite properties at all times. 
Think again of that trusty billiard ball: it has a definite value of position and a definite value of momentum at any given moment, even if we choose to measure only one of those properties at a time. Einstein assumed the same must be true of electrons, photons, and the rest of the furniture of the microworld. Bohr, in a hurried response to the EPR paper, argued that it was wrong to assume that particle B had a real value for position all along, prior to any effort to measure it. Quantum objects, in his view, simply did not possess sharp values for all properties at all times. Such values emerged during the act of measurement, and even Einstein had agreed that no device could directly measure a particle's position and momentum at the same time. Most physicists seemed content with Bohr's riposte—or, more likely, they were simply relieved that someone else had responded to Einstein's deep challenge. Bohr's response never satisfied Einstein, however; nor did it satisfy John Bell. Bell realized that the intuition behind Einstein's famous thought experiment—the reason Einstein considered it so damning for quantum mechanics—concerned "locality." To Einstein, it was axiomatic that something that happens in one region of space and time should not be able to affect something happening in a distant region—more distant, say, than light could have traveled in the intervening time. As the EPR authors put it, "since at the time of measurement the two systems [particles A and B] no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system." Yet Bohr's response suggested something else entirely: the decision to conduct a measurement on particle A (either position or momentum) would instantaneously change the properties ascribed to the far-away particle B. Measure particle A's position, for example, and—bam!—particle B would be in a state of well-defined position. Or measure particle A's momentum, and—zap!—particle B would be in a state of well-defined momentum. Late in life, Bohr's line still rankled Einstein. "My instinct for physics bristles at this," Einstein wrote to a friend in March 1948. "Spooky actions at a distance," he huffed. Fresh from his wrangles with Jauch, Bell returned to EPR's thought experiment. He wondered whether such "spooky actions at a distance" were endemic to quantum mechanics, or just one possible interpretation among many. Might some kind of hidden variable approach reproduce all the quantitative predictions of quantum theory, while still satisfying Einstein's (and Bell's) intuition about locality? He focused on a variation of EPR's set-up, introduced by David Bohm in his 1951 textbook on quantum mechanics. Bohm had suggested swapping the values of the particles' spins along the x- and y-axes for position and momentum. "Spin" is a curious property that many quantum particles possess; its discovery in the mid-1920s added a cornerstone to the emerging edifice of quantum mechanics. Quantum spin is a discrete amount of angular momentum—that is, the tendency to rotate  around a given direction in space. Of course many large-scale objects possess angular momentum, too: think of the planet Earth spinning around its axis to change night into day. Spin in the microworld, however, has a few quirks. 
For one thing, whereas large objects like the Earth can spin, in principle, at any rate whatsoever, quantum particles possess fixed amounts of it: either no spin at all, or one-half unit, or one whole unit, or three-halves units, and so on. The units are determined by a universal constant of nature known as Planck's constant, ubiquitous throughout the quantum realm. The particles that make up ordinary matter, such as electrons, protons, and neutrons, each possess one-half unit of spin; photons, or quanta of light, possess one whole unit of spin. In a further break from ordinary angular momentum, quantum spin can only be oriented in certain ways. A spin one-half particle, for example, can exist in only one of two states: either spin "up" or spin "down" with respect to a given direction in space. The two states become manifest when a stream of particles passes through a magnetic field: spin-up particles will be deflected upward, away from their previous direction of flight, while spin-down particles will be deflected downward. Choose some direction along which to align the magnets—say, the z-axis—and the spin of any electron will only ever be found to be up or down; no electron will ever be measured as three-quarters "up" along that direction. Now rotate the magnets, so that the magnetic field is pointing along some different direction. Send a new batch of electrons through; once again you will only find spin up or spin down along that new direction. For spin one-half particles like electrons, the spin along a given direction is always either +1 (up) or -1 (down), nothing in between. (Fig. 2.1.) No matter which way the magnets are aligned, moreover, one-half of the incoming electrons will be deflected upward and one-half downward. In fact, you could replace the collecting screen (such as a photographic plate) downstream of the magnets with two Geiger counters, positioned where the spin-up and spin-down particles get deflected. Then tune down the intensity of the source so that only one particle gets shot out at a time. For any given run, only one Geiger counter will click: either the upper one (indicating passage of a spin-up particle) or the lower one (indicating spin-down). Each particle has a 50-50 chance of being measured as spin-up or spin-down; the sequence of clicks would be a random series of +1's (upper counter) and -1's (lower counter), averaging out over many runs to an equal number of clicks from each detector. Neither quantum theory nor any other scheme has yet produced a successful means of predicting in advance whether a given particle will be measured as spin-up or spin-down; only the probabilities for a large number of runs can be computed. Bell realized that Bohm's variation of the EPR thought experiment, involving particles' spins, offered two main advantages over EPR's original version. First, the measurements always boiled down to either a +1 or a -1; no fuzzy continuum of values to worry about, as there would be when measuring position or momentum. Second, physicists had accumulated decades of experience building real machines that could manipulate and measure particles' spin; as far as thought experiments went, this one could be grounded on some well-earned confidence. And so Bell began to analyze the spin-based EPR arrangement. Because the particles emerged in a special way—spat out from a source that had zero spin before and after they were disgorged—the total spin of the two particles together likewise had to be zero.
When measured along the same direction, therefore, their spins should always show perfect correlation: if A's spin were up then B's must be down, and vice versa. Back in the early days of quantum mechanics, Erwin Schrödinger had termed such perfect correlations "entanglement." Bell demonstrated that a hidden-variables model that satisfied locality—in which the properties of A remained unaffected by what measurements were conducted on B—could easily reproduce the perfect correlation when A's and B's spins were measured along the same direction. At root, this meant imagining that each particle carried with it a definite value of spin along any given direction, even if most of those values remained hidden from view. The spin values were considered to be properties of the particles themselves; they existed independent of and prior to any effort to measure them, just as Einstein would have wished. Next Bell considered other possible arrangements. One could choose to measure a particle's spin along any direction: the z-axis, the y-axis, or any angle in between. All one had to do was rotate the magnets between which the particle passed.  What if one measured A's spin along the z-axis and B's spin along some other direction? (Fig. 2.2.) Bell homed in on the expected correlations of spin measurements when shooting pairs of particles through the device, while the detectors on either side were oriented at various angles. He considered detectors that had two settings, or directions along which spin could be measured. Using only a few lines of algebra, Bell proved that no local hidden variables theory could ever reproduce the same degree of correlations as one varied the angles between detectors. The result has come to be known as "Bell's theorem." Simply assuming that each particle carried a full set of definite values on its own, prior to measurement—even if most of those values remained hidden from view—necessarily clashed with quantum theory. Nonlocality was indeed endemic to quantum mechanics, Bell had shown: somehow, the outcome of the measurement on particle B depended on the measured outcome on particle A, even if the two particles were separated by huge distances at the time those measurements were made. Any effort to treat the particles (or measurements made upon them) as independent, subject only to local influences, necessarily led to different predictions than those of quantum mechanics. Here was what Bell had been groping for, on and off since his student days: some quantitative means of distinguishing Bohr's interpretation of quantum mechanics from other coherent, self-consistent possibilities. The problem—entanglement versus locality—was amenable to experimental test. In his bones he hoped locality would win. In the years since Bell formulated his theorem, many physicists (Bell included) have tried to articulate what the violation of his inequality would mean, at a deep level, about the structure of the microworld. Most prosaically, entanglement suggests that on the smallest scales of matter, the whole is more than the sum of its parts. Put another way: one could know everything there is to know about a quantum system (particles A + B), and yet know nothing definite about either piece separately. As one expert in the field has written, entangled quantum systems are not even "divisible by thought": our natural inclination to analyze systems into subsystems, and to build up knowledge of the whole from careful study of its parts, grinds to a halt in the quantum domain. 
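For readers who want to see the numbers behind that claim, here is a minimal numerical sketch; it is not drawn from the book, and the model and variable names are purely illustrative. Quantum mechanics predicts that for singlet pairs the correlation between the two ±1 outcomes varies with the angle between the detectors as -cos(theta). A simple local hidden-variables model of the kind Bell considered, in which each pair carries a hidden direction and each detector merely reports the sign of the spin projection onto its own axis, reproduces the perfect anticorrelation at equal settings but falls short at intermediate angles.

```python
# Illustrative sketch (not from the book): compare the singlet-state correlation
# predicted by quantum mechanics with that of one simple local hidden-variables model.
import numpy as np

rng = np.random.default_rng(0)

def local_model_E(theta, n_pairs=200_000):
    """Correlation <A*B> in a toy local model: each pair carries a hidden
    angle lam, and each detector just reports the sign of the projection
    of that hidden direction onto its own measurement axis."""
    lam = rng.uniform(0.0, 2.0 * np.pi, n_pairs)   # hidden variable, one per pair
    A = np.sign(np.cos(lam))                       # detector A set along angle 0
    B = -np.sign(np.cos(lam - theta))              # detector B set at angle theta
    return float(np.mean(A * B))

def quantum_E(theta):
    """Quantum-mechanical prediction for singlet pairs."""
    return -np.cos(theta)

for deg in (0, 30, 45, 60, 90):
    th = np.radians(deg)
    print(f"{deg:3d} deg   toy local model: {local_model_E(th):+.3f}   quantum: {quantum_E(th):+.3f}")
```

Running the sketch shows the two predictions agreeing at 0 and 90 degrees but parting company in between (roughly -0.50 versus -0.71 at 45 degrees); Bell's achievement was to turn this kind of gap into a strict inequality that no local hidden-variables model, however ingenious, can evade.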
Physicists have gone to heroic lengths to translate quantum nonlocality into everyday terms. The literature is now full of stories about boxes that flash with red and green lights; disheveled physicists who stroll down the street with mismatched socks; clever Sherlock Holmes-inspired scenarios involving quantum robbers; even an elaborate tale of a baker, two long conveyor belts, and pairs of soufflés that may or may not rise. My favorite comes from a "quantum-mechanical engineer" at MIT, Seth Lloyd. Imagine twins, Lloyd instructs us, separated a great distance apart. One steps into a bar in Cambridge, Massachusetts just as her brother steps into a bar in Cambridge, England. Imagine further (and this may be the most difficult part) that neither twin has a cell phone or any other device with which to communicate back and forth. No matter what each bartender asks them, they will give opposite answers. "Beer or whiskey?"  The Massachusetts twin might respond either way, with equal likelihood; but no matter which choice she makes, her twin brother an ocean away will respond with the opposite choice. (It's not that either twin has a decided preference; after many trips to their respective bars, they each wind up ordering beer and whiskey equally often.) The bartenders could equally well have asked, "Bottled beer or draft?" or "Red wine or white?" Ask any question—even a question that no one had decided to ask until long after the twins had traveled far, far away from each other—and you will always receive polar opposite responses. Somehow one twin always "knows" how to answer, even though no information could have traveled between them, in just such a way as to ensure the long-distance correlation. [from Chapter 3, pp. 43-48:] John Clauser sat through his courses on quantum mechanics as a graduate student at Columbia University in the mid-1960s, wondering when they would tackle the big questions. Like John Bell, Clauser quickly learned to keep his mouth shut and pursue his interests on the side. He buried himself in the library, poring over the EPR paper and Bohm's articles on hidden variables. Then in 1967 he stumbled upon Bell's paper in Physics Physique Fizika. The journal's strange title had caught his eye, and while lazily leafing through the first bound volume he happened to notice Bell's article. Clauser, a budding experimentalist, realized that Bell's theorem could be amenable to real-world tests in a laboratory. Excited, he told his thesis advisor about his find, only to be rebuffed for wasting their time on such philosophical questions. Soon Clauser would be kicked out of some of the finest offices in physics, from Robert Serber's at Columbia to Richard Feynman's at Caltech. Bowing to these pressures, Clauser pursued a dissertation on a more acceptable topic—radio astronomy and astrophysics—but in the back of his mind he continued to puzzle through how Bell's inequality might be put to the test. Before launching into an experiment himself, Clauser wrote to John Bell and David Bohm to double-check that he had not overlooked any prior experiments on Bell's theorem and quantum nonlocality. Both respondents wrote back immediately, thrilled at the notion that an honest-to-goodness experimentalist harbored any interest in the topic at all. As Bell later recalled, Clauser's letter from February 1969 was the first direct response Bell had received from any physicist regarding Bell's theorem—more than four years after Bell's article had been published. 
Bell encouraged the young experimenter: if by chance Clauser did manage to measure a deviation from the predictions of quantum theory, that would "shake the world!" Encouraged by Bell's and Bohm's responses, Clauser realized that the first step would be to translate Bell's pristine algebra into expressions that might make contact with a real experiment. Bell had assumed for simplicity that detectors would have infinitesimally narrow windows or apertures through which particles could pass. But as Clauser knew well from his radio-astronomy work, apertures in the real world are always wider than a mathematical pinprick. Particles from a range of directions would be able to enter the detectors at either of their settings, a or a'. Same for detector efficiencies. Bell had assumed that the spins of every pair of particles would be measured, every time a new pair was shot out from the source. But no laboratory detectors were ever 100% efficient; sometimes one or both particles of a pair would simply escape detection altogether. All these complications and more had to be tackled on paper, long before one bothered building a machine to test Bell's work. Clauser dug in and submitted a brief abstract on this work to the Bulletin of the American Physical Society, in anticipation of the Society's upcoming conference. The abstract appeared in print right before the spring 1969 meeting. And then his telephone rang. Two hundred miles away, Abner Shimony had been chasing down the same series of thoughts. Shimony's unusual training—he held Ph.D.s in both philosophy and in physics, and taught in both departments at Boston University—primed him for a subject like Bell's theorem in a way that almost none of his American physics colleagues shared. He had already published several articles on other philosophical aspects of quantum theory, beginning in the early 1960s. Shimony had been tipped off about Bell's theorem back in 1964, when a colleague at nearby Brandeis University, where Bell had written up his paper, sent Shimony a preprint of Bell's work. Shimony was hardly won over right away. His first reaction: "Here's another kooky paper that's come out of the blue," as he put it recently. "I'd never heard of Bell. And it was badly typed, and it was on the old multigraph paper, with the blue ink that smeared. There were some arithmetical errors. I said, 'What's going on here?'" Alternately bemused, puzzled, and intrigued, he read it over again and again. "The more I read it, the more brilliant it seemed. And I realized, 'This is no kooky paper. This is something very great.'" He began scouring the literature to see if some previous experiments, conducted for different purposes, might already have inadvertently put Bell's theorem to the test. After intensive digging—he came to call this work "quantum archaeology"—he realized that, despite a few near misses, no existing data would do the trick. No experimentalist himself, he "put the whole thing on ice" until he could find a suitable partner. A few years went by before a graduate student came knocking on Shimony's door. The student had just completed his qualifying exams and was scouting for a dissertation topic. Together they decided to mount a brand-new experiment to test Bell's theorem. Several months into their preparations, still far from a working experiment, Shimony spied Clauser's abstract in the Bulletin, and reached for the phone.
They decided to meet at the upcoming American Physical Society meeting in Washington, D.C., where Clauser was scheduled to talk about his proposed experiment. There they hashed out a plan to join forces. A joint paper, Shimony felt, would no doubt be stronger than either of their separate efforts alone would be—the whole would be greater than the sum of its parts—and, on top of that, "it was the civilized way to handle the priority question." And so began a fruitful collaboration and a set of enduring friendships. Clauser completed his dissertation not long after their meeting. He had some down time between handing in his thesis and the formal thesis defense, so he went up to Boston to work with Shimony and the (now two) graduate students whom Shimony had corralled onto the project. Together they derived a variation on Bell's theme: a new expression, more amenable to direct comparisons with laboratory data than Bell's had been. (Their equations concerned S, the particular combination of spin measurements examined in the previous chapter; the standard modern form of S is sketched just below.) Even as his research began to hum, Clauser's employment prospects grew dim. He graduated just as the chasm between demand and supply for American physicists opened wide. He further hindered his chances by giving a few job talks on the subject of Bell's theorem. Clauser would later write with great passion that in those years, physicists who showed any interest in the foundations of quantum mechanics labored under a "stigma," as powerful and keenly felt as any wars of religion or McCarthy-like political purges. Finally Berkeley's Charles Townes offered Clauser a postdoctoral position in astrophysics at the Lawrence Berkeley Laboratory, on the strength of Clauser's dissertation on radio astronomy. Clauser, an avid sailor, planned to sail his boat from New York around the tip of Florida and into Galveston, Texas; then he would load the boat onto a truck and drive it to Los Angeles, before setting sail up the California coast to the San Francisco Bay Area. (A hurricane scuttled his plans; he and his boat got held up in Florida, and he wound up having to drive it clear across the country instead.) All the while, Clauser and Shimony hammered out their first joint article on Bell's theorem: each time Clauser sailed into a port along the East Coast, he would find a telephone and check in with Shimony, who had been working on a draft of their paper. Then Shimony would mail copies of the edited draft to every marina in the next city on Clauser's itinerary, "some of which I picked up," Clauser explained recently, "and some of which are probably still waiting there for all I know." Back and forth their edits flew, and by the time Clauser arrived in Berkeley in early August 1969, they had a draft ready to submit to the journal. Things were slow at the Lawrence Berkeley Laboratory compared to the boom years, and budgets had already begun to shrink. Clauser managed to convince his faculty sponsor, Townes, that Bell's theorem might merit serious experimental study. Perhaps Townes, an inventor of the laser, was more receptive to Clauser's pitch than the others because Townes, too, had been told by the heavyweights of his era that his own novel idea flew in the face of quantum mechanics. Townes allowed Clauser to devote half his time to his pet project, not least because, as Clauser made clear, the experiments he envisioned would cost next to nothing.
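For reference, the quantity S that emerged from that collaboration is usually written today in the following generic form (the notation below is the modern textbook one, not copied from their 1969 paper):

\[
S \;=\; E(a,b) \;-\; E(a,b') \;+\; E(a',b) \;+\; E(a',b'),
\]

where E(a,b) denotes the measured correlation between the ±1 outcomes on the two sides when the detectors are set to directions a and b. Any local hidden-variables theory obeys |S| ≤ 2, while quantum mechanics predicts values as large as 2√2 ≈ 2.8 for suitably chosen settings; it is this difference that a real experiment can probe.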
With the green light from Townes, Clauser began to scavenge spare parts from storage closets around the Berkeley lab—"I've gotten pretty good at dumpster diving," as he put it recently—and soon he had duct-taped together a contraption capable of measuring the correlated polarizations of pairs of photons. (Photons, like electrons, can exist in only one of two states; polarization, in this case, functions just like spin as far as Bell-type correlations are concerned.) In 1972, with the help of a graduate student loaned to him at Townes's urging, Clauser published the first experimental results on Bell's theorem. (Fig. 3.1.) Despite Clauser's private hope that quantum mechanics would be toppled, he and his student found the quantum-mechanical predictions to be spot on. In the laboratory, much as on theorists' scratch pads, the microworld really did seem to be an entangled nest of nonlocality. He and his student had managed to conduct the world's first experimental test of Bell's theorem—today such a mainstay of frontier physics—and they demonstrated, with cold, hard data, that measurements of particle A really were more strongly correlated with measurements of particle B than any local mechanisms could accommodate. They had produced exactly the "spooky action at a distance" that Einstein had found so upsetting. Still, Clauser could find few physicists who seemed to care. He and his student published their results in the prestigious Physical Review Letters, and yet the year following their paper, global citations to Bell's theorem—still just a trickle—dropped by more than half. The world-class work did little to improve Clauser's job prospects, either. One department chair to whom Clauser had applied for a job doubted that Clauser's work on Bell's theorem counted as "real physics."
New Physics Theory Resolves Paradoxes
For 100 years, most people have found it impossible to understand physics. Examples include Joseph Heller ("writhing in an exasperating quandary over quantum mechanics"), Bill Clinton ("I hope I can finally understand physics before I leave the earth"), Richard Feynman ("One had to lose one's common sense"), and even Albert Einstein ("fifty years of pondering have not brought me any closer to answering the question, what are light quanta?").

Julian Schwinger's Contribution to Physics
And yet, there is a theory that makes perfect sense and can be understood by anyone. This theory, with roots in the 1930s, was finally perfected by Julian Schwinger, who once had been called "the heir-apparent to Einstein's mantle". This achievement occurred several years after Schwinger had already achieved physics fame for solving the "renormalization" problem, described by the NY Times as "the most important development in the last 20 years", for which he was duly awarded the Nobel prize. However, for Schwinger this wasn't good enough. He felt that Quantum Field Theory, as it stood then, was still lacking. His goal was to include matter fields and force fields on an equivalent basis. After several years of hard work, he published a series of five papers entitled "The theory of quantized fields" in 1951-54. Physicists have been fighting a particles-vs.-fields battle for over 100 years. There have been three "rounds", starting when Einstein's concept of light as a particle (called photon) triumphed over Maxwell's view that light is a field. Round 2 occurred when Schrödinger's hope for a field theory of matter was overcome by the particle-like behavior that physicists could not ignore. And round 3 occurred when Schwinger's field-based solution of renormalization was superseded by Feynman's easier-to-use particle-based approach. For that reason, and others, Schwinger's final development of Quantum Field Theory, which he regarded as far more important than his Nobel prize work, has been sadly neglected, and is indeed unknown to most physicists – and to all of the general public. However there are signs that QFT, in the true Schwingerian sense, is reemerging, so in this sense it is a "new" theory. There have been several books and articles, such as "The Lightness of Being" by Nobel laureate Frank Wilczek, "There are no particles, there are only fields" by Art Hobson, and "Fields of Color: The theory that escaped Einstein" by Rodney Brooks. The last one explains QFT to a lay reader, without any equations, and shows how this wonderful "new" theory resolves the paradoxes of Relativity and Quantum Mechanics that have confused so many people.

Gravitational Waves Explained
The recent detection of gravitational waves at LIGO (Laser Interferometer Gravitational-Wave Observatory) has captured the imagination of the public. It will stand as one of the great feats of experimental physics, alongside the famous Michelson-Morley experiment of 1887 which it resembles. In fact by comparing these two experiments, you will see that understanding gravitational waves is not as hard as you think. Contraction. Michelson and Morley measured the speed of light at different times as the earth moved around its orbit. To their – and everyone's – surprise, the speed turned out to be constant, independent of the earth's motion.
This discovery caused great consternation until George FitzGerald and Hendrik Lorentz came up with the only possible explanation: objects in motion contract. Einstein then showed that this contraction is a consequence of his Principles of Relativity, but without saying why objects contract (other than a desire to conform to his Principles). In fact Lorentz had already provided a partial explanation by showing that motion affects the way the electromagnetic field interacts with charges, causing objects to contract. However it wasn't until Quantum Field Theory came along that a full explanation was found. In QFT, at least in Julian Schwinger's version, everything is made of fields, even space itself, and motion affects the way all fields interact. Waves. Electromagnetic waves, e.g., radio waves, have long been known and accepted as a natural phenomenon of fields. Now in QFT gravity is a field and, just as an oscillating electron in an antenna sends out radio waves, so a large mass moving back and forth will send out gravitational waves. But it didn't take QFT to show this. Einstein also believed that gravity is a field that obeys his equations, just as the EM field obeys the equations of James Maxwell. In fact gravitational waves have been accepted by many physicists, from Einstein on down, who see gravity as a field. Curvature. But what about "curvature of space-time", which many people today say is what causes gravity? You may be surprised to learn that's not how Einstein saw it. He believed that the gravitational field causes things, even space itself, to contract, analogous to the way motion causes contraction. In fact Einstein used this analogy to show the similarity between motion-induced and gravity-induced contraction: they both affect the way fields interact. It is this gravity-induced contraction that is sometimes called "curvature". Evidence. The first detection of gravitational waves was made at LIGO, using an apparatus similar to Michelson's and Morley's. In both experiments the time for light to travel along two perpendicular paths was compared, but because the gravitational field is much weaker than the EM field, the distances in the LIGO apparatus are much greater (miles instead of inches). Another difference is that while Michelson, not knowing about motion-induced contraction, expected to see a change (and found none), the LIGO staff used the known gravity-induced contraction to see a change when a gravitational wave passed through.

The Forgotten Genius of Physics
I started my graduate study in physics at Harvard University in 1956. Julian Schwinger had just completed his reformulation of Quantum Field Theory and was beginning to teach a three-year series of courses. I sat mesmerized, as did others.

Attending one of [Schwinger's] formal lectures was comparable to hearing a new major concert by a very great composer, flawlessly performed by the composer himself… The delivery was magisterial, even, carefully worded, irresistible like a mighty river… Crowds of students and more senior people from both Harvard and MIT attended… I felt privileged – and not a little daunted – to witness physics being made by one of its greatest masters. – Walter Kohn, Nobel laureate ("Climbing the Mountain" by J. Mehra and K.A. Milton)

As Schwinger stood at the blackboard, writing ambidextrously and speaking mellifluously in well-formed sentences, it was as if God Himself was handing down the Ten Commandments.
The equations were so elegant that it seemed the world couldn't be built any other way. From the barest of first principles, he derived all of QFT, even including gravity. Not only was the mathematics elegant, but the philosophic concept of a world made of properties of space seemed to me much more satisfying than mysterious particles. I was amazed and delighted to see how all the paradoxes of relativity theory and quantum mechanics that I had earlier found so baffling disappeared or were resolved. Unfortunately, Schwinger, once called "the heir-apparent to Einstein's mantle" by J. Robert Oppenheimer, never had the impact he should have had on the world of physics or on the public at large. It is possible that Schwinger's very elegance was his undoing.

Julian Schwinger was one of the most important and influential scientists of the twentieth century… Yet even among physicists, recognition of his fundamental contributions remains limited, in part because his dense formal style ultimately proved less accessible than Feynman's more intuitive approach. However, the structure of modern theoretical physics would be inconceivable without Schwinger's manifold insights. His work underlies much of modern physics, the source of which is often unknown even to the practitioners. His legacy lives on not only through his work, but also through his many students, who include leaders in physics and other fields. – "Climbing the Mountain" by J. Mehra and K.A. Milton

Schwinger is remembered primarily, if he is remembered at all, for solving a calculational problem in QFT called renormalization, for which he shared the 1965 Nobel prize with Sin-Itiro Tomonaga and Richard Feynman. Feynman's particle-based approach, which had no theoretical basis, proved to be easier to work with than Schwinger's (and Tomonaga's) field-based approach, and Schwinger's method was relegated to the archives. It is Feynman's image, not Schwinger's, that was enshrined on a postage stamp. However Schwinger was not satisfied with his renormalization work:

The pressure to account for those [experimental] results had produced a certain theoretical structure that was perfectly adequate for the original task, but demanded simplification and generalization… I needed time to go back to the beginnings of things… My retreat began at Brookhaven National Laboratory in the summer of 1949. It is only human that my first action was one of reaction. Like the silicon chip of more recent years, the Feynman diagram was bringing computation to the masses… But eventually one has to put it all together again, and then the piecemeal approach loses some of its attraction… Quantum field theory must deal with [force] fields and [matter] fields on a fully equivalent footing… Here was my challenge. – from "The Birth of Particle Physics", ed. by Brown and Hoddeson.

Schwinger's final version of the theory was published between 1951 and 1954 in a series of five papers entitled "The Theory of Quantized Fields". I believe that the main reason these masterpieces have been ignored is that many physicists found them too hard to understand. (I know one who couldn't get past the first page.) Schwinger went on from there to develop a new approach to Quantum Field Theory that he called source theory (and he called its practitioners "sourcerers"), which is also virtually unknown. In addition to these momentous contributions to Quantum Field Theory, Schwinger had other accomplishments.
As a 19-year-old graduate student at Columbia University he was the first to determine the spin of the neutron. In 1957 he found the correct form for the weak field equations before Gell-Mann and Feynman. He was the first to suggest electroweak unification, for which Sheldon Glashow, Steven Weinberg and Abdus Salam received the 1979 Nobel Prize. And he suggested the Higgs mechanism before Peter Higgs, who shared the 2013 Nobel Prize with Francois Englert. There is no doubt that Julian Schwinger more than fulfilled his promise as "the heir-apparent to Einstein's mantle", and yet many physicists – let alone the general public – don't even know his name. As I wrote in the preface to my book: "But most of all I dedicate this book to the memory of Julian Schwinger, one of the greatest physicists of all time and, sadly, one of the most forgotten. It was Schwinger who turned Quantum Field Theory into the beautiful structure that I have tried to convey to a wider public."

Quantum Field Theory – A Solution to the "Measurement Problem"

Definition of the "Measurement Problem"
A major question in physics today is "the measurement problem", also known as "collapse of the wave-function". The problem arose in the early days of Quantum Mechanics because of the probabilistic nature of the equations. Since the QM wave-function describes only probabilities, the result of a physical measurement can only be calculated as a probability. This naturally leads to the question: When a measurement is made, at what point is the final result "decided upon"? Some people believed that the role of the observer was critical, and that the "decision" was made when someone looked. This led Schrödinger to propose his famous cat experiment to show how ridiculous such an idea was. It is not generally known, but Einstein also proposed a bomb experiment for the same reason, saying that "a sort of blend of not-yet and already-exploded systems… cannot be a real state of affairs, for in reality there is just no intermediary between exploded and not-exploded." At a later time, Einstein commented, "Does the moon exist only when I look at it?" The debate continues to this day, with some people still believing that Schrödinger's cat is in a superposition of dead and alive until someone looks. However most people believe that the QM wave-function "collapses" at some earlier point, before the uncertainty reaches a macroscopic level – with the definition of "macroscopic" being the key question (e.g., GRW theory, Penrose Interpretation, Physics forum). Some people take the "many worlds" view, in which there is no "collapse", but a splitting into different worlds that contain all possible histories and futures. There have been a number of experiments designed to address this question, e.g., "Towards quantum superposition of a mirror". We will now see that an unequivocal answer to this question is provided by Quantum Field theory. However since this theory has been ignored or misunderstood by many physicists, we must first define what we mean by QFT.

Definition of Quantum Field Theory
The Quantum Field Theory referred to in this article is the Schwinger version in which there are no particles, there are only fields, not the Feynman version which is based on particles.* The two versions are mathematically equivalent, but the concepts behind them are very different, and it is the Feynman version that is used by most Quantum Field Theory physicists.
*According to Frank Wilczek, Feynman eventually changed his mind: "Feynman told me that when he realized that his theory of photons and electrons is mathematically equivalent to the usual theory, it crushed his deepest hopes… He gave up when… he found the fields introduced for convenience, taking on a life of their own."

In Quantum Field Theory, as we will use the term henceforward, the world is made of fields and only fields. Fields are defined as properties of space or, to put it differently, space is made of fields. The field concept was introduced by Michael Faraday in 1845 as an explanation for electric and magnetic forces. However the concept was not easy for people to accept, and so when Maxwell showed that his equations predicted the existence of EM waves, the idea of an ether was introduced to carry the waves. Today, however, it is generally accepted that space can have properties:

To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. – A. Einstein (R2003, p. 75)

Moreover space-time itself had become a dynamical medium – an ether, if there ever was one. – F. Wilczek ("The persistence of ether", Physics Today, Jan. 1999, p. 11).

Although the Schrödinger equation is the non-relativistic limit of the Dirac equation for matter fields, there is an important and fundamental difference between Quantum Field Theory and Quantum Mechanics. One describes the strength of fields at a given point, the other describes the probability that particles can be found at that point, or that a given state exists. However the fields of Quantum Field Theory are not classical fields; they are quantized fields. Each quantum is a piece of field that, while spread out in space, acts as a unit. It has a life and death of its own, separate from other quanta. (This quantum nature is what leads to the particle-like behavior.) The term quantum was introduced in 1900 by Planck, who said in his Nobel speech, "Here was something entirely new, never before heard of, which seemed called upon to basically revise all our physical thinking". How right he was. Quanta can be either free or bound together. Examples of free quanta are a photon emitted by a lamp or an electron emitted from a cathode. Examples of bound quanta are protons and neutrons in an atomic nucleus, or the electron field surrounding a nucleus. There are also self, or attached, fields that are not quanta but are created by quanta – for example, the EM field around an electron, or the strong field around a nucleon. These fields do not have a life of their own, but remain attached to their source. The fields of Quantum Field Theory possess an internal property called spin or helicity. Matter fields have a spin of ½, from which the Pauli Exclusion Principle follows, while force (or boson) fields can superimpose, even to the classical limit. Another important feature of Quantum Field Theory is that, like spin in QM, field strengths are described by vectors in (infinite dimensional) Hilbert space, and the dynamics of the fields are described by operators in this Hilbert space. This means that field strength is described by a superposition of values, so when we refer to the field strength at a given point we can only speak of expectation values.
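As a rough indication of what it means for field strengths to be described by operators, one standard textbook way of writing a free quantized field is sketched below (this is a generic form, chosen for illustration; it is not taken from Schwinger's papers or from this article):

\[
\hat{\phi}(\mathbf{x},t) \;=\; \int \frac{d^3k}{(2\pi)^3\,\sqrt{2\omega_k}}
\left[ \hat{a}_{\mathbf{k}}\, e^{\,i(\mathbf{k}\cdot\mathbf{x}-\omega_k t)}
 + \hat{a}^{\dagger}_{\mathbf{k}}\, e^{-i(\mathbf{k}\cdot\mathbf{x}-\omega_k t)} \right],
\]

where the operators \(\hat{a}^{\dagger}_{\mathbf{k}}\) and \(\hat{a}_{\mathbf{k}}\) add or remove one quantum of the field. What can be assigned to a point in space is then the expectation value of \(\hat{\phi}\), or of quantities built from it, in a given state, which is the sense in which the text above speaks of field strengths only as expectation values.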
The fact that quantum fields are different from classical fields bothers some people, but starting with the Stern-Gerlach experiment in 1922 we have had almost a hundred years to get used to the idea that physical quantities are quantized (which is what leads to the use of Hilbert space). Of course when we take the classical limit, as we can do with force fields, the equations for the expectation value reduce to the classical equations of EM theory and General Relativity. The fields of Quantum Field Theory behave deterministically as per the field equations, with one exception: quantum collapse.

Quantum collapse
Quantum collapse occurs when a field quantum suddenly deposits its energy (or momentum) into an absorbing atom. This is a very different thing from "collapse of the wave function" in QM: it is a physical event, not a change in probabilities. When it happens the quantum, no matter how spread-out it may be, disappears from space. While there is no theory to describe this, we must remember that it is necessary if the quantum is to act as an indivisible unit. Collapse also occurs if some energy (or momentum) is transferred to another substance. It can also occur with multiple quanta that are bound together, as when an atom or molecule is captured by a detector. As stated, quantum collapse is not described by the field equations. In fact there is no theory to tell us when, where, or how it happens. However we know that the probability is related to the field strength at a given point. This is troubling to some people, but even if we don't have a theory for something, that doesn't mean it can't happen. Physics history is filled with examples of observations that had no explanation or theory at the time. Another troubling fact is that quantum collapse is non-local. However non-locality has been proven in many experiments, and it does not lead to any inconsistencies or paradoxes. In the many-worlds theory, there is no collapse. Instead there is a splitting into two different worlds: one in which the transfer or absorption occurs and one in which it doesn't. However from the point of view of an observer in our world the effect is collapse, so whatever it is called, it is when the "decision" – the point of no return – is reached.

The solution – Quantum Field Theory
Quantum collapse is Quantum Field Theory's answer to the measurement problem. In the case of Schrödinger's cat, if the radiated quantum is captured by an atom in the Geiger counter it starts an irreversible chain of events that results in the death of the cat. If it is not captured, then the cat lives (at least until the next radioactive emission occurs). Some may now ask, OK, but isn't it possible that the collapse/split occurs at a later time, closer to the point when a measurement (macroscopic change) occurs? The problem is this: these changes cannot proceed further "up the line" unless energy or momentum has been transferred to an absorbing atom. For example, in the cat experiment there can be no Townsend discharge unless an atom has been ionized, and ionization can only occur if there has been a quantum collapse. All else then follows inevitably (with minor microscopic variations). In Schrödinger's words, "the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid" that kills the cat. It's like a Rube Goldberg device where you drop a ball into a chute at one end and after a series of actions, a cake appears at the other end.
Nor is there any experiment that could possibly rule out the above description of collapse/splitting. In any experiment designed to study collapse, there must at some point be a macroscopic detection of an event. But this detection can only determine that a collapse occurred. It cannot determine how far up the chain of events the supposed superposition proceeded.

Quantum Field Theory is the Solution
Quantum Field Theory is an elegant theory that rests on a firm mathematical foundation. It resolves or explains the many paradoxes of Special Relativity and Quantum Mechanics that have confused so many people.* And as shown here, it supplies a simple and unique answer to a current problem in physics. There are no entanglements, there are no superpositions, there are no quantum "states". There is simply a field quantum that collapses (deposits some or all of its energy or momentum) into an absorbing atom. And once again, the fact that we have no theory to describe this doesn't mean it doesn't happen. One can only wonder why this theory hasn't been embraced and taught in all the schools. Maybe it's time for physicists to WAKE UP AND SMELL THE QUANTUM FIELDS.

*see "Fields of Color: The theory that escaped Einstein" by the author

How Quantum Field Theory Solves the "Measurement Problem"
It is not generally known that Quantum Field Theory offers a simple answer to the "measurement problem" that was discussed on the September letters page of Physics Today. But by QFT I don't mean Feynman's particle-based theory; I mean Schwinger's QFT in which "there are no particles, there are only fields".1 The fields exist in the form of quanta, i.e., chunks or units of field, as Planck envisioned over a hundred years ago. Field quanta evolve in a deterministic way specified by the field equations of QFT, except when a quantum suddenly deposits some or all of its energy or momentum into an absorbing atom. This is called "quantum collapse" and it is not described by the field equations. In fact there is no theory that describes it. All we know is that the probability of it happening depends on the field strength at a given position. Or, if it is an internal collapse, like a change in angular momentum, the probability depends on the component of angular momentum in the given direction. In QFT this collapse is a physical event, not a mere change in probabilities as in Quantum Mechanics. Many physicists are bothered by the non-locality of quantum collapse in which a spread-out field (or even two correlated quanta) suddenly disappears or changes its internal state. Yet non-locality is necessary if quanta are to act as a unit, and it has been experimentally proven. It does not lead to inconsistencies or paradoxes. It may not be what we expected, but just as we accepted that the earth is round, that the earth orbits the sun, that matter is made of atoms, we should be able to accept that quanta can collapse. In some cases quantum collapse can lead to a macroscopic change or "measurement". However the measurement outcome, i.e., the "decision", was determined at the quantum level. Everything after the collapse follows inevitably. There is no "superposition" or "environment-driven process of decoherence." Take Schrödinger's cat as an example. If a radiated quantum collapses and deposits its energy into one or more atoms of the Geiger counter, that initiates a Townsend discharge that leads inexorably to the death of the cat.
In Schrödinger's words, "the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid" and the cat dies. On the other hand, if it doesn't collapse in the Geiger counter then the cat lives. Of course we don't know the result until we look, but we never know anything until we look, whether it's tossing dice or choosing a sock blindfolded. The fate of the cat was determined at the time of quantum collapse, just as the outcome of tossing dice is determined when they hit the table and the color of the sock is determined when it is pulled out of the drawer. After the quantum collapse there is no entanglement, no superposition, no decoherence, only ignorance. What could be simpler? In addition to offering a simple solution to the measurement problem, Quantum Field Theory provides an understandable explanation for the paradoxes of Relativity (Lorentz contraction, time dilation, etc.) and Quantum Mechanics (wave-particle duality, etc.). It is unfortunate that so few physicists have accepted QFT in the Schwinger sense.

Rodney Brooks (author of Fields of Color: The theory that escaped Einstein)

1 A. Hobson, "There are no particles, there are only fields," Am. J. Phys. 81, 211–223 (2013).

The Uncertainty Principle
The probabilistic interpretation of Schrödinger's equation eventually led to the uncertainty principle of Quantum Mechanics, formulated in 1927 by Werner Heisenberg. This principle states that an electron, or any other particle, can never have its exact position known, or even specified. More precisely, Heisenberg derived an equation that relates the uncertainty in position of a particle to the uncertainty of its momentum. So not only do we have wave-particle duality to deal with, we have to deal with particles that might be here or might be there, but we can't say where. If the electron is really a particle, then it only stands to reason that it must be somewhere. Resolution. In Quantum Field Theory there are no particles (stop me if you've heard this before) and hence no position – certain or uncertain. Instead there are blobs of field that are spread out over space. Instead of a particle that is either here or here or possibly there, we have a field that is here and here and there. Spreading out is something that only a field can do; a particle can't do it. In fact Heisenberg's Uncertainty Principle is not much different from Fourier's Theorem (discovered in 1807) that relates the spatial spread of any wave to the spread of its wavelengths (the two relations are set side by side at the end of this section). This doesn't mean that there is no uncertainty in Quantum Field Theory. There is uncertainty in regard to field collapse, but field collapse is not described by the equations of QFT; Quantum Field Theory can only predict probabilities of when it occurs. However there is an important difference between field collapse in QFT and the corresponding wave-function collapse in QM. The former is a real physical change in the fields; the latter is only a change in our knowledge of where the particle is. Einstein's bomb. In 1935 Einstein attacked the role-of-the-observer concept by imagining a keg of gunpowder that could be triggered by the quantum instability of some particle. The quantum mechanical equation for this situation, he said, "describes a sort of blend of not-yet and already-exploded systems." But, he added, this cannot be "a real state of affairs, for in reality there is just no intermediary between exploded and not-exploded" (I2007, p. 456).
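As a point of reference for the comparison drawn above between Heisenberg's relation and Fourier's theorem, the two statements can be set side by side; the exact factor of one-half depends on how the spreads are defined, and this is a standard textbook form rather than a quotation from the article:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \quad \text{(Heisenberg, position and momentum)},
\qquad
\Delta x \,\Delta k \;\ge\; \frac{1}{2} \quad \text{(Fourier, a wave packet and its wave numbers)},
\]

and the first follows from the second once the quantum relation \(p = \hbar k\) between momentum and wave number is used.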
Schrödinger's cat. Worried that an explosion that is only half-real might not be enough to convince people of the point, Schrödinger extended Einstein's bomb idea to an animal that, according to the Copenhagen interpretation, would be half-alive and half-dead, thereby creating the most famous cat in physics history. Resolution. Quantum Field Theory supplies a simple answer for Schrödinger's cat, and also for Einstein's bomb. There is no role of the observer. The bomb explodes (or not) and the cat dies (or not), regardless of whether anyone looks. Field collapse does not depend on an observer. The fields evolve according to field equations and then collapse, but neither process requires that someone be there to observe it. In Schrödinger's hypothetical cat experiment, the radioactive nuclei do not emit particles. For example, if they are beta emitters, they emit a "yellow" electron field that slowly spreads through space. At some point in time that cannot be determined from the theory, the electron field collapses into the detector and starts the chain of events that kills the cat. Until that time the cat is alive. After that time the cat is dead… Summary. In Quantum Field Theory the paradoxes of QM have simple, almost trivial, answers:
• There is no wave-particle duality because there are no particles, only fields. The particle-like behavior is explained by the fact that a field quantum lives and dies as a unit. This phenomenon is called field collapse.
• The Uncertainty Principle is simply a statement that fields are not localized; they spread out.
• There is no role of the observer. Field collapse occurs regardless of whether anyone is looking.
Is that all there is to it? Did I give too little space to discussing these "profound" paradoxes? Well, that's really all there is to it. In Quantum Field Theory everything is fields. They spread out, they collapse, and they do all this without requiring an observer. When I hear people complaining about the weirdness and inaccessibility of modern physics, I want to ask, "What part of Quantum Field Theory don't you understand?"

Scientific American, EINSTEIN DIDN'T SAY THAT!
In the September "Einstein" issue of Scientific American, readers are given the impression that gravity is caused by curvature of space-time. For example, on the first page of that section, we read "gravity… is the by-product of a curving universe", on p. 43 we find that "the Einstein tensor G describes how the geometry of space-time is warped and curved by massive objects", and on p. 56 there is a reference to "Albert Einstein's explanation of how gravity emerges from the bending of space and time". In fact, many physicists today emphasize "curvature" as the explanation for gravity. As Stephen Hawking wrote in A Brief History of Time, "Einstein made the revolutionary suggestion that gravity is not a force like other forces, but is a consequence of the fact that space-time is not flat, as had been previously assumed: it is curved, or warped." The problem is, that's NOT what Einstein said. Einstein made it quite clear that gravity is a force like other forces, with (of course) certain differences. In the very paper cited by Scientific American ("The foundation of the general theory of relativity", 1916) he wrote, "[there is] a field of force, namely the gravitational field, which possesses the remarkable property of imparting the same acceleration to all bodies".
The G tensor, said Einstein, "describes the gravitational field." The term "gravitational field" or just "field" occurs 58 times in this article, while the word "curvature" doesn't appear at all (except in regard to "curvature of a ray of light"). And Einstein is not the only physicist who held that view. For example Sean Carroll, a leading physicist of today, wrote: To suppress the field concept and focus on "curvature" not only misstates Einstein's view; it also gives people a false or misleading understanding of general relativity. So where does "curvature" come from? According to Einstein (in the cited paper), the gravitational field causes physical changes in the length of measuring rods (just as temperature can cause such changes) and it is these changes that create a non-Euclidean metric of space. In fact, as Einstein pointed out, these changes can occur even in a space which is free of gravitational fields – i.e., a rotating system. He then showed that this non-Euclidean geometry is mathematically equivalent to the geometry on a curved surface, which had been developed by Gauss and extended (mathematically) to any number of dimensions by Riemann. That this is a mathematical equivalence is clearly stated by Einstein in a later paper: "mathematicians long ago solved the formal problems to which we are led by the general postulate of relativity". Well, you may say, if the gravitational field is equivalent to curvature of space-time, what difference does it make? It makes a lot of difference. First, most people cannot physically visualize four-dimensional curvature, while the fact is, they don't have to. The curvature of space-time, although mathematically equivalent, is not necessary for a complete understanding of Einstein's theory. The field concept, introduced by Faraday in 1845, is all that is needed. Second, by eliminating or suppressing the role of the gravitational field, you destroy the great unity that the field concept brings to physics. To quote Nobel laureate Frank Wilczek: Physicists trained in the more empirical tradition of high-energy physics and quantum field theory tend to prefer the field view… the field view makes Einstein's theory of gravity look more like the other successful theories of fundamental physics, and so makes it easier to work toward a fully integrated, unified description of all the laws. As you can probably tell, I'm a field man. Finally, the theory that many physicists believe is our best and most consistent description of reality, Quantum Field theory, has once again been ignored. For example, calling the uncertainty principle an unresolved mystery that "not even the great Einstein" could solve (p. 48), ignores the fact that in QFT it is a natural consequence of the way fields behave. And to say (p. 34): "Relativity and quantum mechanics are just as incompatible as they ever were", ignores the fact that they are united in Quantum Field Theory. In August 2013 Scientific American printed an article that actually dismissed Quantum Field Theory as invalid because the fields described by the theory are not "what physicists classically understand by the term field". (To which one can only reply, "Duh, maybe that's why they're called quantum fields.") Please note that it is not just me who believes that the field concept is central to the understanding of general relativity. That is also the view of, among others, Sean Carroll and Nobel laureates Julian Schwinger, Frank Wilczek, and Steven Weinberg.
When do Fields Collapse? A major question in physics today is about collapse of the "wave-function": When does it occur? There have been many speculations (see, e.g., Ghirardi–Rimini–Weber theory, Penrose Interpretation, Physics forum) and experiments (e.g., "Towards quantum superposition of a mirror") about this. The most extreme view is the belief that Schrödinger's cat is both alive and dead, even though Schrödinger proposed this thought-experiment (like Einstein's less-well-known bomb experiment) to show how ridiculous such an idea is. The problem arises because Quantum Mechanics can only calculate probabilities until an observation takes place. However Quantum Field Theory, which deals in real field intensities, not probabilities, provides a simple unequivocal answer. Unfortunately, Quantum Field Theory in its true sense of "there are no particles, there are only fields" (Art Hobson, Am. J. Phys. 81, 2013) is ignored or misunderstood by most physicists. In QFT the "state" of a system is described by the field intensities (technically, their expectation value) at every point. These fields are real properties of space that behave deterministically according to the field equations – with one exception. The exception is field collapse, but in Quantum Field Theory this is a very different thing from "collapse of the wave function" in QM. It is a physical event, not a change in probabilities. It occurs when a quantum of field, no matter how spread-out it may be, suddenly deposits its energy into a single atom and disappears. (There are also other types of collapse, such as scattering, coupled collapse, internal change, etc.) Field collapse is not described by the field equations – it is a separate event, but just because we don't have a theory for it doesn't mean it can't happen. The fact that it is non-local bothers some physicists, but this non-locality has been proven in many experiments, and it does not lead to any inconsistencies or paradoxes. So when field collapse occurs, the final "decision" – the point of no return – is reached. This is QFT's answer to when collapse occurs: when a quantum of field collapses. In the case of Schrödinger's cat, this is when the radiated quantum (perhaps an electron) is captured by an atom in the Geiger counter. Before a field quantum finally collapses, it may have interacted or entangled with many other atoms along the way. These interactions are described (deterministically) by the field equations. However the quantum cannot have collapsed into any of those atoms, because collapse can happen only once, so whatever you call it – interaction, entanglement, perturbation, or just "diddling" – these preliminary interactions are reversible and do not lead to macroscopic changes. Then, when the final collapse occurs, those atoms become "undiddled" and return to their unperturbed state. To sum up, in QFT the "decision" is made when a quantum of field deposits all its energy into an absorbing atom. Besides answering this question, QFT also explains why time dilates in Special Relativity and resolves the wave-particle duality question of Quantum Mechanics. One can only wonder why this theory hasn't been embraced and made the basis for our understanding of nature. I believe it's time for physicists to WAKE UP AND SMELL THE QUANTUM FIELDS. Book Simplifies Baffling Quantum Field Theory. The following is a recent article written about Quantum Field Theory and the book, Fields of Color.
The article appeared in the Leisure World News on September 4, 2015. The book "Fields of Color: The Theory that Escaped Einstein" simplifies the complex Quantum Field Theory so that a layman can understand it. Written by Leisure World resident Rodney Brooks, it contains no equations—in fact, no math—and it uses colors to represent fields, which in themselves are hard to imagine. It shows how the field picture of nature resolves the paradoxes of quantum mechanics and relativity that have confused so many people. It is original, comprehensive, and entertaining. Brooks is amazed and delighted with the success of his book, which was published in 2011. He says 6,000 copies have been sold, unusual for a self-published book on physics. In addition, the book has a 4.4 (out of 5) star rating on Amazon with more than 90 reader reviews — a higher rating than Einstein's own book on relativity and higher than Stephen Hawking's popular book "The Theory of Everything." In its essence, quantum field theory (QFT) describes a world made of fields, not particles (neutrons, electrons, protons) as most physicists believe. However the field concept is not easy to grasp. To quote from Chapter 1 of "Fields of Color": "To put it briefly, a field is a property or a condition of space. The field concept was introduced into physics in 1845 by Michael Faraday as an explanation for electric and magnetic forces. However, the idea that fields can exist by themselves as 'properties of space' was too much for physicists of the time to accept." (Chapter 1 in its entirety can be read at Colors of Fields.) Later this concept was extended to other fields. "In Quantum Field Theory the entire fabric of the cosmos is made of fields, and I use (arbitrary) colors to help people visualize them," says Brooks. "If you can picture the sky as blue, you can picture the fields that exist in space. Besides the EM (electromagnetic) field ('green'), there are the strong force field ('purple') that holds protons and neutrons together in the atomic nucleus and the weak force field ('brown') that is responsible for radioactive decay. Gravity is also a field ('blue'), and not 'curvature of space-time' which most people, including me, have trouble visualizing." He continues: "In QFT, space is the same old three-dimensional space that we intuitively believe in, and time is the time that we intuitively believe in. Even matter is made of fields—in fact two fields. I use yellow for light particles like the electron and red for heavy particles, like the proton. But make no mistake, in QFT these 'particles' are not little balls; they are spread-out chunks of field, called quanta. Thus the usual picture of the atom with electrons traveling around the nucleus like little balls is replaced by a 'yellowness' of the space around the nucleus that represents the electron field." Brooks' interest in physics was first sparked when at age 14 he read Arthur Eddington's "The Nature of the Physical World." This book describes how a table is made of tiny atoms that in turn could be split into even tinier objects. "So this is what the world is made of," Brooks thought at the time. In college at the University of Florida he majored in math with a minor in physics. He was then drafted into the army for two years. Quantum Field Theory Answers Problem. Fast forward to graduate school at Harvard University where Brooks was a National Science Foundation scholar, majoring in physics.
During this time, he attended a three-year formal lecture series taught by Julian Schwinger. The Nobel prize-winning physicist had just completed his reformulation of QFT, so the timing was perfect. "I was amazed that all the paradoxes of relativity and quantum mechanics that had earlier confused me disappeared or were resolved," Brooks says. After receiving his Ph.D. at Harvard under Nobel laureate Norman Ramsey, Brooks worked for 25 years at the National Institutes of Health in Bethesda, Md., in neuroimaging. His first research was on the new technique of Computed Tomography (CT), during which time he invented the method now known as dual-energy CT. Next, he did research on Positron Emission Tomography (PET) and finally on Magnetic Resonance Imaging (MRI). All in all, Brooks published 124 peer-reviewed articles. After he retired, he and his wife, Karen Brooks, moved to New Zealand in 2001. That was when he became aware of the widespread confusion about physics, especially quantum mechanics and relativity, while his beloved QFT that resolves the confusion was overlooked, misunderstood, or forgotten. "And so I took on the mission of explaining the concepts of quantum field theory to the public," Brooks says. His book was first published in New Zealand in 2010, and is now in its second edition. In 2012, his grandchildren, who live in Maryland, called out, and he and his wife moved to Leisure World, where he continues to work on his mission. While Einstein eventually came to believe that reality must consist of fields and fields alone, he wanted there to be a single "unified" field that would not only include gravity and electromagnetic forces (the only two forces known at the time), but would also include matter. He spent the last 25 years of his life unsuccessfully searching for this unified field theory. Referring to the particle picture that he espoused, physicist Richard Feynman once said, "The theory… describes Nature as absurd from the point of view of common sense. And it agrees fully with experiment. So I hope you can accept Nature as She is – absurd." Brooks, on the other hand, concludes his introductory chapter by saying, "I hope you can accept Nature as She is: beautiful, consistent and in accord with common sense—and made of quantized fields." Space-Time Curvature & Quantum Field Theory. General Relativity is the name given to Einstein's theory of gravity that was described in Chapter 2 of my book. As the theory is usually presented, it describes gravity as a curvature in four-dimensional space-time. Now this is a concept far beyond the reach of ordinary folks. Just the idea of four-dimensional space-time causes most of us to shudder… The answer in Quantum Field Theory is simple: Space is space and time is time, and there is no curvature. In QFT gravity is a quantum field in ordinary three-dimensional space, just like the other three force fields (EM, strong and weak). This does not mean that four-dimensional notation is not useful. It is a convenient way of handling the mathematical relationship between space and time that is required by special relativity. One might almost say that physicists couldn't live without it. Nevertheless, spatial and temporal evolution are fundamentally different, and I say shame on those who try to foist and force the four-dimensional concept onto the public as essential to the understanding of relativity theory. Riemannian Geometry. The idea of space-time curvature also had its origin in mathematics.
When searching for a mathematical method that could embody his Principle of Equivalence, Einstein was led to the equations of Riemannian geometry. And yes, these equations describe four-dimensional curvature, for those who can visualize it. You see, mathematicians are not limited by physical constraints; equations that have a physical meaning in three dimensions can be generalized algebraically to any number of dimensions. But when you do this, you are really dealing with algebra (equations), not geometry (spatial configurations). By stretching our minds, some of us can even form a vague mental image of what four-dimensional curvature would be like if it did exist. Nevertheless, saying that the gravitational field equations are equivalent to curvature is not the same as saying that there is curvature. In Quantum Field Theory, the gravitational field is just another force field, like the EM, strong and weak fields, albeit with a greater complexity that is reflected in its higher spin value of 2. While QFT resolves these paradoxical statements, I don't want to leave you with the impression that the theory of quantum gravity is problem-free. While computational problems involving the EM field were overcome by the process known as renormalization, similar problems involving the quantum gravitational field have not been overcome. Fortunately they do not interfere with macroscopic calculations, for which the QFT equations become identical to Einstein's. Your choice. Once again you the reader have a choice, as you did in regard to the two approaches to special relativity. The choice is not about the equations, it is about their interpretation. Einstein's equations can be interpreted as indicating a curvature of space-time, unpicturable as it may be, or as describing a quantum field in three-dimensional space, similar to the other quantum force fields. To the physicist, it really doesn't make much difference. Physicists are more concerned with solving their equations than with interpreting them. If you will allow me one more Weinberg quote: "The important thing is to be able to make predictions about images on the astronomers' photographic plates, frequencies of spectral lines, and so on, and it simply doesn't matter whether we ascribe these predictions to the physical effects of gravitational fields on the motion of planets and photons or to a curvature of space and time. (The reader should be warned that these views are heterodox and would meet with objections from many general relativists.)" – Steven Weinberg. So if you want, you can believe that gravitational effects are due to a curvature of space-time (even if you can't picture it). Or, like Weinberg (and me), you can view gravity as a force field that, like the other force fields in Quantum Field Theory, exists in three-dimensional space and evolves in time according to the field equations.
Slaying a greenhouse dragon, by Judith Curry. On the Pierrehumbert thread, I stated: So, if you have followed the Climate Etc. threads, the numerous threads on this topic at Scienceofdoom, and read Pierrehumbert's article, is anyone still unconvinced about the Tyndall gas effect and its role in maintaining planetary temperatures? I've read Slaying the Sky Dragon and originally intended a rebuttal, but it would be too overwhelming to attempt this and probably pointless. I was hoping to put to rest any skeptical debate about the basic physics of gaseous infrared radiative transfer. There are plenty of things to be skeptical about, but IMO this isn't one of them. Well, my statement has riled the authors of Slaying the Sky Dragon. I have been involved in extensive email discussion with the authors plus an additional 10 or so other individuals (skeptics). Several of these individuals on John O'Sullivan's email list actually agree with my assessment, even though they regard themselves as staunch AGW skeptics. One of the authors, Claes Johnson, along with John O'Sullivan, expects a serious critique from the climate community. Johnson says he intends to submit his papers to a peer reviewed journal. I agreed to host a discussion on Johnson's chapters at Climate Etc., provided that the publishers of Slaying the Sky Dragon would make Johnson's chapters publicly available on their website (which they have). Johnson's first chapter is entitled "Climate Thermodynamics," which presents an energy budget for the earth and its atmosphere that does not include infrared radiation. The second chapter is entitled "Computational Black Body Radiation," which seeks to overturn the last 100 years of modern physics and concludes that "back radiation is unphysical." For background info: • Claes Johnson's website is here • Johnson's blog is here, see specifically these posts (here and here) • John O'Sullivan's advert for the debate at Climate Etc. (note Monckton and Costella are in my "corner" in criticizing the book and Johnson's chapters). I suspect that many undergrad physics or atmospheric science majors at Georgia Tech could effectively refute these chapters. I'm opening up this discussion at Climate Etc. since • the Denizens seem to like threads on greenhouse physics • I'm hoping we can slay the greenhouse dragon that is trying to refute the Tyndall gas effect once and for all. It will be interesting to see how this goes. Claes Johnson has said that he will participate in the discussion. Note: this is a technical thread, please keep your comments focused on Johnson's arguments, or other aspects of Slaying the Sky Dragon. General comments about the greenhouse effect should continue on the Pierrehumbert thread. 2,518 responses to "Slaying a greenhouse dragon" 1. It's an interesting concept, that an atom cannot absorb (but only reflect) incoming EM at a cooler temp than its own blackbody emission temp at that instant. No idea if it's true. My layman's understanding of the thermodynamics constraint was just that it described net transfer, which must always be from hot to cold. 2. I have to agree with omnologos on this. Inferring that all those skeptical of the man-made global warming meme (some, like us, skeptical of the greenhouse gas theory, itself) are supposed to be seeking a unified front, as if we are a political or military force, is, frankly, absurd.
We prefer to leave ambitions to claim a consensus to the post-normal science green brigade; they appear to have abandoned the traditional tenets of the scientific method. Consensus is utterly meaningless; being proven right is the goal even when the so-called 'consensus' is adamant we are wrong. The statement, "I suspect that many undergrad physics or atmospheric science majors at Georgia Tech could effectively refute these chapters" is so funny coming from someone who is "too busy" to do what she infers is such a basic task, herself. • Well, I mainly found it interesting that a number of people on your self selected email list were highly critical of the book and Johnson's chapters. Your email list does not begin to reflect the broader range of skeptical opinions. • Judy, I intentionally invited to participate those who I knew to have contrary views. This is the whole point of debate, isn't it? Let's see some actual analysis please rather than insults and hand waving so far displayed by those made uncomfortable by what the book presents. • Thank you for mentioning me, John, as my comment has been snipped out. Perhaps I should be glad it deserved that much attention. 3. Hi judith, I was positively surprised by the first chapter, which corresponds to the mental model I have formed about the GH effect, but I do not really see where it is in conflict with the mainstream view nor why it is independent of infrared radiation. On the contrary, it explicitly agrees with the mainstream view, that is, that the TOA is variable in height and is higher the more GH gas is present. The only thing it adds is that the lapse rate below the TOA is related to thermodynamics and not radiation, and that the lapse rate can vary with humidity and thus is a potential feedback (negative feedback). Up to here, I perfectly agree, apart from the fact that one should mention that it is an approximate model, because all radiation does not happen at a precise TOA height: the TOA is an average concept, the atmosphere is not perfectly IR opaque and then IR transparent, it is semi-transparent, so radiation is a diffuse process and all radiation occurring at the TOA is only a (useful?) approximation. At this point, the model does not allow one to predict the change of T_ground when CO2 is doubled; what would be needed is the change in TOA from CO2 doubling, and the various H2O feedbacks (on the TOA itself, and on the lapse rate). Still, this model seems to me much more useful and closer to reality than a pure radiative model with an IR opaque shell-like atmosphere concentrated at the TOA, and the (negative) feedback of H2O on the lapse rate seems perfectly valid (and not mentioned explicitly in previous GH accounts I have read). This first chapter does not ring any physical alert bells though, so I guess reading the rest makes sense, and I am for now positively surprised by "Slaying…"… • oops, forgot to say: read the Pierrehumbert thread where I attempt to expose the mental model I built about the GH effect. Done that only from the various GH threads here, at wuwt and rc, not from the "Slaying…." chapters…. So you see why this first chapter was appealing to me :-) • ouch, started reading the second chapter about blackbody…yikes, this one is definitely in crackpot territory, so "Slayer…." is a kind of mixed bag imho; if most chapters are like the first one, it is worthy, else (or if the conclusion holds only if all chapters are true), then it will be easily debunked… • Kai, the first chapter rests on the result of the 2nd chapter (they are both written by Claes Johnson), i.e.
there is no back radiation and atmospheric infrared radiative transfer is not important in the earth’s energy balance. So if Ch 2 is crackpot, then Ch 1 is also. • Dr Curry, To help the readers understand, please: 1. elaborate your definition of back radiation and your concept of it, 2. explain whats wrong with ” atmospheric infrared radiative transfer is not important in the earth’s energy balance”. The Earth’s mass is so huge compared with atmospheric mass. The Earth’s IR energy emitted is so huge as compared with atmospheric absorption of IR energy. Will you care to do a comparison? Why is NASA’s radiation energy balance for K-12 incorrect? • Sam, I essentially agree with Pierrehumbert’s essay on this topic, see the previous pierrehumbert thread • Dr. Curry, I admire your tactics of diverting your GT students’ attentions for avoiding direct answers to direct questions soon I found the Figure 1 model there is not a true representation of the atmosphere radiation transfer, namely, lack of the cloud radiation transfer and lack of layers direct radiation transfer to the Earth surface. • kai, “… this one is definitely in crackpot territory”. This is very unrespectful to an an author who try to sort out radiation misconceptions, care to elaborate? • Is he really trying to sort out radiation misconceptions? Whether he tries it or not, the result leads to think the opposite. The book in, which the article appears is definitely trying to increase misconceptions. • Its easy to make a generalized comment. I find generalised comments do increase misconceptions. Will you be more specific, such as list them out item by item, concept by concept, misconception by misconception, page by page? Doing it this way helps the readers understand your points of view. • Sam I have done that kind of commenting in tens of messages. Repeating similar statements hundred or thousand times more, is not going to stop requests like yours. When you stop commenting, there will always be a new participant, who starts from the beginning again. That will go on as long as this site is active. 4. Dr. Curry, I am sorry you felt obliged to give so much space to the ‘Dragon’ book. But if anything, it will unify skeptics by giving many something to agree on that fails as a skeptical case. I see this book as sort of a left hand paranthesis to Hansen’s Venus-ization of Earth as a right hand paranthesis, expressing clear markers where wishful thinking has taken over. 5. Judy: I do not say that radiative transfer plays no role in climate. It would be helpful for the debate if you woul read what I write and not freely invent crackpot themes. • Well yes, you admit to solar radiation and black body radiation. But your treatment in the first chapter completely omits atmospheric gases (and cloud) infrared radiative transfer (and includes that ludicrously incorrect diagram from more than 10 years ago that somehow continues to exist on a NASA web site). • Having said that Johnson is wrong, I’d like to point out that his first chapter on climate thermodynamics – emphasizing heat transport by convection and evaporation, but not including radiation from the atmosphere, is no more wrong than Pierrehumbert’s article which does the opposite – making the incorrect claim that the surface temperature can be determined by calculation that only includes radiation, ignoring convection and evaporation. And yet of these two incorrect articles, Judith refers to one as “excellent” but says that the other could be refuted by undergraduates. 
I wonder if these same undergraduates could refute the Pierrehumbert article? I expect most could not, because the new generation of students are being brainwashed in the same way, for example by the GaTech course “EAS8803 – Atmospheric Radiative Transfer”. I note that the blurb for this course says that “Topics to be covered include the radiative balance at the surface”. I do hope that you have some students bright enough to realise that there is no radiative balance at the surface, and that one day this fact will dawn on those who design and teach the course. • Sorry, posted this in the wrong place in the thread. • Why NASA did not correct it and misled the general public for over 10 years with that incorrect diagram? Or NASA is incapable of understanding the subject of radiation? Or under the authority of James Hansen, no one in NASA dare to correct it? • This diagram apparently first appeared in a doc designed for K-12 education. The names Eric Barron (currently president of Florida State University) and John Theon were on the doc (back when theon was still employed at NASA and Barron was at Penn State, which places it in the mid 90’s). But I assume this diagram was drawn by a staff person, and Barron didn’t pay close attention. That is the only way I can explain this. Somehow John O’Sullivan spotted this (or at least publicized this). And it sits on a web site to the present day. In spite of my contacting several people about this. The bottom line is that there is too much form and not enough substance oversight on public communication documents (as opposed to satellite data quality issues, where there is a lot of oversight and checks and balance in place at NASA). • Over the 10 years, this diagram has misled the K-12 students, the teachers, the politicians and the world who visited the NASA site. This is a serious American educational flaw that NASA, Eric Barron and John Theon should be informed to correct the diagram or delete from the NASA website and owe the American Education and the world an apology. If you have not asked them to correct it, please do as an educator at the Georgia Tech. 6. Ok fine then Judy: You don’t like the Kiehl-Trenberth diagram. So what is then wrong with it, as you see it? Maybe we share some insights? • I’m not clear which diagram you are discussing here. If it is Fig 5 of Chapter 2 of Johnson then it does closely resemble Fig 7 of Kiehl and Trenberth 1997. If the latter was ‘ludicrously incorrect’ then it was still given pride of place 10 years later (with added colour but no other changes apart from the caption) in IPCC AR4 WG1 Chapter 1, p 96 (2007). But I thought Dr Curry was referring to Fig 4 of Ch 1 of Johnson, which is also attributed to NASA, but which differs from the Ch 2 version in not showing any downward long-wave radiation. Is that also derived from K & T? 7. Judy: You say that “I suspect that many undergrad physics or atmospheric science majors at Georgia Tech could effectively refute these chapters”. I suggest that you actually try this as a take home exam for your students. From your teaching they will understand that Kiehl-Trenberth is wrong but maybe they will find something they think is right. Go ahead! 8. Apart from an over-indulgence in post-modern civility, the chapter on Climate Thermodynamics pursues the misconceptions underlying current AGW theory. A helpful touchstone for pdf files is a scan for the word equilibrium where used to describe what physical science calls steady states. 
I find three such instances in this chapter, all wrt the adiabatic lapse rate. Equilibrium states have no net fluxes of matter or energy entering or leaving. (Canonical ensembles allow fluctuations.) Equilibrium profiles are isothermal and the adiabat is not. Steady states require external fluxes to prevent them from relaxing to equilibria. The alert student should now be asking, how do I determine this flux needed to maintain an adiabatic profile? With CO2 doubling, one typically calculates a 2% flux reduction and then presumes a 2% increase in the thermodynamic potential difference (1/T) is needed to restore the flux level. Thus, given a 65K tropospheric differential, 1.3K. An alternative interpretation is that adding CO2 increases the resistivity of the troposphere, just as traces of phosphorus disproportionately increase the resistivity of a copper wire. Thermodynamics asks, what change in potential is required to restore the original rate of dissipation of free energy? In high school we learned the expression E^2/R, albeit in a different guise. Ergo, only a 1% potential change now compensates a 2% resistivity change to restore energy balance. When our student resolves the difference in these solutions, he should be able to answer his earlier question. Perhaps herein lies Sommerfeld’s dilemma – thermodynamics is not the intuitively obvious subject it may superficially appear. To paraphrase yet another quotation, ” …, and you’re no Arnold Sommerfeld.” • Is the presumption of a 2% change in (1/T) tied to a 2% change in flux found in textbooks, and generally accepted in the climate change literature? If so, then the generally accepted value for climate sensitivity is a factor of 4 too large. The reason is that the Stefan-Boltzmann law says j is proportional to T^4. Taking the derivative of both sides with respect to T, and then dividing both sides by the Stefan-Boltzmann law, and rearranging shows that the % change in T will be 1/4 times the % change in flux. Because 1/T contains T^1, the % change in (1/T) will also be 1/4 times the % change in flux. You and all other knowledgeable bloggers are asked to comment on and make any corrections to my calculations found at posted on Feb. 7 at 7:44 pm. 9. Well, of course Johnson is wrong. It is perhaps instructive and useful to try to explain why. In the ‘blackbody’ chapter he seems to think that a warm body can warm a cooler one but not a warmer one. He says at one point (sadly no page numbers) that there is two-way propagation of waves, but only one-way propagation of energy. How does that work? Are there two types of EM wave, one transporting energy and one not?! We can also ask him this : an isolated backbody is radiating into a vacuum. Then a warmer body is brought in. How does the first body ‘know’ to stop radiating energy in that particular direction? Later on he tries to use equations – but his equation (4) is just wrong. Where does this equation come from? What is u supposed to represent? Why is radiation given by the third time derivative of u? 10. The email debate of last week was the first geniune airing of the flawed Physics of AGW in all history. The fundamental flaws are explained in “OMG….Maximum CO2 Will Warm Will Warm Earth for 20 Milliseconds” posted at and at the website. Surprising that the truth was hidden in plain sight for so long. Since the show is now over, I felt it necessary to add one final comment “Climate Follies Encore” which explains the post 20 Millisecond exchanges. 
This has been the greatest education process, for the wisest among us, and we will now share. My chapter includes over 100 pages of footnotes and is supported by 60 articles in archive and Canada Free Press. We share a glorious future of truth. My thanks to Judy for enduring my repeated, well meaning barbs for over a year now. (co-author of SSD) 11. To PaulM: You have not read and understood my argument: I present a differential equation modeling two-way wave propagation combined with radiation and with a dissipative effect making the energy transfer one-way, from higher to lower frequency. If you don’t like this equation, give me one you think is a better model. Just words is to diffuse to discuss. • Perhaps you should start by reading a standard text book on Radiation Heat Transfer and then move on to some papers (H C Hottel would be a good start) and learn about the subject rather than propose some wild theory? The fact of ‘backradiation’ has been well tested in many situations, furnaces, radiation shields for thermocouples etc., let’s see you apply your theory to such situations and see how it works? • Please define back radiation which confuses me even though I had written something about it. To me, back radiation is reflection from the back with wall or relective radiation. A thermocouple when placed at the center of a pipe gain heat from the flowing media as well as radiation directly from the wall concentrated at the thermocouple measured an erronous fluid temperature. With such a wall you get radiation concentration. Without a wall, the radiation is minmal. Similar analogy for the greenhouse situation, greenhouse has glass or sheets of clear plastics to trap most IR, without this layer of wall, no trapping of IR and hence no greenhouse. It is obvious. 2 black bodies at different temperatures, they all emit IR with the resultant energy flow from the hotter to the colder in a free radiation condition. The colder can have an extremely small effect of slowing down cooling of the hotter unless a back wall from the colder reflect the radiation to other directions are reflected by the back wall. I have not read Mr. Claes Johnson’s article about the radiation. I will assume he is mostly correct in a free field radiation as in most climate situations. There is no back radiation. Furnace, thermocouples etc cases are not free field radiation cases which involved walls of reflecting radiations. Radiation involves walls of reflection has back radiation. 12. curryja | January 31, 2011 at 8:31 am As an “update” how about this diagram and text on Wikipedia, it appears a little more recent. The text does not appear “improved” either. So, no real changes to it appear warranted according to AGW. Maybe you could explain what makes the old one and the “new” one “ludicrously incorrect”, that might help in discussing what the “slayers” are showing, saying, suggesting, and raising for discusion. Heck, we might even get to a better understanding of where the science actually is at present. 13. PS to PaulM: I start from the same equation as Planck did 100 years ago, but combine with finite precision computation instead of freely invented ad hoc statistics. Statistics is not physics, just imagination, and physical particles have little imagination. • “Statistics is not physics, just imagination, and physical particles have little imagination”. How dismiss 200 years of thermodynamics and physical statistics, with the only clue of a single metaphor: the “not-thinking” particle. 
Funny (what about: Einstein debunked, there is no light speed maximum: photons don’t care about cops and driving speed limits ?). Anything else more substantial, perhaps? 14. Read the second chapter – it’s goofy, not physics. The initial clain to get rid of wave particle duality pretty much floored me, since this aspect has been very well experimentally shown. To accept this assertion means ignoring what you can see with your own eyes (and instrumentation) in a laboratory. A fatal flaw is confusing net energy flow with absolute energy flow – this is in the black body discussion. To say that a colder black body can’t radiate to a warmer black body (he calls this “back radiation”) is beyond ridiculous. Basically, he presents a circular argument without proving his ridiculous premise, throws a bunch of jibberish (maybe not jibberish, but I don’t call it physics) in the middle to make it all seem scientific, and then returns to his unproven assertion that a colder body doesn’t have black body radiation in the direction of a warmer body. Thus, besides claiming no one knows the nature of a photon (as part of an argument against the traditional treatment of blackbody radiation – yet single photon experiments have been run for decades) , he negates the Superposition Principle and relies on some mysterious instantaneous knowledge existing in one body about the temperature and direction of all other bodies in the universe. I think the spook guys would love to have this type of instantaneous directional communication device in their hands. Just to make it more clear, suppose you have two black bodies at different temperatures facing each other, with a shutter over each blocking all radiation. Remove the shutter in front of the colder body an very short time before removing the shutter in front of the hotter one. Then initially radiation would flow from the colder one toward the hotter one, and then reverse direction when the second shutter is opened. • The question is, can a cold body make a warm body hotter? Everything with mass and a temperature radiates. Who disputes that? Imagine a hot body (with an internal or external heat source) and a passive body floating in the vacuum of space. Can the passive body make the hot body hotter? Imagine the passive body gets closer to the hotter…it will absorb more radiation, right? It will get warmer. If there was such a thing as back-radiation heating, then the hot body gets more of it back. Then the bodies get so close together…that they touch. Now the radiation effect is greatly magnified (whatever radiation can do, conduction does much better). Does the hot body, at any time during this process, ever get hotter? Radiation from a passive source cannot make a hot body hotter. • Radiation from a passive source cannot make a hot body hotter. It certainly can, put a thermocouple in a flame and you’ll measure a certain temperature which is lower than the surrounding flame because of conductive losses down the wire and radiative losses to the surroundings. Surround the ThC with a silica tube and the temperature measured will increase due to radiation from the cooler tube. Check out ‘Suction Pyrometers’: • I can’t tell if you’re kidding, Phil. Transport your experiment into space so we can focus only on radiation effects. Then replace the flame heat source with a resistive one so it will work in a vacuum. Now, tell me how the passive thermocouple can increase the temperature of the heated body. 
The only thing it can do is cool the heated body…at various rates and with varying degrees of coupling, sure. But, under no condition can it make the heated body hotter. The passive body is never a source of heating for the source. Never. Now, what does that tell you about Trenberth and Keihl’s energy balance schematic? The earth’s surface is heated by back radiation from passive CO2 and water vapor? • I never kid, your complete failure at understanding the applicable physics, lack of reading comprehension and refusal to read the cited material makes responding to you a complete waste of time! • The topic is radiation, Phil, the supposed mechanism for global warming caused by increasing CO2 in our atmosphere. You love to talk about conduction and convection as if I don’t understand these concepts, but that is a hand-waving distraction. Focus, Phil. We’re talking about radiation…and how a passive body can heat a body with a heat source. I know how a passive body can cool a hot body…let us count the ways. Your GHG theory depends on passive materials heating hot materials. What are you going to do with radiation, Phil. Store it? Delay its transit time to space? You can reflect it, diffuse it, deflect it or focus it. You can’t store it or “back radiate” it to make a warm surface warmer. • You can’t store it or “back radiate” it to make a warm surface warmer It reradiates in all directions – the use of “back” is arbitrary and capricious, and assumes the location of another black body is somehow important. To be correct in what you say, it would have to stop radiating in a particular direction just because there is a black body in that direction – that’s ridiculous. You’re confusing net heat flow with absolute heat flow. A hot black body is in fact warmer if there is a cooler black body radiating toward it, simple because net heat flow is less. Dr Curry may confirm that I’m a definite skeptic, but I’m also a physicist and the linked chapter 2 and the posts here based on it are not even close to reasoned. • Reading comprehension still lacking I see! Missed this did you? And this, the first sentences in the cited reference: “When a bare thermocouple is introduced into a flame for the measurement of gas temperature, errors arise due to the radiative exchange between the thermocouple and its surroundings. In the standard suction pyrometers a platinum-rhodium thermocouple, protected from chemical attack by a sintered alumina sheath, is surrounded by two concentric radiation shields.” Yes Ken we are talking about radiation but unfortunately you don’t understand it. • If you are checking things out: try the 2nd la of Thermodynamics. • Phil, apparently you are confused by the slowing of a flux as you are not actually measuring the temp of the hot body, only the heated body, the thermocouple. 15. The atmosphere is in thermodynamic equilibrium. There are slight variations which are caused by certain cyclical processes which the proponents of AGW mostly refuse to accept. CO2 concentration is not one of them. John Tyndall did not prove a damn thing about CO2 absorption. His equipment was far too primitive to distinguish between absorption, reflection, refraction, diffusion, scattering or anything else. He incorrectly concluded that all energy missing between the source and the pile in his half baked experiments had been absorbed by CO2. Above all he ignored Kirchhoff’s law. The conservation of energy falsifies the “greenhouse effect” because as per Kirchhoff’s law that which absorbs, equally emits. 
This fact is absent from Tyndall's ramblings and exposes him for what he was. Nothing traps in heat, quote: "In either case, the characteristic spectrum of the radiation depends on the object and its surroundings' absolute temperatures. The topic of radiation thermometry for example, or more generally, non-contact temperature measurement, involves taking advantage of this radiation dependence on temperature to measure the temperature of objects and masses without the need for direct contact." "The development of the mathematical relationships to describe radiation were a major step in the development of modern radiation thermometry theory. The ability to quantify radiant energy comes, appropriately enough, from Planck's quantum theory." According to Kirchhoff's Law any substance which absorbs energy will equally emit that energy. CO2 has a lower specific heat capacity than O2 and N2. The atmosphere which is 99% N2 and O2 is in relative equilibrium. Therefore adding more CO2 at trace amounts to the atmosphere will simply force the CO2, with its lower specific heat capacity, into equilibrium with the rest of the atmosphere. The higher the concentration of CO2 the lower the overall atmospheric temperature will become. "A simple reproducible experiment" "Specific Heat Capacity of Gases" AGW theory requires that we suspend our knowledge of this obvious fact and accept that it is the 0.0385% CO2 which forces the other 99% of the atmosphere into equilibrium with itself. It is the same logic as claiming that by taking a pee in the ocean, you have warmed the ocean. When in fact your pee has been chilled by the ocean. It's called semantics. It is interesting that Judith has played the appeal to authority card. It is also interesting that those who appeal to the authority of Tyndall and the RS (7GT/1Gt human v's natural CO2!) fail to acknowledge that they are relying on primitive out of date 150 year old "science" which has not even been critically re-examined. Anyone who quotes John Tyndall as the man who proved the "physics" of the "greenhouse effect" displays nothing short of sheer ignorance. It is the ultimate in the bogus appeal to authority. John Tyndall was a fool and a fraud. Above all he was an insider at the Royal Society. Tyndall's experiments have as much value as Sir Paul Nurse's implication in his recent Horizon "program" that natural processes account for 1 Gt CO2 while humans account for 7 Gt CO2, i.e. NONE. So can we quickly dispense with the pseudo science of John Tyndall and get back to reality? That appeal to authority was for yesterday's people, those who had faith in the integrity of science, scientists and trust in the Royal Society. Those people are long gone. (Last seen heading south on highway 51 with Trust in the passenger seat and Faith at the wheel!) • You can see why I am personally not taking this on in any detail, it is just endless. You incorrectly state Kirchhoff's Law, α_λ = ε_λ: it says that at a particular wavelength, the fractional absorptivity equals the fractional emissivity, where the fractional part is relative to the intensity of black body radiation at that wavelength. So if an oxygen molecule at temperature 200K receives a bunch of solar radiation in the ultraviolet bands, it will also emit in the ultraviolet bands, but because the oxygen molecule is relatively cold, there is almost no actual energy emitted by an oxygen molecule with temperature of 200K.
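To put a number on that suppression: the Planck function sets the ceiling on thermal emission at each wavelength, and at 200 K it is vanishingly small in the ultraviolet compared with the thermal infrared. A minimal numerical sketch (Python; the emitter is treated as an ideal blackbody purely to set the scale, whereas a real O2 molecule has line emissivities far below one) illustrates the point:

    import math

    # Planck spectral radiance B(lambda, T), in W per m^2 per steradian per metre of wavelength
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23

    def planck(lam, T):
        return (2.0 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

    T = 200.0                    # a cold emitter, in kelvin
    print(planck(300e-9, T))     # 300 nm (ultraviolet): of order 1e-88, utterly negligible
    print(planck(15e-6, T))      # 15 micron (thermal infrared): of order 1e6, dominates completely

However large the ultraviolet absorptivity may be, the re-emission in those bands is weighted by the Planck function at 200 K and so carries essentially no energy.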
Your next sentence is a mistaken interpretation of very basic elements of the kinetic theory of gases. And on and on . . . My point in not rebutting all this personally is that I would need to spend an hour on each incorrect sentence to try to educate people that don’t already understand this. Roy Spencer and scienceofdoom have already tried. And there are hundreds of such sentences to rebuke. • I fully understand agree on your point on this issue. I can not understand your thoughts/position on “post normal science” and why “climate scientists” opinions should be given an preference in regards to governmental policy. • when did I EVER say that scientists should be given a preference in regards to governmental policy? I have been very actively fighting against that! • Then I have misunderstood your thoughts and the meaning applied to post normal science. • yes, that is my great frustration. • Post Normal Science, or Special Pleading? • I have to admit to misunderstanding it too, in that case… • Judy, The tail does not wag the dog. E in = E out. DITO darling. • no, energy in does not equal energy out. • No, energy in does not necessarily HAVE TO EQUAL, on virtually any time scale equal energy out. Does energy never get used? Does energy never get taken out of the system “permanently”? Of the energy taken out of the system, what determines when it is put back into the system, and how, as what? • Derek, Please consider the principal of the “conservation of energy”. “Does energy never get used?” I believe “converted” is the word you are looking for. “Does energy never get taken out of the system “permanently”? Sorry, NO COMPRENDE ? ? ? Taken out by what Derek ? • Will, Sorry, NO COMPRENDE ? ? ? K&T “timescales”, do NOT compute Will. Hence “permanently”…, sedimentary rocks. Re “converted” – does that mean “some” will never be returned to escape from the “system” as heat (energy lost to space)? “Life”, and “work done”, being the obvious examples. • Derek I have made the point about the energy that does not leave the system in my paper here: I think you know what I mean when I say E in = E out. Apart from hair splitting, do you actually have a point? • E in = E out. That’s an equilibrium condition, Will. Given that we’ve been doubling the amount of CO2 we add to the atmosphere every three or four decades for the past century or more, we are nowhere near equilibrium. Nor will we be until (a) we hold constant the amount of CO2 we add each year and (b) nature catches up. Expect (b) to happen roughly three decades after (a). But don’t expect (a) to happen until we can no longer afford fossil fuel. And at that point (a) won’t happen anyway since our CO2 production will decline thereafter rather than holding steady. David Archer believes things will remain hot thereafter. I disagree: I believe that after we stop emitting CO2 the temperature will plummet even faster than it has been rising due to the way equilibrium works. Conceivably by 2150 we’ll be in a Younger Dryas type ice age, though at that point we’ll surely have figured out some way of preventing that. • Will, earths atmosphere is not a closed system. The sun adds energy to it, which it then rediates to open space. Hence E in = E stuck in system + E out. • That is probably the most enlightening statement in this thread!!!! 
• Ein – Eout = delta global energy storage (mostly as heat in the oceans and atmosphere)- only at top of atmosphere • Energy is also stored with chemical changes, plant growths, animal grows … and stored energy dissipated thru plant deaths, fossil fuel uses … 16. To Harold: Yes physics is very goofy, in particular particle statistics physics. I start from the same wave equation as Planck and use a finite precision dissipative effect instead of jibberishy statistics. So what I do is less spooky than what you hint at. It is remarkable that in a discussion about the “greenhouse effect” physicists have nothing to say. To me it is a physical phenomenon that physicists should be able to grasp, but it seems they don’t. • I guess you didn’t get it. I’m a trained Physicist, now retired. I was pretty sure I just said something about the greenhouse effect, and particularly pointed how a simple thought experiment shows how wrong your theory is. I don’t intend to try to convince you, but my thought experiment should convince almost any reasonable idiot your theory is wrong. I don’t use different standards for either side of AGW. I have fairly rigorous standards,, which you have failed, and most ot the AGW papers also fail my standards. Sloppy work on the AGW’s crowd’s part doesn’t excuse sloppy work on the anti-AGW’s side. As for dragging Tyndall into the discussion and how the physics hasn’t been looked at, try reading some of Dr. Earl W. McDaniel’s and others’ books from decades ago on details of atmospheric excitation and radiation. Dr. Curry – FYI, Dr. McDaniel taught Physics at Georgia Tech, and had a great sense of humor. 17. “Statistics is not physics, just imagination, and physical particles have little imagination.” If there’s a more rigorous, more unchallengable, more awe-inspiring development in physics than the development of statistical thermodynamics I am not aware of it. 18. What are the main results of statistical thermodynamics with some form of informative content? • Claes This is a very interesting thread. If it wouldn’t be too much trouble, would you mind using the ‘reply’ button to respond to comments. It can be found next to the name and date/time of the person that you are replying to. It positions your response at the correct point in the blog and makes it easier for us lurkers to follow the argument. Many thanks. 19. Having read just the first couple pages of the second chapter, I know this argument is going to ‘creative’. The first argument is rather interesting. Blackbody radiators absorb all frequencies of light (definition of ‘black’), but only emit radiation with a specific spectrum determined by the body’s temperature. I think that’s fairly standard physics canon. But the train gets off the tracks pretty quickly after that. The author makes the statement, ‘The net result is that warmer blackbody can heat a colder blackbody, but not the other way around.’ which is obviously wrong. But why? Before this gem of a statement, the author goes through several analogies (which aren’t as informative as equations) proving to himself that only high frequency light that is not being emitted by the blackbody can increase the temperature of the body. This energy is absorbed, then Stokes shifted to lower energy by coupling to internal modes of whatever form of matter we’re discussing. In molecules, mostly vibrational and rotational degrees of freedom, along with intermolecular collision, play this role. 
The implied converse of this statement is that energy absorbed by the warmer blackbody from the colder blackbody is not outside of the frequency range of the warmer blackbody’s spectrum, therefore it doesn’t add heat and doesn’t increase the temperature! The question then becomes, if we are talking about blackbodies that absorb ALL frequencies of light, what happens to the energy in the lower frequencies absorbed by the warmer blackbody? Surely there is energy in those photons/waves. Because the warmer blackbody is, in fact, a blackbody, it MUST be absorbing those lower frequencies. What happens to that energy? I think most of us know that according to the conservation of energy, those lower frequencies absorbed by the warmer blackbody increase the temperature of that blackbody, even though the radiation was emitted by a colder blackbody. Kirchoff’s law is the mathematical manifestation of this fact. The emissivity of the blackbody in thermal equilibrium equals its absorptivity. Therefore, the thermal equilibrium of a blackbody can be shifted by changing its absorptivity in ANY SPECTRAL REGION, not just the high frequency region. This can be accomplished by increasing the inward flux of low frequency radiation due a nearby colder blackbody, as is the case with atmospheric greenhouse gases in the case of climate. So, I don’t doubt that the author’s math and equations are correct. Unfortunately for him, it’s the interpretation of those equations, along the lines of high pass filters and classrooms, that is flawed and ultimately leads to the incorrect conclusion that the greenhouse gas can’t exist. Not even that it doesn’t exist, but that it can’t. It’s brilliant in its simplicity, really. I would like to see this author handle the fact that we clearly observe a completely isotropic cosmic microwave background surrounding everything corresponding to a 4K collection of intra-solar and inter-stellar gases. I think that fact is fairly irrefutable proof that this guy is totally wrong. Moreover, why are we continuing to clamor to convince people like this that they are incorrect. Anyone who put enough time into convincing themselves of this type of theory after many, many attempts of others to ‘disprove’ them is not going to be swayed by observational evidence, proof of principle experiments or even reason. It’s better to not give their theories credence by taking the time to ‘debunk’ them. Comments welcome. • Comments welcome. You must be aware that the authors are partaking in these discussions on this thread. They’ve already posted a number of comments. So why are you referring to the authors in the third person? Take a glass of water at 99 C and surround it with a dozen glasses of water, all also at 99 C. Does the water in the glass in the center get warmer? Does it cool off more slowly than it would if the other glasses were not present? • The net transfer of energy between the 99C glasses of water will be zero but all will be radiating energy at a rate appropriate for a single glass of 99C water. Again, all radiating but net transfer zero. As for cool down, yes the central glass will cool slower. I suppose you could say that the other glasses are insulating it. What is happening mechanically is that the outer glasses are exposed to a cooler room so the net energy flow between them results in heat loss to the room. The inner glass then is exchanging energy with an incrementally cooler glass of water so it experiences an incremental net loss of energy. 
Until final equilibrium is achieved, the inner glass will remain warmer than the outer glasses and the outer glasses will remain warmer than the room. • So at what point does the glass in the centre become warmer? Answer: At no point. Take note Roy Spencer and Science of doom, its called ENTROPY. Without a continuous energy input you have no net increase from so called “back radiation”. • But there is a continuous energy input into the Earth’s climate system; it is provided by the Sun. • “But there is a continuous energy input into the Earth’s climate system; it is provided by the Sun.” There is also continuous darkness. In reality, any given spot on the surface directly under the solar point receives at most 25% of the continuous energy input from the sun in any 24 hour period, but generally much less. You know that night/day warming/cooling thing? • Wow, this is really simple physics. I’m not a all sure how this can be so hard to follow. In the water glass case, there is no steady energy input to the center glass of water so it will loose energy to the environment. The surrounding glasses increase the time that it takes to cool by radiating heat ‘back’ to it as they also cool by radiating their energy. To increase the temperature of an warmer object by a cooler object, the warmer object must have an continuing energy input that must be dissipated. In that case, the presence of the cooler object near it radiating part of its energy toward the warmer object results in that energy being added to the total that the warmer object must dissipate. The temperature of the warmer object must increase again to radiate that somewhat larger total energy input. Of course, if you insist there is no such thing as a photon or that two streams of photons cannot pass each other traveling in opposite directions, we probably will never be on the same ‘wavelength’. I’ve spent my life around electronics, radio, and nuclear physics. Maxwell’s equations rock for many aspects of electronics and radio but they are just very handy tools. Because Maxewell’s math works a good share of the time does not mean it defines reality. It is not very useful for use when counting gammas to determine the activity level of a radioactive source. Calculations based upon photons and nuclear interactions are the tools that work there. You should use the tool that suits the job at hand. I believe the photon view is the correct one for radiated energy discussions. This is basic physics and basic engineering stuff. However, do not take this to mean that I believe doubling of the atmospheric CO2 is going to be a big problem. I don’t. I just prefer simple physics not be twisted to make a point. • jae Could you have picked a more complicated example, if you’re being rhetorical? Dr. Curry’s more creative students will no doubt be asking about partial gas pressure, room temperature, convection, conductivity, where the lights are in the room, and will you refill the glasses as evaporation causes volume change, to start with. ;) A glass ingot at 99 C, I could understand. Especially if you included conditions like, “in a closed system initially at STP,” and “surrounded in all directions with no significant gaps,” and “all ingots behave as uniform spherical black bodies,” etc. Does the water in the glass at the center get warmer? Unlikely, though Dr. Curry’s students could contrive extrinsic conditions to make it so, I am sure. Does it cool more slowly? Likely, for most sets of extrinsic conditions, I think Dr. Curry’s able students will find. 
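A crude radiative-only estimate backs that up. In the sketch below (Python; the surface area, water mass, and blackbody behaviour are invented purely for illustration, and conduction, convection, and evaporation are ignored), the central glass cools when its surroundings are at room temperature and merely holds its temperature when ringed by 99 C glasses; at no point does it climb above 99 C:

    sigma = 5.67e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4
    A, m, cp = 0.02, 0.25, 4186.0       # assumed area (m^2), water mass (kg), specific heat (J/kg/K)

    def temp_after(T0, T_surr, seconds, dt=1.0):
        # Step the radiative balance forward: dT/dt = -sigma*A*(T^4 - T_surr^4)/(m*cp)
        T = T0
        for _ in range(int(seconds / dt)):
            T -= sigma * A * (T**4 - T_surr**4) / (m * cp) * dt
        return T

    T0 = 372.0                           # 99 C expressed in kelvin
    print(temp_after(T0, 293.0, 600))    # alone in a 20 C room: loses several kelvin in ten minutes
    print(temp_after(T0, 372.0, 600))    # ringed by 99 C glasses: stays at 372 K, never above it

The surrounding glasses change how fast the centre glass loses energy, not the direction of the net flow.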
More to the point, could you expand on your point, please, as it elludes me. (Though I’m sure Dr. Curry’s students would be able to explain it to me.) • I think we have to be very careful about how we set this problem up. We are using language of ‘external energy sources’ versus the system of interest, the surface of the earth in most of this discussion. In this case, the outer glasses ARE an energy source for the central glass. They just happen to be at the same temperature as the central glass, in stark contrast with the earth-like situation in which the sun has a dramatically different temperature from the earth. So let’s a assume the ‘other’ glasses are in a circle around the central glass and that the glasses can only emit energy out into the plane that contains all the glasses, for simplicity. Being all at 99 C, each glass will have some the same emissivity and emit the same spectrum. It is very likely that each glass can absorb most, if not all, of the energy emitted by the other glasses. Now, to me, there seems to be two parameters that matter the most. 1) the temperature of the surrounding air and 2) the distance of the 12 ‘other’ glasses from the central glass. The temperature difference between each glass of water and the surrounding air will determine the difference in the energy absorbed by the central glass and the energy that it gives off due to the air by conduction. The distance between the 12 ‘other’ glasses will determine what percentage of the emitted energy from those glasses can be absorbed by the central glass. The closer the ‘other’ glasses are to the central glass, the more of the emitted energy the central glass can absorb. The further away, the smaller the percentage of energy that can be absorbed. There *should* could be an air temp and distance from the ‘other’ glasses at which the central glass is taking in more energy from the surrounding air than it is giving off. In such a case, the central glass would increase in temperature as per the conservation of energy. It would be an interesting experiment to set up at least. • maxwell …..”It would be an interesting experiment to set up at least.”…. Its been done. Sounds like the proof of the zeroth law of thermodynamics. • Bryan, ‘Sounds like the proof of the zeroth law of thermodynamics.’ I don’t think this experiment would prove that there is no thermal motion at 0 K. I mean, that is the zeroth law of thermodynamics. Can you please explain a bit more what you are saying? • maxwell | There are a number of glasses at the same temperature “Being all at 99 C, each glass ” The zeroth law of thermodynamics may be stated as follows: If two thermodynamic systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other. In the early days this was assumed but later questioned. It had to be experimentally determined and since we already had Laws 1,2 and 3 it was called the zeroth. Perhaps you are thinking about law 3 which is about absolute zoro • I have to agree with Zorro.. er, with Bryan, I mean. Zeroeth law well-established, a black body among bodies of equal temperature will not increase in temperature, though this says nothing of how quickly each will lose temperature or in what pattern. 
If the air were above the temperature of the glasses, or if there were certain complex salts that underwent a physical change in solution in the central glass, if it contained fissile materials in high enough concentrations, if exothermic chemical changes happened in the ‘water’, if the air pressure were suddenly increased compressing tiny soda bubbles (or at 99C, nearly boiling so water vapor bubbles), if the glass were in an atmosphere of pure reactive metal particulate suspension (potassium, say), if there were a series of lasers deflected toward the central glass, or electric currents, or sound waves.. there are all sorts of extrinsic conditions that might raise the temperature of the central glass. Which, as I said, a complex example.. and I still don’t follow the point of it originally being posted. • It looks like a jumped the gun under some confusion. Sorry for that. • How about debunking quickly this way – two black bodies at different temperatures separated by a perfect reflector. With the reflector in place, heat travels from each black body, bounces off the reflector, and returns to the originating black body. Under the theory that was proposed in chapter2, when the mirror is removed, the heat from the cooler black body must still return to the cooler black body – it has to act as if there is still a reflector in place, but not so for the hotter black body. A ridiculous result. Bad physics… • No reasoned rebuttal yet? • Harold, before quantum physics and the idea that a photon was an actual particle there was wave physics that was, and still is, experimentally proven. Those physics experiments described scattering, reflection, interference, and cancellation. Why would this section of physics suddenly become null just because you apparently have forgotten it? There are several possible explanations of why a cooler body would not heat a warmer body contained in this PROVEN area of physics and which are contained in the correct energy equations that give a NET energy flow. • When creating the waves and waves of orcs in the big battle of Gondor CGI, one of the directors apparently asked one of the programmers, “That one orc, why’s he running the wrong way?” The programmer answered, “Looks like he panicked.” Panic didn’t make the waves of orcs not waves of orcs, or invalidate the wave equations, or nullify the physics of Middle Earth. And while waves of panic can be observed in mobs, a one-orc wave isn’t really well-modeled by wave equations. Which are, after all, only proven mathematical models, not themselves mathematical axioms. • Bart R, we don’t need no stinkin’ wave EQUATIONS!! We have stinkin’ empirical data. That’s all I am asking of the cold object heating, or slowing the cooling of, the warm object by radiation. Empirical data. One of the thought experiments I really like is the one where the heated object is surrounded by a cooler sphere which is thermostatically controlled. My imagination tells me that the radiation from the sphere is cancelled by the radiation from the heater leaving the net to be drawn off by the cooling system of the sphere with nary an effect on the heater itself. I am told that this cool sphere will actually heat the heater. If the heater is made hotter by the cooler sphere, its radiation should be elevated measurably. If it isn’t measurable I really don’t care with respect to the climate disagreements. • Right, right. Data and thought experiments, very nice. 
But then why were you bringing up data and thought experiments in a discussion of wave equations, again? I’m not sure I follow the analogy built into your thought experiment. Which is the thermostat? Which the air conditioner? Help me with my own thought experiment: There is a mall full of inventory that is replenished through another channel, with customers entering and leaving all the time through multiple sets of doors. The doors are designed to allow customers in without hindrance, but to slow some customers on the way out by redirecting them randomly through the mall (the mall hopes to increase sales this way). There’s a ‘sellostat’ set by the mall manager designed to set how much the doors slow outbound customers, but the manager’s salary is set by sales figures, and every day he gets greedier. Can you see where my thought experiment is going, and maybe suggest improvements? • Uhh, Bart R, where did I suggest a thought experiment? Haven’t the vaguest on yours. I’m a simpleton remember? By the way, doors are NOT a way to allow people in without hindrance. A strip mall does that. • kk To your first question, that’d be: “One of the thought experiments I really like is the one where..” And one simpleton to another, remember what? I like your idea about using a strip mall rather than doors. Much clearer, and suggests better parallels. So, strip mall manager welcomes all buyers to his locale, and takes some steps (advertising posters facing people as they walk away from the strip mall, principally, but also shills who block the way out and chat up secret sales, and the smell of food from the strip mall’s food vendors) to hinder buyers as they leave. At first, the manager sets out only small hindrances, but he believes that they work because the theory of advertising tells him so, and his mall sells more and more as he puts more hindrances up, and he sees no reason to stop putting up hindrances since he’s rewarded by profit. He’s so rewarded by profit, he pays off the local officials to allow him to put up more posters, and muscles out any competing shills trying to get buyers to leave for their strip malls up the street. So, do you think the mall will be more crowded, the more the manager hinders buyers from leaving? • And I continued saying I would like it done as an experiment. I still would. I do not want to discuss it as a though experiment. It has been done to death. • kk Right, but all experiments start with design, with an analogy or meaning to the model built, with some hypothesis they test… And I’m still not sure what yours tests. Could you expand on that? • I wouldn’t say test so much as measure. It would seem that climate science, and maybe other fields, agree that there is backradiation and that either it slows cooling of the hotter body or heats it. I want at least one experiment, preferably several differing ones, that quantifies that relationship. If it slows cooling, by how much. If it heats the body, by how much. Does there appear to be conditions that increase or decrease the effect we need to research more… Why am I so adamant about real experiments? Because thought experiments are limited to the variables that we put into the experiment, you know, just like models. If we do not know about it or do not know it well enough to get a reasonable ball park figure, or cannot convert it to mathematics at a resolution that is useable, we have no way of knowing whether the results of our thought experiment is valid. 
The world has a reality that we need to include and we do not know all that reality. Look at the empirical experiments that detail an effect very well, yet, we do not see the result of the effect generally in reality due to offsetting effects. I will grant that there needs to be a lot of thought put into designing the experiments for these same reasons and there is where we find a good use for thought experiments. If we allow contamination the physical experiment will not be giving us results useful for the original purpose. Thought experiments can help us design the experiment to try and exclude contamination so we have a higher certainty of measuring what we think we are measuring. • kk I’m aware of multiple real measurements cited in this topic and elsewhere on this blog using advanced and proven equipment for decades. I’m aware of multiple real experiments cited in this topic and elsewhere on this blog and in countless other sources. On its face, the phenomenon of reflected radiation is so everyday commonplace that it would take extremely strong evidence, and with the addition of so much experimental proof, much better rebuttal than has yet been offered on these pages, to credit your words, “I want at least one experiment..” as anything but flat-out, and excuse me for being blunt, lie. • Bart R, when did Backradiation become REFLECTED radiation?? This is something I definitely missed along with 99.99% of papers and work I have never read that are obviously inexistence in spite of my ignorance. Please note, I am NOT trying to claim there isn’t literature, only that I am extremely limited in my exposure to the literature and reality. • kk You get the distinction between reflected radiation and back radiation? Means you have an advanced and subtle grasp of the topic, and should be able to handle the things talked about in the blog by people who do serious measurement, experimentation, analyses and interpretation of these things for a living. (Which would be not me.) You want quantified relationships, and that’s all well and good. Check out the slightly unpleasant quote referenced in or the very nicely put lower down this page for context and background about quantified measurements, and some of the problems with experimental interpretation present in the subject. I frankly don’t believe we are going to get to widely accepted experimental results along the lines of your suggestion any time soon, unless someone builds a pair of hermetically-sealed IR-transparent domes the size of Nebraska and experiments with changing their CO2 concentrations repeatedly under differing conditions of sunlight. Too much room for waffle, and too much brute force logic. And even then, questions of applicability will bedevil us. What we need is a guy with a teacup and some milk, and the ability to clearly explain so anyone can understand why the milk particulates suspended in it move.. erm, sorry. Wrong experiment. But you get my drift? • Bart R, I have just a thin veneer of knowledge and none of the math skills making it a very thin veneer. • Photon? Why did you switch frames? I said heat, nothing about photons – I’m using a classical EM frame. Maybe you thought I was talking about photons, since the waves would have to suddenly turn around and return to their source under the proposed theory, which doesn’t make sense to you. That’s my point, the proposed theory doesn’t make physical sense. • Harold, how does the HEAT travel without waves or particles?? • Sorry Harold, I got lost. OK, back to waves. 
The classical wave experiments show that waves can interfere, cancel, and augment (sorry for the layman’s terminology). Interference partially cancels or deflects, cancellation negates and augmentation adds. What happens to the energy Harold?? This has been shown to happen in experiments. Doesn’t it happen out there in the atmosphere? The thought experiment is that 2 bodies are radiating against each other. The colder body will be radiating at a lower energy peak, but the warmer body will be radiating at that wavelength also. Why won’t the waves at the same frequencies cancel or interfere? At the quantum level I am even less adept, but, I understand that the particles need to have a correct energy state to absorb energy. What happens if the bodies do not have the correct open energy state to absorb the wave/photon carrying the energy? Won’t it be deflected/reflected instead? Isn’t this a more reasonable explanation of what we see in the atmosphere between GHG’s and the surface and each other for that matter? Finally you suggested the wave would HAVE TO RETURN TO ITS SOURCE. . The wave would be deflected in another direction, although it would seem that it could be deflected back to the originating body. What amazes me is that there is all this partially understood and misunderstood knowledge being tossed around. Yeah, kinda like me. • kuhnkat, reference your comment about e/m waves from several sources interacting with each other, maybe you should take intensity and phase (and polarisation?) into consideration too. It may help to move your though experiment on if you gave some consideration to a practical experiment carried out around 1800 by English scientist Thomas Young described in “Instruction Manual and Experiment Guide .. ADVANCED OPTICS SYSTEM .. Experiment 4: The Wave Nature Of Light ( Alternatively you could set up youjr own experiment using a candle and some card board sheets with slits cut in them ( If you’re interested, this interaction of e/m waves from a single source taking different routes to a common destination can present a surprising problem for Line-of-Site (LoS) radio communications links. Although the optical path from transmitting to receiving aerial may be unobstructed, the radio signal can be reduced (even to zero) as a result of cancellation of the signal travelling over several different paths ( I’m sure that Joel, the thread’s resident expert in all things to do with theoretical physics, can explain it all in simple terms far better than I. He must do it all of the time when lecturing to his RIT students. Best regards, Pete Ridley • Pete, to be honest, I find kuhnkat’s posts quite painful to read. He understands just enough about the existence of interference to be led astray into a variety of completely wacky conclusions. First of all, interference occurs in only a very carefully prescribed set of circumstances. The light has to be of the same wavelength and to be “coherent”, which means that the waves are in lock-step with each other. Also, the geometry matters. Waves traveling in opposite directions (even assuming the coherence and all) don’t cancel each other out except at very specific locations separated by distances of half of wavelength, which means on the order of microns for what we are talking about. In between, they add together constructively. The result is a standing wave, such as is seen on a guitar string. So, I really don’t see anything useful coming out of kuhnkat’s ramblings. 
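The standing-wave point is easy to check numerically. The sketch below superposes two equal-amplitude scalar waves travelling in opposite directions; the numbers (unit amplitude, one wavelength, one period) are purely illustrative.

```python
# Two equal waves travelling in opposite directions do not annihilate each
# other's energy: they form a standing wave whose average energy is the sum.
import numpy as np

k = w = 2 * np.pi                                        # wavenumber, angular frequency
x = np.linspace(0.0, 1.0, 2000, endpoint=False)          # one wavelength
t_samples = np.linspace(0.0, 1.0, 200, endpoint=False)   # one period

def mean_energy(field):
    # space- and time-averaged field**2, proportional to the energy density
    return np.mean([np.mean(field(t) ** 2) for t in t_samples])

right = lambda t: np.cos(k * x - w * t)   # wave travelling in +x
left  = lambda t: np.cos(k * x + w * t)   # wave travelling in -x
both  = lambda t: right(t) + left(t)      # superposition: a standing wave

print("one wave  :", round(mean_energy(right), 3))   # 0.5
print("two waves :", round(mean_energy(both), 3))    # 1.0 -- the energies add
```

The superposition redistributes the energy into nodes and antinodes spaced half a wavelength apart, but the average energy is the sum of the two contributions rather than zero, which answers the "what happens to the energy?" question raised above.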
They are just an attempt to turn the nonsense that we know Claes and the other Slayers are spewing into something intelligible. But, you can’t produce sense from nonsense by adding more nonsense. • Hi Joel,. I felt confident that a top lecturer like you woujld be able to present a compexy subject like e/m wave interactions in a simple manner. Well done, but can you please try to avoid unusual words like “coherent” and concepts like “standing waves” which might confuse us simple lay people. Best regards, Pete Ridley. • Pete, (1) The adjective “top” as in “top lecturer” is yours, not mine. (2) I gave brief descriptions of what “coherent” and “standing waves” mean. However, I also wanted to use the correct terminology so that people can easily look on the web to find more detailed description. For example, here is the Wikipedia page on coherence: and here is their discussion of standing waves: • The problem was, Joel, that Ridley turned to his library to look it up. But he couldn’t find either “coherent” and “standing waves” in his dog-eared copies of The Elders of Zion and Mein Kampf. So it’s good that you’ve given him links to look the terms up. The other problem is, he won’t. Like Kuhnkat, Ridley uses ignorance as a war club to bludgeon his enemies. Pete also prefers using his time and bandwidth to search for Holocaust deniers, Neo-Nazis, and Jihadists like Daniel E. Michael to quote. (Did you READ the Michael letter Ridley quoted yesterday that ends with “Death to America!!!”) • Joel, As the earth emits a relatively continuous band it emits the same wavelengths that are emitted by the GHG’s. While I readily agree that the amount of interaction is probably quite small, if we toss out enough minimal influences we make other amounts larger. (one of many issues with models) How about an actual experiment to measure the backradiation effect. Something like a tube with earth at one end and a short wave source at the other. Use at least two runs, one with atmospheric gasses with no GHGs and one with GHGs computed to give the actual backradiation of a column in the open atmosphere. Measuring how fast the earth is warmed with and without GHGs should give a rough idea of how much the backradiation effect is. Or, has this been done and can you point me to the paper?? Simply shining IR through a tube of co2 tells us little about the effects of the radiation emitted by that co2 or h2o or ch4… on the ground. For the truly anal we could use differing types of material such as granite, dirt, wet dirt, loam, sand… to see how the effect is modified if there are measureable differences. Actually a third run with close to a vacuum would be good to show that there is no difference between non-GHGs and a vacuum insofar as the rate of warming of the surface. That is, there is negligible backradiation from non-GHGs matching their negligible absorption. This is the type of straightforward experiment that MIGHT convince some sceptics and deniers that there really is a measurable, significant in relation to the earth system, increase in warming speed. It should be able to clarify which of the ideas of no effect, slows radiation from the earth, or warms the earth is correct. I would note that some significant warmists apparently believe there is a real warming. An actual series of experiments should be able to sort this mess out!! 
It is really silly to have all these conflicting discussions over the number of angels that can dance on the head of a pin when we should be counting them with electron microscopes or other detectors. (well I guess there is the issue of finding the pin they are dancing on or luring them to our dance) • Kuhnkat: I don’t even understand your experiment…and I don’t really see why scientists should waste their time running it. For one thing, the basic physics of the radiative transfer in the atmosphere and specifically radiative forcing of CO2 is well-accepted and well-tested science by everyone who has even a small modicum of respect within the scientific community (e.g., Roy Spencer and Richard Lindzen accept it). So, the issue comes down to feedbacks and that is not something that can be settled by such a simplistic experiment. For another, I am under no illusions that we can ever convince “skeptics” who doubt such basic tenets of science to become AGW believers. Such people are like Young Earth Creationists: they don’t disbelieve AGW because they doubt the science; rather they believe any bogus nonsense attacks on the science because they are ideologically opposed to the actions that follow from addressing AGW. If you guys can’t even comprehend and accept basic science about which there is no serious controversy whatsoever and instead believe nonsense, how am I ever to convince you on the issue of feedbacks and climate sensitivity, which actually require weighing the balance of the evidence? It is like telling me that if I can only get a Young Earth creationist to abandon the belief that the earth is only 6000 years old, he will actually fully accept evolutionary theory…Ain’t gonna happen! • Hi Joel, I agree with your comment (yesterday at 9:09 pm) about the heat retaining effect of water vapour and some trace atmospheric gases preventing some of the IR energy that is emitted by the earth from radiating back out unobstructed (AKA the Greenhouse Effect) and that humans adding a tiny amount of CO2 could result in a small (beneficial?) rise in temperature. Ias you say there are not many respected or knowledgeable scientists who consider otherwise. On the other hand I’ll be very very surprised if you can provide a sound analysis of your own that convinces true sceptics that the balance of evidence indicates that a global climate catastrophe looms as a result of our continuing use of fossil fuels. Rest assured that the use of fossil fuels will continue for many many decades yet and all of the scare-mongering by the power hungry, the UN, the politicians and the environmental activists will not change that. I still haven’t seen your refutation of the analysis carried out by Roger Taguchi showing that the feedback effect is negligible. Was it too hard for you? OK, here a simple question. If positive feedback due to increased water vapour arising from a slight increase in global temperature due to our use of fossil fuels is able to cause a global climate catastrophe in the next 90 years why didn’t such a disaster happen during the Roman warming or during the MWP? I’m sure that you can explain that in simple enough terms for lay people like me to understand, but please don’t try to argue that the rate of warming now is far greater than ever experienced during the past 300M years or that Mann was correct and there was no such thing as the MWP. BTW, have you started reading “The Hockey Stick Illusion” yet – no, I thought not. 
Best regards, Pete Ridley • Joel, the experiment is to see how fast the material warms with and without ghg’s in the atmosphere giving an empirical figure for the effect of backradiation in a carefully controlled experiment. Why is this important? Because deniers like me say there is none. Luke-Warmers and warmers believe in varying amounts of slowing of the surface cooling, and some alarmists say the backradiation actually raises the temperature of the material above the level that the short wave can make it. Even if everyone suddenly went sane and decided there was only a reduction in the rate of cooling (faster warming also) it would be good to actually quantify by empirical experiment exactly what the magnitude of the effect is. You say: Yet, that statement says NOTHING about the magnitude of the effect on the earth itself. I am sure you agree that different materials would react differently even if your theory is correct. Being able to put constraints on the effect in the models would be a real contribution outside of just making some people happy that their position was proven. The Climate Science community appears to me to be adverse to the drudge work of detailed science. It is time they stopped talking about saving the earth and started doing the real work necessary to prove the hypotheses and giving us more information on what may need to be done. This one paragraph shows how hopelessly confused you are about something that is just basic physics! You make this distinction between “rais[ing] the temperature of the material above the level that the short wave can make it” and “a reduction in the rate of cooling”. There is no such contradiction between those two pictures: CO2 slows the rate of cooling and, in doing so, it causes the temperature of the earth to be warmer than it would be in its absence because the earth is heated by the sun and its steady-state temperature is determined by the balance between the rate at which it receives energy from the sun and the rate at which “cools itself” by sending energy back into space. The fact that you have been unable to comprehend this shows how you are unwilling to allow yourself to comprehend the most basic of scientific principles. The magnitude of the radiative effect of CO2 is not under debate in any serious quarters. Roy Spencer and Richard Lindzen and the rest of the scientific community all agree it is 3.8 W/m^2 (+/- 5%, or at most 10%). The magnitude of the resulting temperature change is still under debate, but this involves the question of feedbacks, which alas can’t be settled by any experiment smaller than the entire scale of the earth. (Which is not to say we can’t learn a lot about feedbacks from empirical data. In fact, we can and have. See, for example, here: ) • Joel, You have very bad radiation, the Earth cooling and the energy content concept here. 0.04% CO2 in the atmosphere has absolutely minimal energy content in it when comparing the energy content of the atmosphere (orders of magnitude more than CO2) not to mention the LW radiation energy from the Earth surface (orders of magnitude larger than atmosphere). I guess you know the mathematical differention of infinitely small -> 0, thats CO2 capable of warming the air -> 0, warming the Earth -> 0 and CO2 capable of slow cooling -> 0. CO2 cooling warms the Earth is absolutely absurd if you have any energy concept at all. Warming and cooling are mainly due to huge amount of water presents on the Earth. The movement of water causes most weather changes. 
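For readers wondering what the 3.8 W/m^2 figure quoted a few replies above corresponds to in temperature terms, the standard no-feedback (Planck-only) conversion is a one-line calculation. This is only the textbook back-of-envelope estimate, using an assumed effective emission temperature of 255 K; it says nothing about feedbacks, which are the part actually under debate.

```python
# Back-of-envelope check of the quoted doubling forcing: the no-feedback
# (Planck-only) temperature response.  Feedbacks are NOT included.
sigma = 5.67e-8        # W m^-2 K^-4
T_e   = 255.0          # K, assumed effective emission temperature
dF    = 3.8            # W m^-2, quoted forcing for doubled CO2

# OLR = sigma*T_e^4, so d(OLR)/dT = 4*sigma*T_e^3 and dT = dF / (4*sigma*T_e^3)
planck_response = 4 * sigma * T_e**3           # ~3.8 W m^-2 per K
dT = dF / planck_response
print(f"Planck response: {planck_response:.2f} W/m^2 per K")
print(f"No-feedback warming for {dF} W/m^2: {dT:.2f} K")   # roughly 1 K
```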
I would advise you to appreciate the energy contents in them and study the physical properties of water, CO2 and the energy they are involved or you will never learn and keep on misinforming the general public wasting your life unless you have an agenda in order to stay on the gravy train. The Earth receives the Sun energy, stores (chemically and physically) some of it, reflects some of it, refracts some of it, conducts some of it, convects some of it, radiates (naturally including decays, volcano eruptions, human consumptions of food and fossil fuels) some of it. The Sun itself also in an ever changing state of emitting energy. There is no steady state temeperature, only instantaneous temperature. The fact that you have been unable to comprehend this shows how you are unwilling to allow yourself to comprehend the most basic of scientific principles of energy, cooling, heating and radiation. • “Warming and cooling are mainly due to huge amount of water presents on the Earth” should be amended as “Warming rate and cooling rate are mainly due to huge amount of water presents on the Earth” • Sam NC: It would have been more precise of me to talk about the rate at which energy is emitted or absorbed by the earth. Yes, the conversion of this into a rate at which temperature changes involves the heat capacity which, as you note, is largely due to thelarge amount of water. However, this doesn’t change the end result, i.e., the final steady-state temperature, but just how long it takes to get there. [Of course, this ignores water vapor or cloud feedbacks, which can affect the end result.] Well, if I fail to comprehend this, I am in good company with basically all of the scientific community. Why do you think you understand these things better than the National Academy of Sciences, the authors of the major physics textbooks which discuss global warming, etc., etc.? You are just fooling yourself…It is the Dunning Kruger effect ( ). Look, if you want to believe nonsense, I can’t stop you. Go play with your fellow travellers who believe the Earth is only 6000 years old and all the rest of the folks who would rather believe pseudoscience than science that conflicts with their ideology. Ignorance can only be cured if someone wants to learn. You want to remain ignorant and so you will. 20. To Judy: It is clear that you miss the points I want to make. Of course there are endless little things you can focus on and question, but in the spirit of Leibniz I ask you to try get the main message. I am not saying that my model is perfect. I try to make a point about radiative heat transfer based on a mathematical analysis of the same equation Planck tried to use but gave up with. If you focus on this equation, do see something of interest in my analysis? What is your model for radiation? Does it contain “backradiation”? Is it a stable phenomenon in your model? Next, you said you did not like Kiehl-Trenberth, and I asked you why? I do it again. And have you given your students my chapters for homework? It could be an educational experience, and students need assignments, right? • “I am not saying my model is perfect” Your model and main message are fundamentally flawed, as was easily shown. 21. To Maxwell: A warm body also absorbs low frequency waves but re-emit them and thus avoid getting heated by low-frequency stuff. Like an educated person simply does not get heated up by silly remarks from uneducated, only by remarks from more educated. Right? 
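The energy bookkeeping behind the replies that follow can be put in one line of algebra. The sketch below assumes a black surface held up by a fixed absorbed input S (a stand-in for sunlight); B is whatever additional flux the surface absorbs, at whatever frequency. The numbers are illustrative only.

```python
# A black surface with a fixed absorbed input S settles where emission equals
# absorption: sigma*T^4 = S + B.  Any extra absorbed flux B, regardless of its
# frequency, raises the steady-state temperature; otherwise energy piles up.
sigma = 5.67e-8                      # W m^-2 K^-4

def steady_T(S, B=0.0):
    return ((S + B) / sigma) ** 0.25

S = 240.0                            # W m^-2, assumed steady absorbed input
for B in (0.0, 50.0, 150.0):
    print(f"extra absorbed flux {B:5.1f} W/m^2 -> steady T = {steady_T(S, B):6.1f} K")
```

If the surface absorbed the extra flux B but kept emitting only sigma*T^4 for the old T, energy would accumulate without limit; that is the conservation-of-energy objection raised in the next reply.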
• Claes, to me, that seemed to be a weak response to a very clear post by Maxwell. You wanted Judith to give you the opportunity to debate the science contained in your book, so debate it properly rather than handwaving away difficult objections. • Mr. Johnson, Is the irony lost on you? 'A warm body also absorbs low frequency waves but re-emit them and thus avoid getting heated by low-frequency stuff.' Without warming the warm body with low frequency light from the colder body, there is a lack of energy conservation. In order to emit more low frequency light (i.e. the low frequencies already being emitted plus the absorbed low frequencies from the colder body) the thermal equilibrium must change, coming to a higher temperature according to the Stefan-Boltzmann law. Raising the temperature costs energy. So there are two options: 1) your theory violates the conservation of energy, because the emission of low frequency light by the warmer blackbody doesn't change in response to an increased flux of low frequency light from a colder blackbody, or 2) conservation of energy is preserved and your thesis (a cold blackbody can't heat a warm blackbody) is wrong. I'll let you pick which option you want. With respect to your poorly thought out classroom analogy, I am constantly learning from people who have less education than me. On an almost daily basis, in fact. So not only is your analogy not informative in the context of energy transfer via radiation, it's as fundamentally incorrect as your physical theory. Any other thoughts? 22. One thing that may be overlooked in these discussions on whether or not a cold object can heat a warm body through exchange of radiation is that a photon doesn't know where it came from; the only thing it "knows" is its frequency. All the properties, momentum, wavelength and energy are directly related to its frequency and vice versa. Measure one of the four and you know all of them. 23. The trouble with photon particles carrying energy back and forth is that it is an unstable phenomenon, or do you really think there is a highway with left and right lanes connecting two bodies? Why would a photon respect such traffic laws? Which equation is describing the physics you are hinting at? • Why does there have to be left and right lanes, and what happens when two photons traveling in opposite directions reach the same point in space? Do they collide, or interact in any way? Or do you have anything other than handwaving to support this statement from your book? "We argue that such two-way propagation is unstable because it requires cancellation, and cancellation in massive two-way flow of heat energy is unstable to small perturbations and thus is unphysical." Why does it require cancellation and why is it unstable? • Why would it be unstable? In the second chapter you obtain an equation which is the same as the Boltzmann law for 2 bodies and an infinitely small T difference, so the conclusion about stability should be the same. By the way, your equation is not symmetrical, meaning that the cold and hot temperatures do not have the same influence. So how do you generalise to an N>2 body problem? Looks trickier than classic Boltzmann to me. But more important, you throw out the quanta interpretation. Sure, it is not intuitive, but since Boltzmann it has been used to derive a huge number of physical equations, and to explain a lot of experimental results.
Throwing out quanta to obtain radiative transfer equations you like better is only the begining of the story, because now you will have to reinterpret THE major part (more important imho than relativity) of modern physics ( post WWI physics). This is not out of question, but it is a huge task, and a task far far far too big to start from just radiative heat transfer…even if historically it was the start up of quantum mechanics. To make such a body of inference collapse, a single new fact may be sufficient, but the new fact will usually not be the same as the one at the origin of the old theory, and the new theory should be as powerful as the one it replace. Not bearing well for your new interpretation, so yes, even if I like the first chapter a lot, the second one is definitely in crackpot territory… • I notice you have avoided the challenge of applying your theory to a real world problem such as heat loss from a pipe, the concept of a cooler body transferring heat to a warmer has great success in these situations and has been tested many times. Cut to the chase, try some of the problems on page 582 of Mills, ‘Heat and Mass Transfer’. Here’s a link in case you don’t have a copy. • BTW, your interpretation of radiative heat transfer is very easily testable experimentally: consider heat exchange between a hot body at T_h and a cold body at T_c=T_h/2. classic equation (eq 20 in chapter 2) gives R =sigma (T_h^4-T_c^4)=15/16*sigma T_h^4 your new equation (eq 21) gives R =4*sigma *T_h^3*(T_h-T_c)=2*sigma T_h^4, i.e. almost 2 times the heat transfer predicted by S-B. Quite easy to test using simple calorimetric experiment, no? 24. Believers in the greenhouse effect will not honestly take in information counter to their belief, no matter how it is couched. As long as everyone pretends that there is a legitimate scientific debate being engaged here, it is obvious that situation will continue unchanged. Meanwhile, the truth lies elsewhere than the mass of climate scientists, and the hapless public, supposes. What follows is a comment I started to post on Claes Johnson’s site a few days ago, but didn’t because I realized no one was listening. I’ll put it here just because I exist, and the facts exist, and it has to be said, and eventually admitted by everyone: You need to establish first how the atmosphere is basically warmed: By atmospheric absorption of direct solar infrared irradiation, or by surface absorption of visible radiation followed by surface emission of infrared. Climate scientists, and their defenders, who tout the greenhouse effect, believe the latter [which leads to the infamous backradiation], and ignore the former. But as I have tried to communicate, to other scientists and to the public (see my blog article, “Venus: No Greenhouse Effect”), comparison of the atmospheric temperatures of Venus and Earth at corresponding pressures, over the range of Earth atmospheric pressures (from 1 atm. down to 0.2 atm.), shows the ONLY DIFFERENCE between the two is an essentially constant 1.176 multiplicative factor (T_venus/T_earth) which is just due to the relative distances of the two planets from the Sun. Nothing more. It has nothing to do with planetary albedo, or with the concentration of carbon dioxide or other “greenhouse gases”. The only (small) deviation from this general condition is in the strictly limited altitude range of the clouds on Venus (pressures between about 0.6 and 0.3 atm. 
only), where the Venus temperature is LOWER (not higher, despite the carbon dioxide atmosphere) by just a few degrees than the strict 1.176 x T_earth relationship, due no doubt to the cooling effect of water (dilute sulfuric acid) in those clouds. The only way this overwhelming and definitive experimental finding (T_venus/T_earth = essentially constant = 1.17 very closely, encompassing the data of two detailed planetary atmospheres) can be explained is that the atmospheres of both planets are heated by the SAME PORTION of the solar radiation, attenuated only by the distance from the Sun to each planet. This means absorption of visible radiation at Earth’s surface, followed by surface emission of infrared (heat) radiation into the Earth atmosphere, cannot have anything to do with the basic warming of the atmosphere, because Venus is largely opaque to the visible solar radiation, and it cannot reach Venus’s surface (and is thus not part of that common portion warming both atmospheres). So the first unarguable fact is: Earth and Venus are both warmed by direct atmospheric absorption of the same infrared portion of the solar radiation. There is no speculation, no theory in this statement: It is an amazing (because so many scientists believe otherwise) statement of experimental fact, based on the actual detailed temperature and pressure profiles measured for the two planets (which have been available to climate scientists promoting the greenhouse effect for nearly 20 years now, which means they are incompetent). And it completely invalidates ANY “greenhouse effect” of additional warming by adding carbon dioxide to the atmosphere: Venus has 96.5% carbon dioxide (compared to Earth’s 0.04%), yet its atmospheric temperatures relative to Earth’s atmosphere have nothing to do with that huge concentration, but only and precisely to the fact that Venus is closer to the Sun than is the Earth. Venus’s surface temperature is far higher than Earth’s, because Venus’s atmosphere is far deeper than Earth’s. To tell the public — and to teach students — otherwise is to recklessly spread an obvious falsehood and steal hard-earned knowledge from the world, thereby misusing and ultimately defaming the authority of science in the world. Stop playing around with theoretical put-downs, and talking past each other, and admit that the Venus/Earth data completely and unambiguously invalidates the greenhouse effect. 25. Claes starting point is not concerned with the climate change issue as such. His contribution is to question if Plank and Einstein were correct to abandon classical wave theory in favour of the quantisation of electromagnetic radiation. To be sure Plank and Einstein were deeply unhappy with the situation and regarded the concept of the photon as a “fix” or even a “trick” which would give way to some fuller explanation of phenomena like the photoelectric effect and so on. IMHO the photon explanation is the best we have at the moment but I’m glad that imaginative people like Claes are ready to reexamine the fundamentals from time to time. I’m sure if a real problem about heat transfer required a solution Claes would produce a solution that competent Physicists would agree with. He would probably use the Poynting vector to give the direction and magnitude of heat flow. Which of course as Clausius pointed out is always from higher to lower temperature bodies. On the climate change issue he would say I’m sure that the colder atmosphere cannot increase the temperature of the warmer Earth Surface. 
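The 1.176 factor cited above is the number one gets from orbital distance alone, as the commenter says: for a body in radiative balance the equilibrium temperature scales as d^-1/2. A quick check, where Venus at 0.723 AU is the only input and equal albedo and emissivity are assumed:

```python
# Arithmetic behind the claimed "1.176" factor: absorbed sunlight scales as
# 1/d^2 and emission as T^4, so the equilibrium temperature scales as d**-0.5.
d_earth = 1.000      # AU
d_venus = 0.723      # AU
ratio = (d_earth / d_venus) ** 0.5
print(f"T_venus / T_earth from distance alone: {ratio:.3f}")   # ~1.176
```

Whether that ratio, by itself, settles anything about the greenhouse effect is exactly what the surrounding comments dispute.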
And he, in turn, throws out the superposition principle ( the two black bodies’ radiation patterns can be solved for indepandantly, and then added together), which holds for classical wave theory. I don’t see switching to a classical EM frame, and then having to destroy a central tenet of the classical EM theory an advance. You can’t have it both ways – classical EM holds and classical EM doesn’t hold. The very frame it’s put itno says his theory is flat 100% wrong. 26. Maxwell posts: which is obviously wrong. But why?” One warm body in dark space radiates energy in all directions except back at itself (ignoring internal self-balancing). Two warm bodies in dark space do that but also each warms the other which reduces the rate at which they cool. This is true regardless of relative temperatures. The cooler body radiates the warmer body (can’t be helped – it doesn’t know it is the cooler body and science doesn’t care) and that unavoidably slows the rate of cooling of that warmer body. Unlike electricity passing through a straight taut wire (and a tapered wire will demonstrate distributed radiated energy), no part of the wire radiates any other part of the wire. It is like that solitary radiating body in space. The thermal distribution is a consequence of the local resistance and thermal conductance. Not the case with radiated energy. Each object paints any other visible object and that object is compelled to react to that energy. • dp, it was a rhetorical question, but I appreciate your answering it in the context you used. It’s more practical than my own and hopefully will get through to more readers. • OK, I’m late to this and have what may be a very dumb question. But, dp, in the scenario you posit isn’t it possible, depending on the temperature, size, and proximity of the two bodies, that the cooler may actually increase in temperature, at least for a period of time, while it is never possible that the warmer object would increase in temperature? And isn’t that the point some are making, i.e. the colder body can not warm the hotter body? • The warmer body does indeed warm the colder body, but at the same time the warmer body gets also warmer than it would be without the colder body. It would still radiate as much as without the colder body and this radiation would disappear to the empty space. What the colder body does is that it is also radiating (although less) and some of this radiation is going to hit the warmer body and bring some heat to it. Some additional heat is heating the body whatever its source is. • Pekka, I have seen this explained before. If the colder body causes the warmer body to heat then the radiation of the warmer body will increase and it should be measurable. If we cannot measure it the effect is so small as to be ignored in the context of the climate debate (much larger effects are ignored by the Models). Can you point us to papers showing the experimental data on this? No one else has bothered to beat us over the head with the actual empirical data, that I have seen, and my head is really hard so takes a lot to penetrate it. 27. Claes, I could recommend some books on statistical thermodynamics if you’re interested. It seems to me that if one is going to dismiss it as “jibberishy” one ought to know something about it, if one is not to be be considered a crank. 28. To David: I have tried to learn from books on statistical thermodynamics but I belong to the large group of mathematicians who cannot understand what this theory tells you about reality. 
As Harry DH says: A constructive debate requires constructive minds. To argue with a three year old who has decided not to do something requires something other than good old logic. Yes; it is a good idea to go back and understand that Planck and Einstein and Schrodinger were not happy at all with particle statistics. Maybe they had some good reasons not to be, which are still valid. 29. Right, they are challenging Planck and Einstein, so we should prove it. From the chapter on Blackbody radiation: "7.13 Stefan-Boltzmann's Law for Two Blackbodies. The classical Stefan-Boltzmann's Law R = sigma*T^4 gives the energy radiated from a blackbody of temperature T into an exterior at absolute zero temperature (0 K). For the case of an exterior temperature T_ext above zero, standard literature presents the following modification: R = sigma*(T^4 − T_ext^4), (20) where the term sigma*T_ext^4 conventionally represents "backradiation" from the exterior to the blackbody. It is important to understand that this is a convention which by itself does not prove that there is a two-way flow of energy with sigma*T^4 going out and sigma*T_ext^4 coming in. In our analysis, there is no such two-way flow of heat energy, only a flow of net energy, as expressed by writing (20) in the differentiated form R ≈ 4*sigma*T^3*(T − T_ext), (21) with just one term and not the difference of two terms. The mere naming of something does not bring it into physical existence." If you have two bodies, or one body radiating to an exterior (which can be considered as two bodies), they are both radiating, and how do they know of the existence of the other, which would be required to determine the magnitude of the net flow of energy? Your analysis pretty much requires inanimate objects to have knowledge of other inanimate objects. We can detect the cosmic background radiation, and those photons, when they enter a detector, must add their energy to the detector in order to satisfy conservation of energy, which warms the detector slightly. That cosmic background radiation is just blackbody radiation, extremely red-shifted. I guess we are getting somewhere: those who are trying to disprove the greenhouse gas effect realize that in order to do that, they must attack Einstein, Planck and the photon, and you wonder why they are labeled crackpots. 30. I often find challenges to my existing perspectives to be enlightening, because in responding, I'm forced to review my own understanding, and on occasions, revise it. In this case, however, the claim that a cooler body can't cause a warmer body to become warmer still (if that is indeed claimed) is so nonsensical that it would be hard to learn anything from refuting it. Instead, I will simply suggest a simple experiment. I assume most of us are located in what is now a relatively cold time of year. Here is what I suggest. When the temperature outside is 2 deg C and your body skin temperature is, say, 35 C (measured by a thermometer taped to your skin and insulated to shield it from the outside), go outside dressed only in a short-sleeve shirt and shorts, wait for about an hour, and then take your temperature. It will be lower – record the value. It might be around 32-33 C. Now go back in the house, and put on heavy clothes and an overcoat, taken from the closet at 20 C (obviously colder than your body skin temperature). Again, take your temperature after an hour. Did the 20 C clothes cause your 32 C temperature to go up or down?
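It is worth putting numbers on the two expressions quoted in comment 29, since the difference between them is what makes the calorimetric test proposed earlier in the thread decisive. The sketch below simply evaluates both forms at a few temperature gaps; T = 300 K is an arbitrary choice.

```python
# Numerical comparison of the two expressions quoted in comment 29:
#   eq (20): R20 = sigma*(T^4 - T_ext^4)      (conventional net exchange)
#   eq (21): R21 = 4*sigma*T^3*(T - T_ext)    (the "one-term" linearised form)
# They coincide only for small temperature differences.
sigma = 5.67e-8       # W m^-2 K^-4
T = 300.0             # K, hot body (arbitrary)

for T_ext in (299.0, 270.0, 150.0):            # 150 K = T/2, the test case proposed above
    R20 = sigma * (T**4 - T_ext**4)
    R21 = 4 * sigma * T**3 * (T - T_ext)
    print(f"T_ext = {T_ext:5.1f} K   eq20 = {R20:7.1f} W/m^2   eq21 = {R21:7.1f} W/m^2   ratio = {R21/R20:.2f}")
# At T_ext = T/2 the ratio is 2/(15/16) ~ 2.13, so the two forms predict very
# different heat losses and a simple calorimetric experiment can tell them apart.
```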
The mechanism of warming by the clothes is primarily convective, while the warming of the surface from the atmosphere is primarily radiative, but the principle is the same – a cooler body can cause the temperature of a warmer body to rise. For this to happen, of course, the cooler body must itself be exposed to heat that originated in an even warmer source than the current temperature of the warmer body. In the absence of such a source, a cool object can’t raise the temperature of a warmer object (although it can cause it to cool less than if the warm object were simply radiating to space). For your skin, that heat is generated by metabolism sufficient to maintain your internal temperature above the 32 C skin temperature, and the clothes retard its escape. For the atmosphere, the heat comes from the sun, and is transmitted to the atmosphere by absorption of solar radiation and IR radiation from the surface. Ultimately, of course, the net heat flow is from warm to cold – from the sun, via various routes, to the Earth, and then to space. In the meantime, the greenhouse effect operating on the atmosphere makes the Earth’s temperature habitable. • Fred, with all due respect, everyone here understands conduction and its little brother convection (and convection’s little brother advection) just fine. The nonsensical greenhouse gas theory is based on radiation and radiation balance causing heating. Bringing conduction, convection and insulation into the conversation is off-topic and a distraction…and certainly seems intentional to me…like a magician trying to distract the audience from the things going on in his left hand. • Fred Moolten | During a school lesson the Physics teacher might say the force of gravity causes “bodies” to accelerate towards the Earth at 9.81m/s2. A pupil might ask “is the body alive”. Fred we are not talking here about heat sources that have a means of regulating their power output such as an animal. Will putting clothes on a bronze statue at a temperature of say 350K cause its temperature to rise above 350K if the ambient temperature is say 275K? Of course not! All the clothes can do is to insulate the body i.e. to reduce the rate of heat loss from the object. • I believe most readers will understand the point I made. • Fred Moolten Most readers will conclude you don’t know much about heat transfer! • I’ll take my chances, Bryan. • Fred, I do hope you are joking here as the reason I will be warmer after putting on the 20c clothes is the energy I’m burning and turning into heat (you know, calories) will not be lost as quickly allowing my body to warm. • Ken – Your explanation is correct, but there was no joke intended. The point is simple – as long as a heat source is available for the cooler object to operate on, that object can raise the temperature of a warmer object. In this example, the heat source is body metabolism. For the greenhouse effect, the heat source is the sun. The inability of a cooler object to raise the temperature of a warmer object applies when there is no source of heat for the cooler object to divert back toward the warmer object, but that is not the case with our atmosphere. • Bryan, if a reader came to the conclusion that Fred was discussing convection or conduction it would point more to reader’s inability to decipher the most important aspect of the his example rather than a real lacking on the part of Fred. Yet, here you are. • Fred seemed to have interlinked lines of confusion . Power sources that regulate their output. 
Insulation does not imply that the insulator transmits heat by any method to the source of heat. • Actually I don’t understand Fred. Let me see, if I put a brick out in the sun and it warms to X degrees, and if I then split the brick in 2 and seperate them a couple of centimetres, they will “become warmer still”? • Possibly yes, because of increased surface exposure to sunlight, but it depends on air temperature, the absorptivity of the bricks for solar and IR wavelengths, their IR emissivity, conductivity and temperature at the surface they are resting on, and other variables. • I don’t know why you would introduce all those variables. It’s the one brick under the one sun sitting on the one surface. All I do is tap it with my trovel and split it in half (like a good brickie would) the properties of the 2 halves are identical. If it’s T rises due to the greater surface area, what has that got to do with the discussion about a cool body increasing the T of a warm body via radiation? OK we’ll void the extra surface area by placing a brick under the sun until it reaches X degrees. We now get a 2nd brick from the shed and place it next to the first one. Will the T of the first one now rise above X degrees because a 2nd brick was placed next to it? • Yes, under many circumstances (see Frank Davis’s link below to Spencer’s blog). • that means we could….for instance…..increase the surface temperature of the Moon from 107DegC to a somewhat higher T by placing an atmosphere around it? • The mean (day/night average) lunar temperature could be increased by an atmosphere containing greenhouse gases. • Why are you mentioning the mean T? The moon has a T of 107DegC during the day. If we introduce a cool body next to it (an atmosphere) will it increase the moons T? It’s a simple question expanding on our discussion so far. You didn’t introduce ‘mean’ or day night into the brick example. So what will the new daytime T be? • BH – It will increase it more at night, but it would also increase the daytime temperature as long as the cool body did not shield the moon from the sun. • Splitting a brick is a very good example. If we split the moon in two halves, will they warm each other, so each alves become warmer? I dont thnk so. • At equilibrium, the temperatures won’t change as long as the new surfaces have the same physical properties (emissivity/absorptivity) as the original surface. That is because the moon’s temperature is determine by the level at which radiative loss to space equals radiative gain from sunlight. Since splitting the moon won’t change the incoming solar energy in W/m^2, the outgoing flux and therefore the surface temperature won’t change. • actually if you split the moon in half, the two halfs would both be cooler as the combined surface area would be greater • Rob – the extra surface area would both absorb and radiate more heat. The temperatures would remain unchanged, because they are dictated by solar absorption on a W/m^2 basis. Surface area is therefore irrelevant. • Fred- If you slice a moon or planet in half wouldn’t it expose the warmer core of each half, which would result in greater heat loss • just kidding • Baa Humbug Indeed, what a number of the IPCC adherents miss out is that a colder object can make a warmer object colder than it would be in the absence of the colder object. Why do they have this blindspot? • I don’t understand your (Moolten’s) point either. 
Clothing does not heat up a human body via “back-radiation” or “back-conduction.” There is no heat transfer from cold to hot without work input (Clausius), and clothing (and likewise the atmosphere) cannot add work input. “The total surface area of an adult is about 2 m^2, and the mid- and far-infrared emissivity of skin and most clothing is near unity, as it is for most nonmetallic surfaces. Skin temperature is about 33 deg C, but clothing reduces the surface temperature to about 28 deg C when the ambient temperature is 20 deg C. Hence, the net radiative heat loss is about Pnet = 100 W.” Clothing “reduces the skin surface temperature” because the human body has to supply heat energy to the colder clothing to increase the temperature of the clothing. Clothing does limit convection, as do glass panes in a greenhouse, but CO2 has no such ability. Thus, the analogy fails. Please point me to a textbook of physics which contains the terms “back-radiation” or “back-conduction.” • Without the clothing, the temperature would decline even further. If you are skeptical, try the experiment I proposed. 31. “I belong to the large group of mathematicians who cannot understand what this theory tells you about reality.” It is impossible to take anything you say seriously when you make statements like this. When classical thermodynamics fails to explain the specific heat of your atomic crystalline solid, to where do you turn? Maybe you can guess what technique Einstein used to model the solid. 32. No I am serious, as serious as Einstein when he distanced himself from statistics as a way of understanding physics. • As in the statistical emission properties of a theoretical S/B, solid two-dimensional black-body disc. As aposed to the physical emission properties of a real, fluid, three-dimensional grey-body gas. The Stefan/Boltzmann BBD argument is a infra-red herring. It leads nowhere because it is an apple and oranges comparison. We can easily compare a body of CO2 to a body of air and clear up the argument in seconds. “An easily reproducible experiment” This simple experiment demonstrates that CO2 in the atmosphere is forced in to equilibrium by and with, the O2 and N2. Not as AGW has it, the other way around. • Interesting. You choose a classical frame for your work, and then use a statement about the interpretation of QM wave functions to boslter your argument .. but fail to note that Einstein didn’t distance himself from statistical mechanics, etc. I see no clarity in your thoughts or arguments, merely throwing in red herrings instead to answering the obvious inconsistencies which result from your theory . 33. Judy says that something is wrong with the KT energy budget, but refuses to tell what is wrong. What kind of debate is this? Is it some kind guess play? So Judy, please tell me now what it is you find is wrong with KT? • Exactly when and where have I said something is wrong with the KT energy budget? KT’s numbers are almost certainly inexact. Attempting to do some sort of globally averaged energy balance may not be the best way to go about it. But that does not mean that atmospheric infrared back radiation does not exist. 34. To Fred Molten: Can you give me the equations you are using showing that heat by itself (without external input of energy) can move from cold to warm? Of course putting on clothes makes it possible to keep a higher body surface temperature but the heat comes from the catabolism of your body, not from your clothes, at least if you live in Sweden. 35. 
The present physical theories are perfectly able to describe all basic processes that need to be considered in analyzing atmosphere and they have been tested extensively in very many different setups. There are no reasons to replace any of this knowledge by some conflicting physical laws. Most physicists are, however, unaware about, how much of the physical understanding can be described in several different ways. Handling of electromagnetic radiation is one good example. One of my former colleagues did theoretical research on laser physics. Most descriptions of lasers start immediately with quantum field theory, but his approach was based on classical electromagnetic field theory and it was very successful. It was not in contradiction with quantum mechanics, but the mathematical approach was very different. I can see in Claes Johnson’s texts superficial similarities with that approach. The way quantization is brought into the calculations can be chosen from several alternatives. In some approaches it can make sense to state that there are not forward radiation and back radiation, but only the net radiation. If the final results differ from the conventional approach they are certainly wrong as the conventional approach has been validated so well, but the alternative approach may also be correct as long as it leads to the same results. I do not believe that the alternatives would often be easier to understand or of any particular value, but I would be careful before declaring some non-conventional approach automatically wrong. The case of analyzing lasers that I mentioned at the beginning is proof of the fact that sometimes one may indeed find advantage from postponing the quantization and using classical formulation as far as possible. Using obscure alternative formulation and vague argumentation as evidence on weaknesses in the conventional understanding of physics is another matter. When it is done in parts of physics, which have been applied widely for years without any conflict with observation, I would not give any weight on such claims. • Steven Mosher I’ll suggest a cage match. Johnson versus maxwell. no other commenters allowed. People can then see that Johnson will not be able to maintain his position. we will them ask him to admit his honest error and ask the publishers to correct the book. • As publisher of the North American and Oceania version…I accept this challenge. I’m happy to publish errata and a new edition if and when the errors reach a critical mass. I’m not sure how to prove anything when the topic gets this esoteric…I prefer lab experiments where the data verifies or falsifies a claim. No models. No dueling weblinks or appeals to authority in any form. It makes things tough when you need a vacuum to isolate the experiment from conduction and convection effects. We’ll see how it goes, I suppose. Good idea, Steve. • Steven Mosher Thank Ken. I suggested the same thing for the IPCC. We need to make room for the admission and correction of honest error. The IPCC could not do it. I do not trust them as a consequence and thus am forced to look at primary research on my own to come to a considered judgement. • Now that we have aired some stuff, I agree that the discussion is best left to those with a degree in physics (maxwell, pekka, and there are others among the denizens of climate etc that have not shown up). • I’d be down for this ‘cage match’ if I thought it would do any good. 
Alas, we’ve seen that even when faced with the idea that his theory violates the conservation of energy (the 1st law of thermodynamics, the very theory he claims supports him), he is unwilling to concede or even engage. It’s my opinion, based on this fact and the lack of transparent discussion perpetuated by some other commenters, that science is not of interest to these people. Maybe it is an ‘honest’ mistake that Johnson has gotten to this place, but I see his poorly thought out analogies beginning Chap. 2 as a way for him to rationalizing away the physical meaning of some of the most well-known and thoroughly tested laws physics has given us thus far. In such a case I have to wonder how much honesty is involved… • Pekkka, a good historical example of what you are saying is the Drude model. It posits that electrons are classical in enough numbers when confined in a solid. There is a basic kinematic equation describing the force acting on each electron that, when solved for the appropriate situation, gives an answer that fits ‘reasonably well’ to observations. You may be familiar with this model if your friend works on lasers. But the Drude model, and other so-called ’empirical models’, is flawed physically. Just as your friend’s laser theory is flawed. That is to say, it is practical for a well-trained experts to use such a theory because he/she understands its flaws and faults. It works for back of the envelope calculations which are quite important in the lab. What happens when we are trying to determine a ‘physical understanding’, however? In such cases, it’s my opinion that we must do our damnedest to get to the meat of a problem. Even if that means dispelling a computationally practical and useful formalism like the Drude model. Because the Drude model doesn’t give us transistors or quantum wells or superconduction…or lasers for that matter. Having relied on the Drude model takes away from our understanding of reality. In the same way, while Mr. Johnson’s attempts might seem like an interesting facet of science, they fundamentally take away from a broader understanding of reality. There is no basis in it’s being real other than the words on a pdf. It is especially problematic since so many here are willing to simply regurgitate his memes without any skepticism at all. I think the most important aspect of doing science, as Mr. Johnson claims he is doing, is determining whether or not you can handle being wrong. If you cannot handle such an outcome, as Mr. Johnson’s reaction to the criticism he has faced here makes me think, you are not interested in science. I don’t think Mr. Johnson is interested in science. I’d be interested in your take on that. 36. Steven Mosher You and John Sullivan utterly mis understand the concern about the “united front” If, for example, the AGU were to offer some session time to discuss skeptical issues, the first question is WHICH skeptical positions should be given time? If, for example , a research center were to open its laboratory time to test skeptical ideas on GCMs WHICH skeptical positions should be given time. It was a PRAGMATIC discussion about a PRACTICAL problem. Now then warmists could pick the WORST skeptical ideas and only discuss those. this is what realclimate does. • Give an some specifics for a really good skeptical idea that Real Climate has ignored. Paul Middents • As far as I remember RC has covered just about every paper which has been promoted by the skeptics in recent years. 
In the end there aren’t good skeptical ideas and bad skeptical ideas, there are just good and bad ideas, and good ideas will generally get proper consideration. Maybe there are some exceptions – if someone can provide evidence that there are good, credible ideas out there which are not being considered then fine, until then I remain, well, skeptical. • Andrew Adams, I will be glad to give you an example of bad ideas that RC still supports. Hockey Sticks. Have they admitted yet that Mann’s and associated work are all severely flawed and should be withdrawn? That they do NOT support the claims they make? • Hi kuhnkat, I suspect that none of the “Hockey Team” ground staff at RealCLimate have been allowed to read respected investigative science journalist Andrew Montford’s excellent exposé “The Hockey Stick Illusion” ( This was declared by another respected investigative science journalist Matt Ridley ( as being “…a rattling good detective story and a detailed and brilliant piece of science writing .. ”. Ref. your comment yesterday at 11:48 pm, for Andrew to “ .. Dig harder man!!! .. ”, he had the opportunity on 1st July and because he refused to remove his blinkers he threw it away. Investigative science journalist? – pull the other one. Best regards, Pete Ridley. • Montford is a “respected investigative science journalist” by what standards exactly? Has he won awards of his colleagues? Has his work appeared in prestigious publications? I haven’t read Montford’s book but my experience is that those who have make lots of charges regarding Mann that they can’t actually defend, most likely because they are false. (At least if they are true, noone has provided evidence to rebut my evidence that they are false.) I am not sure whether they got this info from Montford but that has been my impression. • I strongly recommend reading Montford’s book. It is very well written and extremely well documented. • Joel, instead of waffling from a position of ignorance try reading the book and following the references, do your own assessment then go over to the blogs of Steve McIntyre’s blog ( and Andrew Montford ( and try to convince them that you know better than they do. Let me know how you get on. They may let you co-author a paper with them on the subject. Best regards, Pete Ridley • Pete and Dr. Curry: Well, next time I find it in a bookstore, I will look through it and see what it has to say about the “censored” directory and about the Tiljander proxies. If it just repeats the same unsubstantiated nonsense that I see from people like “Smokey” on WUWT, I will be very unimpressed. If not, then maybe it is more worthwhile. • Joel, rather than reading the books, try going to their websites and reading the archives. Especially Climate Audit, Steve McIntyre’s site, as he was central to debunking the Hokeystick. You might even ask him directly about the “censored” directory with r2 information that was not published as he wrote about it first I believe. Of course, even if that was an inflated anecdote by some unknown person, the fact is that the r2 statistics for the Hokeystick FAILS! The difference is whether Mann knowingly misled people or is just sloppy and ignorant about the statistical methods he uses. Here is a start at CA: Be sure to ask Steve directly about how he knows the “censored” directory really came from Mann’s FTP server. 37. To Pekka: You seem to agree that macroscopic physics cannot be modeled by quantum mechanics, and so macroscopic equations are needed for atmospheric radiation. 
Now macroscopic radiation seems to be well described by Maxwell’s equations, modulo the difficulty of the ultraviolet catastrophy, which destroys everything. What I suggest is a rational way to avoid the catastrophy and keep the great advantage of Maxwell’s equations as compared to primitive particle statistics. Isn’t that something to think of a bit, in the spirit of Planck and Einstein, rather than dismissing without reflection? And the radiative transfer equations are much cruder than Maxwell, right? • Claes, I do not agree that quantum mechanics cannot be used in those parts of atmospheric physics, where it has been used. What I was saying that in some situations the agreement with quantum theory can be obtained in surprisingly many different ways. Even for effects where the quantum theory differs from traditional classical physics the correct results may sometimes be obtained in ways where the quantum effects are somehow hidden. Hamiltonian formulation of mechanics allows for presenting some quantum effects in less common fashion etc. Einstein was not happy with quantum theory. I think that the main reason is related to conceptual difficulties in joining it with general relativity described in the elegant ways that he had developed. His dissatisfaction came out also in his statement about God not playing dice or in his paper with Podolski and Rosen, which has now been proven to be in conflict with experiments after Bell had formulated his inequality along the lines of that paper. In this case Einstein erred and quantum mechanics prevails. The problems in interpreting quantum mechanics are also related to some of the possibilities of doing the calculations differently. The quantum mechanics is, however, extremely successful in giving correct predictions with high accuracy. Thus it is a very good and valid physical theory in pragmatic sense. Most physicists do not worry about the philosophical problems and do their work successfully. Whether the philosophical difficulties turn out to have some relationship to the next paradigm, which would solve the problems of Einstein and unify gravity and quantum mechanics in a elegant way, remains to be seen. Perhaps not by our generation, but our children or grandchildren. I am still not telling the name of my former colleague, but I can tell that he has been later professor at KTH. When we were working at the same institute, we had some very interesting discussions on the foundations of quantum mechanics. • I add that sometimes it has also turned out that results generally thought to depend on quantum mechanics turn out to be true in more general settings. This is not very common and I cannot give examples, but I have certainly heard about such cases. • Claes, Concerning back radiation I certainly believe that it is a useful concept and that the radiative energy transfer can be handled most easily by including it in the calculation. I cannot figure out, how all correct results could be obtained without considering it explicitly. On macroscopic level avoiding it may be possible, but on the more detailed microscopic level it seems almost impossible, but only almost. 38. To Judy: OK so now you say the KT is basically correct and that backradiation is a real physical phenomenon. Very good because we now have something concrete to discuss. May I then ask you about the equations describing your effect of backradiation? Without equations anything is possible. 39. 
Mr. Johnson: In your description of an IR camera you admit that the instrument, directed appropriately, shows radiation. At the same time you deny that this radiation reflects some reality. Could you please explain this? I am extremely confused. • I think that what Claes is saying is that the radiation you measure is a result of the temperature. Not the other way around. Sounds good to me. 40. To Judy: Do you claim that radiative transfer equations model backradiation? 41. I hope that this is relevant to the discussion: July 23rd, 2010 by Roy W. Spencer, Ph. D. • Frank Davis – Google the famous “Pictet Experiment”. It’s of great historical importance and quite relevant to this discussion. • Thanks, Dr. Curry, for hosting this debate. @ Frank Davis: Dr. Spencer says in his article: “So, once again, we see that the presence of a colder object can cause a warmer object to become warmer still.” However, the process he refers to is not heat transfer by radiation from a colder system to another system, but a kind of isolation, like in a thermos. Yet the colder system is not providing “more” energy to the warmer system; it would just be preventing the warmer system from emitting heat towards the colder system. This argument is not true because the thermal energy is transferred to the colder system, invariably, unless the colder system is a perfect reflecting material or the colder system has a very low heat capacity. I would remind Dr. Spencer that the Earth is not a thermos; his argument could be possible if the highest layer of the Earth’s system, i.e. the thermosphere, had a mass density higher than that of the surface. It’s not the case for the real Earth. On the other hand, if you wish to consider QM on this thread, you must also include induced emission, well described by Einstein, which has been corroborated experimentally and in the construction of some devices, and the well-known and demonstrated radiation pressure. These two real physical phenomena debunk any idea of a “backradiation” from the atmosphere warming the surface. • Nasif, I think you are confused. Roy’s posts on this topic are very clear and definitely show that there is in fact backradiation toward the surface from the atmosphere. His ultimate experiment used an IR thermometer to measure the actual temperature of the air several hundred feet above via the IR light it emits. Even more confusing are your statements concerning stimulated emission and radiation pressure. Can you please explain specifically how those physical processes play a role in radiative transfer, or the lack thereof, in the atmosphere? • @ maxwell… No more than you are. Roy’s “experiment” only demonstrates that there is energy flow by radiation, whatever his conclusions could be. Both processes, induced emission (or induced radiation) and radiation pressure, influence radiation. If you know what those terms mean, you won’t be so confused, as you are, on “backradiation” issues. • Nasif, you’re damn right I’m confused. You still haven’t provided a physical mechanism for how those interesting terms have to do with the most important aspects of radiative transfer. On the point of Roy’s experiment, what type of energy transfer is he measuring? If the IR thermometer is conducting energy from the surrounding air, his thermometer would have measured about 300 K. Instead, his thermometer read around 200 K. Where does that difference come from in terms of energy transfer?
Are you saying that the thermometer can conduct energy from the upper reaches of the lower atmosphere without conducting through all the layers? That would be a monumental theory! Also, if you’re going to charge that a particular person doesn’t understand some terms you use, you should make sure you know what you’re talking about. I have extensive experience in classical optics, quantum optics, atomic and molecular spectroscopy and nonlinear optics (I built an optical parametric amplifier over the past week in fact) so I KNOW those terms you’re using have absolutely nothing to do with this discussion, the greenhouse effect or ‘backradiation’. It’s a purely quantum mechanical, spontaneous effect. It was an interesting go at it though. • Dear Maxwell, Please, visit again Roy’s experiment and see what the box floor is and on what kind of surface it was placed on. You’ll get the answer. Regarding induced emission, you should not forget the natural photon streams, so upwards, during nighttime, as downwards, during daytime. On the other issue, if you make the proper calculations on radiation pressure, you’ll find that the downwelling radiation heating up the surface is not possible in the real world. If Dr. Curry, the owner of this blog, grants me permission to go out of topic, I will proceed to answer properly your questions. • Nasif, the hole just keeps getting deeper. The experiment to which I was referring had Roy traveling around in his convertible sedan pointing an IR thermometer into the air. Not his make-shift holhraum. In that case, where he is clearly measuring the temperature of the atmosphere directly several hundred feet above him, how does energy interact with the thermometer to produce a reading of 200 K? I’ll give you a hint, it has nothing to do with radiation pressure. It’s becoming more and more clear to me that you are using words that have one meaning to you, but a totally different meaning to actual optical scientists. You ought to look into the ways in which these terms are used in scientific circles so that you can more easily communicate your points in a scientific debate. • Dear Maxwell, Don’t go further on this or you’ll get disappointed on your own limitations about those concepts. Take your book on Radiative Heat Transfer and you’ll see I’m absolutely correct. I don’t want to go further on discussing those concepts because they are out of topic and I respect the admonitions of Dr. Curry on the purpose of this blog thread. Well, what the ground on which Roy placed his box and what the floor of the box was? Could you be so kind as to tell us what was it, specifically? Third, when he was “meassuring the temperature travelling around on his convertible sedan”… maxwell, tell me honestly, don’t you know how thermometers work and what thing makes them work? • Nasif, I’m trying to determine if your lack of comprehension of my comments is due to the possibility you are not a native English speaker or just plain stupidity. I’m willing to give you the benefit of the doubt and assume the first option, but not for too much longer. Again I’m discussing Roy’s use of a IR thermometer, not his makeshift holhraum. Please make an important mental note of that fact and stop your persistent confusion over this fact. It’s making you look dumb. An IR thermometer measures IR light (heat) emanating from a body or gas. Therefore, if Roy is pointing this thermometer at the sky, the thermometer reads the temperature of the sky via its IR emission. 
Therefore, the atmosphere is emitting IR radiation toward the ground that began its journey in ‘life’ at the surface, making it ‘backradation’. QED. As for radiative transfer, I’ve extensively studied ‘Introduction to Three Dimensional Modeling’ by Washington and Parkinson. From this text I am able to recover what both quantum mechanics and thermodynamics imply should be a downwelling IR emission from the atmosphere. Do you have other certified texts that you feel are better than Washington and Parkinson? Furthermore, you continue to lack any sort of meaningful physical description of what you are talking about. Based on the plethora of these facts so far, I must say I don’t think very highly of your opinion on this matter. It’s been real though. • Maxwell, I have an analogy I think is quite good. If you have a 6Volt potensial, and a “current sink” at 1 Volt, you will have a current from 6 Volt to 1 Volt. Increase the “sink” to 4 Volts. The 6 Volt source will drain slower. But the current is still seen as going from 6 Volts to 4 Volts. But we dont talk about “back-current”. That would be confusing. • Maxwell, darn, you probably will never see this to answer my question, but, just on the offchance that someone does and can: I am under the impression that the atmosphere absorbs virtually all of the IR radiation (except the window) within about 15ft of the surface. if this is so, exactly what was Dr. Spencer measuring from the ground?? Wouldn’t downward IR also be absorbes so that all he would be able to measure would be about 20 feet over his head and not an average of several hundred feet???? 42. Again Judy: Which equations do you claim model backradiation? As I said I want equations and I want the equations to be motivated or derived mathematically. Which equations are you referring to? • The equation that models backradiation is the Planck law for the intensity of light emitted by a blackbody at a specific temperature. This equation is carried out for every layer in the atmosphere, which has a stratified temperature profile. The absorption of light, all frequencies, is modeled by the Beer-Lambert law which is easily derivable from Maxwell’s equations via the electromagnetic wave equation. If we wanted to get down and dirty with the most fundamental equation governing the behavior of absorbing material to first order in the perturbation due to the interaction with light, we would have to use the quantum master equation with a phenomenological coupling to the vacuum field. We can go to second order in the perturbation to get to scattering processes if we liked as well. Ever wonder why the sky is blue? This process would allow us to see absorption and spontaneous emission (the dominant form of emission in the atmosphere) on a per atom/molecule level. The Beer-Lambert and Planck laws get the overall average effect of the quantum master equation in this context. So from first principles, we can easily calculate (grad school quantum problems) the rate of absorption and emission of a particular molecules when the light in question is on resonance with a particular allowed quantum transition, the linewidth of that transition based on different broadening processes and the necessary equipment to test the predictions of any such calculation. From there, we can sum over all the molecules in our volume and get an answer to compare to the observational laws used in climate models. You can see whether the agreement between these methods is good. Let me know how it goes. 
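Since the thread keeps asking “which equations”, maxwell’s recipe above (per-layer Planck emission attenuated by Beer-Lambert transmission) can be reduced to a toy calculation. This is only a sketch, assuming a single grey optical depth per layer, a fixed lapse rate and ten uniform layers; it is not a line-by-line model and none of the numbers come from the thread:

```python
import math

SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W m^-2 K^-4

# Toy atmosphere: assumed values, purely illustrative
T_surf    = 288.0         # K, surface temperature
lapse     = 6.5e-3        # K/m, assumed mean lapse rate
n_layers  = 10
dz        = 1000.0        # m, thickness of each layer
tau_layer = 0.3           # assumed grey optical depth per layer (Beer-Lambert)

def planck_flux(T):
    """Broadband blackbody flux (wavelength-integrated Planck law), W/m^2."""
    return SIGMA * T**4

# Downwelling flux at the surface: each layer emits emissivity*sigma*T^4 downward,
# attenuated by Beer-Lambert transmission through the layers below it.
emissivity = 1.0 - math.exp(-tau_layer)
downwelling = 0.0
for i in range(n_layers):
    T_layer = T_surf - lapse * (i + 0.5) * dz        # mid-layer temperature
    transmission_below = math.exp(-tau_layer * i)    # through the i layers underneath
    downwelling += emissivity * planck_flux(T_layer) * transmission_below

print(f"toy downwelling IR at the surface ~ {downwelling:.0f} W/m^2")
```

For these assumed inputs the sum comes out at a few hundred W/m^2, the same order of magnitude as measured downwelling longwave, which is the quantity the thread is arguing over.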
• Maxwell, doesn’t CO2 absorb and emit based on its molecular bond configuration as opposed to planck energy?? Maybe someone can jump in and explicate what the difference is if any?? • For example, CO2 emits at 15 microns at an intensity according to the number of molecules and their temperature using the Planck function for temperature that wavelength (which actually peaks not far from 15 microns for normal atmospheric temperatures). This emission is seen at the ground as part of the back-radiation, together with all the other CO2 and H2O bands in clear sky that make up all the back-radiation. • kuhnkat, Not one opposed to the other but both combined. The molecular properties determine which wavelengths have strong emission and absorption, i.e. they determine the emissivity, which is equal to absorptivity. Planck’s law tells how strong the emission is at those wavelength as the strength is a product of the emissivity and Planck’s law for black body at that wavelength. When the emissivity of a gas is strong for a particular wavelength, it means that its is significant already for a thin layer and very close to one for a thick layer in accordance with the Beer-Lambert law. Then the the strength of emission at that wavelength is the same as for a black body of the temperature of the gas. This is true for those wavelengths, but at other wavelengths the gas does not emit at all or very little. • Then Planck’s laws are applicable to emission whether it is from level changes in single atoms or bond interactions. • As Pekka pointed out above it sets the upper limit at any wavelength, if there is no allowed transition at a particular wavelength then the emission will be zero no matter what the Planck value is. The Co2 band at 15μm will emit strongly up to the Planck limit. O2 can emit in the UV at around 220nm but in the atmosphere the Planck limit will be generally so low that this emission will be very weak (Judith made this point earlier). 43. To Lucia: If you had read the equations I refer to as Navier-Stokes you would have seen that they express conservation of mass, momentum and total energy and are the basic equations of thermodynamics describing transformation between kinetic and heat energy through work. Are you familiar with thermodynamics? • Claes, The basic equations of thermodynamics are called “The first law of Thermodynamics” and “The 2nd law of Thermodynamics”. One of the clues is that these equations contain the word “thermodynamics”. In contrast, conservation of momentum is “mechanics” and the navier-stokes equations are basic equations for fluid mechanics. Conservation of mass is used in analyses, but that doesn’t transform the equation into “a basic equation of thermodynamics”. Are you familiar with thermodynamics? I’m laughing myself to tears here. I am familiar enough to know that you are making errors. :) • This is lucia. • Kim— You are correct. I don’t know why wordpress auto-filled the name incorrectly. I should have seen that. • The mistake is to think that it is possible to consider thermodynamics and fluid mechanics as separate in a gas or liquid. They are inseparable in the context of free atmospheric thermalisation. To imply otherwise is erroneous, perhaps even fallacious. 44. To Lucia: The convective adjustment that you think is science, is just an ad hoc fix up without any mathematical basis. If you are allowed to adjust what your equations tell you, then you can get anything you want. 
• Still waiting for you to apply your model to a real world example such as those which I showed above. Most applications of standard radiation heat transfer have a substantial overlap between the incoming spectrum and the emitting spectrum by the way. • Claes– Since your paper suggests you think the first law of thermodynamics is the 2nd law, and the navier stokes equations is the basic equation of thermodynamics, I am not surprise that you think the convective adjustment is just an adhoc fix up. To understand the physical motivation, will need to apply thermodynamics. At my blog I gave you a tip on how to distinguish the first law from the second: The second law should contain an inequality symbol ≤, a symbol that represents entropy (S is often used), and a symbol to represent temperature (T is a popular choice, but rebels sometimes use θ). Also, if I recall correctly, it generally contains no work term (i.e. W would not appear.) As for this: If you are allowed to adjust what your equations tell you, then you can get anything you want. Yes. I agree. • Also lucia. • The above is me– Lucia. • Lucia, I did not check carefully, but I think the equations that Claes is presenting do present correctly the second law. The inequality is hidden in the requirement that D ≥ 0. The formulation is not the one we all have seen most often, but I think it is correct. The same statement that Claes presents correct formulas in a less conventional way seems to apply to the other chapter as well, but there I have doubts on, whether all equations are correct or only some of them. I did not read in this text at all carefully or study the equations more than superficially as I do not think that his approach is useful even when it is correct. Many of the claims in the text are strange if not outright wrong. • Pekka– Specifying 0≤D where D is dissipation is a consequence of the 2nd law of thermodynamics. However, it does not turn those equations into the 2nd law. That equation may be a correct representation of something but it is not the 2nd law of thermo. This is not a matter of notation. Other puzzling things about that equation may have something to do with non-conventional representations — for example, it’s not clear to me that it’s even a correct formulation for the first law. But in order to pinpoint the problems, I need to know whether that’s supposed to be a control volume formulation or an analysis on a fixed volume, and possibly where the boundaries are etc. My impression is it’s supposed to be a control volume with the top at the top of the troposphere– but if so quite a few terms may be missing. (Or not. It depends on whether we have a control volume whose shape is permitted to change– in which case…. well…) • Lucia, My purpose is not to defend the book or conclusions presented by Claes Johnson in the book. I certainly disagree on very many things. I am only noting that texts that are obviously wrong, when they lead to definitely wrong conclusions may not be wrong in all of their details. Most people seem to agree that this chapter is actually correct in what it describes. Its content may be used in reaching wrong conclusions outside its range of validity, but that is another matter. It is also possible that the unconventional way the equations are presented contributes to wrong conclusions, but even so the equations may be correct. Claes Johnson presents two inequalities in eq. (2). They are equivalent when combined with the first law /eq. (4). 
Of course this is not the most general presentation of the first and second law, but for the problem considered they appear to be equivalent with the general formulation. It is clear that using these laws as more basic than the general formulation may lead to errors. Perhaps such an error is really done, when considering radiative processes in the other chapter. I am not really interested enough to even check. Also in this chapter the formulas (5) and the related discussion are obscure. If not for other reasons then at least in the total neglect of considering units properly. The equations can only be valid in units where temperature is dimensionless (i.e. 1 K = 1) and the unit of acceleration is inverse of the unit of length. Furthermore it is stated that specific heat capacity cp = 1. Whether all that is possible at all is certainly not obvious. But then again all that is more or less forgotten when the next formulas are standard knowledge. The whole paper is confusing and may well be misused, but even so it is good avoid erroneous claims about its content. • Pekka– I have never suggested things that are wrong in their results must be wrong in all their details. I am pointin I am saying is that those equations are not “The second law of thermodynamics”. The reason I am saying they aren’t is that they aren’t. In undergraduate fluid mechanics problems, students solving pipe flow and other simple problems, often use an equation referred to as “the mechanical energy equation” or sometimes “the energy equation”. It is derived from conservation of mass and momentum, sort kinda-sorta like the first law of thermo and includes a dissipation term. The 2nd law requires that dissipation term to be positive. So, using that equation lets students impose the requirements of the 2nd law on their analysis. However, you don’t get to call that equation “the second law of thermodynamics” merely because it permits students to correctly incorporate the effects of dissipation on pressure drop in pipeflow. Likewise, what Claes writes down is not the 2nd law of thermodynamics. Moreover, I find your clain that To be rather dubious. In fact, based on the text, I’m not convinced it is possible to pin down what “the problem considered” really is. • Lucia, At least I agree on your last point. Reading the text of CJ it is often very difficult to pin down what he is writing about or where he is aiming to. 45. Dear Friends, I come late to the interesting discussion, so I did not read through all. Therefore I do have a remark. A flat hot body with two sides, unit heat capacity and with time dependent temperature Th(t), starting at Th(0)without an internal or external energy source cools from both sides with the rate dq/dt = sigma Th(t)^4 per unit area. Now you put a cold body with Tc(t) adjacent, facing exactly one side without touching, the hot body cools from this side with the rate dq/dt = sigma*(Th(t)^4 – Tc(t)^4) per unit area and with dq/dt = sigma Th(t)^4 per unit area from the other side. Therefore the hot body in both cases is cooling all the time, since Th(t) is always greater or equal to Tc(t). However, the hot body Th(t) stays in the second case warmer all the time than in the first case. But this is different from saying it gets warmer than initial Th(0). If Tc(0) is smaller than or equal to Th(0), then Th(t) is always smaller than Th(0). Of course as Roy Spencer showed a hot body with an internal or an external energy source can get warmer than Th(0), if you put a cold body adjacent to it. 
Best regards • Dear Günter… Anyway, the colder system IS NOT heating up the warmer system, but cooling it, continuously, if we wish, but only up to the point of equilibrium, i.e. when both systems reach the same energy density. And even so, the internal or external source of heat would continue heating up the warmer system. Take off the internal or external operator, for example, and you’ll see the colder system cannot heat up to the warmer system but quite the opposite. It is the internal or external PRIMARY heat source what heats up the system, not the colder system. The latter is Dr. Spencer’s argument. • Dear Nasif, that’s what I wrote if you reread my paragraph.. Therefore I said: “Of course as Roy Spencer showed a hot body with an internal or an external energy source can get warmer than Th(0), if you put a cold body adjacent to it.” Of course it is the energy source that heats the body up. I think it is important not to confuse “getting warmer” or “keeping warmer” with a energy source that heats a body up. • Dear Günter, Yes, you’re right. I misinterpreted the last paragraph of your post. Sorry… You’re also right on not confounding “getting it warmer” and “keeping it warming”. All the best, • In his discussion of a hot plate next to a cold plate, Dr Roy Spencer says: The 2nd Law of Thermodynamics: Can Energy “Flow Uphill”? In the case of radiation, the answer to that question is, “yes”. While heat conduction by an object always flows from hotter to colder, in the case of thermal radiation a cooler object does not check what the temperature of its surroundings is before sending out infrared energy. It sends it out anyway, no matter whether its surroundings are cooler or hotter. Yes, thermal conduction involves energy flow in only one direction. But radiation flow involves energy flow in both directions. Of course, in the context of the 2nd Law of Thermodynamics, both radiation and conduction processes are the same in the sense at the NET flow of energy is always “downhill”, from warmer temperatures to cooler temperatures. But, if ANY flow of energy “uphill” is totally repulsive to you, maybe you can just think of the flow of IR energy being in only one direction, but with it’s magnitude being related to the relative temperature difference between the two objects. Clearly Spencer thinks that radiative heat transfer is completely different from conductive heat transfer, and can go ‘uphill’. He writes: The only way I know of to explain this is that it isn’t just the heated plate that is emitting IR energy, but also the second plate….as well as the cold walls of the vacuum chamber. Does that mean that while radiative heat transfers don’t ‘check’ to see which way to go, conductive heat transfers actually do ‘check’? • Frank, The separation is not that clear. On molecular level even conduction may “go uphill”, but this is not visible and can be ignored. In conduction as in radiation energy goes in both directions at micro level. In conduction this is related to the motion of energetic atoms or molecules or to vibrations (phonons) in solids. The distances are usually very short. Therefore only the collective conduction is observable and described by an equation that describes only the net flow. In radiation it is often possible to set measuring equipment to detect separately radiation in each direction. One photon may go over a large distance etc. 
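Günter’s plate example above is easy to check numerically, and the check also speaks to Frank’s “uphill” question: the gross fluxes go in both directions, but the hot plate’s temperature only ever falls. A minimal forward-Euler sketch, with an assumed areal heat capacity and assumed initial temperatures that are not from his comment:

```python
# Numerical check of Guenter's two-plate example (his rate equations, forward Euler).
# Assumed, not from his comment: areal heat capacity C, unit emissivities,
# the cold plate also radiates to space from its far side, vacuum in between.
SIGMA = 5.67e-8        # W m^-2 K^-4
C     = 2.0e5          # J K^-1 m^-2, assumed areal heat capacity of each plate
dt, n_steps = 1.0, 20000

def run(with_cold_plate):
    Th, Tc = 400.0, 300.0                  # assumed starting temperatures, K
    for _ in range(n_steps):
        if with_cold_plate:
            q_h = -SIGMA*(Th**4 - Tc**4) - SIGMA*Th**4   # facing side + open side
            q_c = +SIGMA*(Th**4 - Tc**4) - SIGMA*Tc**4   # gains from hot, loses to space
            Tc += q_c/C * dt
        else:
            q_h = -2.0*SIGMA*Th**4                        # cools freely from both sides
        Th += q_h/C * dt
    return Th

print("Th alone           :", round(run(False), 1), "K")
print("Th with cold plate :", round(run(True), 1), "K")
# In both runs Th only ever falls (it never exceeds its initial 400 K), but it ends
# warmer when the cold plate is present: Guenter's "kept warmer" versus "made warmer".
```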
The back radiation is thus observable and it may also be that the easiest way of calculating the net energy transfer represents separately the two directions. In some cases it may be easier to consider directly the net flow, but as I said above, this is not always true. Ah! So there is ‘back-conduction’. I would express it differently. Conduction describes the *process* by which heat flows along an existing temperature gradient. Radiation is a something that a body *does* based on its temperature and emissivity. The former process directly involves both/all bodies that define the local temperature gradient; the latter by definition only depends on the characteristics of the radiating body itself. At least that’s the way I look at it. • The approach used in describing conduction can easily be extended to part of radiative heat transfer, to those wavelengths with strong absorption. Heat is transferred in accordance of essentially the same diffusion type differential equation in atmosphere by radiation near the center of the 15 um IR band. For wavelengths with weak absorption this approach does not work well, because such radiation does not proceed with small steps in diffusive fashion but by long leaps to a point where the temperature may be significantly different or even escape through the whole atmosphere. Most backscattering occurs in the region where the diffusion-like process describes the heat transfer rather well. On this basis one could describe all this with the diffusion equation and remove most of the back scattering from being considered explicitly. The way the calculations are done does of course not affect what really happens, but it affects often the way this is described. • Frank, Radiative heat transfer consist of two radiative energy flux, one from hot to cold and one from cold to hot. Radiative heat or net radiative energy flows from hot to cold, radiative energy in both directions. It is a little bit confusing, since “energy” and “heat” are sometimes used interchangeably, which is strictly speaking a bit wrong. However, scientists are doing that occasionally and the reader needs to bring it into context. Bad style, though. The second law as stated by Clausius reads: “There is no change of state that only results in transferring heat from cold to hot.” Note, it is not energy in general. Heat in this context should not be interchanged with energy. Best regards • Frank Davis Look at the blackbody spectrum of an object at say 300K. Superimpose the BB spectrum of the identical object at 400K Now using the spectra predict what would happen if these two objects were brought closer together so that they radiate to each other. We notice that; 1. The hotter object has at the short wavelength end, frequencies absent from the lower temperature object. 2. Pick any wavelength that both objects have in common. You will notice that the hotter object is emitting more radiation than the colder once. Now examine the hot surface; It is emitting more radiation of every wavelength than it is receiving. You can now hopefully appreciate that a colder object can never increase the temperature of a hotter object. • I understand your point, Bryan. 
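Bryan’s superposition exercise can also be done explicitly. A short sketch of his point 2, evaluating Planck’s law (wavelength form) for the two temperatures across the thermal infrared:

```python
import math

# Planck spectral radiance (wavelength form), W m^-2 sr^-1 m^-1
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck(wavelength_m, T):
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * T)) - 1.0
    return a / b

# Compare the two objects over 1-50 microns on a coarse grid
hotter_everywhere = all(
    planck(lam * 1e-6, 400.0) > planck(lam * 1e-6, 300.0)
    for lam in [x * 0.5 for x in range(2, 101)]   # 1.0 to 50.0 microns
)
print("400 K radiance exceeds 300 K radiance at every sampled wavelength:",
      hotter_everywhere)   # True, so the net exchange always runs hot to cold
```

Whether the smaller reverse flux is then called back radiation or folded into a single net term is the bookkeeping choice the rest of the thread is arguing about.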
Perhaps you agree with Guenter Hess’s comment just before yours, in which he wrote: If that’s how it is, then if it were possible to block or divert the radiative flux going from the hotter object to the colder object, while continuing to allow the radiative flux from the colder object to the hotter object (a sort of diode), then the colder object would heat the hotter object. • Frank Davis – For the hotter object to radiate to the colder it must “see” the colder object. Since light rays must be able to travel backwards (rectilinear propagation), no such diode effect is possible. We are therefore forced to agree with Clausius that even for radiative transfer heat only travels from the hotter object to the colder object. Yes, I agree with Guenter Hess’s comments. 46. I would like to take this section of Chapter 1 as a point of departure for my comments. It says: “We have formulated a basic model of the atmosphere acting as an air conditioner/refrigerator by transporting heat energy from the Earth surface to the top of the atmosphere in a thermodynamic cyclic process with radiation/gravitation forcing, consisting of ascending/expanding/cooling air heated by low altitude/latitude radiative • descending/compressing/warming air cooled by high altitude/latitude outgoing radiation, combined with low altitude evaporation and high altitude condensation. The model is compatible with observation and suggests that the lapse rate/surface temperature is mainly determined by thermodynamics and not by radiation.” Yes, of course they’d like to formulate a simple “model” that works this way, as some of their other conclusions might then nicely fall into line, and in so doing to rewrite some laws of physics in the process. But unfortunately, their simple thermodynamic model is simply not the way the real atmosphere of the planet works, nor in fact the way the laws of physics work. It takes hardly anything more than a few basic real-world observations to provide proof that radiational balance is a far more potent regulator of atmospheric temperature than the authors of this book would like in their “simple” model. But then, isn’t that the point they are trying to refute? For observational proof, take the role of water vapor as a GH gas, using the predicted GCM forecasts that the planet will see higher night-time temperatures due to the increase in water vapor keeping more LW radiation near the surface. Witness to this is the fact that 37 U.S. cities and hundreds of other cities across the globe set night-time high temperature records in 2010, a year which saw record precipitation. Based on their simple thermodynamic cyclic process, this result would not be expected, as that additional night-time heat at the surface would surely have been carried away via convective thermal processes and added to the TOA output. This increase in global water vapor, measured over the past few decades, is exactly as predicted by every GCM when using well established and quantified GH physics with the additional radiative forcing caused by the additional accumulation of CO2 and water vapor in the atmosphere. Warmer night-time temps are exactly what one would expect when considering the real-world (i.e. measured) absorption and retransmission of LW radiation by increasing amounts of GH gases in the troposphere. Furthermore, one only needs to step outside on a calm cloudless winter night and then step outside on a similar night with a nice overcast sky to feel the radiative GH effects of the water vapor in those clouds (rough numbers for the size of that difference follow below).
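Rough numbers for that clear-versus-overcast contrast, assuming an effective clear-sky emission temperature of about 255 K and a low cloud base near 275 K with emissivity close to 1 (both figures are assumptions for illustration, not measurements):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Assumed effective radiating temperatures (illustrative, not measured values)
T_clear_sky  = 255.0   # K, effective clear-sky emission temperature seen from the ground
T_cloud_base = 275.0   # K, low overcast cloud base, emissivity ~ 1

dlr_clear    = SIGMA * T_clear_sky**4
dlr_overcast = SIGMA * T_cloud_base**4

print(f"downwelling IR, clear    ~ {dlr_clear:.0f} W/m^2")
print(f"downwelling IR, overcast ~ {dlr_overcast:.0f} W/m^2")
print(f"difference               ~ {dlr_overcast - dlr_clear:.0f} W/m^2")
# A surface losing sigma*T^4 upward therefore cools noticeably faster on the
# clear night, which is the everyday observation described above.
```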
I would ask the authors this: how would their model explain the warmer night time ground temperatures as measured throughout the world if not for the LW radiative effects of additonal GH gases? • Futhermore, one only needs to step outside on a calm cloud-less winter night and then step outside on a similar night when is has a nice overcast sky to feel the radiative effects of a smaller delta-T between the earth’s surface and the water vapor in those clouds. • Some data/citations, please? A word of caution – a clear night can feel much colder than an overcast one, even if the air temperature, as measured by thermometer, is the same. That’s because your perspiration evaporates more readily in drier air, so the perception of temperature can be largely subjective. 47. R. Gates: So, how do you explain the Medieval Warm Period? • Steven Mosher 1. our knowledge of the extent and amplitude of the MWP is VERY uncertain. 2. The presence of large amplitude warmings is evidence FOR long term natural oscilations, it is NOT evidence against the physics of radiation. 3. The final temp is the result of many forcings, not merely C02. Basically, your comment is OT to the discussion of the physics of the tyndall gas effect • Thank you Steven, I couldn’t have said it better myself, though I would welcome a discussion of the MWP on some other thread, perhaps in the context of Dansgaard-Oeschger and their likely Holocene cousins, the Bond events, a subject which fasninates me to no end… • not to hijack the thread but mostly for my own clarification, can we also agree to the converse: that the existence of the physics of radiation are not evidence against long term natural oscillation? Discussions such as this one may frustrate some, but I do feel they go a long way to clarifying what aspects of the science are clear and where and why there is uncertainty and/or a lack of clarity. Moreover, can we acknowledge basic processes but still differ as to their relative impact, rate and magnitude of change, and, of course, our ability to adapt to the changes they invoke? 48. To R Gates: Yes the model is simple but the point is that it is more complete (with thermodynamics) than a model with radiation only, which is the basic model of CO2 climate alarmism based on a “greenhouse effect” from radiation alone. • I would agree that both forms, thermodynamic and radiative, need to be included in any full understanding of the climate dynamics, but specifically, when speaking to the well-established science behind the GH properties of atmospheric gases, I believe the simple thermodynamic model falls far short, and can simply not explain or predict real world effects of GH gas increases as well as a GCM’s can when considering their full LW absorption/retransmission radiative effects. • Dear Mr. Gates, it is the other way around. The main physical reason for the effect of GH gases is not „back radiation“ , but rather the effect on the TOA balance, which is a decreasing outgoing longwave radiation (OLR), before reaching a new stationary state. “Back radiation” is only an internal energy flux that does not alter the energy content of the earth system. Changing OLR, however changes the energy content. The concept of emission height or “cooling to space” together with thermodynamics/lapse rate is enough to explain the greenhouse effect. Heat transfer by radiation, latent heat or sensible heat is enough. “Back radiation” is a parameter included in heat transfer by radiation ,though. 
Absorption/Reemission or “back radiation” alone cannot explain the greenhouse effect: I know that there are texts out there that try that, but they stay incomplete. • I agree that back radiation shouldn’t be invoked as the “cause” of surface and atmospheric warming. A TOA flux imbalance is required for the temperatures to change, but the mechanism by which the imbalance is transmitted by the atmosphere to the surface involves back radiation. If downward radiation to the surface didn’t increase as a result of greenhouse gas forcing and the consequent TOA imbalance, the surface wouldn’t warm. • “…but the mechanism by which the imbalance is transmitted by the atmosphere to the surface involves back radiation…” • In the context of the greenhouse effect surface and troposphere warm simultaneously because of the TOA imbalance, we have a radiative – convective equilibrium. The sun warms the surface. The net effect of longwave radiation is cooling to space, integrated across the globe. Back radiation increases with temperature, not the other way round. Back radiation is a parameter in the energy balance of the surface, even though you can measure downwelling radiation. Downwelling longwave radiation can heat a patch of surface, if the air is warmer on top of it. However, globally integrated downwelling longwave radiation is more than balanced by sensible heat, latent heat and radiative energy from the surface. Otherwise we would not have an decreasing temperature gradient with height on average. • Back radiation increases with air temperature, and in turn increases the temperature of the surface. That is how atmospheric heating from an energy imbalance is transferred to the surface. If the lapse rate is linear, the temperature changes equally at all laltitudes. In reality, lapse rates may not always be perfectly linear, but the approximation is a reasonably good fit with observations. It is not correct to imply that downwelling radiation only heats the surface if the air is warmer on top of it. It heats the surface even when the air is cooler, as is typically the case. • To avoid confusion about terminology, my point is that back radiation from an atmosphere cooler than the surface makes the surface warmer than it would be otherwise. The net IR flow is from the surface upward. • Steven Mosher Thanks Guenter. You will note however that now the conversation has shifted from Johnson defending his mistakes to you explaining how things really work. They are of course related. • Are we discussing CO2 greenhouse effect, or general atmospheric warming? It is important to note that there is no way to determine if “downwelling” IR has been emitted from CO2 or any other atmospheric molecule. All molecules and therefore all gas molecules emit IR. So “downwelling” IR should be expected. But that does prove a net increase in energy, or “greenhouse effect”. If you cannot show with real world experiment that more CO2 = higher temperature, you fail. “More CO2 = less temperature” Why? Because of specific heat capacity. • But that NOT does prove a net increase in energy, or “greenhouse effect”. I should have said! The origin of downwelling IR can be identified by its spectral signaature. Almost all will be from CO2 and water. • Incorrect. The spectral signature is not determined by the substance that emits IR but by the temperature of that substance compared to the surrounding ambient temperature when the IR was emitted. • Claes Johnson, do you agree with Will since he seems to be on your “side” of the debate? 
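Günter’s and Fred’s accounting (the TOA balance sets the change, the lapse rate maps it to the surface) reduces to two lines of arithmetic. A sketch using textbook round numbers, all of them assumed rather than derived here:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Step 1: effective emission temperature from the absorbed solar flux
S, albedo = 1361.0, 0.30                         # assumed round numbers
T_eff = (S * (1 - albedo) / (4 * SIGMA))**0.25   # ~255 K

# Step 2: surface temperature = emission-level temperature + lapse rate * emission height
lapse_rate      = 6.5      # K/km, assumed mean tropospheric value
emission_height = 5.0      # km, assumed mean effective emission level
T_surface = T_eff + lapse_rate * emission_height

print(f"T_eff     ~ {T_eff:.0f} K")
print(f"T_surface ~ {T_surface:.0f} K")
# Roughly 255 K aloft and 288 K at the ground. Raising the emission height (more CO2)
# with the lapse rate pinned by convection raises T_surface; on this accounting the
# extra back radiation at the surface accompanies the shift rather than driving it,
# which is the point Guenter is making above.
```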
• I am on no ones side Judith. I just happen to know that adding CO2 to the atmosphere does not cause warming. In fact it causes cooling. I have demonstrated it with experiment. I have given an explanation with supporting references with regard to specific heat capacity. Further evidence : • Will, If the experiment you reference is the one in the link given in an earlier comment: More CO2=Less Temperature”, then you should know that this experiment, even if conducted with utmost care and precision (which I doubt), proves quite the opposite of what you’re stating. For the container with pure CO2 SHOULD BE, by the very processes you claim don’t occur, be cooler than the one with “ordinary air”, as that “ordinary air”, would, I presume, contain ordinary water vapor, and as such, with a much greater percentage of “ordinary water vapor” and would naturally show a greater GH effect (assuming of course that all the other varibles are the same). In addition the experiment is flawed for many other reasons, for the title states “more CO2 = less temperature,” and in such an experiment one would expect to have a control container that is kept under the same conditions as all the others, and then one would expect that the only varible to change would be the amount of CO2 in a serios of other containers. One could then produce a series of data points that would show how the temperature of the container varied with the only variable being the change in the amount of CO2. All this aside, I highly suspect that the container with “pure CO2” is indeed that, as one can see condensation on the inside, and since CO2 (under these pressure and temperature conditions) is a non-condensing gas, then that condensation is most likely water vapor, so the entire experiment is invalid as the container is certainly not “pure CO2”. • Will I agree. Another proof: In a scientific argument, the judge is the observation, not the theory! • Visit the HITRAN database. Each IR emitter has a spectral signature. The temperature of the emitter vis-a-vis its surroundings is irrelevant, and in fact, the temperatures are for practical purposes identical – i.e., they exist in local thermodynamic equilibrium (LTE). The temperature of the emitter does influence the quantitative balance in the intensity of one spectral line vs another from that emitter, but the wavelength of the CO2 and H2O lines is almost completely unaltered by temperature – at least within the atmospheric range of temperatures. • “the wavelength of the CO2 and H2O lines is almost completely unaltered by temperature – at least within the atmospheric range of temperatures.” All you need to know: • Will – Using emphatic language (“Nonsense”) doesn’t strengthen a case that can’t be made. To the extent the site you link to is informative, it confirms my statement. It refers to positions, intensities, and line widths of CO2 and H2O, but with no suggestion that the wavelengths oft these molecules are shifted by temperature. Any such change under atmospheric conditions would be miniscule. If you have data to the contrary, link to it specifically rather than citing a long list of article titles. • Wrong again Will, the spectrum is determined by the identity of the emitter, however it can not emit more at any wavelength than that defined by S-B. • Which is determined by its absolute temperature. Which in turn is determined by its surrounding ambient temperature as per its altitude. Say above 5km @ -80º C . 
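Fred’s distinction between line positions and line intensities can be illustrated without touching HITRAN: hold the 15 micron CO2 band centre fixed (it is set by the bending-mode transition) and let only the Planck envelope respond to temperature. A sketch, with an assumed sample of atmospheric temperatures:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck(wavelength_m, T):
    """Planck spectral radiance at one wavelength, W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * T)) - 1.0)

# The 15-micron CO2 band centre is fixed by the bending-mode transition (~667 cm^-1);
# only the emitted intensity at that wavelength tracks the local temperature.
lam_co2 = 15.0e-6
for T in (220.0, 255.0, 290.0):     # assumed sample of atmospheric temperatures
    print(f"T = {T:.0f} K : radiance at 15 um = {planck(lam_co2, T):.2e} W m^-2 sr^-1 m^-1")
# The wavelength never moves; the intensity does. That is the distinction being
# drawn between where the lines sit and how strongly they emit.
```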
As for your comment below : “Not true, O2, N2 and Ar notably in our atmosphere do not!” (emit IR) So that would mean that 99% of the atmosphere cannot cool to space via radiation at TOA right? Come on! • Judith Curry wrote in her posting: We see again that there is little sign of that becoming true. • Downwelling IR is not “Backradiation”. Downwelling IR does not add energy to the system because it is energy which is already present. It does not cause net E increase. Let us leave the subject of downwelling radiation there. The so called “Backradiation” is the energy we expect to find from the claimed “greenhouse effect”. The ability of a substance to absorb/emit, or radiatively transfer IR does not say anything about its ability to store that energy. Increasing CO2, increases the radiative transfer properties of the atmosphere in the far infra-red region. How is that even remotely like a “greenhouse effect”? How does a decrease in overall resistance of a poor conductor such as air, produce an increase in temperature? It is unphysical. It is the opposite of reality. “The physics of deep convection have been formulated since 1958 and are based on sound thermodynamics and measurements on location. The trends of the temperature in the high atmosphere in the last half century are very negative, starting on this height where the convection reaches. That means that more CO2 has a cooling effect rather than a warming effect.” See here: Correct, now you’re getting it, which is precisely why the change in CO2 concentration is so important (999645 ppm of the atmosphere does not absorb or emit IR). • Phil you are silly. ALL substances above 0K emit IR. That is not controversial physics. Your misleading statement has been repeated many times by the warmist’s but repetition cannot make it true. Why are you here making such false statements and clouding the issue? • Steven Mosher Wrong. Please tell me you have nothing to do with the design of aircraft, sensor systems, or other devices meant to protect our country. Start with this design guideline. • LOL – Very good Mosh. As a former weapons instructor I appreciate why you attached this citation. Those rocket scientists certainly knew a thing or two about missle guidance, CO2 and the IR spectrum. • Another non-sequitur Steven? • All molecules and therefore all gas molecules emit IR. • Imho Johnson first chapter is quite good, and is consistent with the explanation Guenter gives. Chapter 2, on the other hand, is…hum, well, it is clearly inferior to classical black body radiation, which is the most polite thing I can say ;-) Problem is that below the troposphere, heat is exchanged by both radiation and convection (with latent heat release ), only conduction can mostly be ignored. So no simple model, either purely convective or purely radiative, is complete. However, all flux analyses I have seen show clearly that more heat is transported by convection (and a lot more when latent heat release is present) than by conduction. It follows that, if a simple model including only one heat transfer mechanism has to be chosen, better to use a convective one. Moreover, convective lapse rate is a stability condition, so I see it (and, from what I get, classic climatology “above the atmosphere=rigid shell level” see it the same) as a limit for temperature gradient that can not be exceeded, due to stability reason. It thus makes sense that one can derive a max ground temperature from TOA temperature using this lapse rate, without knowing exactly how much the heat flux. 
Above TOA, we have radiative transfer, so we know TOA temperature from S-B law. Heat flux is then determined by conservation of energy, convective heat flux is just what is missing to ensure equilibrium. The only error I see with this model is that it is too simple: 1D, and it does not take into account the fact that radiation is diffuse, so all radiation to space does not occur at a precise TOA level, it is only an average notion. But still, compared to simple shell-like purely radiative atmosphere (1D also, all those shells and the earth are considered perfectly conductive in the horizontal directions), the model with the lapse rate is head and shoulder above: at least it does not neglect the largest heat transfer to keep only the smaller radiative one because it is tractable. This is one of the biggest error in climatology vulgarisation: the CO2 blanket is completely wrong, but it may be enough for those allergic to science/mathematics. The radiative shell (or multishell) models are mathematically complex enough do deter those one, and thus is presented as a simple but usefull model. It is not, it is almost as wrong as the CO2-reflective blanket, and frankly, it paint a very poor image of climatology for those scientifically-minded enough to understand it, but who start to evaluate it compared to an earth-like atmosphere … • The first chapter has a major error in assigning the 10 C/km lapse rate to radiation while also referring to it as the dry adiabatic lapse rate. Radiation has nothing to do with the 10 C/km dry adiabatic lapse rate. A radiative equilibrium is isothermal, not isentropic. This mess confuses the whole later argument about lapse rates. • Jim D Bad news and good news First the Bad news …….”a major error in assigning the 10 C/km lapse rate to radiation while also referring to it as the dry adiabatic lapse rate. “…… The dry adiabatic lapse rate is given by dT/dh = -g/Cp Where g = Gravitational Field Strength Cp = Heat Capacity. In other word the temperature acquired by air molecules after contact with the surface drops by almost 10K per Km of ascent. Now in the case of the dry adiabatic troposphere although water vapour may be absent, CO2 being well mixed should be there as usual. However it seems to play no part that I can see. Even more alarming, in this Nasa description of the atmosphere with various conditions specified there is no mention of greenhouse gases! Surely the radiative effects of CO2 must get at least a tiny mention, shouldn’t they? Now the good news The greenhouse theory has been banished to the TOA. The radiative gases radiate long wavelength EM radiation to space to attempt an overall radiative balance for the Earth. It acts like the drain hole at the bottom of the bath. The Sun acting like the water flowing from the bath taps. If the drain hole is too narrow, water level rises(temperature); if too wide temperature falls. Now back to a dry atmosphere; the temperature lapse rate will still fall at 9.8K/km in the troposphere. The net effect then of changing CO2 and H2O vapour is to move the tropopause up and down. Now this truncated version of the Greenhouse Theory is one that I think is very plausible. • In some way we can agree that the tropospheric lapse rate is fixed by the dry and moist adiabatic lapse rates, and therefore its whole temperature profile is linked to the surface temperature, which is in turn affected by a radiative balance. 
CO2 can’t change the lapse rate, which is based on physical constants, such as g, cp, latent heat constant, gas constants, etc., but can only affect the surface temperature to raise the effective radiating level of GHGs. The troposphere’s only degree of freedom is the surface temperature in this simplified model that represents CO2 effects in one atmospheric column. • The lapse rate is determined by thermodynamics of moist air as long as there is a sufficient heat flow from the surface to the upper atmosphere to keep the real lapse rate at the adiabatic limit. That requires that the surface is warm enough to release the required amount of energy excluding that part that escapes through the atmosphere without being absorbed. The heat flow is a combination of radiative transfer, convection and advection of latent heat. Convection is the part that guarantees automatically that the temperature gradient cannot exceed the adiabatic lapse rate. Therefore the strength of the radiative transfer does not influence the result as long as the surface is warmed so strongly that the adiabatic lapse rate would be exceeded without convection. Adding CO2 influences the situation in at least two ways. The first is due to the reduction in the amount of energy that escapes without being absorbed. Due to this effect less energy is leaving directly from the surface. The same applies also to the low clouds. In equilibrium all this reduction must be compensated by increased radiation from the upper atmosphere and increased heat flow from the surface to the upper atmosphere. The second effect occurs around tropopause. The increased CO2 concentration moves the effective radiating altitude of CO2 higher up. Combining both effects we notice that the radiation that escapes from the upper atmosphere must be both stronger and originate higher up. Both requirements lead to an increase in the temperature of the atmosphere at a fixed altitude if upper troposphere near tropopause. The two effects are separate. The first comes from the increase of CO2 at lower altitudes, the second from its increase at tropopause. My understanding is that the first effect is stronger than the second, but I have not done any calculations to support this conjecture. • Pekka Look at a description of the broad outlines of the atmospheres structure with particular emphasis on the troposphere. There is no mention of the greenhouse effect. The effect of water vapour is explained through the mechanism of latent heat. Of course CO2 and H2O radiate in the IR. It just doesnt seem to be that important. • Bryan That is a description of certain issues. That something else is not mentioned there is not an argument against that. I didn’t notice anything there that would in some way contradict what I wrote here or in numerous other messages on this site. It is also dishonest to pick one sub-chapter from the tutorial stating that it does not discuss greenhouse effect when the previous sub-chapter does discuss it. I think you might try to avoid being dishonest. • Pekka …”I think you might try to avoid being dishonest.”.. I try to avoid using language like that. I have no way of knowing how honest you are but I give you the benefit of the doubt. I was genuinely surprised when I came across the NASA document. Beforehand I would have thought that the radiative effect of CO2 would have to be accounted for even in a dry adiabatic Earth atmosphere. In fact it would be a good experimental method of isolating the CO2 effect from the H2O effect in the limit. 
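For reference, the dry adiabatic lapse rate quoted above, dT/dh = -g/cp, is easy to evaluate numerically. A minimal Python sketch, assuming the standard values g = 9.81 m/s^2 and cp = 1004 J/(kg K) (the exact figures used upthread may differ slightly), and an illustrative 288 K surface temperature that is not taken from the thread:

    g = 9.81                     # gravitational acceleration, m/s^2 (assumed standard value)
    cp = 1004.0                  # specific heat of dry air at constant pressure, J/(kg K) (assumed)
    gamma_dry = g / cp * 1000.0  # dry adiabatic lapse rate, K per km
    print(f"dry adiabatic lapse rate: {gamma_dry:.2f} K/km")   # about 9.8 K/km

    # Illustrative temperature profile for a dry column with an assumed 288 K surface
    T_surface = 288.0
    for z_km in range(0, 12, 2):
        print(f"{z_km:2d} km  {T_surface - gamma_dry * z_km:6.1f} K")

The moist adiabatic rate is lower because condensation releases latent heat along the ascent, which is why the comments above keep the dry and moist cases distinct.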
There seems to be a growing body of opinion that the radiative effects of CO2 are either minor or self cancelling. A number of IPCC advocates are now promoting this and say the real and significant greenhouse effect is to be found at TOA. • To put it simply, CO2 affects the absolute temperature, not the lapse rate in a dry atmosphere. This is why it is important. It displaces the whole temperature profile according to its radiative effect. • Bryan, I have become less polite to you after your baseless insulting comments towards me some times ago. I told you that the previous sub-chapter of the same tutorial tells that the CO2 is important. Why do you neglect that and choose to concentrate on the next, which discusses other things. If you find the chapters contradictory, the fault may be in your understanding of the content and its significance. For that the only help comes from studying the basics. Trying to make guesses from more advanced texts (even when they are tutorials like in this case) leads often to such misunderstandings that are visible on this site all the time. • Jim D I think we are in close agreement about the broad outlines. On the dry adiabatic atmosphere I used to be a bottom up advocate. Surface temperature determined by Sun/Earth interaction. Gravity giving rise to lapse rate of 9.8K/km. This very simple structure then modified by convection, latent heat and radiative effects till the convective impetus petered out at the tropopause. Above the tropopause the radiative effects adjusted to keep the Earth energy in/out in balance. However recently I find the top down approach quite compelling. The TOA conditions acting like a gate. The consequences of the gate being too narrow being passed back down by the same dry adiabatic lapse rate to determine the surface temperature. • Pekka I’m sorry if I addressed you in a way that you found disrespectful. I think I used the word IPCC apologist rather than my usual term IPCC advocate so I must have been loosing my cool. I think that one undisputed plus for Judith’s site has been to tone down the insult level. However if you are a sceptic you have to develop a much thicker skin. For a laugh go onto a site like Deltoid and pretend to be Nasif Nahle. You wont get out alive! • Bryan, The net discussions are often difficult. Short messages cannot always transmit the tone correctly. Some of the participants are provocative by purpose, and some others write claims that they know to be false, even deliberate lies. In climate science and in particular in the physics behind the climate science there is very much that I have full confidence in based on my schooling and understanding based on that. There are many other things I have much less confidence in and also conjectures that I consider more likely to be false than true. In these discussions I comment most often on issues I am certain about. Trying to do that as well as I can and getting answers that show no evidence on willingness to learn, is often frustrating and leads to doubts about the goals and even honesty of other participants. All concrete hints to the same direction strengthen these suspicions. At the same time I know perfectly well that many points are difficult and cannot be verified personally without specialized education. I try to stay polite, but sometimes it leads to a point, where I start to think that I am played with and that I am making fool of myself unless I react strongly. 
I know that this is going to happen also in the future, if I continue to comment on climate sites. • Bryan, I think the dry adiabatic atmosphere can be thought of from both perspectives, top and bottom, which both lead to a requirement that the whole temperature profile is displaced in the warmer direction when CO2 is added. My view is that more CO2 initially reduces outgoing IR but also causes the surface to warm, which in turn convectively forces the atmosphere to warm, increasing the outgoing IR till it balances again. • I just came across this discussion, and since it was a discussion rather than an argument, I thought I would offer my perspective. In general, a TOA radiative imbalance due to impeded loss of IR to space is translated into more energy at each layer, ultimately impacting the surface temperature. In turn, this further warms the atmosphere over time as the surface temperature rises. The immediate result of atmospheric warming is an increase in lapse rate beyond the adiabat due to greater warming at low than at high altitudes. This results in static instability that triggers a convective adjustment restoring an adiabatic profile (which in most regions eventually proves closer to a moist than dry adiabat due to latent heat transfer with release at higher altitudes). The radiative changes are very rapid. The dry convective adjustment (according to Andy Lacis) is slower, and the full change including the latent heat effects occurs over many days or longer. The “super-adiabat” would tend to enhance surface warming because of the higher lapse rate. On the other end, the moist adjustment creates a negative lapse rate feedback that reduces the warming effect. This, however, is accompanied by a positive water vapor feedback, and the combined water vapor/lapse rate feedbacks are generally computed to show a net positive effect. • Fred, are you finally going to tell us about the hot spot? I believe you need a hot spot for there to be any appreciable top down warming don’t you? • kuhnkat, Is this really so difficult? Nobody claims that there would be warming in the sense you imply – nobody at least of people supporting main stream climatology. Therefore there is absolutely no need for such a hot spot. This is not in contradiction with the fact that atmosphere radiates to surface and contributes to a temperature increase. If you do not understand the point after all these discussions and hundreds of messages where it has been explained in different words, then I propose looking in the mirror. • Pekka, you are a very reasonable, intelligent, respectful person. I respect you for your knowledge and comportment. Unfortunately I am often none of the above. Frank started discussing heating at elevation which is caused by bottleneck in IR emissions. He did not give a mechanism for the purported bottleneck. He also talked about heating from the top down. With emissions bottlenecks, heating from top down, backradiation, and eventual heating of the surface, exactly what am I supposed to assume he is talking about?? I have actually read explanations of this effect and have always been confused about how the bottleneck comes about. The statements seem to say that the heating will raise the effective emission altitude as the heated atmosphere expands. As the new higher altitude is supposed to be cooler than the old average altitude less IR can be emitted. Hopefully you can clear this up for me. 
If the atmosphere expands from warming, doesn’t that say the higher altitude will be about the same temperature as the old altitude? That is, the altitude will average higher but the temp will be about the same because everything is warmer. If we are saying that this warming will not happen it would seem to me that the temperature is more controlled by the lapse rate and convection, in which case there will be no significant warming in the first place without major perturbation. Thank you for any clarification you can give on this “hot spot” issue. • kuhnkat, Nobody of us is capable of always finding clear expressions for his messages. While many issues are not really complicated, they involve anyway numerous details and attempts to explain the issues in limited space and simpler language requires leaving something out. All too often happens that just those things left out are for some reason in the mind of the other party of discussion. Another problem is that the concepts are not defined precisely. What means “warming a body”? In these discussions some participants expect that the effect that warms must be the final source of heat or energy that rises the temperature to its final value. A colder body can never do that for a warmer one. Many others mean by the sentence “body A warms body B” that taking the A away would lead to a colder B. This is very often possible even when A is colder than B, if B is heated also by some other source. I have still difficulties to understand why this second way of interpreting “A warms B” is not understood by everybody. I commented to the most recent post of Judith that many people can much better form general views on issues than present scientific type arguments in their support. It is very common, that the role of detailed arguments is overvalued. They are overvalued often both by those who are competent in presenting them and by others for whom a more general intuition works much better. This is also a source of dispute and confusion, when people are sure that they are right in the main issue, but cannot justify it by a detailed arguments. There is too much belief that detailed arguments are the way of winning argumentation, even when that does not work at all. In climate issues this fact comes up all the time. Even for experts a more general and intuitive approach may give more reliable results than trying to prove by detailed arguments when not enough is known about those details. • You certainly nailed it! Very good. This may be because it’s now past midnight in Sweden. 49. Judith, I want to comment that I am increasingly an admirer of your approach, especially on this technical thread. By letting others take a turn at being the authority, people seem to come to more openly examine their own ideas and knowledge – including errors. By just minding the store, wrong assumptions and weak knowledge claims are brought to the surface by others, instead of driven underground by your authority. It’s a better learning process than confrontation. 50. To complement the many comments made above indicating that the radiative transfer principles contributing to the greenhouse effect, including the role of back radiation (downwelling longwave radiation) are consistent with the laws of physics, it’s worth pointing out that the back radiation predicted from these equations has been confirmed by measurement. 
For a general overview, readers should revisit the Radiative Transfer Models post to review the links Judith Curry has cited, with particular reference to the Atmospheric Radiation Measurement (ARM) program – the post is at Radiative Transfer For a particularly informative description of the ARM program, see – ARM Prrogram 51. Claes, You write: “Let us now sum up the experience from our analysis. We have seen that the atmosphere acts as a thermodynamic air conditioner transporting heat energy from the Earth surface to a TOA under radiative heat forcing. We start from an isentropic stable equilibrium state with lapse rate 9.8C/km with zero heat forcing and discover the following scenario for the response of the air conditioner under increasing heat forcing: 1. increased heat forcing of the Ocean surface at low latitudes is balanced by increased vaporization, 2. increased vaporization increases the heat capacity which decreases the moist adiabatic lapse rate, if the actual lapse rate is bigger than the actual moist adiabatic rate, then unstable convective overturning is triggered, 4. unstable overturning causes turbulent convection with increased heat The atmospheric air conditioner thus may respond to increased heat forcing by (i) increased vaporization decreasing the moist adiabatic lapse rate combined with (ii) increased turbulent convection if the actual lapse rate is bigger than the moist adiabatic lapse rate. This is how a boiling pot of water reacts to increased heating.:” I think your model is incomplete, since the “heat forcing” as you name it is external and you describe only energy flux that is internal. “Heat forcing” increases the energy content of the earth system and therefore leads to increased temperature on the long run to decrease your so-called “heat forcing” by increasing outgoing longwave radiation (OLR). Your model leads necessarily also to increased temperature. You describe radiative-convective equilibrium as well. So what is different in your model compared to the classical model Best regards 52. “If they are wrong, prove it” Done already 53. One thing that always puzzles me when IR and the GHE are discussed is why on a nice clear summer day in Atlanta I don’t melt. I mean, we supposedly have an AVERAGE downwelling radiation of 324 wm-2. I would imagine that the downwelling radiation at noon on a humid day in Atlanta would be higher than the average due to all the water vapor in the air. Let’s make it 25% higher, or 405 wm-2. Now, let’s add the sunshine, which is certainly greater than 900 wm-2 at noon. So we now have 1305 wm-2 on my greybody. Using the SB equation, with emissivity of 1, that translates to 116 C. Something doesn’t add up. • Hi Jae… Excelent observation! You could calculate the energy the human body would absorb, from those 405 W/m^2, by knowing that it has an average absorptivity of 0.7. Imagine the hard work the body would perform for getting rid of that excess of energy! • A black body radiates 400 W/m^2 at a temperature 0f 17 C. It’s the sunlight that would cause a problem for an object unable to shed heat via perspiration, reflection, conduction, or respiratory heat loss. With 900 W/m^2 absorbed, its temperature would equilibrate at 82 C. • At an ambient temperature of 40 °C, a normal human body absorbs 43.4 W. That figure represents an intensity of 160.71 W/m^2. However, Jae mentions c.a. 1305 W/m^2 the energy emitted by the atmosphere, if the stuff of backradiation were true. 
Fortunately, as Jae points out in his post, it’s not true because, if it were true, the human body would absorb the dizzying amount of 913.5 W, which would represent an intensity of 3,383 W/m^2. On the other hand, if you are considering an idealized blackbody emitter, emitting 400 W/m^2, then the human body would absorb 280 W, which corresponds to an intensity of absorption of 1,037 W/m^2. Now, let’s consider a blackbody-ambient at 17 °C; the human body would be losing, not gaining, 23.17 W (-23.17 J/s), which corresponds to -85.82 W/m^2. • Your figures aren’t well explained. The average human has a surface area of about 1.7 m^2, so I’m not sure what you mean when you imply that 160 W/m^2 corresponds to 43.4 W absorption. More importantly, an ambient temperature of 40 C is very hot (and represents much higher than average back radiation). It is equal to a Fahrenheit temperature of 104 F, which is very difficult for humans to tolerate on a sustained basis, although they can adapt temporarily through sweating and panting. It is incorrect to state that 1305 W/m^2 is emitted by the atmosphere. Most of that figure comes from the assumed value of 900 for sunlight, which would be an immense problem for an individual who could not adapt, and would be unsustainable for any extended period. Back radiation has little to do with it. Finally, in the example I gave, which you cite, of ambient temperature at 17C, this is easily tolerable, because human metabolism generates enough heat to compensate for the heat loss. In fact, tolerable climates for humans require some degree of heat loss to the environment, because we can’t shut down our metabolism, and so if we couldn’t lose heat, we would quickly die. In essence, the values I gave in my earlier comment are correct, and the most significant problem in the cited example is the sunlight. • Fred: I originally thought you digged the conversation, bro., but it appears that you don’t have a clue! • Dear Fred, 0.27 m^2 exposed to radiation, unless it is naked. 40 °C is a usual temperature, here, during summer. The average absorptiviy of the skin, in a normal human being, is 0.7. I never said you’re wrong. I only made the calculations for the conditions you specified in your post. At 17 °C the human body would lose 23.17 W of energy, which would be transferred to the environment. It would be a problem if we were endothermic organisms. Fortunately, we are self-regulating thermodynamic systems; otherwise, we should spend many hours under the sunbeams, as lizzards, for example. Now, if you say that a blackbody at 17 °C is emitting 400 W of thermal energy, how much Watts it would emit in my location when the temperature can reach, easily, 40 °C in summer? • Nasif – A true black body at 40 C (313 K) would radiate about 544 W/m^2 in accordance with the SB equation. Humans can’t afford to sustain a body temperature of 40 C for very long. At 37 C body temperature, they lose heat by all the mechanisms I mentioned above, not just radiation. I’m sure humans can tolerate an ambient temperature of 40 C for intervals, but I doubt they can tolerate it for a very long sustained period, day and night, without some exogenous cooling source, such as drinking cold water. • Dear Fred, Exactly! An idealized blackbody at 40 °C would emit 544 W/m^2, which is not the case if we consider the real system atmosphere-lithosphere. 
The external operator, for the case of my location, where we undergo up to 40 or higher degrees Celsius during the summer daytime and 30 or more degrees Celsius through the nighttime (and, believe me, we have survived it through many days), cannot be other but the Sun, and you will agree on this because the atmosphere cannot “store” such load of heat. Primarily, because the absorptivity of the whole atmosphere, including a 4% of water vapor, is quite low (by the order of 0.01 when considering the mean free path length of photons and the time they spend to leave the Earth’s atmosphere). That’s why, I sustain that the current models on TAO (or TOA) are absolutely flawed. • I’m not sure what your point is. The emissivity of the atmosphere in the IR range of greenhouse gas emssion and absorption is certainly less than unity, but although the emissivity of any small atmospheric layer, even near the surface, is small due to the low concentration of greenhouse gases, the total downwelling longwave radiation comes from multiple layers and is substantial. Radiative transfer codes derived from the Schwartzhcild radiative transfer equations, in conjunction with observed values of CO2, H2O, and surface temperature, yield values for both OLR and downwelling radiation that match observations very well, confirming the validity of the principles on which they are based. • Fred… I’m referring to the time that a photon takes to abandon the atmosphere, as wide as it is, and to the distance that a photon can travel without touching a molecule air, those molecules that can absorb it or scatter it. From the databases of both parameters, we find that the air, as dense as it is, we find that the emissivity of the air, 4% of water vapor included, is 0.01; no more. The atmosphere is not a blackbody. Perhaps those observers on the downwelling radiation are observing other things, except any downwelling radiation? • Here is the reductio ad absurdum. A human body is like a black body at 37 C which emits about 525 W/m2. Now according to this theory proposed above, nothing can emit towards a human body that isn’t as warm as it, so when you go out at night you are losing heat at 525 W/m2. Wouldn’t you cool down really fast even on a balmy night with a 20 C ground temperature? The fact is, everything emits towards everything else regardless of relative temperature. We do have incoming radiation to us at night even from the cooler ground. Go out and try it. Explain how this is different from the atmosphere radiating towards the warmer ground. • Wow! Jim! You have got rid of S-B Law! Please, tell me, are you related in some way to the Hockey Stick producers? Besides, you made us, humans, real blackbodies! Jim, a human body has a temperature of c.a. 37 °C. If it (the human body) is exposed to an environment at 17 °C, it would lose 23.17 W, i.e. the energy transferred by radiation from the human body to that environment at 17 °C, according with the S-B Law derived formulas. No more. The formula is quite easy: Q = e (A) (σ) (Te^4 – Thb^4) Where e is the emissivity of the system (human body in this case), A is the area exposed area of the human body, σ is Stefan-Boltzmann constant, Te is the ambient temperature in K, and Thb is the average temperature of a normal human body. Go on, make your calculations. • See, you now have the ambient air radiating towards the body when the slaying book says it can’t because it is colder. • Jim… I’m not having the air radiating towards the body, but quite the opposite. 
The body is losing energy, not gaining it from the environment. Under those conditions, the body is pushed to generate more thermal energy, from metabolism, to maintain his energy state in a quasi-stable state. In summer, only when the environmental temperature is higher than the body’s temperature, the body gains energy from the environment; however, the thermoregulating system starts working to get rid of the excess of thermal energy absorbed. If you applied the S-B formula correctly, you had to obtain a negative result, which means that the body is losing energy, not gaining it; the body must generate more energy through the cellular respiratory process and other mechanisms for not cooling off, in this case. • The Te term in your equation comes from back-radiation is all I am saying. If you believe your equation, you implicitly agree with back-radiation. I am not saying your equation is wrong, I am saying it proves back-radiation exists. • The Te is the temperature of the environment and it comes from the energy it has absorbed from the surface. e (A) (σ) Te^4 Heat received from surroundings e (A) (σ) Tb^4 Heat emitted by body • Jim and Phil… You’re much confused. It’s the energy from the human body to the environment… Have you noticed that the energy flows ALWAYS from the warmer system to the colder system? Backradiation doesn’t apply because it is the human body what is radiating, not the environment. Again, for this case, the human body is LOSSING energy, NOT gaining it. • No, Thb is from the body to the environment, Te is from the environment to the body, which is why they have opposite signs. Since the environment is colder than the body, this is the term the slaying book says should be zero. We clearly agree the book is wrong on this matter. The environment is preventing the body from losing heat at an unrealistic rate of 525 W/m2 in the same way as the atmosphere prevents the ground from losing heat at an unrealistic rate (where a similar formula applies with Thb being from ground temperature, Te from the atmosphere). • LOL… Thb is temperature of the body in K, and Te is temperature of the environment in K. :) If you read well my posts, I’m always referring to an “idealized” blackbody. Got it or start again? • Nasif, you have already contradicted slaying the dragon by having the Te term, but you haven’t realized it yet. I suggest you argue with those authors about that term. I am not arguing about it. • I’m afraid it’s you who’s confused Nasif, the environment radiates according to its temperature and the body absorbs it, the body also radiates according to its temperature. The net effect is that when the body is warmer than its surroundings the body loses heat (when the environment is hotter than 37ºC the body gains heat). The environment doesn’t stop radiating because the warmer body is present, ‘back radiation’ is always present. that’s what the term, e (A) (σ) Te^4, represents. • Nope, confusion is on your side. I’m afraid you think the environment is never colder than your body. The formula is the S-B equation, and you’re blatantly misinterpreting and twisting it, as usual in AGW idea. • Good Grief, why dont you give it a rest. Nahle is correct. • Phil, you’re absolutely wrong. If you eliminate the term Tb^4 from the formula, you would be referring to the energy of the atmosphere. It has nothing to do with “energy received from surroundings”. You have only one term, the temperature of the environment, and it is the result of the FLOW of energy IN the environment. 
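Whatever one calls the Te^4 term, the arithmetic of the formula quoted above is easy to check. A minimal Python sketch in the thread's own sign convention, Q = e·A·σ·(Te^4 − Tb^4), where negative Q means the body loses energy; the emissivity and exposed area below are illustrative assumptions, not the 0.7 and 0.27 m^2 used by the commenter:

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

    def q_net(T_env, T_body, emissivity, area):
        """Net radiative exchange; negative = body loses energy to the environment."""
        return emissivity * area * SIGMA * (T_env**4 - T_body**4)

    # Per-square-metre black-body emission, for the figures quoted upthread:
    print(SIGMA * 290.0**4)   # ~401 W/m^2 at 17 C
    print(SIGMA * 313.0**4)   # ~544 W/m^2 at 40 C

    # Whole-body exchange with assumed parameters e = 0.97, A = 1.5 m^2:
    print(q_net(290.0, 310.0, 0.97, 1.5))   # 17 C surroundings: about -180 W (net loss)
    print(q_net(313.0, 310.0, 0.97, 1.5))   # 40 C surroundings: about +30 W (net gain)

Either way, the Te^4 term is the part both sides of this exchange are arguing about: it is the contribution of the surroundings' own emission, and the sign of the net flow is set by which temperature is higher.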
• To Phil… Anser this question for me: what the value of “e” could be in the formula that you say it is “energy received from surroundings”? If you are referring only to the temperature of the air, then you have to introduce the value of “e”, and in the case of the human body, you have to introduce the value of “e” for the emissivity of the human body. It’s very simple. You’ve dissected the formula and you’re referring to two different things. • To Phil… I suggest you look up Kirchoff’s Law, as the heat is being absorbed it should be ‘a’ not ‘e’, however a=e. • @Phil My question: “To Phil… Phil’s answer: You didn’t answer my question… Yours is blah, blah blah. I repeat, it is S-B equation. Check your books. If your environment has e = 1 and a = 1, you’d be scorched… Mmm… • Fred?? Everyone keeps telling me that we ADD all incident radiation, no matter where it is from, to determine what the temperature should be. What are YOU saying here?? • What I’m saying is the Noel Coward song line – “Mad dogs and Englishmen go out in the midday sun.” • Is THAT your scientific basis for all of your comments? • jae, absolutely correct. Another simple example of how the purely radiative calculation as proposed by Pierrehumbert and other climate scientists gives completely the wrong answer. The correct answer of course, is that you need to take into account other mechanisms of heat transfer such as convection and evaporation. This point has been made dozens of times on all these threads. • Bruce Cunningham There is convection etc, within the Earth’s atmosphere, but the only way that heat (energy) is released from the Earth to outer space is through radiation. Reflection of incoming solar radiation by clouds is the big question. Until someone can accurately define how this changes the amount of incoming energy, predictions of future temperatures cannot be accurately calculated. • No sweat? 1) Water vapor in the air changes from 1% to 4%. 2) CO2 in the air is about 0.038%. 3) Since the industrial revolution, the proportion of CO2 in air has increased by 0.01 % (from 280 to 380ppm) 3) Both water vapor and CO2 are greenhouse gasses. 4) A natural change in 3% of water vapor in air does not cause global warming. 5) How can a change in 0.01% of CO2 (3/100th of the natural change in water vapor) due to human use of fossil fuel cause global warming? • Your question is somewhat off-topic, but relative humidity has not increased over the past century, while CO2 has risen almost 40 percent over its pre-industrial concentration. Water vapor, in fact, has such a short atmospheric lifetime that its absolute humidity value cannot remain elevated in the absence of some other factor that causes the atmosphere to warm and thereby retain more water. That is why it operates as a feedback mechanism amplifying warming mediated by CO2, solar increases, or other forcings rather than acting as a forcing in its own right. If the average relative humidity had in fact increased by 300 percent, the warming would have been immense. I believe this has been discussed in the threads on feedback and on climate sensitivity. You might want to review the previous discussions before proceeding further, so as not to repeat material already covered. I don’t agree with “short atmospheric lifetime” argument regarding water vapor. Does not every half a day, the temperature drops at night? Is the “atmospheric lifetime” of water vapour less than half a day? Do not tell me what I discuss here. This is not your blog! • Girma Interesting point. 
In light of what Dr. Curry has said elsewhere: ..could you help me out by relating what you say directly to the topic at hand, and how the two connect? Much obliged. • Thanks, Bart. I tried to say it very tactfully, but your direct approach is better. • Please take discussion of water vapor to the Pierrehumbert thread. • Derry MCCarthy you might find an answer of sorts here by Robert H. Essenhigh, Department of Mechanical Engineering, The Ohio State University, Columbus, USA. In press in the journal ‘Energy and Fuels’, but now available at ACS website • Derry… The residence time of carbon dioxide in the atmosphere could be as long as you wish… The important thing here is that the lapse time for the thermal energy to stay in the atmosphere is quite low: 0.0097 milliseconds! The mean free path lenght of one photon of thermal energy is 21 m. Besides, from experiments realized by many physicists, at its current concentration in the atmosphere and under the current physical conditions of the atmosphere, the carbon dioxide cannot absorb-emit more than 0.002 of thermal energy. • “carbon dioxide cannot absorb-emit more than 0.002 of thermal energy.” Hi Nasif. Is this figure a percentage? If so, what is it a percentage of? Surface emission? • As you rightly imply, it cannot. 55. The bottom line in all this is that there is absolutely no proof–or even a reasonable demonstration–of an “atmospheric greenhouse effect.” All planets/moons with an atmosphere have a surface temperature that is much higher than the SB equations–based on the IR from those bodies–at about 100 mbar–suggest. It is high time that the “climate science community” HONESTLY faces the questions that are posed by the skeptics (and stop with the dishonest, unconvincing, meaningless, disgusting, and typically liberal insult of “denialists). The “community” has already lost the public and only has politicians and rent-seekers on its side. The smart ones are already publishing papers refuting the stupid, ever-present “catastrophe” of our times (aka, Chicken Little). Grow up! 56. Fred Molten, one can make an interesting thought experiment about “back radiation”. Let’s assume we have the earth system as a stationary state with 280 ppm CO2, well mixed. Normal lapse rate. In the first case, we bring in a thin layer of CO2 that contains a similar amount of CO2 compared to the whole atmosphere in a thin layer next to the surface. In the second case, we bring in a thin layer of CO2 that contains a similar amount of CO2 compared to the whole atmosphere in a thin layer next to the top of the atmosphere. Both layers are equilibrated with respect to temperature. “Back radiation” is highest in the first case, but surface temperature is lowest. It is the emission height that counts. As I said, it is the cooling to space that rules. That is, why I don’t think “back radiation” is a necessity to explain the greenhouse effect. Best regards • I don’t believe there is any way to warm the surface without back radiation. In its absence, radiative imbalances in the atmosphere would change atmospheric temperature but not surface temperature (except for the minimal effects of conduction). Regarding your thought experiment, my assessment is the following, at least at first consideration. If we ignore water vapor as well as non-radiative phenomena, I believe that the same number of CO2 molecules will absorb the same number of photons, regardless of altitude. 
At equilibrium, they will emit as much energy as they absorb, and the temperature of that layer will therefore rise until it suffices for that emission to occur. For the high altitude case, this would cause a temperature inversion such that temperature is much higher at the height of the absorbing layer than it is below. This is clearly an unphysical situation, but something vaguely similar occurs in the stratosphere, where ozone absorbs solar UV, resulting in a temperature inversion. There may be other factors that I’m ignoring in addressing your thought experiment, but my first paragraph rather than the second is what I would emphasize – the surface can’t warm unless it receives the radiation needed to warm it. • There is only one way the physics really works, but there are many ways of putting this into words and more than one way of formulating the equations used to calculate the correct results. There are no limits on the number of ways the physics can be misrepresented and we have already seen pretty many in comments on this site. Countering these erroneous claims is made more difficult by the fact their details may well be in agreement with some of the correct descriptions while the errors are in putting these pieces together. Some of the erroneous theories are pure nonsense from start to end, but not all of them. There is a continuing argumentation on whether one mechanism can heat an object which is actually receiving heating through many processes or from many sources. Then one may claim that any single process cannot heat it, if the processes are individually weaker than cooling of the object. Such arguments are presented as if all heat sources would not add up whatever their mechanism is and as if each of the heat sources would not have its share in the total heating. How can this kind of argumentation be supported by so many? 57. Claes Johnson, Your statement that “back radiation” is fictional, a figment of the imagination for any length of time longer than a fraction of a second, I totally agree. I will read your paper (book) as I get time and I might not totally agree with the methods you use to describe this. Maybe so. I have always viewed “back radiation” as a null operator: — 2 units or energy leaves a surface cooling the surface by that 2 units. — That 2 units are absorbed by molecules (GHGs) warming the gases locally. — 1 unit of energy is radiated to space and lost to the system and also cooling the gases by 1 unit. — 1 unit is radiated back to the surface to be reabsorbed warming the surface by 1 unit and also cooling the gases by 1 unit. — NET EFFECT: In the end the surface has cooled by 1 unit and 1 unit is lost to space, all in a few milliseconds. All other effects have totally cancelled. One way to view this is a reduction of effective emissivity of the surface by at factor near one half. That seems very close to your initial statements I was reading and I agree, there is no real warming. After reading onward I may not agree with the exact methods you use to place this effect into a physics framework but I will read it, that takes time. • Yes, Wayne, but if the surface was at a temperature that demanded it radiate 2 units and it only radiated a net 1 unit, then it is not in equilibrium anymore and its temperature must go up. (I think I got that right, I normally lurk on the technical threads and keep my head well down!) 
Regards, Rob • Hi Rob, I started to write you a detailed explanation, but after reading many of your comments, I’m afraid it would be pointless if you are not able to take my example above and limit it the exact case I gave. The two units must be radiated upward, those two are not all radiated upward (your injection of temperature), and those two units must be absorbed and not transported directly to space without absorption (window). If you can not grasp even that simple example there probably is no hope of you understanding Dr. Miskolczi’s methodology he used in his latest papers and which is very close to my example above. Kind regards. I like to lay low too. Open your mind, the AQUA AMSU temperature just hit the same temperature that was read thirty years ago, how can that be? If I were you I would get real curious right now. I have already found my answers. 58. Michael Larkin Whatever else can be said of this thread, I am enormously grateful for the earlier link to Roy Spencer’s explanation of the GHG effect which is worth repeating: This is the clearest explanation for non-specialists like me that I have ever come across. I’ve saved it to my hard drive. I have a question. Spencer says that with no atmosphere, the earth’s surface would be around 0 deg. F (-18 deg C or 255 deg K). Suppose all GHGs (but nothing else) were removed from the earth’s atmosphere (I’m assuming there would be no water on the planet). Would the temperature be greater than 0 deg. F? I’m hoping that’s on topic, because I’m trying to establish in my own mind whether just the presence of an atmosphere pretty much as dense as the one we have now, but sans GHGs, would in some way produce warming. I hope it makes sense to ask the question. • afaik, yes: what I think would happen is that all radiation would occur at the surface, because the atmosphere would be perfectly transparent for all wavelength, at by K., would also emit no EM radiation (In reality, it would not be like that, but I guess it is the idealised situation you have in mind). So, the surface T at equilibrium would be computed the same as in the no-atmosphere case. But I am not so sure about the T profile in this transparent atmosphere. Quite fast, we should reach the lapse rate for this gravity field and adiabatic fluid, by convection. I think, after some time, conduction should produce uniform T, which seems to be the no -heat flow limit regime (well, assuming 1D problem)… but I am not sure uniform T is the equilibrium in a gravity well, some equirepartition principle may mean that T goes down the higher you go (some interpretation of virian theorem would say so too, which makes sense: monoatomic gases modeled as elastic spheres, should have a lower velocity at top of atmosphere…else they would reach escape velocity) which would falsify simple conductive transfer, except if “total” temperature incorporate somehow potential energy. Interesting question, I would be interested about what gaz kinetic theory specialists would have to say about that, all in all my hinch would be for non-constant T and conduction process acting with a “total” T incorporating potential energy…. • yes, definitely a non-constant T at equilibrium due to gravity: after all, simple heat transfer linearly proportional to T gradient is not a fundamental law, it is derived from kinetic gas theory, one of the hypothesis being, iirc, no volume forces. 
Gravity is a volume force, so I am almost sure the Fourier law for conduction is not strictly valid in this case (it is a first order phenomenological law, nothing fundamental there), but that heat transfer must incorporate gravitational potential energy….. • Yes the convective equilibrium lapse rate is g/cp, about 10 K/km, so I would expect something like that. It is complicated by variations in surface heating with latitude and the diurnal cycle, so it is not clear what temperature this would equilibriate to over the surface, but since the non-GHG atmosphere has no other cooling mechanism than contact with a colder surface, the surface temperature would somehow control its eventual equilibrium temperature profile. Michael – Removing only GHGs would have slightly greater cooling effects than removing the entire atmosphere. This is because atmospheric molecules (O2, N2, CO2, etc.) scatter some sunlight back to space, and in their absence, all solar radiation would reach the Earth’s surface. The 255 K figure assumes no other changes. In fact, in the absence of water, there would be no ice, snow, or clouds, and the Earth’s albedo (percent of sunlight scattered or reflected back to space) would decline significantly. As mentioned above, some scattering would still occur from air molecules, and some from light-reflective surfaces such as sand, but it would be far less than the current 30 percent figure. As a result, the Earth would absorb more heat, and warm well above 255 K. I don’t know what the exact temperature would be. It would be colder than today, but probably by only a modest amount. • A small correction – Above, I should have omitted CO2 from my example of light-scattering molecules, because you were asking what would happen if it were removed. Of course, N2, O2, argon, etc., would remain, and their contributions would be little diminshed by the removal of a minor constitutent by volume such as CO2. • Michael Larkin Thank you for your clear and not-too-technical response, Fred. Might have seemed a peculiar question, but it elicited useful extra information for me. 59. To summarize my position: 1. Radiative heat transfer is carried by electromagnetic waves described by Maxwell’s equations. The starting point of a scientific discussion of radiation should better start with Maxwell’s equations than with some simplistic ad hoc model like the ones typically referred to in climate science with ad hoc invented “back radiation” of heat energy. If there is anything like “backradiation” it must be able to find it in Maxwell’s wave equations. In my analysis I use a version of Maxwell’s wave equations and show that there is no backradiation, because that would correspond to an unstable phenomenon and unstable physics does not persist over time. I welcome specific comments on these two points. 60. 2. Agreed. And I am not too comfortable with the model hierarchy used in Climatology: pure radiative models are imho correct,but they do represent the main heat transfer in earth system well…so are useless for earth. TOA+lapse rate is better, but I think they are not so solid mathematically, I do not really like the treatment of it. Should be consolidated, and then it is 1D, so predictive value is not clear, but at least this model could have heat transfer similar enough to actual heat transfer on earth to be somewhat useful. 
Finally, there are GCMs… but they are huge, use numerical methods I do not like (FD for something with complex continental shapes – yuck), and introduce a lot of approximation (solving the NS equations on Earth length scales is ridiculous… so it is not NS that is solved, but some kind of approximation of it). Never really have seen the PDEs that are solved in fact, which is in itself very worrying. Lots of black boxes modelling different processes connected to each other (radiative module – ocean module – salinity module – biological C cycle module), so it is more an ad-hoc model than something starting from first principles or even a solid set of PDEs. OK, not easy to do better, but the validation is pitiful for this kind of model, which lives and dies by extensive validation. 1. Not agreed: Maxwell’s equations are OK, but you need quanta (or a full replacement theory) to deal with radiative heat transfer. It is not even necessary to accept the black body treatment by Planck to know Maxwell alone will not be up to the task: those EM waves are radiated by molecules, which cannot be modeled by Maxwell alone. Remember the paradox of the Bohr atom model of orbiting electrons? Why do the electrons not fall into the nucleus, when all their kinetic energy should be dissipated by bremsstrahlung/synchrotron radiation? This was a fundamental problem (before or at the same time as BB radiation) that was solved by quantization. You may not like quantum mechanics (I myself have trouble with it, it seems like an unfinished and overly complex theory), but it is extremely successful, maybe the biggest success of physics. Going against it is a huge task; there is a reason it was accepted between the wars although it is quite often counter-intuitive: it explains and predicts a lot, much more than simple BB radiation. By the way, you continue to mention that backradiation would be unstable. A few posts (some of mine too) challenged this. You still have not explained why you believe it would be unstable, just that it is, and that it is a flaw of the S-B model. This is not a tenable position; you need to show how S-B is unstable and how your theory is not. Good luck! • So if you don’t accept Maxwell’s equations for radiation, which are then your equations and what do they tell you? • Maxwell’s equations are valid for propagation. For emission/absorption, you need to take into account the quantized nature of emitters/absorbers when those emitters are molecules or atoms. Which is the case for the IR wavelengths of interest. If you want to use continuous Maxwell down to atomic length scales and energies, you predict unstable atoms. Everything should go back to neutronium, which will be a problem for predicting EM radiation with Maxwell equations ;-) • Actually I believe that it is possible to describe the situation without the need for the standard way of introducing the quantization. The quantum field theory of electromagnetism (QED) is used in practical calculations as perturbation theory in the form of Feynman diagrams, but this is not necessary in principle. Similarly the quantum transitions of molecular states are introduced in the spirit of the Copenhagen interpretation of quantum mechanics. This is again not necessary while very useful in practice. Both choices are valuable practical tools in quantitative physical analysis, but they are not really required.
In principle one can formulate the whole problem by writing the full equations to describe all molecules in the atmosphere and all radiation by Schrödinger equation and Maxwell’s equations and possibly introducing modifications related to QED. There is no basic reason to assume that these equations cannot be used in another way, which does not involve the traditional way of quantization at micro level but aiming directly to answering some macroscopic questions. It may even be possible that this approach gives many results more easily and directly than the standard procedure. What I have seen in the text of Claes Johnson is certainly not a complete and valid presentation in this line of thought, but it may be partially correct and it might be possible to continue in this direction and reach correct results. I have full trust that the final results would agree with the results of the standard approach, but it is likely that the same results would indeed be reached in a way that does not include back radiation. This would be an extension of the idea of wave-particle dualism. The description in terms of waves does not include back radiation, but it would still give the same quantitative results. Agreeing with accepted physics does not require dogmatic adherence to the standard way of describing the details. • agreed, that’s what is a little bit disturbing about QD imho: not easy to draw where quantum description start, and where classical physics end. For example, a lot of classical QD imply wave/particles in external potential….but those potential are themselves caused by phyical objects, so by W/P assemblies. Why are they represented by perfectly know and unchanging potential fields them, like some kind of ghost of classical Newtonian entity? I guess QD has progressed since (I only have some training about early stage QD, probably from the Plank/Einstein era, and still it is vulgarisation). But I have 2 problems with C. J. approach. one is that is is hopeless imho to try to use Maxwell equations only, you have to introduce some quantization, or an equivalent effect, to avoid molecules to radiate even at 0K just by electron orbiting. Or you can say that bohr atom model is not correct, but this is just another way to make QD come back through the backdoor…As you said, QD can be introduced in many ways (which I find slightly disturbing, but I also agree with you that QD is one of the most (if not the most) succesful physical theory), but Maxwell has to be complemented somehow. C.J approach seems to be “add a phenomenological structural damping for elementary resonators”. I am fine with that, even if I think it explain less than quanta as introduced by planks and so is a poorer approach. The problem number 2, unfortunately, can not be bystepped just by saying that choosing the method is a matter of personal preference: the radiative exchange presented is not equivalent to S-B law, we have R = 4 s T³ (T-T_cold) versus R = s (T⁴-T_cold⁴). Not the same, and I prefer S-B for symmetry reason (the fact that each body radiates without having to know his surrounding is a huge plus in S-B), but here personal preference has no play: the difference is so high as to be easily tested by simple calorimetric experiment. Maybe I have misunderstood C.J., and his derivation is in fact strictly equivalent to S-B. But then, why the fuss? 
it is only a re-interpretation of the same formula, and by definition, should have exactly the same effect, being used for computing calorimeter calibration, heat exchange in a turbine, or the GH effect… • kai, I am not for CJ, I am only noticing that much of what has been used against it is not valid argumentation but shows a lack of knowledge about the variety of ways the same basic physics can be approached in practice. The full dynamic equations are very complex and cannot be solved directly. Therefore some ways have been developed for solving them stepwise. The standard approach goes through the microphysics. The method is based on perturbation theory, which is equivalent to introducing photons. The method also implies discussing the emission and absorption of the photons by transitions between the ground state and vibrational states of individual molecules. Each photon is a separate entity having a random phase of EM fields in relation to other photons. This is in accordance with a state collapse in the Copenhagen interpretation of quantum mechanics. Thus we describe the wide macroscopic phenomena as a combination of a huge number of independent microphysical phenomena. This leads to good results because the higher order terms of the perturbative analysis are very small and the coherence between micro-processes very weak. While the above approach has provided very good results, it is not the only possible approach to making the original, otherwise insoluble, problem solvable. Another approach would be to look at the macroscopic problem and use some clever averaging and smoothing to make the field equations solvable. I am not at all sure that this can be done in practice, but it is not excluded. If the approach works, it is likely to involve solving Maxwell’s equations with some clever way of describing the interaction of electromagnetic fields with molecules. This interaction must conform with the quantum mechanical description of molecules, i.e. with the Schrödinger equation, but this may be done without the use of the state collapse of the Copenhagen interpretation. Like Schrödinger’s cat, the molecules will remain both alive and dead, i.e. it is not known whether they are in the excited or in the ground state. What I have written is highly speculative and would find its proper home on a site where different interpretations of QM are discussed, such as the Copenhagen interpretation, many worlds, hidden variables etc. What I have written is in line with my own longstanding thoughts on these issues, and I do not know how many others would agree with them. • One may ask how the above second approach is consistent with the fact that we can measure backradiation with a measuring device. There is no problem in that. In that approach the electromagnetic field is present everywhere in space and the measuring device interacts with this field. In this approach the field is no longer forward or back radiation; it is just an EM field in a state consistent with all matter that interacts with it. Gas that is conventionally described as radiating back-radiation influences this field, reducing the energy carried by the field upwards, but there is no specific back-radiation. 61. Backradiation is unstable because it would correspond to a negative dissipative effect, which is unstable just like the backward heat equation with negative diffusion. This is well known and supported by solid math. You cannot unsmooth a diffused image by negative diffusion. If you don’t believe it, try it in Photoshop.
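One way to test the stability claim is simply to integrate the two-body Stefan-Boltzmann exchange and watch what happens. A minimal Python sketch, with heat capacities, starting temperatures and time step chosen purely for illustration:

    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)
    C1 = C2 = 1.0e4           # assumed areal heat capacities, J/(m^2 K)
    T1, T2 = 400.0, 100.0     # assumed starting temperatures, K
    dt = 1.0                  # time step, s
    entropy_produced = 0.0

    for _ in range(200000):
        q = SIGMA * (T1**4 - T2**4)                  # net flux from body 1 to body 2
        entropy_produced += q * (1.0/T2 - 1.0/T1) * dt
        T1 -= q * dt / C1
        T2 += q * dt / C2

    print(round(T1, 1), round(T2, 1))                # both relax toward about 250 K
    print(entropy_produced > 0.0)                    # True: entropy only increases

The temperature difference decays monotonically and the entropy production never goes negative, i.e. the net exchange behaves like ordinary, smoothing diffusion rather than the unstable backward-diffusion case.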
• It is not: "back radiation" (I put it in quotes, because there is only radiation, not main or back) is always lower than the other one. You cannot analyse the stability of "back radiation" only; you have to include all radiative exchange in your stability analysis. Including all radiative exchange, S-B always predicts net heat exchange from hot to cold, never the opposite. In a many-body case it is thus a diffusive equation, not a negative-diffusion equation, and it is thus stable. If you do not agree with this statement, you just have to provide an example using S-B relations that would lead to an unstable situation, where entropy would decrease (hot getting hotter, cold colder). Best would be to start from an isothermal state and find a perturbation that grows, but even in a non-isothermal situation, if you can find an example where entropy is decreased by S-B, it would be enough ;-) • Claes, for clarity can we also stipulate that "backradiation" in this context refers to the net energy increase caused by the so-called "greenhouse effect", as opposed to downwelling radiation, which is the result of general emission-based cooling of air at 30 km altitude and above. • There's no 'negative diffusion' and no instability. The cooler body transfers heat to the warmer one, but the warmer one transfers more heat to the cooler one, so the net heat flux is always from the warmer to the cooler. 62. Sometimes experiments serve better than words at resolving differences of description. Consider a 1-D system of two blackbody plates set at 100 K and 400 K. The Stefan-Boltzmann flux is 1446 W/m^2. Next, insert two intermediate plates, positions otherwise irrelevant. The steady-state temperatures become (100 K, 304.53 K, 361.62 K, 400 K) and the flux drops to 482 W/m^2. Next insert a central fifth plate. The temperatures are now (100 K, 283.67 K, 336.69 K, 372.36 K, 400 K) and the flux 361 W/m^2. Adding the fifth plate has lowered the temperature of one plate and raised that of another. Does Johnson's physics yield these numbers? (I have no idea!) If not, there's a simple experiment to do. If so, where's the beef? • "Sometimes experiments serve better than words at resolving differences of description." Sorry Quondam, I did not see an experiment, I just saw words. Please go and perform your experiment, preferably recorded on video, and see how that goes. I doubt you will achieve the same results as your "thought experiment". • I have no doubt that he would, since that is standard radiational heat transfer engineering, which is applied in such situations every day! I invited Claes several times to apply his method to such problems, but he ignores it. I guess the mathematician likes to derive his new equation but can't test it against real-world situations. 63. Perhaps on this point we could also ask: – How much time does the energy represented by a photon from the Sun spend in the Earth system before it is lost to space? – How many individual molecules does that energy represented by a photon from the Sun spend time in before it is lost to space? – Why does the surface only warm by 0.017 joules/m2/second during the height of the day when the sunshine is beating down at 960.000 joules/m2/second? – Why is there no "time" component in any of the greenhouse radiation physics equations? The inability of the greenhouse gas hypothesis to answer these questions is, to my mind, its undoing, especially since we are now finding so much empirical evidence telling us CO2 causes no warming. • Are these your questions, John?
Bill Illis asked the same questions at lucia’s last night. • John, I think those are very good questions to ask in my opinion. I have always thought that the best way we can understand the effect of manmade CO2 was to calculate the extra time energy spends in the earth climate system in response to an increasing greenhouse effect. I think that way of thinking about the problem gives us the best handle on how big of an issue it really is. I think the essential problem, however, is that the transient nature of climate is neglected in most of these treatments. As a molecular physicist who studies time-dependent transient behavior of absorbing molecules, it seems to me that this is the area in which climate science needs the most work. I heard a talk by Ricky Rood in which he told an audience member that the typical atmospheric transient is gone in a few days, yet La Nina and El Nino events, which represent the coupling of the atmosphere and oceans, have very long time transients. These could be represented in the amplitude fluctuations of the El Nino/La Nina events, their phase or their damping and range from a few weeks to a few years, maybe even decades. We don’t even know yet. So his answer struck me as quite odd. The steady state solution in most important cases in a limiting case. Since we (the community of scholars and interested public) are convinced this case is pretty well understood, it’s time to move on to transient scenarios that better model the real world each person sees on a year to year basis. I think from there we might be able to answer the questions you pose. I think they deserve an answer. • John – let me address your various points one at a time. 1. Time is an important component of computations involving radiative warming of the Earth and atmosphere as a function of the concentration of CO2 and other greenhouse gases. However, this is not because of the time needed for radiative energy transfer within the atmosphere, which is almost instantaneous. Rather, it is because heating of the surface is a time-related function of specific heat capacity, combined with elements of thermal conductivity, and in the oceans, turbulence and convective mixing. More below. I don’t know where your figure of 0.017 W/m2 for solar heat uptake come from – can you provide a reference to the relevant data? However, I’m not sure the figure is very meaningful. Land warms (and cools) much faster than water, but 70 percent of the Earth’s surface is ocean, and most of the heat from the sun and from back radiation originating in the atmosphere is stored in the ocean. Because ocean heat capacity is so enormous, diurnal changes in radiation entering from above exert appreciable temperature effects only near the surface. Mixing of the upper layers quickly averages out these effects, so that temperature changes in the entire mixed layer are very unresponsive to short term variation in radiation. For this layer, one tends to think in terms of months and years, and for the entire ocean, centuries to millennia – not hours. In essence, most of the W/m^2 radiated into the ocean is absorbed, the remainder being reflected as a function of albedo, which in the case of water is relatively small. Of course, increased absorbed radiation is met with an increase in emitted radiation, along with an increase in latent heat transfer via evaporation and convection. 
I suspect the figure you cited, if accurate, may refer to very superficial layers of the ocean, but in any case, one must specify what “surface” is involved when citing such statistics. Ultimately, the warming from the exposure you describe will be greater than the figure you cite. 2. The number of molecules among which a photon’s energy is diffused is astronomical because of thermalization. The vast majority of excited CO2 molecules, for example, are de-excited by collision with neighboring gas molecules, thereby raising the average kinetic energy (i.e., the temperature) of their surroundings. Since the energy of each collision is immediately distributed widely via further collisions, one would have to calculate a mean number based on the Boltzmann distribution. I’m sure it could be done, but I’m not sure how informative it would be for our purposes. 3. By similar reasoning, I’m not sure how informative we would find an analysis of the mean time a photon’s energy spends in the climate system, although the calculation could probably be done. Perhaps it would provide a clue as to the warming potential of greenhouse gases, but if so, it would be a very indirect means to that end. In explaining the greenhouse effect to non-scientists, CO2, water, and other GHGs are sometimes described as “delaying” the escape of radiation to space, but the description is misleading. It is true that energy radiated from the surface, and absorbed and reradiated many times before escaping is delayed in a temporal sense, but the time delay, which is extremely small by our mundane concepts of time, is not the mechanism underlying the warming. Rather, warming occurs because of a temporary imbalance between the incoming solar radiation and the longwave radiation escaping to space due to the fact that the GHGs intercept upwelling radiation and cause it to be reradiated in all directions including downward. This imbalance is translated into increased radiative energy absorbed within each layer of atmosphere down to the surface, and a balance can be restored only when each of these entities warms sufficiently so that outgoing longwave radiation, which depends on temperature, returns to its former level. Because escape is impeded by higher GHG levels at any given altitude, energy must reach a higher altitude for adequate escape, and since higher altitudes are colder, they must be warmed from below to mediate IR emission sufficient for a full restoration of balance. In essence, the greenhouse effect can be quantified not by asking “how long?” but rather by “how high, and how cold?”, and computing the results over a spectrum of wavelengths. These theoretical calculations are now well confirmed by observational data. 4. I’m surprised by your claim that empirical evidence refutes a warming role for CO2. I’m familiar with the climate science literature, including data from recent and current measurements, as well as data extending back more than 400 million years – all converging from multiple sources to demonstrate a very substantial role for CO2. It would be illegitimate in science to insist that any phenomenon, including a warming role for CO2, can be demonstrated with 100 percent certainty, but in this case, the level of certainty is high enough to approach 100 percent. I’m unaware of any evidence at all that suggests the absence of CO2-mediated warming, and so I believe your statement is simply wrong. However, I would be interested in appropriate data references that have led you to make your claim. 
In truth, though, the realistic element of uncertainty is not whether CO2 warms the climate appreciably, but to what extent. This quantitation has been the subject of numerous discussions here and elsewhere. • Fred, that was a thorough answer and quite informative. I did take notice of one particular statement. I think this statement is only meaningful if we assume the climate system is a strictly steady state system. Obviously, for a steady state system time dynamics are not interesting because we've assumed that they have dissipated, whatever they were. That is the definition of steady state. The climate system, however, is inherently dynamical, and it's the transients in the climate system that cause the upticks/downticks in snowstorms, hurricanes, floods (when people aren't causing them) and the other 'wonderful' events we witness in this world. I also think that it is incomplete to think that the radiative transfer happens instantaneously. I agree that when we focus on the gases in the atmosphere thermalization occurs very quickly, and when a lone CO2 or water molecule is excited and left alone long enough, radiative decay happens faster than we can perceive. That said, it may be possible for transients in the atmosphere to manifest themselves in other aspects of the climate system and get propagated for much longer times. Couplings to the oceans, cryosphere and biosphere are very poorly understood at this point in time, especially because they are heterogeneous. I can imagine reasonable cases in which transients of the greenhouse effect could cause plant growth that impacts an ecosystem for many years, or cause the overturning of a current in a different way, or melt/freeze portions of a glacier, in all cases causing changes that last much longer than 'instantaneous'. To the zeroth order, I think the steady state picture provides a useful tool. I just wonder if we've used most to all of its utility. • Bill Illis has answered these questions at lucia's Blackboard. Hey, Judy, how about a main post for this gem. It's shiny. • Maxwell – I agree completely that in our current non-steady state, time constants are an important element in determining climate dynamics. I tried to make that point when I mentioned the very long times involved in ocean heat storage. I can't agree with John that these elements are neglected, and in fact, both models and observational studies are often aimed at quantifying the time relationships. My more limited point was that radiative changes in the atmosphere in response to a change in radiative balance at the top of the atmosphere occur extremely rapidly. It is the non-radiative elements of climate dynamics, including convection in the atmosphere and energy transport and storage in land and oceans, that consume more time. 64. I've been looking for easier ways to understand what's happening to global temperature and why. The concepts of back radiation and the second law of thermodynamics both seem to me to make the reasoning very complicated, with the result that one can use these concepts to prove anything you want, including that the planet is cooling, or that it is warming. We've seen endless examples of this sort of reasoning not just on Climate Etc. but all over the web. So I asked myself, is there one single phenomenon to which all such questions can be reduced, which doesn't allow the outcome to be argued either way according to what one believes? I think there is. It is how many photons are leaving Earth. Or how much radiation, if you don't like thinking about photons.
There seems to be no serious debate as to how much radiation is arriving. The intensity of sunlight at 1 Astronomical Unit (AU) from the Sun, which is where we are, is around 1370 W/m2. The area of the Earth capturing this as a disk is around 127.5 million sq. km (precisely one quarter of the area of the surface of the Earth as a sphere). And Earth's albedo is around 0.3, meaning only 70% of the intercepted insolation is heating Earth. Multiplying these together gives 1.37 * 127 * 0.7 = 121.8, with the decimal point a further 3+12 = 15 places to the right when expressed in watts. This comes to 122 petawatts, a phrase that's easily googled if you want to check the math. For equilibrium, that is, in order to maintain a steady temperature, Earth must radiate 122 petawatts to outer space. Each photon of that radiation can come from only two places: the Earth's surface, or a molecule of one of the greenhouse gases in the atmosphere. These two sources of radiation behave very differently. Earth's radiation is sufficiently broadband as to be reasonably modeled as radiation from a "black body" at around 288 K. In sharp contrast the greenhouse gases radiate at certain wavelengths called emission lines. These lines coincide in wavelength, if not always exactly in strength, with absorption lines. The radiation leaving Earth can therefore be classified into two kinds: the black body radiation leaving the surface of the Earth, and the emission lines leaving the atmosphere. The last line of these tables shows that 80% of the blackbody radiation leaving Earth's surface is between 7.62 and 32.6 microns in wavelength. Some of these wavelengths are open to the escaping radiation while some are blocked by the absorption lines of the atmosphere's many greenhouse gases. The two dominant greenhouse gases are H2O or water vapor and CO2 or carbon dioxide, having respective molecular weights of 18 and 44. (There are variants of these with an extra neutron or two in each atom but those are in a distinct minority and hence can be ignored here.) Human population has been growing exponentially for many thousands of years, doubling around every 90 years or so in the past couple of centuries. The per capita fuel consumption has also been growing exponentially over this period, with the result that we are doubling our contribution of CO2 to the atmosphere every three or four decades. The late David Hofmann, shortly after his retirement as director of NOAA ESRL Boulder, claimed a more precise doubling period of 32.5 years, along with 1790 as the approximate date when the residue remaining in the atmosphere from our additions was 1 part per million by volume (ppmv) of CO2. He assumed this residue to be added to a natural base of 280 ppmv during the previous few centuries. Barring any strenuous objections to these numbers I'm happy to go along with them. The upshot is that we can estimate CO2 over the past few centuries as 280 + 2^((y − 1790)/32.5) where y is the year. For example if y = 2010 then this formula gives 389 ppmv, which is in excellent agreement with the CO2 level measured at Mauna Loa. All this arithmetic is mainly to make the point that we are increasing the CO2 in the atmosphere, while adding a little corroborative detail. Of the photons escaping from Earth's surface, some are at wavelengths blocked more or less strongly by CO2. Call a wavelength closed when the probability that a photon leaving Earth's surface will reach outer space without being absorbed by a CO2 molecule is less than 1/2, and open otherwise.
(Sometimes 1/e instead of 1/2 is used, in conjunction with the terminology of unit optical thickness, but it doesn’t make much difference to the outcome and 1/2 is easier to relate to.) The HITRAN08 database of CO2 absorption lines lists 27995 lines in the above-mentioned range from 7.62 microns to 32.6 microns. Currently 605 of those lines are closed. According to Hofmann’s formula CO2 will double by 2080, which will close a further 120 lines. This will leave 27,270 absorption lines of CO2 still open, of which only a further 2502 lines will close when and if the CO2 level rises to 40% of the atmosphere by volume, a more than lethal level for all mammals. Now the closed lines aren’t truly closed because they can emit as well as absorb. These account for the photons radiated to space from the atmosphere, as opposed to from the surface of the Earth. It is tempting to argue that increasing CO2 will increase the radiation from these closed lines. To see why this is wrong, picture the CO2 molecules in the atmosphere as grains of white sand on a black sheet of cardboard. When there are very few grains the cardboard looks black, but as the grains fill up it gradually turns white. Furthermore the more grains there are, the higher above the cardboard are the visible grains. The same effect is happening with CO2 molecules that both absorb and emit. For any given wavelength, with very little CO2 an observer in outer space looking at just that wavelength sees the surface of the Earth. As the CO2 level increases the observer starts to see CO2 molecules covering the Earth’s surface. And as the level continues to increase, the visible CO2 molecules are found higher and higher, just as with the grains of sand. But the higher they are, the colder, at least up to the tropopause (the boundary between the troposphere and the stratosphere). So radiation from CO2 molecules decreases with increasing level of CO2 in the atmosphere. This is not true of the CO2 molecules in the stratosphere, but there are too few of them to make a significant difference. This is a complete analysis of the impact of increasing CO2 on how much radiation leaves the Earth at each wavelength. It describes what’s going on both simply and precisely, unlike accounts based on back radiation and other phenomena which are far harder to analyze accurately. This analysis ignores the impact of feedbacks, most notably the increase in water vapor in the atmosphere expected from the temperature increase induced by the increasing CO2. That increase could work either way: more water vapor could block heat at other absorption lines since water vapor is a greenhouse gas. But water vapor also conducts heat from the surface to the clouds, a cooling effect. Hence the net effect of such feedbacks needs to be analyzed carefully. However the feedback cannot result in an overall cooling, since the feedback depends on CO2 raising the temperature in order to evaporate more water. The question is only whether the feedback reduces the warming effect of CO2 by some factor between 0 and 1, a negative feedback, or enhances it by a factor greater than 1, a positive feedback. It cannot reduce the warming effect to zero since then there could be no feedback. This pretty much covers the whole thing. • Vaughan: “This pretty much covers the whole thing. ” Nope, you missed out entirely geothermal energy loss from Earth’s core. Where’s that 5000 C degrees of heat going? IN = OUT or BOOM! 
Your equation means 'BOOM!' • John, you raise an excellent question, one that was asked in the 19th century. Based on the thermal insulating qualities of the Earth's mantle and crust, Lord Kelvin calculated that the heat at the core must be leaking out at a rate that would prove that Earth could not have formed more than 50 million years ago. However the geologists were unable to reconcile Kelvin's figure with what they were observing in the geological record, which suggested the Earth was billions of years old. This huge discrepancy was a great puzzle for a while, until it occurred to physicist Ernest Rutherford to calculate the heat that could be generated by a small quantity of radioactive material (uranium etc.) in Earth's crust. He found that it would not take much to exactly balance the amount of heat leaking out through the crust. If this were not so, in the four billion years of Earth's life the core would long ago have cooled down to something closer to the surface temperature. In effect the small amount of radioactivity in the crust is acting like a stove to keep Earth's core at a steady temperature over billions of years. Global warming has only kicked in strongly over the past half century. Compared to the billions of years in which the core could have cooled down but didn't, half a century is nothing timewise. • @ Vaughan Pratt… You say: I have made the calculations, from the observationally and experimentally derived formulas, and have found the results corresponding to the photons' mean free path and to the photons' lapse time before the absorbent molecules of the atmosphere hit or diffuse them. I have done it for each component of the atmosphere and for the whole atmosphere. The most relevant results are as follows:
Crossing time, whole column of mixed air (r = 14 km, wv = 0.04) = 0.0097 s
Crossing time, dry atmosphere (r = 14 km) = 0.0095 s
Lapse time rate, whole mixed air (r = 14 km, wv = 0.04) = 20.78 m
Absorptivity, whole mixed air (r = 14 km) = 20.79 m
Crossing time, water vapor at 0.04 (r = 14 km) = 0.0245 s
Lapse time rate, water vapor at 0.04 (r = 14 km) = 8.05 m
Crossing time, whole column of carbon dioxide (r = 14 km) = 0.0042 s (4 milliseconds)
Absorptivity, whole column of carbon dioxide (r = 14 km) = 46.8 m
Total absorptivity of the whole mixture of air (r = 14 km, wv = 0.04) = 0.01
Total emissivity of the whole mixture of air (r = 14 km, wv = 0.04) = 0.0096
Total absorptivity of dry air (r = 14 km, wv = 0.04) = 0.01 (rounded up from 0.0099)
Total emissivity of dry air (r = 14 km, wv = 0.04) = 0.0094
Total absorptivity of water vapor at 0.04 = 0.024
Total emissivity of water vapor at 0.04 = 0.0237
Total absorptivity of carbon dioxide at 0.0004, whole column = 0.0039
Total emissivity of carbon dioxide at 0.0004, whole column = 0.0039
Overlap water vapor/carbon dioxide, absorptivity = 0.024
Overlap water vapor/carbon dioxide, emissivity = 0.0235
Those are well-reviewed results, supported by observation and experimentation. Now tell me, do you think the "downwelling" radiation heats up the surface? Why talk about "downwelling" radiation when we know perfectly well that the energy can be emitted, with equal probability, along every trajectory? Besides, there is a photon stream, stronger than any photon stream coming from the atmosphere, that nullifies any "backradiation" or "downwelling radiation" from the atmosphere. The term "backradiation" is absolutely invented and incorrect; why? Because the air is not a mirror.
Why dismiss convection, when we know perfectly well that it is the prevailing mode of heat transfer in the atmosphere? • WordPress has mixed up all the lines corresponding to the data. Please go to the following table: Since both you and I have rejected the concept of "back radiation" as not helpful (if not for exactly the same reasons; in particular I don't consider it incorrect, just harder to work with), it sounds like we're both more or less on the same page regarding that aspect. I agree that convection can make a difference to the thermal insulating qualities of the atmosphere, for example by transporting heat from the surface upwards, e.g. via thermals. What I was focusing on, however, was the heat leaving Earth for outer space, which cannot be accomplished by convection because there is no significant flow of matter from Earth to outer space. Radiation is the only way available to Earth to shed the 122 petawatts of heat that the Earth is constantly absorbing from the Sun. Hence to understand how an increase in CO2 could heat up the Earth it suffices to consider how increasing CO2 blocks some of the departing radiation. One point I neglected to make is that the heating resulting from blocking radiation raises the temperature of the Earth until it is once again shedding 122 petawatts, the amount of heat it is absorbing from the Sun. The additional lines closed by increasing CO2 make for a smaller atmospheric window through which to push those 122 petawatts. In order to get the same amount of heat through this smaller window, Earth's temperature has to increase. This is analogous to having to raise the voltage across an increasing resistance if you want to maintain a constant current. 65. Judith, The misconception of science is that it is supposed to be a balanced system. It is far from it. Our concept is so far out of balance with what is actually happening, due to the passed-down theories that apply to ALL of the planet at the same time. Hmmm. Round Planet rotating. • Claes, the issue is this. For the past many decades climate researchers and physicists have put their equations, data and analyses out there. The story of IR emission by gases hangs together very well in terms of observations, theory, and radiative transfer modeling. The challenge is in your court to demonstrate that any of this is incorrect, and to put forward a coherent case that convinces people who are knowledgeable about the observations, theory, and modelling. IMO you have failed to do this. This isn't about exchanging equations. The body of physics and chemistry that underlies the calculations of gaseous absorption and emission made by line-by-line radiative transfer models is well understood, apart from some issues related to the water vapor continuum absorption under very high humidity conditions (this is understood in terms of the observations, but not theoretically, and hence is parameterized empirically in the models). • "The story of IR emission by gases hangs together very well in terms of observations, theory, and radiative transfer modeling." And yet all the counter-arguments which have been presented here, supported by hard evidence and real-world observation, are far more compelling. No thought experiment or computer model can change reality. As Richard Feynman said: • Judy: You just repeat a mantra without mathematical basis.
I prove that the "backradiation" of the KT energy budget, which you say you believe describes real physics, is not to be found in Maxwell's equations, which have been shown to model almost all of macroscopic electromagnetics. You say nothing about this proof. You are still convinced, and probably teach your students, that in some mysterious way a cold body sends out some mysterious particles which in some mysterious way heat a warmer body. It is a mystery at every step from a scientific point of view, but mystery is not science. I have demonstrated that "backradiation" is fiction, and it is now up to you to show that my proof or assumption is incorrect, or accept it as correct. Can we agree on this? So what is wrong with my argument? Have you read it? • Your argument is incapable of explaining radiational heat transfer, which is used in practical situations every day where theoretical predictions are confirmed by measurement. You have dodged the challenge to apply your 'theory' to a practical situation; until you do, you're just hand-waving. Show your working so we can follow your calculations; until you do, it's just 'hot air'. I'll await your calculations. • Sure, they are on the way. There are many things you can compute from Maxwell's equations. • You talk too much… Demonstrate that Claes is wrong with your own numbers. I'll be waiting here… … … … … • Do you dispute that if you put an infrared radiometer on the surface of the earth and point it upwards, it will measure an IR radiance or irradiance (depending on how the instrument is configured)? Go to for decades' worth of such measurements. And that this infrared radiation comes from IR emission by gases such as CO2 and H2O and also clouds? If you say yes, well, this is what people are calling back radiation (a term that I don't use myself). If you say no, then I will call you a crank – all your manipulations of Maxwell's equations will not make this downwelling IR flux from the atmosphere go away. • Dr. Curry, The IR irradiance from the lower temperature/frequency/entropy atmosphere cannot heat the higher temperature/frequency/entropy Earth, as explained by another author of "Slaying" here: even though "back-radiation" can be measured by a thermocouple or thermistor that has been cooled by liquid nitrogen to temps lower than the atmosphere in order to measure said "back-radiation." [alternatively, less expensive units can measure "back-radiation" at ground temperature by e.g. a thermistor increasing or decreasing resistance (depending on the type) due to the thermistor losing heat to the atmosphere, and a mathematical correction is applied to measure temps lower than the sensor] • How about a little thought experiment, or actually a quiz, anyone? Imagine two blackbodies: one has emitted a 9 um photon, which will interact with the other blackbody; the other has emitted a 10 um photon, which will interact with the first blackbody. Now each blackbody will be warmed by the photon it interacts with. The question is: which blackbody is warmer, the first or the second? I will listen only to those who can answer the question. • Warmer blackbodies emit more energetic radiation. Photons are very small. I have nothing to add. • Blackbodies make me warm. • I have nothing to add. • Given the amount of information you have provided for your 'quiz', it is not possible to tell which body is warmer. Two bodies at different temperatures can both emit photons at both 9 and 10 um. The distribution of frequencies emitted is very broad.
If you are more precise and specific with the question, I should be able to answer it. • The question is vague because the real question is whether or not there is a two-way flow of energy between the two blackbodies. If Claes Johnson is right, then Planck, Einstein, and the Standard Model are wrong, and there should be some exchange of Nobel Prizes. And maybe a photon can carry more than one piece of information. Like it needs to know where it has been and where it is going. • bob, I don't know why the question is vague, but it is. More to your point though: if one blackbody gives off energy, and another gives off energy, why wouldn't they flow energy to each other? Johnson is basically saying that one blackbody (the warmer one) KNOWS that the other blackbody is colder. And he is saying this by referring only to the source-less Maxwell's equations, according to his comments here. Did I miss something? • Now that I realize it was your question originally I take back my previous comment. You won't understand it. • No, you didn't miss a thing. That's what I have been trying to say, that Claes Johnson requires that the photons know where they are going and where they have been. • If Einstein is right, then Claes is right. • Nasif, you do realize that Einstein was the first to propose the existence of the photon, don't you? • @ maxwell… Don't you? I also know that Einstein deduced induced emission many years before it was confirmed by observation/experimentation. In 1678 Huygens proposed that light was a wave, contradicted in 1704 by Newton, who claimed light consisted of particles. Newton's particle theory was generally accepted over Huygens' wave theory until 1801, when Young's two-slit experiment showed that Huygens was right. The wave account then survived for a century until Einstein showed that Newton was right too. However Huygens had no idea what the wavelength was, while Newton had no idea how big the particles were or how a mirror could reflect them. So neither of them had as much claim to their respective theories of light as Young and Einstein, who were the first to actually observe, respectively, the wave and particle forms of light. Newton called the particles "corpuscles" while Einstein called them "light quanta." The snappy term "photon" was introduced later. • "The IR irradiance from the lower temperature/frequency/entropy atmosphere cannot heat the higher temperature/frequency/entropy Earth." Yes, but what it can do is reduce the loss of heat. When (for each square meter of surface) you have U watts of heat going up and D watts going down, with D < U, the net loss of heat from the surface is U − D. If U is 396 W and D is 0 W then the net loss of heat from the surface is 396 W. If however D is 333 W then the net loss of heat is only 63 W. This does not contradict the 2nd law of thermodynamics because the net flow of heat is still from the hotter to the colder entity; there just isn't as much flow between two entities that are at relatively similar temperatures. Although 63 W might seem like a lot of heat, in terms of temperature the difference is only 289 − 277 = 12 degrees. (100*sqrt(sqrt(396/5.67)) = 289 K.) (A short numerical check of these figures appears below, after the next reply.) There is incidentally a fundamental error in Dr. Anderson's website.
He says “Each time a greenhouse gas molecule absorbs ground radiation energy, it sends half of it back to the surface.” While it’s true that half the energy goes up (not necessarily straight up) and half goes down, the latter need not reach the surface because it may be intercepted by another GHG molecule first. That possibility is one of the things that makes it extremely hard to calculate just how much heat GHGs intercept. This is why I recommend my much simpler way of calculating it, namely solely in terms of the number of photons escaping to outer space. Those are the only ones capable of cooling Earth: if none escape the temperature will rise enormously. Those photons reradiated from the atmosphere bounce around the atmosphere, sometimes hitting the ground and sometimes escaping to space, and are much harder to reason about. Rather than even try to reason about them, just ignore them altogether on the ground that only those photons that escape to space make any difference to global temperature. • Vaughan, a couple of nit picks. Some of the IR goes sideways and depending on height some of the generally downward doesn’t even go to earth as the earth is round and not infinite, so, less than half goes in the direction of the ground. Increasingly less with altitude. As far as what is moving between earth and GHG’s, part of the argument is whether observed IR really transfers a quantum of energy that translates to heat. Various arguments include the fact that a photon is a wave front until it actually transfers its energy to something, which means in quantum mechanics it simply may not do it where we think it should. As there really do appear to be teleconnections between “particles” the photon may KNOW not to transfer its energy to the higher temperature bit just like in conduction where the material KNOWS not to move energy from cold to hot. Finally, fitting in with the idea of a slower cooling of the surface, the warmer surface may simply reradiate the energy from the incoming IR without it affecting the temperature. Then there is the older solid science of wave interference. Long before quantum theory was relatively solid it was known that waves interfered and cancelled each other. Why that is not considered as a possibility for colder not heating warmer or not slowing the warming I simply don’t understand. The energy equations show a NET energy flow and the interference, scattering, and cancellation could be components of creating this NET flow. In the case of a NET flow it should be noted that there would be NO slowing of the rate of radiation from the hotter surface unless the scenario where the photon coming from the colder source is absorbed and reradiated is correct. I am unsure why the lower energy photon would be able to cause a quantum increase in the warmer material though. Again, where are the quantum mechanics to explain this stuff!! My problem with the reradiation of the colder sourced IR is that there is an additive effect that would seem to cause more warming or at least extending the cooling time. This should be measurable. If it isn’t the effect probably isn’t large enough to worry about. The problem with the current numbers is that they do not appear to break out the effects of conduction from depth in the surface. This is a small effect, but, so is the amount of CO2 heating that is alledged to cause feedback with water vapor. So many choices and so few people with the skills to guide us to the correct conceptualization of what is actually happening. 
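As a quick check of the U and D figures quoted a couple of comments above (396 W/m² of surface emission going up, 333 W/m² coming down), here is a minimal sketch; the numbers are the commenter's round values, and the temperatures come from simply inverting the Stefan-Boltzmann law, T = (F/σ)^(1/4).

```python
# Minimal sketch of the U/D bookkeeping quoted above (round figures from the
# thread, used here only for illustration).
sigma = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_temperature(flux):
    """Temperature of a black body emitting `flux` W/m^2: T = (F/sigma)^(1/4)."""
    return (flux / sigma) ** 0.25

U = 396.0    # upward emission from the surface, W/m^2
D = 333.0    # downward emission from the atmosphere, W/m^2
net = U - D  # net radiative loss from the surface

print(f"net loss = {net:.0f} W/m^2")                       # 63 W/m^2
print(f"T for {U:.0f} W/m^2 = {bb_temperature(U):.0f} K")  # ~289 K
print(f"T for {D:.0f} W/m^2 = {bb_temperature(D):.0f} K")  # ~277 K
# The net flux stays positive (surface -> atmosphere) as long as U > D, so
# nothing here moves heat net from the colder air to the warmer surface.
```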
• Kuhnkat, The quantum electrodynamics (QED) developed by Feynman and others is an extremely successful theory in describing how photons are created and how they interact. It has been tested empirically to better accuracy than perhaps any other physical theory. From QED we know how photons interact with matter. We know that the photons do indeed release their energy in well understood ways. There is no chance that the alternatives that you propose might be true. • Yes. And I am almost sure you can check that low-frequency photons can heat "high"-temperature bodies, if, like most westerners, you have a microwave. The food you put in is usually between 270 and 300 K. It is very rapidly heated to 370+ K by photons at 2.4 GHz, about 0.1 m wavelength. I do not have a blackbody emission curve at hand, but this should be the typical maximum-emission wavelength for a bb of a few K, maybe 10 K max, no? Much lower than the food temp, for sure. So why is it heated? Because the magnetron is a coherent source? I doubt it; nowhere in the heating process is coherence required afaik… • Kai, why do you forget the concept of heat pumps? Is it just convenient to ignore the actual physics? • Kuhnkat, sorry, but here I completely fail to see any relationship between heat pumps and microwave heating. Apart from the fact that a fridge and a microwave oven are often quite close to each other in a typical kitchen or in the mall ;-) Seriously, you will have to elaborate a lot more before I consider my failure to see any connection something to be corrected… • Kai, how much energy does your microwave "consume" to heat your food and how efficient is it?? • It is very efficient. Do you think microwaves would have been introduced for industrial food heating if they were not? (The first ones were much more powerful than the current ones – they were scary ;-) ). Efficiency for a heating apparatus is something extremely easy to achieve though, depending on how you measure it. Typically almost 100% of the input energy is converted to heat… because everything is ultimately converted to heat! So I guess you refer to heat energy IN FOOD / input energy instead of total heat energy / input energy (which is usually near 100% – possibly escaping sound and EM waves are absorbed too far away to be counted). For the "food" efficiency, microwave ovens are very efficient. Magnetrons are nice, efficient devices (I am quite fascinated with tube-age power electronics, klystrons, fusors, all that stuff; nice that everybody has their own magnetron nowadays), and not much heat is lost outside the oven or transmitted to the container (well, it depends on what you put your food in). Or maybe you refer to efficiency compared to a perfect Carnot cycle (hence your heat pump reference). Sorry, I do not know of any food-warming technology using a heat pump. Maybe there is, but I never saw any. If there is, I guess for large amounts of food it can be more efficient than microwaves… Why, are you just about to bring to market an ultra-efficient combined fridge/oven based on a heat pump? Congratulations to you, but what does it have to do with C.J.'s theory about radiative heat transfer??? • Try this, Kai; it actually explains in detail how IR excites H2O molecules. Notice that if a molecule does not have the correct configuration it will NOT be excited by this method. So, what molecules exactly are preferentially excited by 15 micron IR from CO2??
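On kai's microwave-oven question a few replies above, Wien's displacement law, λ_max ≈ 2898 µm·K / T, gives a quick estimate of the blackbody temperature whose thermal emission would peak at the magnetron wavelength. A minimal sketch with standard constants (2.45 GHz is assumed here as the usual oven frequency; the comment said 2.4 GHz, which changes nothing important):

```python
# Minimal sketch: what blackbody temperature has its Wien peak at the
# magnetron frequency discussed above?
c = 2.998e8   # speed of light, m/s
b = 2.898e-3  # Wien displacement constant, m*K

f = 2.45e9                # assumed magnetron frequency, Hz
wavelength = c / f        # ~0.12 m
T_peak = b / wavelength   # blackbody temperature peaking at this wavelength

print(f"wavelength = {wavelength*100:.1f} cm")
print(f"Wien-peak temperature = {T_peak*1000:.0f} mK")  # roughly 24 mK
# A ~300 K body has its thermal peak near 10 micrometres, yet 12 cm photons
# still heat it: absorption does not require the photons to come from a
# source that is as hot as, or hotter than, the absorber.
```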
• Pekka, it is interesting you couch your closing sentence to me with the phrase “there is no chance” when theoretical physicists tell us that the universe may just be one of those chances that you blithely suggest doesn’t exist. Quantum mechanics, as I am sure you are much more aware than I am, is based on statistics. Statistics allow for many stranger things than my simpleton maunderings. But, I am a hardheaded simpleton. Can you refer me to experiments showing the increased radiation from a heated object caused by moving a cooler object close to it?? • You could probably do an experiment yourself at home that would test the ability of a cooler object to raise the temperature of a warmer object. It wouldn’t be perfect, but should give a reasonable approximation. Start with a cool room, and in the center, a 100 Watt light bulb. Turn on the bulb, and let the room temperature equilibrate. Also place a thermometer against the light bulb (shielded from outside influences) and record the temperature at the surface of the bulb. At this point, the bulb is radiating 100 W and the room is losing 100 W through walls, windows, etc. Now surround the bulb at a distance of about 1 meter with wire mesh at room temperature. The purpose of using mesh is to provide space for air currents to escape so as not to interfere with convection. We can also leave the mesh open at the top so that rising heated air will not affect it. Also, because the conductivity of air is very low, we can reasonably assume that most heat transfer will occur by radiation – admittedly, it would be better to perform the experiment in a vacuum, but that wouldn’t be practical. Place a thermometer on the mesh (again shielded, so that it records only mesh temperature). Allow equilibration. The room temperature will not change, because 100 W are still flowing into the room – the amount from the warmed mesh compensating for the reduction due to heat absorption by the mesh. Here are my questions: 1. Do you agree that the mesh will warm due to radiation absorbed from the light bulb? 2. Do you agree that the mesh will remain cooler than the light bulb surface, because not all the 100 W are absorbed by the mesh? 3. Do you agree that the warmed mesh will radiate some of the wattage it receives back to the light bulb? 4. Do you agree that the surface of the light bulb will also continue to receive 100 W from its internal heating element? 5. Do you agree that the internally generated 100 W plus the W from the mesh will exceed the wattage the light bulb surface was receiving prior to being surrounded by the mesh? 6. Do you agree that at equilibrium, the light bulb surface will now be radiating the W described in 5? 7. What do you think will happen to the temperature of the light bulb surface? Why? • Fred, if what you are suggesting starts to happen the filament increases its resistance changing the energy flux. • Assume a filament that emits a constant 100 W. How would you answer the questions? • Fred, I am happy to read about real experiments and discuss them to the extent my garbled knowledge allows. Are you planning on doing this one with appropriate instrumentation? • Kuhnkat – I’m not planning to do the experiment, because I don’t feel a need to prove anything. However, I would still welcome your thoughts about how it would come out on the basis of the questions I asked. 
I also wrote those with the thought that other interested readers besides yourself might appreciate the reasoning that has been expressed by many of us regarding the ability of a cooler object to raise the temperature of a warmer one, as long as the cooler object didn't depend on its own energy but could gain energy that originated from an external source. If you would like to answer the questions simply from the perspective of a thought experiment, I hope you'll go ahead. • Fred, How can my misconceptions contribute to the advancement of the discourse?? • For the distinction between downwelling IR and "backradiation", as already discussed and apparently ignored. And for an important understanding of why the "backradiation/greenhouse effect" is unphysical pseudo-science: also apparently being ignored. 66. As some of my writing in this chain may appear obscure and even to support Claes Johnson's texts, I want to make clear that I do not see anything wrong with the standard description involving photons, back radiation and transitions between the ground state and the vibrational state of CO2 molecules. I wanted only to say that the same physics with the same conclusions may perhaps be formulated totally differently. This alternative formulation would be closer to what Claes Johnson has presented, but would definitely not change the results of the standard approach, which rest on solid experimental and theoretical knowledge of physics. Thus I disagree totally with all his statements that would modify the final conclusions. • Well Pekka, either there is backradiation or there isn't. It can't be just a play with words, unless physics is a swamp where something can mean anything. • Claes, The physics is the same, but it can be described in different ways. The only way that is well developed and known to work includes back radiation. It may be possible to drop the particles and stick to fields without (second) quantization, but nobody has developed a theory on that basis. The wave-particle duality arises when ways are sought for describing quantum physics in classical terms. People cannot discuss directly in the language of quantum physics. Therefore such different classical-type descriptions are used, although there is just one real quantum physics behind them. Back radiation is part of the particle-type description. It would not be part of the wave-type description, if that really existed. The physics would still be the same. Using Maxwell's equations is a small step in this direction, but it has not been made complete (by you or anybody else as far as I know). 67. Tomas Milanovic Claes Johnson To your claim 1), aka the non-existence of "back radiation". As you prefer equations, here are just a few very simple ones. Let us consider 3 interacting systems: S1 is the void, S2 is the atmosphere, S3 is the Earth. We will consider that we know some things about the Earth and the void, but the atmosphere is complicated. There are clouds, moving gases, many mysterious and complex processes. So we will consider S2 as a black box where the only knowable parameters are the energy fluxes at the interfaces. The only assumption we will make is that S2 (atmosphere) and S3 (Earth) are in a steady state. They may transport and transform energy internally as they want, but they neither store it nor release it. For S1 (Void) we will assume that it is in an approximate radiative equilibrium with S2+S3.
If we call the energy fluxes F (W/m²) then we have the following equations: at the interface S1-S2 we have F1->2 = F2->1; at the interface S2-S3 we have F2->3 = F3->2. There is no contact and no interface between S1 and S3. That is 2 equations, 4 unknowns. However we can measure F1->2 and F2->1 and find that they are 340 W/m² and indeed approximately equal. Remark: of course the conservation of energy would require that I write the equation for the whole system and use energy (units J) for a certain time scale. However, once I have the TOTAL in and out energy, without loss of generality I can always divide the result by the surface of the interface and by the time to get back to fluxes (W/m²), which are more familiar. This of course doesn't mean that it is assumed that the real fluxes are 340 W/m² everywhere. They aren't. This "average" value is just what represents the energy conservation. Back to S3 (Earth). It behaves like a grey body to an excellent approximation and emits according to F = ε σ T⁴. When we integrate that over the whole surface and divide by the surface to get homogeneous units for all fluxes, we get a value of about 390 W/m². But radiation is not the only component of the F3->2 flux. We also have convection, conduction and latent heat transfers. These 3 components can be computed and estimated at about 100 W/m². Now only 1 unknown is left, the energy flux from the atmosphere to the Earth, and it is necessarily 390 + 100 = 490 W/m². What can that be? Even if the radiation from S1 (Sun/Void) goes completely through the atmosphere, and we know it doesn't, it is only 340 W/m². There would still be 150 W/m² missing. Convection and conduction towards the Earth are very weak because the Earth is generally warmer than the atmosphere. Part of the latent heat may possibly return. But whatever part of the 100 W/m² comes back to Earth, it is still not enough. As what is missing is neither convection/conduction nor latent heat, it can only be radiation. Conclusion: the atmosphere radiates "back" on the Earth (hence "backradiation") at a minimum of 50 W/m², but actually probably significantly more, because not all of the incoming 340 W/m² gets through and not all of the 100 W/m² of convection/latent heat returns to the Earth. Thus it appears clearly that one doesn't need any quantum mechanics, the second law of thermodynamics, or complex radiative transfer to conclude that the "back radiation" is a necessary consequence of the dynamics of the interacting systems S1, S2, S3, as long as they conserve energy and are in a steady state, at least approximately in a temporally averaged sense, which is what we indeed observe. Of course one can then become much more specific and explain how the "backradiation" can be deduced from first principles too. But I won't repeat what has already been written 100 times above; I wanted merely to prove its existence, which can of course be confirmed either directly or by measuring the fluxes I defined above. To your 2): I largely agree with this opinion. I have laid out on other threads the arguments for why I believe that. It has mostly to do with the fact that the system is governed by non-linear dynamics which lead to spatio-temporal chaotic solutions. Analytical or statistical considerations of spatial averages alone destroy all spatial correlations and have no possibility of recovering the right dynamics. As for the computer models, their resolution doesn't allow, and will never allow, the dynamical equations to be really solved.
What the computers produce are plausible states (e.g. states respecting more or less the conservation laws) of the system, but they are unable to discriminate between dynamically allowed and forbidden states. This inability to discriminate between allowed and forbidden states becomes of course worse when the time scales get bigger. • But you are dismissing something very important, the total emissivity of the carbon dioxide, which, from experimentation and observation, is quite low. Well applied, the algorithms give 0.02 for CO2 and 0.01 for the whole mixture of the air, including water vapor. I must say that the algorithms derived from experiments give a ridiculously small total emissivity for CO2, which is 0.002, at its current partial pressure in the atmosphere. Those are important parameters that are not taken into account by the current models. Carbon dioxide is not a blackbody, according to the most elementary definition of a blackbody, but a graybody. The ignorance of this physics issue, intentional or not, has led many people to believe in backradiation heating up the surface, or keeping heat in it. • Sorry, it should have said: "Well applied, the algorithms give 0.004 for CO2 and 0.01 for the whole mixture of the air…" • Tomas Milanovic Nasif I make no assumption about the black-box atmosphere, what it contains and what it does. I just observe and measure the fluxes at the interfaces and apply energy conservation for systems in a steady state. From there follows necessarily the existence of a radiation flux from the atmosphere to the Earth. I do not attempt to say how much or by what mechanism, because others have developed that ad nauseam. I demonstrate that observation tells us that the number is strictly positive, which is enough to establish its existence. • "At the interface S1-S2 we have F1->2 = F2->1 … There is no contact and no interface between S1 and S3. That is 2 equations, 4 unknowns." ?? Isn't that 2 equations and TWO unknowns? If you know F1->2, you already know F2->1, if they are equal. 68. One area where Claes' approach may give a new way of looking at things is a problem that has often been discussed on SoD's site. That is: what is the fate of the radiation from the colder object when it arrives at the hotter object? To keep things simple let's say both objects are blackbodies. Three tenable approaches are generally given. 1. No radiation from the colder object arrives. 2. The radiation arrives but is simply subtracted from the greater amount of radiation of every wavelength leaving the hotter object. 3. The radiation arrives and is completely absorbed. Let's see how the 3 approaches deal with a simplified problem. Let the colder body be at 290 K. Let's consider an area of 1 m² some way from the colder object. With the hotter object absent, this area has a flux of 100 W/m² passing through it. (This means 100 joules per second pass through the area.) If examined, the spectrum of the radiation would be blackbody, centred around 15 um. Now bring the hotter (1000 K) object to this area. Approach 1 says the radiation from the colder object no longer arrives at this area. I consider this to be unphysical and will now drop it as it seems unreasonable. Approach 2 says the subtraction of the radiation will still leave more radiation of every wavelength leaving the hotter object. This satisfies the Stefan-Boltzmann equation and also means that the colder radiation has no effect on the temperature of the hotter object. Approach 3 says the 100 joules per second are totally absorbed and add to the energy of the hotter object.
The temperature of the hotter object is increased, even if only slightly. Effectively this means that 100 J/s centred around 15 um is transformed into 100 J/s centred around 4.3 um. I would say this improvement in the "quality" of the radiative energy is forbidden by the second law of thermodynamics. Further, although approach 3 seems to satisfy the Stefan-Boltzmann equation, there may be a conflict there if the temperature of the hotter object increases significantly. For these reasons approach 2 seems to be the only correct solution. • As you said: this point has been discussed dozens of times on SoD's site. And your error is always the same: approach 3 is not "forbidden by the 2nd law". Approach 2 is impossible: it would suppose that the hotter object magically "knows" that the radiation comes from a colder object. Approach 3 is the correct one. • Ort Perhaps you could expand your reasoning as to why approach 2 is wrong. • "Approach 3 is the correct one." Quite; it is a blackbody, therefore it absorbs all incident radiation. There would only be a problem if the same number of photons of higher energy were emitted; however, this does not happen: fewer photons would be emitted to balance the additional incoming flux. An example: in my lab I used an Nd:YAG laser which emitted at 1064 nm, which I then passed through a crystal which doubled the frequency to give me 532 nm output. Two 1064 nm quanta are combined by the crystal lattice, which then emits a single photon at 532 nm; no thermodynamic laws broken. • Phil. Felton So you are saying that 100 J of radiative energy at, say, 15 um is thermodynamically equivalent to 100 J of radiative energy at 4.7 um? See Hockey Schtick post above. • Yes, 100 J is 100 J, just fewer photons in the 4.7 μm band. • Phil. Felton …"Yes 100J is 100J, just fewer photons in the 4.7μm band."… Now you must feel that this is on shaky ground. With other physical equivalents of the "crystal" you could input low-quality radiation, say from seawater at 290 K (radiative equivalent 15 um), and by suitable "crystals" transform it in stages into 4.7 um radiation equivalent to 1000 K, with no losses when absorbed. With such a device ships would have no need of fuel; they could simply extract it from seawater. I think this is a clear violation of the second law, so I think method 2 is correct. • Well, what you think doesn't matter. Frequency-doubling (and tripling) crystals exist and don't violate the second law, and neither does two-photon excitation microscopy. • @Phil… Oops! You've touched entropy. Does the entropy of a crystal diminish or increase? Does the entropy of that crystal's surroundings increase or decrease? Does the entropy of other crystals behave homogeneously? Would that crystal preserve its structure as long as the universe exists? You've got a biiig problem, and you did it alone. • Phil. Felton The point you bring up is very interesting. If a crystal can double the frequency of radiation with no energy loss then I will have to revise my understanding of the second law. I have been to several websites to get more background information. I have so far been unsuccessful. The more relevant ones seem to be behind paywalls. If you could provide a link to the thermodynamics of frequency-doubling crystals it would be a great help.
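Since the exchange above turns on whether frequency doubling respects energy conservation, here is a minimal sketch of the photon bookkeeping. It uses standard constants and the usual Nd:YAG wavelengths of 1064 nm and 532 nm; nothing beyond that is assumed.

```python
# Minimal sketch: photon energies in second-harmonic generation. Two photons
# at the fundamental wavelength carry exactly the energy of one photon at
# half the wavelength, so no energy appears from nowhere.
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s

def photon_energy(wavelength_m):
    return h * c / wavelength_m

E_fund = photon_energy(1064e-9)  # Nd:YAG fundamental
E_shg = photon_energy(532e-9)    # frequency-doubled output

print(f"2 x E(1064 nm) = {2 * E_fund:.3e} J")
print(f"1 x E(532 nm)  = {E_shg:.3e} J")
# The two totals are identical; the crystal changes photon number, not total
# energy, and real crystals do even that with less than 100% efficiency.
```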
My thought is that if it is in a powered system there may be a pump effect, whereas a passive system would have much less possibility for a pump effect to be happening. Why would anyone consider that combining two photons of one frequency into one photon of another frequency, with no change in net energy, would be a plus or minus to either side of the argument? It would apparently conserve mass and energy, and the frequency change is proportional? • kuhnkat As far as I know these crystals are only used in lasers. The total power output will be less than the input. So it might be using work to achieve what would not happen spontaneously, a bit like a refrigerator. However if someone can prove that a crystal can, without any losses, double the frequency of radiation then I will need a rethink on the second law. • Beats me! They’re a passive system, the crystal lattice absorbs two photons and is excited to emit a single more energetic photon (double frequency). This gives a good account: • Phil. Felton It seems the radiation has to be of very high intensity, like a laser. Later on they talk about increasing the efficiency. They don’t specify whether this is energy efficiency however. I will need to keep looking. • Thanks gentlemen. • I would assume a trade-off between frequency and amplitude. • Bryan, You are safe. The picture with the article shows a residual wave in addition to the desired second harmonic. It looks like only part of the beam is doubled and they filter out the residual for the microscopy. • kuhnkat Yes, it looks like a fraction of the fundamental went through and the desired output, the second harmonic, is then utilised. It’s strange that to make sense of this phenomenon we have to use the language of wave physics. Why should a particle phenomenon like the photon have harmonics? It lends support to Claes’ ideas. I would like to see a further analysis of the thermodynamics of this system. • Bryan, actually I do not see it as strange at all. If there weren’t serious issues with the whole wave versus particle bit it would not have taken so many great minds so long to come up with the current compromises. The fact they settled on quantum theory as the explanation in no way invalidates the experimental data on what appeared to be waves at work. I think there is an issue with people thinking of a physical particle when quantum theory doesn’t really say there are physical particles. My limited reading seemed to indicate that electrons moved closer to waves than waves moved to electrons. They are both just convenient ways for us to think about a set’s properties and how they interact. • One of the ways climate scientists obfuscate the physics is confusing energy and temperature. They are separate at the level that is being discussed in climate science. Getting a particular frequency out of a CO2 molecule does not mean it is at the temperature assumed by Planck radiation. The frequency is determined by the molecular bond and not by blackbody emission. The temperature would be indicated by the number of photons emitted by the CO2 molecule at atmospheric temperatures. Apparently CO2 has to be at combustion-chamber temperatures for Planck radiation to become significant. • In the cases I am aware of, the crystals which will take two photons and add them to create one photon of twice the energy are very carefully selected or designed materials for having that effect on a particular wavelength photon. They do not do this for other frequencies of radiation incident upon them.
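(A quick check of the energy bookkeeping in frequency doubling may help here; a minimal sketch in Python, assuming the usual Nd:YAG wavelengths of 1064 nm in and 532 nm out.)

# Photon energy bookkeeping for second-harmonic generation (sketch; assumes an
# ideal, lossless doubler fed by Nd:YAG light at 1064 nm, output at 532 nm).
h, c = 6.626e-34, 2.998e8            # Planck constant (J s), speed of light (m/s)
E_fund = h * c / 1064e-9             # energy of one fundamental photon (J)
E_sh   = h * c / 532e-9              # energy of one second-harmonic photon (J)
print(E_sh / E_fund)                 # -> 2.0: each output photon carries the energy of two inputs
print(100.0 / E_fund, 100.0 / E_sh)  # for 100 J, the photon count halves; the total energy does not change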
So, the case you describe is a very special case and, yes, no violation of physics occurs in that case. But water, rock, and dirt do not generally have this property. • Charles The radiation from the laser does seem to have some unusual properties. For instance it does not obey the inverse square law. • huh? It sure does obey inverse square law, if not energy conservation would be violated. Simply, it has very high directivity, the divergence angle of a typical laser beam is very low (often almost as low as his frequency allow). But within this very small solid angle, it sure obey inverse square law: in a perfectly transparent medium, the intensity of the laser will be much lower (and the surface illuminated much larger) 1 light year away, an even a few km away, the broadening is already noticeable… • Dr. Anderson, ‘But water, rock, and dirt do not generally have this property.’ The important distinction with respect to nonlinear optics is that the nonlinear optical process necessitates coherence in space and time between the mixing beams. From there one can get sum and difference frequency and harmonic generation. That’s why lasers are used in such situations. But those are not the only kinds of nonlinear effects possible in a material. There are many more incoherent nonlinear processes in which the different photons acting upon a material are not coherent. Excited state absorption and spontaneous light scattering are two such situations which have fairly high cross-sections. So while many rocks do not have the property of being crystals with specific bi-refringent properties, there are still many nonlinear optical processes that can occur, all which you neglect in the piece that has been featured in the comments here. • Ort, How does one electron magically know the state of another electron in quantum mechanics??? Magic is apparently how our world works. It does what it does and we must figure out the rules and make up explanations that are palatable to our limited minds. • Thanks for that Bryan, • Hockey Schtick Thanks for the link, he knows what he is talking about being a specialist in materials physics • Yeah but he’s made a few mistakes. Following his argument, by emitting photons the surface necessarily cooled the instant those photons left leaving energy states ready to absorb any returning photons. • In the equilibrium case of solar radiation flux upon the surface, the Earth’s surface temperature is constant and the emission of a photon does not cool the surface. Of course at night, with no incident solar radiation, the surface is constantly cooling as infrared photons are emitted. In that case, a photon absorbed by a water molecule or CO2 may result in emission of a photon from that molecule and the photon may be absorbed by the cooling Earth’s surface, thereby retarding the cooling. Where I said the emission of the photon from the Earth’s surface cooled it, I was talking specifically about the phenomena of cooling at night. I wanted to make sure the reader knew that I was not denying that the presence of infra-red absorbing molecules in our atmosphere can contribute to a retardation of surface cooling at night and to make it clear how it did this, when it could not do it in the case of the surface at a constant or increasing temperature. • Your support is clumsy: don’t you care that this theory is in contradiction with what Bryan said? His theory and Bryans “approach 2” cannot be correct at the same time. Choose one side. 
(anyway, they are both wrong) • Ort: no, it is in agreement with Bryan’s approach 2. And tell us exactly why you know “both are wrong” Phil Felton: doesn’t matter – obviously the process continues to cycle, with no heating of the hotter object • Of course, here are the conceptual differences: Brian: no backradiation (supposedly because the 2d law) with a theorical case of two blackbodies (so, total absorptivity). Charles Anderson: backradiation, but absorptivity of the Earth surface = 0 for the longwave radiations (all the confuse 2d paragraph). That’s obviously false: you can check any textbooks for the absorptivity vs. wavelength for all the different type of opaque materials. • Ort: no, Bryan’s approach 1 is “no backradiation,” which he dismisses. Approach 2 is that there is “backradiation,” but the colder objects “backradiation” cannot heat the hotter object. This is exactly what materials physicist Charles Anderson explains in detail, and you fail to understand why the absorptivity is effectively 0 by a hotter temperature/frequency/entropy body from a colder body – did you even bother to go to his blog post instead of just reading the small excerpt? • I did and he’s wrong! • Pekka ……”but the rate of radiation is not changed by the absorption. Thus the incoming radiation influences the heat balance of the body.”….. This seems to be self contradictory • Bryan, I was not fully precise. There will be an effect through increased temperature of the body. I meant that there is no immediate effect related to the absorption. For a real surface even this is not quite true, but only a very good approximation, but for a black body it is true. • Pekka With reference to the option2 and 3 in my post above. To all realistic intents and purposes there is little practical difference between them. The heat by calculated by SB equation goes from hotter to colder body. Option 3 has the unfortunate implication of upgrading the quality of the radiation from the colder object which conflicts with the second law. Also the possibility of an increase in temperature is a signature of Heat transfer from colder to hotter which Clausius said was forbidden. • Bryan, I answered to your other message on this point. Your argument is in error. • There is a curious definitional issue here. A black body is often defined as a body that will absorb all wavelengths of radiation incident upon it. This is a case however in which we need badly to talk about real materials, such as those in the surface of the Earth. I extensively use a technique called FTIR spectroscopy to identify and characterize materials in my laboratory. The technique commonly uses infrared radiation covering the range from 2.5 microns to 25 microns in wavelength. A material placed on a IR transparent window, such as diamond, is irradiated as the IR wavelength is varied and any absorption results in a scattering of the IR radiation so that much less is reflected back to the IR detector. If real materials absorbed all IR in this broad range of wavelengths, the technique would be pretty useless. The range of IR radiation wavelengths covers most of the spectrum of radiation from a material emitting IR at a temperature of 288K. Near IR spectroscopy covers the longer, low energy tail of the 288 K emitter and while absorption here tends to be greater, it is still much less than 100%. That makes near IR spectroscopy a useful technique also for studying many materials. 
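(For orientation on the numbers in that comment, a minimal sketch using Wien’s displacement law; the temperatures are simply the ones that keep coming up in this thread: the 288 K surface, the 1000 K body from the earlier thought experiment, and the Sun for comparison.)

# Wien's displacement law: wavelength of peak thermal emission (sketch).
b = 2.898e-3                  # Wien displacement constant (m K)
for T in (288, 1000, 5778):   # Earth's surface, the hot body above, the Sun
    print(T, "K ->", round(b / T * 1e6, 2), "um")
# A 288 K emitter peaks near 10 um, well inside the 2.5-25 um range quoted for FTIR.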
Most of the Earth’s surface is covered with water and the biggest window for water in the range of IR radiation near that of a 288 K emitter is pretty well aligned with the peak of the emitter spectrum. So water does not absorb all incident IR. Plants certainly do not either. Indeed, we often perform FTIR on plant materials and food products extracted from them. Near IR spectroscopy is also used on plants and food products extensively. FTIR is used less frequently on minerals because they commonly are not very good absorbers. • Sorry. I do not actually do near IR spectroscopy. I should have remembered that it applies to the IR wavelengths in the tail of the solar spectrum, not in the tail of the spectrum of an emitter at 288K. Near IR is therefore irrelevant in this discussion. • Your ‘theory’ of non absorption by a surface of thermal radiation from colder emitters (you don’t say what happens to the incoming radiation), is clearly invalidated by the fact that microwave ovens work, (see Kai’s posts elsewhere). The usual frequency used is 2.45 GHz (wavelength 122mm), your surface at 300K doesn’t emit much radiation at that wavelength! So why does that get absorbed in an oven? While you’re on here why don’t you explain that when you use your FTIR spectrometer you don’t have to do it in a vacuum because O2 and N2 don’t absorb IR, some of the ‘sceptics’ on here don’t believe that. Perhaps your practical experience will convince them? • The specific frequencies that 99% O2 and N2 absorb emit at are filtered out. Such a device would be worse than useless if that was not the case. • PF; Same answer as to most AGW silliness: it’s the H2O, st**id. • In that case, please explain me the approach 2 , and don’t forget Brian was talking about two black bodies. Now, about Anderson: “why the absorptivity is effectively 0 by a hotter temperature/frequency/entropy body from a colder body “. You fail to understand that the absorptivity of a surface, which is the proportion of radiation absorbed vs reflected, at a given wavelength, is a constant property of the material. No matter where 15um photons come from (from a cold body, a hot body, a distant body, a shaking body, an “active” body, a “passive” body), the ratio of absorbed 15 um photons is the same. Another time, you can check easily textbooks for the absorptivity vs. wavelength for all the different type of opaque materials : the position of Anderson is untenable. • Ort | So you quite happy that 100J of radiative energy at say 15um is upconverted to 100J of radiative energy at 4.7um, without any work being done? • Without an explanation, this question does not make sense for me. Details, please. (sorry to have mispelled your name in my last comment) • Ort If you look at one consequence of option 3 it means that 100J of radiative energy at centred at 15um is up converted to 100J of radiative energy centred at 4.7um, without any work being done? This is contrary to the second law. This is why option 2 is correct. It satisfies the Stephan Boltzmann equation without violating the second law • It has nothing to do with the second law. For the black body the wavelength of the incoming radiation makes no difference, when the amount of energy is the same. 100J heats by 100J. After the absorption it is in the heat of the body and for that the type of the incoming energy makes no difference, only its quantity in energy units. As stated by really many writers the black body absorbs also any wavelength whatever its own temperature. 
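(A minimal numerical illustration of that last point, using the two temperatures from the thought experiment earlier in the thread: both surfaces absorb what lands on them, yet the net flow still runs from hot to cold.)

# Net radiative exchange between two facing blackbody surfaces (sketch).
sigma = 5.670e-8                 # Stefan-Boltzmann constant (W m^-2 K^-4)
T_hot, T_cold = 1000.0, 290.0    # temperatures used in the 68. thought experiment (K)
emit_hot  = sigma * T_hot**4     # ~56,700 W/m^2 leaving the hot surface
emit_cold = sigma * T_cold**4    # ~400 W/m^2 leaving the cold surface
print(emit_hot - emit_cold)      # positive: the net flux is from the hotter to the colder body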
The second law has nothing to say about this. It tells us that more radiation goes from the hotter body to the cooler one than vice versa. It does not say anything about what happens when radiation hits a body. • “If you look at one consequence of option 3 it means that…” You seem not to understand the Stefan-Boltzmann law (radiation of the body occurs, even in a vacuum), nor the laws of thermodynamics. In fact, your assertion itself is totally confused and erroneous, linking two independent phenomena with apparently an implicit energy equality (is that what you call the “2nd law”?) which cannot be applied to your body, which is not a closed system! You have already had long, clear, detailed, repeated explanations of this on SoD’s site by different contributors, more patient than I am; so it seems I am wasting my time. • Ort Back to the previous question: you included the quote but did not answer the implication. Instead you ignored it and went into an irrelevant rant. If the increase in quality of the radiation does not happen then options two and three are the same. If it does happen the second law is violated. • There is no such thing as your imaginary direct process of “upconversion” by “work”, so, I repeat, your rhetorical question as formulated does not make sense. Emission of thermal radiation is a function of the temperature of the body (and, if not a black body, a function of emissivity, a material property) and that’s that. Period. If ever there is some incoming radiation, whatever its wavelength, it will be absorbed (black body). But no matter whether there is incoming radiation from other bodies or not, and whatever its spectrum may be, the state of the outside world has no effect on the emission of thermal radiation. Now, in all the possible configurations, if you do the sum of all the energy exchanges (including the radiative ones) between the black body and its environment and you find E_in > E_out, then the temperature will increase (as a function of the mass and the heat capacity). If E_in < E_out, it will decrease; if E_in = E_out, no change. There is nothing “thermodynamically wrong” here. With your choice of equal energy values (the last case), you tried at the same time to imply an imaginary direct causal link between emission and absorption, by means of “upconversion”, your word. You are supposing that the emission of thermal radiation is due to photoexcitation: you have invented some physics. You are now free to repeat ad nauseam “2nd law, 2nd law!”, but don’t expect another response from me. • Ort If you read the original post, it was about the consequence for the radiation, as it passed through the defined area, of having the hotter object there as opposed to its absence. Absent: 100J at a BB spectrum centred around 15um. Option 2: no effect on the temperature of the hot body other than to reduce the heat loss from the hot body. Option 3: an increase in the temperature of the hot body; the 100 joules is upgraded to be centred around 4.3um. This violates the second law as stated by Clausius: heat flows from a hot object to a cold object, never the reverse. The increase in the “quality” of the radiation reduces entropy => against the 2nd law. If the problem was solved using vectors, there would be a single vector pointing from hot to cold. • The case you cite is different. The incident radiation comes from a hotter source, not a colder source or one of an equal temperature.
When I measure the absorptivity of radiation in my lab, I use a light source with a filament or emitter which is hot compared to the material I am reflecting and absorbing radiation upon. LEDs are pretty cool compared to a tungsten filament, but they are still warmer than the room temperature object being examined for its absorption of light. Note also that IR detectors image objects warmer than themselves, not objects cooler than themselves. • Great explanation, IMHO! • Does your rant actually refer to Anderson’s paper or to something else? What is all this stuff about 1000 K, e.g.? Where does he say that the “ground cannot absorb low frequency light because of vibrational states?” Maybe I missed something? • jae, if you insist on making a comment, you ought to make sure you have read the content necessary for such a comment. From schtick’s comment taken directly from Anderson’s site, ‘The same is the case with some of the low energy, longwave infrared radiation returned from greenhouse gas molecule de-excitations. The Earth’s surface will not accept them since the excitable vibrational states are already excited and vibrating assuming that its temperature has not dropped since the returned photon was emitted by the ground. There simply is no available energy state able to accept it.’ He is saying that there are no ‘states’ that can absorb low frequency IR light because they are already in an excited state. If we ignore the factual inaccuracy of this statement to begin with (excited states still absorb IR light to get to further excited states), he is basically saying that there is an almost permanent vibrational population inversion in which there are more molecules in the surface of the earth that are excited rather than in their ground state. How else could he insist that, on average, ‘low’ frequency IR photons are not absorbed by the surface of the earth? If being in an excited state stops such a process, most molecules must be in such an excited state, right? Wrong. That is about as nonsensical a statement as one can make. If what he is saying were true, we could make a laser out of the earth. I’m not seeing the ‘earth-laser’ in the near future. On top of that, it’s not as though each molecule only has one excited vibrational state. Each electronic manifold has many, many such states, each with its own selection rules for absorption of IR light or scattering of light. So even if the molecule is in an excited state, it can still absorb a photon of the appropriate energy to excite vibrational population to an even higher-lying excited state. Because we are discussing vibrational transitions on the electronic ground state manifold, we do not have to take into consideration the topology of the potential energy surface itself. That means almost all of the overtones (excited state transitions) are of about the same energy as the fundamental. That means that the energy emitted by the decay from the first excited state will be very close to the energy necessary to make the transition to the second excited state from the first. That’s a great deal of quantum mechanics, but the point is that his premise is wrong to begin with, so whatever conclusions he makes with it are incorrect. On top of THAT, since he is discussing temperature, we can ask what the energy in a photon that excites the asymmetric stretch of CO2 corresponds to. Using Einstein’s equation and making an equality with the thermal energy from the Boltzmann constant, we find that such a photon has the equivalent of over 1000 K.
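(That number is easy to reproduce; a minimal sketch, assuming the CO2 asymmetric-stretch band near 4.26 um, i.e. roughly 2349 cm^-1.)

# Equivalent temperature of a CO2 asymmetric-stretch photon via E = h*nu = kB*T (sketch).
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant (SI)
lam = 4.26e-6                              # wavelength of the ~2349 cm^-1 band (m)
E = h * c / lam                            # photon energy (J)
print(E / kB)                              # ~3400 K, i.e. comfortably "over 1000 K"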
When we follow his logic (ground too warm to absorb ‘low’ frequency IR light) it falters on the fact that a negligible portion of the earth’s surface is over 1000 K. Therefore, using his false logic, the vast majority of the earth’s surface should still absorb IR light emitted by CO2 molecules, because those photons correspond to a temperature that is much, much hotter than the vast majority of the earth’s surface. So not only is this guy wrong on the front of the greenhouse effect, he is wrong about the optical properties of molecules and the optical properties of materials like the earth. I’m happy I came across him though. I wouldn’t want his firm doing any work for me. Who knows what he’d tell me. Can you follow all of that, or should I break it down for you even further? • Anderson writes: Which is baloney. He seems to be unaware that temperature is an average, for one thing. • “You’re telling me that the ground can’t absorb photons corresponding to a temperature of 1000 K? Really?” Hehe, your post just made me think of a perfect household example for challenging it (imho killing it, but let’s see what answer proponents of the no-absorption view can come up with): How can my microwave very efficiently heat my food, when it emits (a lot of) photons at very low frequency (2.5 GHz, ~12 cm wavelength, centered around the emission peak of objects much, much colder than my food!!!) • Right, in the context of the effective photon temperature, this definitely defeats Dr. Anderson’s theory. The photons have an effective temperature below 100 K (I think) while the food is at room temperature. • maxwell, “The photons have an effective temperature below 100 K (I think)” Note the term “effective” you use. Please get someone to explain what the significance of it is with respect to the discussion. • MW cookers are tuned to excite water molecules. That’s why the handle of the coffee cup is barely warmed while the liquid contents are strongly heated. I haven’t tried it, but I assume it would be hard to MW-heat Melba toast! • MW ovens heat materials with a strong absorptivity at the frequency of the oven, irrespective of the fact that the wavelength is long and the frequency far below the range of IR where the body emits most efficiently. In this respect there is no difference compared to the situation where long-wavelength IR heats a hot body whose emission peak is at much shorter wavelengths. For the heating to happen we need radiation at any wavelength where the absorptivity is high. For the heating power, the total power flux of the radiation is the determining factor; the wavelength is of no significance as long as the absorptivity is high. The temperature of the body has little influence on the absorptivity. • PP; your English comprehension skills have failed you. Obviously I was talking only about absorptivity, and made no suggestion that it varied with temperature. But many materials (ceramics and glass, fortunately) are almost transparent to the MW magnetron’s output! I assume, of course, that H2O’s absorption is related to its fingerprint wavelength. Is that not so? In any case, your point does appear to me to contradict CP’s assertions. My understanding is that the 2nd Law relates only to net energy transfers between bodies at different temperatures. However, the RATE of cooling of a hot body would be lower if another object of intermediate temperature were inserted near it, warmer than the background. If it were colder than the background, it would block some “incoming” IR and speed the hot body’s cooling.
IMO. • One thing I think we miss is that one of the bodies should be passive (with no self-heating) if we’re creating an analogy to CO2 in the atmosphere. Can radiation from a passive body return heat to a hot body and make it hotter? It can affect the hot body’s rate of cooling, but it cannot make it hotter. • Ken Yes that’s another way of looking at it. Certainly the atmosphere at night is passive in that respect. • Ken, it has to do with the energy balance. At thermal equilibrium, we are defining that the energy leaving a body is equal, on average, to the energy coming in. It seems like a stretch for the earth’s surface, but let’s make the assumption for argument’s sake. So the earth’s surface is at thermal equilibrium by absorbing visible light from the sun and emitting IR light back to a mostly transparent atmosphere. Now we begin to add molecules to the atmosphere that can absorb the IR light emitted by the surface of the earth and, upon radiative decay, emit IR light back toward the surface of the earth. We have now changed the energy balance of the earth’s surface by adding MORE energy in. In response to this energy increase, the earth’s surface increases in temperature so that it can emit MORE energy to come to a new energy balance. This new thermal equilibrium is at a higher temperature than the previous equilibrium. Does that make sense? • Maxwell You are a genius! WOW! I have a night storage heater which has a dial to increase the energy input and therefore increase the heat output. When it gets really cold I turn the dial up to get more heat out, but this cost me more money in energy. Now thanks to your brilliance I have just worked how I can save myself a small fortune. I don’t why it never occurred to me before but then thats why we have superstar-scientists like you, so we don’t have to think for ourselves right? Thanks to your genius I have realised that all I have to do is open up the front panel and stuff some more bricks in it. Then I will have changed the energy balance by adding more energy in. I owe you one Maxwell. Big time! • I think you’re confused as to the analogous relationship between your heater and the earth. In fact, I’m certain of it. By adding CO2 to the atmosphere, we’re changing the energy balance OF THE SURFACE! Those are two different systems. You’ve taken this distinct, squashed and mixed concepts in your example to create a incorrect assessment of the possible energy balances and imbalances at the earth’s surface. Well done. • Okay, Maxwell, I’m listening. Let’s imagine 2 black balls floating in space. One has an internal heat source and a constant temperature…the other is passive and a long ways away. Now we move the passive ball closer and closer to the active ball. Closer…closer…closer, then so close they touch. During this process, what does the temperature profile of the active ball look like? At any time, is its temperature measurably greater than it was when the passive ball was far away? • Ken, let me save you some time here. Even Dr. Spencer claims the passive ball will cause the heated ball to warm slightly. I don’t agree, but, there you go. • Ken – before I answer, how will the correct answer affect your perceptions about the greenhouse effect? • Well, Fred, here’s what I see. Joe Sixpack reads the headlines and saw An Inconvenient Truth, so he believes in global warming and thinks the earth will experience greater and greater temperatures because of increased CO2 concentration in the atmosphere. 
He thinks the earth’s peak temperatures are increasing and more and more temperature records will be broken as we SUV-boogie ourselves into blackened crisps. Now Fred, you and I know it takes a certain amount of energy to heat up the active ball to a certain temperature and, in order for the active ball’s temperature to increase…additional energy must come from somewhere. And, we know the passive ball will not add energy to the active ball. So, Fred, hit me with your best shot. I’m particularly interested in the instant just before the balls touch, when they are infinitely close but not touching, and then the instant after they touch each other. • You didn’t answer my question. How will a correct answer to your own question affect your thinking about the greenhouse effect? If it can be shown that the active ball will warm, will that change your mind about the greenhouse effect? If not, why not? I don’t want to waste my time, so I need a commitment from you before I take the trouble to give an explanation. • That depends on what you mean by “shown”, Fred. If all you have is a formula or a theory or a weblink, then I’m not going to be very influenced. If you have test data, taken in a vacuum, that shows a passive object measurably increasing the temperature of an actively-heated object purely via ‘backradiation’, then I will rethink my life’s mission to attack and kill the sky dragon. • Ken – think about it a bit more. It’s late, but we could probably continue tomorrow. I have not floated any black balls in space recently, so you’ll have to do without “test data”. It’s actually very easy to show, via principles that we all agree on, that the active ball will warm. I’m prepared to do that. My own dilemma will be the following: suppose Ken claims that he is not convinced, for whatever reason, but it is clear that other readers of this thread without a stake in the outcome will find the explanation convincing, and will judge Ken accordingly for his refusal. Should I go ahead? Well, we’ll see. That might be enough for me to proceed, knowing that perhaps someone else’s “life mission” will be profoundly altered for the better. Who knows? • Ken, You say: “in order for the active ball’s temperature to increase…additional energy must come from somewhere.” and “And, we know the passive ball will not add energy to the active ball.” That is not true. There is no doubt about the fact that the passive ball will add energy to the active ball as long as the passive ball is not as cold as the empty space. It will add more energy the closer it is, until it is brought into contact. At that point conduction enters and it starts to cool the warmer active ball through conduction. • Pekka An object (at say 200K) can only raise the temperature of another object if the temperature of the other object is less than 200K. If the other object’s temperature is > or = to 200K, its temperature cannot be increased by the first object. • ok, one last time: The temperature of the hot object is not increased by the cold object, it is increased by whatever heating mechanism made it hot (internal heater, the sun which is a much hotter object, a maser, pick your choice). What the cold object does is reduce the cooling efficiency of the even colder surroundings of the hot object. Reducing this cooling efficiency allows the heat source (remember, pick your heating mechanism of choice, but in the case of the earth it is the sun) to heat the hot object some more, until equilibrium heat_in = heat_out is once again reached, only at a higher temperature.
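(A minimal steady-state sketch of that argument, assuming an internally heated blackbody sphere that is then completely surrounded by a thin passive blackbody shell; this is not the two-ball geometry above, where the view factor is far smaller, but it shows the mechanism.)

# Steady-state temperature of an internally heated blackbody sphere, bare vs.
# enclosed by a thin passive blackbody shell of (nearly) the same radius (sketch).
sigma = 5.670e-8                      # Stefan-Boltzmann constant (W m^-2 K^-4)
q = 240.0                             # internal heating per unit surface area (W/m^2), arbitrary value
T_bare = (q / sigma) ** 0.25          # bare sphere: radiates q straight to space
# With the shell in place, the shell must radiate q outward at steady state, so it
# also radiates q inward; the sphere now has to shed 2q from its surface.
T_enclosed = (2 * q / sigma) ** 0.25  # = 2**0.25 * T_bare, about 19% warmer
print(round(T_bare, 1), round(T_enclosed, 1))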
Really, it is so simple that I think there is a (selective and deliberate?) blind spot that prevents some from considering the external heating; they somehow think that all the heat has to come from the cold object. Nobody sane has ever claimed that a cold object will radiatively heat a hot object if there is no third object or some other kind of heating mechanism in the hot object; not for any thought experiment, and not for the GH effect on the earth. • I actually think we’re making progress. I am an engineer, so naturally I think in terms of getting useful work done and what can be practically measured and verified. Also, as an engineer (and not an academic), I get to discount small effects that are irrelevant to the job I’m trying to do. If my job assignment was to add heat to an already heated ball, would I use a passive ball to do it? My boss would fire me for spending any time on that plan. If the effect can’t be measured, then it is something of the theoretical realm and not the practical realm. I don’t care about the theoretical realm…in that world there are billions of influences and arcane aspects to consider. Getting bogged down in them would make me a bad engineer…a slow and expensive man to work with. So, how effective, in theory, can a passive ball heater be compared to a heated ball? Radiation is a crummy way to couple heat energy, but we’re in space and that’s all we have. My heated ball is radiating in three dimensions (actually, four, I guess, if you allow the passive ball to modulate the heated ball’s temperature). As the passive ball is far away, how much radiation is it intercepting? It’s also radiating in three (or four) dimensions, so how much can it return? Let’s call it none. We move the ball closer. The passive ball gets slightly warmer. It radiates a bit more. Let’s skip ahead to where things get interesting. The passive ball is now infinitely close to the heated ball, as close as it can be without touching. The straight-line coupling between the closest point on the heated ball and the passive ball is infinitely short. That point on the passive ball is as close to the temperature of the corresponding point on the heated ball as it can be. Let’s say, at that point only, the temperatures are equal. So, at that point what is the delta-T? And, at that point, what is the radiation intensity? Now move away from that point of maximum coupling. The temperatures diverge, right? I didn’t mention the relative ball diameters, but I don’t care. Let’s call them equal. Visualize it. Visualize the cone of radiation from the heated ball irradiating the surface of the passive ball. How much? Not much. Now visualize the return radiation from the passive ball. How much of that cone intersects the heated ball? Not much. So, how effective is the heating created by the passive ball? A tiny fraction of outgoing heat energy coupled by radiation gets returned. It’s so close to zero that you can’t measure it. So, in the case of two balls coupled as much as they possibly can be…zero. Then they touch and conduction massively overwhelms anything radiation can do. In this case we say the action of conduction is the opposite of the action of radiation…and I’m not sure I buy that, but I’ll think that through later. You say radiation from the passive ball heats the active ball. I think I agree in theory. But I don’t care about theory. I care about the real world. Due to errors of measurement and noise, you can’t measure and quantify the temperature increase.
Once you live in a world where you’ll believe in things you can’t measure…you’ll believe anything. For example, you’ll believe back radiation from materials with low density, low temperature and low thermal mass (like atmospheric CO2) can heat things with high density, higher temperatures and larger thermal masses (like sea water). • A single passive ball, Ken, would not heat the active ball very much, as you state. However, a very large multitude of passive objects completely surrounding the active ball, equivalent to a…. well, an atmosphere, would cause significant heating. • That’s what your formulas and models tell you, Fred. You believe it. That’s fine. However, even the slightest, tiniest error in evaluating insolation and its linkage to surface temperatures would swamp out all the influence (and more) you attribute to 390PPM of CO2. • As ye give, so shall ye receive. The atmosphere is being heated back by the surface. If the surface is warmer, it’s warming the atmosphere more than it is getting back. Continue until equalized, at which point Fair Trade takes over and no change occurs. • Ken, Think of the passive ball as a reflector of radiation – well it isn’t really, but the effect is the same as far as the active ball is concerned. The active ball is being heated by the source, but at the same time radiating energy. The energy it radiates is energy lost. But some of that radiation is reflected off the passive ball and is so returned to the active ball, which means that the active ball doesn’t lose as much energy as it would were it not for the presence of the passive ball. As the temperature of the ball will increase while Eout is less than Ein, the active ball gets hotter. • Ken, Assuming an infinite temperature detection precision, I’d say that that both balls gradually get warmer as they get closer. When they touch, we’d have to know the heat conduction properties of the materials from which the balls are made. But before such a point, however, each ball is taking in energy (from conduction from the internal heat source and from radiation) and emitting energy via radiation. As they approach each, more of the energy emitted from each ball is being absorbed by the other. Therefore, we are changing the energy balance of each ball. In response to changing this balance, the balls increase in temperature so that they can create a new balance by emitting more energy back to space and each other. That is the simplest explanation of what is happening that I know. Does that makes sense to you? • Yes, I’m with you. You have not convinced me the temp difference in the active ball is measurable and I’m still a little puzzled about how energy is conserved in this system, but that’s okay. I think you guys would have been okay…you could play with SBL and radiative balance and study back radiation and had great, quiet academic lives…but the activists wanted to change the world and found you to be useful academics. The tough times you guys have coming is collateral. That’s unfortunate. On the other hand, you had your chance to denounce An Inconvenient Truth and the worst of the exploiters like Schneider, Trenberth, Hansen and Mann…and you were silent. • Ken, ‘You have not convinced me the temp difference in the active ball is measurable…’ We’d have to know the amount of internally supplied energy and the distance between, etc., but I’m not convinced we would measure it either. 
More dramatically, however, ‘…but the activists wanted to change the world and found you to be useful academics.’ What are you talking about? I’ve never talked to an ‘activist’…well, besides you. I think that’s a totally inappropriate thing to claim when 1) you don’t know my name, my work or even my opinion about climate politics and 2) I am being level with you on the topic of discussion here. I am not accusing you of anything illicit or demanding that you act in accord with my particular political beliefs on specific issues. Why do you insist on demanding the same of me? • You’re right, Maxwell. You’ve been polite and helpful and I do appreciate it. I’m a bit revved up right now and that’s not your fault. My editorial comments were uncalled for. In fact, to go even further, I will publicly apologize and send a $100 donation to the charity of your choice when you show me where you’re on record criticizing the alarmism and activism of An Inconvenient Truth. Fair enough? • You should search Class M, A Few Things Ill Considered, and Island of Doubt for maxwell’s comments. You should then take the $100 and donate it to Doctors without Borders. You can send me the receipt via email when you’ve done so. If you need proof of my connection with this username elsewhere, I have emails from Coby Beck at A Few Things Ill Considered that connect me to it. • I’m a bit busy today, but the quote below is good enough for me. I apologize to you Maxwell and I will send $100 to DwoB (and prove it). Usually figures like JFK jr and Al Gore are cited as people to believe or whose opinions on scientific matters are of value. It’s a bit depressing to me to so transparently see the newsroom editor’s bias, but that’s the way it goes. –Maxwell at A Few Things Ill Considered • Maxwell, contact me at and I’ll send a copy of the receipt…it’s done. That can be your random act of kindness for the day if you like. • Nice one, Ken. Good for you. • I have newfound respect for this process. Thanks Ken. • Ken – many of the people who are arguing with you are non-scientists and are skeptics. By accepting the science behind the GHE you are not forced to accept the rest of it! • I would add that some are scientists (well, for what it’s worth, with a PhD and quite a few papers published in reviewed scientific journals – not climatology though) and are skeptics too… • Indeed some of them are published PhD scientists who are extremely sceptical of claims that purport to overturn well-established and tested science, like CJ (who has still avoided a calculation based on his theory). Not to mention claims that N2 and O2 can absorb IR in contravention of all measurements! • The N2 and O2 are heated by collision, which is umpteen orders of magnitude more frequent and likely than radiative cooling of CO2. 69. Claes, I have read your chapters, your comments here, and your blog post. There’s no maybe about it: you’re a crackpot. Physics departments around the world occasionally receive manuscripts claiming to overturn 100 years of physics by misusing classical equations. Unfortunately your work reeks of this, and the many, many selected quotations (Abe Lincoln?) don’t help. This is neither silence nor ridicule, just straight talk. Thermally excited gas molecules in the atmosphere will radiate, even at night. Some of it will be directed downwards. This is your so-called “backradiation.” It should not require any equations to convince you of this.
Temperature is a statistical phenomenon corresponding to the average kinetic energy of a collection of particles. As many people above have patiently tried to explain to you, there will always be some molecules in a body ready to absorb an incident photon of the right wavelength, even if that photon was emitted by a body with a lower temperature. These are basic physical principles. If you don’t understand them, all the equations in the world won’t help you. It’s like buttoning a shirt; you’ve put the first button in the wrong hole. Like others before you on this thread, you have confused downwelling radiation with “backradiation”. These two are not interchangeable atmospheric parameters that you can flip between at will. They must be separately defined as follows. Downwelling radiation contributes NO net energy increase because it is energy which is already present in the system. Back radiation MUST be accompanied by a demonstrable and measurable net energy increase. Any crackpot can understand that. • Will, the document you linked to does not contain the word “downwelling,” so I’m afraid I found it of little use. Your comment raises other questions. Like, if something is measurable, is it not also demonstrable? Some radiation leaving Earth’s surface will be absorbed by the atmosphere, and some of that will be emitted back towards the surface. This results in a smaller net heat loss than if the process did not occur. But it does. • David, the link is to a paper which demonstrates that there is NO net increase in T since 1975 in the unadjusted Radiosonde data, which is accurate to 0.1º C. Unlike the surface record, which has an error margin of 1.3º C and is, provably, arbitrarily adjusted to fit the AGW narrative at will. The smaller net loss argument MUST be accompanied by increasing temperature above and beyond the natural signal. It has not been. Furthermore your smaller net loss argument is falsified by the fact that adding CO2 can only increase atmospheric transmission to space, not decrease it. This is substantiated by the Radiosonde data from the paper in the link I gave you. And has been demonstrated experimentally by myself. “AGW Debunked again.pdf” You are welcome to produce your own experiments if you think your results will be different. But you will not. Instead you will just wave your hands proclaiming this simple test to be invalid. But it is not. This simple £3.50 experiment has stood unchallenged bar the hand waving for more than 14 months. • What plastic are the bottles made from? Is it transparent to 15μm IR? If not, you’re wasting everybody’s time. • Phil, do you mind if I just ignore you now? • That’s up to you, but your experiment is not relevant to AGW unless it meets the requirement I stated. So if you wish to have any credibility you’d better answer the question. Otherwise we’ll have to conclude that either you don’t know the answer or are hiding the truth. • Will, the paper you linked would be much improved if it included citations for statements like: “There is no radiative heat transfer from SST to atmosphere.” A paper prepared for publication would include such references. Anyway, that statement is false. I stopped reading at that point. Regarding your experiment, on the contrary, I applaud it, and the effort. I have some criticisms however. The main one is that the 1 deg. C effect is probably smaller than the error of a fishtank thermometer. I’d like to know other things, like was there the same volume of water in each bottle and what was the temperature of the water.
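(For scale on that last question, a minimal sketch of why the water volume matters; the half-litre figure is an assumption, since the write-up does not state the volumes.)

# Energy needed to warm one bottle of water by the reported 1 degC (sketch).
c_water = 4186.0          # specific heat of water (J kg^-1 K^-1)
m = 0.5                   # assumed mass of water per bottle (kg); not given in the experiment
dT = 1.0                  # reported temperature difference (K)
print(m * c_water * dT)   # ~2.1 kJ; a bottle with half the water needs half the energy for the same dT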
• So you are telling a Professor in Applied Mathematics that he is a crackpot. Well well. Not a very civilized way of discussing mathematics and physics, is it? • Yes I am, because he’s wrong and won’t listen to reason. Besides, he said he was maybe a crackpot first. We just disagree about the maybe. • It doesn’t make you look good attacking the person, instead of discussing the matters at hand. • Ah but I am discussing the matters at hand, Kenneth. His statements and the concepts behind them. Have you really not seen that? Johnson has a lot of ideas that have no foundation, such as that statistical thermodynamics doesn’t explain reality and that physics can’t be explained with words. And when you call him on it, he deflects. That’s a crackpot. • Claes is trying to explain to you what he thinks is correct too. When you don’t agree, he does not call you a crackpot. When he deflects, he is trying to be polite. My guess. • No, he’s not trying to explain. And he’s not being polite. “At least I use Maxwell’s equations.” Like I don’t? And he can call me a crackpot all he wants but it won’t stick, because I’m not the one going around saying statistical physics is bunk based on ignorance plus a misinterpretation of something Einstein said a century ago. But please, ignore all that and keep focusing on my blunt language. • He’s attempting to overturn the science behind radiative heat transfer, which demonstrably works in practical situations. The onus is on him to show that his formulation correctly explains observations without invoking radiation from the cooler body. So far he has avoided doing so. 70. Well crackpot or not, I at least base my statements on Maxwell’s equations. Without equations anything is possible to say, but physics obeys equations, not wishful commands in words. • Claes, Equations are meaningful only when they are applied correctly. Anybody can write equations and make claims (and on these issues very many have indeed done that, claiming almost anything imaginable), but without a proper connection to the physical situation this is meaningless. Explanations that you have presented in your texts are almost always obscure, sometimes simply contradictory. With good will and some favorable interpretation some of them can be found to be correct, but in many cases this is not possible. Therefore your use of equations does not prove anything. • But you don’t base it on Maxwell’s equations. You say that the electric field is \ddot u, the acceleration of an oscillating charge. But J C Maxwell said that the electric field is determined by div E = Q, i.e. by the charge distribution (Gauss’s law). Maxwell’s equations don’t appear at all in your chapter. • The 1d wave equation I use is a model for Maxwell, which captures a good deal of the essence. Planck used this model in his derivation of his law, but “in despair” resorted to statistics. • You really need the Maxwell-Bloch equations, which weren’t available when Planck was around. You have to account for the sources of the EM radiation. The wave equation on its own won’t do the trick because we know we have sources in the atmosphere, as confirmed by over a hundred years of lab experiments. The fact that you are not considering sources is likely why the solution you find is ‘unstable’. Have you calculated the macroscopic polarization of the atmosphere under steady-state conditions? If not, I’d say you don’t know what you’re doing. …in fact, I may say it in either case. 71. Claes, that is nonsense. Equations are shorthand for words. I’ve explained two concepts concisely.
If my words cannot be contradicted by observation or experiment, then no further words–or equations–are required. So again, there’s no “or not” about it. • @David N… Yes, “equations are shorthand for words”, except when words are in disagreement with science, I mean, real science. • Equations are still shorthand for words, even when those equations are in disagreement with science like Johnson’s are. If you have a specific objection to the science I’ve explained I’d love to hear it. 72. I’ve tried to organize the data, so here it goes:
Mean free path, whole mixed air (r = 14 km, wv = 0.04) = 20.78 m
Mean free path, water vapor at 0.04 (r = 14 km) = 8.05 m
Mean free path, whole column of carbon dioxide (r = 14 km) = 46.77 m
Total absorptivity of water vapor at 0.04 = 0.024
Total emissivity of water vapor at 0.04 = 0.0237
Total absorptivity of carbon dioxide at 0.0004, whole column = 0.0039
Total emissivity of carbon dioxide at 0.0004, whole column = 0.0039
Overlap water vapor/carbon dioxide, absorptivity = 0.024
Overlap water vapor/carbon dioxide, emissivity = 0.0235
These data are the basis of both thermodynamics and radiative heat transfer. However, it seems the advocates of the “downwelling” radiation heating up the surface and of the pretty-exaggerated “backradiation” don’t want to take them into account. • Thank you very much Nasif. • The emissivity and absorptivity of air at near-surface pressures are very high (close to unity) in the relevant wavelengths. Those are the wavelengths at which surface IR is absorbed and emitted by greenhouse gases – in particular, CO2 and H2O. Emissivity/absorptivity values outside of those wavelengths are irrelevant. To put it another way, the atmosphere is very far from being a black body in general (e.g., compared with a Planck distribution), but behaves like a black body in those parts of the spectrum where it is optically thick. This accounts for its strong greenhouse effects. • Fred, for having a near-1 emissivity, the air pressure would have to be at least twice the real pressure and its temperature should be about 5000 R. Attributing unreal, exaggerated physical characteristics to air is the “error” of the AGW idea. Almost all books on heat transfer and radiative heat transfer show the numbers I’m including here. You know what the purpose of those air bubbles in packing bags is, and my post shows why. • That’s not correct. The relevant emissivity is at specified wavelengths. For example, the atmosphere is optically dense at 15 um, such that almost all IR emitted from the surface is absorbed (and emitted isotropically) within a short distance above the surface (probably a few tens of meters or less). If the atmosphere had high absorptivities/emissivities at all wavelengths, the Earth would melt. The values at the specific wavelengths are what account for the strong upwelling and downwelling radiation within the atmosphere itself. These values are incorporated into the radiative transfer codes used to compute the effects of CO2 and H2O on the vertical temperature profile of the atmosphere. The computed results match observational data quite well. • Fred… The reference you give is for saturated water vapor. Sorry… the calculations from your article are not related to a real atmosphere. • Of course they are, and the article refers to an emissivity of unity at the absorption maximum for CO2. I believe you should revise the descriptions on your website to conform to real-world emissivities that are relevant to greenhouse effects.
Emissivities described without reference to wavelength are essentially meaningless in this regard. It is certainly acceptable to mention them, but after that, you should then cite the emissivities in the IR wavelengths of relevance to the greenhouse gas absorption spectra. These are very high. • No, they are not… Beginning with the fact that the formulas they applied are not the adequate formulas. The formulas I applied for my calculations are the same formulas derived from experimentation by many physicists. I suggest, Fred, that you visit the references I give on my website, because their authors support what I say in my article. Nowhere in this world, I mean the real world, do we have atmospheric water vapor at 0.04 with an emissivity of 0.7, nor a blackbody CO2. The numbers I provide are real, coming from experimentation; they are not the outcome of idealized systems. • Nasif, it is common knowledge that reflectivity, emissivity and absorptivity are all frequency-dependent parameters of any gas, liquid or solid. Given that these parameters are fundamentally based on quantum mechanics, and the structure of each gas, liquid or solid is different, these parameters HAVE to depend on frequency. That’s why the index of refraction of a material, from which ALL optical properties of a material can be derived, depends on frequency. You really don’t know what the hell you’re talking about, do you? • maxwell… I do know exactly what I’m talking about. You’re the one who doesn’t know that we live in a real world. Wavelengths and frequencies were taken into account in the experiments. Just answer a single question: Is CO2 a blackbody? • Nasif, you have to be more specific. Is a single CO2 molecule a blackbody? No, there aren’t enough degrees of freedom. Is a collection of CO2 molecules at a finite temperature a blackbody? Not exactly, because such a collection would not absorb at all frequencies, but it would be much closer to a blackbody. Furthermore, the sun is a collection of ionized nuclei floating under a sea of electrons, interacting in such a complex manner that we are basically mystified at what the sun does most of the time…but the output of that complex collection of atoms, nuclei, electrons and plasmas puts out a distribution of energy that looks a lot like a blackbody spectrum. There are gradations in these questions, and it is that understanding you have to neglect in order to get the result you want. I’d say that’s not science. • Then why has there been no warming for the last 15 years? Where is the “match?” Something must be “overshadowing” these effects (or they don’t exist). What? • What are you talking about? Help me out. What does “no warming” look like? • JCH: “What does ‘no warming’ look like?” I will help you out. “No warming” for the last 13 years looks like the following! Why get bogged down in greenhouse effect theory? It is like the debate on how many angels can dance on the head of a pin. Why not instead look at the data? Here is the global mean temperature anomaly (GMTA) and CO2 emission data.
Year / GMTA (deg C):
1998 0.529
1999 0.304
2000 0.278
2001 0.407
2002 0.454
2003 0.467
2004 0.444
2005 0.474
2006 0.425
2007 0.397
2008 0.329
2009 0.437
2010 0.468
From the above data, the average global mean temperature for the period since 1998, for 13 years, was flat at 0.4 deg C with zero trend for the period.
Year / CO2 emissions (G tons):
1998 6.6
1999 6.6
2000 6.7
2001 6.9
2002 7.0
2003 7.3
2004 7.7
2005 8.0
2006 8.2
2007 8.3
From the above data, total CO2 emission for the period from 1998 to 2007 was 73.3 G tons.
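(For anyone who would rather compute the quoted "zero trend" than argue about it, a minimal least-squares sketch using only the anomaly values listed above.)

# Ordinary least-squares trend of the 1998-2010 GMTA values quoted above (sketch).
years = list(range(1998, 2011))
gmta = [0.529, 0.304, 0.278, 0.407, 0.454, 0.467, 0.444,
        0.474, 0.425, 0.397, 0.329, 0.437, 0.468]
n = len(years)
xm, ym = sum(years) / n, sum(gmta) / n
slope = sum((x - xm) * (y - ym) for x, y in zip(years, gmta)) / sum((x - xm) ** 2 for x in years)
print(round(ym, 3), round(slope * 10, 3))   # mean ~0.42 degC; trend ~0.02 degC per decade over this short window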
According to the observation, the effect of emission of 73.3 G tons of CO2 into the atmosphere in the global mean temperature anomaly was Zero. Conclusion: Human emission of CO2 does not cause global warming. • Girma Can I ask, why 13 years? (ie, Why start in 1998?) Do these results hold for longer spans of time? What are the error bars on your figures? The confidence level of your conclusion? And, hey, 73.3 Gtons of something were dispersed arbitrarily into the air without limit, control or consent… Why does this not disturb you in and of itself? • Bart R Why does this not disturb you in and of itself? Thank you so much. I assume that you concede that the data does not support “catastrophic man made global warming?” First, let the scare mongering stop and then we will start taking about “limit, control or consent…” • Girma a) Where did I mention catastrophic or scary anything? Is this some explosion of the ‘disturb’ question to unintended proportions? b) This accusation of scaremongering you toss about so defamatorily and so easily when simply questioned about your statistical methods and opinion of the scale of the numbers you provide, how is it meant to be productive? c) Concede based on your arguments and presentation? Yours are some of the weakest and most transparently prejudiced claims I’ve seen on either side of the debate, frankly; while I’m going to go out on a limb and avere that I remain skeptical of many claims on both sides and find some claims on each side well worth studying, your claims in particular do not excite in me cause for concession. More.. the urge to return to my old teaching mode and attempt to address the errors in an energetic and interested scholar because they are so grating. You have elected a troublingly brief and complicated dataset to rest your ‘proof’ on. I am seeking the traditional guidance from you of an audience for an analyst. There is over a century of data you have excluded from your analysis. Please explain why. Your initial point is suspect due to extrinsic conditions many have observed about it before when used elsewhere by others, for example its known oceanic cyclic effects. Please explain how you meet these objections. Your dataset has, relative to its range, a large error bar; how do you handle this? Many others have done analyses on the same data. Please walk me through two or three such analyses from each side of the debate, other than your own, comparing your techniques and conclusions to theirs, so I may put your claims into context and judge your ability to critique on the subject. Or at least answer even one objection or criticism from any honest student of mathematics about why you so abuse statistics to subjugate so obviously to your preconceived opinions? • > Why start in 1998? Because by taking that year Girma can claiming that the trend is slowing down since that specific year. And since the acceleration of warming has stopped, warming has stopped. In case you believe my interpretation is wrong, see for yourself: • @maxwell… No, there is no gradations… and the answer is only “one”, it is yes or it is “not”… Whatever the level you chose; I repeat, is the CO2 a blackbody, yes or not? From your answer, I assume you’re saying it’s not; consequently, physicists and I are right on our numbers, you and those authors are wrong. • Fred, You have to demonstrate the argument is wrong through your own calculations. Blah, blah, blah doesn’t work here. Thanks… 73. 
In support of alternative perspectives, and Claes Johnson, I have posted a technical note explaining why we should be sceptical of the Stefan-Boltzmann equation.
• Oh god, when will it stop?
• When you accept yours is pseudoscience… fabricated science.
• My science is such pseudoscience that the operation of all of the electronics you use on a daily basis is based upon it. What, of what you use on a daily basis, is based on your formalism? Or Johnson’s? The answer continues to stare you in the face, yet you continue to shut your eyes, muttering the same garbage over and over again.
• Maxwell, maxwell… my science is found in scientific literature everywhere. Yours is based only on the internet.
• N N One of the first computer programs I wrote as a student relied on data from a table in a textbook. It was a total failure. Try as I might, I could not make the program work. No matter what I did, the computer’s arithmetic logic came to a different conclusion than this table, which had been used for many years and reproduced in scientific literature over and over again, so must be right. I mean, who would trust logic over a published authority?
• Well, it will not stop until the warmers, lukewarmers, almost warmers, etc. can find some EMPIRICAL EVIDENCE for an “atmospheric greenhouse effect.” That means at the very least a CORRELATION between GHGs and temperature. There is NO SCIENCE HERE! No falsifiable hypothesis. No data, only models. NOTHING! Radiation cartoons and GCMs are not empirical evidence. Nor are endless IR spectra. Nor are temperature measurements (which are going the wrong way, so sorry to report this to the true believers). Nor are endless arguments among physicists, as is occurring here. We need clear EMPIRICAL evidence of an increased warming due to CO2. It has not been forthcoming, and probably will not be in the future. We cannot even explain the RWP, Cold Dark Ages, MWP, LIA, let alone the Modern Warm Period (if there really is one after considering all the disgusting “adjustments” made by NASA and CRU!). Back to the drawing boards, you warmistas! The king has no clothes on!
• How about T_earth >> T_moon?
• JAE Please assist me in addressing your request by providing examples of the empirical evidence and standards you used to accept or reject (validate) this evidence for “..the RWP, Cold Dark Ages, MWP, LIA,..” As your screed stands now, it either uses a new definition of the word ‘clear,’ or is self-falsifying.
• Bart: First, you don’t even need data; you can just read the history books. But if you need data, you can look here for the MWP: Here for the LIA: and so on (see the subject index here:
When looking at the proxy candidates proposed by the various reports collated by these biased interpreters, one finds none within one sigma of that of the present temperature record, and most appear to be of the same CI or lower of commonly proposed evidence for the opposite of the interpreters’ claims. In the case of multiple conflicting poor reliability datasets, and a readily available group of better reliability datasets, what reasonable person would do as you have urged and reject the better reliability data to embrace the worst ones? You make an argument absurd and self-falsifying within one step. • Your post is a stream of consciousness like meander through several aspects of thermodynamics, an incorrect assessment of solid state physics and engineering tidbits…but there is nothing in it all that raised my eyebrows with respect to the Stefan-Boltzmann law. I mean, you state that it’s not applicable in all situations, which I admit, is a shocker. But everything else is basically innuendo. Most entertaining was the statement, ‘Heat transfer by radiation is really only applicable to a vacuum.’ Apparently you’ve never sat by a fire or a space heater. You’re really missing out. • I’m just glad you filed that under news and opinion, and not science. • Jen: thanks for that link. Maybe, hopefully, it will cause some of the know-it-alls out there to review the facts. Maybe not, since there are so many that are so committed to their “facts” that they cannot objectively look at alternative “facts.” Until we have some kind of clear evidence of a CAGW problem, we should not let a bunch of crazed environmentalists ruin our world. If those folks did not smoke so much pot, I would trust them more :-) • JAE, Why don’t you give us an overview of the ‘facts’? I would be very eager to hear what they are if they are not what I have learned over the course of a decade doing physical science research. • JAE: “……we should not let a bunch of crazed environmentalists ruin our world.” There’s plenty enough to be skeptical about without having to denounce the basic science of the greenhouse effect. I feel that you are very much in the minority of the denizens (who are predominantly skeptical after all). In itself, that’s nothing to worry about but it might give you some pause for thought and a more careful consideration of what is being said. • We should also be skeptical of gravity. After all, that flying bird is clear evidence that it can’t be true. Instead of the above, let us look at the data. Total human emission of CO2 for the period from 1910 to 1940 was 30.21 G ton. Total human emission of CO2 for the period from 1998 to 2010 was 73.32 G ton. When human emission of CO2 was 30.21 G ton, the global warming rate for the period from 1910 to 1940 was 0.15 deg per decade. When human emission of CO2 was 73.32 G ton, the global warming rate for the period from 1998 to 2010 was zero. According to the data, human emission of CO2 does not have any effect on the global mean temperature. It is pure waste of time to debate about how many angels can dance on the head of a pin. Instead let us heed the observed data and reject the theory of man-made global warming. Girma Orssengo • Plot the graph from 1996 to 2010 and you get a completely different answer don’t you, but you already know that don’t you. • When you calculate the slope (trend in our case) of a profile, the start and end points must be between successive local maximum and minimum. As shown in the following data, about year 2000, was a local maximum. 
So year 2000 is the start point for the calculation of the trend for the current cooling phase. Also, for your future trend calculations, here are the start and end points: years 1880, 1910, 1940, 1970 & 2000. And hopefully 2030!
• sez who? Perhaps the best evidence for global warming is the extraordinary lengths to which skeptics have to go in redefining science. The reason the temperature has those peaks and troughs is because of the ocean oscillations. This is spelled out with a closed-form formula for the violet curve in this graph, which closely tracks the actual temperature. The terms in the formula for the violet curve are based on the observed behavior of the ocean oscillations along with the Arrhenius formula for temperature as a function of CO2, and the Hofmann formula for CO2 as a function of time. Fitting trend lines the way you’re doing it is simply playing with meaningless straight lines that make no attempt whatsoever to understand what causes the temperature to behave the way it does. And hopefully 2030! Your naively drawn trend lines may well say that, but a more careful analysis suggests that 2030 should turn out to be around 0.4 °C hotter than 2010.
• Vaughan Pratt Science is based on validated theory. Your theory says “2030 should turn out to be around 0.4 °C hotter than 2010” (0.2 deg C warming per decade). I, a vehement and proud man-made global warming denier, say “2030 should turn out to be around 0.3 °C COLDER than 2010”. How about this: If your projections are closer to the observation, we accept man-made global warming. If the skeptics’ projection of slight cooling is closer to the observation, we reject man-made global warming. What is wrong with advocating the acceptance or rejection of a theory only after comparing projections with observations?
Nothing whatsoever. You fully expect the temperature to go down over the next 20 years. I fully expect it to go up over that period. If in 20 years’ time the temperature has gone down by 0.3 °C with no other evident cause than simply the failure of the global warming theory, then I would join you in rejecting the theory of global warming. An example of an “evident cause” would be an asteroid or giant meteor or megavolcano throwing up so much dust or ash as to blot out the Sun long enough to greatly lower the temperature relative to what science projected. If that happened then it would be unfair to reject the theory of global warming on that ground.
• Vaughan Pratt If you are given the following global mean temperature pattern, what would be your projection for the next 20 years?
I don’t base projections on data alone, and I don’t base them on theory alone either. Until the data and the theory support each other, at least one of them must be wrong. Until you’ve figured out which of the theory and the data is to blame for the discrepancy, you can’t trust either, and therefore projections from either are meaningless. In the case of the red curve you ask about, what’s the theory behind compressing, detrending, and offsetting? Without knowing that, it’s impossible to predict what the next 20 years of it would look like. It’s like asking what’s the next number in the sequence 3, 5, 7. If you said 9 I’d say “wrong, the next odd prime is 11.” But if you said 11 I’d say “wrong, the next odd number is 9.” You can’t make reliable projections from raw data without an underlying theory.
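To make the "underlying theory" concrete, here is a schematic of the kind of closed-form model being described: two slow ocean-oscillation sinusoids plus an Arrhenius-style logarithmic CO2 term driven by a Hofmann-style raised exponential for CO2. The parameter values below are illustrative placeholders, not the actual fitted coefficients of the violet curve.

```python
import math

def co2_ppm(year):
    # Hofmann-style "raised exponential": preindustrial base plus an
    # exponentially growing anthropogenic part (placeholder constants).
    return 280.0 + 1.5 * math.exp((year - 1790.0) / 60.0)

def model_temp(year):
    # Arrhenius-style logarithmic CO2 term (placeholder 2.0 deg C per doubling).
    co2_term = 2.0 * math.log2(co2_ppm(year) / 280.0)
    # Two multidecadal ocean oscillations (placeholder amplitudes, periods, phases).
    amo = 0.10 * math.sin(2 * math.pi * (year - 1935.0) / 65.0)
    pdo = 0.08 * math.sin(2 * math.pi * (year - 1940.0) / 75.0)
    return co2_term + amo + pdo

for y in (1910, 1940, 1970, 2000, 2010, 2030):
    print(y, round(model_temp(y), 2))
```

With fitted rather than placeholder constants, this is the sort of curve that can be compared against the temperature record and used for projections like the 2030 figure quoted above.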
Correlations may give some idea, but they’re trumped by theories of what’s causing what that are consistent with those correlations, like the competing odd-prime and odd-number theories of the data 3, 5, 7. In the case of the HADCRUT3 data, there exists a theory that is in good accord with the data. I therefore don’t have to trust either one of them by themselves. The black curve labeled Residue accomplishes two things. Because it did not stray far from zero over the last 160 years, it tells me that the theory (the violet curve) is in good accord with the data (the red curve, smoothed to remove short-term events that are irrelevant to long-term global warming). But because it does fluctuate by up to 0.05 °C, with rare excursions beyond that, it also tells me not to place too much trust in the violet curve in placing bets on the future, since it may be off by 0.05 °C and I’d lose that particular bet. Long term, however, as long as the black curve continues to behave as it’s done for 160 years, I would end up ahead if I used the violet curve as a betting guide. And it is at least plausible that the black curve will stay roughly within those limits for at least the next 20 years. It’s certainly more plausible than any of the many straight lines you seem to think count as a model. None of the physical phenomena contributing to long-term global temperature are well modeled by straight lines, so there is no point trying to make straight lines fit somehow to the data. The ocean oscillations are reasonably well modeled as sinusoids (that’s why they call them oscillations) while the CO2 contribution is a “log of raised exponential” shape, as per the theories of, respectively, Arrhenius and Hofmann. Your “trend-line” theory of climate is a Procrustean bed.
Detrending is done to clearly see the oscillation component of the global mean temperature. The linear warming of 0.6 deg C per century removed by the detrending can be added back later to the oscillation component. As a result, the detrending does not remove anything from the global mean temperature. Compressing removes short-term noise from the data. Offsetting is just a translation and does not change the shape of the data. Again, the following global mean temperature pattern was valid for the last 130 years, and it is reasonable to assume the pattern will be valid for the next 20 years. This means that we will have global cooling until 2030. Vaughan, by the way, this issue has been discussed in PRIVATE by the team: 1) “Be awkward if we went through a early 1940s type swing!” 2) “I think we have been too readily explaining the slow changes over past decade as a result of variability–that explanation is wearing thin.” 3) “The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant.” [This statement was made 5 years ago and the global warming rate still is zero]
• No one doubts the existence of global warming or global cooling. Both have been occurring for billions of years. What many doubt is the absurdity of catastrophic man-made global warming (still unproven) triggered by minuscule emissions of a life-giving atmospheric plant food.
• In apolitical science, when a theory fails even once it is rejected. Why not man-made global warming?
• Girma: Total human emission of CO2 for the period from 1910 to 1940 was 30.21 G ton. Total human emission of CO2 for the period from 1998 to 2010 was 73.32 G ton.
Anyone familiar with atmospheric CO2 can see at a glance that these numbers are way too low. Your claimed 30.21 G ton of CO2 for 1910-1940 is actually 110.77 G ton. And your claimed 73.32 G ton of CO2 for 1998-2010 is impossibly low; even just for 2010 alone it was 30 G ton! The correct figure for 1998-2010 is 356 G ton. You’re as far off as Ferenc Miskolczi when he got a figure for the kinetic energy of the atmosphere that was a factor of five too low. When arguing against global warming you may find it safer to avoid arguments with numbers in them. You and numbers don’t seem to get along very well. Your understanding of climate is also nonexistent. You appear to be unaware of both the 56-year Atlantic Multidecadal Oscillation and the 75-year or so Pacific Decadal Oscillation, which have drifted into phase in the past century to create large swings in global temperature that pretty much completely masked the CO2 warming prior to 1980. Only after 1980 did CO2 reach a sufficiently high level to start sticking out like a sore thumb.
• Vaughan, I think the issue is that you are talking about CO2 weight and Girma is talking about carbon weight and unwittingly wrote CO2. I freaked when I saw your 30 GT figure and looked it up. I am used to the 8 GT figure also, but it is for C, not CO2!! Apparently this is ubiquitous, and 1 m2 of atmosphere weighs about 10000 kg (10 tonnes).
• Apparently this is ubiquitous: “and 1 m2 of atmosphere weighs about 10000 kg (10 tonnes).” I’m not sure what’s troubling you here. Earth’s atmosphere has a mass of 5.148 × 10^15 tonnes, and the area of the Earth is 0.510 × 10^15 m2. Hence per square meter the atmosphere weighs about 10 tonnes. If you think this arithmetic is wrong then please supply the correct answer. This quantity, 10 tonnes/m2, is needed in computing how long it takes 0.53 kW/m2 to raise the temperature of the atmosphere by one degree.
• Due to the 3D Navier-Stokes problem, the “state of the art” GCMs substitute the hydrostatic approximation for the third full Navier-Stokes equation. The trade-off for this is idealized geometry, i.e. equal meridians of longitude and latitude and a surface area of 500 million km2. Hence your skeptical calculations must be incorrect.
• Hence your skeptical calculations must be incorrect. I didn’t understand all that, but in any event you still haven’t said what the correct answer is. A pressure of 1 bar or 1000 hectopascals (= millibars) is exactly ten tonnes/m2. The actual atmospheric pressure at the surface of the Earth is generally reasonably close to 1 bar, and varies somewhat, making ten tonnes a reasonable figure for the mass of a square meter of atmosphere. I truly don’t understand what you’re complaining about; all we’re trying to do here is estimate about how long it will take 0.53 kW/m2 to raise the temperature of the atmosphere by one degree. I get a little over 5 hours. If you get something different then tell us what you get instead of just complaining about what you believe to be errors.
• kuhnkat Yes, it is G tons of carbon (not CO2):
Global CO2 Emissions from Fossil-Fuel Burning, Cement Manufacture, and Gas Flaring: 1751-2007 (June 8, 2010). Source: Tom Boden and Gregg Marland, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6335. All emission estimates are expressed in million metric tons of carbon.
• Restating 1998-2010 for carbon instead of CO2 reduces the 356 figure to 97 GtC.
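The carbon-to-CO2 restatement is just the molar-mass ratio of CO2 to C (about 44/12). A quick sanity check of the figures quoted above:

```python
# Convert between gigatonnes of CO2 and gigatonnes of carbon (molar masses ~44 and ~12).
ratio = 44.01 / 12.011          # ~3.66 t of CO2 per t of C

print(356 / ratio)              # 356 GtCO2   -> ~97 GtC   (1998-2010)
print(30 / ratio)               # 30 GtCO2/yr -> ~8.2 GtC/yr (roughly the 2010 figure)
print(73.3 * ratio)             # 73.3 GtC    -> ~269 GtCO2
```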
The number 73.32 GtC is for the period 1998-2007. In any event, not taking ocean oscillations into account is guaranteed to give strange results. The combined amplitude of the two ocean oscillations during 1910-1940 is around 0.12 °C (so a total swing from trough to peak of 0.24 °C), which accounts for more than half the swing from 1910 to 1940. Here’s a budget for each of the following two 30-year periods, based on the formulas and graphs given here.

      1910-1940   1970-2000
AMO   0.24 °C     0.15 °C
CO2   0.10 °C     0.32 °C
AER   0.08 °C     0 °C
TOT   0.42 °C     0.47 °C
GtC   30.21       172.14

AMO denotes the temperature rise caused by the two ocean oscillations. CO2 denotes the temperature rise caused by GtC emission. AER denotes aerosol- or other-induced cooling or warming, typically with warming resulting when there are fewer volcanoes. TOT is the sum of these three temperature rises. GtC is the gigatonnes of carbon emitted during each of those 30-year periods. The oscillations are drifting out of phase and therefore their sum is weakening. GtC is increasing and therefore so is CO2-induced warming. Aerosol/other cooling/warming seems to be rather random. The upshot is that the total temperature rise for each period is somewhat similar, 0.42 °C vs. 0.47 °C. Note that the GtC had to rise by nearly a factor of 6 to get the CO2-induced temperature rise to merely triple. That’s the Arrhenius logarithmic law at work.
• Vaughan Pratt Please help me by calculating the global warming rates for the period from 1910 to 1940 and for the period from 1970 to 2000. What are they? Can you please post them?
• See the budget immediately above. The relevant formulas for ocean oscillations and CO2-induced warming are attached to the graph. The aerosols-and-other term is everything not otherwise accounted for, which seems to have at least some correlation with volcanoes and perhaps aerosols produced during WW2 in blowing up entire cities.
• So you don’t dare post the warming rates for the period from 1910 to 1940 and for the period from 1970 to 2000? I thought you claimed you are good at numbers! What are those two numbers? Please, no obfuscation. This is the claim of the IPCC: 0.2 deg C of warming per decade. The current decadal global warming rate is less than that; it is even less than the 0.1 deg C per decade projected for the case where CO2 emission had been held constant at the 2000 level. What is the current decadal global warming rate? It is only 0.03 deg C per decade. The current decadal global warming rate is 1/6th of the IPCC projection. As a result, the IPCC’s exaggeration factor is about 6!
• Vaughan Pratt Thank you for that. That is close enough, but the actual values are 0.46 and 0.48 deg C. So you agree with the statement that the global warming from 1970 to 2000 is “similar” in magnitude and duration to that from 1910 to 1940. As a result, the recent warming is not unprecedented or anomalous. As this is what the data says, why the alarmism of man-made global warming, when warming of similar magnitude and duration has happened before, naturally?
• (Sorry about the delay here, I was out of the country for 2 weeks.) I answered this here. The bottom line is that while the two numbers are almost the same, as you point out, they have very different causes. Whereas the first is mainly due to the upswing of a natural ocean oscillation, the second is mainly due to the exponentially increasing anthropogenic component of CO2.
The reason to be concerned about the latter and not the former is because oscillations average out to their center line whereas exponential growth keeps increasing. The oscillations have been going on for centuries, whereas exponentially growing CO2 is an entirely unprecedented phenomenon. 75. Anyone wanting to understand the laws of thermodynamics needs to visit this link: [audio src="" /] 76. Go Claes! So far I have not read anything in the comments that would force you to review your math. • I just finished reading a tome on quantum mechanics. (no I can’t do the math, popularization by a physicist) Based on my reading of his statements anyone trying to argue quantum mechanics better get out the Ouija Board!! If Claes actually understands it well enough to use mathematics rather than statistics, few are going to approach it. 77. Well, I don’t understand Claes well enough to even comment, so I’m trying my best to shut up. I’m just a chemist. Color me dumb, if you will. But I have a strong hunch that there are a lot of know-it-all blowhards, here , and most especially at Lucia’s site, who also don’t understand him, but won’t admit it and are joining in the mob-mentality of bashing him, without ANY frigging facts! I will bet anyone a six-pack that the “opponents” are nothing but bare-assed progressives. • Is this what is causing that horrendous snowstorm back east? Everyone warming up their arm waving?? 78. Will | February 1, 2011 at 8:52 am | Phil you are silly. That certainly is controversial, gas molecules without a permanent dipole can not absorb or emit IR, basic physics of gases. It is a factual statement unlike your misrepresentation, as can be verified at any text on Molecular Spectroscopy (e.g. Harris & Bertolucci, Herzberg ). That would be you, I’ve had occasion to correct several misrepresentations by you in this thread. • Phil: You fail to appreciate the distinction between correction and contradiction. Are you familiar with temporary dipole moments ? 79. Phil, that is a misapprehension I am under also. I was under impression that everything emitted IR also based on temperature. You are talking about GHG’g bond IR which is a different thing. Can you refer me to some physics text that will assure me that what you are saying is true?? My understanding is that larger molecules have more transitions so do have wider bandwidths for their planck energy than gas molecules. That does not completely remove the idea that gas atoms may emit also, only that their bandwidth will be limited. • kuhnkat | February 1, 2011 at 10:55 pm | Reply To absorb/emit in the IR a gas molecule needs a dipole, homonuclear diatomic molecules don’t possess a dipole and therefore can’t do so, N2 & O2 good examples and also monatomics like Ar. A linear, symmetrical triatomic like CO2 doesn’t have a permanent dipole but because it has a vibrational mode which bends the molecule it has a temporary dipole which does allow it to absorb/emit. Isotopologue molecules like N^14N^15 will show some very weak lines (~10^6 times weaker than CO2). You can read about it in any text on Molecular Spectroscopy such as the ones I referred to above: Herzberg (the bible), Harris & Bertolucci, Barrow etc. For an on-line version try they state “We conclude that a homonuclear molecule in the ground electronic state does not emit purely rotational or vibrational spectra by dipole radiation.” 80. The only way that si well developed and known to work includes back radiation. 
I should work harder at promoting my theory based solely on those photons that escape from Earth to space. It neatly finesses the complexity of back radiation. The main reason it can do this is that we know what the temperature of the atmosphere is at all altitudes and therefore do not need to calculate how back radiation influences temperature; we just need to know it’s retained in the atmosphere as opposed to escaping to space. In any event heating the atmosphere requires about 10 megajoules/m2 to raise the temperature of the atmosphere 1 °C, since the constant-pressure specific heat of air is 1.01 kilojoules/kg/deg and 1 m2 of atmosphere weighs about 10000 kg (10 tonnes). Even if we ignore the 0.531 kW/m2 flowing out of the atmosphere (333+169+30 W/m2), the 0.532 kW/m2 flowing into it (356+78+80+17 W/m2) will take 1.01*10000/.53 = 18800 seconds or over 5 hours to make a one-degree difference in a column that can easily vary by 70 °C from ground to tropopause. Taking into account the 0.531 kW/m2 flowing back out means that the changes are even slower. (Notice their difference of 1 W/m2 or 0.001 kW/m2 representing the non-equilibrium condition responsible for actual global warming.) Temperature variation in the atmosphere due to lapse rates of 5-9 °C/km therefore completely dwarfs anything that GHG heating can do in the available time to change the temperature of the atmosphere. Global warming certainly heats the atmosphere, but it does it very very slowly, taking days or weeks to make any appreciable change. This is a big part of the reason why one can completely ignore back radiation and work solely with the photons that escape to space, which ultimately is Earth’s only way of keeping cool. The photons that leave are gone within microseconds, those that don’t leave take days or weeks to have any appreciable impact at all on the temperature of the atmosphere, which therefore is more than adequately modeled by the adiabatic lapse rates alone without worrying about the impact of back radiation on temperature across the whole column of atmosphere. • Vaughan, I was referring to the state where the basic physics has developed. By basic physics I mean things like quantum mechanics and theory of electromagnetic radiation. General relativity is also part of that although not needed here. Classical thermodynamics is a bit different, as it does not go into the actual physical processes but expresses laws that apply to macroscopic bodies. Thermodynamics is also different in the way that it can be derived from what I classified as basic physics, but not vice versa. There is very much physics that describes macroscopic phenomena, but cannot be fully derived from basic physics, while physicists believe that it could be, if we were more capable in calculating the consequences of basic physics. The description of the atmosphere contains much that can be derived from basic physics, but also much that cannot. The subject of this chain is more or less in the first class. Claes Johnson presents ideas and calculations that are clearly in the realm of basic physics. Part of what he presents is in agreement with well established and tested physics, part is clearly in contradiction. What makes his presentations a bit difficult to handle is that the second of them contains also parts, which might be correct or not, as they are presented with little justification, but they might indeed be related to an alternative way of describing valid physics. 
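For reference, a minimal sketch of the heat-capacity arithmetic quoted a few comments up (the ~10 MJ/m2 per degree and the roughly five-hour figure), using the same numbers given there:

```python
# How long does ~0.53 kW/m2 take to warm a 1 m2 column of air by 1 degree C?
cp_air = 1.01e3        # J per kg per K (constant-pressure specific heat of air)
mass_column = 1.0e4    # kg of atmosphere above 1 m2 (~10 tonnes, i.e. ~1 bar)
flux = 530.0           # W/m2, the gross flow quoted above

energy_per_deg = cp_air * mass_column           # ~1.0e7 J/m2 per degree
seconds = energy_per_deg / flux                 # ~1.9e4 s
print(energy_per_deg, seconds, seconds / 3600)  # ~10 MJ/m2, ~19000 s, ~5.3 hours
```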
The idea of using classical electrodynamics (Maxwell’s equations) further than is usually done is not crazy. It might even work, but that would require much more research and the conclusions would certainly not be those presented in “Slaying a Greenhouse Dragon”. • If there is anything at all in this line of reasoning (it still needs to explain observed radiative fluxes), Johnson needs to submit this to a peer reviewed physics journal, and not publish in a politically motivated book that is full of crankology. His argument at present is incorrect and incomplete. • JC: ” a politically motivated book that is full of crankology.” Wow-such insults!! Judy, please explain to all what my political motivations are? I can confirm several of the book’s authors, like me, hold explicitly democrat/socialist affiliations. Also please tell us where the “crankology” is in the book? If the insults are as a result of your buddies doing so badly perhaps its now time for you to call in your A-Team – those “undergrad physics or atmospheric science majors at Georgia Tech” who you assured us at the outset would soon prove Claes a crackpot. • John, if you are only in this for the science, you are backing some very strange and lame horses. Apologies for being fooled regarding your political affiliation: I was fooled by your engagement with Canada Free Press – billed as an online conservative newspaper- and National Review. • Dr. Curry, do physics journals publish discourse like this? I thought they only published specific research results. This is more like a monograph, is it not? • Something like this could in principle be published in a physics journal as a theoretical development. But a more coherent argument is needed, cleaned up math, and it has to be correct and provide new insights. This seems to fail on all these fronts. I suspect that this would be published in a journal like E&E, which doesn’t even merit an impact rating by webofscience. • I ask because not being in a reviewed journal is a common defense of AGW. Much of the skeptical literature is analytical, not primary research. • Lots of different kinds of research get published, including critiques of other people’s papers. Getting published is a pretty low bar; of course getting published in a high impact journal has a high bar. E.g. I’m sure Johnson could get his paper published, just not in a physics journal or in any journal that is on the map in terms of impact factor • That high bar for the high impact journals appears to be quite variable. 81. I have made a comment to Judy’s claim that I am a “crank” if I don’t believe in “downwelling IR-flux from the atmosphere” or “backradiation”, on my blog • While I try (emphasis on “try”) to avoid labeling people, I’d say that the following by you at your blog is pretty badly wrong. I’ll let Judge Judy decide whether that makes you something less than a Nobel-prize-winning scientist. Is it correct to use SB in the form Q = sigma T^4? No, because this law gives the radiated energy from a blackbody into an environment of 0 K. But the Earth surface is not at 0 K, but even warmer than the atmospheric emitter. The translation Q = sigma T^4 is thus incorrect in the sense that it indicates a fictitious “downwelling IR flux from the atmosphere” obtained by an erronous translation. This isn’t how Stefan’s law works. Radiation from any source does not decrease merely because what it is radiating into is radiating back to it even more strongly. It is completely independent of what it is radiating into. 
I can’t think of anything to add to that. • Vaughan, I hope I am not causing additional confusion by what follows. The radiation can be looked at in the standard way that you and more all less everybody uses with visible light and IR, but almost nobody with low frequency radio waves. The radiation can, however, be analyzed as is done with the radio waves using Maxwell equations. In that approach the matter interacts with electromagnetic field and the field follows Maxwell’s equations. The gas is handled as media where the field propagates. The interaction with CO2 molecules influences the properties of this media and those modifications can be presented in the coefficients of Maxwell’s equations. This influence must represent correctly in accordance with quantum mechanics the vibrational properties of the molecules. In this approach the whole concept of photon may be superfluous. Building up this theory without violation with quantum mechanics may be difficult, but probably not impossible. There is even a change that, what Claes Johnson has done is a partially correct step in this direction. That is the reason for my way of presenting criticism and objections to what many others have written. In this approach the back radiation is replaced by some properties of the media which lead to a similar change in net radiative flux. In this approach there is no back radiation but a reduction in “forward radiation”. • I am glad Pekka that you understand what I am saying. Many other show zero absorbitivity. • Claes, Perhaps I understand something, but that makes me to repeat, that the question is about an alternative formulation that must give the same results than other correct formulations. I have great trust in the conventional formulation. Therefore your analysis does not make me doubt the conventional results. Including this kind of incomplete analysis in a politically motivated book attacking on correct science does not give value for your work. The other problem with your writings is that they contain so many sentences, which are either clearly in error or so obscure that nobody can tell, what you want to say. • Pekka, thank you for this summary, I concur. • I am not attacking correct science: I prove Planck’s law using a deterministic wave-model. My wave model does not support any idea of “backradiation”. What is incorrect with this? • Claes, I believe I have stated in many of my messages, why I do not have much confidence in your results in spite of the fact that I accept the general approach. • But your model does not support heating by radiation that are at a lower freq than some cut-off frequency linked to the receiver T. This is a fundamental difference with classic theory. How do you explain microwaves within your framework? They seem to very efficiently heat stuff at room temperature with very low frequency waves (2.4 GHz, 10 cm wavelength usually). If I get your theory right, such feat should not be possible because the cut-off at room T is much much higher than 2.4 GHz. Or are you willing to invoque some coherence argument because the magnetron emit maser-like radiation instead of thermal stuff? Does not look too solid to me, and I am affraid you are getting dangerously close to experimental invalidation of your theory ;-) • kai, Was this directed to me? The general description about a potential new approach has no problems with that. 
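To make the bookkeeping point concrete, here is a minimal numeric sketch (the temperatures are illustrative, and the sky is treated as a single effective blackbody emitter, which real band-emitting gases are not): whether one counts two gross flows, up and down, or a single net "reduction in forward radiation", the surface loses the same energy either way.

```python
# Gross vs. net accounting of surface infrared exchange (illustrative numbers only).
sigma = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
T_surface = 288.0          # K, illustrative surface temperature
T_sky = 255.0              # K, illustrative effective emission temperature of the sky

up = sigma * T_surface**4        # ~390 W/m2 emitted by the surface
down = sigma * T_sky**4          # ~240 W/m2 "back radiation" in the two-flow picture
net = up - down                  # ~150 W/m2 net surface cooling in either picture

print(round(up), round(down), round(net))
```

The net number is the same in both descriptions, which is the point being made here; the disagreement in this thread is over how to describe the downward term, not over the size of the net flux.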
I cannot really answer on behalf of CJ, but I noticed that his statements about separating incoming and outgoing wavelengths were softened in formulas to the cutoff of the Planck distribution. This is one place were his text is worse than his formulas. There are many other examples and they make it impossible to judge, what he really thinks. • Sorry, I was adressed to C.J., I think as the author he is the most qualified to answer :-) But u are welcome to do it too, of course. • I speak about blackbody radiation, not forced EM heating, but I guess the principle is the same modulo the amplitude of the incoming waves. • Could you elaborate on this? What do you theory predict? possible heating, or all the incoming low freq radiation can not be absorbed (i.e. it is either reflected, diffracted, or plainly reflected)? If it is absorbed, I do not see how your theory significantly differ from classic theory, and all the fuss is about interpretation (still, I prefer classic interpretation, which offer a big advantage: the radiating body does not need to know the exterior temperature) If it is not, why does microwave oven obviously work? Because there is a fundamental difference if the incoming waves are coherent single frequency? Then your theory has another disadvantage: it is less general, because it works only for thermalized radiation, and does not allow independent freq treatment or any kind of superposition…. • No problem: it is included in my analysis. If the incoming forcing is stronger than blackbody, the absorbing chicken will heat up until its blackbody radiation (assuming equilibration so that all freq have the same temp) balances the incoming intensity. • Does your idea support the infrared emission by gases such as CO? if it does not, it is incorrect. There is a whole field of spectroscopy (chemists, mostly) that have demonstrated this empirically. • Pekka, my problem with this line of reasoning is that any theory of infrared radiation needs to explain what an IR spectrometer sees when pointed skyward in a cloudless atmosphere. This seems to have nothing to do with a reduction in “forward” radiation. • Judith, That is not a problem in the alternative description. In this description electromagnetic field is present everywhere and its development follows Maxwells’s equations taking into account the influence of the media (gas with some CO2). When a measuring device is brought to such electromagnetic field it interacts with the field in full accordance with the empirical results observed. The CO2 molecules influence the field and the field interacts with the detector. All this is familiar at longer wavelengths but it is true also for short wavelengths. The most practical way of performing the calculations varies, but the basic theory is the same. • Pekka, It seems to be a problem in Johnson’s description, he has not demonstrated that what HE has done can explain any observations, whereas the generally accepted theory explains the observations. You have been very generous in describing a hypothetical good alternative description, which is a very idealistic (and IMO unsupported) view of what Johnson has actually done. • Judith, I do not think that Johnson has done much of what I have described. I said only that there are some similarities and that the basic approach is not wrong. It could possibly be developed to provide many results in a rather simple way, but I doubt whether it would really add anything to the understanding of the atmosphere. 
If further research along these lines is to produce interesting science it is more in understanding quantum mechanics and quantum filed theory than in understanding the atmosphere. This is why I started my comment above by “I hope I am not causing additional confusion”. For a theoretical physicist this line of discussion has some similarities with the argumentation between “orthodox” climate scientists and their skeptical critics. Criticism should not be condemned because it is on unfamiliar lines, but it can be discounted when found erroneous or lacking all justification. • I understand your point and it is a good one, but I suspect John O’Sullivan and Ken Coffmann do not understand the nuances of your position. But I also suspect that your willingness to consider this approach provides credibility to your statements in the eyes of O’Sullivan and Coffmann. • Judith, The numerous possibilities of misunderstanding my position have led me to repeat many times that nothing in what I have written contradicts the conventional approach. It is only an alternative way of describing the same physics. By back radiation I understand radiation that originates in the atmosphere and hits the earth surface (or some other surface facing the sky near the earth surface). The radiation affects the net energy balance of this surface in the same way in both descriptions. In the first description the net flux is the difference between the energies of all outgoing (emitted) and incoming (back radiation) photons. In the second description the same net energy is transferred to the electromagnetic field, which then carries it further. • Pekka, I don’t really think those are two different approaches to this problem. Saying that one approaches uses photons while the other uses waves is just that, saying it. Physically those two processes are indistinguishable. One can go through the process of counting each photon for fun or one can solve the Maxwell-Bloch equations to find the wave equation with sources. It’s still the same physics, however. I think that’s the thing that is confusing here. It sounds like you’re saying there could be a physically distinct way of thinking about this problem that produces the same result (IR emission toward the surface by GHG’s), but supplies that result via a macroscopic vs. microscopic perspective, ie EM waves vs. photons. But our current physical understanding is that these two concepts are the same physical phenomena, just allowing us to interpret the results of different kinds of experiments differently. But even if you went for the EM wave approach, you’d have to do quantum mechanics to calculate the elements of the density matrix to correctly calculate the macroscopic polarization of the source of the EM waves. So those two approaches are not independent in any sense. They just look different to the untrained eye who is willing to blather on and on about Maxwell’s equations. If Dr. Johnson is proposing that there is some inherent physical difference because he’s solving Maxwell’s equations for macroscopic systems, he’s wrong. It’s still the same physical approach solving for the exact same physical quantities. In my opinion, he is not even solving the right Maxwell’s equations because he will neither confirm nor deny whether he has derived the wave equation with or without sources of EM waves. So I think to continue on this track that Dr. 
Johnson COULD be on to something important if it confirmed observations is hazardous, not only because he is fundamentally incorrect, but also because there would be nothing new about his deriving the macroscopic approach to the situation. It was done 50 year ago in spectroscopy. • Maxwell, By two different approaches I mean that the same fundamental equations are solved in very different ways. I have tried to repeat often enough that there is only one physical theory behind these two approaches, but the mathematical handling would be really very different. The standard approach at the deepest level starts with perturbation theoretical methods of quantum electrodynamics and uses immediately Feynman diagrams to help in writing the required lowest order terms. This is known to be a very good way of doing quantitative calculations. The results can easily be expressed in terms of emission and absorption of photons originating at some positions and ending up at some other. It includes also transitions between vibrational states of CO2 etc. The other approach has been used in laser physics where fields are relatively strong and coherence is an important factor as is stimulated emission, but not to my knowledge in situation corresponding to atmosphere. The extensive incoherence of separate transitions and almost full nonexistence of stimulated emission makes this approach rather useless for detailed analysis. My idea is that it might still be a potential alternative for handling atmospheric radiation. The approach would involve certain approximations not present in standard approach, but perhaps these approximations could be shown to be acceptable. The mathematics is definitely different, and the approach is therefore not a minor modification of the standard method. Still the aim would be to solve the same basic equations with the expectation that the results are the same. • maxwell, “Physically those two processes are indistinguishable.” I wonder why those great scientists fought back and forth so long over the two apparently separate sets of physical attributes that have been ad hoc merged by quantum mechanics there bud. • Pekka, thanks for the nuanced response. As far as the two approaches go toward a meaningful end, I think you would agree with me that there is not a mathematically or physically distinguishable difference between the perturbative approach using quantum mechanics (either quantizing the field or treating it classically) when applied to atmospheric spectroscopy and the implications of spectroscopy. There may be some high field limit in which, because of plasmas or other exotic forms of matter, there is some wave formalism that adequately describes what happens. But given that is a completely physically distinct situation, I think we would both agree, on the terms that Dr. Johnson is trying work, such a limit most likely does not matter. I could waste a couple paragraphs explaining how you are mistaken in your understanding of the current standing of modern physics, but it would be just that, a waste. • Maxwell, I am also sure your explanations would be a waste. • And the absence of CO2 in that analysis by Pekka speaks volumes as to where our book is coming from; the need for the IPCC’s politicization of a trace gas and to isolate CO2 in this process is superfluous, sinister and invalidated by Occam’s Razor. 
After a quarter of a century and $100 million spent trying to single out CO2 as a forcing agent we not only see no empirical evidence to substantiate the GHE we no longer see any correlation between temperatures and CO2 levels in the atmosphere (outside the period 1975-1998). • John O’Sullivan, The potential approach that I have described is influenced by CO2 to the same extent and from the point of view of physics in exactly the same way as standard theory. It is only formulated differently in doing the calculations. I believe that I was the first one to use the formulation “politically motivated attack on correct science” about your book in this chain. I have no reason to doubt the correctness of this formulation – or do you propose that it is published purely for the advancement of science? Judith predicted that you will not be able to understand, what I have written. Evidently she was right in that. • If you have read what Claes has said here: You would know that he has stated quite clearly that quote: ” an IR camera (infrared radiometer) directed to the sky measures the frequency of incoming light and computes by Wien’s displacement law the temperature T of the emitter, and then by Stefan-Boltzmann’s law Q = sigma T^4 associates a “downwelling IR-flux from the atmosphere” of size Q.” Perhaps addressing this statement would be a good place to start Judith. • This statement is absolutely incorrect, it if is referring to gases in the atmosphere. Gases emitting IR from the atmosphere are not black bodies (they emit in spectral bands), and hence the integral form of the Planck function (e.g. the Stefan Boltzmann) law is not relevant. Do you see why I despair of critiquing this stuff? It is a hydra monster, you clarify ten things, then you start over and have to clarify these things all over. • Judith, Clarify? Is that what you think you are doing? Sorry but I have not observed much clarity in your responses to me so far. The above statement by you is a case in point in my view. Thanks anyway. • Sorry but I have not observed much clarity in your responses to me so far. When a communication channel breaks down because the receiver can’t keep up with the transmitter, the blame could in principle be shared: the receiver for being too slow and the transmitter too fast. In practice one specifies a rate for the channel, and the fault then lies with whichever end is out of spec: the receiver for being too slow for the channel or the transmitter for being too fast for it. In this case Judith is both the transmitter and (as blog host) the specifier of the channel rate. So if you can’t keep up and she’s not willing to lower the rate just for you, I would suggest hanging out on blogs that are more your speed. A more precise statement might be that Stefan’s law is relevant only to the extent that the spectral bands are distributed reasonably uniformly across the bulk of Planck’s function at that temperature. The integral of Planck’s function is still Stefan’s law when you punch holes uniformly across the whole curve. It’s only when the spectral bands die away or stop abruptly, both being common behaviors, that Stefan’s law breaks down in any serious way. • Using Stefan-Boltzman to compute the amount of energy being emitted from CO2 is incorrect? • To rescue SB you have to use an emissivity/absorptivity is a strongly frequency dependent, essentially shoving the spectroscopy into that package. • Not really sure what you just told me. 
The previous comments made it seem like the issue was the narrow or very non-contiguous bandwidth of GHG’s and other non-planck emmisions? How would you rescue it? Why does everyone seem to use it to compute their favorite thought experiment energy transfer? • Although I’m way outside my league in terms of background in theorical physics, I’ll have a shot on this: if I’ve understood correctly, what has been written above about the alternative way of calculating the radiative energy in the form of radiowaves and Maxwell’s equations, this calculation is done to analyze the propagation of radiation and hence energy into certain direction; the source of radiation would be the surface of the earth, and the medium the athmosphere. What you see from the spectrometer pointed skyward is the another direction; the source is the atmosphere. • Anander, Maxwell’s equation describe electromagnetic field, which fills all space, but may vary strongly from place to place. When the wavelength is very short the equations lead to the result that the source (the emitting molecule) and the sink (the detector) must essentially have a free line-of-sight connection. Very short waves turn very weakly around corners, but this is a property of the solutions of Maxwell’s equations. On the other hand we do not observe individual CO2 molecules. According to the Copenhagen interpretation what is not observed is not determined. Thus there is no deed to handle the atmosphere as a collection of molecules with well defined positions but it can be handled as a field of molecules, which interacts with the EM field. This is the approach of quantum field theory. The standard description of what happens corresponds to the lowest degree perturbation theory described by the simplest possible Feynman diagrams that have the required interacting components. Perhaps I stop here. • Thanks Pekka, In your opinion – is it correct to say, that this approach (i.e. modeling the radiative transfer as fields and ultimately translating it into flux in/flux out) should and would lead, if correctly applied, to same results as the more standard way of doing it with fluxes (S-B)? And if, when using this alternative way to model the radiative transfer, one finds out that there is either none or very little downwelling ‘backradiation’ one has probably just made some error along the way rather than a scientific breakthrough? I admit I didn’t read the book either, and possibly wouldn’t understand (at least all of it anyway), but for me dismissing the effect of above zero-K gas above us is just insane — of course it matters. What is written here I get a feeling that is at least to some extent the central message out there. IMHO we can at least calculate a reasonable average ballpark figure for the strenght of the backradiation given the composition and distribution of different gases and their relative temperatures. The main source for my sceptism (and probably for many others, too) is how this translates into changes in global climate as the change (just the change in dw radiation) itself is anyway quite small. • Anander, You made the correct interpretation of what I mean. The standard description of atmospheric physics is valid and there is no need to modify it. If two alternative descriptions are presented for the same physical situation then either they agree or at least one is in error. In this case the standard description is based on very solid knowledge and is with great certainty valid. 
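On the earlier point about band emitters versus the full Stefan-Boltzmann integral, a rough numeric check may help. Integrating the Planck function at 288 K over a single band standing in for CO2's 15 micron band (the 588-770 cm^-1 range used here is only an illustrative choice) gives a small fraction of sigma*T^4, which is why a strongly frequency-dependent emissivity has to carry the spectroscopy:

```python
import math

h, c, k, sigma = 6.626e-34, 2.998e8, 1.381e-23, 5.67e-8
T = 288.0

def exitance_per_cm(nu_cm):
    """Hemispheric blackbody exitance per cm^-1 at wavenumber nu_cm (W m^-2 per cm^-1)."""
    nu = nu_cm * 100.0                                    # cm^-1 -> m^-1
    B = 2 * h * c**2 * nu**3 / (math.exp(h * c * nu / (k * T)) - 1.0)
    return math.pi * B * 100.0                            # per sr -> hemispheric, per m^-1 -> per cm^-1

band = sum(exitance_per_cm(nu) for nu in range(588, 770))  # crude 1 cm^-1 quadrature
total = sigma * T**4

print(round(band), round(total), round(band / total, 2))   # ~73 W/m2 vs ~390 W/m2, i.e. ~0.19
```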
If a new little studied alternative gets different results it is almost certainly in error. If the presentation is in addition incoherent and vague, the new results cannot be taken seriously even if there are some good ideas in the first steps of the new approach. All that I wrote above applies to that part of climate physics, where the basic equations can be applied in a straightforward way. The validity of the rest must be argued differently and taking advantage also of additional empirical observations. • randomengineer Backradiation exists but isn’t warming the surface. Isn’t this the crux of the entire argument? • The surface is warmer with the back radiation than without it, that is the whole crux of the greenhouse argument. • Judith may I just quote you in the interests of accuracy? “First, i have no idea what people mean when they say “back radiation”. • No, you may only include my quote in the context of the entire statement. If someone says backradiation does not exist, then I have no idea what they mean since some of the infrared emission from CO2 and water vapor in the atmosphere does travel back in the direction of the Earth’s surface. • Can you specify exactly how much of the re-emitted IR actually reaches the ground and how the probability of that changes with altitude and with regard to the curvature of the Earths surface. And how that reduction in probability by increased altitude of the re-absoption at the surface is factored in to the “greenhouse effect” hypothesis? Thank you. • That’s a great question. I stepped outside just now and measured the backradiation from directly above reaching my deck at 210 W/m2. About 60 W/m2 of that would like be due to water vapor (it’s a dry sunny day) and 150 W/m2 to other GHGs, in particular CO2. On cloudy days when it’s about to rain it’s more like 350 W/m2 (and that’s in winter at 37° N, you can expect a lot more at the equator). At least 250 W/m2 of that if not 300 would be from the clouds and associated water vapor, with only 50-100 W/m2 coming from other GHGs besides H2O. The reason there’s less from the CO2 is that the clouds are blocking more than half the downradiation from the CO2. The bottom line basically is that trying to figure out how much back-radiation reaches the surface is really complicated. And completely unnecessary since backradiation is an unnecessary distraction when trying to figure out how much any given increase in CO2 can warm the Earth. Backradiation is nothing more than an indication of the general temperature of the sky above you. That temperature varies hugely with altitude, cloud cover, etc. Here in Palo Alto in winter I’ve seen it change from 10 °C to −40 °C in a mere 12 hours. But given that the atmosphere gets colder with altitude, at a rate between 5 and 9 °C per km depending on how wet or dry the air is, even the huge fluctuations measured on the ground still don’t give the full picture of the thermal complexity of the atmosphere because you can’t measure lapse rate just from the ground. Trying to figure out the quantity and distribution of backradiation is tremendously complicated and a needless headache. It is much easier instead to work with Earth’s goal of constantly shedding 239 W/m2 from the top of the atmosphere, which is the amount of insolation it is absorbing. Increasing atmospheric CO2 acts to shut off progressively more CO2 absorption lines in the atmospheric window. 
At 288 K, the middle 50% of that window is from 468 to 993 cm⁻¹, and at 390 ppmv about 600 CO2 absorption lines in that region are closed. Each additional 2% increase in CO2 closes about 3 more lines. CO2 is currently increasing at about 0.5% a year, so every 16 months sees another line closed. By 2060 that rate will be down to 8 months a line. CO2 has 25,000 lines in that region so there is no danger of running out of lines to close. Closing does not happen abruptly but quite gradually with increasing CO2. I like to define a line as closed when it is blocking the escape route to space of more than half the radiation at that wavelength from the Earth’s surface. More commonly people use 1/e rather than 1/2 as the criterion but it’s somewhat arbitrary exactly where to draw that boundary; using either 1/2 or 1/e, about 3 lines close with each 2% increase in CO2. • Vaughan, so what??? CO2 absorbs, collides transferring energy depending on local imbalance, either loses or emits or depending on excitation state. Are you saying that closing lines prevents CO2 from emitting after gaining collisional energy and that H2O and other GHG’s can’t emit either? Actually it’s the opposite. A line is open when it is too weak to interact strongly with radiation at that wavelength, both by absorption and emission; it’s as though there were no CO2 at that wavelength. When a line closes, CO2 then both absorbs and emits at that wavelength. Other GHGs that don’t have a line at that wavelength don’t get involved with that CO2 line. From the point of view of radiation at that wavelength it’s as though those other GHGs weren’t there at all. The open CO2 lines are those that are not strong enough at the current CO2 level to block at least half the photons from going straight from the surface to space. Most of the open lines let almost all of their photons through to space. They close like a very slowly closing water tap: as they get near the closing point the flow slows and pretty much completely stops after a while. The choice of half as the dividing line between open and closed is arbitrary; climate scientists prefer 1/e rather than 1/2 as the dividing line. Unity optical thickness of any medium through which radiation passes is standardly defined as when 1/e of the photons are getting through that medium. Ordinarily it is defined separately at each wavelength: a medium may have optical thickness 1 at one wavelength while having 0.1 at another and 10 at yet another. • I see, so the energy ends up in the local area and convects up to where it will radiate to space like the energy from the normally closed lines. I keep wondering if we will ever see that hot spot with all those closed lines. Sort of. Radiation, convection, and conduction are all happening at all points, and the huge variability of clouds and water vapor, and the diurnal heating and cooling of clouds by the Sun, all make it unmanageably complicated to say what “the energy” is at any instant, or to say where exactly it’s going: it’s going off in all directions by all possible means. And even though radiation moves at the speed of light, the high thermal mass of the atmosphere in combination with the very low rate at which global warming shifts the atmosphere’s overall thermal equilibrium gives convection and conduction more than enough time to participate along with radiation in gradually adjusting the overall general temperature of the atmosphere. 
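The line-closing arithmetic in the comment above is easy to reproduce, along with the 1/2 versus 1/e closure criteria it mentions (the 3-lines-per-2% figure is simply taken from that comment):

```python
import math

lines_per_2pct = 3.0            # ~3 lines close per 2% CO2 increase (figure quoted above)

def months_per_line(annual_growth):
    years_per_2pct = 0.02 / annual_growth
    return 12.0 * years_per_2pct / lines_per_2pct

print(months_per_line(0.005))   # ~16 months at today's ~0.5%/yr CO2 growth
print(months_per_line(0.010))   # ~8 months if growth reaches ~1%/yr (the 2060 figure)

# Closure criteria: the fraction of surface photons escaping at optical depth tau is exp(-tau).
print(math.log(2.0))            # tau ~0.69 blocks half the photons (the 1/2 criterion)
print(math.exp(-1.0))           # at tau = 1 (unit optical thickness), ~0.37 get through
```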
The advantage of limiting the analysis of global warming to the 239 W/m2 of Outgoing Longwave Radiation (OLR) Earth must send to space to maintain equilibrium with the incoming 239 W/m2 of insolation is that that part is considerably simpler than the way heat moves around between the atmosphere and the surface. In fact we really only need to divide that OLR up according to the two places from which it was launched. (a) The surface, via the atmospheric window (the part of the spectrum not blocked by some line of some GHG). Knowing which lines are closed (and for greater precision, by how much they’re closed) would make that part simple to calculate, were it not for clouds, which being essentially opaque to all relevant thermal radiation from Earth’s surface shut down the whole atmospheric window. But that’s fairly easily dealt with by determining what percentage of time the window is closed, for which there is quite enough data to make a good estimate. (b) The atmosphere, via closed lines of GHGs which being closed both absorb and emit. The more of a particular GHG there is in the atmosphere, the higher the origin of those photons that make it to space because of the additional GHG molecules that now block the exit to space from lower molecules. However the higher the origin the colder the emitting molecule, and hence the lower the rate at which photons at any given wavelength are emitted from a molecule. The photons that don’t make it to space are simply ignored as being part of what keeps the atmosphere warm, which as I said involves very slow processes which are therefore best analyzed from the point of view of an atmosphere that is basically in equilibrium albeit with large fluctuations. So to summarize, wavelengths that are open (i.e. part of the atmospheric window) allow the most photons with that wavelength to leave for space. As any given wavelength is gradually shut down by increasing the GHG responsible for it, the OLR at that wavelength shifts from Earth-originating to atmosphere-originating. But as the GHG continues to increase, instead of the atmosphere-to-space photons of that wavelength increasing (as even some who should know better have claimed here), they decrease because they have to come from progressively higher and hence colder molecules, which radiate fewer photons of that wavelength (and of all other wavelengths but here we’re focusing on a single wavelength). As a caveat, this de-emphasis of back radiation in explaining the greenhouse effect is a personal preference of mine and not the generally accepted approach which is to make back radiation the heart of the explanation of the greenhouse effect. Some day I’ll figure out how to explain my version more succinctly and clearly. A graph or two of how these processes depend on the level of the GHG in question would probably help, as would weaning the experts off the standard explanation if they ever come to find it more attractive than the back radiation approach. • The back radiation or the alternative formulation the same physics are describing a real effect that increases the temperature of the surface. Making this effect stronger by adding CO2 to the atmosphere rises the temperature further. This an unavoidable direct consequence of higher CO2 concentration. Further changes – the feedbacks – cannot be fully determined from basic physics. The problems are too complicated for that. • Warming is a relative term. 
You get more heat from infrared radiation emitted by the atmosphere to the surface if you add more greenhouse gases. The Earth’s surface (usually) emits more infrared radiation than it receives from the atmosphere. Exceptions occur in the polar regions when you have a temperature inversion and a cloud that is warmer than the earth’s surface (radiative transfer in the polar regions is the topic of my Ph.D. thesis and several decades of subsequent research). Purely from infrared radiative transfer considerations, the net infrared radiation balance at the earth’s surface is cooling (again, the exception is when you have a cloud that is warmer than the surface). The net infrared radiation balance is less negative if you add more greenhouse gases. Again, the Georgia Tech undergrads understand this. It is pretty simple. This has been demonstrated empirically an endless number of times (see especially the data at …). Infrared radiative transfer models have been developed that reproduce these observations under relatively warm vs cool atmospheric temperatures, and high versus low amounts of water vapor. Is an alternative explanation of the basic underlying physics that explains these observations possible? Sure. Has Claes Johnson accomplished this? No. Even if he did, he would need to explain these same observations; the observed radiative flux that the surface receives from the atmosphere by infrared emission from gases such as H2O and CO2 isn’t going away by some manipulation of Maxwell’s equations. It seems that while Johnson can do mathematics, he does not understand anything about gases (including spectroscopy and basic kinetic theory). • Judith: Here’s my question. In the summer on the Great Plains, the humid warm air from the Gulf plays “tag” with the cool dry Arctic air. One day the air is dry, one day it is humid as heck. Now, when the soil is dry as toast (no evaporation going on), it is absolutely no hotter on a clear humid day than it is on a clear dry day. With all the backradiation from the water vapor, it should be hotter on the humid day. Why isn’t it? Now, puleeeze, don’t ignore the daytime and start talking about nighttime, like everyone else does. • It depends on the wind speed and the details of the atmospheric temperature and humidity profiles. You have to consider both the solar radiation and the infrared radiation. With this kind of information, you can interpret what is going on with the surface temperatures. This is simple. • Judith: You say: “This is simple.” ?? Your response makes absolutely no sense. THAT is armwaving, Judith! As I understand the greenhouse gas concept, ON AVERAGE it should be hotter on humid days in July on the farm near Sterling, Colorado than on days when it’s dry. That is not the case. Something appears to be wrong with the greenhouse gas hypothesis, since we are talking about changes in greenhouse gas concentration (water vapor) from about 0.4 % water vapor (5 g m-3) on dry days to 1.6 % (20 g m-3) on humid days. This is what drives me nuts about climate science: it seems to be all based on completely unfalsifiable hypotheses. Any empirical evidence that questions the “science” is automatically arm-waved away, just like you just did. There are always “other things” that keep one from validating any part of the grand scheme. • Judith, in fact he cannot even do mathematics. The equation he starts from (4) has exponentially growing solutions, exp(t/gamma), so it blows up rapidly and is physically meaningless.
He then makes an integration by parts error, before eqn (5), transferring time derivatives of u inside a space integral. • thanks, i hadn’t caught that. i didn’t dig in that closely, I just assumed that the math at least was correct. Too much to assume, it seems. • PaulM: You are off track. I am very familiar with the math I present and have written many articles and books on this stuff. Your objections don’t make any sense. How many math articles have you written? • Claes, is that an answer? • If you’ve written so many articles, and are very familiar with the equations, then of course you can easily come up with a counter-argument, can’t you? • Thank you, Vaughan. This issue of the Stefan-Boltzmann law is somehow a big deal in the denial of this physics. Georgia Tech undergrads can definitely refute this one. • Claes, i read what you wrote, and it makes no sense to me (nor did this section of your article). Tell me, does your theory explain the observation that if you point a radiometer upwards at a cloudless sky, it will measure a radiation flux of, say, 200-400 W m-2 (depending on ambient atmospheric temperature, humidity, etc)? Can you put the atmospheric profile of temperature and gases into your equations and calculate the flux that is observed? If not, and you continue to insist that your theory is correct, then you get to wear the crank label. You need to clearly address this issue. This is how theories are tested, with observations. There is an enormous amount of data at … against which to test any theory of infrared radiative transfer. So how does your theory pass this test of observations? It hasn’t been tested, right? And you somehow think your theory is better than the theory that actually explains observations. Sorry, nobody is buying this. I suspect that even John O’Sullivan and Ken Coffman understand that you have to test a theory with observations. While this can be very difficult to do on the scale of the entire planet, it is very easy to test infrared radiative transfer against observations in a single column. • Judith, please define the distinction between A) “back-radiation” and B) “downwelling radiation.” Thank you. • I’ve described this previously. First, i have no idea what people mean when they say “back radiation”. The emission from molecules in the atmosphere is isotropic (goes in all directions), with some of this radiation going in the direction of the earth’s surface. This is what actually occurs; what you call it is up to you. • Thank you, Judith. So when you point your radiometer skyward what you are measuring is the energy present in the atmosphere which keeps it in its gaseous state. This energy is therefore energy which is already in the system. What you have failed to show is that this energy is being added back to the sum total as a result of a composition change in the ratio of CO2. That would require that you can show a clear historical correlation between CO2 and temperature. There simply is no such relationship, as you well know. Therefore there is no “greenhouse effect” signal from CO2, let alone an AGW signal. As a climatologist I would expect you to acknowledge this point as a serious problem in the “greenhouse effect” hypothesis. I would also expect you to wonder if there may be some doubt in the basic physics on which this hypothesis is based.
I would also expect you to question why, in the last thirty-odd years of practically unlimited public funding, the entire weight of scientific genius has failed to devise a simple real-world experiment which can demonstrate the warming effect of a change in air composition by increasing the ratio of CO2. And why this theory still clings to a 150-year-old set of experiments by one man which have never since been re-examined to analyse the possible flaws. Your unquestioning faith in the “greenhouse effect” hypothesis of CO2 is breathtakingly worrying to those of us who can see that you have failed to acknowledge any of these points. The “greenhouse effect” hypothesis requires that a substance with highly transmissive properties like CO2 ultimately behaves as an insulator restricting the net energy loss to space. Judith, it would be fantastic and truly astounding if that were the case. But it isn’t. When you understand and finally acknowledge all the negative feedbacks, such as I have listed in …, which more than compensate for any possible positive transmission warming effects, you might realise just how dangerous and far off the mark the “greenhouse effect” hypothesis really is. • I tried looking over your paper but I have a general rule about papers that begin with a Hitler quote. Sorry. CO2 admits short wavelengths and absorbs long wavelengths. That’s what makes it a greenhouse gas. Even if CO2 is not responsible for recent climate changes it still contributes to the total greenhouse effect. Or are you refuting the entire greenhouse effect? • “I tried looking over your paper but I have a general rule about papers that begin with a Hitler quote. Sorry.” That is the best excuse for a short attention span I have ever heard! “Or are you refuting the entire greenhouse effect?” As a sub-surface dweller, I do refer to surface warming as a “greenhouse effect”, for the simple reason that it is a fallacious concept that could be used for nefarious purposes, such as that which occurred with the so-called “science” of eugenics. • Dr. Curry, it might improve the conversation if we tightened up this description. The so-called back radiation is a wave or probability front expanding around the emission point. There is a good reason to use this explanation rather than the assumption that there is an actual particle that leaves the emission point with a specific vector. One of these days I may even be able to explain this good reason to another person. • Judy: Ask your students to explain to you that an IR-meter measures frequency (this is why it is called an InfraRed-meter), and that the connection to “downwelling radiation from the atmosphere” is ad hoc by applying SB in the form Q = sigma T^4. If you claim it is not ad hoc then prove to me that the formula applies. Or ask your students. • Actually, an IR-meter measures intensity at one or more frequency bands in the infrared portion of the EM spectrum by filtering out or otherwise being insensitive to frequencies outside the desired measurement range. • Heck, you can point an IR spectrometer upwards and observe the spectrum of the radiation from the atmosphere, so all this back and forth is, as they say, sausage. THEN you can point the IR spectrometer downwards and you can ACTUALLY MEASURE the amount, if any, of reflected IR from the downwelling radiation. HINT: Almost all of it is absorbed because, if nothing else, the absorptivity/emissivity is close to unity for most materials in the region of the IR being talked about.
Claes, your ideas simply fail the reality test. That sounds a lot like you just claimed to be able to measure something that you cannot prove existed in the first place. Good stuff! • Don’t be silly, one can, and many have measured frequency dependent absorptivities/emissivities for a huge variety of materials. This example from the MODIS database is appropriate in the US today • Eli, could you provide us with a source of information on the absorptivity of material ordered by frequency or wavelength?? • JC: “I suspect that even John O’Sullivan and Ken Coffman understand that you have to test a theory with observations.” Yes, even a’ dumbass’ like me knows that! So why, after 25 years and $100 million spent can you and other ‘ clever climatologists’ not give us one experiment in the atmosphere to prove your faux theory? • Heh, I provided some observations above somewhere that seem to me to contradict the concept that backradiation adds to surface heating, and all I got was some hand-waving that didn’t even speak to my observations. That is what is so slippery about climate science: nothing appears to be falsifiable, and if you bring up some observations that don’t fit the hypotheses, you are just told that there are “other variables” that make your observations nonsense or that you are just dumb (which cold be the case, of course). Now, we are even getting stupid statements about the snowstorms being caused by global warming (although mainly by idiots like Gore and the news media. I don’t think any reputable climate science has chimed in on this. Yet. However, tellingly, the famous climate scientists are sure not going out of their way to dispell this crap, either). 82. Will | February 2, 2011 at 10:17 am | They aren’t filtered out because they don’t exist, if you persist in this in the face of documented evidence you aren’t a sceptic you’re in denial! See here: “Unlike SEM inspection or EDX analysis, FTIR spectroscopy does not require a vacuum, since neither oxygen nor nitrogen absorb infrared rays. “ • Phil: Are you absolutely 100% sure that O2 and N2 do not absorb or emit any IR what so ever? • Will, highest order IR absorption selections rules are based on the presence of a permanent electric dipole or an electric induced dipole (in the case of the asymmetric stretch of CO2) in a molecule. Both N2 and O2 lack a permanent or induced dipole upon the excitation of the single, totally symmetric stretch possible. Therefore, due symmetry based selection rules, the absorption of IR light by N2 and O2 is negligible. O2 has a small, but non-zero magnetic dipole because there is an abundance of the paramagnetic form of O2, but the absorption cross-section for that transition is exceedingly small and can be ignored for most practical purposes. • Maxwell CO2 only has a temporary dipole moment. A temporary dipole moment is induced in O2 and N2 by ionisation which is occurring at varying degrees throughout the entire atmosphere. Ionisation is most prolific in the Diurnal Atmospheric Bulge which covers an area of 25% of the Earths atmospheric surface under the solar point and bulges upwards to an altitude of 600 km. Some people refer to the Diurnal Atmospheric Bulge as the Thermosphere. But it is not a sphere it is a bulge with a circumference equal to 25% of the surface of the atmosphere and bulges up to an altitude of 600 km. 
The ionisation is so intense that the bulge is actually elongated towards both poles due to the magnetic forces involved (the Diurnal Bulge or Thermosphere, as you may know it better, disappears below the Mesosphere at 100 km on the dark side of the Earth). That is a lot of ionisation. Lots of temporary dipoles. Thanks for the lesson in dipole moments. • Will, ummm, according to sources, the ionization potential for N is over 14 eV. That corresponds to a photon with a wavelength of less than 100 nm. That’s well into the vacuum ultraviolet region of the EM spectrum, bordering on the extreme UV region. The solar spectrum cuts off pretty effectively at less than 5 eV. The ionization potential for N2 is actually HIGHER than atomic nitrogen because the presence of the other nitrogen atom reduces the kinetic energy of the electrons relative to the molecule’s scattering states. Now, should we believe some speculation about the behavior of the earth in the past that has to be inferred from indirect observations, or do we believe direct observations of solar output and ionization of molecules in highly controlled experiments? I’m assuming you’ll choose whichever outcome supports your already-held position, showing everyone here that you’re pretty disinterested in science. Thanks for playing though! Hence in the ionosphere you see N+, not N2+. • Phil……? ? ? ! ! • What?! Asked and answered many times with citations; the real question is why you continue to propagate crap like this: “The specific frequencies that 99% O2 and N2 absorb emit at are filtered out.” Which is false, and since no instrument manufacturer would say such a thing, it is a lie, if not by you then by whatever source you got it from. Now that you have been given citations proving it to be false, I suggest you inform that source that they are incorrect and discontinue posting such misinformation. • At atmospheric temperatures, pressures, and concentrations, the IR absorption/emission by O2 and N2 are so negligible that they do not need to be incorporated into radiative transfer calculations. In other circumstances (e.g., the planet Venus), very high concentrations can lead to some IR absorption/emission. Much of this is continuum absorption, reflecting the fact that although N2, for example, does not itself exhibit a dipole, a collision of two N2 molecules can create a temporary dipole allowing absorption. In the Earth’s atmosphere, this phenomenon is too infrequent to matter, and so for practical purposes, only the greenhouse gases such as CO2, H2O, ozone, etc., are active in IR wavelengths. • Fred: See my answer to Maxwell above. • Nowhere does it say that N2 and O2 have dipoles; they do not! We are not talking about the ionosphere. • Phil: See Fred’s post above in relation to mine. And try and calm down a little, it’s just a discussion after all. Phil, my understanding (if I am correct) is that the demonisation of CO2 as the main driver of climate is long and deep, spanning decades and probably more. Therefore citing data from large corporations and institutions has no bearing on my line of enquiry. If it did, I wouldn’t get very far before I became as confused as most people seem to be. I think I have made my point clear enough with regard to ionisation and dipole. If you have missed something, please read it again. Arm waving can’t change anything. • So stop doing it!
I’ve cited textbooks as well as commercial sources; why would you think that a manufacturer of a spectrometer would claim that there is no background IR to be subtracted from N2 & O2 if it were not so? Despite the fact that they tell you to background CO2 (water is not usually a problem because you dry the purge air as the optics are often hygroscopic). I dislike people propagating lies about the science as you have been doing; if you stop doing so you’ll have no problems from me. N2 and O2 do not absorb IR, end of story. • “N2 and O2 do not absorb IR, end of story.” I accept your apology for calling me a liar, Phil. Now can I go back to ignoring you? • Let’s see the absorption spectra. • Will, in the first plot, the scale in molecular extinction units shows the intensity of absorption at levels between 10^-35 and 10^-40. The absorption of CO2 in the same region is about 1. So for practical purposes, O2 doesn’t absorb IR light. It’s almost 40 orders of magnitude smaller than CO2 to an IR photon. The second isn’t even an absorption plot, you moron. It’s a scattering spectrum, which is centered at 2300 cm^-1, showing both the Stokes and Anti-Stokes wings of the Raman spectrum. If you’re going to insist on presenting data, at least know what the hell you’re talking about before doing so. • Maxwell… You’re absolutely wrong. CO2 cannot have an absorptivity of 1 at any wavelength, not even at the band where you say it absorbs 100% of the energy emitted by the surface. Hottel, Leckner, Modest, and more than 100 scientists and engineers have demonstrated, observationally and experimentally, that carbon dioxide has a ridiculous absorptivity and a similarly ridiculous emissivity at the band where you say it is a blackbody. • Nasif, I didn’t write absorptivity. I wrote absorption. There is a nuanced difference in the context of this discussion that I’ll let you figure out on your own. • You call me a moron but you have just helped me make my point. That is the reason for these plots. Radiative transfer has an insignificant effect on atmospheric temperature, as I have said many times above. As you point out, CO2 may be many orders of magnitude more absorptive than O2 or N2, yet this has no significant or definable effect on global temperature. Remember there is no definitive CO2-induced GW signal above normal variation, let alone an AGW CO2 signal. There is zero historical evidence showing CO2 driving global average T. If radiative transfer by CO2 were such a significant factor in determining global average T, do you not think that several orders of magnitude of warming effect would be quite a simple thing to demonstrate experimentally? Yet I have demonstrated by experiment that even with almost pure CO2 this is not the case. “AGW RIP” There are many alternative views to the “greenhouse effect” hypothesis. Here is one that I particularly like and find fits in with my own thoughts on specific heat capacities that are actually supported by my experiments. In my book, which has been available as a free download since October 09, I discuss the fact that we are subsurface dwellers and that it is misleading to refer to surface warming or cooling as a greenhouse effect. See this post above to understand my perspective a little better. Try to understand that I do not want to discuss serious issues with name-callers. You simply expose the weakness of your own position when you resort to such tactics.
• @Will… As I have said before, Phil uses to twist science based only on HIS “reliable” internet sources; you know what I mean; Phil’s is pure pseudoscience. Some posts above, he said that Te and Thb, in S-B equation , were heat. Now he’s saying that N2 and O2 are thermodynamically innert. • I’ll put it down to your lack of reading comprehension regarding the S-B terms. Regarding the absorption of CO2 , the Q-branch at ~667cm^-1 is ~1 in a 10cm pathlength cell at a volume fraction of 0.004. For N2 the Q-branch at ~2330cm-1 in the same cell at a volume fraction of 1 is ~1.5×10^-5. The concept that N2 and O2 do not absorb in the IR due to the lack of a dipole is shared by all physical scientists, and will be found in any text on molecular spectroscopy (e.g. Herzberg (the bible), Barrow etc.) • Nasif, Thank you, I know what Phil is all about and have watched him contradict himself up an down this thread depending on which poor sod he is trolling at any given moment. Yawn! • Phil, I think most of us are comfortable with the CO2 contribution to O2 and N2 temperatures being via collision (and vice versa). • Good, of course as illustrated by Will’s post there are others who just don’t get it. • Eli is going to get a bit technical here, but Will is playing with a few extra cards, and we should nail this down. The interaction of molecules with electromagnetic radiation is described by a power series in the electric and magnetic fields. The strongest and first element is the electric dipole interaction, then many orders weaker (about 10^-5), the magnetic dipole interactions, then the still weaker (~10^-8) electric quadrupole, etc. The energy of a photon that is absorbed or emitted is determined by the difference in energy between quantum states. If there is the possibility of an electric dipole (ED) transition it will dominate and the others can be ignored because they will be much weaker but not all transitions between all levels are allowed for ED transitions. For example, if in a vibrational transition the dipole moment of the molecule does not change, the transition is not allowed. This means that for homonuclear diatomic, ED transitions are forbidden. O2, for example, has two spin unpaired electrons, e.g. a magnetic moment, and the interaction of the electron spin with the magnetic part of the electromagnetic field leads to a very weak absorption/emission which can only be measured with great difficulty, and plays no part in atmospheric physics. N2 has no unpaired electrons in the ground state, and can only interact by the still weaker electric quadrupole interaction. CO2 has three different vibrations. The first the symmetric stretch has zero dipole moment in all vibrational states, so there is no transitional dipole moment and it neither absorbs or emits. The second, the asymmetric stretch starts from this position O—C—O and moves to this one O-C—–O . The ED does change and this transition is allowed by the ED selection rule (it is ~1900 cm-1) The other vibration is the bend which starts from O—C—O and changes to / \ O O and is also ED allowed. Now Eli, being a very sympathetic Rabett, would be overjoyed if Will stopped blathering, but he would also be surprised=:> 83. Fred, Ok, let’s look at the temperature-equilibrated in the high altitude case, since I would describe it differently.. The layer is initially already equilibrated with the adjacent atmospheric layers with respect to temperature, therefore in LTE (local thermodynamic equilibrium). 
If there is no sun on the night side, the radiative imbalance equals minus OLR (outgoing longwave radiation), measured per unit area. The surface and the layer cool simultaneously. The so-called “backradiation” decreases simultaneously. “Backradiation” in this case does not warm anything. It will lead to a decrease in surface cooling with respect to a reference system. If there is sun on the day side, the radiative imbalance equals (solar constant)/2 × (1 − albedo) − OLR. This is positive because of the sun. The surface and the layer heat simultaneously. The so-called “backradiation” increases simultaneously. Of course the so-called “back radiation” exists as downwelling longwave radiation, but it is not a cause for surface warming. It will lead to a decrease of surface cooling and therefore a higher temperature in the stationary state. However, the sun is the cause of surface warming. “Back radiation” is an effect of the interaction of the sun with greenhouse gases that describes the heat transfer from incoming solar energy via the surface to the atmosphere. One can also choose to describe the greenhouse effect without it, just looking at incoming solar heating and outgoing longwave cooling. I’d like to add that the integrated radiative balance that you get in the 1-D energy balance models is not the physical reality, but only a way of energy bookkeeping. The physical reality is the instantaneous radiative balance on the day side and the night side and a rotating earth. Best regards • I think we agree that the sun is the energy source responsible for surface and atmospheric warming. That’s the most important point. I’m still not sure what other point you are making regarding a single layer filled with CO2. Regardless of altitude, it will equilibrate at a temperature at which emissions (a function of temperature) are equal to absorption (a function of CO2 concentration), and that temperature would not be very different regardless of altitude, neglecting small changes in absorptivity and emissivity as a function of pressure. If only radiative processes are operating (which of course would not be the case in the real world), and assuming the CO2 in the layer is the only greenhouse gas in the atmosphere, the temperature would essentially be the surface temperature divided by the fourth root of 2 (a short numerical check of this figure follows below). In each case, the absorbed energy would be emitted equally upward and downward, and so the downwelling radiation would be similar regardless of altitude. • @Fred… Correction: You forgot four possible trajectories for the energy emitted by the atmosphere. Regarding your last claim, you forgot photon streams and radiation pressure. The Second Law includes them in its contextual meanings. • I don’t believe I forgot anything, but with all due respect, you use words in a way that does not appear to apply to any real world phenomena. I don’t believe it’s unfair to state that no-one else appears to understand what you are trying to say either, despite their great familiarity with the relevant science. In any case, if you want to redo my calculations, I’ll be interested to see the results. Do you mind if I ask you a question? On your site, you list yourself as a University Professor. What University does this refer to? Where is it located? Who are your students? What courses do you teach? • Bingo! Fred is now eager to shift to the ad hom argumentative form. I sense a deeper sense of failure to win the debate focusing on the facts.
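Fred’s “fourth root of 2” figure above is easy to check with a few lines of radiative bookkeeping. Here is a minimal sketch in Python, assuming a single fully absorbing layer, purely radiative exchange, and a round 288 K surface temperature (all of these are idealizations, not measurements):

```python
# Single-layer check of the figure quoted above: a fully absorbing layer in
# pure radiative equilibrium must emit (up plus down) everything it absorbs
# from the surface below, so
#   2 * sigma * T_layer**4 = sigma * T_surface**4
#   => T_layer = T_surface / 2**0.25, independent of the layer's altitude.

SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURFACE = 288.0           # K, assumed round number for the surface

t_layer = T_SURFACE / 2 ** 0.25             # ~242 K
downwelling = SIGMA * t_layer ** 4          # ~195 W/m2, half the surface emission
surface_emission = SIGMA * T_SURFACE ** 4   # ~390 W/m2

print(round(t_layer, 1), round(downwelling, 1), round(surface_emission, 1))
```

With a 288 K surface this gives a layer temperature of about 242 K and roughly 195 W/m2 of downwelling radiation, consistent with Fred’s description of equal upward and downward emission.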
• John – I think if you wade through this entire thread to see the various exchanges of comments between Nasif and others, you would understand our frustration at communicating with someone who appears to be speaking a different language, scientifically speaking, from the rest of us. It also raises the question as to whether his interpretation of the meaning of University Professor differs from the ordinary one, but I hope his answer will clarify that issue. I’m not sure it’s an ad hominem attack on someone to ask him to tell us at what University he is a professor. Regarding the greenhouse effects of CO2 as a function of altitude, I certainly remain open to any scientific explanation he has to offer, provided that I can understand it. My calculations were simply derived from the need to maintain a steady state, such that absorbed and emitted radiative flux from CO2 remain equal. • John – Does this mean you’re not going to invite me to write a chapter for your next book? Fred, I think this might be Nasif’s CV. It says he was appointed a “University professor” in 1974 at the Universidad Regiomontana, a university in Monterrey, Nuevo Leon, in the northeast of México having 5,000 students and three schools, Humanities, Engineering-and-Architecture, and Economics-and-Administration, and offering Master’s degrees in a wide range of subjects. Professor Nahle held that appointment for 12 years. A year after that appointment began he founded the Biology Cabinet on the side, which he has owned and operated throughout the subsequent 35 years, during the last 24 of which he does not claim to have held any academic appointment. His CV does however claim “recognition from” the autonomous universities of Aguascalientes (September 21, 2006) and of Nuevo Leon (May 25, 2007). Andy Warhol famously said “In the future, everyone will be world-famous for 15 minutes.” Ironically Warhol himself got vastly more than 15 minutes of fame for that bon mot. As did Nasif Nahle, who got a full 24 hours for his May 25 recognition from UA de Nuevo Leon. This 24-hour recognition was in turn leveraged into a full professorship for Nahle at this site so as not to mislead the site’s readers into believing that Nahle was anything less than a world authority on climate science. This site quotes Nahle as follows. So if you now claim that the delay might be as much as 24 milliseconds, you are contradicting a noted authority. I would say more than 24 years, so I too am contradicting this authority, and moreover by a factor equal to the number of centimeters light travels in one second. All this puts one in mind of Frank Baum’s H. M. Woggle-Bug, T.E.. In the case of global warming denial the microscope responsible for Woggle-bug’s H.M. (highly magnified) prefix is the climate denial machine’s urgent need for authorities equipped with the requisite superpowers to do battle with the evil Washington empire’s entrenched control over the hapless citizens of this proud nation of independent battlers for freedom from authoritarian control. If you claim that this isn’t how that all went down, you have my full attention, at least until my plane leaves in 24 hours for India, after which I’ll be posting somewhat less often here for a couple of weeks since I have some talks to prepare. Very stupid of me to so volunteer. • Vaughan has now tag-teamed with Fred to switch from the issue at hand to attacking the man’s teaching credentials. Indeed, desperation has finally brought the ad hom into play. • attacking the man’s teaching credentials Huh? 
Fred asked what Nahle’s academic credentials were, I simply repeated what Nahle himself writes about himself in answer to Fred’s question. I don’t see how that could be an attack on Nahle’s academic credentials, but if it was then Nahle has attacked himself in his CV. It seems to me the desperation is on your side: you are desperate to interpret every statement about someone as an ad hominem attack even when it was not an attack on his credentials but merely what Nahle writes about himself. What I have attacked is the statements made by Nahle, such as his 22 millisecond stuff, which is hardly the compelling argument it’s been made out to be. Or are you so desperate to find ad hominem attacks in everything that even attacks on statements are ad hom attacks? • Interesting, Vaughan. Appointed a “University Professor” at age 23. Didn’t get his degrees until several years later. I don’t actually enjoy making fun of people just for sport, but there are reasons for informing readers of information relevant to the credibility of commentators. However, as far as I’m concerned, the “Professorship” is less important than the credibility of the comments themselves, which can easily be judged by anyone reading these exchanges. • Fred, you wrote: I think this statement is not the correct perception of the role of the so-called “back radiation”. But I might have misinterpreted you, reading it a third time. We agree that the sun is the only energy source. The so called “back radiation” in radiative transfer theory is the integral over all light rays in the longwave regime that reach the surface. It is an integral over angles. It is one parameter within the surface energy budget, but not the only one. Of course it cannot be omitted, but it should also not be used isolated as the physical cause for surface warming within the earth system according to my opinion. The physical cause for surface warming in the real world is the sun, as I showed in my example. Best regards 84. This may seem OT, but it’s really not as we’ve been discussing downwelling radiation in regards to the general greenhouse effect, and I’d like to get some professionals to comment about the increased water vapor we’ve observed worldwide over the past few decades. This has been a long-standing prediction of AGW theory, along with increased night time temps. We know that in general the temperature is determined by dew point and cannot fall lower than that. We saw 37 new high night time temps set last summer in just the U.S. And here’s a story about more records being set in Australia during their current summer: So, my question is: Aren’t these higher night time temps pretty much due to downwelling radiation caused by greater water vapor levels, and if they are showing a trend of increasing world-wide, at least some proof the world has been warming, regardless of the cause, but at least quite consistent with the effects expected by the primary and secondary greenhouse effects related to the 40% increase in atmospheric CO2 since the 1700’s? A second question might be: How would C. Johnson explain these higher night time temps? • Answer to the second question, the same way we explain higher night time temps measured on bodies with no atmosphere. Materials absorb energy and release it at differing rates. 85. Dr. 
C, Although I have no scientific credentials whatever, my last accomplishment in that area being the Junior Trig Prize in High School in the last year of the Eisenhower administration, I felt I had to comment because when I read this thoughtful article, at the end I found “666 Responses to Slaying a greenhouse dragon” and I regard myself as a Christian — though not a fundamentalist as that term is generally understood in Academia — so I did not want to risk any evil influences on this excellent blog. The number had to be raised to at least 667. I did not have a favorable impression of any of the Johnson papers I read, and likewise when I looked at the table of contents of Dragon I got the impression that it could easily have been a book underwritten by Joe Romm’s organization to discredit climate skepticism. Over the last few years I’ve read everything I could find on the subject, and this seemed to be almost a directory of the fringe (with exceptions, such as Dr. Ball). Yes, yes, yes, CO2 is a greenhouse gas, which reradiates absorbed IR, particularly in the 15 micron band, half up and half down. Water vapor also reradiates in the same band, more weakly — but it is around two orders of magnitude more common in the surface atmosphere. So does O2, even more weakly but three orders of magnitude more common. The real question is not whether the greenhouse effect exists, but rather how much net influence it has in a hideously complex and chaotic climate system, the overall effect of which is to transport unimaginably huge quantities of heat and moisture from point A to point B. The “consensus” types are fond of citing Arrhenius. Well, back in 8th Grade Science, between contests to see who could get the girl with the long ponytail to stand next to the vandeGraaf generator, we learned about the greenhouse effect: yep, it raised temperatures around 30 deg C, but it actually should have raised them by around 60 deg C according to calculations. “Settled science.” So although the greenhouse effect is quite real, it would appear that other processes (feedbacks?) reduce the effect by about half. Interestingly, if one takes the “consensus” figure of 1.2 deg C sensitivity for CO2 doubling and applies 50% negative feedback to it, one gets a sensitivity around half a degree, which is in the same general range as the conclusions of such distinguished researchers as Lindzen and Spencer. So that’s reason 1 for my skepticism. Reason 2 is that basic physics tells us that at the pressure of a standard surface atmosphere, CO2 relaxation by bonking into other molecules is several orders of magnitude more probable than by emission of yet another errant photon. This means its “activation” by IR absorption is almost instantly thermalized, which will cause it to participate in convection, which will lift it eventually to the convective boundary layer where lower pressure will make pinging a photon into outer space much easier. But if convection, rather than radiation, is the dominant mode of heat transfer in the lower atmosphere, where storms, precipitation, and suchlike horribles are generated, how could CO2 changes affect the weather? After all, hundred-year events are so-called because, although they are rare, they do occur — CO2 concentration be damned. Witness the ’67 Chicago blizzard or the mid-70s floods in Australia and Pakistan. Reason 3 is that in spite of careful reading of such sources as your Ch. 
13 [IIRC] and ScienceofDoom, it is still unclear to me how changes in incident IR can have a measurable effect on ocean temperatures, 70% of the surface area of the planet and close to 100% of its thermal capacity. IR apparently penetrates no more than a few microns into water; the next several millimeters are cooled by evaporation; and the next hundred meters or so are effected principally by incident visible and UV, with the relative UV content determining how deep the effect penetrates. Sure, the top bit is sloshing around constantly with wind and waves, but a few microns?? In a mixing layer hundreds of meters deep??? And reason 4 is that in the many peer-reviewed climate papers I’ve read over the last few years, the overwhelming majority were like the excellent and interesting paper of yours I read from a link here a month or so ago, dealing with the relation between sea surface temperature, atmospheric pressure, and tropical cyclone wind force [IIRC]. It struck me (again, a layman) as an interesting and useful piece of research, doubtless to be cited as fundamental in future storm studies. (Not to mention the impressive and expensive full-color graphs. Do de name Arlo Guthrie ring a bell?) But as to IPCC-style climate alarmism, it simply offered a neat one-paragraph professional curtsy to CO2 and climate change — enough to satisfy the “climate change” section of the grant application, but no more. This is fine and understandable in the current political climate, but it amounts to a demonstration that when we hear the claim that billions and billions of scientists agree with the CO2-driven catastrophic global warming theory, it’s time to hold on to our wallets. • Craig – You have clearly thought about this issue and so you deserve a response. Perhaps not a long one here, however, because the thread is devoted to something else, and more importantly, because all the points you mention have been discussed in detail elsewhere. Briefly, however, the most likely value of climate sensitivity is estimated to be about 3 C for doubled CO2. This takes into account convective heat transport – without convection, the value would be much higher. Regarding ocean heat storage, downwelling IR contributes substantially more than solar irradiation. Essentially all IR reaching the ocean surface is absorbed within the “skin layer”, and the heat is distributed throughout the entire mixed layer (down to as much as 200 meters) by turbulence and convective mixing, so that solar and infrared contributions are homogenized. It may be unfair to ask you to wade through previous threads, but if you are willing, you will find these phenomena addressed extensively. 86. “As I understand the greenhouse gas concept, ON AVERAGE it should be hotter on humid days in July on the farm near Sterling, Colorado than on days when it’s dry. …” – jae jae, I almost asked this when you first made the comment up above. Can you link to something that confirms your interpretation of the greenhouse concept? 87. Glad to see we’re safely above 700 now… Thanks, Fred, but I have read (or, in certain cases of extreme exhaustion, at least skimmed) all of the relevant threads on this blog since its inception. The 1.2 deg C figure I cited was, I thought, even the IPCC’s basic CO2 sensitivity figure, theoretical, in the absence of feedbacks. If I’m mistaken, please provide a page reference in AR4. Otherwise I stand by all of my assertions above. 
The term “skin layer” is used both to refer to the microns-thick IR absorbing layer and the millimeters-thick layer cooled by evaporation; in which sense are you using the term? This is the same ambiguity I find in nearly all attempts to describe the IR effect in the ocean — even Dr. C’s, otherwise a model of explicitness and clarity. If you mean the mm layer, it’s possible but the layer is by all accounts cooler; if you mean the micron layer, the difference in scale and near-certainty of evaporation makes it simply unbelievable, no matter how strong the wind or high the waves (I’m an avid amateur sailor). Not to mention that given the relative heat capacities of the atmosphere and the ocean, the idea that any atmospheric phenomenon could have a perceptible effect on ocean temperatures is implausible, to say the least — the “mixing layer” of the ocean alone — defined as you do — having two orders of magnitude more capacity than the entire planetary atmosphere. I regard the publication of Dragon at this point in time as particularly unfortunate, since the always-improbable and faintly ridiculous CO2-driven AGW theory has managed over the last two decades to get taxpayers worldwide to spend more than $100 billion researching itself, only to have all actual relevant measurements produced by this research provide counterevidence. There is no need for silly fringe attempts to “debunk” it. But you are quite right; if this is not off-topic, it is at least pushing the very edge. Thanks in any case for your kind reply. • Estimated climate sensitivity with feedbacks is about 3 C per doubling. Without feedbacks, it’s 1.2 C. The IR downwelling radiation contributes considerably more to ocean heating than solar radiation. Both are mixed into the mixed layer, and each contributes proportionately to evaporation, which is why the temperature of the skin layer as measured by satellite is slightly cooler than the water immediately below. If the IR contributed disproportionately, it would be hotter, and that would be observed in the satellite data. • This NASA file shows the shallow upward thermal gradient from convection, but also that the skin layer is cooler than at 10 um and only slightly warmer than at 1 meter – Ocean Temperature . For IR absorption to be disproportionately dissipated into evaporation rather than heating would require a far greater difference between the surface and the 1 meter depth, given the relationship between temperature and evaporation rate. In essence, solar and downwelling IR are combined so that they contribute in proportion to heating and evaporation. • JCH
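As an aside, Craig’s “two orders of magnitude” comparison above can be checked per square meter of surface. A rough sketch in Python, with round-number assumptions for the mixed-layer depth and for the heat capacities of seawater and air (none of these figures come from the thread itself):

```python
# Rough per-square-meter heat capacity comparison: ocean mixed layer vs the
# whole atmospheric column. All inputs are round-number assumptions.

# Ocean mixed layer
DEPTH_M = 200.0        # m; use 100.0 for a shallower mixed layer
RHO_SEAWATER = 1025.0  # kg/m^3
CP_SEAWATER = 3990.0   # J/(kg K)
ocean = DEPTH_M * RHO_SEAWATER * CP_SEAWATER      # ~8.2e8 J/(m^2 K)

# Atmospheric column (column mass from surface pressure / g)
COLUMN_MASS = 101325.0 / 9.81                     # ~1.0e4 kg/m^2
CP_AIR = 1004.0                                   # J/(kg K), dry air
atmosphere = COLUMN_MASS * CP_AIR                 # ~1.0e7 J/(m^2 K)

print(ocean / atmosphere)   # ~80x for 200 m (about 40x for 100 m)
```

Under these assumptions the ratio comes out closer to 40 to 80 than to a clean factor of 100, but the qualitative point stands: the mixed layer dwarfs the atmosphere thermally.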
The Essence of Reality

I know it’s a crazy title. It has no place in a physics blog, but then I am sure this article will go elsewhere. […] Well… […] Let me be honest: it’s probably gonna go nowhere. Whatever. I don’t care too much. My life is happier than Wittgenstein’s. 🙂 My original title for this post was: discrete spacetime. That was somewhat less offensive but, while being less offensive, it suffered from the same drawback: the terminology was ambiguous. The commonly accepted term for discrete spacetime is the quantum vacuum. However, because I am just an arrogant bastard trying to establish myself in this field, I am telling you that term is meaningless. Indeed, wouldn’t you agree that, if the quantum vacuum is a vacuum, then it’s empty? So it’s nothing. Hence, it cannot have any properties and, therefore, it cannot be discrete – or continuous, or whatever. We need to put stuff in it to make it real. Therefore, I’d rather distinguish mathematical versus physical space. Of course, you are smart, and so now you’ll say that my terminology is as bad as that of the quantum vacuumists. And you are right. However, this is a story that I am writing, and so I will write it the way I want to write it. 🙂 So where were we? Spacetime! Discrete spacetime. Yes. Thank you! Because relativity tells us we should think in terms of four-vectors, we should not talk about space but about spacetime. Hence, we should distinguish mathematical spacetime from physical spacetime. So what’s the definitional difference? Mathematical spacetime is just what it is: a coordinate space – Cartesian, polar, or whatever – which we define by choosing a representation, or a base. And all the other elements of the set are just some algebraic combination of the base set. Mathematical space involves numbers. They don’t – let me emphasize that: they do not! – involve the physical dimensions of the variables. Always remember: math shows us the relations, but it doesn’t show us the stuff itself. Think of it: even if we may refer to the coordinate axes as time, or distance, we do not really think of them as something physical. In math, the physical dimension is just a label. Nothing more. Nothing less. In contrast, physical spacetime is filled with something – with waves, or with particles – so it’s spacetime filled with energy and/or matter. In fact, we should analyze matter and energy as essentially the same thing, and please do carefully re-read what I wrote: I said they are essentially the same. I did not say they are the same. Energy and mass are equivalent, but not quite the same. I’ll tell you what that means in a moment. These waves, or particles, come with mass, energy and momentum. There is an equivalence between mass and energy, but they are not the same. There is a twist – literally (only after reading the next paragraphs, you’ll realize how literally): even when choosing our time and distance units such that c is numerically equal to 1 – e.g. when measuring distance in light-seconds (or time in light-meters), or when using Planck units – the physical dimension of the c² factor in Einstein’s E = mc² equation doesn’t vanish: the physical dimension of energy is kg·m²/s². Using Newton’s force law (1 N = 1 kg·m/s²), we can easily see this rather strange unit is effectively equivalent to the energy unit, i.e. the joule (1 J = 1 kg·m²/s² = 1 (N·s²/m)·(m²/s²) = 1 N·m), but that’s not the point. The (m/s)² factor – i.e. the square of the velocity dimension – reflects the following: 1. Energy is nothing but mass in motion.
To be precise, it’s oscillating mass. [And, yes, that’s what string theory is all about, but I didn’t want to mention that. It’s just terminology once again: I prefer to say ‘oscillating’ rather than ‘vibrating’. :-)] 2. The rapidly oscillating real and imaginary component of the matter-wave (or wavefunction, we should say) each capture half of the total energy of the object E = mc². 3. The oscillation is an oscillation of the mass of the particle (or wave) that we’re looking at. In the mentioned publication, I explore the structural similarity between: 1. The oscillating electric and magnetic field vectors (E and B) that represent the electromagnetic wave, and 2. The oscillating real and imaginary part of the matter-wave. The story is simple or complicated, depending on what you know already, but it can be told in an obnoxiously easy way. Note that the associated force laws do not differ in their structure: Coulomb’s law gives F = (1/4πε₀)·q₁·q₂/r², while Newton’s gravitation law gives F = G·m₁·m₂/r². The only difference is the dimension of m versus q: mass – the measure of inertia – versus charge. Mass comes in one color only, so to speak: it’s always positive. In contrast, electric charge comes in two colors: positive and negative. You can guess what comes next, but I won’t talk about that here. :-) Just note the absolute distance between two charges (with the same or the opposite sign) is twice the distance between 0 and 1, which must explain the rather mysterious factor of 2 I get for the Schrödinger equation for the electromagnetic wave (but I still need to show how that works out exactly). The point is: remembering that the physical dimension of the electric field is N/C (newton per coulomb, i.e. force per unit of charge), it should not come as a surprise that we find that the physical dimension of the components of the matter-wave is N/kg: newton per kg, i.e. force per unit of mass. For the detail, I’ll refer you to that article of mine (and, because I know you will not want to work your way through it, let me tell you it’s the last chapter that tells you how to do the trick). So where were we? Strange. I actually just wanted to talk about discrete spacetime here, but I realize I’ve already dealt with all of the metaphysical questions you could possibly have, except the (existential) Who Am I? question, which I cannot answer on your behalf. 🙂 I wanted to talk about physical spacetime, so that’s sanitized mathematical space plus something. A date without logistics. Our mind is a lazy host, indeed. Reality is the guest that brings all of the wine and the food to the party. In fact, it’s a guest that brings everything to the party: you – the observer – just need to set the time and the place. In fact, in light of what Kant – and many other eminent philosophers – wrote about space and time being constructs of the mind, that’s another statement which you should interpret literally. So physical spacetime is spacetime filled with something – like a wave, or a field. So what does that look like? Well… Frankly, I don’t know! But let me share my idea of it. Because of the unity of Planck’s quantum of action (ħ ≈ 1.0545718×10⁻³⁴ N·m·s), a wave traveling in spacetime might be represented as a set of discrete spacetime points and the associated amplitudes, as illustrated below. [I just made an easy Excel graph. Nothing fancy.] The space in-between the discrete spacetime points, which are separated by the Planck time and distance units, is not real.
It is plain nothingness, or – if you prefer that term – the space in-between is mathematical space only: a figment of the mind – nothing real, because quantum theory tells us that the real, physical, space is discontinuous. Why is that so? Well… Smaller time and distance units cannot exist, because we would not be able to pack Planck’s quantum of action in them: a box of the Planck scale, with ħ in it, is just a black hole and, hence, nothing could go from here to there, because all would be trapped. Of course, now you’ll wonder what it means to ‘pack’ Planck’s quantum of action in a Planck-scale spacetime box. Let me try to explain this. It’s going to be a rather rudimentary explanation and, hence, it may not satisfy you. But then the alternative is to learn more about black holes and the Schwarzschild radius, which I warmly recommend for two equivalent reasons: 1. The matter is actually quite deep, and I’d recommend you try to fully understand it by reading some decent physics course. 2. You’d stop reading this nonsense. If, despite my warning, you continue to read what I write, you may want to note that we could also use the logic below to define Planck’s quantum of action, rather than using it to define the Planck time and distance units. Everything is related to everything in physics. But let me now give the rather naive explanation itself: • Planck’s quantum of action (ħ ≈ 1.0545718×10⁻³⁴ N·m·s) is the smallest thing possible. It may express itself as some momentum (whose physical dimension is N·s) over some distance (Δs), or as some amount of energy (whose dimension is N·m) over some time (Δt). • Now, energy is an oscillation of mass (I will repeat that a couple of times, and show you the detail of what that means in the last chapter) and, hence, ħ must necessarily express itself both as momentum as well as energy over some time and some distance. Hence, it is what it is: some force over some distance over some time. This reflects the physical dimension of ħ, which is the product of force, distance and time. So let’s assume some force ΔF, some distance Δs, and some time Δt, so we can write ħ as ħ = ΔF·Δs·Δt. • Now let’s pack that into a traveling particle – like a photon, for example – which, as you know (and as I will show in this publication) is, effectively, just some oscillation of mass, or an energy flow. Now let’s think about one cycle of that oscillation. How small can we make it? In spacetime, I mean. • If we decrease Δs and/or Δt, then ΔF must increase, so as to ensure the integrity (or unity) of ħ as the fundamental quantum of action. Note that the increase in the momentum (ΔF·Δt) and the energy (ΔF·Δs) is proportional to the decrease in Δt and Δs. Now, in our search for the Planck-size spacetime box, we will obviously want to decrease Δs and Δt simultaneously. • Because nothing can exceed the speed of light, we may want to use equivalent time and distance units, so the numerical value of the speed of light is equal to 1 and all velocities become relative velocities. If we now assume our particle is traveling at the speed of light – so it must be a photon, or a (theoretical) matter-particle with zero rest mass (which is something different from a photon) – then our Δs and Δt should respect the following condition: Δs/Δt = c = 1. • Now, when Δs = 1.6162×10⁻³⁵ m and Δt = 5.391×10⁻⁴⁴ s, we find that Δs/Δt = c, but ΔF = ħ/(Δs·Δt) = (1.0545718×10⁻³⁴ N·m·s)/[(1.6162×10⁻³⁵ m)·(5.391×10⁻⁴⁴ s)] ≈ 1.21×10⁴⁴ N. That force is monstrously huge. (A quick numerical check of these numbers appears at the end of this post.)
Think of it: because of gravitation, a mass of 1 kg in our hand, here on Earth, will exert a force of 9.8 N. Now note the exponent in that 1.21×10⁴⁴ number.
• If we multiply that monstrous force with Δs – which is extremely tiny – we get the Planck energy: (1.6162×10⁻³⁵ m)·(1.21×10⁴⁴ N) ≈ 1.956×10⁹ joule. Despite the tininess of Δs, we still get a fairly big value for the Planck energy. Just to give you an idea, it's the energy that you'd get out of burning 60 liters of gasoline – or the mileage you'd get out of 16 gallons of fuel! In fact, the equivalent mass of that energy, packed in such a tiny space, makes it a black hole. (A quick numerical check of these numbers is appended at the end of this post.)
• In short, the conclusion is that our particle can't move (or, thinking of it as a wave, that our wave can't wave) because it's caught in the black hole it creates by its own energy: so the energy can't escape and, hence, it can't flow. 🙂
Of course, you will now say that we could imagine half a cycle, or a quarter of that cycle. And you are right: we can surely imagine that, but we get the same thing: to respect the unity of ħ, we'll then have to pack it into half a cycle, or a quarter of a cycle, which just means the energy of the whole cycle is 2·ħ, or 4·ħ. However, our conclusion still stands: we won't be able to pack that half-cycle, or that quarter-cycle, into something smaller than the Planck-size spacetime box, because it would make it a black hole, and so our wave wouldn't go anywhere, and the idea of our wave itself – or the particle – just doesn't make sense anymore. This brings me to the final point I'd like to make here. When Maxwell or Einstein, or the quantum vacuumists – or I 🙂 – say that the speed of light is just a property of the vacuum, then that's correct and not correct at the same time. First, we should note that, if we say that, we might also say that ħ is a property of the vacuum. All physical constants are. Hence, it's a pretty meaningless statement. Still, it's a statement that helps us to understand the essence of reality. Second, and more importantly, we should dissect that statement. The speed of light combines two very different aspects:
1. It's a physical constant, i.e. some fixed number that we will find to be the same regardless of our reference frame. As such, it's as essential as those immovable physical laws that we find to be the same in each and every reference frame.
2. However, its physical dimension is the ratio of the distance and the time unit: m/s. We may choose other time and distance units, but we will still combine them in that ratio. These two units represent the two dimensions in our mind that – as Kant noted – structure our perception of reality: the temporal and spatial dimension.
Hence, we cannot just say that c is 'just a property of the vacuum'. In our definition of c as a velocity, we mix reality – the 'outside world' – with our perception of it. It's unavoidable. Frankly, while we should obviously try – and we should try very hard! – to separate what's 'out there' versus 'how we make sense of it', it is and remains an impossible job because… Well… When everything is said and done, what we observe 'out there' is just that: it's just what we – humans – observe. 🙂 So, when everything is said and done, the essence of reality consists of four things:
1. Nothing
2. Mass, i.e. something, or not nothing
3. Movement (of something), from nowhere to somewhere.
4. Us: our mind. Or God's Mind. Whatever. Mind.
The first two are like yin and yang, or Manichaeism, or whatever dualistic religious system.
As for Movement and Mind… Hmm… In some very weird way, I feel they must be part of one and the same thing as well. 🙂 In fact, we may also think of those four things as:
1. 0 (zero)
2. 1 (one), or as some sine or a cosine, which is anything in-between 0 and 1.
3. Well… I am not sure! I can't really separate point 3 and point 4, because they combine point 1 and point 2.
So we don't have a quadrupality, right? We do have Trinity here, don't we? […] Maybe. I won't comment, because I think I just found Unity here. 🙂
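For the record, here is a quick numerical check of the Planck-box numbers quoted earlier in this post. It uses the standard values of ħ, the Planck length and the Planck time; the variable names are just for illustration:

hbar = 1.0545718e-34   # reduced Planck constant, N·m·s
l_P = 1.6162e-35       # Planck length, m
t_P = 5.391e-44        # Planck time, s

delta_F = hbar / (l_P * t_P)   # force needed to "pack" hbar into a Planck-size box
E_P = delta_F * l_P            # Planck energy

print(f"Delta F ≈ {delta_F:.3e} N")   # about 1.21e44 N
print(f"E_P     ≈ {E_P:.3e} J")       # about 1.96e9 J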
Erwin Schrödinger and the Schrödinger's Cat Thought Experiment
Nobel Prize winning physicist who shaped quantum mechanics
Erwin Rudolf Josef Alexander Schrödinger (born on August 12, 1887 in Vienna, Austria) was a physicist who conducted groundbreaking work in quantum mechanics, a field which studies how energy and matter behave at very small length scales. In 1926, Schrödinger developed an equation that predicted where an electron would be located in an atom. In 1933, he received a Nobel Prize for this work, along with physicist Paul Dirac.
Fast Facts: Erwin Schrödinger
• Full Name: Erwin Rudolf Josef Alexander Schrödinger
• Known For: Physicist who developed the Schrödinger equation, which signified a great stride for quantum mechanics. Also devised the thought experiment known as "Schrödinger's Cat."
• Born: August 12, 1887 in Vienna, Austria
• Died: January 4, 1961 in Vienna, Austria
• Parents: Rudolf and Georgine Schrödinger
• Spouse: Annemarie Bertel
• Child: Ruth Georgie Erica (b. 1934)
• Education: University of Vienna
• Awards: 1933 Nobel Prize in Physics, shared with quantum theorist Paul A.M. Dirac
• Publications: What Is Life? (1944), Nature and the Greeks (1954), and My View of the World (1961)
Schrödinger may be more popularly known for "Schrödinger's Cat," a thought experiment he devised in 1935 to illustrate problems with a common interpretation of quantum mechanics.
Early Years and Education
Schrödinger was the only child of Rudolf Schrödinger – a linoleum and oilcloth factory worker who had inherited the business from his father – and Georgine, the daughter of Rudolf's chemistry professor. Schrödinger's upbringing emphasized cultural appreciation and advancement in both science and art. Schrödinger was educated by a tutor and by his father at home. At the age of 11, he entered the Akademische Gymnasium in Vienna, a school focused on classical education and training in physics and mathematics. There, he enjoyed learning classical languages, foreign poetry, physics, and mathematics, but hated memorizing what he termed "incidental" dates and facts.
Schrödinger continued his studies at the University of Vienna, which he entered in 1906. He earned his PhD in physics in 1910 under the guidance of Friedrich Hasenöhrl, whom Schrödinger considered to be one of his greatest intellectual influences. Hasenöhrl was a student of physicist Ludwig Boltzmann, a renowned scientist known for his work in statistical mechanics. After Schrödinger received his PhD, he worked as an assistant to Franz Exner, another student of Boltzmann's, until being drafted at the beginning of World War I.
Career Beginnings
In 1920, Schrödinger married Annemarie Bertel and moved with her to Jena, Germany to work as the assistant of physicist Max Wien. From there, he became faculty at a number of universities over a short period of time, first becoming a junior professor in Stuttgart, then a full professor at Breslau, before joining the University of Zurich as a professor in 1921. Schrödinger's subsequent six years at Zurich were some of the most important in his professional career. At the University of Zurich, Schrödinger developed a theory that significantly advanced the understanding of quantum physics. He published a series of papers – about one per month – on wave mechanics.
In particular, the first paper, “Quantization as an Eigenvalue Problem," introduced what would become known as the Schrödinger equation, now a central part of quantum mechanics. Schrödinger was awarded the Nobel Prize for this discovery in 1933. Schrödinger’s Equation Schrödinger's equation mathematically described the "wavelike" nature of systems governed by quantum mechanics. With this equation, Schrödinger provided a way to not only study the behaviors of these systems, but also to predict how they behave. Though there was much initial debate about what Schrödinger’s equation meant, scientists eventually interpreted it as the probability of finding an electron somewhere in space. Schrödinger’s Cat Schrödinger formulated this thought experiment in response to the Copenhagen interpretation of quantum mechanics, which states that a particle described by quantum mechanics exists in all possible states at the same time, until it is observed and is forced to choose one state. Here's an example: consider a light that can light up either red or green. When we are not looking at the light, we assume that it is both red and green. However, when we look at it, the light must force itself to be either red or green, and that is the color we see. Schrödinger did not agree with this interpretation. He created a different thought experiment, called Schrödinger's Cat, to illustrate his concerns. In the Schrödinger's Cat experiment, a cat is placed inside a sealed box with a radioactive substance and a poisonous gas. If the radioactive substance decayed, it would release the gas and kill the cat. If not, the cat would be alive. Because we do not know whether the cat is alive or dead, it is considered both alive and dead until someone opens the box and sees for themselves what the state of the cat is. Thus, simply by looking into the box, someone has magically made the cat alive or dead even though that is impossible. Influences on Schrödinger’s Work Schrödinger did not leave much information about the scientists and theories that influenced his own work. However, historians have pieced together some of those influences, which include: • Louis de Broglie, a physicist, introduced the concept of “matter waves." Schrödinger had read de Broglie’s thesis as well as a footnote written by Albert Einstein, which spoke positively about de Broglie’s work. Schrödinger was also asked to discuss de Broglie’s work at a seminar hosted by both the University of Zurich and another university, ETH Zurich. • Boltzmann. Schrödinger considered Boltzmann’s statistical approach to physics his “first love in science,” and much of his scientific education followed in the tradition of Boltzmann. • Schrödinger’s previous work on the quantum theory of gases, which studied gases from the perspective of quantum mechanics. In one of his papers on the quantum theory of gases, “On Einstein’s Gas Theory,” Schrödinger applied de Broglie’s theory on matter waves to help explain the behavior of gases. Later Career and Death In 1933, the same year he won the Nobel Prize, Schrödinger resigned his professorship at the University of Berlin, which he had joined in 1927, in response to the Nazi takeover of Germany and the dismissal of Jewish scientists. He subsequently moved to England, and later to Austria. However, in 1938, Hitler invaded Austria, forcing Schrödinger, now an established anti-Nazi, to flee to Rome. In 1939, Schrödinger moved to Dublin, Ireland, where he remained until his return to Vienna in 1956. 
Schrödinger died of tuberculosis on January 4, 1961 in Vienna, the city where he was born. He was 73 years old.
lördag 25 juli 2015 Frank Wilczek: Ugly Answer to Ugly Question In his new book A Beautiful Question: Finding Nature's Deep Design, Frank Wilczek (Nobel Prize in Physics 2004) starts out stating the questions (or paradoxes) which motivated the development of modern physics: In the quantum world of atoms and light, Nature treats us to a show of strange and seemingly impossible feats. Two of these feats seemed, when discovered, particularly impossible: • Light comes in lumps. This is demonstrated in the photoelectric effect, as we’ll discuss momentarily. It came as a shock to physicists. After Maxwell’s electromagnetic theory was confirmed in Hertz’s experiments (and later many others), physicists had thought they understood what light is. Namely, light is electromagnetic waves. But electromagnetic waves are continuous. • Atoms have parts, but are perfectly rigid. Electrons were first clearly identified in 1897, by J. J. Thomson. The most basic facts about atoms were elucidated over the following fifteen years or so. In particular: atoms consist of tiny nuclei containing almost all of their mass and all of their positive electric charge, surrounded by enough negatively charged electrons to make a neutral whole. Atoms come in different sizes, depending on the chemical element, but they’re generally in the ballpark of $10^{-8}$ centimeters, a unit of length called an angstrom. Atomic nuclei, however, are a hundred thousand times smaller. The paradox: How can such a structure be stable? Why don’t the electrons simply succumb to the attractive force from the nucleus, and dive in. • These paradoxical facts led Einstein and Bohr, respectively, to propose some outrageous, half-right hypotheses that served as footholds on the steep ascent to modern quantum theory.  • After epic struggles, played out over more than a decade of effort and debate, an answer emerged. It has held up to this day, and its roots have grown so deep that it seems unlikely ever to topple. Wilczek then proceeds to prepare us to accept the answers offered by the modern physics of quantum mechanics as the result of epic struggles: • The framework known as quantum theory, or quantum mechanics, was mostly in place by the late 1930s.  • Quantum theory is not a specific hypothesis, but a web of closely intertwined ideas. I do not mean to suggest quantum theory is vague—it is not.  • With rare and usually temporary exceptions, when faced with any concrete physical problem, all competent practitioners of quantum mechanics will agree about what it means to address that problem using quantum theory.  • But few, if any, would be able to say precisely what assumptions they have made to get there. Coming to terms with quantum theory is a process, through which the work will teach you how to do it. We learn that quantum mechanics is not built on specific hypotheses or assumptions, but nevertheless is not vague, and instead rather is a process monitored by competent practitioners. In any case, Wilczek proceeds to give us a glimpse of the basic hypothesis: • In quantum theory’s description of the world, the fundamental objects are ....wave functions. • Any valid physical question about a physical system can be answered by consulting its wave function. • But the relation between question and answer is not straightforward. Both the way that wave functions answer questions and the answers they give have surprising—not to say weird—features. OK, so we are now enlightened by understanding that the answers that come out are weird. 
Wilczek continues:
• I will focus on the specific sorts of wave functions we need to describe the hydrogen atom:
• We are interested, then, in the wave function that describes a single electron bound by electric forces to a tiny, much heavier proton.
• Before discussing the electron's wave function, we'll do well to describe its probability cloud. The probability cloud is closely related to the wave function. The probability cloud is easier to understand than the wave function, and its physical meaning is more obvious, but it is less fundamental. (Those oracular statements will be fleshed out momentarily).
• Quantum mechanics does not give simple equations for probability clouds. Rather, probability clouds are calculated from wave functions.
• The wave function of a single particle, like its probability cloud, assigns an amplitude to all possible positions of the particle. In other words, it assigns a number to every point in space.
• To pose questions, we must perform specific experiments that probe the wave function in different ways.
• You get probabilities, not definite answers.
• You don't get access to the wave function itself, but only a peek at processed versions of it.
• Answering different questions may require processing the wave function in different ways.
• Each of those three points raises big issues.
Wilczek then tackles these issues by posing new questions, or, lacking a question, by retreating to an admirable attitude of humility in a lesson of wisdom:
• The first raises the issue of determinism. Is calculating probabilities really the best we can do?
• The second raises the issue of many worlds. What does the full wavefunction describe, when we're not peeking? Does it represent a gigantic expansion of reality, or is it just a mind tool, no more real than a dream?
• The third raises the issue of complementarity… It is a lesson in humility that quantum theory forces to our attention. To probe is to interact, and to interact is potentially to disturb.
• Complementarity is both a feature of physical reality and a lesson in wisdom.
We see that Wilczek sells the usual broth of strange and seemingly impossible feats, weird features, and outrageous half-right hypotheses, all raising big issues. Wilczek sums up with the following quote from Walt Whitman under the headline COMPLEMENTARITY AS WISDOM:
Do I contradict myself?
Very well, then, I contradict myself,
I am large, I contain multitudes.
But physics is not poetry, and contradictory poetry does not justify contradictory physics. Contradictory mathematical physics cannot be true real physics, not even meaningful poetry. To get big by contradiction is a trade of politics, which is ugly and not beautiful. Nevertheless, Wilczek started his Nobel lecture as follows:
• In theoretical physics, paradoxes are good. That's paradoxical, since a paradox appears to be a contradiction, and contradictions imply serious error. But Nature cannot realize contradictions. When our physical theories lead to paradox we must find a way out. Paradoxes focus our attention, and we think harder.
We understand that to Wilczek/modern physicists, contradictions are good rather than catastrophic, and the more paradox the better, since it makes physicists focus their attention and think harder. Beautiful. For more excuses, see What Is Quantum Theory. Wilczek here retells the story of the Father (or Dictator) of Quantum Mechanics, Niels Bohr: The paradox presented itself in 1925, but what happened to the hope of progress?
Is paradoxical physics the physics of our time? Does light come in lumps? Why are atoms stable? Despite paradoxes, no real progress for 90 years!!??
PS1 Here is the question killing the probability interpretation of the wave function: Since the wave function for the ground state of hydrogen is non-zero even far away from the nucleus, does it mean that there is a non-zero chance of experimentally detecting a hydrogen ground state electron far away from the nucleus it is associated with? Or the other way around, since the wave function is maximal at zero distance from the nucleus, does it mean that one will mostly find the electron hiding inside the nucleus?
PS2 Beauty is an expression of order and deep design, not of disorder and lack of design. An atomistic world ruled by chance can be beautiful only to a professional statistician obsessed by computing mean values.
PS3 Not Even Wrong presents the book as follows: Frank Wilczek's new book, A Beautiful Question, is now out and if you're at all interested in issues about beauty and the deep structure of reality, you should find a copy and spend some time with it. As he explains at the very beginning:
• This book is a long meditation on a single question:
• Does the world embody beautiful ideas?
To me (and I think to Wilczek), the answer to the question has always been an unambiguous "Yes". The more difficult question is "what does such a claim about beauty and the world mean?" and that's the central concern of the book.
PS4 Wilczek expresses a tendency shared by many modern physicists of pretending to know all of chemistry "in principle", simply by writing down a Schrödinger equation on a piece of paper, however without actually being able to predict anything specific because solutions of the equation cannot be computed:
• Wave functions that fully describe the physical state of several electrons occupy spaces of very high dimension. The wave function for two electrons lives in a six-dimensional space, the wave function for three electrons lives in a nine-dimensional space, and so forth. The equations for these wave functions rapidly become quite challenging to solve, even approximately, and even using the most powerful computers. This is why chemistry remains a thriving experimental enterprise, even though in principle we know the equations that govern it, and that should enable us to calculate the results of experiments in chemistry without having to perform them.
In this illusion game, the uncomputability of Schrödinger's many-dimensional equation relieves the physicist from the real task of explaining the actual physics of chemistry, while the physicist can still safely take the role of being in charge of the principal theoretical chemistry underlying a "thriving experimental enterprise", which "in principle" is superfluous. Beautiful?
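As a purely technical footnote to PS1, the textbook hydrogen 1s wave function and its radial probability density can be written down in a couple of lines. This is just the standard formula, given here for reference with illustrative numbers; it is not part of the post's argument:

import numpy as np

# Textbook hydrogen ground state (1s): psi(r) = exp(-r/a0)/sqrt(pi*a0^3).
# The radial probability density P(r) = 4*pi*r^2*|psi|^2 vanishes at r = 0
# and peaks at the Bohr radius a0, which is the quantity PS1 asks about.
a0 = 5.29177e-11                                 # Bohr radius in meters
r = np.linspace(1e-13, 5 * a0, 2000)             # radial grid
psi = np.exp(-r / a0) / np.sqrt(np.pi * a0**3)   # normalized 1s wave function
P = 4 * np.pi * r**2 * psi**2                    # radial probability density
print("P(r) peaks at r ≈", r[np.argmax(P)], "m, compared to a0 =", a0, "m")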
Tuesday, April 29, 2014 FQXi essay contest 2014: How Should Humanity Steer the Future? This year’s essay contest of the Foundational Questions Institute “How Should Humanity Steer the Future?” broaches a question that is fundamental indeed, fundamental not for quantum gravity but for the future of mankind. I suspect the topic selection has been influenced by the contest being “presented in partnership with” (which I translate into “sponsored by”) not only the John Templeton foundation and Scientific American, but also a philanthropic organization called the “Gruber Foundation” (which I had never heard of before) and Jaan Tallinn. Tallinn is no unknown, he is one of the developers of Skype and when I type his name into Google the auto completion is “net worth”. I met him at the 2011 FQXi conference where he gave a little speech about his worries that artificial intelligence will turn into a threat to humans. I wrote back then a blogpost explaining that I don’t share this particular worry. However, I recall Tallinn’s speech vividly, not because it was so well delivered (in fact, he seemed to be reading off his phone), but because he was so very sincere about it. Most people’s standard reaction in the face of threats to the future of mankind is cynicism or sarcasm, essentially a vocal shoulder shrug, whereas Tallinn seems to have spent quite some time thinking about this. And well, somebody really should be thinking about this... And so I appreciate the topic of this year’s essay contest has a social dimension, not only because it gets tiresome to always circle the same question of where the next breakthrough in theoretical physics will be and the always same answers (let me guess, it’s what you work on), but also because it gives me an outlet for my interests besides quantum gravity. I have always been fascinated by the complex dynamics of systems that are driven by the individual actions of many humans because this reaches out to the larger question of where life on planet Earth is going and why and what all of this is good for. If somebody asks you how humanity should steer the future, a modest reply isn’t really an option, so I have submitted my five step plan to save the world. Well, at least you can’t blame me for not having a vision. The executive summary is that we will only be able to steer at all if we have a way to collectively react to large scale behavior and long-term trends of global systems, and this can only happen if we are able to make informed decisions intuitively, quickly and without much thinking. A steering wheel like this might not be sufficient to avoid running into obstacles, but it is definitely necessary, so that is what we have to start with. The trends that we need to react to are those of global and multi-leveled systems, including economic, social, ecological and politic systems, as well as various infrastructure networks. Presently, we basically fail to act when problems appear. While the problems arise from the interaction of many people and their environment, it is still the individual that has to make decisions. But the individual presently cannot tell how their own action works towards their goals on long distance or time scales. To enable them to make good decisions, the information about the whole system has to be routed back to the individual. But that feedback loop doesn’t presently exist. In principle it would be possible today, but the process is presently far too difficult. 
The vast majority of people do not have the time and energy to collect the necessary information and make decisions based on it. It doesn’t help to write essays about what we ‘should’ do. People will only act if it’s really simple to do and of immediate relevance for them. Thus my suggestion is to create individual ‘priority maps’ that chart personal values and provide people with intuitive feedback for how well a decision matches with their priorities. A simple example. Suppose you train some software to tell what kind of images you find aesthetically pleasing and what you dislike. You now have various parameters, say colors, shapes, symmetries, composition and so on. You then fill out a questionnaire about preferences for political values. Now rather than long explanations which candidate says what, you get an image that represents how good the match is by converting the match in political values to parameters in an image. You pick the image you like best and are done. The point is that you are being spared having to look into the information yourself, you only get to see the summary that encodes whether voting for that person would work towards what you regard important. Oh, I hear you say, but that vastly oversimplifies matters. Indeed, that is exactly the point. Oversimplification is the only way we’ll manage to overcome our present inability to act. If mankind is to be successful in the long run, we have to evolve to anticipate and react to interrelated global trends in systems of billions of people. Natural selection might do this, but it would take too much time. The priority maps are a technological shortcut to emulate an advanced species that is ‘fit’ in the Darwinian sense, fit to adapt to its changing environment. I envision this to become a brain extension one day. I had a runner up to this essay contribution, which was an argument that research in quantum gravity will be relevant for quantum computing, interstellar travel and technological progress in general. But it would have been a quite impractical speculation (not to mention a self-advertisement of my work on superdeterminism, superluminal information exchange and antigravity). In my mind of course it’s all related – the laws of physics are what eventually drive the evolution of consciousness and also of our species. But I decided to stick with a proposal that I think is indeed realizable today and that would go a long way to enable humanity to steer the future. I encourage you to check out the essays which cover a large variety of ideas. Some of the contributions seem to be very bent towards the aim of making a philosophical case for some understanding of natural law rather than the other, or to find parallels to unsolved problems in physics, but this seems quite a stretch to me. However, I am sure you will find something of interest there. At the very least it will give you some new things to worry about... Saturday, April 26, 2014 Academia isn’t what I expected The Ivory Tower from The Neverending Story. [Source] Talking to the students at the Sussex school let me realize how straight-forward it is today to get a realistic impression of what research in this field looks like. Blogs are a good source of information about scientist’s daily life and duties, and it has also become so much easier to find and make contact with people in the field, either using social networks or joining dedicated mentoring programs. Before I myself got an office at a physics institute I only had a vague idea of what people did there. 
Absent the lauded ‘role models’, my mental image of academic research formed mostly by reading biographies of the heroes of General Relativity and Quantum Mechanics, plus a stack of popular science books. The latter didn't contain much about the average researcher's daily tasks, and to the extent that the former captured university life, it was life in the first half of the 20th century. I expected some things to have changed during 50 years, notably in technological advances and the ease of travel, publishing, and communication. I finished high school in '95, so the biggest changes were yet to come. I also knew that disciplines had drifted apart, that philosophy and physics were mostly going separate ways now, and that the days in which a physicist could also be a chemist could also be an artist were long gone. It was clear that academia had generally grown, become more organized and institutionalized, and more closely linked to industrial research and applications. I had heard that applying for money was a big part of the game. Those were the days. But my expectations were wrong in many other ways. 20 years, 9 moves and 6 jobs later, here is how what I believed theoretical physics would be like contrasts with reality:
1. Specialization
While I knew that interdisciplinarity had given in to specialization, I thought that theoretical physicists would be in close connection to the experimentalists, that they would frequently discuss experiments that might be interesting to develop, or data that required explanation. I also expected theoretical physicists to work closely together with mathematicians, because in the history of physics the mathematics has often been developed alongside the physics. In both cases the reality is an almost complete disconnect. The exchange takes place mostly through published literature or especially dedicated meetings or initiatives.
2. Disconnect
I expected a much larger general intellectual curiosity and social responsibility in academia. Instead I found that most researchers are very focused on their own work and nothing but their own work. Not only do institutes rarely if ever have organized public engagement or events that are not closely related to the local research, it's also that most individual researchers are not interested. In most cases, they plainly don't have the time to think about anything other than their next paper. That disconnect is the root of complaints like Nicholas Kristof's recent Op-Ed, where he calls upon academics: "[P]rofessors, don't cloister yourselves like medieval monks — we need you!"
3. The Machinery
My biggest reality shock was how much of research has turned into manufacturing, into the production of PhDs and papers, papers that are necessary for the next grant, which is necessary to pay the next students, who will write the next papers, iterate. This unromantic hamster wheel still shocks me. It has its good side too though: The standardization of research procedures limits the risks of the individual. If you know how to play along, and are willing to, you have good chances that you can stay. The disadvantage is though that this can force students and postdocs to work on topics they are not actually interested in, and that turns off many bright and creative people.
4. Nonlocality
I did not anticipate just how frequently travel and moves are necessary these days. If I had known about this in advance, I think I would have left academia after my diploma. But so I just slipped into it.
Luckily I had a very patient boyfriend who turned husband who turned father of my children. 5. The 2nd family The specialization, the single-mindedness, the pressure and, most of all, the loss of friends due to frequent moves create close ties among those who are together in the same boat. It’s a mutual understanding, the nod of been-there-done-that, the sympathy with your own problems that make your colleagues and officemates, driftwood as they often are, a second family. In all these years I have felt welcome at every single institute that I have visited. The books hadn’t told me about this. Experience, as they say, is what you get when you were expecting something else. By and large, I enjoy my job. Most of the time anyway. My lectures at the Sussex school went well, except that the combination of a recent cold and several hours of speaking stressed my voice box to the point of total failure. Yesterday I could only whisper. Today I get out some freak sounds below C2 but that’s pretty much it. It would be funny if it wasn’t so painful. You can find the slides of my lectures here and the guide to further reading here. I hope they live up to your expectations :) Monday, April 21, 2014 Away note I will be traveling the rest of the week to give a lecture at the Sussex graduate school "From Classical to Quantum GR", so not much will happen on this blog. For the school, we were asked for discussion topics related to our lectures, below are my suggestions. Leave your thoughts in the comments, additional suggestions for topics are also welcome. • Is it socially responsible to spend money on quantum gravity research? Don't we have better things to do? How could mankind possibly benefit from quantum gravity? • Can we make any progress on the theory of quantum gravity without connection to experiment? Should we think at all about theories of quantum gravity that do not produce testable predictions? How much time do we grant researchers to come up with predictions? • What is your favorite approach towards quantum gravity? Why? Should you have a favorite approach at all? • Is our problem maybe not with the quantization of gravity but with the foundations of quantum mechanics and the process of quantization? • How plausible is it that gravity remains classical while all the other forces are quantized? Could gravity be neither classical nor quantized? • How convinced are you that the Planck length is at 10-33cm? Do you think it is plausible that it is lower? Should we continue looking for it? • What do you think is the most promising area to look for quantum gravitational effects and why? • Do you think that gravity can be successfully quantized without paying attention to unification? Lara and Gloria say hello and wish you a happy Easter :o) Thursday, April 17, 2014 The Problem of Now [Image Source] Einstein’s greatest blunder wasn’t the cosmological constant, and neither was it his conviction that god doesn’t throw dice. No, his greatest blunder was to speak to a philosopher named Carnap about the Now, with a capital. “The problem of Now”, Carnap wrote in 1963, “worried Einstein seriously. He explained that the experience of the Now means something special for men, something different from the past and the future, but that this important difference does not and cannot occur within physics” I call it Einstein’s greatest blunder because, unlike the cosmological constant and indeterminism, philosophers, and some physicists too, are still confused about this alleged “Problem of Now”. 
The problem is often presented like this. Most of us experience a present moment, which is a special moment in time, unlike the past and unlike the future. If you write down the equations governing the motion of some particle through space, then this particle is described, mathematically, by a function. In the simplest case this is a curve in space-time, meaning the function is a map from the real numbers to a four-dimensional manifold. The particle changes its location with time. But regardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”? You could argue rightfully that as long as there’s just one particle moving on a straight line, nothing is happening, and so it’s not very surprising that no notion of change appears in the mathematical description. If the particle would scatter on some other particle, or take a sudden turn, then these instances can be identified as events in space-time. Alas, that still doesn’t tell you whether they happen to the particle “now” or at some other time. Now what? The cause for this problem is often assigned to the timeless-ness of mathematics itself. Mathematics deals in its core with truth values and the very point of using math to describe nature is that these truths do not change. Lee Smolin has written a whole book about the problem with the timeless math, you can read my review here. It may or may not be that mathematics is able to describe all of our reality, but to solve the problem of now, excuse the heresy, you do not need to abandon a mathematical description of physical law. All you have to do is realize that the human experience of now is subjective. It can perfectly well be described by math, it’s just that humans are not elementary particles. The decisive ability that allows us to experience the present moment as being unlike other moments is that we have a memory. We have a memory of events in the past, an imperfect one, and we do not have memory of events in the future. Memory is not in and by itself tied to consciousness, it is tied to the increase of entropy, or the arrow of time if you wish. Many materials show memory; every system with a path dependence like eg hysteresis does. If you get a perm the molecule chains in your hair remember the bonds, not your brain. Memory has nothing to do with consciousness in particular which is good because it makes it much easier to find the flaw in the argument leading to the problem of now. If we want to describe systems with memory we need at the very least two time parameters: t to parameterize the location of the particle and τ to parameterize the strength of memory of other times depending on its present location. This means there is a function f(t,τ) that encodes how strong is the memory of time τ at moment t. You need, in other words, at the very least a two-point function, a plain particle trajectory will not do. That we experience a “now” means that the strength of memory peaks when both time parameters are identical, ie t-τ = 0. That we do not have any memory of the future means that the function vanishes when τ > t. For the past it must decay somehow, but the details don’t matter. This construction is already sufficient to explain why we have the subjective experience of the present moment being special. And it wasn’t that difficult, was it? 
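To make the construction concrete, here is a minimal numerical sketch of such a two-time memory function. The exponential decay into the past is an arbitrary illustrative choice; the only features that matter for the argument are the peak at τ = t and the vanishing for τ > t:

import numpy as np

# Toy memory function f(t, tau): how strongly the moment t "remembers" the
# moment tau. Exponential decay into the past is an arbitrary choice here;
# the essential features are the peak at tau = t and zero memory of the future.
def memory(t, tau, decay=1.0):
    dt = t - tau
    return np.where(dt >= 0, np.exp(-decay * dt), 0.0)

t_now = 5.0
taus = np.linspace(0.0, 10.0, 11)
print(memory(t_now, taus))  # maximal at tau = 5, zero for tau > 5, decaying for tau < 5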
The origin of the problem is not in the mathematics, but in the failure to distinguish subjective experience of physical existence from objective truth. Einstein spoke about "the experience of the Now [that] means something special for men". Yes, it means something special for men. This does not mean however, and does not necessitate, that there is a present moment which is objectively special in the mathematical description. In the above construction all moments are special in the same way, but in every moment that very moment is perceived as special. This is perfectly compatible with both our experience and the block universe of general relativity. So Einstein should not have worried. I have a more detailed explanation of this argument – including a cartoon! – in a post from 2008. I was reminded of this now because Mermin had a comment in the recent issue of Nature magazine about the problem of now. In his piece, Mermin elaborates on qbism, a subjective interpretation of quantum mechanics. I was destined to dislike this just because it's a waste of time and paper to write about non-existent problems. Amazingly however, Mermin uses the subjectiveness of qbism to arrive at the right conclusion, namely that the problem of the now does not exist because our experiences are by their very nature subjective. However, he fails to point out that you don't need to buy into fancy interpretations of quantum mechanics for this. All you have to do is watch your hair recall sulphur bonds. The summary, please forgive me, is that Einstein was wrong and Mermin is right, but for the wrong reasons. It is possible to describe the human experience of the present moment with the "timeless" mathematics that we presently use for physical laws, it isn't even difficult and you don't have to give up the standard interpretation of quantum mechanics for this. There is no problem of Now and there is no problem with Tegmark's mathematical universe either. And Lee Smolin, well, he is neither wrong nor right, he just has a shaky motivation for his cosmological philosophy. It is correct, as he argues, that mathematics doesn't objectively describe a present moment. However, it's a non sequitur that the current approach to physics has reached its limits, because this timeless math doesn't constitute a conflict with our experience or observation. Most people get a general feeling of uneasiness when they first realize that the block universe implies that all the past and all the future are as real as the present moment, that even though we experience the present moment as special, it is only subjectively so. But if you can combat your uneasiness for long enough, you might come to see the beauty in eternal mathematical truths that transcend the passage of time. We always have been, and always will be, children of the universe.
The result is an introduction to quantum mechanics like I haven’t seen before. The ‘Theoretical Minimum’ focuses, as its name promises, on the absolute minimum and aims at being accessible with no previous knowledge other than the first volume. The necessary math is provided along the way in separate interludes that can be skipped. The book begins with explaining state vectors and operators, the bra-ket notation, then moves on to measurements, entanglement and time-evolution. It uses the concrete example of spin-states and works its way up to Bell’s theorem, which however isn’t explicitly derived, just captured verbally. However, everybody who has made it through Susskind’s book should be able to then understand Bell’s theorem. It is only in the last chapters that the general wave-function for particles and the Schrödinger equation make an appearance. The uncertainty principle is derived and path integrals are very briefly introduced. The book ends with a discussion of the harmonic oscillator, clearly building up towards quantum field theory there. I find the approach to quantum mechanics in this book valuable for several reasons. First, it gives a prominent role to entanglement and density matrices, pure and mixed states, Alice and Bob and traces over subspaces. The book thus provides you with the ‘minimal’ equipment you need to understand what all the fuzz with quantum optics, quantum computing, and black hole evaporation is about. Second, it doesn’t dismiss philosophical questions about the interpretation of quantum mechanics but also doesn’t give these very prominent space. They are acknowledged, but then it gets back to the physics. Third, the book is very careful in pointing out common misunderstandings or alternative notations, thus preventing much potential confusion. The decision to go from classical mechanics straight to quantum mechanics has its disadvantages though. Normally the student encounters Electrodynamics and Special Relativity in between, but if you want to read Susskind’s lectures as self-contained introductions, the author now doesn’t have much to work with. This time-ordering problem means that every once in a while a reference to Electrodynamics or Special Relativity is bound to confuse the reader who really doesn’t know anything besides this lecture series. It also must be said that the book, due to its emphasis on minimalism, will strike some readers as entirely disconnected from history and experiment. Not even the double-slit, the ultraviolet catastrophe, the hydrogen atom or the photoelectric effect made it into the book. This might not be for everybody. Again however, if you’ve made it through the book you are then in a good position to read up on these topics elsewhere. My only real complaint is that Ehrenfest’s name doesn’t appear together with his theorem. The book isn’t written like your typical textbook. It has fairly long passages that offer a lot of explanation around the equations, and the chapters are introduced with brief dialogues between fictitious characters. I don’t find these dialogues particularly witty, but at least the humor isn’t as nauseating as that in Goldberg’s book. All together, the “Theoretical Minimum” achieves what it promises. If you want to make the step from popular science literature to textbooks and the general scientific literature, then this book series is a must-read. 
If you can’t make your way through abstract mathematical discussions and prefer a close connection to example and history, you might however find it hard to get through this book. I am certainly looking forward to the next volume. (Disclaimer: Free review copy.) Monday, April 07, 2014 Will the social sciences ever become hard sciences? The term “hard science” as opposed to “soft science” has no clear definition. But roughly speaking, the less the predictive power and the smaller the statistical significance, the softer the science. Physics, without doubt, is the hard core of the sciences, followed by the other natural sciences and the life sciences. The higher the complexity of the systems a research area is dealing with, the softer it tends to be. The social sciences are at the soft end of the spectrum. To me the very purpose of research is making science increasingly harder. If you don’t want to improve on predictive power, what’s the point of science to begin with? The social sciences are soft mainly because data that quantifies the behavior of social, political, and economic systems is hard to come by: it’s huge amounts, difficult to obtain and even more difficult to handle. Historically, these research areas therefore worked with narratives relating plausible causal relations. Needless to say, as computing power skyrockets, increasingly larger data sets can be handled. So the social sciences are finally on the track to become useful. Or so you’d think if you’re a physicist. But interestingly, there is a large opposition to this trend of hardening the social sciences, and this opposition is particularly pronounced towards physicists who take their knowledge to work on data about social systems. You can see this opposition in the comment section to every popular science article on the topic. “Social engineering!” they will yell accusingly. It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted. This opposition is an echo of the desperate belief in free will that ignores all evidence to the contrary. The desperation in both cases is based on unfounded fears, but unfortunately it results in a forward defense. And so the world is full with people who argue that they must have free will because they believe they have free will, the ultimate confirmation bias. And when it comes to social systems they’ll snort at the physicists “People are not elementary particles”. That worries me, worries me more than their clinging to the belief in free will, because the only way we can solve the problems that mankind faces today – the global problems in highly connected and multi-layered political, social, economic and ecological networks – is to better understand and learn how to improve the systems that govern our lives. That people are not elementary particles is not a particularly deep insight, but it collects several valid points of criticism: 1. People are too difficult. You can’t predict them. Humans are made of a many elementary particles and even though you don’t have to know the exact motion of every single one of these particles, a person still has an awful lot of degrees of freedom and needs to be described by a lot of parameters. 
That’s a complicated way of saying people can do more things than electrons, and it isn’t always clear exactly why they do what they do. That is correct of course, but this objection fails to take into account that not all possible courses of action are always relevant. If it was true that people have too many possible ways to act to gather any useful knowledge about their behavior our world would be entirely dysfunctional. Our societies work only because people are to a large degree predictable. If you go shopping you expect certain behaviors of other people. You expect them to be dressed, you expect them to walk forwards, you expect them to read labels and put things into a cart. There, I’ve made a prediction about human behavior! Yawn, you say, I could have told you that. Sure you could, because making predictions about other people’s behavior is pretty much what we do all day. Modeling social systems is just a scientific version of this. This objection that people are just too complicated is also weak because, as a matter of fact, humans can and have been modeled with quite simple systems. This is particularly effective in situations when intuitive reaction trumps conscious deliberation. Existing examples are traffic flows or the density of crowds when they have to pass through narrow passages. So, yes, people are difficult and they can do strange things, more things than any model can presently capture. But modeling a system is always an oversimplification. The only way to find out whether that simplification works is to actually test it with data. 2. People have free will. You cannot predict what they will do. To begin with it is highly questionable that people have free will. But leaving this aside for a moment, this objection confuses the predictability of individual behavior with the statistical trend of large numbers of people. Maybe you don’t feel like going to work tomorrow, but most people will go. Maybe you like to take walks in the pouring rain, but most people don’t. The existence of free will is in no conflict with discovering correlations between certain types of behavior or preferences in groups. It’s the same difference that doesn’t allow you to tell when your children will speak the first word or make the first step, but that almost certainly by the age of three they’ll have mastered it. 3. People can understand the models and this knowledge makes predictions useless. This objection always stuns me. If that was true, why then isn’t obesity cured by telling people it will remain a problem? Why are the highways still clogged at 5pm if I predict they will be clogged? Why will people drink more beer if it’s free even though they know it’s free to make them drink more? Because the fact that a prediction exists in most cases doesn’t constitute any good reason to change behavior. I can predict that you will almost certainly still be alive when you finish reading this blogpost because I know this prediction is exceedingly unlikely to make you want to prove it wrong. Yes, there are cases when people’s knowledge of a prediction changes their behavior – self-fulfilling prophecies are the best-known examples of this. But this is the exception rather than the rule. In an earlier blogpost, I referred to this as societal fixed points. These are configurations in which the backreaction of the model into the system does not change the prediction. The simplest example is a model whose predictions few people know or care about. 4. Effects don’t scale and don’t transfer. 
This objection is the most subtle one. It posits that the social sciences aren’t really sciences until you can do and reproduce the outcome of “experiments”, which may be designed or naturally occurring. The typical social experiment that lends itself to analysis will be in relatively small and well-controlled communities (say, testing the implementation of a new policy). But then you have to extrapolate from this how the results will be in larger and potentially very different communities. Increasing the size of the system might bring in entirely new effects that you didn’t even know of (doesn’t scale), and there are a lot of cultural variables that your experimental outcome might have depended on that you didn’t know of and thus cannot adjust for (doesn’t transfer). As a consequence, repeating the experiment elsewhere will not reproduce the outcome. Indeed, this is likely to happen and I think it is the major challenge in this type of research. For complex relations it will take a long time to identify the relevant environmental parameters and to learn how to account for their variation. The more parameters there are and the more relevant they are, the less the predictive value of a model will be. If there are too many parameters that have to be accounted for it basically means doing experiments is the only thing we can ever do. It seems plausible to me, even likely, that there are types of social behavior that fall into this category, and that will leave us with questions that we just cannot answer. However, whether or not a certain trend can or cannot be modeled we will only know by trying. We know that there are cases where it can be done. Geoffry West’s city theory I find a beautiful example where quite simple laws can be found in the midst of all these cultural and contextual differences. In summary. The social sciences will never be as “hard” as the natural sciences because there is much more variation among people than among particles and among cities than among molecules. But the social sciences have become harder already and there is no reason why this trend shouldn’t continue. I certainly hope it will continue because we need this knowledge to collectively solve the problems we have collectively created. Tuesday, April 01, 2014 Do we live in a hologram? Really?? Physicists fly high on the idea that our three-dimensional world is actually two-dimensional, that we live in a hologram, and that we’re all projections on the boundary of space. Or something like this you’ve probably read somewhere. It’s been all over the pop science news ever since string theorists sang the Maldacena. Two weeks ago Scientific American produced this “Instant Egghead” video which is a condensed mashup of all the articles I’ve endured on the topic: The second most confusing thing about this video is the hook “Many physicist now believe that reality is not, in fact, 3-dimensional.” To begin with, physicists haven’t believed this since Minkowski doomed space and time to “fade away into mere shadows”. Moyer in his video apparently refers only to space when he says “reality.” That’s forgiveable. I am more disturbed by the word “reality” that always creeps up in this context. Last year I was at a workshop that mixed physicists with philosophers. Inevitably, upon mentioning the gauge-gravity duality, some philosopher would ask, well, how many dimensions then do we really live in? Really? I have some explanations for you about what this really means. Q: Do we really live in a hologram? A: What is “real” anyway? 
Q: Having a bad day, yes?
A: Yes. How am I supposed to answer a question when I don't know what it means?
Q: Let me be more precise then. Do we live in a hologram as really as, say, we live on planet Earth?
A: Thank you, much better. The holographic principle is a conjecture. It has zero experimental evidence. String theorists believe in it because their theory supports a specific version of holography, and in some interpretations black hole thermodynamics hints at it too. Be that as it may, we don't know whether it is the correct description of nature.
Q: So if the holographic principle was the correct description of nature, would we live in a hologram as really as we live on planet Earth?
A: The holographic principle is a mathematical statement about the theories that describe nature. There's a several thousand years long debate about whether or not math is as real as that apple tree in your back yard. This isn't a question about holography in particular, you could also ask that question in general relativity: Do we really live in a metric manifold of dimension four and Lorentzian signature?
Q: Well, do we?
A: On most days I think of the math of our theories as machinery that allows us to describe nature but is not itself nature. On the remaining days I'm not sure what reality is and have a lot of sympathy for Platonism. Make your pick.
Q: So if the holographic principle was true, would we live in a hologram as really as we previously thought we live in the space-time of Einstein's theory of General Relativity?
A: A hologram is an image on a 2-dimensional surface that allows one to reconstruct a 3-dimensional image. One shouldn't take the nomenclature "holographic principle" too seriously. To begin with, actual holograms are never 2-dimensional in the mathematical sense; they have a finite width. After all they're made of atoms and stuff. They also do not perfectly recreate the 3-dimensional image because they have a resolution limit which comes from the wavelength of the light used to take (and reconstruct) the image. A hologram is basically a Fourier transformation. If that doesn't tell you anything, suffice it to say this isn't the same mathematics as that behind the holographic principle.
Q: I keep hearing that the holographic principle says the information of a volume can be encoded on the boundary. What's the big deal with that? If I get a parcel with a customs declaration, information about the volume is also encoded on the boundary.
A: That statement about the encoding of information is sloppy wording. You have to take into account the resolution that you want to achieve. You are right of course in that there's no problem in writing down the information about some volume and printing it on some surface (or a string for that matter). The point is that the larger the volume the smaller you'll have to print. Here's an example. Take a square made out of N² smaller squares and think of each of them as one bit. They're either black or white. There are 2^(N²) different patterns of black and white. In analogy, the square is a box full of matter in our universe and the colors are information about the particles inside. Now you want to encode the information about the pattern of that square on the boundary using pieces of the same length as the sidelength of the smaller squares. See image below for N=3. On the left is the division of the square and the boundary, on the right is one way these could encode information.
There are 4N of these boundary pieces and 2^(4N) different patterns for them. If N is larger than 4, there are more ways the square can be colored than you have different patterns for the boundary. This means you cannot uniquely encode the information about the volume on the boundary. The holographic principle says that this isn’t so. It says yes, you can always encode the volume on the boundary. Now this means, basically, that some of the patterns for the squares can’t happen.
Q: That’s pretty disturbing. Does this mean I can’t pack a parcel in as many ways as I want to?
A: In principle, yes. In practice the things we deal with, even the smallest ones we can presently handle in laboratories, are still far above the resolution limit. They are very large chunks compared to the little squares I have drawn above. There is thus no problem encoding all that we can do to them on the boundary.
Q: What then is the typical size of these pieces?
A: They’re thought to be at the Planck scale, that’s about 10⁻³³ cm. You should not however take the example with the box too seriously. That is just an illustration to explain the scaling of the number of different configurations with the system size. The theory on the surface looks entirely different than the theory in the volume.
Q: Can you reach this resolution limit with an actual hologram?
A: No you can’t. If you’d use photons with a sufficiently high energy, you’d just blast away the sample of whatever image you wanted to take. However, if you loosely interpret the result of such a high energy blast as a hologram, albeit one that’s very difficult to reconstruct, you would eventually notice these limitations and be able to test the underlying theory.
Q: Let me come back to my question then, do we live in the volume or on the boundary?
A: Well, the holographic principle is quite a vague idea. It has a concrete realization in the gauge-gravity correspondence that was discovered in string theory. In this case one knows very well how the volume is related to the boundary and has theories that describe each. These two descriptions are identical. They are said to be “dual” and both equally “real” if you wish. They are just different ways of describing the same thing. In fact, depending on what system you describe, we are living on the boundary of a higher-dimensional space rather than in a volume with a lower dimensional surface.
Q: If they’re the same, why then do we think we live in 3 dimensions and not in 2? Or 4?
A: Depends on what you mean by dimension. One way to measure the dimensionality is, roughly speaking, to count the number of ways a particle can get lost if it moves randomly away from a point. The result then depends on what particle you use for the measurement. The particles we deal with will move in 3 dimensions, at least on the distance scales that we typically measure. That’s why we think, feel, and move like we live in 3 dimensions, and nothing wrong with that. The type of particles (or fields) you would have in the dual theories do not correspond to the ones we are used to. And if you ask a string theorist, we live in 11 dimensions one way or the other.
Q: I can see then why it is confusing to vaguely ask what dimension “reality” has. But what is the most confusing thing about Moyer’s video?
A: The reflection on his glasses.
Q: Still having a bad day?
A: It’s this time of the month.
Q: Okay, then let me summarize what I think I learned here.
The holographic principle is an unproved conjecture supported by string theory and black hole physics. It has a concrete theoretical formalization in the gauge-gravity correspondence. There, it identifies a theory in a volume with a theory on the boundary of that volume in a mathematically rigorous way. These theories are both equally real. How “real” that is depends on how real you believe math to be to begin with. It is only surprising that information can always be encoded on the boundary of a volume if you request to maintain the resolution, but then it is quite a mind-boggling idea indeed. If one defines the number of dimensions in a suitable way that matches our intuition, we live in 3 spatial dimensions as we always thought we do, though experimental tests in extreme regimes may one day reveal that fundamentally our theories can be rewritten to spaces with different numbers of dimensions. Did I get that right?
A: You’re so awesomely attentive.
Q: Any plans on getting a dog?
A: No, I have interesting conversations with my plants.
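As a footnote to the box-counting example in the dialogue above, here is a minimal numerical sanity check of the counting argument. N is the number of small squares along one side of the toy box; everything else follows directly from the 2^(N²)-versus-2^(4N) comparison stated in the text.

```python
# Compare the number of interior configurations, 2**(N*N), with the number
# of boundary configurations, 2**(4*N), for the toy box-counting argument.
for N in range(1, 9):
    interior, boundary = 2 ** (N * N), 2 ** (4 * N)
    verdict = "boundary suffices" if boundary >= interior else "boundary too small"
    print(N, interior, boundary, verdict)
```

For N up to 4 the boundary has at least as many patterns as the interior; from N = 5 onwards naive counting fails, which is exactly where the holographic bound becomes a non-trivial statement.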
The Solutions to Generalized Helmholtz Equations
San José State University, Thayer Watkins, Silicon Valley & Tornado Alley

The Helmholtz Equation

The standard form of the Helmholtz equation is ∇²φ + k²φ = 0, where k is a real-valued constant. What is investigated here are characteristics of the solution to an equation like the Helmholtz equation but where the coefficient labeled k above is not a constant; i.e., ∇²φ + f²(X)φ = 0, where X is the vector of the coordinates.

The One Dimensional Case

The relevance of this problem is where φ² is a probability density and the item of interest is how the spatial average of φ² is related to the value of f(x). The solution for φ² oscillates very rapidly between a minimum of zero and a maximum. Thus the spatial average of φ² is roughly one half of the maximum of φ². This will be shown below to be inversely related to the magnitude of f(x).

For this case the equation is (d²φ/dx²) = (d/dx)(dφ/dx) = −f²(x)φ. If (dφ/dx) is positive and φ is positive then an increase in x results in a decrease in (dφ/dx). The slope (dφ/dx) continues to decrease as x increases until it reaches 0 and thereafter φ also decreases. The decreases in (dφ/dx) and φ continue until φ reaches zero and then becomes negative. Thereafter (dφ/dx) increases with increasing x. The larger the function f²(x) is, the more rapidly (dφ/dx) goes to zero and the smaller are the magnitudes of the maxima and minima of φ. Also, the larger f²(x) is, the smaller is the interval between the values of x at which φ(x) is equal to zero. This is illustrated below.

Suppose f(x) is roughly constant over some interval Δx. Let this constant value be denoted f(x). If both sides of the above equation are multiplied by (dφ/dx) the result is (dφ/dx)(d²φ/dx²) = −f²(x)φ(dφ/dx), which is equivalent to ½(d/dx)((dφ/dx)²) = −½f²(x)(d(φ²)/dx). The factor of ½ can be eliminated and the result integrated from a value of x such that φ is equal to zero and (dφ/dx) is a maximum to a value of x where (dφ/dx) is equal to zero and φ is a maximum. This means that 0 − (dφ/dx)max² = −f²(x)[φmax² − 0], which reduces to (dφ/dx)max² = f²(x)φmax², and hence (dφ/dx)max = f(x)φmax, and further that φmax = (dφ/dx)max/f(x). This is a rigorous relationship. The interval Δx over which (dφ/dx) goes from a maximum to zero is the same interval for which φ goes from zero to a maximum.

If f(x) is constant over an interval the solution to the equation is a sinusoidal function. The wavelength L is such that f(x)L = 2π and hence Δx = L/4 = π/(2f(x)). The integral of φ² over the interval Δx is approximately one half of the maximum of φ² times Δx; i.e., ∫φ²dx ≅ ½[(dφ/dx)max/f(x)][π/(2f(x))], which can be expressed as ∫φ²dx ≅ [π(dφ/dx)max/4]/f²(x). When x corresponds to time, (dφ/dx)max corresponds to the velocity of a particle at zero potential energy and hence is constant if the total energy of the system is constant. Any constant factor, such as the one in the above equation, is irrelevant for probability densities because it is also a factor of the sum of the probabilities and thus cancels out when the term for one interval is divided by the sum for all intervals. Thus the probability for an interval (state) is inversely proportional to the coefficient in the Helmholtz equation, f²(x). The probability density is inversely proportional to f(x), the square root of the coefficient. The spatial average of the probability densities, which is ∫φ²dx/Δx, is then inversely proportional to f(x).
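The qualitative claims above — that the maxima of φ scale like (dφ/dx)max/f and that the zeros of φ are spaced by π/f — can be checked numerically. This is a minimal sketch: the constant values chosen for f and the initial conditions φ(0) = 0, φ′(0) = 1 are illustrative assumptions, not part of the original argument.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate phi'' = -f^2 phi from phi(0) = 0, phi'(0) = 1 for several constant
# values of f, and check phi_max = phi'_max / f and zero spacing = pi / f.
def run(f_const):
    rhs = lambda x, y: [y[1], -f_const**2 * y[0]]
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], max_step=0.005)
    x, phi = sol.t, sol.y[0]
    zeros = x[1:][np.sign(phi[1:]) != np.sign(phi[:-1])]   # sign changes locate zeros
    print(f"f = {f_const}:  max|phi| = {np.abs(phi).max():.4f} (expect {1/f_const:.4f}),  "
          f"zero spacing = {np.mean(np.diff(zeros)):.4f} (expect {np.pi/f_const:.4f})")

for f_const in (1.0, 2.0, 5.0):
    run(f_const)
```

Larger f gives smaller oscillation amplitudes and more closely spaced zeros, as described in the text.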
Let the spatial average of a probability density function be denoted by an overscore, P̄. The problem now is to make rigorous the argument sketched above. This can be achieved by starting with a system that is simple and tractable, such as a harmonic oscillator.

The Harmonic Oscillator

The dynamics of a harmonic oscillator are given by m(d²x/dt²) = −kx. Its energy E is the sum of its kinetic energy K and potential energy V; i.e., E = K + V = ½mv² + ½kx², where v is equal to (dx/dt). Thus v² = (2/m)(E − V(x)) and hence v = [(2/m)(E − V(x))]½. The particle of a harmonic oscillator travels from a minimum value of x, xmin, to a maximum value of x, xmax, and back to xmin. The probability of finding the particle in an interval dx is [(2/|v(x)|)dx]/T, where T is the time required to execute a complete cycle. Hence the probability density at a point x of the path of the particle is [2/|v(x)|]/T. This probability density function can be called the time-spent probability density function because it is the proportion of the time spent by the particle in a particular interval. Thus the classical probability density function, PC, for a harmonic oscillator is PC = 2/(T|v(x)|) = (2m)½/(T[E − V(x)]½). A useful bit of notation is to let K(x) represent (E − V(x)). K(x) is the kinetic energy of the particle as a function of its displacement. Thus v = [(2/m)K(x)]½ and hence PC = (2m)½/(T[K(x)]½).

The probability density function according to quantum physics, PQM, is given as the square of the wave function φ, where φ satisfies the time independent Schrödinger equation −(ħ²/2m)(d²φ/dx²) + V(x)φ = Eφ, which reduces to (d²φ/dx²) = −(2m/ħ²)(E − V(x))φ, where ħ is Planck's constant divided by 2π. Thus the quantum mechanical wave function satisfies the equation (d²φ/dx²) = −(2m/ħ²)K(x)φ. This is a generalized Helmholtz equation. From the previous analysis the average value of φ² is inversely proportional to the square root of the coefficient of φ in the above equation. The constant terms in the coefficient are irrelevant in determining P̄QM. Therefore P̄QM is proportional to 1/[K(x)]½. It was found previously that PC = (2m)½/(T[K(x)]½). Thus, since both P̄QM and PC are inversely proportional to [K(x)]½, they are proportional to each other. However the integral of both P̄QM and PC must be unity. Therefore they must be equal to each other; i.e., P̄QM(x) = PC(x).

Here is a diagram that shows the values of PQM and PC for a harmonic oscillator in which the quantum number is 30. Visually one can see that P̄QM is equal to PC. For the significant case of harmonic oscillators the spatial average of the quantum mechanical probability density function is equal to the time-spent probability density function derived from classical analysis. (To be continued.)
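The comparison shown in the (missing) diagram can be reproduced numerically. A short sketch in units ħ = m = ω = 1, using the quantum number 30 mentioned above; the smoothing width and grid are assumptions chosen for the illustration:

```python
import numpy as np
from scipy.special import eval_hermite, factorial

n = 30                                   # quantum number used for the diagram above
x = np.linspace(-8.0, 8.0, 4001)
norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
psi = norm * eval_hermite(n, x) * np.exp(-x**2 / 2.0)
p_qm = psi**2                            # quantum probability density

E = n + 0.5                              # energy; classical turning points at +/- A
A = np.sqrt(2.0 * E)
inside = np.abs(x) < A
p_cl = np.zeros_like(x)
p_cl[inside] = 1.0 / (np.pi * np.sqrt(A**2 - x[inside]**2))   # time-spent density

# spatial average of p_qm over roughly one local oscillation around each point
width = 201
p_qm_avg = np.convolve(p_qm, np.ones(width) / width, mode="same")

for xi in (0.0, 3.0, 6.0):
    i = np.argmin(np.abs(x - xi))
    print(f"x = {xi}:  <P_QM> = {p_qm_avg[i]:.4f},   P_C = {p_cl[i]:.4f}")
```

Away from the turning points the smoothed quantum density tracks the classical time-spent density closely, in line with the claim P̄QM(x) = PC(x).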
Comics meeting, 15/01/2014
Present: Michele Amato, Salim Berrada, Arnaud Bournel, Philippe Dollfus, Jérôme Larroque, Mai Chung Nguyen, Jérôme Saint Martin, Su Li, Tran Van Truong, Adrien Vincent.
Salim and Philippe, with the Plaçais group: paper on the Klein transistor nearly accepted (IOP 2D Mater.).
Monastir paper with Arnaud about optical effects in GeSn quantum wells accepted for JAP: « Wave-function engineering and absorption spectra in Si0.16Ge0.84/Ge0.94Sn0.06/Si0.16Ge0.84 strained on relaxed Si0.10Ge0.90 type I quantum well ». But not very original in comparison with a previous paper on Ge wells. Naima as 1st author.
ISCAS paper of Adrien accepted! In Australia in June.
Next deadlines: tomorrow for E-MRS -> Michele has already submitted, Jérôme L. should write an abstract too. In Lille, May 19-24. Summer school Graphene 2014 -> Truong and Mai Chung (?). Deadline for the conference Graphene (Toulouse) = 1/2 -> Trung, Mai Chung. IWCE, deadline 28/1. Same day as the Comics lunch! Ulis, February 3 but…
Next Comics meetings: on Monday afternoons, about one per month.
A (very) short introduction to Density Functional Theory, by Michele (pdf here) -> for atomic structure, thermodynamic, chemical, electronic and scattering properties. Nobel prize for DFT in chemistry, 1998: Walter Kohn and John Pople. Semi-empirical methods (Hückel, based on the Hartree-Fock formalism) -> good for large systems; can fail if the computed molecule is not close to the database of parametrized molecules. DFT -> very good scalability.
Many-body problem -> N_e electrons, N_n nuclei, Schrödinger equation to solve… with kinetic energies and potentials: repulsion between nuclei, repulsion between electrons, and attraction between electrons and nuclei. How to deal with 10²³ particles? Born-Oppenheimer separation -> nuclei frozen in their equilibrium positions. But the wavefunctions remain very complex, and not experimentally measurable, while the electron density is observable, e.g. by X-rays.
Hohenberg-Kohn theorem I: the relation between the potential and the electron density is invertible. Then the ground-state expectation value of any observable depends only on the electron density. Hohenberg-Kohn theorem II: the total energy functional has a minimum, the ground-state energy E0, in correspondence with the ground-state density ρ0 -> proof of the existence of the universal functional without determining it.
Kohn-Sham scheme -> ρ0 can be calculated thanks to an artificial system of non-interacting particles, using one-particle orbitals which are not real wavefunctions. The Kohn-Sham potential = v_ion(r) + v_H(r) + v_xc(r), where v_H describes the classical electrostatic potential and v_xc is the exchange-correlation potential, taking into account the effect of interactions between particles. Approximation for v_xc -> local description. A self-consistent loop solves the Kohn-Sham scheme as a function of the density (a schematic version is sketched below).
De-freezing the nuclei… Cf. Feynman, Phys. Rev. 56 (4), 340 (1939), "Forces in molecules" (http://dx.doi.org/10.1103/PhysRev.56.340) -> Hellmann-Feynman theorem for an analytical solution. Wikipedia: proven independently by many authors, including Paul Güttinger (1932), Wolfgang Pauli (1933), Hans Hellmann (1937) and Richard Feynman (1939). This does not correspond to the phonon problem!
For further development:
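The self-consistent loop mentioned in the notes can be sketched schematically. This is a toy one-dimensional illustration, not a real DFT code: the harmonic external potential, the simple density-dependent term standing in for the Hartree and exchange-correlation potentials, and all numerical parameters are assumptions made up for the example.

```python
import numpy as np

# Toy 1D self-consistent loop in the spirit of the Kohn-Sham scheme:
# the effective potential depends on the density, so density and orbitals
# are iterated to self-consistency (atomic-like units throughout).
n_grid, n_elec = 200, 2
x = np.linspace(-5.0, 5.0, n_grid)
dx = x[1] - x[0]
v_ion = 0.5 * x**2                            # "ionic" (external) potential

def solve_orbitals(v_eff):
    """Diagonalise the one-particle Hamiltonian on the grid."""
    lap = (np.diag(np.ones(n_grid - 1), -1) - 2.0 * np.eye(n_grid)
           + np.diag(np.ones(n_grid - 1), 1)) / dx**2
    eps, phi = np.linalg.eigh(-0.5 * lap + np.diag(v_eff))
    return eps, phi / np.sqrt(dx)             # normalise so sum |phi|^2 dx = 1

rho = np.full(n_grid, n_elec / (n_grid * dx))  # initial guess for the density
for it in range(200):
    v_eff = v_ion + 1.0 * rho                  # crude density-dependent term (stand-in for v_H + v_xc)
    eps, phi = solve_orbitals(v_eff)
    rho_new = 2.0 * np.sum(np.abs(phi[:, :n_elec // 2])**2, axis=1)  # two electrons in the lowest orbital
    if np.max(np.abs(rho_new - rho)) < 1e-6:   # self-consistency reached
        break
    rho = 0.3 * rho_new + 0.7 * rho            # linear mixing to stabilise the iteration
print(f"stopped after {it + 1} iterations, lowest eigenvalue = {eps[0]:.4f}")
```

A real calculation replaces the made-up density-dependent term with the Hartree and exchange-correlation potentials and the grid diagonalisation with a proper basis, but the flow — guess a density, build the effective potential, solve for orbitals, rebuild the density, mix, repeat — is the same.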
The butterfly and the remarkable Professor Hofstadter This spring, there was excitement in the world of physics as a long-predicted butterfly was proved to exist. But, being a creature of physics, this butterfly wasn’t an insect, nor anything that would even occur without human minds to construct it. This was Hofstadter’s butterfly, a remarkable spectrum of electron energy levels. It was first described in 1976 by Douglas Hofstadter, who was then with the Physics Department, University of Oregon, USA. He was looking at the allowed energy levels of electrons restricted to a two-dimensional plane, with a periodic potential energy and a changing magnetic field. As Hofstadter put it in a summary of his work, “The resultant Schrödinger equation becomes a finite-difference equation whose eigenvalues can be computed by a matrix method.” To which you might respond, “Aha, but of course it does!” Or even, “Huh?” — in which case, you might simply appreciate that when he plotted a graph of the spectrum, Hofstadter made a remarkable pattern that looked somewhat like a butterfly. And this pattern was recursive, so if you look at a small part of the pattern you see the same butterfly shape, which is repeated at larger and larger scales. The paper was published just one year after the term “fractal” had been coined, and Hofstadter had discovered one of the very few fractals known in physics. Quest for the elusive butterfly Physicists have since searched for experimental proof of the butterfly, yet until recently it proved elusive. This is largely as it results from quantum effects, and when atoms in the two-dimensional plane are very close together observing the butterfly would require unfeasibly strong magnetic fields, while if they are widely spaced disorder ruins the pattern. Graphene, a quirky form of carbon, has been the key to finding the butterfly. It is a one-atom thick layer of carbon atoms arranged in hexagonal patterns – somewhat like chicken wire. A layer of this was placed on atomically flat boron nitride substrate, which likewise has a honeycomb atomic lattice structure, but with slightly longer bonds between atoms. This combination resulted in the electrons experiencing a periodic potential, akin to a marble rolling over a surface shaped like the tray of an egg carton. City College of New York Assistant Professor of Physics Cory Dean developed the material. He was a member of an international group that published its findings in May. Separate groups at the University of Manchester (UK) and Massachusetts Institute of Technology simultaneously reported similar results. According to a City College press release, the light and dark sections of the butterfly pattern correspond to “gaps” in energy levels that electrons cannot cross and dark areas where they can move freely. While efficient conductors like copper have no gaps, and there are very large gaps in insulators, Dean believes the very complicated structure of the Hofstadter spectrum suggests as yet unknown electrical properties. “We are now standing at the edge of an entirely new frontier in terms of exploring properties of a system that have never before been realized,” he said. “The ability to generate this effect could possibly be exploited to design new electronic and optoelectronic devices.” Graphene the wonder material, and father n son physicists Graphene planes had already shown promise as a new wonder material. They were first isolated in 2004, and have a thickness almost a millionth of a human hair. 
Graphene is stronger than steel and more conductive than copper, and can help make ultrafast optical switches for applications including communications, as well as lead to more efficient solar cells, enhanced printed circuits, unbreakable touchscreens and microscale Lithium-ion batteries. It may even prove to be the ideal material for 3D printing. Rather as graphene may have multiple uses, the man who described the butterfly spectrum has proven multi-talented. Douglas Hofstadter was the son of Stanford University physicist Robert Hofstadter, who in 1961 was the joint winner of the Nobel Prize for Physics, “for his pioneering studies of electron scattering in atomic nuclei and for his consequent discoveries concerning the structure of nucleons." Like father, like son, you might think, as Douglas also became a physicist. Yet he did not remain so for long. The year after his paper on the spectrum was published, Hofstadter joined Indiana University's Computer Science Department faculty, and launched a research program in computer modeling of mental processes, which he then called "artificial intelligence research", though he now prefers "cognitive science research". Miracles, mirages and butterfly dreaming Hofstadter pondered the question of what is a self, and how can one come out of stuff that is as selfless as a stone or a puddle? In an attempt to provide an answer, he wrote a book, Gödel, Escher, Bach: an Eternal Golden Braid. This interwove several narratives, and featured word play, puzzles, and recursion and self-reference, with objects and ideas referring to themselves. The book was a success, winning the Pulitzer Prize for general non-fiction. Yet in an interview with Wired, Hofstadter later expressed disappointment that most people found its point was simply to have fun, albeit noting that hundreds of people had written to him, saying it launched them on a path of studying computer science or cognitive science or philosophy. Some of these people might have been startled when Hofstadter, by then professor of cognitive science at Indiana University, USA, later told the New York Times, “I have no interest in computers,” adding, “People who claim that computer programs can understand short stories, or compose great pieces of music — I find that stuff ridiculously overblown.” The NY Times interview accompanied the publication of a straighter book on questions of consciousness and soul, I Am a Strange Loop. Within this, Hofstadter wrote, “In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.” Here, Hofstadter seems to echo the butterfly pattern he discovered, with its multitude of versions of itself. But there’s far more to consciousness, which he believes derives from a self-model. Over 2300 years ago, a butterfly featured in an anecdote by another thinker. The Chinese philosopher Zhuangzi wrote of dreaming he was a butterfly, and awaking to wonder if he was a man who dreamt of being a butterfly, or a butterfly dreaming of being a man. After Hofstadter, you might wonder if either of these is but a dream within a dream. Which may remind you of a movie, which you may find within another column, which is currently a hazy looping form within the mirage that’s this writer … but will not feature butterflies. Martin Williams
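The "matrix method" Hofstadter mentions can be sketched in a few lines. This is a rough, hedged illustration: for a rational flux p/q per plaquette, Harper's finite-difference equation reduces to a q×q eigenvalue problem, and sweeping the flux traces out the butterfly. A single periodic boundary phase is used here; a full calculation sweeps the Bloch phases to fill out the bands.

```python
import numpy as np
from fractions import Fraction

def harper_eigs(p, q, phase=0.0):
    """Eigenvalues of the q x q Harper matrix for flux p/q (one Bloch phase)."""
    n = np.arange(q)
    H = np.diag(2.0 * np.cos(2.0 * np.pi * p * n / q)).astype(complex)  # on-site term
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)       # nearest-neighbour hopping
    H[0, -1] += np.exp(1j * phase)                                       # periodic wrap with Bloch phase
    H[-1, 0] += np.exp(-1j * phase)
    return np.linalg.eigvalsh(H)

# collect (flux, energy) points for all reduced fractions p/q with q <= 40
points = []
for q in range(2, 41):
    for p in range(1, q):
        if Fraction(p, q).denominator != q:      # skip non-reduced fractions
            continue
        for E in harper_eigs(p, q):
            points.append((p / q, E))
# scatter-plotting 'points' (flux on x, energy on y) shows the butterfly pattern
```

Even this crude scan makes the recursive, fractal structure of the spectrum visible, which is what made the 1976 plot so striking.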
Isotropic Harmonic Oscillator

We now solve the isotropic harmonic oscillator using the formalism that we have just developed. While it is possible to solve it in Cartesian coordinates, we gain additional insight by solving it in spherical coordinates, and it is easier to determine the degeneracy of each energy level. The radial part of the Schrödinger equation for a particle of mass M in an isotropic harmonic oscillator potential V(r)=\frac{1}{2}M\omega^{2}r^2 is given by:

-\frac{\hbar^2}{2M}\frac{d^2u_{nl}}{dr^2}+\left(\frac{\hbar^2}{2M}\frac{l(l+1)}{r^2} + \frac{1}{2}M\omega^{2}r^2\right)u_{nl}=Eu_{nl}.

Let us begin by looking at the solutions u_{nl} in the limits of small and large r. As r\rightarrow 0, the equation reduces to

\frac{d^2u_{nl}}{dr^2}=\frac{l(l+1)}{r^2}u_{nl}.

The only solution of this equation that does not diverge as r\rightarrow 0 is u_{nl}(r)\simeq r^{l+1}. In the limit as r\rightarrow \infty, on the other hand, the equation becomes

\frac{d^2u_{nl}}{dr^2}=\left(\frac{M\omega}{\hbar}\right)^{2}r^2\,u_{nl},

whose solution is given by u_{nl}(r)\simeq e^{-M\omega r^2/2\hbar}. We may now assume that the general solution to the equation is given by

u_{nl}(r)=r^{l+1}e^{-M\omega r^2/2\hbar}f_{nl}(r).
Substituting this expression into the original equation, we obtain

\frac{d^{2}f_{nl}}{dr^{2}}+2\left(\frac{l+1}{r}-\frac{M\omega}{\hbar}r\right)\frac{df_{nl}}{dr}+\left[\frac{2ME}{\hbar^2}-(2l+3)\frac{M\omega}{\hbar}\right]f_{nl}=0.

We now use a series solution for this equation:

f_{nl}(r)=\sum_{n=0}^{\infty}a_{n}r^n= a_{0}+a_{1}r+a_{2}r^2+a_{3}r^3+\ldots +a_{n}r^n+\ldots

Substituting this solution into the reduced form of the equation, we obtain

\sum_{n=0}^{\infty} \left[n(n-1)a_{n}r^{n-2}+2 \left( \frac{l+1}{r}- \frac{M\omega}{\hbar}r\right) na_nr^{n-1} + \left[\frac{2ME}{\hbar^2} - (2l+3)\frac{M\omega}{\hbar}\right] a_n r^n\right]=0,

which reduces to

\sum_{n=0}^{\infty}\left\{(n+2)(n+2l+3)a_{n+2}+\left[\frac{2ME}{\hbar^2}-(2n+2l+3)\frac{M\omega}{\hbar}\right]a_n\right\}r^{n}+2(l+1)a_{1}r^{-1}=0.

For this equation to hold, the coefficients of each of the powers of r must vanish separately. Doing this for the positive powers of r yields the following recursion relation:

a_{n+2}=\frac{(2n+2l+3)\frac{M\omega}{\hbar}-\frac{2ME}{\hbar^2}}{(n+2)(n+2l+3)}\,a_{n}.

In addition, we have an r^{-1} term; for it to vanish, we must set a_1=0. This, combined with the above recursion relation, means that the function f_{nl}(r) contains only even powers of r. In other words,

f_{nl}(r)=\sum_{n'=0}^{\infty}a_{2n'}r^{2n'}.

By a similar argument as the one that we employed for the one-dimensional harmonic oscillator, we find that, unless the series for f_{nl}(r) terminates, the resulting full wave function will diverge as r\rightarrow\infty. Because the series must only contain even powers of r, the resulting quantization condition on the energy is

E_{n}=\left(n+\tfrac{3}{2}\right)\hbar\omega,

where n=2n'+l. The degeneracy corresponding to the n^{\text{th}} level may be found to be \tfrac{1}{2}(n+1)(n+2). We see that energy levels with even n correspond to even values of l, while those with odd n have odd values of l. The total wave function of the isotropic harmonic oscillator is thus given by

\psi_{nlm}(r,\theta,\phi )=\frac{u_{nl}(r)}{r}Y_{lm}(\theta,\phi)=r^{l}e^{-M\omega r^2/2\hbar}f_{nl}(r)Y_{lm}(\theta ,\phi )=R_{nl}(r)Y_{lm}(\theta ,\phi ).

One may show that, in fact, f_{nl}(r) is an associated Laguerre polynomial in \frac{M\omega}{\hbar}r^2.
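A quick check of the degeneracy formula by direct enumeration of the allowed (n', l, m) combinations — a small sketch:

```python
# Verify the degeneracy (n+1)(n+2)/2 by counting the states (n', l, m)
# with n = 2n' + l, as in the text above: l runs over values with the same
# parity as n, and each l contributes 2l + 1 values of m.
def degeneracy(n):
    count = 0
    for l in range(n, -1, -1):
        if (n - l) % 2 == 0:        # n and l must have the same parity
            count += 2 * l + 1      # m = -l, ..., +l
    return count

for n in range(6):
    assert degeneracy(n) == (n + 1) * (n + 2) // 2
    print(n, degeneracy(n))
```

The enumeration reproduces 1, 3, 6, 10, 15, 21 for n = 0, ..., 5, matching \tfrac{1}{2}(n+1)(n+2).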
The Schrödinger Equation in Dirac Notation

The Schrödinger equation, as introduced in the previous chapter, is a special case of a more general equation that is satisfied by the abstract state vector |\Psi(t)\rangle describing the system. More specifically, it is an equation describing the components of the state vector in position space. We will now introduce this more general equation, written in terms of the state vector itself, and show how one can recover the wave equation from the previous chapter. In Dirac notation, the Schrödinger equation is written as

i\hbar \frac{d}{dt}|\Psi(t)\rangle=\hat{H}(t)|\Psi(t)\rangle.

We see that the Hamiltonian of the system determines how a given initial state will evolve in time. To show how to recover the equation for the wave function, let us consider the Hamiltonian for a particle moving in one dimension,

\hat{H}=\frac{\hat{p}^{2}}{2m}+V(\hat{x}).

We now write our state vector in position space. Since the position space is continuous, rather than discrete, the state vector as a linear superposition of position eigenstates must now be written as an integral:

|\Psi(t)\rangle=\int dx\,\Psi(x,t)|x\rangle,

where \langle x'|x\rangle=\delta(x'-x) and \delta(x) is the Dirac delta function. With the aid of the identity

\frac{\partial}{\partial x}\delta(x'-x)=\frac{\delta(x'-x)}{x'-x},

one may verify that

\langle x'|\hat{p}|x\rangle=i\hbar\frac{\delta(x'-x)}{x'-x}=i\hbar\frac{\partial}{\partial x}\delta(x'-x)

and that

\hat{p}|\Psi(t)\rangle=-i\hbar\int dx\,\frac{\partial\Psi}{\partial x}|x\rangle.
If we now substitute the above form of the Hamiltonian into the Schrödinger equation and project the resulting equation into position space, we will arrive at the wave equation stated in the previous chapter,

i\hbar\frac{\partial \Psi(x,t)}{\partial t}=\left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}+V(x)\right]\Psi(x,t).

The above procedure can be generalized to multiple dimensions, again recovering the multi-dimensional wave equation given in the previous chapter:

i\hbar\frac{\partial \Psi(\textbf{r},t)}{\partial t} = \left[ -\frac{\hbar^2}{2m}\nabla^2 + V(\textbf{r})\right]\Psi(\textbf{r},t).

We could also have chosen to work in momentum space; a similar procedure yields

i\hbar\frac{\partial \Phi(\textbf{p},t)}{\partial t} = \left[ \frac{\textbf {p}^{2}}{2m} + V\left ( i\hbar \frac{\partial}{\partial \textbf{p}}\right)\right]\Phi(\textbf{p},t).

Here, \Phi(\textbf{p},t) and \Psi(\textbf{r},t) are related through a Fourier transform as described in a previous section.

Let us now consider a time-independent Hamiltonian. As described previously, we may solve the Schrödinger equation in this case by first assuming that the state vector has the form

|\psi_n(t)\rangle=e^{-iE_n t/\hbar}|\psi_n\rangle,

where |\psi_n\rangle is independent of time. Substituting this form into the Schrödinger equation yields the equation for stationary states in Dirac notation:

\hat{H}|\psi_n\rangle=E_n|\psi_n\rangle.

The eigenfunctions are replaced with eigenvectors. Use of this notation makes solution of the Schrödinger equation much simpler for some problems; if we write the eigenvectors in a convenient basis, we may project the above eigenvalue equation onto all states in the basis, thus reducing the problem to diagonalizing a matrix.

We now ask how an arbitrary state |\Psi(t)\rangle evolves in time for a time-independent Hamiltonian. Let us expand this state in terms of an orthonormal basis |1\rangle,|2\rangle,\ldots, obtaining

|\Psi(t)\rangle=\sum_{n}c_{n}(t)|n\rangle.

If we now substitute this into the Schrödinger equation and project the result onto each of the basis states, we obtain

i\hbar\frac{dc_{m}(t)}{dt}=\sum_{n}\langle m|\hat{H}|n\rangle c_{n}(t).

If, in particular, we work in the basis \left\{|\psi_n\rangle\right\} of eigenstates of the Hamiltonian, the above equation reduces to a set of decoupled equations for the coefficient of each eigenstate, whose solutions are

c_{m}(t)=c_{m}(0)e^{-iE_{m}t/\hbar}.

Therefore, the time evolution of a general state is given by

|\Psi(t)\rangle=\sum_{n}c_{n}(0)e^{-iE_{n}t/\hbar}|\psi_n\rangle,

which is just a linear superposition of the time-dependent eigenvectors obtained previously. One could also have obtained this from the fact that any linear superposition of solutions of the Schrödinger equation is itself a solution, and thus any state may be written in the above form.

We will now show that the Schrödinger equation in this form preserves the normalization of the state vector; i.e., if the vector is normalized initially, then it will remain normalized at all times. We start by writing the dual of the Schrödinger equation,

-i\hbar \frac{d}{dt}\langle\Psi(t)|=\langle\Psi(t)|\hat{H}.

We now act on the Schrödinger equation from the left with \langle\Psi(t)| and on its dual from the right with |\Psi(t)\rangle and subtract the two results, obtaining

\langle\Psi(t)|\frac{d}{dt}|\Psi(t)\rangle+\frac{d}{dt}\left [\langle\Psi(t)|\right ]|\Psi(t)\rangle=0,

or \frac{d}{dt}\langle\Psi(t)|\Psi(t)\rangle=0. As asserted, \langle\Psi(t)|\Psi(t)\rangle=\text{const.}, so that we only need to normalize the state vector at t=0.
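A small numerical illustration of the result just derived — a sketch with ħ = 1 and a randomly generated Hermitian Hamiltonian standing in for \hat{H}: expand |\Psi(0)\rangle in the eigenbasis, evolve each coefficient as c_m(t) = c_m(0)e^{-iE_m t}, and check that the norm stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                 # Hermitian Hamiltonian (assumed, for the demo)

E, V = np.linalg.eigh(H)                 # eigenvalues E_m and eigenvectors |psi_m>
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)             # normalized initial state

c0 = V.conj().T @ psi0                   # c_m(0) = <psi_m|Psi(0)>
for t in (0.0, 0.5, 2.0):
    psi_t = V @ (np.exp(-1j * E * t) * c0)   # |Psi(t)> = sum_m c_m(0) e^{-iE_m t} |psi_m>
    print(t, np.linalg.norm(psi_t))          # stays equal to 1
```

The printed norms are 1 at every time, as the conservation of \langle\Psi(t)|\Psi(t)\rangle requires.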
Because of the different wavelengths associated with different colors, it is clear that for a mixed light source we will have some colors interfering constructively while others interfere destructively.

During a time-delayed Schrödinger's Cat experiment, the observation by a human is physically passive, in contrast with the physically active interaction that is important in quantum physics; we can see this in the Uncertainty Principle's description of how the thing being observed is affected by the physical interaction during observation.

Now, you've used complex numbers in physics all the time, and even in electromagnetism, you use complex numbers. But you use them really in an auxiliary way only.

Finally, in DE, the velocity is negative and the acceleration is zero. In two or three dimensions, position x, velocity v, and acceleration a are all vectors, so that the velocity is v(t) = dx(t)/dt while the acceleration is a(t) = dv(t)/dt. (6.3) Figure 6.2: Two different views of circular motion of an object.

A car of mass 1200 kg initially moving 30 m s−1 brakes to a stop. (a) What is the net work done on the car due to all the forces acting on it during the indicated period? (b) Describe the motion of the car relative to an inertial reference frame initially moving with the car. (c) In the above reference frame, what is the net work done on the car during the indicated period?

As de Broglie explained that day to Bohr, Albert Einstein, Erwin Schrödinger, Werner Heisenberg and two dozen other celebrated physicists, pilot-wave theory made all the same predictions as the probabilistic formulation of quantum mechanics (which wouldn’t be referred to as the “Copenhagen” interpretation until the 1950s), but without the ghostliness or mysterious collapse.

It may be expected that current events/headlines will be discussed in class. Prerequisites: some basic background in calculus or be concurrently taking MATH V1101 Calculus I.

When talking about reflection and refraction concerning plane mirrors, there are several important terms that need to be defined.

Figure 2.12: Illustration of wave vectors of plane waves which might be added together. The exponential function decreases rapidly as its argument becomes more negative, and for practical purposes, only wave vectors with ≤ αmax contribute significantly to the sum. Figure 2.13 shows what h(x, y) looks like when αmax = 0.8 radians and k = 1. Notice that for y = 0 the wave amplitude is only large for a small region in the range −4 < x < 4.

Understanding curved spacetime is an advanced topic which is not easily accessible at the level of this text. However, it turns out that some insight into general relativistic phenomena may be obtained by investigating the effects of acceleration in the flat (but non-Euclidean) space of special relativity.

The net effect of scattering from a single row is equivalent to partial reflection from a mirror.

An electron is localized by passing through an aperture. The probability that it will then be found at the particular position is determined by the wave function illustrated to the right of the aperture.

This distinction is highly complex, requiring the use of quantum decoherence theory, parts of which are not entirely agreed upon. In particular, quantum decoherence theory posits the possibility of "weak measurements", which can indirectly provide "weak" information about a particle without collapsing it. [4] An important aspect of quantum mechanics is the predictions it makes about the radioactive decay of isotopes.

Application to molecules having only single bonds. Obtaining an electrolyte solution by dissolving ionic solids, liquids or gases in water.

Bridging the gap to quantum world, BBC, June 3, 2009: Scientists have "entangled" the motions of pairs of atoms for the first time. Entanglement is an effect in quantum mechanics, a relatively new branch of physics that is based more in probability than in classical laws.

What bothers some people about this interpretation is the random, abrupt change in the wave function, which violates the Schrödinger equation, the very heart of quantum mechanics. Everett argued that this approach was philosophically a mess: it used two contradictory conceptual schemes to describe reality, the quantum one of wave functions and the classical one of us and our apparatus.

Ocean waves are mechanical waves; so, too, are sound waves, as well as the waves produced by pulling a string.

Fig. 2.3: The Compton Wavelength (Y) of the Electron. While this wavelength is related to the actual wavelength of the Spherical Standing Wave, it is more complex than this. According to the Copenhagen interpretation, the Y wave represents the probability of the photon being at any particular place.

A system can only store a finite amount of information, and uncertainty, as a general concept but also applied to particles (as is usually considered), is like overflowing some toilet of binary bits. “You can understand the uncertainty principle as a consequence of the fact that a physical system of a certain size—say dimension or energy constraint—can contain only a limited amount of information,” Stephanie Wehner, another of the new study’s co-authors, explained. “Very intuitively, if you had less uncertainty for some quantum measurements then you can use such a physical system to encode much more information: each measurement can be used to retrieve a portion of this information, and how well you can do that is determined by their mutual uncertainty.” Crucially, uncertainty is a statement about information, and it could correspond to a universe of other correlated phenomena.

It is fairly easy to block out light by simply holding up a hand in front of one's eyes. When this happens, the Sun casts a shadow on the other side of one's hand. The same action does not work with one's ears and the source of a sound, however, because the wavelengths of sound are large enough to go right past a relatively small object such as a hand.

The more accurately you know the position, the more uncertain you are about the momentum, and vice versa. When you look at a cat, does your “act of observation” affect the cat?

This is similar to simulated annealing — except you can, in essence, go through the hills rather than over them. “You can take advantage of a quantum phenomenon called tunneling,” Lidar says. “It’s like a quantum shortcut.” He’s careful to say that he and his team have not proven that the D-Wave uses quantum annealing, but the system certainly appears to use it.

The amplitude of a wave determines its energy. Damping of a wave causes a decrease in amplitude and energy. A wave's wavefront is always perpendicular to the direction of the wave. Waves undergo all of these phenomena: reflection, refraction, diffraction and interference. Water, light and electromagnetic waves are transverse waves.

The theory was full of pitfalls: formidable calculational complexity, predictions of infinite quantities, and apparent violations of the correspondence principle. In the late 1940s a new approach to the quantum theory of fields, QED (for quantum electrodynamics), was developed by Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga.

To get a rough idea of the spread of the momentum, the vertical momentum $p_y$ has a spread which is equal to $p_0\,\Delta\theta$, where $p_0$ is the horizontal momentum.

Spooky, bizarre and mind-boggling are all understatements when it comes to quantum physics. Things in the subatomic world of quantum mechanics defy all logic of our macroscopic world. Particles can actually tunnel through walls, appear out of thin air and disappear, stay entangled and choose to behave like waves. According to Niels Bohr, the father of the orthodox 'Copenhagen Interpretation' of quantum physics, "Anyone who is not shocked by quantum theory has not understood it."

The mathematical formulations of quantum mechanics are abstract. A mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Mathematical manipulations of the wave function usually involve bra–ket notation, which requires an understanding of complex numbers and linear functionals.
Tuesday, July 31, 2007 Algorithm wars Some tantalizing Renaissance tidbits in this lawsuit against two former employees, both physics PhDs from MIT. Very interesting -- I think the subtleties of market making deserve further scrutiny :-) At least they're not among the huge number of funds rumored at the moment to be melting down from leveraged credit strategies. Previous coverage of Renaissance here. Ex-Simons Employees Say Firm Pursued Illegal Trades 2007-07-30 11:19 (New York) By Katherine Burton and Richard Teitelbaum July 30 (Bloomberg) -- Two former employees of RenaissanceTechnologies Corp., sued by the East Setauket, New York-based firm for theft of trade secrets, said the company violated securities laws and ``encouraged'' them to help. Renaissance, the largest hedge-fund manager, sought to block Alexander Belopolsky and Pavel Volfbeyn from using the allegations as a defense in the civil trade-secrets case. The request was denied in a July 19 order by New York State judge Ira Gammerman, who wrote that the firm provided no evidence to dispute the claims. The company denied the former employees' claims. ``The decision on this procedural motion makes no determination that there is any factual substance to the allegations,'' Renaissance said in a statement to Bloomberg News. ``These baseless charges are merely a smokescreen to distract from the case we are pursuing.'' Renaissance, run by billionaire investor James Simons, sued Belopolsky and Volfbeyn in December 2003, accusing them of misappropriating Renaissance's trade secrets by taking them to another firm, New York-based Millennium Partners LP. Renaissance settled its claims against Millennium in June. The men, who both hold Ph.D.'s in physics from the Massachusetts Institute of Technology, worked for the company from 2001 to mid-2003, according to the court document. ``We think the allegations are very serious and will have a significant impact on the outcome of the litigation,'' said Jonathan Willens, an attorney representing Volfbeyn and Belopolsky. ``We continue to think the allegations by Renaissance concerning the misappropriation of trade secrets is frivolous.'' `Quant' Fund Renaissance, founded by Simons in 1988, is a quantitative manager that uses mathematical and statistical models to buy and sell securities, options, futures, currencies and commodities. It oversees $36.8 billion for clients, most in the 2-year-old Renaissance Institutional Equities Fund. According to Gammerman's heavily redacted order, Volfbeyn said that he was instructed by his superiors to devise a way to ``defraud investors trading through the Portfolio System for Institutional Trading, or POSIT,'' an electronic order-matching system operated by Investment Technology Group Inc. Volfbeyn said that he was asked to create an algorithm, or set of computer instructions, to ``reveal information that POSIT intended to keep confidential.'' Refused to Build Volfbeyn told superiors at Renaissance that he believed the POSIT strategy violated securities laws and refused to build the algorithm, according to the court document. The project was reassigned to another employee and eventually Renaissance implemented the POSIT strategy, according to the document. New York-based Investment Technology Group took unspecified measures, according to the order, and Renaissance was forced to abandon the strategy, Volfbeyn said. Investment Technology Group spokeswoman Alicia Curran declined to comment. 
According to the order, Volfbeyn said that he also was asked to develop an algorithm for a second strategy involving limit orders, which are instructions to buy or sell a security at the best price available, up to a maximum or minimum set by the trader. Standing limit orders are compiled in files called limit order books on the New York Stock Exchange and Nasdaq and can be viewed by anyone. The redacted order doesn't provide details of the strategy. Volfbeyn refused to participate in the strategy because he believed it would violate securities laws. The limit-order strategy wasn't implemented before Volfbeyn left Renaissance, the two men said, according to the order. Swap `Scam' Claimed Volfbeyn and Belopolsky said that Renaissance was involved in a third strategy, involving swap transactions, which they describe as ``a massive scam'' in the court document. While they didn't disclose what type of swaps were involved, they said that Renaissance violated U.S. Securities and Exchange Commission and National Association of Securities Dealers rules governing short sales. Volfbeyn and Belopolsky said they were expected to help find ways to maximize the profits of the strategy, and Volfbeyn was directed to modify and improve computer code in connection with the strategy, according to the order. In a swap transaction, two counterparties exchange one stream of cash flows for another. Swaps are often used to hedge certain risks, such as a change in interest rates, or as a means of speculation. In a short sale, an investor borrows shares and then sells them in the hopes they can be bought back in the future at a cheaper price. Besides the $29 billion institutional equity fund, Renaissance manages Medallion, which is open only to Simons and his employees. Simons, 69, earned an estimated $1.7 billion last year, the most in the industry, according to Institutional Investor's Alpha magazine. I, Robot My security badge from a meeting with Israeli internet security company CheckPoint (Nasdaq: CHKP). There was some discussion as to whether I should be classified as a robot or a robot genius :-) Infoworld article: ... RGguard accesses data gathered by a sophisticated automated testbed that has examined virtually every executable on the Internet. This testbed couples traditional anti-virus scanning techniques with two-pronged heuristic analysis. The proprietary Spyberus technology establishes causality between source, executable and malware, and user interface automation allows the computers to test programs just as a user would - but without any human intervention. Monday, July 30, 2007 Tyler Cowen and rationality I recently came across the paper How economists think about rationality by Tyler Cowen. Highly recommended -- a clear and honest overview. The excerpt below deals with rationality in finance theory and strong and weak versions of efficient markets. I believe the weak version; the strong version is nonsense. (See, e.g, here for a discussion of limits to arbitrage that permit long lasting financial bubbles. In other words, capital markets are demonstrably far from perfect, as defined below by Cowen.) Although you might think the strong version of EMH is only important to traders and finance specialists, it is also very much related to the idea that markets are good optimizers of resource allocation for society. Do markets accurately reflect the "fundamental value of corporations"? See related discussion here. 
Financial economics has one of the most extreme methods in economic theory, and increasingly one of the most prestigious. Finance concerns the pricing of market securities, the determinants of market returns, the operating of trading systems, the valuation of corporations, and the financial policies of corporations, among other topics. Specialists in finance can command very high salaries in the private sector and have helped design many financial markets and instruments. To many economists, this ability to "meet a market test" suggests that financial economists are doing something right. Depending on one's interpretation, the theory of finance makes either minimal or extreme assumptions about rationality. Let us consider the efficient markets hypothesis (EMH), which holds the status of a central core for finance, though without commanding universal assent. Like most economic claims, EMH comes in many forms, some weaker, others stronger. The weaker versions typically claim that deliberate stock picking does not on average outperform selecting stocks randomly, such as by throwing darts at the financial page. The market already incorporates information about the value of companies into the stock prices, and no one individual can beat this information, other than by random luck, or perhaps by outright insider trading. Note that the weak version of EMH requires few assumptions about rationality. Many market participants may be grossly irrational or systematically biased in a variety of ways. It must be the case, however, that their irrationalities are unpredictable to the remaining rational investors. If the irrationalities were predictable, rational investors could make systematic extra-normal profits with some trading rule. The data, however, suggest that it is very hard for rational investors to outperform the market averages. This suggests that extant irrationalities are either very small, or very hard to predict, two very different conclusions. The commitment that one of these conclusions must be true does not involve much of a substantive position on the rationality front. The stronger forms of EMH claim that market prices accurately reflect the fundamental values of corporations and thus cannot be improved upon. This does involve a differing and arguably stronger commitment to a notion of rationality. Strong EMH still allows that most individuals may be irrational, regardless of how we define that concept. These individuals could literally be behaving on a random basis, or perhaps even deliberately counter to standard rationality assumptions. It is assumed, however, that at least one individual does have rational information about how much stocks are worth. Furthermore, and most importantly, it is assumed that capital markets are perfect or nearly perfect. With perfect capital markets, the one rational individual will overwhelm the influence of the irrational on stock prices. If the stock ought to be worth $30 a share, but irrational "noise traders" push it down to $20 a share, the person who knows better will keep on buying shares until the price has risen to $30. With perfect capital markets, there is no limit to this arbitrage process. Even if the person who knows better has limited wealth, he or she can borrow against the value of the shares and continue to buy, making money in the process and pushing the share price to its proper value. So the assumptions about rationality in strong EMH are tricky. 
Only one person need be rational, but through perfect capital markets, this one person will have decisive weight on market prices. As noted above, this can be taken as either an extreme or modest assumption. While no one believes that capital markets are literally perfect, they may be "perfect enough" to allow the rational investors to prevail. "Behavioral finance" is currently a fad in financial theory, and in the eyes of many it may become the new mainstream. Behavioral finance typically weakens rationality assumptions, usually with a view towards explaining "market anomalies." Almost always these models assume imperfect capital markets, to prevent a small number of rational investors from dwarfing the influence of behavioral factors. Robert J. Shiller claims that investors overreact to very small pieces of information, causing virtually irrelevant news to have a large impact on market prices. Other economists argue that some fund managers "churn" their portfolios, and trade for no good reason, simply to give their employers the impression that they are working hard. It appears that during the Internet stock boom, simply having the suffix "dot com" in the firm's name added value on share markets, and that after the bust it subtracted value.11 Behavioral models use looser notions of rationality than does EMH. Rarely do behavioral models postulate outright irrationality, rather the term "quasi-rationality" is popular in the literature. Most frequently, a behavioral model introduces only a single deviation from classical rationality postulates. The assumption of imperfect capital markets then creates the possibility that this quasi-rationality will have a real impact on market phenomena. The debates between the behavioral theories and EMH now form the central dispute in modern financial theory. In essence, one vision of rationality -- the rational overwhelm the influence of the irrational through perfect capital markets -- is pitted against another vision -- imperfect capital markets give real influence to quasi-rationality. These differing approaches to rationality, combined with assumptions about capital markets, are considered to be eminently testable. Game theory and the failed quest for a unique basis for rationality: Game theory has shown economists that the concept of rationality is more problematic than they had previously believed. What is rational depends not only on the objective features of the problem but also depends on what actors believe. This short discussion has only scratched the surface of how beliefs may imply very complex solutions, or multiple solutions. Sometimes the relevant beliefs, for instance, are beliefs about the out-of-equilibrium behavior of other agents. These beliefs are very hard to model, or it is very hard to find agreement among theorists as to how they should be modeled. In sum, game theorists spend much of their time trying to figure out what rationality means. They are virtually unique amongst economists in this regard. Game theory from twenty years ago pitted various concepts of rationality against each other in purely theoretical terms. Empirical results had some feedback into this process, such as when economists reject Nash equilibrium for some of its counterintuitive predictions, but it remains striking how much of the early literature does not refer to any empirical tests. This enterprise has now become much more empirical, and more closely tied to both computational science and experimental economics. 
Computational economics and the failed quest for a unique basis for rationality: Nonetheless it is easy to see how the emphasis on computability puts rationality assumptions back on center stage, and further breaks down the idea of a monolithic approach to rationality. The choice of computational algorithm is not given a priori, but is continually up for grabs. Furthermore the choice of algorithm will go a long way to determining the results of the model. Given that the algorithm suddenly is rationality, computational economics forces economists to debate which assumptions about procedural rationality are reasonable or useful ones. The mainstream criticism of computational models, of course, falls right out of these issues. Critics believe that computational models can generate just about "any" result, depending on the assumptions about what is computable. This would move economics away from being a unified science. Furthermore it is not clear how we should evaluate the reasonableness of one set of assumptions about computability as opposed to another set. We might consider whether the assumptions yield plausible results, but if we already know what a plausible result consists of, it is not clear why we need computational theories of rationality. As you can tell from my comments, I do not believe there is any unique basis for "rationality" in economics. Humans are flawed information processing units produced by the random vagaries of evolution. Not only are we different from each other, but these differences arise both from genes and the individual paths taken through life. Can a complex system comprised of such creatures be modeled through simple equations describing a few coarse grained variables? In some rare cases, perhaps yes, but in most cases, I would guess no. Finance theory already adopts this perspective in insisting on a stochastic (random) component in any model of security prices. Over sufficiently long timescales even the properties of the random component are not constant! (Hence, stochastic volatility, etc.) Saturday, July 28, 2007 From physics to finance Professor Akash Bandyopadhyay recounts his career trajectory from theoretical physics to Wall Street to the faculty of the graduate school of business at Chicago in this interview. One small comment: Bandyopadhyay says below that banks hire the very best PhDs from theoretical physics. I think he meant to say that, generally, they hire the very best among those who don't find jobs in physics. Unfortunately, few are able to find permanent positions in the field. Mike K. -- if you're reading this, why didn't you reply to the guy's email? :-) CB: Having a Ph.D. in Theoretical Physics, you certainly have quite a unique background compared to most other faculty members here at the GSB. Making a transition from Natural Science to Financial Economics and becoming a faculty member at the most premier financial school in the world in a short span of five years is quite an unbelievable accomplishment! Can you briefly talk about how you ended up at the GSB? AB: Sure. It is a long story. In 1999, I was finishing up my Ph.D. in theoretical physics at the University of Illinois at Urbana Champaign when I started to realize that the job situation for theoretical physicists is absolutely dismal. Let alone UIUC, it was very difficult for physicists from even Harvard or Princeton to find decent jobs. As a matter of fact, once, when I was shopping at a Wal-Mart in the Garden State, I bumped into a few people who had Ph.D. 
in theoretical physics from Princeton and they were working at the Wal-Mart's check-out counter. Yes, Wal-Mart! I could not believe it myself! CB: So, what options did you have at that point? AB: When I started to look at the job market for theoretical physicists, I found that the top investment banks hire the very best of the fresh Ph.D.s. I started to realize that finance (and not physics!) is the heart of the real world and Wall Street is the hub of activity. So, I wanted to work on Wall Street - not at Wal-Mart! (laughs!) I knew absolutely nothing about finance or economics at that time, but I was determined to make the transition. I got a chance to speak with Professor Neil Pearson, a finance professor at UIUC, who advised me to look at the 'Risk' Magazine and learn some finance by myself. There were two highly mathematical research papers at the end of an issue that caught my attention. Having a strong mathematical background, I could understand all the mathematical and statistical calculations/analysis in those papers, although I could not comprehend any of the financial terminology. As I perused more articles, my confidence in my ability to solve mathematical models in finance grew. At that point, I took a big step in my pursuit of working on the Street and e-mailed the authors of those two articles in the Risk Magazine, Dr. Peter Carr at the Banc of America Securities and Dr. Michael Kamal at Goldman Sachs. Dr. Carr (who, later on I found, is a legend in mathematical finance!), replied back in 2 lines: 'If you really want to work here, you have to walk on water. Call me if you are in the NYC area.' CB: So, we presume you went to NYC? AB: After some contemplation, I decided to fly to NYC; I figured I had nothing to lose. Dr. Carr set me up for an interview a few weeks later. Being a physics student throughout my life, I was not quite aware of the business etiquettes. So, when I appeared in my jeans, T-shirt and flip-flops at the Banc of America building at 9 West 57th Street, for an interview, there was a look on everyone's face (from the front desk staffs to everyone I met) that I can never forget. Looking back, I still laugh at those times. CB: Did you get an offer from Banc of America? AB: Not at the first attempt. After the interview, I was quite positive that I would get an offer. However, as soon as I returned home, I received an email from Dr. Carr saying, "You are extremely smart, but the bank is composed of deal makers, traders, marketers, and investment bankers. We are looking for someone with business skills. You will not fit well here." He suggested that we both write a paper on my derivation of Black-Scholes/Merton partial differential equation, or even possibly a book. He also suggested I read thoroughly (and to work out all the problems of) the book "Dynamic Asset Pricing Theory" by Darrell Duffie. In fact, Duffie's book was my starting point in learning financial economics. I assume your readers never heard of this book. It is a notoriously difficult book on continuous time finance and it is intended for the very advanced Ph.D. students in financial economics. But, it was the right book for me - I read it without any difficulty in the math part and it provided me with a solid foundation in financial economics. Anyway, I think I am going too off tangent to your question. CB: So, what did you do after you received that mail from Dr. Carr? AB: The initial set back did not deter me. I already started to become aware of my lack of business skills. So I offered Dr. 
Carr to work as an unpaid intern at Banc of America to gain experience and to learn more about the financial industry and the business. Dr. Carr finally relented and made me an offer to work as an unpaid intern in his group during the summer of 1999. CB: What did you do during the internship? AB: Upon my arriving, Dr. Carr told me that, "A bank is not a place to study. A bank is a place to make money. Be practical." This was probably the best piece of advice I could get. He gave me three tasks to help me get more familiar with finance and get closer to bankers. First, catalog and classify his books and papers on finance and at the same time flip through them. This way, believe it or not, I read tens of thousands of papers and other books that summer. Second, I helped test a piece of software, Sci-Finance, which would help traders to set and hedge exotic option prices. Thirdly, I answered math, statistics, and other quantitative modeling questions for equity, fixed income and options traders, and other investment bankers. CB: Wow! That is a lot of reading for one summer. So, did you get a full time offer from Banc of America after your internship? What did you do after that? AB: Yes, I got an offer for them, but then I had more than a year left to finish my PhD thesis, so I accepted an even better offer from Deutsche Bank next summer. I worked at Deutsche for three months in the summer of 2000. Then moved to Goldman Sachs for a while (where I gave seminars on finance theory to the quants, traders, and risk managers), then, after finishing my Ph.D., I took an offer from Merrill Lynch as the quant responsible for Convertible Bond valuation in their Global Equity Linked Products division in New York. I left Merrill after a few months to lead the North America's Equity Derivatives Risk Management division in Société Generale. So, basically, I came to GSB after getting some hardcore real-world experience in a string of top investment banks. CB: Are there any 'special' moments on Wall Street that you would like to talk about? AB: Sure, there are many. But one that stands out is the day I started my internship at Banc of America. As is the norm in grad school or academia, I felt that I had to introduce myself to my colleagues. So, on my very first day of internship, I took the elevator to the floor where the top bosses of the bank had offices. I completely ignored the secretary at the front desk, knocked on the CEO and CFO's door, walked in, and briefly introduced myself. Little did I know that this was not the norm in the business world!!! Shortly thereafter, Dr. Carr called me and advised that I stick to my cube instead of 'just wandering around'! In retrospect, that was quite an experience! CB: What made you interested in teaching after working for top dollar on Wall Street? AB: You mean to say that professors here don't get paid top dollar? (laughs) I always planned to be in academia. To be totally honest with you, I never liked the culture of Wall Street. Much of the high profile business in Wall Street heavily relies on the academic finance research, but, after all, they are there to make money, not to cultivate knowledge. One must have two qualities to succeed well in this financial business: First, one must have a solid knowledge on the strengths and limitations of financial models (and the theory), which comes from cutting edge academic research, and second, one must have the skills to translate the academic knowledge into a money-making machine. 
I was good in the first category, but not as good in the second. ... I generally try to keep this blog free of kid pictures, but I found these old ones recently and couldn't resist! Thursday, July 26, 2007 Humans eke out poker victory But only due to a bad decision by the human designers of the robot team! :-) Earlier post here. NYTimes: The human team reached a draw in the first round even though their total winnings were slightly less than that of the computer. The match rules specified that small differences were not considered significant because of statistical variation. On Monday night, the second round went heavily to Polaris, leaving the human players visibly demoralized. “Polaris was beating me like a drum,” Mr. Eslami said after the round. However, during the third round on Tuesday afternoon, the human team rebounded, when the Polaris team’s shift in strategy backfired. They used a version of the program that was supposed to add a level of adaptability and “learning.” Unlike computer chess programs, which require immense amounts of computing power to determine every possible future move, the Polaris poker software is largely precomputed, running for weeks before the match to build a series of agents called “bots” that have differing personalities or styles of play, ranging from aggressive to passive. The Alberta team modeled 10 different bots before the competition and then chose to run a single program in the first two rounds. In the third round, the researchers used a more sophisticated ensemble of programs in which a “coach” program monitored the performance of three bots and then moved them in and out of the lineup like football players. Mr. Laak and Mr. Eslami won the final round handily, but not before Polaris won a $240 pot with a royal flush that beat Mr. Eslami’s three-of-a-kind. The two men said that Polaris had challenged them far more than their human opponents. Wednesday, July 25, 2007 Man vs machine: live poker! This blog has live updates from the competition. See also here for a video clip introduction. It appears the machine Polaris is ahead of the human team at the moment. The history of AI tells us that capabilities initially regarded as sure signs of intelligence ("machines will never play chess like a human!") are discounted soon after machines master them. Personally I favor a strong version of the Turing test: interaction which takes place over a sufficiently long time that the tester can introduce new ideas and watch to see if learning occurs. Can you teach the machine quantum mechanics? At the end will it be able to solve some novel problems? Many humans would fail this Turing test :-) Earlier post on bots invading online poker. World-Class Poker Professionals Phil Laak and Ali Eslami Computer Poker Champion Polaris (University of Alberta) Can a computer program bluff? Yes -- probably better than any human. Bluff, trap, check-raise bluff, big lay-down -- name your poison. The patience of a monk or the fierce aggression of a tiger, changing gears in a single heartbeat. Polaris can make a pro's head spin. Psychology? That's just a human weakness. Odds and calculation? Computers can do a bit of that. Intimidation factor and mental toughness? Who would you choose? Does the computer really stand a chance? Yes, this one does. It learns, adapts, and exploits the weaknesses of any opponent. Win or lose, it will put up one hell of a fight.
Many of the top pros, like Chris "Jesus" Ferguson, Paul Phillips, Andy Bloch and others, already understand what the future holds. Now the rest of the poker world will find out. Tuesday, July 24, 2007 What is a quant? The following log entry, which displays the origin of and referring search engine query for a pageload request to this blog, does not inspire confidence. Is the SEC full of too many JD's and not enough people who understand monte carlo simulation and stochastic processes? secfwopc.sec.gov (U.S. Securities & Exchange Commission) District Of Columbia, Washington, United States, 0 returning visits Date Time WebPage 24th July 2007 10:04:52 referer: www.google.com/search?hl=en&q=what is a quants&btnG=Search 24th July 2007 12:11:59 referer: www.google.com/search?hl=en&q=Charles Munger and the pricing of derivatives&btnG=Google Search Sunday, July 22, 2007 Income inequality and Marginal Revolution Tyler Cowen at Marginal Revolution discusses a recent demographic study of who, exactly, the top US wage earners are. We've discussed the problem of growing US income inequality here before. To make the top 1 percent in AGI (IRS: Adjusted Gross Income), you needed to earn $309,160. To make it to the top 0.1 percent, you needed $1.4 million (2004 figures). Here's a nice factoid: Somewhat misleading, as this includes returns on the hedgies' own capital invested as part of their funds. But, still, you get the picture of our gilded age :-) One of the interesting conclusions from the study is that executives of non-financial public companies are a numerically rather small component of top earners, comprising no more than 6.5%. Financiers comprise a similar, but perhaps larger, subset. Who are the remaining top earners? The study can't tell! (They don't know.) Obvious candidates are doctors in certain lucrative specialties, sports and entertainment stars and owners of private businesses. The category which I think is quite significant, but largely ignored, is founders and employees of startups that have successful exits. Below is the comment I added to Tyler's blog: The fact that C-level execs are not the numerically dominant subgroup is pretty obvious. The whole link between exec compensation and inequality is a red herring (except in that it symbolizes our acceptance of winner take all economics). I suspect that founders and early employees of successful private companies (startups) that have a liquidity event (i.e., an IPO or acquisition) are a large subset of the top AGI group. Note, though, that this population does not make it into the top tier (i.e., top 1 or .1%) with regularity, but rather only in a very successful year (the one in which they get their "exit"). Any decent tech IPO launches hundreds of employees into the top 1 or even .1%. It is very important to know what fraction of the top group are there each year (doctors, lawyers, financiers) versus those for whom it is a one-time event (sold the business they carefully built over many years). If it is predominantly the latter it's hard to attribute an increase in top percentile earnings to unhealthy inequality. To be more quantitative: suppose there are 1M employees at private companies (not just in technology, but in other industries as well) who each have a 10% chance per year of participating in a liquidity event that raises their AGI to the top 1% threshold. That would add 100k additional top earners each year, and thereby raise the average income of that group. 
If there are 150M workers in the US then there are 1.5M in the top 1%, so this subset of "rare exit" or employee stock option beneficiaries would make up about 7% of the total each year (similar to the corporate exec number). But these people are clearly not part of the oligarchy, and if the increase in income inequality is due to their shareholder participation, why is that a bad thing? We reported earlier on the geographic distribution of income gains to the top 1 percent: they are concentrated in tech hotbeds like silicon valley, which seems to support our thesis that the payouts are not going to the same people every year. Many Worlds: A brief guide for the perplexed I added this to the earlier post 50 years of Many Worlds and thought I would make it into a stand alone post as well. Many Worlds: A brief guide for the perplexed In quantum mechanics, states can exist in superpositions, such as (for an electron spin) (state)   =   (up)   +   (down) When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) "collapses" to one of the two possible outcomes: (up)     or     (down), with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even if we have specified the state above as precisely as is allowed by nature, we are still left with only a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction. There is no satisfactory understanding of how or exactly when the Copenhagen wavefunction "collapse" proceeds. Indeed, collapse introduces confusing issues like consciousness: what, exactly, constitutes an "observer", capable of causing the collapse? Everett suggested we simply remove wavefunction collapse from the theory. Then the state evolves in time always according to the Schrodinger equation. In fact, the whole universe can be described by a "universal wave function" which evolves according to the Schrodinger equation and never undergoes Copenhagen collapse. Suppose we follow our electron state through a device which measures its spin. For example: by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether deflection was up or down. The whole process is described by the Schrodinger equation, with the final state being (state)   =   (up) (device recorded up)   +   (down) (device recorded down) Here "device" could, but does not necessarily, refer to the human or robot brain which saw the detector bulb flash. What matters is that the device is macroscopic and has a large (e.g., Avogadro's number) number of degrees of freedom. In that case, as noted by Everett, the two sub-states of the world (or device) after the measurement are effectively orthogonal (have zero overlap). In other words, the quantum state describing a huge number of emitted red photons and zero emitted green photons is orthogonal to the complementary state. If a robot or human brain is watching the experiment, it perceives a unique outcome just as predicted by Copenhagen. That is, any macroscopic information processing device ends up in one of the possible macroscopic states (red light vs green light flash). 
The amplitude for those macroscopically different states to interfere is exponentially small, hence they can be treated thereafter as completely independent "branches" of the wavefunction. Success! The experimental outcome is predicted by a simpler (sans collapse) version of the theory. The tricky part: there are now necessarily parts of the final state (wavefunction) describing both the up and down outcomes (I saw red vs I saw green). These are the many worlds of the Everett interpretation. Personally, I prefer to call it No Collapse instead of Many Worlds -- why not emphasize the advantageous rather than the confusing part of the interpretation? Some eminent physicists who (as far as I can tell) believe(d) in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, Sidney Coleman ... In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett! Saturday, July 21, 2007 Man vs machine: poker It looks like we will soon add poker to the list of games (chess, checkers, backgammon) at which machines have surpassed humans. Note we're talking about heads up play here. I imagine machines are not as good at playing tournaments -- i.e., picking out and exploiting weak players at the table. How long until computers can play a decent game of Go? Associated Press: ...Computers have gotten a lot better at poker in recent years; they're good enough now to challenge top professionals like Laak, who won the World Poker Tour invitational in 2004. But it's only a matter of time before the machines take a commanding lead in the war for poker supremacy. Just as they already have in backgammon, checkers and chess, computers are expected to surpass even the best human poker players within a decade. They can already beat virtually any amateur player. "This match is extremely important, because it's the first time there's going to be a man-machine event where there's going to be a scientific component," said University of Alberta computing science professor Jonathan Schaeffer. The Canadian university's games research group is considered the best of its kind in the world. After defeating an Alberta-designed program several years ago, Laak was so impressed that he estimated his edge at a mere 5 percent. He figures he would have lost if the researchers hadn't let him examine the programming code and practice against the machine ahead of time. "This robot is going to do just fine," Laak predicted. The Alberta researchers have endowed the $50,000 contest with an ingenious design, making this the first man-machine contest to eliminate the luck of the draw as much as possible. Laak will play with a partner, fellow pro Ali Eslami. The two will be in separate rooms, and their games will be mirror images of one another, with Eslami getting the cards that the computer received in its hands against Laak, and vice versa. That way, a lousy hand for one human player will result in a correspondingly strong hand for his partner in the other room. At the end of the tournament the chips of both humans will be added together and compared to the computer's. The two-day contest, beginning Monday, takes place not at a casino, but at the annual conference of the Association for the Advancement of Artificial Intelligence in Vancouver, British Columbia. Researchers in the field have taken an increasing interest in poker over the past few years because one of the biggest problems they face is how to deal with uncertainty and incomplete information. 
"You don't have perfect information about what state the game is in, and particularly what cards your opponent has in his hand," said Dana S. Nau, a professor of computer science at the University of Maryland in College Park. "That means when an opponent does something, you can't be sure why." As a result, it is much harder for computer programmers to teach computers to play poker than other games. In chess, checkers and backgammon, every contest starts the same way, then evolves through an enormous, but finite, number of possible states according to a consistent set of rules. With enough computing power, a computer could simply build a tree with a branch representing every possible future move in the game, then choose the one that leads most directly to victory. ...The game-tree approach doesn't work in poker because in many situations there is no one best move. There isn't even a best strategy. A top-notch player adapts his play over time, exploiting his opponent's behavior. He bluffs against the timid and proceeds cautiously when players who only raise on the strongest hands are betting the limit. He learns how to vary his own strategy so others can't take advantage of him. That kind of insight is very hard to program into a computer. You can't just give the machine some rules to follow, because any reasonably competent human player will quickly intuit what the computer is going to do in various situations. "What makes poker interesting is that there is not a magic recipe," Schaeffer said. In fact, the simplest poker-playing programs fail because they are just a recipe, a set of rules telling the computer what to do based on the strength of its hand. A savvy opponent can soon gauge what cards the computer is holding based on how aggressively it is betting. That's how Laak was able to defeat a program called Poker Probot in a contest two years ago in Las Vegas. As the match progressed Laak correctly intuited that the computer was playing a consistently aggressive game, and capitalized on that observation by adapting his own play. Programmers can eliminate some of that weakness with game theory, a branch of mathematics pioneered by John von Neumann, who also helped develop the hydrogen bomb. In 1950 mathematician John Nash, whose life inspired the movie "A Brilliant Mind," showed that in certain games there is a set of strategies such that every player's return is maximized and no player would benefit from switching to a different strategy. In the simple game "Rock, Paper, Scissors," for example, the best strategy is to randomly select each of the options an equal proportion of the time. If any player diverted from that strategy by following a pattern or favoring one option over, the others would soon notice and adapt their own play to take advantage of it. Texas Hold 'em is a little more complicated than "Rock, Paper, Scissors," but Nash's math still applies. With game theory, computers know to vary their play so an opponent has a hard time figuring out whether they are bluffing or employing some other strategy. But game theory has inherent limits. In Nash equilibrium terms, success doesn't mean winning — it means not losing. "You basically compute a formula that can at least break even in the long run, no matter what your opponent does," Billings said. That's about where the best poker programs are today. Though the best game theory-based programs can usually hold their own against world-class human poker players, they aren't good enough to win big consistently. 
Squeezing that extra bit of performance out of a computer requires combining the sheer mathematical power of game theory with the ability to observe an opponent's play and adapt to it. Many legendary poker players do that by being experts on human nature. They quickly learn the tics, gestures and other "tells" that reveal exactly what another player is up to. A computer can't detect those, but it can keep track of how an opponent plays the game. It can observe how often an opponent tries to bluff with a weak hand, and how often she folds. Then the computer can take that information and incorporate it into the calculations that guide its own game. "The notion of forming some sort of model of what another player is like ... is a really important problem," Nau said. Computer scientists are only just beginning to incorporate that ability into their programs; days before their contest with Laak and Eslami, the University of Alberta researchers are still trying to tweak their program's adaptive elements. Billings will say only this about what the humans have in store: "They will be guaranteed to be seeing a lot of different styles." Friday, July 20, 2007 Visit to Redmond No startup odyssey is complete without a trip to Microsoft! I'm told there are 35k employees on their sprawling campus. Average age a bit higher than at Google, atmosphere a bit more serious and corporate, but still signs of geekery and techno wizardry. Fortunately for me, no one complained when I used a Mac Powerbook for my presentation :-) Monday, July 16, 2007 50 years of Many Worlds Max Tegmark has a nice essay in Nature on the Many Worlds (MW) interpretation of quantum mechanics. Previous discussion of Hugh Everett III and MW on this blog. Personally, I find MW more appealing than the conventional Copenhagen interpretation, which is certainly incomplete. This point of view is increasingly common among those who have to think about the QM of isolated, closed systems: quantum cosmologists, quantum information theorists, etc. Tegmark correctly points out in the essay below that progress in our understanding of decoherence in no way takes the place of MW in clarifying the problems with measurement and wavefunction collapse, although this is a common misconception. However, I believe there is a fundamental problem with deriving Born's rule for probability of outcomes in the MW context. See research paper here and talk given at Caltech IQI here. A brief guide for the perplexed: In quantum mechanics, states can exist in superpositions, such as (for an electron spin) (state)   =   (up)   +   (down) When a measurement on this state is performed, the Copenhagen interpretation says that the state (wavefunction) "collapses" to one of the two possible outcomes: (up)     or     (down), with some probability for each outcome depending on the initial state (e.g., 1/2 and 1/2 of measuring up and down). One fundamental difference between quantum and classical mechanics is that even though we have specified the state above as precisely as is allowed by nature, we are still left with a probabilistic prediction for what will happen next. In classical physics knowing the state (e.g., position and velocity of a particle) allows perfect future prediction. Everett suggested we simply remove wavefunction collapse from the theory. Then the state evolves in time always according to the Schrodinger equation. Suppose we follow our electron state through a device which measures its spin. For example: by deflecting the electron using a magnetic field and recording the spin-dependent path of the deflected electron using a detector which amplifies the result. The result is recorded in some macroscopic way: e.g., a red or green bulb lights up depending on whether deflection was up or down.
The whole process is described by the Schrodinger equation, with the final state being (state)   =   (up) (device recorded up)   +   (down) (device recorded down) Do the other worlds exist? Can we interact with them? These are the tricky questions remaining... Some eminent physicists who (as far as I can tell) believe in MW: Feynman, Gell-Mann, Hawking, Steve Weinberg, Bryce DeWitt, David Deutsch, ... In fact, I was told that Feynman and Gell-Mann each claim(ed) to have independently invented MW, without any knowledge of Everett! Many lives in many worlds Max Tegmark, Nature Almost all of my colleagues have an opinion about it, but almost none of them have read it. The first draft of Hugh Everett's PhD thesis, the shortened official version of which celebrates its 50th birthday this year, is buried in the out-of-print book The Many-Worlds Interpretation of Quantum Mechanics. I remember my excitement on finding it in a small Berkeley book store back in grad school, and still view it as one of the most brilliant texts I've ever read. By the time Everett started his graduate work with John Archibald Wheeler at Princeton University in New Jersey quantum mechanics had chalked up stunning successes in explaining the atomic realm, yet debate raged on as to what its mathematical formalism really meant. I was fortunate to get to discuss quantum mechanics with Wheeler during my postdoctorate years in Princeton, but never had the chance to meet Everett. Quantum mechanics specifies the state of the Universe not in classical terms, such as the positions and velocities of all particles, but in terms of a mathematical object called a wavefunction. According to the Schrödinger equation, this wavefunction evolves over time in a deterministic fashion that mathematicians term 'unitary'. Although quantum mechanics is often described as inherently random and uncertain, there is nothing random or uncertain about the way the wavefunction evolves. The sticky part is how to connect this wavefunction with what we observe. Many legitimate wavefunctions correspond to counterintuitive situations, such as Schrödinger's cat being dead and alive at the same time in a 'superposition' of states. In the 1920s, physicists explained away this weirdness by postulating that the wavefunction 'collapsed' into some random but definite classical outcome whenever someone made an observation. This add-on had the virtue of explaining observations, but rendered the theory incomplete, because there was no mathematics specifying what constituted an observation — that is, when the wavefunction was supposed to collapse. Everett's theory is simple to state but has complex consequences, including parallel universes. The theory can be summed up by saying that the Schrödinger equation applies at all times; in other words, that the wavefunction of the Universe never collapses. That's it — no mention of parallel universes or splitting worlds, which are implications of the theory rather than postulates. His brilliant insight was that this collapse-free quantum theory is, in fact, consistent with observation. Although it predicts that a wavefunction describing one classical reality gradually evolves into a wavefunction describing a superposition of many such realities — the many worlds — observers subjectively experience this splitting merely as a slight randomness (see 'Not so random'), with probabilities consistent with those calculated using the wavefunction-collapse recipe.
Gaining acceptance It is often said that important scientific discoveries go through three phases: first they are completely ignored, then they are violently attacked, and finally they are brushed aside as well known. Everett's discovery was no exception: it took more than a decade before it started getting noticed. But it was too late for Everett, who left academia disillusioned [1]. Everett's no-collapse idea is not yet at stage three, but after being widely dismissed as too crazy during the 1970s and 1980s, it has gradually gained more acceptance. At an informal poll taken at a conference on the foundations of quantum theory in 1999, physicists rated the idea more highly than the alternatives, although many more physicists were still 'undecided' [2]. I believe the upward trend is clear. Why the change? I think there are several reasons. Predictions of other types of parallel universes from cosmological inflation and string theory have increased tolerance for weird-sounding ideas. New experiments have demonstrated quantum weirdness in ever larger systems. Finally, the discovery of a process known as decoherence has answered crucial questions that Everett's work had left dangling. For example, if these parallel universes exist, why don't we perceive them? Quantum superpositions cannot be confined — as most quantum experiments are — to the microworld. Because you are made of atoms, if atoms can be in two places at once in superposition, so can you. The breakthrough came in 1970 with a seminal paper by H. Dieter Zeh, who showed that the Schrödinger equation itself gives rise to a type of censorship. This effect became known as 'decoherence', and was worked out in great detail by Wojciech Zurek, Zeh and others over the following decades. Quantum superpositions were found to remain observable only as long as they were kept secret from the rest of the world. The quantum card in our example (see 'Not so random') is constantly bumping into air molecules, photons and so on, which thereby find out whether it has fallen to the left or to the right, destroying the coherence of the superposition and making it unobservable. Decoherence also explains why states resembling classical physics have special status: they are the most robust to decoherence. Science or philosophy? The main motivation for introducing the notion of random wavefunction collapse into quantum physics had been to explain why we perceive probabilities and not strange macroscopic superpositions. After Everett had shown that things would appear random anyway (see 'Not so random') and decoherence had been found to explain why we never perceive anything strange, much of this motivation was gone. Even though the wavefunction technically never collapses in the Everett view, it is generally agreed that decoherence produces an effect that looks like a collapse and smells like a collapse. In my opinion, it is time to update the many quantum textbooks that introduce wavefunction collapse as a fundamental postulate of quantum mechanics. The idea of collapse still has utility as a calculational recipe, but students should be told that it is probably not a fundamental process violating the Schrödinger equation so as to avoid any subsequent confusion. If you are considering a quantum textbook that does not mention Everett and decoherence in the index, I recommend buying a more modern one.
After 50 years we can celebrate the fact that Everett's interpretation is still consistent with quantum observations, but we face another pressing question: is it science or mere philosophy? The key point is that parallel universes are not a theory in themselves, but a prediction of certain theories. For a theory to be falsifiable, we need not observe and test all its predictions — one will do. Because Einstein's general theory of relativity has successfully predicted many things we can observe, we also take seriously its predictions for things we cannot, such as the internal structure of black holes. Analogously, successful predictions by unitary quantum mechanics have made scientists take more seriously its other predictions, including parallel universes. Moreover, Everett's theory is falsifiable by future lab experiments: no matter how large a system they probe, it says, they will not observe the wavefunction collapsing. Indeed, collapse-free superpositions have been demonstrated in systems with many atoms, such as carbon-60 molecules. Several groups are now attempting to create quantum superpositions of objects involving 10^17 atoms or more, tantalizingly close to our human macroscopic scale. There is also a global effort to build quantum computers which, if successful, will be able to factor numbers exponentially faster than classical computers, effectively performing parallel computations in Everett's parallel worlds. The bird perspective So Everett's theory is testable and so far agrees with observation. But should you really believe it? When thinking about the ultimate nature of reality, I find it useful to distinguish between two ways of viewing a physical theory: the outside view of a physicist studying its mathematical equations, like a bird surveying a landscape from high above, and the inside view of an observer living in the world described by the equations, like a frog being watched by the bird. From the bird perspective, Everett's multiverse is simple. There is only one wavefunction, and it evolves smoothly and deterministically over time without any kind of splitting or parallelism. The abstract quantum world described by this evolving wavefunction contains within it a vast number of classical parallel storylines (worlds), continuously splitting and merging, as well as a number of quantum phenomena that lack a classical description. From their frog perspective, observers perceive only a tiny fraction of this full reality, and they perceive the splitting of classical storylines as quantum randomness. What is more fundamental — the frog perspective or the bird perspective? In other words, what is more basic to you: human language or mathematical language? If you opt for the former, you would probably prefer a 'many words' interpretation of quantum mechanics, where mathematical simplicity is sacrificed to collapse the wavefunction and eliminate parallel universes. But if you prefer a simple and purely mathematical theory, then you — like me — are stuck with the many-worlds interpretation. If you struggle with this you are in good company: in general, it has proved extremely difficult to formulate a mathematical theory that predicts everything we can observe and nothing else — and not just for quantum physics. Moreover, we should expect quantum mechanics to feel counterintuitive, because evolution endowed us with intuition only for those aspects of physics that had survival value for our distant ancestors, such as the trajectories of flying rocks. The choice is yours.
But I worry that if we dismiss theories such as Everett's because we can't observe everything or because they seem weird, we risk missing true breakthroughs, perpetuating our instinctive reluctance to expand our horizons. To modern ears the Shapley–Curtis debate of 1920 about whether there was really a multitude of galaxies (parallel universes by the standards of the time) sounds positively quaint. If we dismiss theories because they seem weird, we risk missing true breakthroughs. Everett asked us to acknowledge that our physical world is grander than we had imagined, a humble suggestion that is probably easier to accept after the recent breakthroughs in cosmology than it was 50 years ago. I think Everett's only mistake was to be born ahead of his time. In another 50 years, I believe we will be more used to the weird ways of our cosmos, and even find its strangeness to be part of its charm. Saturday, July 14, 2007 Behavioral economics I found this overview and intellectual history of behavioral economics via a link from Economist's View. By now I think anyone who has looked at the data knows that the agents -- i.e., humans -- participating in markets are limited in many ways. (Only a mathematics-fetishizing autistic, completely disconnected from empiricism, could have thought otherwise.) If the agents aren't reliable or even particularly good processors of information, how does the system find its neoclassical equilibrium? (Can one even define the equilibrium if there are not individual and aggregate utility functions?) The next stage of the argument is whether the market magically aggregates the decisions of the individual agents in such a way that their errors cancel. In some simple cases (see Wisdom of Crowds for examples) this may be the case, but in more complicated markets I suspect (and the data apparently show; see below) that cancellation does not occur and outcomes are suboptimal. Where does this leave neoclassical economics? You be the judge! Related posts here (Mirowski) and here (irrational voters and rational agents?). The paper (PDF) is here. Some excerpts below. Opening quote from Samuelson and Conclusions: I wonder how much economic theory would be changed if [..] found to be empirically untrue. I suspect, very little. --Paul Samuelson Samuelson’s claim at the beginning of this paper that a falsification would have little effect on his economics remains largely an open question. On the basis of the overview provided in this paper, however, two developments can be observed. With respect to the first branch of behavioral economics, Samuelson is probably right. Although the first branch proposes some radical changes to traditional economics, it protects Samuelson’s economics by labeling it a normative theory. Kahneman, Tversky, and Thaler propose a research agenda that sets economics off in a different direction, but at the same time saves traditional economics as the objective anchor by which to stay on course. The second branch in behavioral economics is potentially much more destructive. It rejects Samuelson’s economics both as a positive and as a normative theory. By doubting the validity of the exogeneity of preference assumption, introducing the social environment as an explanatory factor, and promoting neuroscience as a basis for economics, it offers a range of alternatives for traditional economics. With game theory it furthermore possesses a powerful tool that is increasingly used in a number of related other sciences. ... 
Kahneman and Tversky: Over the past ten years Kahneman has gone one step beyond showing how traditional economics descriptively fails. Especially prominent, both in the number of publications Kahneman devotes to it and in the attention it receives, is his reinterpretation of the notion of utility [13]. For Kahneman, the main reason that people do not make their decisions in accordance with the normative theory is that their valuation and perception of the factors of these choices systematically differ from the objective valuation of these factors. This is what, among many other articles, Kahneman and Tversky (1979) shows. People’s subjective perception of probabilities and their subjective valuation of utility differ from their objective values. A theory that attempts to describe people’s decision behavior in the real world should thus start by measuring these subjective values of utility and probability. ... Thaler distinguishes his work, and behavioral economics generally, from the experimental economics of, for instance, Vernon Smith and Charles Plott. Although Thaler’s remarks in this respect are scattered and mostly made in passing, two recurring arguments can be observed. Firstly, Thaler rejects experimental economics’ suggestion that the market (institutions) will correct the quasi-rational behavior of the individual. Simply put, if one extends the coffee-mug experiment described above with an (experimental) market in which subjects can trade their mugs, the endowment effect doesn’t change one single bit. Furthermore, there is no way in which a rational individual could use the market system to exploit quasi-rational individuals in the case of this endowment effect [36]. The implication is that quasi-rational behavior can survive. As rational agents cannot exploit quasi-rational behavior, and as there seems in most cases to be no ‘survival penalty’ on quasi-rational behavior, the evolutionary argument doesn’t work either. Secondly, experimental economics’ market experiments are not convincing according to Thaler. They rest on two wrong assumptions. First of all, they assume that individuals will quickly learn from their mistakes and discover the right solution. Thaler recounts how this has been falsified in numerous experiments. On the contrary, it is often the case that even when the correct solution has been repeatedly explained to them, individuals still persist in making the wrong decision. A second false assumption of experimental economics is to suppose that in the real world there exists ample opportunity to learn. This is labeled the Groundhog Day argument [37], in reference to a well-known movie starring Bill Murray. ... Subjects in (market) experiments who have to play the exact same game for tens or hundreds of rounds may perhaps be observed to (slowly) adjust to the rational solution. But real life is more like a constant sequence of the first few rounds of an experiment. The learning assumption of experimental economics is thus not valid. But perhaps even more destructive for economics is the fact that individuals’ intertemporal choices can be shown to be fundamentally inconsistent [49]. People who prefer A now over B now also prefer A in one month over B in two months. However, at the same time they also prefer B in one month and A in two months over A in one month and B in two months.
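To see why that last pair of preferences is inconsistent with standard theory (this is my reconstruction of the argument, not a quote from the paper): let u be the utility function and let w_1 > w_2 > 0 be the discount weights attached to the one-month and two-month dates, which covers exponential, hyperbolic, or any other discounting with no negative time preference. Preferring A to B at a common date means u(A) > u(B), and then

w_1 u(A) + w_2 u(B) - [ w_1 u(B) + w_2 u(A) ] = (w_1 - w_2) (u(A) - u(B)) > 0,

so "A in one month, B in two" must be preferred to "B in one month, A in two". The reported preference for saving the better outcome for later therefore cannot be accommodated by any additively separable discounted-utility model, which is the sense in which the choices are fundamentally inconsistent.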
The ultimatum game (player one proposes a division of a fixed sum of money, player two either accepts (the money is divided according to the proposed division), or rejects (both players get nothing)) has been played all over the world and leads always to the result that individuals do not play the ‘optimum’ (player one proposes the smallest amount possible to player two and player two accepts), but typically divide the money about half-half. The phenomenon is remarkably stable around the globe. However, the experiments have only been done with university students in advanced capitalist economies. The question is thus whether the results hold when tested in other environments. The surprising result is not so much that the average proposed and accepted divisions in the small-scale societies differ from those of university students, but how they differ. Roughly, the average proposed and accepted divisions go from [80%,20%] to [40%,60%]. The members of the different societies thus show a remarkable difference in the division they propose and accept. ...“preferences over economic choices are not exogenous as the canonical model would have it, but rather are shaped by the economic and social interactions of everyday life. ..." Camerer’s critique is similar to Loewenstein’s and can perhaps best be summed up with the conclusion that for Camerer there is no invisible hand. That is, for Camerer nothing mysterious happens between the behavior of the individual and the behavior of the market. If you know the behavior of the individuals, you can add up these behaviors to obtain the behavior of the market. In Anderson and Camerer (2000), for instance, it is shown that even when one allows learning to take place, a key issue for experimental economics, the game does not necessarily go to the global optimum, but as a result of path-dependency may easily get stuck in a sub-optimum. Camerer (1987) shows that, contrary to the common belief in experimental economics, decision biases persist in markets. In a laboratory experiment Camerer finds that a market institution does not reduce biases but may even increase them. ... The second branch of behavioral economics is organized around Camerer, Loewenstein, and Laibson. It considers the uncertainty of the decision behavior to be of an endogenous or strategic nature. That is, the uncertainty depends upon the fact that, like the individual, also the rest of the world tries to make the best decision. The most important theory to investigate individual decision behavior under endogenous uncertainty is game theory. The second branch of behavioral economics draws less on Kahneman and Tversky. What it takes from them is the idea that traditional Samuelson economics is plainly false. It argues, however, that traditional economics is both positively/descriptively and normatively wrong. Except for a few special cases, it neither tells how the individuals behave, nor how they should behave. The main project of the second branch is hence to build new positive theories of rational individual economic behavior under endogenous uncertainty. And here the race is basically still open. Made in China In an earlier post I linked to Bunnie Huang's blog, which describes (among other things) the manufacturing of his startup's hi-tech Chumby gadget in Shenzhen. At Foo Camp he and I ran a panel on the Future of China. In the audience, among others, were Jimmy Wales, the founder of Wikipedia, and Guido van Rossum, the creator of Python. 
Jimmy was typing on his laptop the whole time, but Guido asked a bunch of questions and recommended a book to me. Bunnie has some more posts up (including video) giving his impressions of manufacturing in China. Highly recommended! Made in China: Scale, Skill, Dedication, Feeding the factory. Below: Bunnie on the line, debugging what turns out to be a firmware problem with the Chumby. Look at those MIT wire boys go! :-) Wednesday, July 11, 2007 Hedge funds or market makers? To what extent are Citadel, DE Shaw and Renaissance really just big market makers? The essay excerpted below is by Harry Kat, a finance professor and former trader who was profiled in the New Yorker recently. First, from the New Yorker piece: It is notoriously difficult to distinguish between genuine investment skill and random variation. But firms like Renaissance Technologies, Citadel Investment Group, and D. E. Shaw appear to generate consistently high returns and low volatility. Shaw’s main equity fund has posted average annual returns, after fees, of twenty-one per cent since 1989; Renaissance has reportedly produced even higher returns. (Most of the top-performing hedge funds are closed to new investors.) Kat questioned whether such firms, which trade in huge volumes on a daily basis, ought to be categorized as hedge funds at all. “Basically, they are the largest market-making firms in the world, but they call themselves hedge funds because it sells better,” Kat said. “The average horizon on a trade for these guys is something like five seconds. They earn the spread. It’s very smart, but their skill is in technology. It’s in sucking up tick-by-tick data, processing all those data, and converting them into second-by-second positions in thousands of spreads worldwide. It’s just algorithmic market-making.” Next, the essay from Kat's academic web site. I suspect Kat exaggerates, but he does make an interesting point. Could a market maker really deliver such huge alpha? Only if it knows exactly where and when to take a position! Of Market Makers and Hedge Funds David and Ken both work for a large market making firm and both have the same dream: to start their own company. One day, David decides to quit his job and start a traditional market-making company. He puts in $10m of his own money and finds 9 others that are willing to do the same. The result: a company with $100m in equity, divided equally over 10 shareholders, meaning that each shareholder will share equally in the company's operating costs and P&L. David will manage the company and will receive an annual salary of $1m for doing so. Ken decides to quit as well. He is going to do things differently though. Instead of packaging his market-making activities in the traditional corporate form, he is going to start a hedge fund. Like David, he also puts in $10m of his own money. Like David, he also finds 9 others willing to do the same. They are not called shareholders, however. They are investors in a hedge fund with a net asset value of $100m. Just like David, Ken has a double function. Apart from being one of the 10 investors in the fund, he will also be the fund's manager. As manager, he is entitled to 20% of the profit (over a 5% hurdle rate); the average incentive fee in the hedge fund industry. At first sight, it looks like David and Ken have accomplished the same thing. Both have a market-making operation with $100m in capital and 9 others to share the benefits with. There is, however, one big difference. Suppose David and Ken both made a net $100m.
In David's company this would be shared equally between the shareholders, meaning that, including his salary, David would receive $11m. In Ken's hedge fund things are different, however. As the manager of the fund, he takes 20% of the profit, which, taking into account the $5m hurdle, would leave $81m to be divided among the 10 investors. Since he is also one of those 10 investors, however, this means that Ken would pocket a whopping $27.1m in total. Now suppose that both David and Ken lost $100m. In that case David would lose $9m, but Ken would still only lose $10m since as the fund's manager Ken gets 20% of the profit, but he does not participate in any losses. So if you wanted to be a market maker, how would you set yourself up? Of course, we are not the first to think of this. Some of the largest market maker firms in the world disguise themselves as hedge funds these days. Their activities are typically classified under fancy hedge fund names such as 'statistical arbitrage' or 'managed futures', but basically these funds are market makers. This includes some of the most admired names in the hedge fund business such as D.E. Shaw, Renaissance, Citadel, and AHL, all of which are, not surprisingly, notorious for the sheer size of their daily trading volumes and their fairly consistent alpha. The above observation leads to a number of fascinating questions. The most interesting of these is of course how much of the profits of these market-making hedge funds stems from old-fashioned market making and how much is due to truly special insights and skill? Is the bulk of what these funds do very similar to what traditional market-making firms do, or are they responsible for major innovations and/or have they embedded major empirical discoveries in their market making? They tend to employ lots of PhDs and make a lot of fuss about only hiring the best, etc. However, how much of that is window-dressing and how much is really adding value? Another question is whether market-making hedge funds get treated differently than traditional market makers when they go out to borrow money or securities. Given prime brokers' eagerness to service hedge funds these days, one might argue that in this respect market-making hedge funds are again better off than traditional market makers. So what is the conclusion? First of all, given the returns posted by the funds mentioned, it appears that high volume multi-market market making is a very good business to be in. Second, it looks like there could be a trade-off going on. Market-making hedge funds take a bigger slice of the pie, but the pie might be significantly bigger as well. Obviously, all of this could do with quite a bit more research. See if I can put a PhD on it. Monday, July 09, 2007 Theorists in diaspora Passing the time, two former theoretical physicists analyze a research article which only just appeared on the web. Between them, they manage over a billion dollars in hedge fund assets. While their computers process data in the background, vacuuming up nickels from the trading ether, the two discuss color magnetic flux, quark gluon plasma and acausal correlations. For fun, one of the two emails the paper to a former colleague, a humble professor still struggling with esoteric research... Quark-gluon plasma paradox D. Miskowiec Gesellschaft für Schwerionenforschung mbH, Planckstr.
Monday, July 09, 2007

Theorists in diaspora

Passing the time, two former theoretical physicists analyze a research article which only just appeared on the web. Between them, they manage over a billion dollars in hedge fund assets. While their computers process data in the background, vacuuming up nickels from the trading ether, the two discuss color magnetic flux, quark gluon plasma and acausal correlations. For fun, one of the two emails the paper to a former colleague, a humble professor still struggling with esoteric research...

Quark-gluon plasma paradox
D. Miskowiec
Gesellschaft fur Schwerionenforschung mbH, Planckstr. 1, 64291 Darmstadt

Based on simple physics arguments it is shown that the concept of quark-gluon plasma, a state of matter consisting of uncorrelated quarks, antiquarks, and gluons, has a fundamental problem.

The result? The following email message.

Dear Dr. Miskowiec,

I read your interesting preprint on a possible QGP paradox. My comments are below.

Best regards,
Stephen Hsu

In the paper it seems you are discussing a caricature of QGP, indeed a straw man. I don't know whether belief in this straw man is widespread among nuclear theorists; perhaps it is. But QGP is, after all, merely the high temperature phase of QCD. There *are* correlations (dynamics) that lead to preferential clustering of quarks into color neutral objects. These effects are absent at length scales much smaller than a fermi, due to asymptotic freedom. It is only on these short length scales that one can treat QCD as a (nearly) free gas of quarks and gluons. On sufficiently long length scales (i.e., much larger than a fermi) the system would still prefer to be color neutral. While it is true that at high temperatures the *linear* (confining) potential between color charges is no longer present, there is still an energetic cost for unscreened charge.

It's a standard result in finite temperature QCD that, even at high temperatures, there are still infrared (long distance) nonperturbative effects. These are associated with a scale related to the magnetic screening length of gluons. The resulting dynamics are never fully perturbative, although thermodynamic quantities such as entropy density, pressure, etc. are close to those of a free gas of quarks and gluons. The limit to our ability to compute these thermodynamic quantities beyond a certain level in perturbation theory arises from the nonperturbative effects I mention.

Consider the torus of QGP you discuss in your paper. Suppose I make a single "cut" in the torus, possibly separating quarks from each other in a way that leaves some uncancelled color charge. Once I pull the two faces apart by more than some distance (probably a few fermis), effects such as preferential hadronization into color neutral, integer baryon number objects come into play. The energy required to make the cut and pull the faces apart is more than enough to create q-qbar pairs from the vacuum that can color neutralize each face. Note this is a *local* phenomenon taking place on fermi length scales.

I believe the solution to your paradox is the third possibility you list. See below, taken from the paper, bottom of column 1, p. 3. I only disagree with the last sentence: high temperature QCD is *not* best described as a gas of hadrons, but *does* prefer color neutrality. No rigorous calculation ever claimed a lack of correlations except at very short distances (due to asymptotic freedom).

...The third possibility is that local correlations between quarks make some cutting surfaces more probable than the others when it comes to cutting the ring and starting the hadronization. Obviously, in absence of such correlations the QGP ring basically looks like in Fig. 3 and no preferred breaking points can be recognized. If, however, some kind of interactions lead to clustering of quarks and gluons into (white) objects of integer baryon numbers like in Fig. 4 then starting hadronization from several points of the ring at the same time will not lead to any problem. However, this kind of matter would be hadron resonance matter rather than the QGP.
Cooking the books: US News college rankings

I found this amusing article from Slate. It turns out the dirty scoundrels at US News need a "logarithmic adjuster" (fudge factor) to keep Caltech from coming out ahead of HYP (Harvard-Yale-Princeton). Note the article is from back in 2000. The earlier Gottlieb article mentioned below discussing the 1999 rankings (where Caltech came out number 1) is here. For revealed preferences rankings of universities (i.e., where do students really choose to go when they are admitted to more than one school), see here.

Cooking the School Books (Yet Again)
The U.S. News college rankings get phonier and phonier.
By Nicholas Thompson
Posted Friday, Sept. 15, 2000, at 3:00 AM ET

This year, according to U.S. News & World Report, Princeton is the best university in the country and Caltech is No. 4. This represents a pretty big switcheroo—last year, Caltech was the best and Princeton the fourth. Of course, it's not as though Caltech degenerated or Princeton improved over the past 12 months. As Bruce Gottlieb explained last year in Slate, changes like this come about mainly because U.S. News fiddles with the rules. Caltech catapulted up in 1999 because U.S. News changed the way it compares per-student spending; Caltech dropped back this year because the magazine decided to pretty much undo what it did last year.

But I think Gottlieb wasn't quite right when he said that U.S. News makes changes in its formula just so that colleges will bounce around and give the annual rankings some phony drama. The magazine's motives are more devious than that. U.S. News changed the scores last year because a new team of editors and statisticians decided that the books had been cooked to ensure that Harvard, Yale, or Princeton (HYP) ended up on top. U.S. News changed the rankings back because those editors and statisticians are now gone and the magazine wanted HYP back on top. Just before the latest scores came out, I wrote an article in the Washington Monthly suggesting that this might happen. Even so, the fancy footwork was a little shocking.

The story of how the rankings were cooked goes back to 1987, when the magazine's first attempt at a formula put a school in first that longtime editor Mel Elfin says he can't even remember, except that it wasn't HYP. So Elfin threw away that formula and brought in a statistician named Robert Morse, who produced a new one. This one put HYP on top, and Elfin frankly defends his use of this result to vindicate the process. He told me, "When you're picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it's not right."

For the next decade, Elfin and Morse essentially ran the rankings as their own fiefdom, and no one else at the magazine really knew how the numbers worked. But during a series of recent leadership changes, Morse and Elfin moved out of their leadership roles and a new team came in. What they found, they say, was a bizarre statistical measure that discounted major differences in spending, for what seemed to be the sole purpose of keeping HYP at the top. So, last year, as U.S. News itself wrote, the magazine "brought [its] methodology into line with standard statistical procedure." With these new rankings, Caltech shot up and HYP was displaced for the first time ever.

But the credibility of rankings like these depends on two semiconflicting rules. First, the system must be complicated enough to seem scientific.
And second, the results must match, more or less, people's nonscientific prejudices. Last year's rankings failed the second test. There aren't many Techie graduates in the top ranks of U.S. News, and I'd be surprised if The New Yorker has published a story written by a Caltech grad, or even by someone married to one, in the last five years. Go out on the streets of Georgetown by the U.S. News offices and ask someone about the best college in the country. She probably won't start to talk about those hallowed labs in Pasadena.

So, Morse was given back his job as director of data research, and the formula was juiced to put HYP back on top. According to the magazine: "[W]e adjusted each school's research spending according to the ratio of its undergraduates to graduate students ... [and] we applied a logarithmic adjuster to all spending values." If you're not up on your logarithms, here's a translation: If a school spends tons and tons of money building machines for its students, it only gets a little bit of credit. It got lots last year—but that was a mistake. Amazingly, the only categories where U.S. News applies this logarithmic adjuster are also the only categories where Caltech has a huge lead over HYP.

The fact that the formulas had to be rearranged to get HYP back on top doesn't mean that those three aren't the best schools in the country, whatever that means. After all, who knows whether last year's methodology was better than this year's? Is a school's quality more accurately measured by multiplying its spending per student by 0.15 or by applying a logarithmic adjuster to that value? A case could also be made for taking the square root.

But the logical flaw in U.S. News' methodology should be obvious—at least to any Caltech graduate. If the test of a mathematical formula's validity is how closely the results it produces accord with pre-existing prejudices, then the formula adds nothing to the validity of the prejudice. It's just for show. And if you fiddle constantly with the formula to produce the result you want, it's not even good for that. U.S. News really only has one justification for its rankings: They must be right because the schools we know are the best come out on top. Last year, that logic fell apart. This year, the magazine has straightened it all out and HYP's back in charge—with the help of a logarithmic adjuster.

Nicholas Thompson is a senior editor at Legal Affairs.
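To make Thompson's "translation" concrete, here is a minimal sketch of how different transforms treat a large per-student spending lead. The dollar figures are invented purely for illustration (they are not U.S. News data); the point is only that a logarithm, unlike a flat 0.15 multiplier, nearly erases the lead of an outlier like Caltech:

```python
# How different transforms treat a large per-student spending gap.
# The spending figures below are hypothetical, chosen only to illustrate
# the compression; they are not actual U.S. News inputs.
import math

outlier, typical = 190_000, 60_000   # hypothetical per-student spending ($)

transforms = {
    "raw spending":    lambda x: x,
    "0.15 * spending": lambda x: 0.15 * x,
    "sqrt(spending)":  math.sqrt,
    "log(spending)":   math.log,
}

for name, f in transforms.items():
    print(f"{name:>15}:  outlier / typical = {f(outlier) / f(typical):.2f}")
```

A roughly threefold spending lead is untouched by a linear rescaling, shrinks under a square root, and collapses to about a 10% edge under a log, which is the "little bit of credit" Thompson describes.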
Sunday, July 08, 2007

Myth of the Rational Voter

The New Yorker has an excellent discussion by Louis Menand of Bryan Caplan's recent book The Myth of the Rational Voter. Best sentence in the article (I suppose this applies to physicists as well):

Caplan is the sort of economist (are there other sorts? there must be) who engages with the views of non-economists in the way a bulldozer would engage with a picket fence if a bulldozer could express glee.

Short summary (obvious to anyone who has thought about democracy): voters are clueless, and resulting policies and outcomes are suboptimal, but allowing everyone to have their say lends stability and legitimacy to the system. Democracy is a tradeoff, of course! While a wise and effective dictator (e.g., Lee Kuan Yew of Singapore, or, in Caplan's mind, a board of economic "experts") might outperform the electorate over a short period of time, the more common kind of dictator (stupid, egomaniacal) is capable of much, much worse. Without democracy, what keeps a corrupt and stupid dictator from succeeding the efficient and benevolent one?

The analogous point for markets is that, for a short time (classic example: during a war), good central planning might be more effective for certain goals than market mechanisms. But over the long haul distributing the decisions over many participants will give a better outcome, both because of the complexity of economic decision making (e.g., how many bagels does NYC need each day? can a committee figure this out?) and because of the eventuality of bad central planning. When discussing free markets, people on the left always assume the alternative is good central planning, while those on the right always assume the opposite.

Returning to Caplan, his view isn't just that voters are uninformed or stupid. He attacks an apparently widely believed feel-good story that says although most voters are clueless their mistakes are random and magically cancel out when aggregated, leaving the outcome in the hands of the wise fraction of the electorate. What a wonderfully fine-tuned dynamical system! (That is how markets are supposed to work, except when they don't, and instead horribly misprice things.) Caplan points out several common irrationalities of voters that do not cancel out, but rather tend to bias government in particular directions.

Any data or argument supporting the irrationality of voters and suboptimality of democratic outcomes can be applied just as well to agents in markets. (What Menand calls "shortcuts" below others call heuristics or bounded cognition.) The claim that people make better decisions in market situations (e.g., buying a house or choosing a career) because they are directly affected by the outcome is only marginally convincing to me. Evaluating the optimality of many economic decisions is about as hard as figuring out whether a particular vote or policy decision was optimal. Did your vote for Nader lead to G.W. Bush and the Iraq disaster? Did your votes for Reagan help end the cold war safely and in our favor? Would you have a higher net worth if you had bought a smaller house and invested the rest of your down payment in equities? Would the extra money in the bank compensate you for the reduced living space? Do typical people sit down and figure these things out? Do they come to correct conclusions, or just fool themselves? I doubt most people could even agree as to Reagan's effect on the cold war, over 20 years ago!

I don't want to sound too negative. Let me clarify, before one of those little bulldozers engages with me :-) I regard markets as I regard democracy: flawed and suboptimal, but the best practical mechanisms we have for economic distribution and governance, respectively. My main dispute is with academics who really believe that woefully limited agents are capable of finding global optima.

The average voter is not held in much esteem by economists and political scientists, and Caplan rehearses some of the reasons for this. The argument of his book, though, is that economists and political scientists have misunderstood the problem. They think that most voters are ignorant about political issues; Caplan thinks that most voters are wrong about the issues, which is a different matter, and that their wrong ideas lead to policies that make society as a whole worse off. We tend to assume that if the government enacts bad policies, it's because the system isn't working properly—and it isn't working properly because voters are poorly informed, or they're subject to demagoguery, or special interests thwart the public's interest.
Caplan thinks that these conditions are endemic to democracy. They are not distortions of the process; they are what you would expect to find in a system designed to serve the wishes of the people. "Democracy fails," he says, "because it does what voters want." It is sometimes said that the best cure for the ills of democracy is more democracy. Caplan thinks that the best cure is less democracy. He doesn't quite say that the world ought to be run by economists, but he comes pretty close.

For fifty years, it has been standard to explain voter ignorance in economic terms. Caplan cites Anthony Downs's "An Economic Theory of Democracy" (1957): "It is irrational to be politically well-informed because the low returns from data simply do not justify their cost in time and other resources." In other words, it isn't worth my while to spend time and energy acquiring information about candidates and issues, because my vote can't change the outcome. I would not buy a car or a house without doing due diligence, because I pay a price if I make the wrong choice. But if I had voted for the candidate I did not prefer in every Presidential election since I began voting, it would have made no difference to me (or to anyone else). It would have made no difference if I had not voted at all. This doesn't mean that I won't vote, or that, when I do vote, I won't care about the outcome. It only means that I have no incentive to learn more about the candidates or the issues, because the price of my ignorance is essentially zero. According to this economic model, people aren't ignorant about politics because they're stupid; they're ignorant because they're rational. If everyone doesn't vote, then the system doesn't work. But if I don't vote, the system works just fine. So I find more productive ways to spend my time.

Political scientists have proposed various theories aimed at salvaging some dignity for the democratic process. One is that elections are decided by the ten per cent or so of the electorate who are informed and have coherent political views. In this theory, the votes of the uninformed cancel each other out, since their choices are effectively random: they are flipping a coin. So candidates pitch their appeals to the informed voters, who decide on the merits, and this makes the outcome of an election politically meaningful.

Another argument is that the average voter uses "shortcuts" to reach a decision about which candidate to vote for. The political party is an obvious shortcut: if you have decided that you prefer Democrats, you don't really need more information to cast your ballot. Shortcuts can take other forms as well: the comments of a co-worker or a relative with a reputation for political wisdom, or a news item or photograph (John Kerry windsurfing) that can be used to make a quick-and-dirty calculation about whether the candidate is someone you should support. (People argue about how valid these shortcuts are as substitutes for fuller information, of course.)

There is also the theory of what Caplan calls the Miracle of Aggregation. As James Surowiecki illustrates in "The Wisdom of Crowds" (2004), a large number of people with partial information and varying degrees of intelligence and expertise will collectively reach better or more accurate results than will a small number of like-minded, highly intelligent experts. Stock prices work this way, but so can many other things, such as determining the odds in sports gambling, guessing the number of jelly beans in a jar, and analyzing intelligence.
An individual voter has limited amounts of information and political sense, but a hundred million voters, each with a different amount of information and political sense, will produce the "right" result. Then, there is the theory that people vote the same way that they act in the marketplace: they pursue their self-interest. In the market, selfish behavior conduces to the general good, and the same should be true for elections.

Caplan thinks that democracy as it is now practiced cannot be salvaged, and his position is based on a simple observation: "Democracy is a commons, not a market." A commons is an unregulated public resource—in the classic example, in Garrett Hardin's essay "The Tragedy of the Commons" (1968), it is literally a commons, a public pasture on which anyone may graze his cattle. It is in the interest of each herdsman to graze as many of his own cattle as he can, since the resource is free, but too many cattle will result in overgrazing and the destruction of the pasture. So the pursuit of individual self-interest leads to a loss for everyone. (The subject Hardin was addressing was population growth: someone may be concerned about overpopulation but still decide to have another child, since the cost to the individual of adding one more person to the planet is much less than the benefit of having the child.)

...But, as Caplan certainly knows, though he does not give sufficient weight to it, the problem, if it is a problem, is more deeply rooted. It's not a matter of information, or the lack of it; it's a matter of psychology. Most people do not think politically, and they do not think like economists, either. People exaggerate the risk of loss; they like the status quo and tend to regard it as a norm; they overreact to sensational but unrepresentative information (the shark-attack phenomenon); they will pay extravagantly to punish cheaters, even when there is no benefit to themselves; and they often rank fairness and reciprocity ahead of self-interest. Most people, even if you explained to them what the economically rational choice was, would be reluctant to make it, because they value other things—in particular, they want to protect themselves from the downside of change. They would rather feel good about themselves than maximize (even legitimately) their profit, and they would rather not have more of something than run the risk, even if the risk is small by actuarial standards, of having significantly less. People are less modern than the times in which they live, in other words, and the failure to comprehend this is what can make economists seem like happy bulldozers. ...
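The Miracle of Aggregation story above is easy to put into code. Here is a minimal simulation, with all parameters invented for illustration: most voters flip a coin, while a small informed minority leans imperfectly toward the objectively better candidate. In a large electorate the coin flips cancel and the minority decides the outcome almost every time. Caplan's objection, recall, is that real voter errors are correlated rather than coin-flip random, so this cancellation is precisely what he disputes.

```python
# Toy simulation of the "Miracle of Aggregation" / informed-minority story.
# All parameters are made up for illustration; the point is only that
# independent random errors cancel, leaving the informed minority decisive.
import random

random.seed(0)


def election(n_voters=10_000, informed_frac=0.10, p_informed_correct=0.7):
    """Return True if the 'better' candidate wins one simulated election."""
    votes_for_better = 0
    for _ in range(n_voters):
        if random.random() < informed_frac:
            # informed voters lean, imperfectly, toward the better candidate
            vote_better = random.random() < p_informed_correct
        else:
            # uninformed voters effectively flip a coin
            vote_better = random.random() < 0.5
        votes_for_better += vote_better
    return votes_for_better > n_voters / 2


wins = sum(election() for _ in range(200))
print(f"Better candidate wins {wins} of 200 simulated elections")
```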