Sacred Solids in the Atomic Nucleus

1.- Introduction

"Our picture of the nucleus is so far different from the accepted picture (actually, there is no accepted picture) as to make any comparison impossible. The trained specialist recognizes immediately that if we are right, the whole edifice of 20th Century atomic physics must be rethought, as Dr. Moon had done. Moon was able to make breakthroughs where others could not, in part because he had a hands-on mastery of the crucial experiments on which the theoretical structure was built. He had done the experiments. Few of his peers had the combination of competence and courage to think in the same way." Laurence Hecht [1]

This article describes a tiny portion of the research conducted by the eminent nuclear physicist Dr. Robert J. Moon (1911-1989) [2], and it also introduces some possible variants of it. I came across Dr. Moon's geometrical model of the atomic nucleus after learning about a nice reorganization of the periodic table of the elements into a tetrahedral structure, the so-called Perfect Periodic Table [3]. That made me ask the question: if the atomic elements can be organized in such a way, could the electrons themselves be geometrically organized in a regular way inside the atom? I am certain that there is a positive answer to this question, one that will reach us in due course. What does exist, though nobody usually tells you about it at school, is a nice, coherent, geometrical model of the arrangement of protons in the atomic nucleus, one that involves the Platonic Solids.

As far as modern science knows, every atom of matter is made of positively charged particles -protons- and neutral particles -neutrons- which are known to be concentrated in the atomic nucleus, and of negatively charged particles -electrons- which are located around it. It is my firm belief that all those particles work together as an ensemble, but current physics separates the study of the nucleus from that of the extra-nuclear space. At present, there is no theoretical model capable of describing in detail the structure of the nucleus. Each of the available nuclear models describes some of the known experimental observations, but there is no definitive model that explains them all. Dr. Moon's nuclear model accounts for some of the periodicities found in many properties of the atomic elements, and it also explains why some elements -like uranium- may participate in nuclear fission.

In the course of my investigations I rediscovered some alternative interpretations of two widely accepted physical theories: electromagnetism and quantum mechanics. I was surprised to find that, well before James Clerk Maxwell published his famous treatise on electromagnetism -the one all electrical engineers are taught- Wilhelm Weber had already proposed a general expression for the electrodynamic force between moving charged particles. In fact, Maxwell's equations for the electromagnetic field can all be derived from Weber's electrodynamics [4]. The interesting point for the present discussion is that Weber's theory predicts a distance below which the force between two charged particles of like sign changes from repulsion to attraction [5]. When the modern constants are substituted into his formula, one obtains the classical electron radius, despite the fact that Weber developed his theory long before the discovery of the atom!
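That numerical claim is easy to check. Below is a minimal sketch that simply evaluates the classical electron radius from modern (CODATA) constants; the identification of this length with Weber's critical distance follows reference [5] and is not derived here.

```python
# Classical electron radius r_e = e^2 / (4*pi*eps0 * m_e * c^2),
# the length scale that reference [5] identifies with Weber's
# critical distance once modern constants are substituted.
import math

e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
m_e  = 9.1093837015e-31  # electron rest mass, kg
c    = 299792458.0       # speed of light, m/s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"classical electron radius: {r_e:.4e} m")  # ~2.818e-15 m
```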
Therefore, according to Weber's theory, positively charged protons inside the nucleus attract, rather than repel, one another.

And why bother about quantum mechanics? Its widely accepted interpretation holds that one can only work with probabilities of finding an atomic particle in a given position inside the atom, and that it does not make sense to talk about the exact position or the trajectory of, say, an electron. Not to mention the possibility that the electron or the proton might have an internal structure! Fortunately, a solid alternative interpretation exists according to which asking those kinds of questions is no longer utter madness. Like Weber's electrodynamics, it is likely unknown to most physicists. I am talking about the De Broglie-Bohm quantum potential. Dr. Moon himself mentioned in one of his interviews [6] that Louis de Broglie and David Bohm had worked together on the quantum potential interpretation of the Schrödinger equation -also known as pilot-wave theory- until de Broglie's death. This interpretation of quantum mechanics, presented by Louis de Broglie at the 1927 Solvay conference [7], not only reproduces exactly the same experimental predictions as the conventional approach -including the two-slit electron interference experiment and the Aharonov-Bohm effect- but it also shows that the probabilistic interpretation of quantum mechanics is a side consequence -not a necessary premise- and that it makes sense to talk about the trajectories and positions of atomic particles!

2.- On the structure of matter

The following figure shows the modern Periodic Table of the elements. Each element is characterised by its atomic number Z, which counts the number of protons (positively charged particles) in its nucleus. This number also indicates the number of electrons (negatively charged particles) that surround the nucleus when the element is in its stable, non-ionized state. Each element is also characterised by its mass number A, which additionally takes into account the number N of neutrons (neutral particles) in the nucleus. Therefore A = Z + N. Depending on the number of neutrons in their nuclei, some elements have different variants known as isotopes, with the same atomic number but different mass numbers. Some isotopes are unstable and rapidly disintegrate. For each element, the Periodic Table depicts the atomic mass averaged over its naturally occurring isotopes; the mass number of the most abundant isotope can generally be obtained by rounding the atomic mass to the nearest integer. The highlighted elements in the table will be discussed in the next section.

Figure 1: Periodic table of the elements. The elements marked with red squares -either dark or light red- mark the completion of a shell in the first set of four shells in Moon's model of the nucleus. Magenta squares mark the elements which I propose might indicate the closure of additional shells not originally considered by Moon. The elements marked with pentagons mark the completion of each shell in the twin dodecahedral structure of Moon's nuclear model.

What can be said about the internal structure of any of those particles? For example, what is an electron? Most physicists agree that it is an entity with the property of a quantum of charge, and that we cannot infer anything about its internal composition from experiments: we are only allowed to talk about the probability of finding an electron somewhere around the nucleus.
Fortunately, as mentioned in the introduction, according to the alternative interpretation of quantum mechanics originally suggested by Louis de Broglie in 1927, it does make sense to think of quantum particles as entities with internal structure that move along determinate -although unknown- trajectories [8]. As to the internal composition of an electron, according to the research conducted by biophysicist Dr. Paulo Correa and his wife, it consists of a quantum of circularized energy in a constant flow that defines a toroidal structure [9]. Dr. Vladimir B. Ginzburg has also developed a simple model of all elementary charged particles in the form of a torus of varying aspect ratio [10].

And what do we know about the internal composition of a proton or a neutron? Physicists have no way of looking inside any such particle, but from experiments in particle colliders they agree that a proton (or a neutron) is composed of three more elementary units called quarks. So, in the domain of nuclear physics, it apparently does make sense to talk about the structure of a particle after all! In fact, some experiments conducted at the Weizmann Institute of Science have shown that electric current can be quantized in units of 1/3 of the quantum of charge [11]. So it is likely that the electron is also composed of some elementary units, but at the moment science does not know this for certain. I suspect that both the proton and the electron have a common internal structure, and that they differ only in size and in their internal property of charge. They could well be scaled versions of one another. In fact, physicists hold that beta particles are electrons emitted by the nucleus in the decay process of a neutron in a radioactive element. Similarly, a neutron can be seen as a proton that has "fused" with an electron. But of course in both cases the size of such a nuclear electron must be comparable to that of the proton. Therefore, everything seems to indicate that electrons in the nucleus are scaled versions of their corresponding extra-nuclear partners! But let's focus on the main topic of the article.

3.- Some properties of the atomic elements

The electrons in the atom are known to be organised in shells, although very little is known about the actual geometric structure of those shells. The noble gases, which are located in the rightmost column of the periodic table (Figure 1), mark the completion of a shell. The leftmost element in the next row marks the beginning of a new shell. Several properties, such as the atomic volume, the melting point, the coefficient of linear expansion and the compressibility factor, have a sharp maximum at the atomic number of each element that starts a new electron shell (3Li, 11Na, 19K, 37Rb, 55Cs). This is shown graphically in Figure 2.

Figure 2: (a) Dependence of atomic volume on atomic number. (b) Dependence on atomic number of (1) the quantity 10^4/T, where T is the melting point; (2) the coefficient of linear expansion α·10^5; and (3) the compressibility factor K·10^6 (adapted from this reference).

However, there are some other remarkable points in those graphs, namely their minima. Dr. Moon proposed that protons are also organised in shells inside the nucleus.
As we will show in the following sections, the completion of each of his proposed proton shells corresponds closely to the atomic elements located at the local minima of the above properties (4Be, 6C, 8O, 14Si, 26Fe, 46Pd, 92U). I have added two elements at the beginning of the series, Beryllium and Carbon, which were not originally proposed by Moon, but which also correspond to local minima and could easily fit into his model, as I will explain later on.

4.- The tetrahedra helix

Before going into detail, let me introduce an analogy that may help us understand the rationale behind Moon's model. Take any construction game with balls and equal-sized rods that can be interconnected to form polyhedra. We start with a triangle (Figure 3a), and at each step we add a new pack of one ball and three rods (Figure 3b). We can imagine this pack as being a (nuclear) particle that joins a set of already established and organised nuclear particles. The triangle contains three balls and three rods. After adding the first pack, we have four balls and six rods. Are those numbers familiar to you -namely four vertices and six equal-sized edges? Of course: in three dimensions they can only be organised in the form of a tetrahedron (Figure 3c)! In addition, it is the most symmetrical way of organising four elements in three-dimensional space, leaving an empty space in the center. I suggest that the four protons of the Beryllium (4Be) nucleus may be organised in such a way. Notice that this element is located near a local minimum of the atomic properties shown in Figure 2.

Let's add a new "particle" (a pack of one ball and three rods). We can put it on top of any of the faces of the original tetrahedron, and we end up with two tetrahedra side by side (Figure 3d). When we add a third additional pack, again we can put it on top of any of the six external tetrahedral faces, ending up with the three-tetrahedra bundle shown in Figure 3e. This completes what we will call a turn (because one green rod comes out from every ball in the original triangle, starting to define a twist direction). An interesting fact about this set of six balls and twelve equally sized rods is that it can be reorganised in a very symmetrical way. Can you guess which one? Yes, it is our familiar octahedron (Figure 3f)!

Figure 3: After joining six balls using twelve equal-sized rods, the resulting three-tetrahedra bundle can be reorganised in the form of an octahedron. If a tetrahedron is taken as one unit of volume, the original bundle was asymmetric and had a volume of three units, whereas the resulting octahedron has a volume of four units and is a perfectly symmetrical distribution of the same set of balls and rods.

You may be wondering: is there any benefit in this reorganisation? Thinking in terms of volume, and supposing that each tetrahedron has one unit of volume, the original bundle had a volume of three, whereas the resulting octahedron has a much larger volume of four. We have reached a new structure which, using the same amount of building "material", has the highest symmetry and the largest inner volume possible. Therefore, from this point of view -maximizing volume and symmetry- it is an optimum structure. (The ball and rod counts at each step are summarised in the short sketch below.)
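Since each pack adds one ball and three rods to the initial triangle, the counts quoted in this section and the next follow from plain bookkeeping. A minimal sketch (the polyhedron assignments are those given in the text):

```python
# Ball/rod bookkeeping for the tetrahedra helix: start with a triangle
# (3 balls, 3 rods); each added "pack" contributes 1 ball and 3 rods.
def helix_counts(packs):
    return 3 + packs, 3 + 3 * packs  # (balls, rods)

print(helix_counts(1))   # (4, 6)   -> tetrahedron: 4 vertices, 6 edges
print(helix_counts(3))   # (6, 12)  -> octahedron: 6 vertices, 12 edges
print(helix_counts(7))   # (10, 24) -> 2-frequency tetrahedron (tetractys)
print(helix_counts(9))   # (12, 30) -> icosahedron: 12 vertices, 30 edges
print(helix_counts(11))  # (14, 36) -> stellated octahedron of Figure 6
```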
The final structure also achieves the closest packing of the six elements: a hypothetical circumscribing sphere has a smaller radius for the octahedral, centrally symmetric structure than for the extended three-tetrahedra one. I suggest that Nature uses this kind of optimization to organise particles in the atom, and in particular protons in the atomic nucleus. If we look at the periodic table of Figure 1, this octahedral organization of six protons could correspond to the nucleus of Carbon (6C), which, like Beryllium, is also near a local minimum of the atomic properties of Figure 2.

In order to continue adding packs to our structure, we now have two options: either start from the original three-tetrahedra bundle, or from the reorganised octahedron. In the first case, each new tetrahedron can be added in a way that continues the twisting direction of what we called the first turn. Figure 4 shows the resulting tetrahedra helix after adding seven packs to the original triangle, and one of its possible symmetrical reorganisations. Had we started from the reorganised octahedron, we could have stellated it by adding a tetrahedron to each of its faces. After four such steps, the most symmetrical final structure appears to be a three-dimensional tetractys (Figure 4b), or what Buckminster Fuller called a 2-frequency tetrahedron [12]. In addition to being symmetrical, this structure has a volume of eight units instead of the seven units of the ten-ball, seven-tetrahedra helix shown in Fig. 4a (the same amount of "material" can be organised in less symmetrical ways which reach higher volumes, but we leave this as an exercise for the reader).

Figure 4: The structure resulting from joining seven tetrahedra to the original blue triangle can be reorganised as a three-dimensional tetractys.

Two more tetrahedra would complete a three-turn helix (nine tetrahedra), which happens to contain a total of twelve balls and thirty rods. You may have guessed that this material allows us to build an icosahedron, which would have the maximum volume of 18.51 units instead of the 9 units of the original nine-tetrahedra helix (Figure 5). Continuing this process we would eventually end up with a final 33-tetrahedra helix. I refer the reader interested in an alternative interpretation of this amazing structure to reference [13].

Figure 5: A three-turn tetrahedra helix contains exactly twelve balls and thirty rods, so it can be optimally organised in the form of a perfectly symmetric icosahedron, which has maximum internal volume.

If we increase the number of balls up to fourteen, where would we put the two extra balls? One solution, which is not optimum in terms of volume but preserves the previous reorderings, would be to start from the ten-ball tetractys of Figure 4 and go on stellating the octahedron until reaching a star tetrahedron (Figure 6). Notice that this structure contains the vertices of a cube (which is held stable by the two big "diagonal" tetrahedra) and those of its dual solid, the octahedron. While the inner octahedron could be the spatial distribution of the six protons in a Carbon nucleus, the resulting octahedron+cube structure could well be the distribution of the fourteen protons in the nucleus of Silicon (14Si), which happens to be near another minimum of the atomic properties shown in Figure 2.
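The volume figures quoted above (4 units for the octahedron, 18.51 for the icosahedron) are easy to check numerically from the standard volume formulas for regular solids of unit edge, taking the regular tetrahedron as the unit of volume as the text does. A minimal sketch:

```python
import math

# Volumes of regular solids with unit edge length.
v_tetrahedron = math.sqrt(2) / 12            # ~0.1179
v_octahedron  = math.sqrt(2) / 3             # ~0.4714
v_icosahedron = 5 * (3 + math.sqrt(5)) / 12  # ~2.1817

# Express each volume in "tetrahedron units", as in the text.
print(v_octahedron / v_tetrahedron)   # 4.0    (octahedron = 4 units)
print(v_icosahedron / v_tetrahedron)  # ~18.51 (icosahedron = 18.51 units)
```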
The picture that has so naturally started to emerge is that of nuclear protons organised into shells: once a given shell is completed (for example the octahedron), protons start filling the next shell (the cube), and so on...

Figure 6: An eleven-tetrahedra helix contains fourteen balls and thirty-six rods that can be reorganised as a stellated octahedron. This is an example of an octahedral shell surrounded by a hexahedral shell. It might reflect the internal structure of protons in the nucleus of Silicon.

5.- Moon's model of the atomic nucleus

Dr. Robert J. Moon proposed a model of the spatial distribution of nuclear protons that can be understood along the line of thought developed in the previous section. Before going on, I reproduce a dialogue between Fletcher James and Dr. Moon regarding the nature of his model [1]:

"FJ: Dr. Moon, I have a fundamental question about what you are doing in filling in these solids. Are you proposing a structure in which you actually have, quantized within space, a fixed structure, and you have points, particles which are located at rigid fixed intervals from each other within the structure?
RM: No. You have singularities -singularities in space, particle singularities...
FJ: But at fixed, constant distances from each other? Or are you proposing that this is occurring in a phase space, and that there is a topological equivalence between this nesting?
RM: Well, no, this is actual space, so there should be a topological equivalent to it. But, this is, these singularities in space may have nothing in them. But they're just a place where these particles can go."

Dr. Moon suggested that the first stable structure to form is a cube whose vertices would define the distribution of the eight protons in the nucleus of Oxygen (8O) (Figure 7a). After adding six more protons, the next shell to complete would be an octahedron. Altogether, they may reflect the symmetric distribution of the fourteen protons in the nucleus of Silicon (14Si). The octahedron and cube form a dual pair, so Dr. Moon placed the vertices of the cube at the midpoints of the faces of the octahedron (Figure 7b). These eight points are all contained in the inscribed sphere of the octahedron, so the inner cubic structure would actually be able to move inside the octahedral structure according to the electrodynamic forces acting among the protons. I have offered an alternative distribution of these fourteen protons in Figure 6 (we will return to it in the next section). As I mentioned before, Silicon is located near a minimum of the atomic properties shown in Figure 2.

Figure 7: (a) The first shell proposed by Moon is a cube, which reflects the distribution of the eight protons in the nucleus of Oxygen. (b) The next shell would form after adding six new protons, reflecting the distribution of the fourteen protons in the nucleus of Silicon.

The next proton shell proposed by Dr. Moon is an icosahedron. It is reached after adding twelve new protons to the nucleus of Silicon (Figure 8a). This leads to the nucleus of Iron (26Fe), totalling twenty-six protons. Iron is also near a minimum of the above-mentioned atomic properties of the elements. There are many ways in which an octahedron can be inscribed in an icosahedron. In the first versions of his model, Moon placed the six vertices of the octahedron at the middle of six edges of the icosahedron.
Later he proposed placing them on six icosahedral faces, so that two octahedral and two icosahedral faces become mutually parallel (Figure 8b). Please note that neither of those placements of the octahedron inside the icosahedron would allow the former to spin freely. In the following section I will propose an alternative that solves this problem.

Figure 8: (a) The completion of the third shell in the form of an icosahedron reflects the distribution of the twenty-six protons in the nucleus of Iron (26Fe). (b) The vertices of the octahedron are located on six icosahedral faces in such a way that both solids have two mutually parallel faces.

The last shell in this structure, as it could not be otherwise, would be a dodecahedron (Figure 9). After adding twenty more protons to the nucleus of Iron, we end up with the distribution of the forty-six protons in the nucleus of Palladium (46Pd). Again, this element falls very near the next minimum in the atomic properties shown in Figure 2. The icosahedron and the dodecahedron are dual solids, so the vertices of the former are naturally located at the middle of the faces of the latter. As the twelve icosahedral vertices are all contained in the inscribed sphere of the dodecahedron, the inner icosahedral structure would be able to move freely inside its enveloping dodecahedron.

Figure 9: The completion of the fourth shell in the form of a dodecahedron would reflect the distribution of the forty-six protons in the nucleus of Palladium (46Pd). The icosahedral structure would be able to move freely inside its enveloping dodecahedron.

After completing those four shells, Dr. Moon proposed that the following elements start forming a similar, twin structure on top of one of the dodecahedral faces. Both structures would share the five vertices of that face, as well as the icosahedral vertex located at its middle. So we would be left with just eleven icosahedral and fifteen dodecahedral vertices available. Moon hypothesized that the first ten vertices to be occupied by the new protons would belong to the dodecahedron (Figure 10a). They would reflect the distribution of protons in the nucleus of Barium (56Ba). The following eight protons would complete the inner cube, giving rise to the nucleus of Gadolinium (64Gd), in the middle of the lanthanide series of elements (Figure 10b). Almost at the end of the lanthanides we find Ytterbium (70Yb), with a completed octahedral proton shell (Figure 10c).

Figure 10: Moon proposed that the first ten protons of the twin structure occupy ten dodecahedral vertices; the structure reflects the proton distribution in the nucleus of Barium (56Ba), an element which marks the beginning of the lanthanides. The completion of the inner cube leads to the nucleus of Gadolinium (64Gd), and the completion of the octahedron marks almost the end of the lanthanides at Ytterbium (70Yb).

The next eleven protons would occupy the available vertices of the twin icosahedron, closing another shell and giving rise to the distribution of protons in the nucleus of Thallium (81Tl) (Figure 11a).
Five more protons close all the shells of the twin dodecahedron, placing us at the noble gas Radon (86Rn) (Figure 11b).

Figure 11: (a) The completion of the twin icosahedron leads us to the nucleus of Thallium, containing eighty-one protons. (b) The last five protons complete the twin dodecahedral structure, which reflects the eighty-six protons in the nucleus of the noble gas Radon.

In order to allow new protons into this structure, Dr. Moon proposed that the two dodecahedra could separate as if they were hanging on a hinge (Figure 12a). That would free four spaces that could be occupied by four more protons. Two of the four corresponding elements, Francium (87Fr) and Actinium (89Ac), are not found in nature, because their nuclei are unstable and do not last very long; they have been made in nuclear reactors by bombarding elements with neutrons. To leave room for the additional nuclear proton of the next element, Protactinium (91Pa), Dr. Moon proposed that the hinge breaks and the twin structures are held together at one single point (Figure 12b).

Figure 12: (a) To allow the next protons to find their places, the twin dodecahedra must open up, using one edge of the binding face as if it were a hinge. This makes room for four new elements, mostly unstable. (b) The next proton can be added if the hinge breaks and the structures are held together by a single proton.

The construction of Uranium (92U) requires that the last proton be placed at the point of joining. This can be accomplished if the structure breaks and one solid is slightly displaced so as to penetrate the other (Figure 13). The exact interpenetration of the structure shown in this figure was proposed by Laurence Hecht, after defining a spin axis that will be explained in the next section. The resulting structure, quoting Dr. Moon, "is something that's ready for fission [...] if you try to put more neutrons in there, it's going to fiss" [1]. Dr. Robert J. Moon was one of the scientists who first made fission happen in a wartime laboratory on the University of Chicago football field [14].

Figure 13: Two views of the twin interpenetrating structures. The exact placement of one vertex of each dodecahedron at the midpoint of each of the opposite icosahedral faces explains the exact number and disposition of neutrons proposed by Laurence Hecht (see next section).

6.- Discussion

The Axis of the Universe

Since the original conception of the model by Dr. Moon, Laurence Hecht and his co-workers have been trying to develop it further. One interesting aspect of Hecht's proposals is the possible existence of a preferred axis of spin for the atomic nucleus. He suggests that this axis -called the Axis of the Universe- should be perpendicular to the mutually parallel octahedral and icosahedral faces shown in Figure 8. In this way the angular momentum of the protons around this axis would be minimal in some of the shells, such as the cubic one (Figure 14). The reader further interested in the implications of this axis for the nuclear arrangement of protons, as well as for the magnetic properties of the elements, is referred to reference [1].

Figure 14: Top view of the Moon model along the Axis of the Universe proposed by L. Hecht. Note that the axis is diagonal to the cube and dodecahedron, but not to the icosahedron or the octahedron.
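To recap the whole sequence, the proton count at each shell closure is a cumulative sum of available vertex counts (the twin structure shares one dodecahedral face and its central icosahedral vertex, as described above). A minimal bookkeeping sketch, with the element assignments as given in the text:

```python
# Cumulative proton counts at each shell closure in Moon's model.
# In the twin structure, the shared face (5 dodecahedral vertices)
# and its centre (1 icosahedral vertex) are not counted twice.
shells = [
    ("cube (8O)",                           8),
    ("octahedron (14Si)",                   6),
    ("icosahedron (26Fe)",                 12),
    ("dodecahedron (46Pd)",                20),
    # twin structure
    ("10 twin dodecahedral vertices (56Ba)", 10),
    ("twin cube (64Gd)",                    8),
    ("twin octahedron (70Yb)",              6),
    ("twin icosahedron (81Tl)",            11),
    ("last 5 twin dodecahedral vertices (86Rn)", 5),
]

total = 0
for name, protons in shells:
    total += protons
    print(f"{total:3d} protons after completing the {name}")
```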
Free spin of each shell

The vertices of the icosahedron, though originally placed at the centers of the dodecahedral faces, need not be there at all. In fact, the whole icosahedral structure would be able to spin freely inside the dodecahedral structure, in order to align itself in the best way with respect to the preferred axis of rotation. However, that would not be the case for the octahedral structure, because in the original Moon model its vertices lie outside the inscribed sphere of the icosahedron (Figure 15a). I suggest overcoming this limitation by reducing the distance of the octahedral vertices to the center of the structure -and that of the cube vertices proportionally- so that the circumscribed sphere of the octahedron coincides with the inscribed sphere of the icosahedron (Figure 15b). With this simple adjustment, all four shells would be able to spin one inside another, obviously constrained by the electrodynamic forces among their protons.

Figure 15: (a) The circumscribed sphere of the octahedron in Moon's original model is bigger than the inscribed sphere of the icosahedron. (b) Slightly moving the octahedral vertices towards the origin would make both spheres coincide and would allow the octahedron to spin freely inside the icosahedron.

Is the octahedral shell first?

The order of filling of the first two shells -hexahedral and octahedral- is worth a bit more discussion. The attentive reader may have noticed that the element Oxygen (8O), whose nuclear protons are organised in the form of a cube according to Moon's model, is not at a minimum of the atomic properties shown in Figure 2. I suggest that this fact is easily explained by an alternative filling order for the first two shells. As I commented in Section 4, the first four protons of the nucleus are likely to be organised in the form of a tetrahedron. That would correspond to the nucleus of Beryllium (4Be), which is near the first minimum of the aforementioned atomic properties. The next proton (the fifth) would have no definite place to occupy, but when six protons are reached, it is likely that the ensemble reorganises itself in the form of an octahedron, which would correspond to the nucleus of Carbon (6C). This element is also close to a minimum of the atomic properties shown in Figure 2, much closer than Oxygen is. The next pronounced minimum appears at Silicon (14Si), which, as already mentioned, would complete a cube and its dual, the octahedron. Which one is inside the other is not clear. Dr. Moon proposed that the cube goes inside, but as I have just argued, it could well be the other way round.

What about neutrons?

The atomic nucleus not only consists of positively charged protons; it also contains neutral particles, the neutrons. Moon's model does not explain the location of neutrons inside the nucleus. In fact, Dr. Moon used to say: "the protons find their parking place at what would correspond to the vertices. And then the neutrons, which are also out there, we're not going to worry about them, because they have no charge and they can be most any place" [6]. However, Larry Hecht has also worked on this topic, and he essentially proposes that neutrons can occupy the midpoints of the edges of the nested Platonic solids, as well as the centers of the icosahedral faces not already occupied by protons [13]. The nucleus of Uranium-238 provides striking evidence in support of his proposal.
We have seen that in the Uranium nucleus the two dodecahedral structures interpenetrate one another. The numbers of edges and face centres left for the neutrons to occupy are:

Cube faces: 6
Cube edges: 12
Octahedral edges: 12
Icosahedral edges: 30
Icosahedral faces: 13

That leaves 73 neutron positions on each twin structure, precisely the correct number for the 146 neutrons of Uranium-238! In addition, it should be noticed that my proposed reordering of the shell filling would not alter this count, because the two additional icosahedral faces occupied by the two extra vertices of the cube would be compensated by the two extra octahedral faces in the innermost shell, so the total would be the same:

Octahedral faces: 8
Octahedral edges: 12
Cube edges: 12
Icosahedral edges: 30
Icosahedral faces: 11

Last update: 2013/03/21

[1] Hecht L., Stevens Ch.B.: "New Explorations with The Moon Model", 2004.
[2] Hecht L.: "Who Was Robert J. Moon?", 2000.
[3] Tsimmerman V.: "Perfect Periodic Table".
[4] Assis A.K.T., Torres Silva H.: "Comparison between Weber's electrodynamics and classical electrodynamics".
[5] Hecht L.: "The Atomic Science Textbooks Don't Teach", 1996.
[6] Hecht L.: "The Life and Work of Dr. Robert J. Moon", 2004.
[7] Bacciagaluppi G., Valentini A.: "Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference".
[8] Bohm D., Hiley B.J., Kaloyerou P.N.: "An Ontological Basis for the Quantum Theory".
[9] Correa P.N., Correa A.N.: "The Electric Aether and the Structure of the Electron".
[10] Ginzburg V.B.: "Three-Dimensional Spiral String Theory".
[11] De-Picciotto R. et al.: "Direct observation of a fractional charge".
[12] Edmondson A.C.: "A Fuller Explanation: The Synergetic Geometry of R. Buckminster Fuller", 2007.
[13] Kappraff J.: "The Flame-Hand Letters of the Hebrew Alphabet", 2002.
[14] Hecht L.: "The Geometric Basis for the Periodicity of the Elements", 1988.
WikiJournal of Science/A card game for Bell's theorem and its loopholes

Article information
Authors: Guy Vandegrift, Joshua Stomel
Citation: Guy Vandegrift; Joshua Stomel (1 June 2018), "A card game for Bell's theorem and its loopholes", WikiJournal of Science, 1 (1): 5, doi:10.15347/WJS/2018.005, ISSN 2470-6345, Wikidata Q55120315.

In 1964 John Stewart Bell made an observation about the behavior of particles separated by macroscopic distances that had puzzled physicists for at least 29 years, ever since Einstein, Podolsky and Rosen put forth the famous EPR paradox. Bell made certain assumptions leading to an inequality that entangled particles are routinely observed to violate in what are now called Bell test experiments. As an alternative to showing students a "proof" of Bell's inequality, we introduce a card game that is impossible to win. The solitaire version is so simple it can be used to introduce binomial statistics without mentioning physics or Bell's theorem. Things get interesting in the partners' version of the game, because Alice and Bob can win, but only if they cheat. We have identified three cheats, and each corresponds to a Bell's theorem "loophole". This gives the instructor an excuse to discuss detector error, causality, and why there is a maximum speed at which information can travel.

The conundrum

Although this can be called a theorem, it might be better viewed as something "spooky" that has been routinely observed and is consistent with quantum mechanics. But this puzzling behavior violates what might be called common notions about what is and is not possible.[1][2] Students typically encounter a mathematical theorem as an incomprehensible statement that cannot be digested until it is first proven and then applied in practice. It is not uncommon for novices to refer to some version of Bell's inequality as Bell's theorem, because the inequality can be mathematically "proven".[3] The problem is that what is proven turns out to be untrue. David Mermin described an imaginary device not unlike that shown in Fig. 1, referred to the fact that such a device actually exists as a conundrum, and then pointed out that many physicists deny that it is a conundrum.[4]

A simple Bell's theorem experiment

It is customary to name the particles[5] in a Bell's theorem experiment "Alice" and "Bob", an anthropomorphism that serves to emphasize the fact that a pair of humans cannot win the card game ... unless they cheat. To some experts, a "loophole" is a constraint on any theory that might replace quantum mechanics.[6] It is also possible to view a loophole as a physical mechanism by which the outcome of a Bell's theorem experiment might seem less "spooky". In this paper, we associate loopholes with ways to cheat at the partners' version of the card game. It should be noted that the three loophole mechanisms introduced in this paper raise questions that are even spookier than quantum mechanics: Are the photons "communicating" with each other? Do they "know" the future? Do they "persuade" the measuring devices to fail when the "cards are unfavorable"?[7] Since entanglement is so successfully modeled by quantum mechanics, one can argue that there is no need for a mechanism that "explains" it. Nevertheless, there are reasons for investigating loopholes.
At the most fundamental level, history shows that a successful physical theory can later be shown to be an approximation to a deeper theory, and the need for the new theory is typically associated with a failure of the old paradigm. It is plausible that a breakdown of quantum mechanics might be discovered using a Bell's theorem experiment designed to investigate a loophole. But the vast majority of us (including most working physicists) need other reasons to care about loopholes: Many find it interesting that we seem to live in a universe governed by fundamental laws, and Bell's theorem yields insights into the bizarre nature of those laws. Also, those who teach can use these card games to motivate introductory discussions about statistical inference, polarization, and modern physics.

Figure 1 | The outside casing of each device remains stationary while the circle with parallel lines rotates with the center arrow pointing in one of three directions (♥, ♣, ♠). If Jacks are used to represent these directions, Alice will see J♥ as her question card. She will respond with an "odd"-numbered answer card (3♥) to indicate that she is blocked by the filter. If Bob passes through a filter with the "spade" orientation, he sees J♠ as the question card and answers with the "even"-numbered 2♠. This wins one point for the team because they gave different answers to different questions.

Figure 1 shows a hypothetical and idealized experiment involving two entangled photons simultaneously emitted by a single (parent) atom. After the photons have been separated by some distance, each is exposed to a measurement that determines whether the photon would pass or be blocked by a polarizing filter.[8] To ensure that the results seem "spooky", it should be possible to rotate the filter while the photons are en route, so that the filter's angle of orientation is not "known" to either photon until it encounters the filter. If the filters are rotated between only three polarization angles, we may use card suits (hearts ♥, clubs ♣, spades ♠) to represent these angles. These three polarization angles are associated with "question" cards, because the measurement essentially asks the photon a question: "Will you pass through a filter oriented at this angle?" For simplicity we restrict our discussion to symmetric angles (0°, 120°, 240°). The filter's axis of polarization is shown in the figure as parallel lines, with the center line pointing to the heart, club, or spade. Any face card can be used to "ask" the question, and the four face cards (jack, queen, king, ace) are equivalent. If the detectors are flawless, each measurement is binary: the photon either passes or is blocked by the filter (subsequent measurements on a photon would yield nothing interesting). The measurement's outcome is represented by an even- or odd-numbered "answer" card (of the same suit). The numerical value of an answer card is not important: all even numbers (2,4,6,8) are equivalent and represent a photon passing through the filter, while the odd cards (3,5,7,9) represent a photon being blocked. Although Bell's inequality is easy to prove[9], we avoid it here because the card game reverses roles regarding probability: instead of the investigators attempting to ascertain the photons' so-called hidden variables, the players act as particles attempting to win the game by guessing the measurement angles.
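Before turning to the game itself, the quantum-mechanical benchmark the players are up against can be computed in a few lines. For an idealized pair of polarization-entangled photons as in Fig. 1, the probability of identical pass/block outcomes at two measurement angles follows a cos² law (quoted here without derivation); with the symmetric angles above, any two different questions differ by 120° (or 240°). A minimal sketch:

```python
import math

angles = {"hearts": 0.0, "clubs": 120.0, "spades": 240.0}

def p_same_answer(theta_a, theta_b):
    # cos^2 correlation for an idealized pair of polarization-entangled
    # photons measured at angles theta_a and theta_b (in degrees).
    return math.cos(math.radians(theta_a - theta_b)) ** 2

# Probability of *different* answers to two different questions:
p_win = 1 - p_same_answer(angles["hearts"], angles["spades"])
print(p_win)  # 0.75 -> the 3/4 success rate of entangled particles
              # discussed in the next section
```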
Another complication is that the original form of Bell's inequality does not adequately model the partners' version of the game, because humans have the freedom to exhibit a behavior not observed in entangled particles (under ideal experimental conditions): the particles show a 100% correlation (or anti-correlation) whenever the polarization measurement angles are either parallel or perpendicular to each other.[10] In the partners' version of the card game, this behavior must be enforced by deducting a penalty from the partners' score whenever they are caught using a forbidden strategy (which we shall later call the β-strategy). The minimum required penalty is calculated in Supplementary file:The car and the goats. Fortunately, students need not master this calculation, because the actual penalty should often be whatever it takes to encourage a strategy that mimics this aspect of entanglement (which we shall call the α-strategy). A theoretical understanding of how one can model entanglement using the Schrödinger equation can be found in Supplementary file:Tube entanglement.

The solitaire card game

Figure 2 | Solitaire version of the game. Cases 1, 2, and 3 represent the three possible outcomes if the player chooses the best strategy (later called the "α-strategy"): one answer (here, "odd" for ♠) differs from that given for the other two questions (here, "even" for ♥ & ♣).

Figure 2 shows the three possible outcomes associated with one hand of the solitaire version of the game. The solitaire version requires nine cards. The figure uses a set with three "jacks" (♥ ♣ ♠) for the questions, and (2,3) of each suit for the six (even/odd) answer cards. To play one round of the game, the player first shuffles the three question cards and places them face down so their identity is not known. Next, for each of the three suits, the player selects an even or odd answer card. The figure shows the player choosing the heart and club to be even, while the spade is odd: 2♥ 2♣ 3♠. This is the only viable strategy, since the alternative is to always lose by selecting three answers that are all even or all odd. In the partners' version we shall introduce a second, β-strategy, which is not possible in the solitaire game. After the three answer cards are selected and turned face up, two of the three question cards are randomly selected and also turned face up. Figure 2 depicts all three equally probable outcomes, or ways to select two out of three cards ("3 choose 2").[11] The round is scored by adding or subtracting points, as shown in Table 1: first, the suit of each of the two upturned question cards is matched to the corresponding answer card. In case 1 (shown in the figure), the player wins one point because the answers are different: the ♥ answer is an even number, while the ♠ answer is odd. The player loses three points in case 2 because the ♥ and ♣ answers are the same (even). Case 3 wins one point for the player because the answers are different. It is evident that the player has a 2/3 probability of winning a round. The conundrum of Bell's theorem is that entangled particles in an actual experiment manage to win with a probability of 3/4. Table 1 shows that this scoring system causes humans to average a loss of at least 1/3 of a point per round, while entangled particles maintain an average score of zero.[12] How do particles succeed where humans fail?
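The −1/3 average is easy to reproduce with a quick Monte Carlo run of the solitaire game. A minimal sketch, with the α-strategy answer set hard-coded as in Fig. 2:

```python
import random

SUITS = ["hearts", "clubs", "spades"]

def play_round():
    # Alpha-strategy: one answer differs from the other two,
    # e.g. hearts/clubs even, spades odd (as in Fig. 2).
    answers = {"hearts": "even", "clubs": "even", "spades": "odd"}
    # The referee turns up two of the three question cards at random.
    q1, q2 = random.sample(SUITS, 2)
    # Score +1 if the two answers differ, -3 if they are the same.
    return 1 if answers[q1] != answers[q2] else -3

rounds = 100_000
total = sum(play_round() for _ in range(rounds))
print(total / rounds)  # approaches -1/3: (2/3)(+1) + (1/3)(-3)
```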
Table 1: Solitaire scoring
Points | Answers are... | Example[13]
+1 | different | 2♥ and 3♠
+1 | different | 2♥ and 3♣
−3 | same | 2♥ and 2♣

The game for entangled partners

In the partners' version, Alice and Bob each play one (even/odd) answer card in response to the suit of a question card. Every round is played in two distinctly different phases. Alice and Bob are allowed to discuss strategy during phase 1, because it simulates the fact that the particles are (effectively) "inside" the parent atom before it emits the photons. Then, all communication between the partners must cease during phase 2, which simulates the arrival of the photons at the detectors for measurement under conditions where communication is impossible. In this phase each player silently plays an (even/odd) answer that matches the question's suit. Neither player can know the other's question or answer during phase 2.

In the solitaire version, the player held a deck of six numbered cards and pre-selected (even/odd) answers for each of the three (question) suits. This simulated the parent atom "deciding" the responses that each photon will give to all possible polarization measurements.[14] In an "ideal" Bell's theorem experiment, the two photons' responses to identical polarization measurement angles are either perfectly correlated or perfectly anticorrelated.[8][15] The freedom to independently choose different answers when Alice and Bob are faced with the same question creates a dilemma for the designers of the partners' version of the card game: adherence to any rule forbidding different answers to the same question cannot always be verified. To enforce this rule, we deduct points whenever they give different answers to the same question. No points are awarded for giving the same answer to the same question. Note how this complexity is relevant to actual experiments, because detectors can register false events. The minimum penalty Q that should be imposed depends on how often the partners are given question cards of the same suit, and is derived in Supplementary file:The car and the goats:

Q ≥ (4/3)(1 − p)/p,     (1)

where p is the probability that Alice and Bob are asked the same question. The equality holds if Q = 4 and p = 1/4, which can be accomplished by randomly selecting two question cards from nine (K♠, K♥, K♣, Q♠, Q♥, Q♣, J♠, J♥, J♣), as shown in Fig. 3. If the equality in (1) holds, the partners are "neutral" with respect to the selection of two different strategies, one of which risks the 4-point penalty. Both strategies lose, but the loss rate is reduced to −1/4 points per round, because the referee must dilute the number of times different questions are asked.

A sample round begins in the top part of Fig. 3 with phase 1, where the pipe-smoking referee has selected different questions (hearts and spades). In a classroom setting, consider allowing Alice and Bob to sit side-by-side, facing slightly away from each other, during phase 2. Arrange for the audience to sit close enough to listen and watch for evidence of surreptitious communication between Alice and Bob. The prospect of cheating not only makes the game more fun, but also allows us to introduce "loopholes". The "thought-bubbles" above the partners show a tentative agreement by the partners to play the same α-strategy introduced in the solitaire version (both say "even" to ♥ & ♣, and "odd" to ♠). It is important to allow both players to hold all the answer cards in phase 2, so that each can change his or her mind upon seeing the actual question.
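Before returning to the sample round of Fig. 3, the neutrality condition (1) can be checked directly against the two strategies discussed in this section (the β-strategy, which risks the penalty, is defined just below). A minimal sketch, with per-round expected values written as exact fractions:

```python
from fractions import Fraction as F

def alpha_loss(p):
    # Alpha-strategy: same answers to the same question (0 points),
    # expected -1/3 whenever different questions are asked.
    return (1 - p) * F(-1, 3)

def beta_loss(p, Q):
    # Beta-strategy ("always disagree"): +1 for different questions,
    # -Q when the same question is asked.
    return (1 - p) * 1 - p * Q

def neutral_penalty(p):
    # Penalty that makes both strategies break even: Q = (4/3)(1-p)/p.
    return F(4, 3) * (1 - p) / p

p = F(1, 4)
print(neutral_penalty(p))              # 4, the equality case of (1)
print(alpha_loss(p), beta_loss(p, 4))  # both -1/4 per round
print(neutral_penalty(F(2, 11)))       # 6 -> the biased-scoring threshold
```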
The figure shows them following their original plan and winning, because the referee selected a heart for Alice and a spade for Bob.

Figure 3 | One round of the partners' version with Alice and Bob employing the same strategy (α) introduced in the solitaire game. Here, a version of "neutral" scoring is used in which the referee randomly selects from the nine question cards, with a penalty of 4 points assessed if different answers are given to the same question. Instructors might wish to override this "neutral" scoring by asking the same question more often than called for by the random selection.

But the partners have another strategy that might win: suppose Alice agrees to answer "even" to any question, while Bob's answer is always "odd". This wins (+1) if different questions are asked, and loses (−Q) if the same question is asked. This is called the β-strategy. Supplementary file:The car and the goats establishes that no other strategy is superior to the α and/or β strategies:

α-strategy: Alice and Bob select their answers in advance, in such a way that both give the same answer if asked the same question. For example, they might both agree that ♥ & ♣ are even, while ♠ is odd. This strategy was ensured in the solitaire version because only three answer cards are played: if the heart is chosen to be "even", the solitaire version models a situation where both Alice and Bob would answer "even" to "heart". The α-strategy requires that one answer differ from the other two (i.e., all "even" or all "odd" is never a good strategy). The expected loss is 1/3 of a point for each round in which different questions are asked.

β-strategy: One partner always answers "even" while the other always answers "odd". This strategy gains one point if different questions are asked, and loses Q points if the same question is asked.

For pedagogical reasons, the instructor may wish to discourage the β-strategy. If Alice and Bob are not asked the same question often, they might choose to risk large losses for the possibility of winning just a few rounds using the β-strategy, perhaps terminating the game prematurely with a claim that they lost "quantum entanglement". To counter this, the referee can raise the penalty to six points and randomly shuffle only the six question cards that result from merging two solitaire decks. We refer to any scoring that favors the players' use of the α-strategy as "biased scoring". To further inhibit use of the β-strategy, the referee should routinely override the shuffle and deliberately select question cards of the same suit. The distinction between biased and neutral scoring lies in whether the equality or the inequality holds in (1). Table 2 shows examples of each scoring system. Both were selected to match an integer value of Q. The shuffle of 9 face cards exactly matches the equality in (1) with Q = 4, while the more convenient collection of 6 face cards will bias the players towards the α-strategy with Q = 6.[16]

Table 2: Examples of neutral and biased scoring
Neutral scoring: shuffle 9 face cards to ask the same question exactly 25% of the time (penalty 4).
Biased scoring: shuffle 6 face cards and/or ask the same question with a probability higher than 2/11 (penalty 6).

Points (neutral) | Alice and Bob give... | Example | Points (biased)
+1 | different answers to different questions | "even" to hearts and "odd" to spades | +1
−3 | the same answer to different questions | "even" to clubs and "even" to hearts | −3
−4 | different answers to the same question | "even" to clubs and "odd" to clubs | −6
0 | the same answer to the same question | "even" to clubs (for both players) | 0

Cheating at cards and Bell's theorem "loopholes"

In the card game, Alice and Bob could win either by surreptitiously communicating after they see their question cards, or by colluding with the referee to learn the questions in advance. Which seems more plausible: information travelling faster than light, or atoms acting as if they "know" the future? A small poll of undergraduate math and science college students suggests that they are inclined to favor faster-than-light communication as the more plausible. We shall use a space-time diagram to illustrate how faster-than-light communication violates causality by allowing people to send signals into their own past. And we shall argue that one can make the case that decisions made today by humans regarding how and where to perform a Bell's theorem experiment next week might be mysteriously connected to the behavior of an obscure atom in a distant galaxy billions of years ago.[17]

The third loophole was a surprise for us. In an early trial of the partners' game, a student[18] stopped playing and attempted to construct a modified version of the α-strategy that uses the new information a player gains upon seeing his or her question card. After convincing ourselves that no superior strategy exists, we realized that a player could cheat by terminating the game after seeing his or her own question card, but before playing the answer card. This is related to an important detector-efficiency loophole.[19] The student's discovery also alerted us to the fact that our original calculation of (1) was just a lucky guess based on flawed logic.

Magic phones: Communications loophole

Alice and Bob could win every round of the partners' version if they cheat by communicating with each other after seeing their question cards in phase 2. In an actual experiment, this loophole is closed by making the measurements far apart in space and nearly simultaneous, which in effect requires that any such communications travel faster than the speed of light.[20] While any faster-than-light communication is inconsistent with special relativity, we shall limit our discussion to information that travels at nearly infinite speed.[21]

Instantaneous communication

Figure 4 | "Magic phone #1" is situated on a moving train and can be used by Alice to send a message to Bob's past, which Bob relays back to Alice's past using the land-based "magic phone #2". These magic phones transmit information with nearly infinite speed.

Figure 4 shows Alice and Bob slightly more than one light-year apart. The dotted world line of each is vertical, indicating that they remain at rest for over a year. The slopes of the world lines of the train's front and rear are roughly 3 years per light-year, corresponding to about 1/3 the speed of light. Both train images are a bit confusing, because it is difficult to represent a moving train on a space-time diagram: a moving train can be defined by the location of each end at any given instant in time. This requires the concept of simultaneity, which is perceived differently in another reference frame.
The horizontal image of the train at the bottom represents the location of each car on the train on the first day of January, as time and simultaneity are perceived by Alice and Bob. To complicate matters, the horizontal train image is not what they would actually see, due to the finite transit time required for light to reach their eyes. It helps to imagine a distant observer situated on a perpendicular through some point on the train. The transit time for light to reach this distant observer will be nearly the same for every car on the train. Many years later, this distant observer will see the horizontal train as depicted at the bottom of the figure. It will be instructive to return to the perspective of this distant observer after the paradox has been constructed.

The slanted image of the train depicts the location of each car on the day the (moving) passengers perceive the front to be adjacent to Alice, at the same time that the train's rear is adjacent to Bob. It should be noted that Alice and Bob do not perceive these two events as simultaneous: the figure shows that the rear passes Bob several months before the front passes Alice (in the partners' reference frame).

Now we establish that the passengers perceive the front of the train to reach Alice at the same time that the rear reaches Bob. The light-emitting diode (LED) shown at the bottom of Fig. 4 emits two pulses from the center of the train in January. It is irrelevant whether the LED is stationary or moving, because all observers will see the pulses travelling in opposite directions at the speed of light (±1 ly/yr). Note how the backward-moving pulse reaches the rear of the train in May, five months before the other pulse reaches the train's front in October. But the passengers see two light pulses created at the center of the train, directed at each end of the train, and will therefore perceive the two pulses as striking simultaneously.

To create the causality paradox, we require two "magic phones" capable of sending messages with nearly infinite speed. Unicorn icons use arrows to depict the information's direction of travel: magic phone #1 transmits from Alice to Bob, while #2 transmits from Bob to Alice. Magic phone #1 is situated on the moving train. When Alice shows her message through the front window as the train passes her in October, a passenger inside relays the message via magic phone #1 to the train's rear, where Bob can see it through a window. Bob immediately relays the message back to Alice via the land-based magic phone #2 in May, five months before she sent it.

Our distant observer will likely take a skeptical view of all this. The slope of the slanted train image indicates that the distant observer will see magic phone #1 sending information from Bob to Alice, opposite to what the passengers perceive. The distant observer will first see the message inside the rear of the train (when it was adjacent to Bob in May). That message will immediately begin to travel towards Alice, faster than the speed of light, but slowly enough that Alice will not receive it until October. Meanwhile, Bob sends the same message via the land-based phone #2 to Alice, who receives it in May. Alice waits for almost five months, until she prepares to send the same message, showing it through the front window just before the message also arrives at the front via the train-based magic phone #1. It would appear to the distant observer that the events depicted in Fig. 4 had been artificially staged.
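The five-month offset described above can be reproduced with the Lorentz transformation. A minimal sketch, assuming a train speed of c/3 and a ground-frame train length of about 1.1 light-years; these specific numbers are illustrative, since the figure only specifies "slightly more than one light-year" and slopes of roughly 3 years per light-year:

```python
import math

c = 1.0        # light-years per year
v = 1.0 / 3.0  # train speed, suggested by the world-line slopes in Fig. 4
L = 10.0 / 9.0 # assumed ground-frame train length (ly), chosen to
               # reproduce the ~5 month offset; illustrative only

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Pulses leave the train's center (x = 0) at t = 0, moving at +/-c.
# Ground-frame events where each pulse meets the rear and the front:
t_rear  = (L / 2) / (c + v); x_rear  = -c * t_rear
t_front = (L / 2) / (c - v); x_front =  c * t_front
print(f"ground frame: {12 * (t_front - t_rear):.1f} months apart")  # ~5

# Lorentz transform t' = gamma * (t - v*x/c^2) into the train frame:
tp_rear  = gamma * (t_rear  - v * x_rear  / c**2)
tp_front = gamma * (t_front - v * x_front / c**2)
print(f"train frame:  {12 * (tp_front - tp_rear):.1f} months apart")  # ~0
```

As the passengers claim, the two arrivals are simultaneous in the train frame, while Alice and Bob see them roughly five months apart.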
In an actual Bell test experiment, this communications loophole was closed by arranging for the measurements to be so nearly simultaneous that any successful effort to communicate would have required faster-than-light signals, which, as we have just argued, would allow humans to change their own past.

Referee collusion: Determinism loophole

Figure 5 | Cosmic photons from two distant spiral galaxies arrive on Earth with properties that trigger the filters to ask the "heart" and "spade" questions of the photons just prior to their arrival, with a winning combination of (even/odd) answers.

This "determinism", or "freedom-of-choice", loophole involves the ability of the quantum system to predict the future. Curiously, the strategy would not be called "cheating" in the card game if Alice or Bob relied on intuition to guess which cards the referee will play in the upcoming round. But what makes this loophole bizarre when applied to a Bell test experiment is that it would have been necessary to predict the circumstances under which the experiment was designed and constructed by human beings who evolved on a planet that was formed almost five billion years ago. On the other hand, viewing the parent atom, the two photons, and the detectors as one integrated quantum entity is consistent with the proper modeling of a quantum-mechanical system. The paradoxical violation of Bell's inequality arises from the need to model two remote particles as one system, so it is not unreasonable to assume that the conundrum can be resolved by including the devices that make the measurements into that model.

Figure 5 is inspired by a comment made by Bell during a 1985 radio interview that mentioned something he called "superdeterminism".[22][23] It is a timeline that depicts the big bang, beginning at a time when space and time were too confusing for us to graph. At this beginning, "instructions" were established that would dictate the entire future of the universe, from every action taken by every human being, to the energy, path, and polarization of every photon that will ever exist. Long ago, obscure atoms in two distant galaxies (Sb and Sc) were instructed to each emit what will become "cosmic photons" that strike Earth. Meanwhile, "instructions" will call for humans to evolve on Earth and create a Bell's theorem experiment that uses the frequency and/or polarization of cosmic photons to set the polarization measurement angles while the entangled photons Alice and Bob are still en route to the detectors. Alice and Bob will arrive at their destinations already "knowing" how to respond because the cosmic photons were "instructed" to have properties that cause the questions to be "heart" and "spade". Viewed this way, the events depicted in Fig. 5 are just the way things happen to turn out.

Efforts to enact this scenario with an actual experiment using cosmic photons are being carried out. The most recent experiment looks back at photons created 600 years ago.[24][25] Note also how this experiment does not "close" the loophole, but instead greatly expands the scale of any "collusion" between the parent atom and the detectors.

It is claimed that the results of Bell test experiments do not contradict special relativity, despite what may appear to some as faster-than-light "communication" between Alice and Bob.[26] Figure 5 can help us visualize this if the "instructions" represent the time evolution of an exotic version of Schrödinger's equation for the entire universe.
If this wave equation is deterministic, the future evolution of all probability amplitudes is predetermined. One flaw in this argument is that it relies on an equation that governs the entire universe, and for that reason is not likely to ever be solved or even written down. Perhaps this is why the paradox seems to have no satisfactory resolution.

The Rimstock cheat: Detector error loophole

Figure 6 | The Rimstock cheat: Bob flips a coin to determine whether to play the cheat on this round. Alice will play "even" to hearts and "odd" to spades or clubs.

Figure 7 | Four teams of players engaging in the detector error cheat. Each connected dot represents a hand in which different questions were asked, and the horizontal dots simulate a detector error that coincided with a player receiving an unfavorable question.

The following variation of the α-strategy allows the team to match the performance of entangled particles by achieving an average score of zero (a simulation sketch follows below): Alice preselects three answers and informs Bob of her decision. Bob will either answer in the same fashion, or he might abruptly stop the hand upon seeing his question card, perhaps requesting that the team take a brief break while another pair of students play the role of Alice and Bob. In a card game, this request to stop and replay a hand would require the cooperation of a gullible scorekeeper. But no detector in an actual Bell's theorem experiment is 100% efficient, and this complicates the analysis of a Bell's theorem experiment in a way that requires both careful calibration of the detector's efficiency and detailed mathematical analysis.

Since this strategy never calls for Alice and Bob to give different (even/odd) answers to the same question, we may consider only rounds where the players get different questions. To understand why Bob might refuse to play a card, suppose Alice plans to answer "even" to hearts and "odd" to clubs and spades. As indicated at the top of Fig. 6, for Bob the heart is the "desired" suit because he knows they will win if he sees that question. But their chances of winning are reduced to only 50% if Bob sees the "undesired" club or spade. To avoid raising suspicion, Bob does not stop the game each time he sees an unfavorable question. Instead, he stops with a 50% probability upon seeing an unfavorable card.

To calculate the average score, we construct a probability space consisting of equally probable outcomes, beginning with the three possible suits that Bob might see. We quadruple the size of this probability space (from 3 to 12) by treating the following two pairs of events as independent, and occurring with equal probability:

1. Bob will either stop the hand, or play the round (Do stop or Don't stop.)
2. After seeing his question, Bob knows that Alice might receive one of only two possible questions (ignoring rounds with the same question asked of both.)

Figure 6 can be used to show that Bob will stop the game with a probability of 1/3.[27] But if Bob and Alice randomly share this role of stopping the game, each player will stop a given round with a probability of only 1/6, yielding an apparent detector efficiency of 5/6 ≈ 83.3%.[19] Typical results for a team playing this ruse are illustrated in Fig. 7. Ten rounds are played on four different occasions. The vertical axis represents the team's net score, with upward steps corresponding to winning one point, and downward steps corresponding to losing three points.
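A quick Monte Carlo sketch (Python; only different-question rounds are simulated, as the text explains, and the scoring comes from the table at the top of this section) confirms both numbers derived above: the hand is stopped about 1/3 of the time, and the rounds that are actually played average a score of about zero.

```python
import random

SUITS = ["hearts", "clubs", "spades"]
ANSWERS = {"hearts": "even", "clubs": "odd", "spades": "odd"}  # Alice's preselection

def play_round(rng):
    """One different-question round of the Rimstock cheat.  Bob uses Alice's
    mapping, but upon seeing an unfavorable suit (clubs or spades) he stops
    the hand with probability 1/2.  Returns the score, or None if stopped."""
    bob_suit = rng.choice(SUITS)
    if bob_suit != "hearts" and rng.random() < 0.5:
        return None                      # Bob aborts the hand
    alice_suit = rng.choice([s for s in SUITS if s != bob_suit])
    if ANSWERS[alice_suit] != ANSWERS[bob_suit]:
        return +1                        # different answers to different questions
    return -3                            # the same answer to different questions

rng = random.Random(0)
results = [play_round(rng) for _ in range(100_000)]
played = [r for r in results if r is not None]
print("stop rate:", 1 - len(played) / len(results))               # close to 1/3
print("mean score of played rounds:", sum(played) / len(played))  # close to 0
```

Figure 7's spaghetti plots show the same statistics playing out over short ten-round runs.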
The horizontal lines showing no change in score indicate occasions where Bob or Alice refused to play an answer card (it was never necessary to ask both partners the same question in this simulation.) Figure 7 was generated with an Excel spreadsheet using the rand() function, which caused the graphs to change at every ctrl+= keystroke. It took several keystrokes to get a graph where the lines did not cross and all the event counts were this close to their expected values. As discussed in a supplement, an Excel verification lab is an appropriate activity in a wide variety of STEM courses.

Pedagogical issues

To make sixteen solitaire decks, purchase three identical standard 52-card decks. Remove one suit (hearts, clubs, or spades) from each deck to create four solitaire sets. Each group should contain 3-5 people and two solitaire decks (for "biased" scoring with Q=6.) To avoid confusing an ace (question card) with an (even/odd) answer card, reserve the ace for groups with large even/odd number cards. For example, one group might have solitaire sets with (ace, 8, 9) and (king, 6, 7). In a small classroom, the entire audience can observe or even give advice to one pair playing the partners' version at the front of the room. Placing the question cards adjacent to the players at the start will permit the instructor and the entire class to join the partners' discussion regarding strategy during phase 1. For "neutral" scoring with Q=4, the instructor can either borrow question cards from the class or convert unused "10" cards into questions. Since cheating will come so naturally, this game is not suitable for gambling (even for pennies).

Bell's theorem can lead to topics ranging from baseless pseudoscience to legitimate (but pedagogically unnecessary) speculation regarding alternatives to the theory of quantum mechanics. While few physicists are experts in such topics, all teachers will eventually face such issues in the classroom. The authors of this paper claim no expertise in any of this, and the intent is to illustrate the "spookiness" of Bell's theorem, show how one can use simple logic to prove that faster-than-light communication violates causality,[21] and introduce students to the concept of a "deterministic" theory or model.[26]

Additional information

Supplementary material

• Supplementary file 1 | The car and the goats: A rigorous proof of the penalty that yields "neutral" scoring.
• Supplementary file 2 | Impossible correlations: Extends Bell's inequality to non-symmetric cases and also proves the CHSH inequality (without using calculus).
• Supplementary file 3 | Tube entanglement: Describes a simple analog to entanglement with polarized photons. It relies on Malus's law, and also introduces Dirac notation as a shorthand representation for the wavefunction of two non-interacting massive particles confined to a narrow tube.

Versions of this manuscript have received five referee reports (it was first submitted to the American Journal of Physics.) It is obvious that each referee was highly qualified, and that each exerted a considerable effort to improve the quality of this paper.

Competing interests

Guy Vandegrift is a member of the WikiJournal of Science editorial board.

References

1. Bell, John S. (1964). "On the Einstein Podolsky Rosen Paradox". Physics 1 (3): 195–200. doi:10.1103/physicsphysiquefizika.1.195.
2. Vandegrift, Guy (1995). "Bell's theorem and psychic phenomena".
The Philosophical Quarterly 45 (181): 471–476. doi:10.2307/2220310.
3. See for example this discussion on the Wikipedia article's talk page, or Wikipedia's effort to clarify this with w:No-go theorem.
4. Mermin, N. David (1981). "Bringing home the atomic world: Quantum mysteries for anybody". American Journal of Physics 49 (10): 940–943. doi:10.1119/1.12594. "(referring to those who do not consider this a conundrum) In one sense they are obviously right. Compare Tovey's remark that (Beethoven's) Waldstein Sonata has no more business than sunsets and sunrises to be paradoxical."
5. or detectors
6. These (hypothetical) theories are called "hidden variable" theories. Larsson, Jan-Åke (2014). "Loopholes in Bell inequality tests of local realism". Journal of Physics A: Mathematical and Theoretical 47 (42): 424003. doi:10.1088/1751-8113/47/42/424003.
7. In w:special:permalink/829073568 these questions are associated with the "communication (locality)", the "free choice" and a "fair sampling" loophole, respectively.
8. In most experiments electro-optical modulators are used instead of polarizing filters, and often it is necessary to rotate one set of orientations by 90°. Giustina, Marissa; Versteegh, Marijn A. M.; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes et al. (2015). "Significant-Loophole-Free Test of Bell's Theorem with Entangled Photons". Physical Review Letters 115 (25): 250401. doi:10.1103/physrevlett.115.250401.
9. Maccone, Lorenzo (2013). "A simple proof of Bell's inequality". American Journal of Physics 81 (11): 854–859. doi:10.1119/1.4823600.
10. See equation 29 in Aspect, Alain (2002). "Bell's theorem: the naive view of an experimentalist". In Bertlmann, Reinhold A.; Zeilinger, Anton (eds.). Quantum [un]speakables (PDF). Berlin: Springer. pp. 119–153. doi:10.1007/978-3-662-05032-3_9.
11. The notation "n choose k" is defined in w:Binomial coefficient.
12. The player can lose more than 1/3 of a point per round by adopting the obviously bad strategy of making all three answers the same (all even or all odd.) This is closely related to the fact that Bell's "inequality" is not Bell's "equation".
13. Since there are 3 × 2 = 6 ordered pairs of different questions, three other cases exist; all involve −3.
14. Keep in mind that it seems artificial for the parent atom to "know" that these photons are part of an experiment involving just three possible polarization measurements. This need to somehow orchestrate all possible fates for each emitted photon created the EPR conundrum long before Bell's inequality was discovered. See w:EPR paradox.
15. It is best not to assume that this correlation implies that the "decision" regarding polarization was actually made as the two photons are created by the parent atom. In physics, mathematical models should be judged by whether they yield predictions that can be verified by experiment, not whether these models make any sense.
16. Equation (1) shows that the case is neutral at Q = 4.
17. One can also make the case that it is not the role of physics (or science) to speculate in such matters.
18. User:Rimstock
19. Garg, Anupam; Mermin, N. David (1987). "Detector inefficiencies in the Einstein-Podolsky-Rosen experiment". Physical Review D 35 (12): 3831–3835. doi:10.1103/physrevd.35.3831.
20. Aspect, Alain; Dalibard, Jean; Roger, Gérard (1982). "Experimental Test of Bell's Inequalities Using Time-Varying Analyzers". Physical Review Letters 49 (25): 1804–1807. doi:10.1103/physrevlett.49.1804.
21.
Liberati, Stefano; Sonego, Sebastiano; Visser, Matt (2002). "Faster-than-c Signals, Special Relativity, and Causality". Annals of Physics 298 (1): 167–185. doi:10.1006/aphy.2002.6233.
22. Bell, John S. (2004). "Introduction to hidden-variable question". Speakable and unspeakable in quantum mechanics: Collected papers on quantum philosophy. Cambridge University Press. pp. 29–39. doi:10.1017/cbo9780511815676.006.
23. Kleppe, A. (2011). "Fundamental Nonlocality: What Comes Beyond the Standard Models". Bled Workshops in Physics. 12. pp. 103–111. In that interview, Bell was apparently speculating about a deterministic "hidden variable" theory where all outcomes are highly dependent on initial conditions.
24. Gallicchio, Jason; Friedman, Andrew S.; Kaiser, David I. (2014). "Testing Bell's Inequality with Cosmic Photons: Closing the Setting-Independence Loophole". Physical Review Letters 112 (11): 110405. doi:10.1103/physrevlett.112.110405.
25. Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David et al. (2017). "Cosmic Bell Test: Measurement Settings from Milky Way Stars". Physical Review Letters 118 (6): 060401. doi:10.1103/PhysRevLett.118.060401.
26. See also Ballentine, Leslie E.; Jarrett, Jon P. (1987). "Bell's theorem: Does quantum mechanics contradict relativity?". American Journal of Physics 55 (8): 696–701. doi:10.1119/1.15059.
27. 1/3 is the product of 2/3, the probability of receiving an unfavorable card, and 1/2, the probability of stopping; hence (2/3)(1/2) = 1/3.
On Self-Hating Theories of Arithmetic Gödel’s second incompleteness theorem tells us that no (sufficiently powerful) consistent theory can prove the statement of its own consistency. But did you know that such a theory can prove the statement of its own inconsistency? A consistent theory that claims to be inconsistent is what I call a self-hating theory. My convention in what follows: ℕ refers to the real, true natural numbers, the set consisting of {0, 1, 2, 3, …} and nothing else. ω refers to the formal object that exists in a theory of arithmetic that is “supposed to be” ℕ, but inevitably (in first-order logic) fails to be. When I write ℕ ⊨ ψ, I am saying that the sentence ψ is true of the natural numbers. When I write T ⊢ ψ (resp. T ⊬ ψ), I am saying that the sentence ψ can be (resp. can’t be) proven from the axioms of the theory T. And when I write T ⊨ ψ, I am saying that the axioms of T semantically entail the truth of ψ (or in other words, that ψ comes out true in all models of T). The next two paragraphs will give some necessary background on Gödel’s encoding, and then we’ll explore the tantalizing claims I started with. Gödel’s breathtakingly awesome insight was that within any language that is expressive enough to talk about natural number arithmetic, one can encode sentences as numbers and talk about syntactic features of these sentences as properties of numbers. When a number n encodes a sentence ψ, we write n = ⟦ψ⟧. Then Gödel showed that you can have sentences talking about the provability of other sentences. (The next step, of course, was showing that you can have sentences talking about their own provability – sneaking in self-reference through the back door of any structure attempting to describe arithmetic.) In particular, in any theory of natural number arithmetic T, one can write a sentence that on its surface appears to just be a statement about the properties of natural numbers, but when looked at through the lens of Gödel’s encoding, ends up actually encoding the sentence “T ⊢ ψ”. And this sentence is itself encoded as some natural number! So there’s a natural number n such that n = ⟦T ⊢ ψ⟧. It’s a short step from here to generating a sentence that encodes the statement of T’s own consistency. We merely need to encode the sentence “¬∃n (n = ⟦T ⊢ 0=1⟧)”, or in English, there’s no number n such that n encodes a proof of “0=1” from the axioms of T. In even plainer English, no number encodes a proof of contradiction from T (from which it follows that there IS no proof of contradiction from T, as any proof of contradiction would be represented by some number). We write this sentence as Con(T). Okay, now we’re in a position to write the original claim of this post more formally. If a theory T is consistent, then ℕ ⊨ Con(T). And Gödel’s second incompleteness theorem tells us that if ℕ ⊨ Con(T), then T ⊬ Con(T). But if T doesn’t prove the sentence Con(T), then no contradiction can be derived by adding ¬Con(T) as an axiom! So (T + ¬Con(T)) is itself a consistent theory, i.e. ℕ ⊨ Con(T + ¬Con(T)). But hold on! (T + ¬Con(T)) can prove its own inconsistency! Why? Because (T + ¬Con(T)) ⊢ ¬Con(T), i.e. it proves that a contradiction can be derived from the axioms of T, and it also has as axioms every one of the axioms of T! So the same number that encodes a proof of the inconsistency of T, also counts as a proof of the inconsistency of (T + ¬Con(T))! 
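Before summarizing, it is worth seeing how unmysterious the encoding step is. Here is a toy sketch (Python, with a made-up symbol inventory; real Gödel numberings are more careful, but no different in spirit): every sentence over a fixed alphabet becomes a unique natural number, so that talk about sentences becomes talk about numbers.

```python
# Toy Gödel numbering: read a sentence as a number written in base
# len(ALPHABET) + 1, using digits 1..len(ALPHABET) so no information is lost.
ALPHABET = "0=1()+*¬∃∀∧∨→⊢xyzST "   # hypothetical symbol inventory

def encode(sentence: str) -> int:
    n = 0
    for symbol in sentence:
        n = n * (len(ALPHABET) + 1) + ALPHABET.index(symbol) + 1
    return n

def decode(n: int) -> str:
    symbols = []
    while n > 0:
        n, digit = divmod(n, len(ALPHABET) + 1)
        symbols.append(ALPHABET[digit - 1])
    return "".join(reversed(symbols))

sentence = "¬∃x (x = x+1)"
assert decode(encode(sentence)) == sentence
print(encode(sentence))   # one (large) natural number naming the whole sentence
```

Predicates like "n encodes a proof of ψ from the axioms of T" take vastly more work to arithmetize, but the principle is the same.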
Summarizing this all:

• ℕ ⊨ Con(T)
• T ⊬ Con(T)
• ℕ ⊨ Con(T + ¬Con(T))
• (T + ¬Con(T)) ⊢ ¬Con(T + ¬Con(T))

There we have it, a theory that is consistent but proves its own inconsistency! Expressed another way, we have a theory T such that:

• T ⊢ ∃n (n = ⟦T ⊢ 0=1⟧)
• T ⊬ 0=1

Ok, so believe it or not, a lot of the strangeness of this can be explained away by thinking about the implications of nonstandard models of arithmetic. One easy way to see this is to reflect on the fact that, as we saw above, "T is consistent" becomes in Gödel's encoding, "There is no natural number n such that n encodes a proof of T's inconsistency." Or more precisely, "T is consistent" becomes "There is no natural number n such that n = ⟦T ⊢ 0=1⟧."

Now, no first-order theory can pin down the natural numbers. (I've written about this here and here.) I.e. no first-order theory can express a quantification like "there is no natural number n such that …" in a way that ranges over exactly the natural numbers. You can try, for sure, by defining some object ω and adding axioms to restrict its structure to look more and more like ℕ, but no matter how hard you try, no matter how many axioms you add, there will always be models of the theory in which ω ≠ ℕ. In particular, ω will be a strict superset of ℕ in all of these nonstandard models (ℕ ⊂ ω), so that ω contains all the naturals but also additional nonstandard numbers.

So now consider what happens when we try to quantify over the naturals by saying "∀x ∈ ω". This quantifier inevitably ranges over ALL of the elements of ω in each model, so it also touches the nonstandard numbers in the nonstandard models. This means that the theory only semantically entails quantified statements that are true of all possible nonstandard numbers! (Remember, T ⊨ ψ means that ψ is true in ALL models of T.)

One nice consequence of this is that if T has a model in which ω = ℕ, then in this model "∀x∈ω Φ(x)" is true only if Φ(x) is true of all natural numbers. By the soundness of first-order logic, this means that T can't prove "∀x∈ω Φ(x)" unless it's true of ℕ. This is reassuring; if T ⊢ ∀x∈ω Φ(x) and T has a model in which ω = ℕ, then ℕ ⊨ ∀x∈ω Φ(x).

But the implication doesn't go the other way! ℕ ⊨ ∀x∈ω Φ(x) does not guarantee us that T ⊢ ∀x∈ω Φ(x), because T can only prove that which is true in EVERY model. So T can only prove "∀x∈ω Φ(x)" if Φ(x) is true of all the naturals and every nonstandard number in every model of T!

This is the reason that we don't know for sure that if Goldbach's conjecture is true of ℕ, then it's provable in Peano arithmetic. On the face of it, this looks quite puzzling; Goldbach's conjecture can be written as a first-order sentence and first-order logic is complete, so if it's true then how could we possibly not prove it? The answer is hopefully clear enough now: Goldbach's conjecture might be true of all of ℕ but false of some nonstandard models of Peano arithmetic (henceforth PA).

You might be thinking "Well if so, then we can just add Goldbach's conjecture as an axiom to PA and get rid of those nonstandard models!" And you're right, you will get rid of those nonstandard models. But you won't get rid of all the nonstandard models in which Goldbach's conjecture is true! You can keep adding as axioms statements that are true of ℕ but false of some nonstandard model, and as you do this you rule out more and more nonstandard models. At the end of this process (once your theory consists of literally all the first-order sentences that are true of ℕ), you will have created what is known as "True Arithmetic": {ψ | ℕ ⊨ ψ}. But guess what!
At this point, have you finally ruled out all the nonstandard models? No! There’s still many many more (infinitely many, in fact! Nonstandard models of every cardinality! So many models that no cardinality is large enough to describe how many!) Pretty depressing, right? There are all these models that agree with ℕ on every first order sentence! But they are still not ℕ (most obviously because they contain numbers larger than 0, 1, 2, and all the rest of ℕ). The nonstandard models of True Arithmetic are the models that are truly irremovable in any first-order theory of arithmetic. Any axiom you add to try to remove them will also remove ℕ as a model. And when you remove ℕ as a model, some pretty wacky stuff begins to happen. Fully armed now with new knowledge of nonstandard numbers, let’s return to the statement I started with at the top of this post: there are consistent theories that prove their own inconsistency. The crucial point, the thing that explains this apparent paradox, is that all such theories lack ℕ as a model. If you think about this for a minute, it should make sense why this must be the case. If a theory T is consistent, then the sentence “∀x∈ω (x ≠ ⟦T ⊢ 0 = 1⟧)” is true in a model where ω = ℕ. So if T has such a model, then T simply can’t prove its own inconsistency, as it’s actually not inconsistent and the model where ω = ℕ will be able to see that! And once more, T can only prove what’s true in all of its models. Okay, so now supposing T is consistent (i.e. ℕ ⊨ Con(T)), by Gödel’s second incompleteness theorem, T cannot prove its own consistency. This means that (T + ¬Con(T)) is a consistent theory! But (T + ¬Con(T)) no longer has ℕ as a model. Why? Because ℕ ⊨ Con(T) and (T + ¬Con(T)) ⊨ ¬Con(T). So for any consistent theory T, (T + ¬Con(T)) only has nonstandard models. What does this mean about the things that T + ¬Con(T) proves? It means that they no longer have to be true of ℕ. So for instance, even though ℕ ⊨ Con(T + ¬Con(T)), (T + ¬Con(T)) might end up proving ¬Con(T + ¬Con(T)). And in fact, it does prove this! As we saw up at the top of this post, a moment’s examination will show that (T + ¬Con(T)) asserts as an axiom that a contradiction can be derived from the axioms of T, but also contains all the axioms of T! So by monotonicity, (T + ¬Con(T)) proves ¬Con(T + ¬Con(T)). What do we say of this purported proof of contradiction from (T + ¬Con(T))? Well, we know for sure that it’s not a standard proof, one that would be accepted by a mathematician. I.e., it asserts that there’s some n in ω that encodes a proof of contradiction from (T + ¬Con(T)). But this n is not actually a natural number, it’s a nonstandard number. And nonstandards encode proofs only in the syntactical sense; a nonstandard proof is a proof according to Gödel’s arithmetic encoding, but Gödel’s arithmetic encoding only applies to natural numbers. So if we attempted to translate n, we’d find that the “proof” it encoded was actually nonsense all along: a fake proof that passes as acceptable by wearing the arithmetic guise of a real proof, but in actuality proves nothing whatsoever. In first order logic, every theory of arithmetic has nonstandard models that foil our attempts to prove all the truths of ℕ. Theories of arithmetic with ONLY nonstandard models and no standard model can prove things that don’t actually hold true of ℕ. 
In particular, since theories of arithmetic can encode statements about their own consistency, theories that don't have ℕ as a model can prove their own inconsistency, even if they really are consistent.

So much for first-order logic. What about second-order logic? As you might already know, second-order logic is capable of ruling out all nonstandard models. There are second-order theories that are categorical for ℕ. But there's a large price tag for this achievement: second-order logic has no sound and complete proof system!

Sigh. People sometimes talk about nature being tricky, trying to hide aspects of itself from us. Often you hear this in the context of discussions about quantum mechanics and black holes. But I think that the ultimate trickster is logic itself! Want a logic that's sound and complete? Ok, but you'll have to give up the expressive power to allow yourself to talk categorically about ℕ! Want to have a logic with the expressive power to talk about ℕ? Ok, but you'll have to give up the possibility of a sound and complete proof system! The ultimate structure of ℕ remains shifty, slipping from our view as soon as we try to look closely enough at it.

Suppose that T is a second-order theory that is categorical for ℕ. Then for every second-order sentence ψ that is true of ℕ, T ⊨ ψ. But we can't make the leap from T ⊨ ψ to T ⊢ ψ without a complete proof system! So there will be semantic implications of T that cannot actually be proven from T. In particular, suppose T is consistent. Then T ⊨ Con(T), but T ⊬ Con(T), by Gödel's second. And since T ⊬ Con(T), (T + ¬Con(T)) is consistent. But since T ⊨ Con(T), (T + ¬Con(T)) ⊨ Con(T). So (T + ¬Con(T)) ⊨ Con(T) ∧ ¬Con(T)! In other words, T + ¬Con(T) actually has no model! But it's consistent! There are consistent second-order theories that are actually not logically possible – that semantically entail a contradiction and have no models. How's that for trickiness?

Logic, Theism, and Boltzmann Brains: On Cognitively Unstable Beliefs

First case

Classical propositional logic accepts that the proposition A ∨ ¬A is necessarily true. This is called the law of the excluded middle. Intuitionist logic differs in that it denies this axiom. Suppose that Joe is a believer in classical propositional logic (but also reserves some credence for intuitionist logic). Joe also believes a set of other propositions, whose conjunction we'll call X, and has total certainty in X. One day Joe discovers that a contradiction can be derived from X, in a proof that uses the law of the excluded middle. Since Joe is certain that X is true, he knows that X isn't the problem, and instead it must be the law of the excluded middle. So Joe rejects the law of the excluded middle and becomes an intuitionist. The problem is, as an intuitionist, Joe now no longer accepts the validity of the argument that starts at X and concludes ¬X! Why? Because it uses the law of the excluded middle, which he doesn't accept. Should Joe believe in propositional logic or intuitionism?

Second case

Karl is a theist. He isn't absolutely certain that theism is correct, but holds a majority of his credence in theism (and the rest in atheism). Karl is also 100% certain of the following claim: "If atheism is true, then the concept of 'evil' is meaningless", and believes that logically valid arguments cannot be made using meaningless concepts. One day somebody presents the problem of evil to Karl, and he sees it as a crushing objection to theism.
He realizes that theism, plus some other beliefs about evil that he's 100% confident in, leads to a contradiction. So since he can't deny these other beliefs, he is led to atheism. The problem is, as an atheist, Karl no longer accepts the validity of the argument that starts at theism and concludes atheism! Why? Because the arguments rely on using the concept of 'evil', and he is now certain that this concept is meaningless, and thus cannot be used in logically valid arguments. Should Karl be a theist or an atheist?

Third case

Tommy is a scientist, and she believes that her brain is reliable. By this, I mean that she trusts her ability to reason both deductively and inductively. However, she isn't totally certain about this, and holds out a little credence for radical skepticism. She is also totally certain about the content of her experiences, though not its interpretation (i.e. if she sees red, she is 100% confident that she is experiencing red, although she isn't necessarily certain about what in the external world is causing the experience). One day Tommy discovers that reasoning deductively and inductively from her experiences leads her to a model of the world that entails that her brain is actually a quantum fluctuation blipping into existence outside the event horizon of a black hole. She realizes that this means that with overwhelmingly high probability, her brain is not reliable and is just producing random noise uncorrelated with reality. The problem is, if Tommy believes that her brain is not reliable, then she can no longer accept the validity of the argument that led her to this position! Why? Well, she no longer trusts her ability to reason deductively or inductively. So she can't accept any argument, let alone this particular one. What should Tommy believe?

— — —

How are these three cases similar and different? If you think that Joe should be an intuitionist, or Karl an atheist, then should Tommy believe herself to be a black hole brain? Because it turns out that many cosmologists have found themselves to be in a situation analogous to Case 3! (Link.) I have my own thoughts on this, but I won't share them for now.

How will quantum computing impact the world?

A friend of mine recently showed me an essay series on quantum computers. These essays are fantastically well written and original, and I highly encourage anybody with the slightest interest in the topic to check them out. They are also interesting to read from a pedagogical perspective, as experiments in a new style of teaching (self-described as an "experimental mnemonic medium"). There's one particular part of the post which articulated the potential impact of quantum computing better than I've seen it articulated before. Reading it has made me update some of my opinions about the way that quantum computers will change the world, and so I want to post that section here with full credit to the original authors Michael Nielsen and Andy Matuschak. Seriously, go to the original post and read the whole thing! You won't regret it.

No, really, what are quantum computers good for?

It's comforting that we can always simulate a classical circuit – it means quantum computers aren't slower than classical computers – but doesn't answer the question of the last section: what problems are quantum computers good for? Can we find shortcuts that make them systematically faster than classical computers? It turns out there's no general way known to do that.
But there are some interesting classes of computation where quantum computers outperform classical. Over the long term, I believe the most important use of quantum computers will be simulating other quantum systems. That may sound esoteric – why would anyone apart from a quantum physicist care about simulating quantum systems? But everybody in the future will (or, at least, will care about the consequences). The world is made up of quantum systems. Pharmaceutical companies employ thousands of chemists who synthesize molecules and characterize their properties. This is currently a very slow and painstaking process. In an ideal world they'd get the same information thousands or millions of times faster, by doing highly accurate computer simulations. And they'd get much more useful information, answering questions chemists can't possibly hope to answer today. Unfortunately, classical computers are terrible at simulating quantum systems.

The reason classical computers are bad at simulating quantum systems isn't difficult to understand. Suppose we have a molecule containing n atoms – for a small molecule, n may be 1, for a complex molecule it may be hundreds or thousands or even more. And suppose we think of each atom as a qubit (not true, but go with it): to describe the system we'd need 2^n different amplitudes, one amplitude for each n-bit computational basis state, e.g., |010011⟩.

Of course, atoms aren't qubits. They're more complicated, and we need more amplitudes to describe them. Without getting into details, the rough scaling for an n-atom molecule is that we need k^n amplitudes, where k > 2. The value of k depends upon context – which aspects of the atom's behavior are important. For generic quantum simulations k may be in the hundreds or more.

That's a lot of amplitudes! Even for comparatively simple atoms and small values of n, it means the number of amplitudes will be in the trillions. And it rises very rapidly, doubling or more for each extra atom. If k = 100, then even n = 10 atoms will require 100 million trillion amplitudes. That's a lot of amplitudes for a pretty simple molecule.

The result is that simulating such systems is incredibly hard. Just storing the amplitudes requires mindboggling amounts of computer memory. Simulating how they change in time is even more challenging, involving immensely complicated updates to all the amplitudes. Physicists and chemists have found some clever tricks for simplifying the situation. But even with those tricks simulating quantum systems on classical computers seems to be impractical, except for tiny molecules, or in special situations.

The reason most educated people today don't know simulating quantum systems is important is because classical computers are so bad at it that it's never been practical to do. We've been living too early in history to understand how incredibly important quantum simulation really is. That's going to change over the coming century. Many of these problems will become vastly easier when we have scalable quantum computers, since quantum computers turn out to be fantastically well suited to simulating quantum systems. Instead of each extra simulated atom requiring a doubling (or more) in classical computer memory, a quantum computer will need just a small (and constant) number of extra qubits.
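To get a feel for these numbers, here is a back-of-the-envelope sketch (Python; the choices of k = 100 amplitudes per atom and 16 bytes per complex amplitude are illustrative assumptions):

```python
k = 100             # amplitudes needed per atom (generic quantum simulation)
bytes_per_amp = 16  # one complex amplitude = two 64-bit floats

for n in (2, 5, 10, 20):
    amplitudes = k ** n
    terabytes = amplitudes * bytes_per_amp / 1e12
    print(f"n = {n:2d} atoms: {amplitudes:.3g} amplitudes, {terabytes:.3g} TB")
```

A classical memory cost that multiplies by k for every added atom, set against a quantum register that grows by a constant number of qubits per atom, is the whole comparison in miniature.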
One way of thinking of this is as a loose quantum corollary to Moore's law:

The quantum corollary to Moore's law: Assuming both quantum and classical computers double in capacity every few years, the size of the quantum system we can simulate scales linearly with time on the best available classical computers, and exponentially with time on the best available quantum computers.

In the long run, quantum computers will win, and win easily. The punchline is that it's reasonable to suspect that if we could simulate quantum systems easily, we could greatly speed up drug discovery, and the discovery of other new types of materials. I will risk the ire of my (understandably) hype-averse colleagues and say bluntly what I believe the likely impact of quantum simulation will be: there's at least a 50 percent chance quantum simulation will result in one or more multi-trillion dollar industries. And there's at least a 30 percent chance it will completely change human civilization. The catch: I don't mean in 5 years, or 10 years, or even 20 years. I'm talking more over 100 years. And I could be wrong.

What makes me suspect this may be so important? For most of history we humans understood almost nothing about what matter is. That's changed over the past century or so, as we've built an amazingly detailed understanding of matter. But while that understanding has grown, our ability to control matter has lagged. Essentially, we've relied on what nature accidentally provided for us. We've gotten somewhat better at doing things like synthesizing new chemical elements and new molecules, but our control is still very primitive.

We're now in the early days of a transition where we go from having almost no control of matter to having almost complete control of matter. Matter will become programmable; it will be designable. This will be as big a transition in our understanding of matter as the move from mechanical computing devices to modern computers was for computing. What qualitatively new forms of matter will we create? I don't know, but the ability to use quantum computers to simulate quantum systems will be an essential part of this burgeoning design science.

Quantum computing for the very curious (Andy Matuschak and Michael Nielsen)

The EPR Paradox

The Paradox

I only recently realized how philosophical the original EPR paper was. It starts out by providing a sufficient condition for something to be an "element of reality", and proceeds from there to try to show the incompleteness of quantum mechanics. Let's walk through this argument here:

The EPR Reality Condition: If at time t we can know the value of a measurable quantity with certainty without in any way disturbing the system, then there is an element of reality corresponding to that measurable quantity at time t. (I.e. this is a sufficient condition for a measurable property of a system at some moment to be an element of the reality of that system at that moment.)

Example 1: If you measure an electron spin to be up in the z direction, then quantum mechanics tells you that you can predict with certainty that the spin in the z direction will be up at any future measurement. Since you can predict this with certainty, there must be an aspect of reality corresponding to the electron z-spin after you have measured it to be up the first time.

Example 2: If you measure an electron spin to be up in the z-direction, then QM tells you that you cannot predict the result of measuring the spin in the x-direction at a later time.
So the EPR reality condition does not entail that the x-spin is an element of the reality of this electron. It also doesn't entail that the x-spin is NOT an element of the reality of this electron, because the EPR reality condition is merely a sufficient condition, not a necessary condition.

Now, what does the EPR reality condition have to say about two particles with entangled spins? Well, suppose the state of the system is initially

|Ψ⟩ = (|↑↓⟩ – |↓↑⟩) / √2

This state has the unusual property that it has the same form no matter what basis you express it in. You can show for yourself (verified numerically below) that in the x-spin basis, the state is equal to

|Ψ⟩ = (|→←⟩ – |←→⟩) / √2

Now, suppose that you measure the first electron in the z-basis and find it to be up. If you do this, then you know with certainty that the other electron will be measured to be down. This means that after measuring it in the z-basis, the EPR reality condition says that electron 2 has z-spin down as an element of reality. What if you instead measure the first electron in the x-basis and find it to be right? Well, then the EPR reality condition will tell you that electron 2 has x-spin left as an element of reality.

Okay, so we have two claims: 1. That after measuring the z-spin of electron 1, electron 2 has a definite z-spin, and 2. that after measuring the x-spin of electron 1, electron 2 has a definite x-spin. But notice that these two claims are not necessarily inconsistent with the quantum formalism, since they refer to the state of the system after a particular measurement. What's required to bring out a contradiction is a further assumption, namely the assumption of locality.

For our purposes here, locality just means that it's possible to measure the spin of electron 1 in such a way as to not disturb the state of electron 2. This is a really weak assumption! It's not saying that any time you measure the spin of electron 1, you will not have disturbed electron 2. It's just saying that it's possible in principle to set up a measurement of the first electron in such a way as to not disturb the second one. For instance, take electrons 1 and 2 to opposite sides of the galaxy, seal them away in totally closed off and causally isolated containers, and then measure electron 1. If you agree that this should not disturb electron 2, then you agree with the assumption of locality.

Now, with this additional assumption, Einstein, Podolsky, and Rosen realized that our earlier claims (1) and (2) suddenly come into conflict! Why? Because if it's possible to measure the z-spin of electron 1 in a way that doesn't disturb electron 2 at all, then electron 2 must have had a definite z-spin even before the measurement of electron 1! And similarly, if it's possible to measure the x-spin of electron 1 in a way that doesn't disturb electron 2, then electron 2 must have had a definite x-spin before the first electron was measured! What this amounts to is that our two claims become the following: 1. Electron 2 has a definite z-spin at time t before the measurement. 2. Electron 2 has a definite x-spin at time t before the measurement. And these two claims are in direct conflict with quantum theory! Quantum mechanics refuses to assign a simultaneous x and z spin to an electron, since these are incompatible observables.
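That "show for yourself" step is easy to verify numerically. Here is a minimal sketch (Python with numpy; |↑⟩ and |↓⟩ are the z-basis vectors, and |→⟩, |←⟩ = (|↑⟩ ± |↓⟩)/√2):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
right, left = (up + down) / np.sqrt(2), (up - down) / np.sqrt(2)

def pair(first, second):
    return np.kron(first, second)   # two-electron product state

psi_z = (pair(up, down) - pair(down, up)) / np.sqrt(2)       # (|↑↓⟩ - |↓↑⟩)/√2
psi_x = (pair(right, left) - pair(left, right)) / np.sqrt(2)  # (|→←⟩ - |←→⟩)/√2

# The two expressions agree up to an overall phase (a factor of -1),
# which has no physical significance.
print(np.allclose(psi_x, -psi_z))   # True
```

So the singlet really does keep the same form in the x-basis, which is why the perfect anticorrelations show up no matter which direction is measured.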
This entails that if you buy into locality and the EPR reality condition, then you must believe that quantum mechanics is an incomplete description of nature, or in other words that there are elements of reality that cannot be described by quantum mechanics.

The Resolution(s)

Our argument rested on two premises: the EPR reality condition and locality. Its conclusion was that quantum mechanics is incomplete. So naturally, there are three possible paths you can take to respond: accept the conclusion, deny the second premise, or deny the first premise.

To accept the conclusion is to agree that quantum mechanics is incomplete. This is where hidden variable approaches fall, and was the path that Einstein dearly hoped would be vindicated. For complicated reasons that won't be covered in this post, but which I talk about here, the prospects for any local realist hidden variables theory (which was what Einstein wanted) look pretty dim.

To deny the second premise is to say that in fact, measuring the spin of the first electron necessarily disturbs the state of the second electron, no matter how you set things up. This is in essence a denial of locality, since the two electrons can be space-like separated, meaning that this disturbance must have propagated faster than the speed of light. This is a pretty dramatic conclusion, but is what orthodox quantum mechanics in fact says. (It's implied by the collapse postulate.)

To deny the first premise is to say that in fact there can be some cases in which you can predict with certainty a measurable property of a system, but where nonetheless there is no element of reality corresponding to this property. I believe that this is where Many-Worlds falls, since measurement of z-spin doesn't result in an electron in an unambiguous z-spin state, but in a combined superposition of yourself, your measuring device, the electron, and the environment. Needless to say, in this complicated superposition there is no definite fact about the z-spin of the electron.

I'm a little unsure about where to place psi-epistemic approaches like Quantum Bayesianism, which resolve the paradox by treating the wave function not as a description of reality, but solely as a description of our knowledge. In this way of looking at things, it's not surprising that learning something about an electron at one place can instantly tell you something about an electron at a distant location. This does not imply any faster-than-light communication, because all that's being described is the way that information-processing occurs in a rational agent's brain.

Measurement without interaction in quantum mechanics

In front of you is a sealed box, which either contains nothing OR an incredibly powerful nuclear bomb, the explosion of which threatens to wipe out humanity permanently. Even worse, this bomb is incredibly unstable and will blow up at the slightest contact with a single photon. This means that anybody who opens the box to look inside and see if there really is a bomb in there would certainly end up activating it and destroying the world. We don't have any way to deactivate the bomb, but we could maintain it in isolation for arbitrarily long, despite the prohibitive costs of totally sealing it off from all contact.

Now, for obvious reasons, it would be extremely useful to know whether or not the bomb is actually active. If it's not, the world can breathe a sigh of relief and not worry about spending lots of money on keeping it sealed away.
And if it is, we know that the money is worth spending. The obvious problem is that any attempt to test whether there is a bomb inside will involve in some way interacting with the box's contents. And as we know, any such interaction will cause the bomb to detonate! So it seems that we're stuck in this unfortunate situation where we have to act in ignorance of the full details of the situation. Right?

Well, it turns out that there's a clever way that you can use quantum mechanics to do an "interaction-free measurement" that extracts some information from the system without causing the bomb to explode! To explain this quantum bomb tester, we have to first start with a simpler system, a classic quantum interferometer setup:

At the start, a photon is fired from the laser on the left. This photon then hits a beam splitter, which deflects the path of the photon with probability 50% and otherwise does nothing. It turns out that a photon that gets deflected by the beam splitter will pick up a 90º phase, which corresponds to multiplying the state vector by exp(iπ/2) = i. Each path is then redirected to another beam splitter, and then detectors are aligned across the two possible trajectories. What do we get? Well, let's just go through the calculation. A photon reaches detector A either by being transmitted at both beam splitters or by being deflected at both, so its total amplitude there is (1/√2)(1/√2) + (i/√2)(i/√2) = 1/2 − 1/2 = 0. It reaches detector B by being deflected exactly once, with total amplitude (i/√2)(1/√2) + (1/√2)(i/√2) = i. We get destructive interference, which results in all photons arriving at detector B.

Now, what happens if you add a detector along one of the two paths? It turns out that the interference vanishes, and you find half the photons at detector A and the other half at detector B! That's pretty weird… the observed frequencies appear to depend on whether or not you look at which path the photon went on. But that's not quite right, because it turns out that you still get the 50/50 statistics whenever you place anything along one path whose state is changed by the passing photon!

Huh, that's interesting… it indicates that by just looking for a photon at detector A, we can get evidence as to whether or not something interacted with the photon on the way to the detector! If we see a photon show up at detector A, then we know that there must have been some device which changed in state along the bottom path.

Maybe you can already see where we're going with this… We have to put the box in the bottom path in such a way that if the box is empty, then when the photon passes by, nothing will change about either its state or the state of the photon. And if the box contains the bomb, then it will function like a detector (where the detection corresponds to whether or not the bomb explodes)!

Now, assuming that the box is empty, we get the same result as above. Let's calculate the result we get assuming that the box contains the bomb (both calculations are checked numerically below). With probability 1/2 the photon takes the bottom path and the bomb explodes. Otherwise the photon is confined to the other path, with amplitude of magnitude 1/√2, and the second beam splitter sends it to detector A or detector B with amplitudes of magnitude (1/√2)(1/√2) = 1/2 each, i.e., a probability of 1/4 for each detector. Something really cool happens here! We find that if the bomb is active, there is a 25% chance that the photon arrives at A without the bomb exploding. And remember, the photon arriving at detector A allows us to conclude with certainty that the bomb is active! In other words, this setup gives us a 25% chance of safely extracting that information!

25% is not that good, you might object. But it sure is better than 0%! And in fact, it turns out that you can strengthen this result, using a more complicated interferometer setup to learn with certainty whether the bomb is active, with an arbitrarily small chance of setting off the bomb! There's so many weird little things about quantum mechanics that defy our classical intuitions, and this "interaction-free measurement" is one of my new favorites.
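Here is a numerical sketch of the two calculations above (Python with numpy; it assumes the convention stated earlier, transmission amplitude 1/√2 and deflection amplitude i/√2, with the bomb placed in the lower path):

```python
import numpy as np

# Mode 0 = upper path / detector A port, mode 1 = lower path / detector B port.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)   # one beam splitter

photon = np.array([1.0, 0.0])

# Empty box: the photon traverses both beam splitters coherently.
print(np.abs(BS @ BS @ photon) ** 2)    # [0. 1.] -> every photon reaches B

# Active bomb: it absorbs the lower-path amplitude after the first splitter,
# acting as a which-path measurement.
mid = BS @ photon
p_explode = np.abs(mid[1]) ** 2          # 0.5
survivor = np.array([mid[0], 0.0])       # unnormalized state if no explosion
print(p_explode, np.abs(BS @ survivor) ** 2)   # 0.5 and [0.25 0.25]
```

The 25% of photons that reach detector A with the bomb in place are the interaction-free detections described above.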
Is the double slit experiment evidence that consciousness causes collapse?

No! No no no. This might be surprising to those that know the basics of the double slit experiment. For those that don't, very briefly: A bunch of tiny particles are thrown one by one at a barrier with two thin slits in it, with a detector sitting on the other side. The pattern on the detector formed by the particles is an interference pattern, which appears to imply that each particle went through both slits in some sense, like a wave would do. Now, if you peek really closely at each slit to see which one each particle passes through, the results seem to change! The pattern on the detector is no longer an interference pattern, but instead looks like the pattern you'd classically expect from a particle passing through only one slit!

When you first learn about this strange dependence of the experimental results on, apparently, whether you're looking at the system or not, it appears to be good evidence that your conscious observation is significant in some very deep sense. After all, observation appears to lead to fundamentally different behavior, collapsing the wave to a particle! Right??

This animation does a good job of explaining the experiment in a way that really pumps the intuition that consciousness matters: (Fair warning, I find some aspects of this misleading and just plain factually wrong. I'm linking to it not as an endorsement, but so that you get the intuition behind the arguments I'm responding to in this post.)

The feeling that consciousness is playing an important role here is a fine intuition to have before you dive deep into the details of quantum mechanics. But now consider that the exact same behavior would be produced by a very simple process that is very clearly not a conscious observation. Namely, just put a single spin qubit at one of the slits in such a way that if the particle passes through that slit, it flips the spin upside down. Guess what you get? The exact same results as you got by peeking at the slits. You never need to look at the particle as it travels through the slits to the detector in order to collapse the wave-like behavior. Apparently a single qubit is sufficient to do this!

It turns out that what's really going on here has nothing to do with the collapse of the wave function and everything to do with the phenomenon of decoherence. Decoherence is what happens when a quantum superposition becomes entangled with the degrees of freedom of its environment in such a way that the branches of the superposition end up orthogonal to each other. Interference can only occur between the different branches if they are not orthogonal, which means that decoherence is sufficient to destroy interference effects. This is all stuff that all interpretations of quantum mechanics agree on.

Once you know that decoherence destroys interference effects, and also that conscious observation of the state of a system is a process that results in extremely rapid and total decoherence (which everybody also agrees on), then the fact that observing the position of the particle causes interference effects to vanish becomes totally independent of the question of what causes wave function collapse. Whether or not consciousness causes collapse is 100% irrelevant to the results of the experiment, because regardless of which of these is true, quantum mechanics tells us to expect observation to result in the loss of interference!
This is why whether or not consciousness causes collapse has no real impact on what pattern shows up on the screen. All interpretations of quantum mechanics agree that decoherence is a thing that can happen, and decoherence is all that is required to explain the experimental results. The double slit experiment provides no evidence for consciousness causing collapse, but it also provides no evidence against it. It's just irrelevant to the question! That said, however, given that people often hear the experiment presented in a way that makes it seem like evidence for consciousness causing collapse, hearing that qubits do the same thing should make them update downwards on this theory.

Decoherence is not wave function collapse

In the double slit experiment, particles travelling through a pair of thin slits exhibit wave-like behavior, forming an interference pattern where they land that indicates that the particles in some sense travelled through both slits. Now, suppose that you place a single spin bit at the top slit, which starts off in the state |↑⟩ and flips to |↓⟩ iff a particle travels through the top slit. We fire off a single particle at a time, and then each time swap out that spin bit for a new spin bit that also starts off in the state |↑⟩. This serves as an extremely simple measuring device which encodes the information about which slit each particle went through. Now what will you observe on the screen?

It turns out that you'll observe the classically expected distribution, which is a simple average over the two individual possibilities without any interference. Okay, so what happened? Remember that the first pattern we observed was the result of the particles being in a superposition over the two possible paths, and then interfering with each other on the way to the detector screen. So it looks like simply having one bit of information recording the path of the particle was sufficient to collapse the superposition!

But wait! Doesn't this mean that the "consciousness causes collapse" theory is wrong? The spin bit was apparently able to cause collapse all by itself, so assuming that it isn't a conscious system, it looks like consciousness isn't necessary for collapse! Theory disproved!

No. As you might be expecting, things are not this simple. For one thing, notice that this would ALSO prove false any other theory of wave function collapse that doesn't allow single bits to cause collapse (including anything about complex systems or macroscopic systems or complex information processing). We should be suspicious of any simple argument that claims to conclusively prove a significant proportion of experts wrong.

To see what's going on here, let's look at what happens if we don't assume that the spin bit causes the wave function to collapse. Instead, we'll just model it as becoming fully entangled with the path of the particle. Writing |O⟩ for the particle's initial state, |A⟩ and |B⟩ for it passing through the top and bottom slit, and a_j, b_j for the amplitudes with which a particle from each slit arrives at screen position |j⟩, the state evolution over time looks like the following:

|O⟩|↑⟩ → (|A⟩|↓⟩ + |B⟩|↑⟩) / √2 → Σ_j (a_j|j⟩|↓⟩ + b_j|j⟩|↑⟩) / √2

Now if we observe the particle's position on the screen, the probability distribution we'll observe is given by the Born rule. Assuming that we don't observe the states of the spin bits, there are now two qualitatively indistinguishable branches of the wave function for each possible position on the screen. This means that the total probability for any given landing position will be given by the sum of the probabilities of each branch:

P(j) = |a_j|²/2 + |b_j|²/2

But hold on! Our final result is identical to the classically expected result!
We just get the probability of the particle getting to |j⟩ from |A⟩, multiplied by the probability of being at |A⟩ in the first place (50%), plus the probability of the particle going from |B⟩ to |j⟩ times the same 50% for the particle getting to |B⟩. In other words, our prediction is that we'd observe the classical pattern of a bunch of individual particles, each going through exactly one slit, with 50% going through the top slit and 50% through the bottom. The interference has vanished, even though we never assumed that the wave function collapsed!

What this shows is that wave function collapse is not required to get particle-like behavior. All that's necessary is that the different branches of the superposition end up not interfering with each other. And all that's necessary for that is environmental decoherence, which is exactly what we had with the single spin bit! In other words, environmental decoherence is sufficient to produce the same type of behavior that we'd expect from wave function collapse. This is because interference will only occur between non-orthogonal branches of the wave function, and the branches become orthogonal upon decoherence (by definition). A particle can be in a superposition of multiple states but still act as if it has collapsed!

Now, maybe we want to say that the particle's wave function is collapsed when its position is measured by the screen. But this isn't necessary either! You could just say that the detector enters into a superposition and quickly decoheres, such that the different branches of the wave function (one for each possible detector state) very suddenly become orthogonal and can no longer interact. And then you could say that the collapse only really happens once a conscious being observes the detector! Or you could be a Many-Worlder and say that the collapse never happens (although then you'd have to figure out where the probabilities are coming from in the first place).

You might be tempted to say at this point: "Well, then all the different theories of wave function collapse are empirically equivalent! At least, the set of theories that say 'wave function collapse = total decoherence + other necessary conditions possibly'. Since total decoherence removes all interference effects, the results of all experiments will be indistinguishable from the results predicted by saying that the wave function collapsed at some point!" But hold on! This is forgetting a crucial fact: decoherence is reversible, while wave function collapse is not!!!

[Figure: recoherence of decohered branches, from doi: 10.1038/srep15330]

Let's say that you run the same setup as before, with the spin bit recording the information about which slit the particle went through, but then we destroy that information before it interacts with the environment in any way, therefore removing any traces of the measurement. Now the two branches of the wave function have "recohered," meaning that what we'll observe is back to the interference pattern! (There's a VERY IMPORTANT caveat, which is that the time period during which we're destroying the information stored in the spin bit must be before the particle hits the detector screen and the state of the screen couples to its environment, thus decohering with the record of which slit the particle went through.)

If you're a collapse purist who says that wave function collapse = total decoherence (i.e. orthogonality of the relevant branches of the wave function), then you'll end up making the wrong prediction! Why?
Well, because according to you, the wave function collapsed as soon as the information was recorded, so there was no "other branch of the wave function" to recohere with once the information was destroyed!

This has some pretty fantastic implications. Since IN PRINCIPLE even the type of decoherence that occurs when your brain registers an observation is reversible (after all, the Schrödinger equation is reversible), you could IN PRINCIPLE recohere after an observation, allowing the branches of the wave function to interfere with each other again. These are big "in principle"s, which is why I wrote them big. But if you could somehow do this, then the "Consciousness Causes Collapse" theory would give different predictions from Many-Worlds! If your final observation shows evidence of interference, then "consciousness causes collapse" is wrong, since apparently conscious observation is not sufficient to cause the other branches of the wave function to vanish. Otherwise, if you observe the classical pattern, then Many Worlds is wrong, since the observation indicates that the other branches of the wave function were gone for good and couldn't come back to recohere.

This suggests a general way to IN PRINCIPLE test any theory of wave function collapse: Look at processes right beyond the threshold where the theory says wave functions collapse. Then implement whatever is required to reverse the physical process that you say causes collapse, thus recohering the branches of the wave function (if they still exist). Now look to see if any evidence of interference exists. If it does, then the theory is proven wrong. If it doesn't, then it might be correct, and any theory of wave function collapse that demands a more stringent standard for collapse (including Many-Worlds, the most stringent of them all) is proven wrong.

On decoherence

Consider the following simple model of the double-slit experiment: A particle starts out at |O⟩, then evolves via the Schrödinger equation into an equal superposition of being at position |A⟩ (the top slit) and being at position |B⟩ (the bottom slit). To figure out what happens next, we need to define what would happen for a particle leaving from each individual slit. In general, we can describe each possibility as a particular superposition over the screen, writing ψA(j) and ψB(j) for the amplitudes to arrive at position |j⟩ from |A⟩ and |B⟩ respectively:

|A⟩ → ∑j ψA(j)|j⟩        |B⟩ → ∑j ψB(j)|j⟩

Since quantum mechanics is linear, the particle that started at |O⟩ will evolve as follows:

|O⟩ → (1/√2)(|A⟩ + |B⟩) → (1/√2)∑j (ψA(j) + ψB(j))|j⟩

If we now look at any given position |j⟩ on the screen, the probability of observing the particle at this position can be calculated using the Born rule:

P(j) = ½|ψA(j) + ψB(j)|² = ½|ψA(j)|² + ½|ψB(j)|² + ½ψA(j)ψB(j)* + ½ψA(j)*ψB(j)

Notice that the first term is what you'd expect to get for the probability of a particle leaving |A⟩ being observed at position |j⟩ and the second term is the probability of a particle from |B⟩ being observed at |j⟩. The final two terms are called interference terms, and they give us the non-classical wave-like behavior that's typical of these double-slit setups.

Now, what we just imagined was a very idealized situation in which the only parts of the universe that are relevant to our calculation are the particle, the two slits and the detector. But in reality, as the particle is traveling to the detector, it's likely going to be interacting with the environment. This interaction is probably going to be slightly different for a particle taking the path through |A⟩ than for a particle taking the path through |B⟩, and these differences end up being immensely important.
To capture the effects of the environment in our experimental setup, let's add an "environment" term to all of our states. At time zero, when the particle is at the origin, we'll say that the environment is in some state |ε0⟩. Now, as the particle traverses the path to |A⟩ or to |B⟩, the environment might change slightly, so we need to give two new labels for the state of the environment in each case. |εA⟩ will be our description for the state of the environment that would result if the particle traversed the path from |O⟩ to |A⟩, and |εB⟩ will be the label for the state of the environment resulting from the particle traveling from |O⟩ to |B⟩. Now, to describe our system, we need to take the tensor product of the vector for our particle's state and the vector for the environment's state:

|O⟩|ε0⟩ → (1/√2)(|A⟩|εA⟩ + |B⟩|εB⟩) → (1/√2)(∑j ψA(j)|j⟩|εA⟩ + ∑j ψB(j)|j⟩|εB⟩)

Now, what is the probability of the particle being observed at position j? Well, there are two possible worlds in which the particle is observed at position j; one in which the environment is in state |εA⟩ and the other in which it's in state |εB⟩. So the probability will just be the sum of the probabilities for each of these possibilities:

P(j) = ½|ψA(j)|² + ½|ψB(j)|² + Re[⟨εA|εB⟩ ψA(j)* ψB(j)]

This final equation gives us the general answer to the double slit experiment, no matter what the changes to the environment are. Notice that all that is relevant about the environment is the overlap term ⟨εA|εB⟩, which we'll give a special name to:

α = ⟨εA|εB⟩

This term tells us how different the two possible end states for the environment look. If the overlap is zero, then the two environment states are completely orthogonal (corresponding to perfect decoherence of the initial superposition). If the overlap is one, then the environment states are identical. And look what we get when we express the final probability in terms of this term!

P(j) = ½|ψA(j)|² + ½|ψB(j)|² + Re[α ψA(j)* ψB(j)]

Perfect decoherence gives us classical probabilities, and perfect coherence gives us the ideal equation we found in the first part of the post! Anything in between allows the two states to interfere with each other to some limited degree, not behaving like totally separate branches of the wavefunction, nor like one single branch.

The problem with the many worlds interpretation of quantum mechanics

The Schrödinger equation is the formula that describes the dynamics of quantum systems – how small stuff behaves. One fundamental feature of quantum mechanics that differentiates it from classical mechanics is the existence of something called superposition. In the same way that a particle can be in the state of "being at position A" and could also be in the state of "being at position B", there's a weird additional possibility that the particle is in the state of "being in a superposition of being at position A and being at position B". It's necessary to introduce a new word for this type of state, since it's not quite like anything we are used to thinking about.

Now, people often talk about a particle in a superposition of states as being in both states at once, but this is not technically correct. The behavior of a particle in a superposition of positions is not the behavior you'd expect from a particle that was at both positions at once. Suppose you sent a stream of small particles towards each position and looked to see if either one was deflected by the presence of a particle at that location. You would always find that exactly one of the streams was deflected. Never would you observe the particle having been in both positions, deflecting both streams. But it's also just as wrong to say that the particle is in either one state or the other.
Again, particles simply do not behave this way. Throw a bunch of electrons, one at a time, through a pair of thin slits in a wall and see how they spread out when they hit a screen on the other side. What you'll get is a pattern that is totally inconsistent with the image of the electrons always being either at one location or the other. Instead, the pattern you'd get only makes sense under the assumption that the particle traveled through both slits and then interfered with itself.

If a superposition of A and B is not the same as "A and B" and it's not the same as "A or B", then what is it? Well, it's just that: a superposition! A superposition is something fundamentally new, with some of the features of "and" and some of the features of "or". We can do no better than to describe the empirically observed features and then give that cluster of features a name. Now, quantum mechanics tells us that for any two possible states that a system can be in, there is another state that corresponds to the system being in a superposition of the two. In fact, there's an infinity of such superpositions, each corresponding to a different weighting of the two states.

The Schrödinger equation is what tells us how quantum mechanical systems evolve over time. And since all of nature is just one really big quantum mechanical system, the Schrödinger equation should also tell us how we evolve over time. So what does the Schrödinger equation tell us happens when we take a particle in a superposition of A and B and make a measurement of it? The answer is clear and unambiguous: The Schrödinger equation tells us that we ourselves enter into a superposition of states, one in which we observe the particle in state A, the other in which we observe it in B.

This is a pretty bizarre and radical answer! The first response you might have may be something like "When I observe things, it certainly doesn't seem like I'm entering into a superposition… I just look at the particle and see it in one state or the other. I never see it in this weird in-between state!" But this is not a good argument against the conclusion, as it's exactly what you'd expect by just applying the Schrödinger equation! When you enter into a superposition of "observing A" and "observing B", neither branch of the superposition observes both A and B. And naturally, since neither branch of the superposition "feels" the other branch, nobody freaks out about being superposed.

But there is a problem here, and it's a serious one. The problem is the following: Sure, it's compatible with our experience to say that we enter into superpositions when we make observations. But what predictions does it make? How do we take what the Schrödinger equation says happens to the state of the world and turn it into a falsifiable experimental setup? The answer appears to be that we can't. At least, not using just the Schrödinger equation on its own. To extract predictions, we need an additional postulate, known as the Born rule. This postulate says the following: For a system in a superposition, each branch of the superposition has an associated complex number called the amplitude. The probability of observing any particular branch of the superposition upon measurement is simply the squared magnitude of that branch's amplitude.

For example: A particle is in a superposition of positions A and B. The amplitude attached to A is 0.6. The amplitude attached to B is 0.8. If we now observe the position of the particle, we will find it to be at either A with probability (0.6)² (i.e.
36%), or B with probability (0.8)² (i.e. 64%). Simple enough, right?

The problem is to figure out where the Born rule comes from and what it even means. The rule appears to be completely necessary to make quantum mechanics a testable theory at all, but it can't be derived from the Schrödinger equation. And it's not at all inevitable; it could easily have been that probabilities associated with the amplitude were gotten by taking absolute values rather than squares. Or why not the fourth power of the amplitude? There's a substantive claim here, that probabilities are associated with the squares of the amplitudes that go into the Schrödinger equation, and it needs to be made sense of. There are a lot of different ways that people have tried to do this, and I'll list a few of the more prominent ones here.

The Copenhagen Interpretation

(Prepare to be disappointed.) The Copenhagen interpretation, which has historically been the dominant position among working physicists, is that the Born rule is just an additional rule governing the dynamics of quantum mechanical systems. Sometimes systems evolve according to the Schrödinger equation, and sometimes according to the Born rule. When they evolve according to the Schrödinger equation, they split into superpositions endlessly. When they evolve according to the Born rule, they collapse into a single determinate state. What determines when the systems evolve one way or the other? Something measurement something something observation something. There's no real consensus here, nor even a clear set of well-defined candidate theories.

If you're familiar with the way that physics works, this idea should send your head spinning. The claim here is that the universe operates according to two fundamentally different laws, and that the dividing line between the two hinges crucially on what we mean by the words "measurement" and "observation". Suffice it to say, if this was the right way to understand quantum mechanics, it would go entirely against the spirit of the goal of finding a fundamental theory of physics. In a fundamental theory of physics, macroscopic phenomena like measurements and observations need to be built out of the behavior of lots of tiny things like electrons and quarks, not the other way around. We shouldn't find ourselves in the position of trying to give a precise definition to these words, debating whether frogs have the capacity to collapse superpositions or if that requires a higher "measuring capacity", in order to make predictions about the world (as proponents of the Copenhagen interpretation have in fact done!).

The Copenhagen interpretation is not an elegant theory, it's not a clearly defined theory, and it's fundamentally at tension with the project of theoretical physics. So why has it been, as I said, the dominant approach over the last century to understanding quantum mechanics? This really comes down to physicists not caring enough about the philosophy behind the physics to notice that the approach they are using is fundamentally flawed. In practice, the Copenhagen interpretation works. It allows somebody working in the lab to quickly assess the results of their experiments and to make predictions about how future experiments will turn out. It gives the right empirical probabilities and is easy to implement, even if the fuzziness in the details can make your head hurt if you think about it too much.
As Jean Bricmont said, "You can't blame most physicists for following this 'shut up and calculate' ethos because it has led to tremendous developments in nuclear physics, atomic physics, solid-state physics and particle physics." But the Copenhagen interpretation is not good enough for us. A serious attempt to make sense of quantum mechanics requires something more substantive. So let's move on.

Objective Collapse Theories

These approaches hinge on the notion that the Schrödinger equation really is the only law at work in the universe, it's just that we have that equation slightly wrong. Objective collapse theories add slight nonlinearities to the Schrödinger equation so that systems sometimes spread out in superpositions and other times collapse into definite states, all according to one single equation. The most famous of these is the spontaneous collapse theory, according to which quantum systems collapse with a probability that grows with the number of particles in the system. This approach is nice for several reasons. For one, it gives us the Born rule without requiring a new equation. It makes sense of the Born rule as a fundamental feature of physical reality, and makes precise and empirically testable predictions that can distinguish it from other interpretations. The drawback? It makes the Schrödinger equation ugly and complicated, and it adds extra parameters that determine how often collapse happens. And as we know, whenever you start adding parameters you run the risk of overfitting your data.

Hidden Variable Theories

These approaches claim that superpositions don't really exist; they're just a high-level consequence of the unusual behavior of the stuff at the smallest level of reality. They deny that the Schrödinger equation is truly fundamental, and say instead that it is a higher-level approximation of an underlying deterministic reality. "Deterministic?! But hasn't quantum mechanics been shown conclusively to be indeterministic??" Well, not entirely. For a while there was a common sentiment amongst physicists that John von Neumann and others had proved beyond a doubt that no deterministic theory could make the predictions that quantum mechanics makes. Later, subtle mistakes were found in these purported proofs that left a door open for determinism. Today there are well-known fleshed-out hidden variable theories that successfully reproduce the predictions of quantum mechanics, and do so fully deterministically. The most famous of these is certainly Bohmian mechanics, also called pilot wave theory. Here's a nice video on it if you'd like to know more, complete with pretty animations. Bohmian mechanics is interesting, appears to work, gives us the Born rule, and is probably empirically distinguishable from other theories (at least in principle). A serious issue with it is that it requires nonlocality, which is a challenge to any attempt to make it consistent with special relativity. Locality is such an important and well-understood feature of our reality that this constitutes a major challenge to the approach.

Many-Worlds / Everettian Interpretations

Ok, finally we talk about the approach that is most interesting in my opinion, and get to the title of this post. The Many-Worlds interpretation says, in essence, that we were wrong to ever want more than the Schrödinger equation. This is the only law that governs reality, and it gives us everything we need. Many-Worlders deny that superpositions ever collapse.
The result of us performing a measurement on a system in superposition is simply that we end up in superposition, and that's the whole story! So superpositions never collapse, they just go deeper into superposition. There's not just one you, there's every you, spread across the different branches of the wave function of the universe. All these yous exist beside each other, living out all your possible life histories.

But then where does Many-Worlds get the Born rule from? Well, uh, it's kind of a mystery. The Born rule isn't an additional law of physics, because the Schrödinger equation is supposed to be the whole story. It's not an a priori rule of rationality, because, as we said before, probabilities could have easily gone as the fourth power of amplitudes, or something else entirely. But if it's not an a posteriori fact about physics, and also not an a priori knowable principle of rationality, then what is it? This issue has seemed to me to be more and more important and challenging for Many-Worlds the more I have thought about it. It's hard to see what exactly the rule is even saying in this interpretation.

Say I'm about to make a measurement of a system in a superposition of states A and B. Suppose that I know the amplitude of A is much smaller than the amplitude of B. I need some way to say "I have a strong expectation that I will observe B, but there's a small chance that I'll see A." But according to Many-Worlds, a moment from now both observations will be made. There will be a branch of the superposition in which I observe A, and another branch in which I observe B. So what I appear to need to say is something like "I am much more likely to be the me in the branch that observes B than the me that observes A." But this is a really strange claim that leads us straight into the thorny philosophical issue of personal identity. In what sense are we allowed to say that one and only one of the two resulting humans is really going to be you? Don't both of them have equal claim to being you? They each have your exact memories and life history so far; the only difference is that one observed A and the other B.

Maybe we can use anthropic reasoning here? If I enter into a superposition of observing-A and observing-B, then there are now two "me"s, in some sense. But that gives the wrong prediction! Using the self-sampling assumption, we'd just say "Okay, two yous, so there's a 50% chance of being each one" and be done with it. But obviously not all binary quantum measurements we make have a 50% chance of turning out either way! Maybe we can say that the world actually splits into some huge number of branches, maybe even infinite, and the fraction of the total branches in which we observe A is exactly the square of the amplitude of A? But this is not what the Schrödinger equation says! The Schrödinger equation tells us exactly what happens after we make the observation: we enter a superposition of two states, no more, no less. We're importing a whole lot into our interpretive apparatus by interpreting this result as claiming the literal existence of an infinity of separate worlds, most of which are identical, and the distribution of which is governed by the amplitudes.

What we're seeing here is that Many-Worlds, by being too insistent on the reality of the superposition, the sole sovereignty of the Schrödinger equation, and the unreality of collapse, ends up running into a lot of problems in actually doing what a good theory of physics is supposed to do: making empirical predictions.
The Many-Worlders can of course use the Born Rule freely to make predictions about the outcomes of experiments, but they have little to say in answer to what, in their eyes, this rule really amounts to. I don't know of any good way out of this mess. Basically where this leaves me is where I find myself with all of my favorite philosophical topics; totally puzzled and unsatisfied with all of the options that I can see.

More on quantum entanglement and irreducibility

A few posts ago, I talked about how quantum mechanics entails the existence of irreducible states – states of particles that in principle cannot be described as the product of their individual components. The classic example of such an entangled state is the two qubit state

(1/√2)(|00⟩ + |11⟩)

This state describes a system which is in an equal-probability superposition of both particles being |0⟩ and both particles being |1⟩. As it turns out, this state cannot be expressed as the product of two single-qubit states. A friend of mine asked me a question about this that was good enough to deserve its own post in response.

Start by imagining that Alice and Bob each have a coin. They each put their coin inside a small box with heads facing up. Now they close their respective boxes, and shake them up in the exact same way. This is important! (as well as unrealistic) We suppose that whatever happens to the coin in Alice's box, also happens to the coin in Bob's box. Now we have two boxes, each of which contains a coin, and these coins are guaranteed to be facing the same way. We just don't know what way they are facing.

Alice and Bob pick up their boxes, being very careful to not disturb the states of their respective coins, and travel to opposite ends of the galaxy. The Milky Way is 100,000 light years across, so any communication between the two now would take a minimum of 100,000 years. But if Alice now opens her box, she instantly knows the state of Bob's coin! So while Alice and Bob cannot send messages about the state of their boxes any faster than 100,000 years, they can instantly receive information about each other's boxes by just observing their own! Is this a contradiction? No, of course not. While Alice does learn something about Bob's box, this is not because of any message passed between the two. It is the result of the fact that in the past the configurations of their coins were carefully designed to be identical. So what seemed on its face to be special and interesting turns out to be no paradox at all.

Finally, we get to the question my friend asked. How is this any different from the case of entangled particles in quantum mechanics?? Both systems would be found to be in the states |00⟩ and |11⟩ with equal probability (where |0⟩ is heads and |1⟩ is tails). And both have the property that learning the state of one instantly tells you the state of the other. Indeed, the coins-in-boxes system also has the property of irreducibility that we talked about before! Try as we might, we cannot coherently treat the system of both coins as the product of two independent coins, as doing so will ignore the statistical dependence between the two coins. (Which, by the way, is exactly the sort of statistical dependence that justifies timeless decision theory and makes it a necessary update to decision theory.) I love this question.
The premise of the question is that we can construct a classical system that behaves in just the same supposedly weird ways that quantum systems behave, and thus make sense of all this mystery. And answering it requires that we get to the root of why quantum mechanics is a fundamentally different description of reality than anything classical. So! I'll describe the two primary disanalogies between entangled particles and "entangled" coins.

Epistemic Uncertainty vs Fundamental Indeterminacy

First disanalogy. With the coins, either they are both heads or they are both tails. There is an actual fact in the world about which of these two is true, and the probabilities we reference when we talk about the chance of HH or TT represent epistemic uncertainty. There is a true determinate state of the coins, and probability only arises as a way to deal with our imperfect knowledge. On the other hand, according to the mainstream interpretation of quantum mechanics, the state of the two particles is fundamentally indeterminate. There isn't a true fact out there waiting to be discovered about whether the state is |00⟩ or |11⟩. The actual state of the system is this unusual thing called a superposition of |00⟩ and |11⟩. When we observe it to be |00⟩, the state has now actually changed from the superposition to the determinate state.

We can phrase this in terms of counterfactuals: If when we look at the coins, we see that they are HH, then we know that they were HH all along. In particular, we know that if we had observed them a moment later or earlier, we would have gotten HH with 100% certainty. Given that we actually observed HH, the probability that we would have observed HH is 100%. But if we observe the state of the particles to be |00⟩, this does not mean that had we observed it a moment before, we would be guaranteed to get the same answer. Given that we actually observed |00⟩, the probability that we would have observed |00⟩ is still 50%. (A project for some enterprising reader: see what the truths of these counterfactuals imply for an interpretation of quantum mechanics in terms of Pearl-style causal diagrams. Is it even possible to do?)

Predictive differences

The second difference between the two cases is a straightforward experimental difference. Suppose that Alice and Bob identically prepare thousands of coins as we described before, and also identically prepare thousands of entangled particles. They ensure that the coins are treated exactly the same way, so that they are guaranteed to all be in the same state, and similarly for the entangled pairs. If they now just observe all of their entangled pairs and coins, they will get similar results – roughly half of the coins will be HH and roughly half of the entangled pairs will be |00⟩. But there are other experiments they could run on the entangled pairs that would give different answers depending on whether we treat the particles as being in superposition or not. I described what these experiments could be in this earlier post – essentially they involve applying an operation that takes qubits in and out of superposition (see the sketch at the end of this post). The conclusion of this is that even if you tried to model the entangled pair as a simple probability distribution similar to the coins, you will get the wrong answer in some experiments.

So we have both a theoretical argument and a practical argument for the difference between these two cases. The key take-away is the following: According to quantum mechanics an entangled pair is in a state that is fundamentally indeterminate.
When we describe it with probabilities, we are not saying “This probabilistic description is an account of my imperfect knowledge of the state of the system”. We’re saying that nature herself is undecided on what we will observe when we look at the state. (Side note: there is actually a way to describe epistemic uncertainty in quantum mechanics. It is called the density matrix, and is distinct from the description of superpositions.) In addition, the most fundamental and accurate probability description for the state of the two particles is one that cannot be described as the product of two independent particles. This is not the case with the coins! The most fundamental and accurate probability description for the state of the two coins is either 100% HH or 100% TT (whichever turns out to be the case). What this means is that in the quantum case, not only is the state indeterminate, but the two particles are fundamentally interdependent – entangled. There is no independent description of the individual components of the system, there is only the system as a whole.
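Here is a minimal numpy sketch of the kind of experiment alluded to above (my own construction, not the one from the author's earlier post): rotate both qubits with a Hadamard gate before measuring. An entangled pair and a coins-style classical mixture agree perfectly if you measure directly, but come apart in the rotated basis:

```python
import numpy as np

# Bell state vs. a classical 50/50 mixture of |00> and |11>, measured after
# rotating both qubits with a Hadamard gate. Illustrative sketch only.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)

ket00 = np.array([1, 0, 0, 0], dtype=complex)
ket11 = np.array([0, 0, 0, 1], dtype=complex)

# Entangled pair: a single state vector (fundamentally indeterminate).
bell = (ket00 + ket11) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())

# "Coins in boxes": a density matrix encoding mere ignorance of HH vs TT.
rho_coins = 0.5 * np.outer(ket00, ket00.conj()) + 0.5 * np.outer(ket11, ket11.conj())

def probabilities(rho):
    """Outcome probabilities in the computational basis after H on each qubit."""
    rho_rotated = HH @ rho @ HH.conj().T
    return np.real(np.diag(rho_rotated))

print("entangled:", probabilities(rho_bell))   # [0.5, 0.0, 0.0, 0.5]
print("coins:    ", probabilities(rho_coins))  # [0.25, 0.25, 0.25, 0.25]
```

Measured directly, both preparations give 50/50 on |00⟩ and |11⟩; it is the rotated-basis measurement that exposes the difference between a genuine superposition and mere ignorance.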
Thursday, 24 April 2014

No, We Don't Understand Quantum Mechanics, But There Is Hope.

Yes, QM is a strange world.

The preface of the book Do We Really Understand Quantum Mechanics? by Franck Laloë, supplemented by an article with the same title, tells the truth about quantum mechanics:

• In many ways, quantum mechanics (QM) is a surprising theory... because it creates a big contrast between its triumphs and difficulties.
• On the one hand, among all theories, quantum mechanics is probably one of the most successful achievements of science. The applications of quantum mechanics are everywhere in our twenty-first-century environment, with all sorts of devices that would have been unthinkable 50 years ago.
• On the other hand, conceptually this theory remains relatively fragile because of its delicate interpretation – fortunately, this fragility has little consequence for its efficiency.
• The reason why difficulties persist is certainly not that physicists have tried to ignore them or put them under the rug!
• Actually, a large number of interpretations have been proposed over the decades, involving various methods and mathematical techniques.
• We have a rare situation in the history of sciences: consensus exists concerning a systematic approach to physical phenomena, involving calculation methods having an extraordinary predictive power; nevertheless, almost a century after the introduction of these methods, the same consensus is far from being reached concerning the interpretation of the theory and its foundations.
• This is reminiscent of the colossus with feet of clay.
• The difficulties of quantum mechanics originate from the object it uses to describe physical systems, the state vector (wave function) $\Psi$.
• Without any doubt, the state vector is a curious object to describe reality!

The message is that QM is a formidable achievement of the human intellect which is incredibly useful in practice, but which, like a colossus with feet of clay, has a main character flaw, namely that it is a curious way to describe reality and as such is not understood by physicists.

There are two ways to handle a physical theory that is not understood because it is so curious: either the theory is dismissed as being seriously flawed, or the curiosity is taken as a sign that the theory is correct and beyond questioning by human minds.

The reason QM is so mysterious is that the wave function $\Psi =\Psi (x_1,x_2,…,x_N)$ for an atom or molecule with $N$ electrons depends on $N$ independent three-dimensional space variables $x_1$, $x_2$,…, $x_N$, together with time, thus is a function in $3N$ space dimensions plus time and as such has no direct real physical meaning, since real physics takes place in $3$ space dimensions.

The wave function $\Psi$ is introduced as the solution to a linear multi-dimensional wave equation named Schrödinger's equation, of the form

$i\hbar\,\dot\Psi = H\Psi$,

where $H$ is a Hamiltonian operator acting on wave functions. The mysticism of QM thus originates from Schrödinger's equation and is manifested by the fact that there is no real derivation of Schrödinger's equation from basic physical laws. Instead, Schrödinger's equation is motivated as a purely formal manipulation of classical Hamiltonian mechanics without physical meaning.
The main trouble with QM based on a linear multi-d Schrödinger equation is thus the physical interpretation of the multi-d wave function, and the accepted answer to this enigma is to view

• $\vert\Psi (x_1,…,x_N)\vert^2$

as a probability distribution of a particle configuration described by the coordinates $(x_1,…,x_N)$, representing human knowledge about physics and not physics itself. Epistemology of what we can know is thus allowed to replace ontology of what is. The linear multi-d Schrödinger equation thus lacks connection to physical reality. Moreover, because of its many dimensions the equation cannot be solved (analytically or computationally), and the beautiful net result is that QM is based on an equation without physical meaning which cannot be solved. No wonder that physicists still after 100 years of hard struggle do not really understand QM.

But since Schrödinger's linear multi-d equation lacks physical meaning (and cannot be solved either), there is no compelling reason to view it as the foundation of atomistic physics.

It appears to be more constructive to consider instead systems of non-linear Schrödinger equations in $N$ three-dimensional wave functions $\psi_1(x),…,\psi_N(x)$ with $x$ a 3d space coordinate, in the spirit of Hartree models, as physically meaningful computable models of potentially great practical usefulness. Sums of such wave functions then play a basic role and have physical meaning, to be compared with the standard setting, where $\Psi (x_1,…,x_N)$ takes the form of Slater determinants: sums of multi-d products $\psi (x_1)\psi (x_2)…\psi (x_N)$ of complicated unphysical nature.
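For concreteness, a generic Hartree-type system of the kind gestured at here (a standard textbook form in suitable units absorbing physical constants, not necessarily the specific model intended above) couples $N$ three-dimensional wave functions through their charge densities:

$i\,\partial_t\psi_j(x) = -\frac{1}{2}\Delta\psi_j(x) + V(x)\,\psi_j(x) + \Big(\sum_{k\neq j}\int\frac{\vert\psi_k(y)\vert^2}{\vert x-y\vert}\,dy\Big)\psi_j(x),\qquad j=1,…,N,$

where $V(x)$ is the nuclear potential. Each $\psi_j$ lives in ordinary $3$-dimensional space, so the computational work grows roughly like $N$ copies of a single 3d problem instead of exponentially in $N$ as for the full multi-d $\Psi$.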
Tomoi Koide is a Professor at the Institute of Physics, Federal University of Rio de Janeiro, Brazil. He completed his PhD at Tohoku University, Japan, and postdoctoral studies at Frankfurt University, the Federal University of Rio de Janeiro and elsewhere. He has published more than 60 papers in reputed journals.

The variational principle plays a fundamental role in elucidating the structure of classical mechanics, clarifying the origin of dynamics and the relation between symmetries and conservation laws. In classical mechanics, the optimized function is characterized by the Lagrangian, defined as T − V, with T and V being the kinetic and potential terms, respectively. We can still formulate a variational principle in quantum mechanics, but the Lagrangian does not have the form T − V any more. Therefore, at first glance, no clear or direct correspondence between classical and quantum mechanics seems to exist from the variational point of view, but it does exist. For this, we need to extend the usual variational method to the case of stochastic variables. This is called the stochastic variational method (SVM). The Schrödinger equation can then be obtained by the stochastic optimization of the same action which, under the classical variation, leads to the Newton equation. From this point of view, quantization can be regarded as a process of stochastic optimization, and the invariance of the action leads to the conservation laws of quantum mechanics. In this manner, classical and quantum behaviors are described in a unified way under SVM. Although SVM was originally proposed as a reformulation of Nelson's stochastic quantization, its applicability is not restricted to quantization. In fact, dissipative dynamics such as the Navier-Stokes-Fourier (viscous fluid) equation can be obtained by applying SVM to the Lagrangian which leads to the Euler (ideal fluid) equation in the classical variational method. This method is useful even for obtaining coarse-grained dynamics. For example, the Gross-Pitaevskii equation is regarded as an optimized dynamics in SVM. Therefore it is possible to consider that the study of SVM enables us to generalize the framework of analytical mechanics.
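Schematically (a rough sketch in the spirit of Nelson-type stochastic mechanics; the rigorous construction of the stochastic action uses mean forward and backward derivatives and is more involved), the classical variation

$\delta\int\Big(\frac{m}{2}\,\dot x^2 - V(x)\Big)\,dt = 0 \quad\Longrightarrow\quad m\,\ddot x = -\nabla V(x)$

is replaced in SVM by a variation over stochastic trajectories $dx(t) = u(x,t)\,dt + \sqrt{\hbar/m}\,dW(t)$, with $W$ a Wiener process; optimizing the correspondingly averaged action then yields

$i\hbar\,\partial_t\psi = \Big(-\frac{\hbar^2}{2m}\Delta + V\Big)\psi,$

with $\psi$ reconstructed from the optimal drift $u$ and the probability density of the trajectories.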
Many-worlds interpretation

The quantum-mechanical "Schrödinger's cat" paradox according to the many-worlds interpretation. In this interpretation, every event is a branch point; the cat is both alive and dead, even before the box is opened, but the "alive" and "dead" cats are in different branches of the universe, both of which are equally real, but which do not interact with each other.[1]

The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternate histories and futures are real, each representing an actual "world" (or "universe"). In layman's terms, the hypothesis states there is a very large—perhaps infinite[2]—number of universes, and everything that could possibly have happened in our past, but did not, has occurred in the past of some other universe or universes. The theory is also referred to as MWI, the relative state formulation, the Everett interpretation, the theory of the universal wavefunction, many-universes interpretation, or just many-worlds. The original relative state formulation is due to Hugh Everett in 1957.[3][4] Later, this formulation was popularized and renamed many-worlds by Bryce Seligman DeWitt in the 1960s and 1970s.[1][5][6][7] The decoherence approaches to interpreting quantum theory have been further explored and developed,[8][9][10] becoming quite popular. MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the historical Copenhagen interpretation),[11] and hidden variable theories such as Bohmian mechanics.

Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views reality as a many-branched tree, wherein every possible quantum outcome is realised.[12] Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics. In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox[13][14] and Schrödinger's cat,[1] since every possible outcome of every event defines or exists in its own "history" or "world".

In Dublin in 1952 Erwin Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that when his Nobel Prize-winning equations seem to be describing several different histories, they are "not alternatives but all really happen simultaneously". This is the earliest known reference to many-worlds.[15][16]

Although several versions of many-worlds have been proposed since Hugh Everett's original work,[4] they all contain one key idea: the equations of physics that model the time evolution of systems without embedded observers are sufficient for modelling systems which do contain observers; in particular there is no observation-triggered wave function collapse which the Copenhagen interpretation proposes.
Provided the theory is linear with respect to the wavefunction, the exact form of the quantum dynamics modelled, be it the non-relativistic Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, does not alter the validity of MWI since MWI is a metatheory applicable to all linear quantum theories, and there is no experimental evidence for any non-linearity of the wavefunction in physics.[17][18] MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of very many, possibly even non-denumerably infinitely[2] many, increasingly divergent, non-communicating parallel universes or quantum worlds.[7] The idea of MWI originated in Everett's Princeton Ph.D. thesis "The Theory of the Universal Wavefunction",[7] developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 entitled "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state";[19] Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt,[7] who was responsible for the wider popularisation of Everett's theory, which had been largely ignored for the first decade after publication. DeWitt's phrase "many-worlds" has become so much more popular than Everett's "Universal Wavefunction" or Everett–Wheeler's "Relative State Formulation" that many forget that this is only a difference of terminology; the content of both of Everett's papers and DeWitt's popular article is the same. The many-worlds interpretation shares many similarities with later, other "post-Everett" interpretations of quantum mechanics which also use decoherence to explain the process of measurement or wavefunction collapse. MWI treats the other histories or worlds as real since it regards the universal wavefunction as the "basic physical entity"[20] or "the fundamental entity, obeying at all times a deterministic wave equation".[21] The other decoherent interpretations, such as consistent histories, the Existential Interpretation etc., either regard the extra quantum worlds as metaphorical in some sense, or are agnostic about their reality; it is sometimes hard to distinguish between the different varieties. MWI is distinguished by two qualities: it assumes realism,[20][21] which it assigns to the wavefunction, and it has the minimal formal structure possible, rejecting any hidden variables, quantum potential, any form of a collapse postulate (i.e., Copenhagenism) or mental postulates (such as the many-minds interpretation makes). Decoherent interpretations of many-worlds using einselection to explain how a small number of classical pointer states can emerge from the enormous Hilbert space of superpositions have been proposed by Wojciech H. Zurek. "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected."[22] These ideas complement MWI and bring the interpretation in line with our perception of reality. 
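A toy illustration of einselection (my own sketch, not from the article): repeated random phase kicks from the environment, applied in the pointer basis, wipe out coherences between pointer states while leaving the pointer states themselves intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy einselection: the environment repeatedly "kicks" a qubit with random
# phases in the pointer basis {|0>, |1>}. Pointer states are untouched;
# superpositions of them lose their off-diagonal coherences. Illustrative only.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # superposition of pointer states
rho = np.outer(plus, plus.conj())

for _ in range(50):
    phi = rng.uniform(0, 2 * np.pi)
    U = np.diag([1, np.exp(1j * phi)])                # environment-induced phase kick
    rho = 0.5 * (U @ rho @ U.conj().T) + 0.5 * rho    # average over kick / no kick

print(np.round(rho, 6))
# The diagonal (pointer) populations survive unchanged, while the off-diagonal
# coherences have decayed to ~0, leaving a mixture of the stable pointer states.
```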
Many-worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many-worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett) or by those who propose that all the other, non-MW interpretations, are inconsistent, illogical or unscientific in their handling of measurements; Hugh Everett argued that his formulation was a metatheory, since it made statements about other interpretations of quantum theory; that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world."[23] Deutsch is dismissive that many-worlds is an "interpretation", saying that calling it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records."[24]

Interpreting wavefunction collapse

As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) are passed through the double slit, a calculation assuming wave-like behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves. Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse down onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wavefunction collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable.

Everett's Ph.D. work provided such an alternative interpretation. Everett stated that for a composite system – for example a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle) – the statement that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled; we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wavefunction collapse) the notion of a relativity of states.

Everett noticed that the unitary, deterministic dynamics alone decreed that after an observation is made each element of the quantum superposition of the combined subject–object wavefunction contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wavefunction collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wavefunction's collapse has emerged from the unitary, deterministic theory itself.
(This answered Einstein's early criticism of quantum theory, that the theory should define what is observed, not for the observables to define the theory.)[25] Since the wavefunction merely appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wavefunction collapse from the theory.

Attempts have been made, by many-world advocates and others, over the years to derive the Born rule, rather than just conventionally assume it, so as to reproduce all the required statistical behaviour associated with quantum mechanics. There is no consensus on whether this has been successful.[26][27][28]

Frequency-based approaches

Everett (1957) briefly derived the Born rule by showing that the Born rule was the only possible rule, and that its derivation was as justified as the procedure for defining probability in classical mechanics. Everett stopped doing research in theoretical physics shortly after obtaining his Ph.D., but his work on probability has been extended by a number of people. Andrew Gleason (1957) and James Hartle (1965) independently reproduced Everett's work[29] which was later extended.[30][31] These results are closely related to Gleason's theorem, a mathematical result according to which the Born probability measure is the only one on Hilbert space that can be constructed purely from the quantum state vector.[32] Bryce DeWitt and his doctoral student R. Neill Graham later provided alternative (and longer) derivations to Everett's derivation of the Born rule.[7] They demonstrated that the norm of the worlds where the usual statistical rules of quantum theory broke down vanished, in the limit where the number of measurements went to infinity.

Decision theory

A decision-theoretic derivation of the Born rule from Everettarian assumptions was produced by David Deutsch (1999)[33] and refined by Wallace (2002–2009)[34][35][36][37] and Saunders (2004).[38][39] Deutsch's derivation is a two-stage proof: first he shows that the number of orthonormal Everett-worlds after a branching is proportional to the conventional probability density. Then he uses game theory to show that these are all equally likely to be observed. The last step in particular has been criticised for circularity.[40][41] Some other reviews have been positive, although the status of these arguments remains highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes.[42] In the New Scientist article, reviewing their presentation at a September 2007 conference,[43][44] Andy Albrecht, a physicist at the University of California at Davis, is quoted as saying "This work will go down as one of the most important developments in the history of science."[42]

The Born rule and the collapse of the wave function have been obtained in the framework of the relative-state formulation of quantum mechanics by Armando V.D.B. Assis. He has proved that the Born rule and the collapse of the wave function follow from a game-theoretical strategy, namely the Nash equilibrium within a von Neumann zero-sum game between nature and observer.[45]

Symmetries and invariance

Wojciech H. Zurek (2005)[46] has produced a derivation of the Born rule, where decoherence has replaced Deutsch's informatic assumptions.[47] Lutz Polley (2000) has produced Born rule derivations where the informatic assumptions are replaced by symmetry arguments.[48][49] Charles Sebens and Sean M.
Carroll, building on work by Lev Vaidman,[50] proposed a similar approach based on self-locating uncertainty.[51] In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule.

Brief overview

In Everett's formulation, a measuring apparatus M and an object system S form a composite system, each of which prior to measurement exists in well-defined (but time-dependent) states. Measurement is regarded as causing M and S to interact. After S interacts with M, it is no longer possible to describe either system by an independent state. According to Everett, the only meaningful descriptions of each system are relative states: for example the relative state of S given the state of M or the relative state of M given the state of S. In DeWitt's formulation, the state of S after a sequence of measurements is given by a quantum superposition of states, each one corresponding to an alternative measurement history of S.

[Figure: Schematic illustration of splitting as a result of a repeated measurement.]

For example, consider the smallest possible truly quantum system S, as shown in the illustration. This describes, for instance, the spin-state of an electron. Considering a specific axis (say the z-axis) the north pole represents spin "up" and the south pole, spin "down". The superposition states of the system are described by (the surface of) a sphere called the Bloch sphere. To perform a measurement on S, it is made to interact with another similar system M. After the interaction, the combined system is described by a state that ranges over a six-dimensional space (the reason for the number six is explained in the article on the Bloch sphere). This six-dimensional object can also be regarded as a quantum superposition of two "alternative histories" of the original system S, one in which "up" was observed and the other in which "down" was observed. Each subsequent binary measurement (that is interaction with a system M) causes a similar split in the history tree. Thus after three measurements, the system can be regarded as a quantum superposition of 8 = 2 × 2 × 2 copies of the original system S. The accepted terminology is somewhat misleading because it is incorrect to regard the universe as splitting at certain times; at any given instant there is one state in one universe.

Relative state

In his 1957 doctoral dissertation, Everett proposed that rather than modeling an isolated quantum system subject to external observation, one could mathematically model an object as well as its observers as purely physical systems within the mathematical framework developed by Paul Dirac, von Neumann and others, discarding altogether the ad hoc mechanism of wave function collapse. Since Everett's original work, there have appeared a number of similar formalisms in the literature. One such idea is discussed in the next section.

The relative state formulation makes two assumptions. The first is that the wavefunction is not simply a description of the object's state, but that it actually is entirely equivalent to the object, a claim it has in common with some other interpretations. The second is that observation or measurement has no special laws or mechanics, unlike in the Copenhagen interpretation which considers the wavefunction collapse as a special kind of event which occurs as a result of observation.
Instead, measurement in the relative state formulation is the consequence of a configuration change in the memory of an observer described by the same basic wave physics as the object being modeled. The many-worlds interpretation is DeWitt's popularisation of Everett's work; Everett had referred to the combined observer–object system as being split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. These splits generate a possible tree, as shown in the graphic below. Subsequently, DeWitt introduced the term "world" to describe a complete measurement history of an observer, which corresponds roughly to a single branch of that tree. Note that "splitting" in this sense is hardly new or even quantum mechanical. The idea of a space of complete alternative histories had already been used in the theory of probability since the mid-1930s, for instance to model Brownian motion. Partial trace as relative state: the mixed state obtained from observation is the partial trace of a linear superposition of states. Successive measurements produce successive splittings. Under the many-worlds interpretation, the Schrödinger equation, or relativistic analog, holds all the time everywhere. An observation or measurement of an object by an observer is modeled by applying the wave equation to the entire system comprising the observer and the object. One consequence is that every observation can be thought of as causing the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches, or split into many "worlds". Since many observation-like events have happened, and are constantly happening, there are an enormous and growing number of simultaneously existing states. If a system is composed of two or more subsystems, the system's state will be a superposition of products of the subsystems' states. Once the subsystems interact, their states are no longer independent. Each product of subsystem states in the overall superposition evolves over time independently of other products. The subsystems' states have become correlated or entangled and it is no longer possible to consider them independent of one another. In Everett's terminology each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted. Properties of the theory MWI removes the observer-dependent role in the quantum measurement process by replacing wavefunction collapse with quantum decoherence. Since the role of the observer lies at the heart of most if not all "quantum paradoxes," this automatically resolves a number of problems; see for example Schrödinger's cat thought experiment, the EPR paradox, von Neumann's "boundary problem" and even wave-particle duality. Quantum cosmology also becomes intelligible, since there is no need anymore for an observer outside of the universe.[citation needed] MWI is a realist, deterministic, arguably local theory, akin to classical physics (including the theory of relativity), at the expense of losing counterfactual definiteness.
MWI achieves this by removing wavefunction collapse, which is indeterministic and non-local, from the deterministic and local equations of quantum theory.[52] MWI (or other, broader multiverse considerations) provides a context for the anthropic principle, which may provide an explanation for the fine-tuned universe.[53][54] MWI, being a decoherent formulation, is axiomatically more streamlined than the Copenhagen and other collapse interpretations, and thus favoured under certain interpretations of Occam's razor.[55][unreliable source?] Of course there are other decoherent interpretations that also possess this advantage with respect to the collapse interpretations. Comparative properties and possible experimental tests One of the salient properties of the many-worlds interpretation is that it does not require an exceptional method of wave function collapse to explain it. "It seems that there is no experiment distinguishing the MWI from other no-collapse theories such as Bohmian mechanics or other variants of MWI... In most no-collapse interpretations, the evolution of the quantum state of the Universe is the same. Still, one might imagine that there is an experiment distinguishing the MWI from another no-collapse interpretation based on the difference in the correspondence between the formalism and the experience (the results of experiments)."[56] However, in 1985, David Deutsch published three related thought experiments which could test the theory against the Copenhagen interpretation.[57] The experiments require macroscopic quantum state preparation and quantum erasure by a hypothetical quantum computer, which is currently outside experimental possibility. Since then Lockwood (1989), Vaidman and others have made similar proposals.[56] These proposals also require an advanced technology which is able to place a macroscopic object in a coherent superposition, another task which it is uncertain will ever be possible to perform. Many other controversial ideas have been put forward though, such as a recent claim that cosmological observations could test the theory,[58] and another claim by Rainer Plaga (1997), published in Foundations of Physics, that communication might be possible between worlds.[59] Copenhagen interpretation In the Copenhagen interpretation, the mathematics of quantum mechanics allows one to predict probabilities for the occurrence of various events. When an event occurs, it becomes part of the definite reality, and alternative possibilities do not. There is no necessity to say anything definite about what is not observed. The universe decaying to a new vacuum state Any event that changes the number of observers in the universe may have experimental consequences.[60] Quantum tunnelling to a new vacuum state would reduce the number of observers to zero (i.e., kill all life).[citation needed] Some cosmologists[citation needed] argue that the universe is in a false vacuum state and that consequently the universe should have already experienced quantum tunnelling to a true vacuum state. This has not happened and is cited as evidence in favor of many-worlds. In some worlds, quantum tunnelling to a true vacuum state has happened, but most other worlds escape this tunnelling and remain viable. This can be thought of as a variation on quantum suicide. The many-minds interpretation is a multi-world interpretation that defines the splitting of reality on the level of the observers' minds.
In this, it differs from Everett's many-worlds interpretation, in which there is no special role for the observer's mind.[59] Common objections The many-worlds interpretation is very vague about the ways to determine when splitting happens, and nowadays usually the criterion is that the two branches have decohered. However, present day understanding of decoherence does not allow a completely precise, self-contained way to say when the two branches have decohered/"do not interact", and hence the many-worlds interpretation remains arbitrary. This objection says that it is not clear what is precisely meant by branching, and points to the lack of self-contained criteria specifying branching. MWI response: the decoherence or "splitting" or "branching" is complete when the measurement is complete. In Dirac notation a measurement is complete when $\langle O_i | O_j \rangle = \delta_{ij}$, where $|O_i\rangle$ represents the observer having detected the object system in the $i$th state. Before the measurement has started the observer states are identical; after the measurement is complete the observer states are orthonormal.[4][7] Thus a measurement defines the branching process: the branching is as well- or ill-defined as the measurement is; the branching is as complete as the measurement is complete – which is to say that the delta function above represents an idealised measurement. Although true "for all practical purposes", in reality the measurement, and hence the branching, is never fully complete, since delta functions are unphysical.[62] Since the role of the observer and measurement per se plays no special role in MWI (measurements are handled as all other interactions are) there is no need for a precise definition of what an observer or a measurement is — just as in Newtonian physics no precise definition of either an observer or a measurement was required or expected. In all circumstances the universal wavefunction is still available to give a complete description of reality. Also, it is a common misconception to think that branches are completely separate. In Everett's formulation, they may in principle quantum interfere (i.e., "merge" rather than "split") with each other in the future,[63] although this requires all "memory" of the earlier branching event to be lost, so no observer ever sees two branches of reality.[64][65] MWI states that there is no special role, or need for precise definition, of measurement in MWI, yet Everett uses the word "measurement" repeatedly throughout his exposition. MWI response: "measurements" are treated as a subclass of interactions, which induce subject–object correlations in the combined wavefunction. There is nothing special about measurements (such as the ability to trigger a wave function collapse) that cannot be dealt with by the usual unitary time development process.[3] This is why there is no precise definition of measurement in Everett's formulation, although some other formulations emphasize that measurements must be effectively irreversible or create classical information. The splitting of worlds forward in time, but not backwards in time (i.e., merging worlds), is time asymmetric and incompatible with the time symmetric nature of Schrödinger's equation, or CPT invariance in general.[66] MWI response: The splitting is time asymmetric; this observed temporal asymmetry is due to the boundary conditions imposed by the Big Bang.[67] There is circularity in Everett's measurement theory.
Under the assumptions made by Everett, there are no 'good observations' as defined by him, and since his analysis of the observational process depends on the latter, it is void of any meaning. The concept of a 'good observation' is the projection postulate in disguise and Everett's analysis simply derives this postulate by having assumed it, without any discussion.[68][unreliable source?] MWI response: Everett's treatment of observations / measurements covers both idealised good measurements and the more general bad or approximate cases.[69] Thus it is legitimate to analyse probability in terms of measurement; no circularity is present. Talk of probability in Everett presumes the existence of a preferred basis to identify measurement outcomes for the probabilities to range over. But the existence of a preferred basis can only be established by the process of decoherence, which is itself probabilistic[40] or arbitrary.[70] MWI response: Everett analysed branching using what we now call the "measurement basis". It is a fundamental theorem of quantum theory that nothing measurable or empirical is changed by adopting a different basis. Everett was therefore free to choose whatever basis he liked. The measurement basis was simply the simplest basis in which to analyse the measurement process.[71][72] We cannot be sure that the universe is a quantum multiverse until we have a theory of everything and, in particular, a successful theory of quantum gravity.[73] If the final theory of everything is non-linear with respect to wavefunctions then many-worlds would be invalid.[1][4][5][6][7] MWI response: All accepted quantum theories of fundamental physics are linear with respect to the wavefunction. While quantum gravity or string theory may be non-linear in this respect, there is no evidence to indicate this at the moment.[17][18] Conservation of energy is grossly violated if at every instant near-infinite amounts of new matter are generated to create the new universes. MWI response: There are two responses to this objection. First, the law of conservation of energy says that energy is conserved within each universe. Hence, even if "new matter" were being generated to create new universes, this would not violate conservation of energy. Second, conservation of energy is not violated since the energy of each branch has to be weighted by its probability, according to the standard formula for the conservation of energy in quantum theory. This results in the total energy of the multiverse being conserved.[74][unreliable source?] Occam's Razor rules against a plethora of unobservable universes – Occam would prefer just one universe; i.e., any non-MWI. MWI response: Occam's razor actually is a constraint on the complexity of physical theory, not on the number of universes. MWI is a simpler theory since it has fewer postulates.[55][unreliable source?] Occam's razor is often cited by MWI adherents as an advantage of MWI. Unphysical universes: If a state is a superposition of two states $\psi_A$ and $\psi_B$, i.e., $\psi = a\psi_A + b\psi_B$, weighted by coefficients $a$ and $b$, then if $b \ll a$, what principle allows a universe with vanishingly small probability $b$ to be instantiated on an equal footing with the much more probable one with probability $a$? This seems to throw away the information in the probability amplitudes.
MWI response: The magnitude of the coefficients provides the weighting that makes the branches or universes "unequal", as Everett and others have shown, leading to the emergence of the conventional probabilistic rules.[1][4][5][6][7][75][unreliable source?] Violation of the principle of locality, which contradicts special relativity: MWI splitting is instant and total; this may conflict with relativity, since an alien in the Andromeda galaxy can't know I collapse an electron over here before she collapses hers there: the relativity of simultaneity says we can't say which electron collapsed first – so which one splits off another universe first? This leads to a hopeless muddle with everyone splitting differently. Note: EPR is not a get-out here, as the alien's and my electrons need never have been part of the same quantum, i.e., entangled. MWI response: the splitting can be regarded as causal, local and relativistic, spreading at, or below, the speed of light (e.g., we are not split by Schrödinger's cat until we look in the box).[76][unreliable source?] For spacelike separated splitting you can't say which occurred first — but this is true of all spacelike separated events; simultaneity is not defined for them. Splitting is no exception; many-worlds is a local theory.[52] There is a wide range of claims that are considered "many-worlds" interpretations. It was often claimed by those who do not believe in MWI[77] that Everett himself was not entirely clear[78] as to what he believed; however, MWI adherents (such as DeWitt, Tegmark, Deutsch and others) believe they fully understand Everett's meaning as implying the literal existence of the other worlds. Additionally, recent biographical sources make it clear that Everett believed in the literal reality of the other quantum worlds.[24] Everett's son reported that Hugh Everett "never wavered in his belief over his many-worlds theory".[79] Also Everett was reported to believe "his many-worlds theory guaranteed him immortality".[80] One of MWI's strongest advocates is David Deutsch.[81] According to Deutsch, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed in this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing,[82] he suggested that the parallelism that results from the validity of MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". Deutsch has also proposed that when reversible computers become conscious, MWI will be testable (at least against "naive" Copenhagenism) via the reversible observation of spin.[64] Asher Peres was an outspoken critic of MWI. For example, a section in his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres not only questioned whether MWI is really an "interpretation", but also whether any interpretations of quantum mechanics are needed at all. An interpretation can be regarded as a purely formal transformation, which adds nothing to the rules of quantum mechanics.[citation needed] Peres seems to suggest[according to whom?] that positing the existence of an infinite number of non-communicating parallel universes is highly suspect per those[who?]
who interpret it as a violation of Occam's razor, i.e., that it does not minimize the number of hypothesized entities. However, it is understood[by whom?] that the number of elementary particles is not a gross violation of Occam's Razor: one counts the types, not the tokens. Max Tegmark remarks[where?] that the alternative to many-worlds is "many words", an allusion to the complexity of von Neumann's collapse postulate. On the other hand, the same derogatory qualification "many words" is often applied to MWI by its critics[who?] who see it as a word game which obfuscates rather than clarifies by confounding the von Neumann branching of possible worlds with the Schrödinger parallelism of many worlds in superposition.[citation needed] MWI is considered by some[who?] to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[64] claim MWI is directly testable. Everett regarded MWI as falsifiable since any test that falsifies conventional quantum theory would also falsify MWI.[23] According to Martin Gardner, the "other" worlds of MWI have two different interpretations: real or unreal; he claims that Stephen Hawking and Steven Weinberg both favour the unreal interpretation.[83] Gardner also claims that the nonreal interpretation is favoured by the majority of physicists, whereas the "realist" view is only supported by MWI experts such as Deutsch and Bryce DeWitt. Hawking has said that "according to Feynman's idea", all the other histories are as "equally real" as our own,[84] and Martin Gardner reports Hawking saying that MWI is "trivially true".[85] In a 1983 interview, Hawking also said he regarded the MWI as "self-evidently correct" but was dismissive towards questions about the interpretation of quantum mechanics, saying, "When I hear of Schrödinger's cat, I reach for my gun." In the same interview, he also said, "But, look: All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities."[86] Elsewhere Hawking contrasted his attitude towards the "reality" of physical theories with that of his colleague Roger Penrose, saying, "He's a Platonist and I'm a positivist. He's worried that Schrödinger's cat is in a quantum state, where it is half alive and half dead. He feels that can't correspond to reality. But that doesn't bother me. I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements. Quantum theory does this very successfully."[87] For his own part, Penrose agrees with Hawking that QM applied to the universe implies MW, although he considers that the current lack of a successful theory of quantum gravity negates the claimed universality of conventional QM.[73] Advocates of MWI often cite a poll of 72 "leading cosmologists and other quantum field theorists"[88] conducted by the American political scientist David Raub in 1995 showing 58% agreement with "Yes, I think MWI is true".[89] The poll is controversial: for example, Victor J.
Stenger remarks that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann is working toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thinks it's fair to say that most physicists dismiss the many-worlds interpretation as too extreme, while noting it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".[90] Max Tegmark also reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop.[91] According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." Such polls have been taken at other conferences, for example, in response to Sean Carroll's observation, "As crazy as it sounds, most working physicists buy into the many-worlds theory".[92] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." However, Nielsen notes that it seemed most attendees found it to be a waste of time: Asher Peres "got a huge and sustained round of applause… when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[93] A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing, University of Waterloo, found "Many Worlds (and decoherence)" to be the least favored.[94] A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[95] the authors remark that the results are similar to Tegmark's 1998 poll. Speculative implications Speculative physics deals with questions which are also discussed in science fiction. Quantum suicide thought experiment Quantum suicide, as a thought experiment, was published independently by Hans Moravec in 1987[96][97] and Bruno Marchal in 1988[98][99] and was independently developed further by Max Tegmark in 1998.[100] It attempts to distinguish between the Copenhagen interpretation of quantum mechanics and the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide regardless of the odds.[101] Weak coupling Another speculation is that the separate worlds remain weakly coupled (e.g., by gravity) permitting "communication between parallel universes". A possible test of this using quantum-optical equipment is described in a 1997 Foundations of Physics article by Rainer Plaga.[59] It involves an isolated ion in an ion trap, a quantum measurement that would yield two parallel worlds (their difference just being in the detection of a single photon), and the excitation of the ion from only one of these worlds. If the excited ion can be detected from the other parallel universe, then this would constitute direct evidence in support of the many-worlds interpretation and would automatically exclude the orthodox, "logical", and "many-histories" interpretations.
The ion is isolated so that it does not immediately participate in the decoherence which insulates the parallel world branches, allowing it to act as a gateway between the two worlds; if the measuring apparatus could perform the measurements quickly enough before the gateway ion is decoupled, the test would succeed (with electronic computers the necessary time window between the two worlds would be on a time scale of milliseconds or nanoseconds, and if the measurements are taken by humans then a few seconds would still be enough). R. Plaga shows that macroscopic decoherence timescales are a possibility. The proposed test is based on technical equipment described in a 1993 Physical Review article by Itano et al.,[102] and R. Plaga says that this level of technology is enough to realize the proposed inter-world communication experiment. The necessary technology for precision measurements of single ions has existed since the 1970s, and the ion recommended for excitation is 199Hg+. The excitation methodology is described by Itano et al. and the time needed for it is given by the Rabi flopping formula.[103] Such a test as described by R. Plaga would mean that energy transfer is possible between parallel worlds. This does not violate the fundamental principles of physics because these require energy conservation only for the whole universe and not for the single parallel branches.[59] Nor does the excitation of the single ion (which is a degree of freedom of the proposed system) lead to decoherence, something which is proven by Welcher-Weg detectors, which can excite atoms without the momentum transfer that causes the loss of coherence.[104] The proposed test would allow for low-bandwidth inter-world communication, the limiting factors of bandwidth and time being dependent on the technology of the equipment. Because of the time needed to determine the state of the partially decohered isolated excited ion based on Itano et al.'s methodology, the ion would decohere by the time its state is determined during the experiment, so Plaga's proposal would pass just enough information between the two worlds to confirm their parallel existence and nothing more. The author contemplates that with increased bandwidth, one could even transfer television imagery across the parallel worlds.[59] For example, Itano et al.'s methodology could be improved (by lowering the time needed for state determination of the excited ion) if a more efficient process were found for the detection of fluorescence radiation using 194 nm photons.[59] A 1991 article by J. Polchinski also supports the view that inter-world communication is a theoretical possibility.[105] Other authors in a 1994 preprint article also contemplated similar ideas.[106] The reason inter-world communication seems like a possibility is that decoherence, which separates the parallel worlds, is never fully complete,[107][108] therefore weak influences from one parallel world to another can still pass between them,[107][109] and these should be measurable with advanced technology. Deutsch proposed such an experiment in a 1985 International Journal of Theoretical Physics article,[110] but the technology it requires involves human-level artificial intelligence.[59] Similarity to modal realism The many-worlds interpretation has some similarity to modal realism in philosophy, which is the view that the possible worlds used to interpret modal claims exist and are of a kind with the actual world.
Unlike the possible worlds of philosophy, however, in quantum mechanics counterfactual alternatives can influence the results of experiments, as in the Elitzur–Vaidman bomb-testing problem or the Quantum Zeno effect. Also, while the worlds of the many-worlds interpretation all share the same physical laws, modal realism postulates a world for every way things could conceivably have been. Time travel The many-worlds interpretation could be one possible way to resolve the paradoxes[81] that one would expect to arise if time travel turns out to be permitted by physics (permitting closed timelike curves and thus violating causality). Entering the past would itself be a quantum event causing branching, and therefore the timeline accessed by the time traveller simply would be another timeline of many. In that sense, it would make the Novikov self-consistency principle unnecessary. Many-worlds in literature and science fiction A map from Robert Sobel's novel For Want of a Nail, an artistic illustration of how small events – in this example the branching or point of divergence from our timeline's history is in October 1777 – can profoundly alter the course of history. According to the many-worlds interpretation every event, even microscopic, is a branch point; all possible alternative histories actually exist.[1] The many-worlds interpretation (and the somewhat related concept of possible worlds) has been associated with numerous themes in literature, art and science fiction. Some of these stories or films violate fundamental principles of causality and relativity, since the information-theoretic structure of the path space of multiple universes (that is, information flow between different paths) is very likely complex. Another kind of popular illustration of many-worlds splittings, which does not involve information flow between paths, or information flow backwards in time, considers alternate outcomes of historical events. According to the many-worlds interpretation, all of the historical speculations entertained within the alternate history genre are realized in parallel universes.[1] The many-worlds interpretation of reality was anticipated with remarkable fidelity in Olaf Stapledon's 1937 science fiction novel Star Maker, in a paragraph describing one of the many universes created by the Star Maker god of the title. "In one inconceivably complex cosmos, whenever a creature was faced with several possible courses of action, it took them all, thereby creating many distinct temporal dimensions and distinct histories of the cosmos. Since in every evolutionary sequence of the cosmos there were very many creatures, and each was constantly faced with many possible courses, and the combinations of all their courses were innumerable, an infinity of distinct universes exfoliated from every moment of every temporal sequence in this cosmos." References 1. ^ a b c d e f g Bryce Seligman DeWitt, Quantum Mechanics and Reality: Could the solution to the dilemma of indeterminism be a universe in which all possible outcomes of an experiment actually occur?, Physics Today, 23(9) pp 30–40 (September 1970) "every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself." See also Physics Today, letters followup, 24(4), (April 1971), pp 38–44 2. ^ a b Osnaghi, Stefano; Freitas, Fabio; Olival Freire, Jr (2009). "The Origin of the Everettian Heresy" (PDF).
Studies in History and Philosophy of Modern Physics. 40: 97–123. doi:10.1016/j.shpsb.2008.10.002. 3. ^ a b Hugh Everett Theory of the Universal Wavefunction, Thesis, Princeton University, (1956, 1973), pp 1–140 5. ^ a b c Cecile M. DeWitt, John A. Wheeler eds, The Everett–Wheeler Interpretation of Quantum Mechanics, Battelle Rencontres: 1967 Lectures in Mathematics and Physics (1968) 6. ^ a b c Bryce Seligman DeWitt, The Many-Universes Interpretation of Quantum Mechanics, Proceedings of the International School of Physics "Enrico Fermi" Course IL: Foundations of Quantum Mechanics, Academic Press (1972) 7. ^ a b c d e f g h Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X. Contains Everett's thesis: The Theory of the Universal Wavefunction, pp 3–140. 8. ^ H. Dieter Zeh, On the Interpretation of Measurement in Quantum Theory, Foundation of Physics, vol. 1, pp. 69–76, (1970). 9. ^ Wojciech Hubert Zurek, Decoherence and the transition from quantum to classical, Physics Today, vol. 44, issue 10, pp. 36–44, (1991). 10. ^ Wojciech Hubert Zurek, Decoherence, einselection, and the quantum origins of the classical, Reviews of Modern Physics, 75, pp 715–775, (2003) 11. ^ The Many Worlds Interpretation of Quantum Mechanics 12. ^ David Deutsch argues that a great deal of fiction is close to a fact somewhere in the so called multiverse, Beginning of Infinity, p. 294 13. ^ Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X. Contains Everett's thesis: The Theory of the Universal Wavefunction, where the claim to resolve all paradoxes is made on pp 118, 149. 14. ^ Hugh Everett, Relative State Formulation of Quantum Mechanics, Reviews of Modern Physics vol 29, (July 1957) pp 454–462. The claim to resolve EPR is made on page 462 15. ^ David Deutsch. The Beginning of Infinity. Page 310. 16. ^ 17. ^ a b Steven Weinberg, Dreams of a Final Theory: The Search for the Fundamental Laws of Nature (1993), ISBN 0-09-922391-0, pg 68–69 18. ^ a b Steven Weinberg Testing Quantum Mechanics, Annals of Physics Vol 194 #2 (1989), pg 336–386 19. ^ John Archibald Wheeler, Geons, Black Holes & Quantum Foam, ISBN 0-393-31991-1. pp 268–270 20. ^ a b Everett 1957, section 3, 2nd paragraph, 1st sentence 21. ^ a b Everett [1956]1973, "Theory of the Universal Wavefunction", chapter 6 (e) 22. ^ Zurek, Wojciech (March 2009). "Quantum Darwinism". Nature Physics. 5 (3): 181–188. Bibcode:2009NatPh...5..181Z. arXiv:0903.5082. doi:10.1038/nphys1202. 23. ^ a b Everett 24. ^ a b Peter Byrne, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family, ISBN 978-0-19-955227-6 25. ^ "Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed." Albert Einstein to Werner Heisenberg, objecting to placing observables at the heart of the new quantum mechanics, during Heisenberg's 1926 lecture at Berlin; related by Heisenberg in 1968, quoted by Abdus Salam, Unification of Fundamental Forces, Cambridge University Press (1990) ISBN 0-521-37140-6, pp 98–101 26. ^ N.P.
Landsman, "The conclusion seems to be that no generally accepted derivation of the Born rule has been given to date, but this does not imply that such a derivation is impossible in principle.", in Compendium of Quantum Physics (eds.) F.Weinert, K. Hentschel, D.Greenberger and B. Falkenburg (Springer, 2008), ISBN 3-540-70622-4 27. ^ Adrian Kent (May 5, 2009), One world versus many: the inadequacy of Everettian accounts of evolution, probability, and scientific confirmation 28. ^ Kent, Adrian (1990). "Against Many-Worlds Interpretations". Int. J. Mod. Phys A. 5: 1745–1762. Bibcode:1990IJMPA...5.1745K. arXiv:gr-qc/9703089Freely accessible. doi:10.1142/S0217751X90000805.  29. ^ James Hartle, Quantum Mechanics of Individual Systems, American Journal of Physics, 1968, vol 36 (#8), pp. 704–712 30. ^ E. Farhi, J. Goldstone & S. Gutmann. How probability arises in quantum mechanics., Ann. Phys. (N.Y.) 192, 368–382 (1989). 31. ^ Pitowsky, I. (2005). "Quantum mechanics as a theory of probability". arXiv:quant-ph/0510095Freely accessible.  32. ^ Gleason, A. M. (1957). "Measures on the closed subspaces of a Hilbert space". Journal of Mathematics and Mechanics. 6: 885–893. MR 0096113. doi:10.1512/iumj.1957.6.56050.  33. ^ Deutsch, D. (1999). Quantum Theory of Probability and Decisions. Proceedings of the Royal Society of London A455, 3129–3137. [1]. 34. ^ David Wallace: Quantum Probability and Decision Theory, Revisited 35. ^ David Wallace. Everettian Rationality: defending Deutsch's approach to probability in the Everett interpretation. Stud. Hist. Phil. Mod. Phys. 34 (2003), 415–438. 36. ^ David Wallace (2003), Quantum Probability from Subjective Likelihood: improving on Deutsch's proof of the probability rule 37. ^ David Wallace, 2009,A formal proof of the Born rule from decision-theoretic assumptions 38. ^ Simon Saunders: Derivation of the Born rule from operational assumptions. Proc. Roy. Soc. Lond. A460, 1771–1788 (2004). 39. ^ Simon Saunders, 2004: What is Probability? 40. ^ a b David J Baker, Measurement Outcomes and Probability in Everettian Quantum Mechanics, Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, Volume 38, Issue 1, March 2007, Pages 153–169 41. ^ H. Barnum, C. M. Caves, J. Finkelstein, C. A. Fuchs, R. Schack: Quantum Probability from Decision Theory? Proc. Roy. Soc. Lond. A456, 1175–1182 (2000). 42. ^ a b Merali, Zeeya (2007-09-21). "Parallel universes make quantum sense". New Scientist (2622). Retrieved 2013-11-22.  (Summary only). 43. ^ Perimeter Institute, Seminar overview, Probability in the Everett interpretation: state of play, David Wallace – Oxford University, 21 Sept 2007 44. ^ Perimeter Institute, Many worlds at 50 conference, September 21–24, 2007 Archived 2007-10-20 at the Wayback Machine. 45. ^ Armando V.D.B. Assis (2011). "Assis, Armando V.D.B. On the nature of and the emergence of the Born rule. Annalen der Physik, 2011.". Annalen der Physik (Berlin). 523: 883–897. Bibcode:2011AnP...523..883A. arXiv:1009.1532Freely accessible. doi:10.1002/andp.201100062.  46. ^ Wojciech H. Zurek: Probabilities from entanglement, Born's rule from envariance, Phys. Rev. A71, 052105 (2005). 47. ^ Schlosshauer, M.; Fine, A. (2005). "On Zurek's derivation of the Born rule". Found. Phys. 35: 197–213. Bibcode:2005FoPh...35..197S. arXiv:quant-ph/0312058Freely accessible. doi:10.1007/s10701-004-1941-6.  48. 
^ Lutz Polley, Position eigenstates and the statistical axiom of quantum mechanics, contribution to conference Foundations of Probability and Physics, Vaxjo, Nov 27 – Dec 1, 2000 49. ^ Lutz Polley, Quantum-mechanical probability from the symmetries of two-state systems 50. ^ Vaidman, L. "Probability in the Many-Worlds Interpretation of Quantum Mechanics." In: Ben-Menahem, Y., & Hemmo, M. (eds), The Probable and the Improbable: Understanding Probability in Physics, Essays in Memory of Itamar Pitowsky. Springer. 51. ^ Sebens, C.T. and Carroll, S.M., Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics. 52. ^ a b Mark A. Rubin, Locality in the Everett Interpretation of Heisenberg-Picture Quantum Mechanics, Foundations of Physics Letters, 14, (2001), pp. 301–322, arXiv:quant-ph/0103079 53. ^ Paul C.W. Davies, Other Worlds, chapters 8 & 9 The Anthropic Principle & Is the Universe an accident?, (1980) ISBN 0-460-04400-1 54. ^ Paul C.W. Davies, The Accidental Universe, (1982) ISBN 0-521-28692-1 55. ^ a b Everett FAQ "Does many-worlds violate Ockham's Razor?" 56. ^ a b Vaidman, Lev. "Many-Worlds Interpretation of Quantum Mechanics". The Stanford Encyclopedia of Philosophy. 57. ^ Deutsch, D., (1986) 'Three experimental implications of the Everett interpretation', in R. Penrose and C.J. Isham (eds.), Quantum Concepts of Space and Time, Oxford: The Clarendon Press, pp. 204–214. 58. ^ Page, D., (2000) 'Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?' 59. ^ a b c d e f g Plaga, R. (1997). "On a possibility to find experimental evidence for the many-worlds interpretation of quantum mechanics". Foundations of Physics. 27: 559–577. Bibcode:1997FoPh...27..559P. arXiv:quant-ph/9510007. doi:10.1007/BF02550677. 60. ^ Page, Don N. (2000). "Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?". arXiv:gr-qc/0001001. doi:10.1063/1.1301589. 62. ^ Penrose, R. The Road to Reality, §21.11 63. ^ Tegmark, Max The Interpretation of Quantum Mechanics: Many Worlds or Many Words?, 1998. To quote: "What Everett does NOT postulate: "At certain magic instances, the world undergoes some sort of metaphysical 'split' into two branches that subsequently never interact." This is not only a misrepresentation of the MWI, but also inconsistent with the Everett postulate, since the subsequent time evolution could in principle make the two terms...interfere. According to the MWI, there is, was and always will be only one wavefunction, and only decoherence calculations, not postulates, can tell us when it is a good approximation to treat two terms as non-interacting." 64. ^ a b c Paul C.W. Davies, J.R. Brown, The Ghost in the Atom (1986) ISBN 0-521-31316-3, pp. 34–38: "The Many-Universes Interpretation", pp 83–105 for David Deutsch's test of MWI and reversible quantum memories 65. ^ Christoph Simon, 2009, Conscious observers clarify many worlds 66. ^ Joseph Gerver, The past as backward movies of the future, Physics Today, letters followup, 24(4), (April 1971), pp 46–7 67. ^ Bryce Seligman DeWitt, Physics Today, letters followup, 24(4), (April 1971), pp 43 68. ^ Arnold Neumaier's comments on the Everett FAQ, 1999 & 2003 69. ^ Everett [1956] 1973, "Theory of the Universal Wavefunction", chapter V, section 4 "Approximate Measurements", pp. 100–103 (e) 70. ^ Stapp, Henry (2002). "The basis problem in many-world theories" (PDF). Canadian Journal of Physics. 80: 1043–1052. Bibcode:2002CaJPh..80.1043S.
arXiv:quant-ph/0110148. doi:10.1139/p02-068. 71. ^ Brown, Harvey R; Wallace, David (2005). "Solving the measurement problem: de Broglie–Bohm loses out to Everett" (PDF). Foundations of Physics. 35: 517–540. Bibcode:2005FoPh...35..517B. arXiv:quant-ph/0403094. doi:10.1007/s10701-004-2009-3. 72. ^ Mark A Rubin (2005), There Is No Basis Ambiguity in Everett Quantum Mechanics, Foundations of Physics Letters, Volume 17, Number 4 / August, 2004, pp 323–341 73. ^ a b Penrose, Roger (August 1991). "Roger Penrose Looks Beyond the Classic-Quantum Dichotomy". Sciencewatch. Archived from the original on 2007-10-23. Retrieved 2007-10-21. 74. ^ Everett FAQ "Does many-worlds violate conservation of energy?" 75. ^ Everett FAQ "How do probabilities emerge within many-worlds?" 76. ^ Everett FAQ "When does Schrodinger's cat split?" 77. ^ Jeffrey A. Barrett, The Quantum Mechanics of Minds and Worlds, Oxford University Press, 1999. According to Barrett (loc. cit. Chapter 6) "There are many many-worlds interpretations." 78. ^ Barrett, Jeffrey A. (2010). Zalta, Edward N., ed. "Everett's Relative-State Formulation of Quantum Mechanics" (Fall 2010 ed.). The Stanford Encyclopedia of Philosophy. Again, according to Barrett "It is... unclear precisely how this was supposed to work." 79. ^ Aldhous, Peter (2007-11-24). "Parallel lives can never touch". New Scientist (2631). Retrieved 2007-11-21. 80. ^ Eugene Shikhovtsev's Biography of Everett, in particular see "Keith Lynch remembers 1979–1980" 81. ^ a b David Deutsch, The Fabric of Reality: The Science of Parallel Universes And Its Implications, Penguin Books (1998), ISBN 0-14-027541-X 82. ^ Deutsch, David (1985). "Quantum theory, the Church–Turing principle and the universal quantum computer". Proceedings of the Royal Society of London A. 400: 97–117. Bibcode:1985RSPSA.400...97D. doi:10.1098/rspa.1985.0070. 83. ^ A response to Bryce DeWitt, Martin Gardner, May 2002 84. ^ Award winning 1995 Channel 4 documentary "Reality on the rocks: Beyond our Ken" "Archived copy". Archived from the original on 2007-10-22. Retrieved 2007-10-20. where, in response to Ken Campbell's question "all these trillions of Universes of the Multiverse, are they as real as this one seems to be to me?" Hawking states, "Yes.... According to Feynman's idea, every possible history (of Ken) is equally real." 85. ^ Gardner, Martin (2003). Are universes thicker than blackberries?. W.W. Norton. p. 10. ISBN 978-0-393-05742-3. 86. ^ Ferris, Timothy (1997). The Whole Shebang. Simon & Schuster. pp. 345. ISBN 978-0-684-81020-1. 87. ^ Hawking, Stephen; Roger Penrose (1996). The Nature of Space and Time. Princeton University Press. pp. 121. ISBN 978-0-691-03791-2. 88. ^ Elvridge, Jim (2008-01-02). The Universe – Solved!. pp. 35–36. ISBN 978-1-4243-3626-5. OCLC 247614399. 58% believed that the Many Worlds Interpretation (MWI) was true, including Stephen Hawking and Nobel Laureates Murray Gell-Mann and Richard Feynman 89. ^ Bruce, Alexandra. "How does reality work?". Beyond the bleep: the definitive unauthorized guide to What the bleep do we know!?. p. 33. ISBN 978-1-932857-22-1. [the poll was] published in the French periodical Sciences et Avenir in January 1998 90. ^ Stenger, V.J. (1995). The Unconscious Quantum: Metaphysics in Modern Physics and Cosmology. Prometheus Books. p. 176. ISBN 978-1-57392-022-3. LCCN lc95032599.
Gell-Mann and collaborator James Hartle, along with a score of others, have been working to develop a more palatable interpretation of quantum mechanics that is free of the problems that plague all the interpretations we have considered so far. This new interpretation is called, in its various incarnations, post-Everett quantum mechanics, alternate histories, consistent histories, or decoherent histories. I will not be overly concerned with the detailed differences between these characterizations and will use the terms more or less interchangeably. 91. ^ Max Tegmark on many-worlds (contains MWI poll) 92. ^ Carroll, Sean (1 April 2004). "Preposterous Universe". Archived from the original on 8 September 2004. 93. ^ Nielsen, Michael (3 April 2004). "Michael Nielsen: The Interpretation of Quantum Mechanics". Archived from the original on 20 May 2004. 94. ^ Interpretation of Quantum Mechanics class survey. Archived 2010-11-04 at the Wayback Machine. 95. ^ "A Snapshot of Foundational Attitudes Toward Quantum Mechanics", Schlosshauer et al 2013 96. ^ "The Many Minds Approach". 25 October 2010. Retrieved 7 December 2010. This idea was first proposed by Austrian mathematician Hans Moravec in 1987... 97. ^ Moravec, Hans (1988). "The Doomsday Device". Mind Children: The Future of Robot and Human Intelligence. Harvard: Harvard University Press. p. 188. ISBN 978-0-674-57618-6. (If MWI is true, apocalyptic particle accelerators won't function as advertised). 98. ^ Marchal, Bruno (1988). "Informatique théorique et philosophie de l'esprit" [Theoretical Computer Science and Philosophy of Mind]. Acte du 3ème colloque international Cognition et Connaissance [Proceedings of the 3rd International Conference Cognition and Knowledge]. Toulouse: 193–227. 99. ^ Marchal, Bruno (1991). De Glas, M.; Gabbay, D., eds. "Mechanism and personal identity" (PDF). Proceedings of WOCFAI 91. Paris. Angkor.: 335–345. 101. ^ Tegmark, Max (November 1998). "Quantum immortality". Retrieved 25 October 2010. 102. ^ W.M. Itano et al., Phys. Rev. A 47, 3354 (1993). 103. ^ M. Sargent III, M.O. Scully and W.E. Lamb, Laser Physics (Addison-Wesley, Reading, 1974), p. 27. 104. ^ M.O. Scully and H. Walther, Phys. Rev. A 39, 5229 (1989). 105. ^ J. Polchinski, Phys. Rev. Lett. 66, 397 (1991). 106. ^ M. Gell-Mann and J.B. Hartle, Equivalent Sets of Histories and Multiple Quasiclassical Domains, preprint University of California at Santa Barbara UCSBTH-94-09 (1994). 107. ^ a b H.D. Zeh, Found. Phys. 3, 109 (1973). 108. ^ H.D. Zeh, Phys. Lett. A 172, 189 (1993). 109. ^ A. Albrecht, Phys. Rev. D 48, 3768 (1993). 110. ^ D. Deutsch, Int. J. Theor. Phys. 24, 1 (1985).
Monday, June 15, 2015 A brief introduction to basis sets In order to compute the energy we need to define mathematical functions for the orbitals. In the case of atoms we can simply use the solutions to the Schrödinger equation for the $\ce{H}$ atom as a starting point and find the best exponent for each function using the variational principle. But what functions should we use for molecular orbitals (MOs)? The wave function of the $\ce{H2+}$ molecule provides a clue (Figure 1). Figure 1. Schematic representation of the wave function of the $\ce{H2+}$ molecule. It looks a bit like the sum of two 1$s$ functions centered at each nucleus (A and B), $${\Psi ^{{\text{H}}_{\text{2}}^ + }} \approx \tfrac{1}{{\sqrt 2 }}\left( {\Psi _{1s}^{{{\text{H}}_{\text{A}}}} + \Psi _{1s}^{{{\text{H}}_{\text{B}}}}} \right)$$ Thus, one way of constructing MOs is as a linear combination of atomic orbitals (the LCAO approximation), $${\phi _i}(1) = \sum\limits_{\mu  = 1}^K {{C_{\mu i}}{\chi _\mu }(1)} $$ an approximation that becomes better and better as $K$ increases. Here $\chi _\mu$ is a mathematical function that looks like an AO, and is called a basis function (a collection of basis functions for various atoms is called a basis set), and $C_{\mu i}$ is a number (sometimes called an MO coefficient) that indicates how much basis function $\chi _\mu$ contributes to MO $i$, and is determined for each system via the variational principle. Note that every MO is expressed in terms of all basis functions, and therefore extends over the entire molecule. If we want to calculate the RHF energy of water, the basis set for the two $\ce{H}$ atoms would simply be the lowest energy solution to the Schrödinger equation for the $\ce{H}$ atom $${\chi _{{{\text{H}}_{\text{A}}}}}(1) = \Psi _{1s}^{\text{H}}({r_{1A}}) = \frac{1}{{\sqrt \pi  }}{e^{ - \left| {{{\bf{r}}_1} - {{\bf{R}}_A}} \right|}}$$ For the O atom, the basis set is the AOs obtained from, say, an ROHF calculation on $\ce{O}$, i.e. $1s$, $2s$, $2p_x$, $2p_y$, and $2p_z$ functions from the solutions to the Schrödinger equation for the $\ce{H}$ atom, where the exponents ($\alpha_i$'s) have been variationally optimized for the $\ce{O}$ atom, $$\Psi _{1s}^{\text{H}},\;\Psi _{2s}^{\text{H}},\;\Psi _{2p}^{\text{H}} \xrightarrow{\frac{\partial E}{\partial \alpha_i}=0} \phi _{1s}^{\text{O}},\;\phi _{2s}^{\text{O}},\;\phi _{2p}^{\text{O}} \equiv \;\left\{ {{\chi _O}} \right\}$$ Notice that this only has to be done once, i.e. we will use this oxygen basis set for all oxygen-containing molecules. We then provide a guess at the water structure and the basis functions are placed at the coordinates of the respective atoms. Then we find the best MO coefficients by variational minimization, $$\frac{{\partial E}}{{\partial {C_{\mu i}}}} = 0 $$ for all $i$ and $\mu$. Thus, for water we need a total of seven basis functions to describe the five doubly occupied water MOs ($K = 7$ and $i = 1$–$5$ in the LCAO expansion above). This is an example of a minimal basis set, since it is the minimum number of basis functions per atom that makes chemical sense. One problem with the LCAO approximation is the number of 2-electron integrals it leads to, and the associated computational cost. Let's look at the part of the energy that comes from the Coulomb integrals $$\begin{split} \sum\limits_{i = 1}^{N/2} \sum\limits_{j = 1}^{N/2} 2J_{ij} &= \sum\limits_{i = 1}^{N/2} \sum\limits_{j = 1}^{N/2} 2\left\langle \phi_i(1)\phi_j(2) \left| \frac{1}{r_{12}} \right| \phi_i(1)\phi_j(2) \right\rangle \\ &= \sum\limits_i^{N/2} \sum\limits_j^{N/2} 2\left\langle \phi_i \phi_i \middle| \phi_j \phi_j \right\rangle \\ &= \sum\limits_\mu^K \sum\limits_\nu^K \sum\limits_\lambda^K \sum\limits_\sigma^K \sum\limits_i^{N/2} \sum\limits_j^{N/2} 2 C_{\mu i} C_{\nu i} C_{\lambda j} C_{\sigma j} \left\langle \chi_\mu \chi_\nu \middle| \chi_\lambda \chi_\sigma \right\rangle \\ &= \sum\limits_\mu^K \sum\limits_\nu^K \sum\limits_\lambda^K \sum\limits_\sigma^K \tfrac{1}{2} P_{\mu\nu} P_{\lambda\sigma} \left\langle \chi_\mu \chi_\nu \middle| \chi_\lambda \chi_\sigma \right\rangle \end{split}$$ where $P_{\mu\nu} = 2\sum_i^{N/2} C_{\mu i} C_{\nu i}$ is a density matrix element. We have roughly $(N/2)^2$ Coulomb integrals involving molecular orbitals but roughly $\tfrac{1}{8}K^4$ Coulomb integrals involving basis functions (the factor of 1/8 comes from the fact that some of the integrals are identical and need only be computed once). Using a minimal basis set, $K = 80$ for a small organic molecule like caffeine $\ce{(C8H10N4O2)}$, and this results in ca 5,000,000 2-electron integrals involving basis functions! (That being said, you can perform an RHF energy calculation with 80 basis functions on your desktop computer in a few minutes. The problem with $K^4$-scaling is that a corresponding calculation with 800 basis functions would take a few days on the same machine. So you can forget about optimizing the structure on that machine.) This is one of the reasons why modern computational quantum chemistry requires massive computers. This is also the reason why the basis set size is a key consideration in a quantum chemistry project. The 2-electron integrals also pose another problem: the basis functions defined so far are exponential functions (also known as Slater type orbitals or STOs). 2-electron integrals involving STOs placed on four different atoms do not have analytic solutions. As a result, most quantum chemistry programs use Gaussian type orbitals (or simply Gaussians) instead of STOs, because the 2-electron integrals involving Gaussians have analytic solutions. Obviously, $${e^{ - \alpha {r_{1A}}}} \approx {e^{ - \beta r_{1A}^2}}$$ is a poor approximation, so a linear combination of Gaussians is used to model each STO basis function (Figure 2) $${e^{ - \alpha {r_{1A}}}} \approx \sum\limits_i^X {{a_{i\mu }}{e^{ - {\beta _i}r_{1A}^2}}}  = {\chi _\mu }$$ Figure 2. (a) An exponential function is not well represented by one Gaussian, but (b) can be well represented by a linear combination of three Gaussians. Here the $a_{i\mu}$ parameters (or contraction coefficients) as well as the Gaussian exponents are determined just once for a given STO basis function. $\chi_\mu$ is a contracted basis function and the $X$ individual Gaussian functions are called primitives. Generally, three primitives are sufficient to represent an STO, and this basis set is known as the STO-3G basis set. $p$- and $d$-type STOs are expanded in terms of $p$- and $d$-type primitive Gaussians [e.g. $({x_1} - {x_A}){e^{ - \beta r_{1A}^2}}$ and $({x_1} - {x_A})({y_1} - {y_A}){e^{ - \beta r_{1A}^2}}$].
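As an aside, the STO-3G idea is easy to check numerically. The short Python snippet below is just an illustration: it fits three Gaussian primitives to a 1$s$-type STO (exponent $\alpha = 1$) by plain least squares on a radial grid. The grid, the starting guess, and the fitting criterion are my own arbitrary choices; published STO-3G parameters are optimized with a 3D overlap criterion, so the fitted exponents will not match them exactly.

import numpy as np
from scipy.optimize import curve_fit

def sto(r):
    # 1s Slater-type orbital with exponent alpha = 1 (radial part, unnormalized)
    return np.exp(-r)

def sto_3g(r, a1, b1, a2, b2, a3, b3):
    # Contracted basis function: linear combination of three s-type Gaussian primitives
    return (a1 * np.exp(-b1 * r**2) +
            a2 * np.exp(-b2 * r**2) +
            a3 * np.exp(-b3 * r**2))

r = np.linspace(0.0, 6.0, 400)
guess = [0.4, 0.2, 0.4, 1.0, 0.2, 5.0]            # rough starting coefficients/exponents
params, _ = curve_fit(sto_3g, r, sto(r), p0=guess, maxfev=20000)
resid = np.max(np.abs(sto_3g(r, *params) - sto(r)))
print("(coefficient, exponent) pairs:")
for a, b in zip(params[0::2], params[1::2]):
    print(f"  a = {a:8.4f}   beta = {b:8.4f}")
print(f"max deviation on grid: {resid:.2e}")

Repeating the fit with a single Gaussian gives a maximum deviation roughly an order of magnitude larger, which is the point of Figure 2: the cusp at $r = 0$ and the exponential tail both require a mix of tight and diffuse primitives.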
An RHF calculation using the STO-3G basis set is denoted RHF/STO-3G. Unless otherwise noted, this usually also implies that the geometry is computed (i.e. the minimum energy structure is found) at this level of theory. Minimal basis sets are usually not sufficiently accurate to model reaction energies. This is due to the fact that the atomic basis functions cannot change size to adjust to their bonding environment. However, this can be made possible by using some of the contraction coefficients as variational parameters. This will increase the basis set size (and hence the computational cost) so this must be done judiciously. For example, we'll get the most improvement by worrying about the basis functions that describe the valence electrons that participate most in bonding. Thus, for the $\ce{O}$ atom we leave the 1$s$ core basis function alone, but "split" the valence 2$s$ basis function into linear combinations of two and one Gaussians respectively, $$\begin{split} {\chi _{1s}} &= \sum\limits_i^3 {{a_{i1s}}{e^{ - {\beta _i}r_{1A}^2}}}  \\ {\chi _{2{s_a}}} &= \sum\limits_i^2 {{a_{i2s}}{e^{ - {\beta _i}r_{1A}^2}}}  \\ {\chi _{2{s_b}}} &= {e^{ - {\beta _{2{s_b}}}r_{1A}^2}} \\ \end{split}$$ and similarly for the 2$p$ basis functions. This is known as the 3-21G basis set (pronounced "three-two-one g" not "three-twenty one g"), which denotes that core basis functions are described by 3 contracted Gaussians, while the valence basis functions are split into two basis functions, described by 2 and 1 Gaussian each. Thus, using the 3-21G basis set to describe water requires 13 basis functions: two basis functions on each $\ce{H}$ atom (1$s$ is the valence basis function of the H atom) and 9 basis functions on the $\ce{O}$ atom (one 1$s$ function and two each of 2$s$, 2$p_x$, 2$p_y$, and 2$p_z$). The $\chi _{2{s_a}}$ basis function is smaller (i.e., the Gaussians have a larger exponent) than the $\chi _{2{s_b}}$ basis function. Thus, one can make a function of any intermediate size by (variationally) mixing these two functions (Figure 3). The 3-21G is an example of a split valence or double zeta basis set (zeta, ζ, is often used as the symbol for the exponent, but I find it hard to write and don't use it in my lectures). Similarly, one can make other double zeta basis sets such as 6-31G, or triple zeta basis sets such as 6-311G. Figure 3. Sketch of two different sized $s$-type basis functions that can be used to make a basis function of intermediate size. As the number of basis functions ($K$ in the LCAO expansion) increases, the error associated with the LCAO approximation should decrease and the energy should converge to what is called the Hartree-Fock limit ($E_{\text{HF}}$), which is higher than the exact energy ($E_{\text{exact}}$) (Figure 4). The difference is known as the correlation energy, and is the error introduced by the orbital approximation. Figure 4. Plot of the energy as a function of the number of basis functions. However, in the case of a one-electron molecule like $\ce{H2+}$ we would expect the energy to converge to $E_{\text{exact}}$ since there is no orbital approximation. Yet if we try this with the basis sets discussed thus far we find that this is not the case (Figure 5)! Figure 5. Plot of the energy of $\ce{H2+}$ computed using increasingly larger basis sets. What's going on? Again we get a clue by comparing the exact wave function to the LCAO wave function (Figure 6). Figure 6. Comparison of the exact wave function and one computed using the 6-311G basis set.
We find that compared to the exact result there is not "enough wave function" between the nuclei and too much at either end.  As we increase the basis set we only add $s$-type basis functions (of varying size) to the basis set.  Since they are spherical, they cannot be used to shift electron density from one side of the $\ce{H}$ atom to the other.  However, $p$-functions are perfect for this (Figure 7).

Figure 7. Sketch of the polarization of an $s$ basis function by a $p$ basis function.

So basis set convergence is not a matter of simply increasing the number of basis functions; it is also important to have the right mix of basis function types.  Similarly, $d$-functions can be used to "bend" $p$-functions (Figure 8).

Figure 8. Sketch of the polarization of a $p$ basis function by a $d$ basis function.

Such functions are known as polarization functions, and are denoted with the following notation. For example, 6-31G(d) denotes $d$ polarization functions on all non-$\ce{H}$ atoms and can also be written as 6-31G*.  6-31G(d,p) is a 6-31G(d) basis set where $p$-functions have been added on all $\ce{H}$ atoms, and can also be written 6-31G**.  An RHF/6-31G(d,p) calculation on water involves 24 basis functions: 13 basis functions for the 6-31G part (just like for 3-21G) plus 3 $p$-type polarization functions on each $\ce{H}$ atom and 5 $d$-type polarization functions on the $\ce{O}$ atom (some programs use 6 Cartesian $d$-functions instead of the usual 5).

Anions tend to have very diffuse electron distributions, and very large basis functions (with very small exponents) are often needed for accurate results.  These diffuse functions are denoted with "+" signs: e.g. 6-31+G denotes one $s$-type and three $p$-type diffuse Gaussians on each non-$\ce{H}$ atom, and 6-31++G denotes the addition of a single diffuse $s$-type Gaussian on each $\ce{H}$ atom. Diffuse functions also tend to improve the accuracy of calculations on van der Waals complexes and other structures where the accurate representation of the outer part of the electron distribution is important.

Of course there are many other basis sets available, but in general they have the same kinds of attributes as described already.  For example, aug-cc-pVTZ is a more modern basis set: "aug" stands for "augmented", meaning "augmented with diffuse functions"; "pVTZ" means "polarized valence triple zeta", i.e. it is of roughly the same quality as 6-311++G(d,p).  "cc" stands for "correlation consistent", meaning the parameters were optimized for correlated wave functions (like MP2, see below) rather than HF wave functions like the Pople basis sets [such as 6-31G(d)] described thus far.
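To make the size/cost trade-off concrete, here is a small sketch (again mine, assuming PySCF; basis names as PySCF spells them) that prints the basis-set size and RHF energy of water for increasingly large basis sets, approaching the Hartree-Fock limit from above:

```python
from pyscf import gto, scf

atom = """O  0.000  0.000  0.000
          H  0.757  0.586  0.000
          H -0.757  0.586  0.000"""

# Energies should decrease toward the Hartree-Fock limit as K grows.
# Note K for 6-31g** depends on whether the program uses 5 spherical
# or 6 Cartesian d-functions, as mentioned above.
for basis in ("sto-3g", "6-31g**", "aug-cc-pvtz"):
    mol = gto.M(atom=atom, basis=basis)
    e = scf.RHF(mol).kernel()
    print(f"{basis:12s} K = {mol.nao_nr():3d}  E = {e:.6f} Hartree")
```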
Question: The Schrödinger equation provides a probability density map of the atom. In light of that, are either of the following possible:
1. The orbital/electron cloud converges to a 2d surface without heat (absolute zero)?
2. Heat is responsible for the probability density variation from the above smooth surface?
I have taken two calculus-based physics courses, and Modern Physics covering the Schrödinger equation, the Heisenberg Uncertainty Principle, etc.

Answer (accepted):
1.) No. All the calculations one does in elementary quantum mechanics courses are at zero temperature. If they were at a finite temperature, you could never reliably say what quantum mechanical state your system is in; it would always be in an ensemble of different states. Since the ground-state wavefunction and ground-state density do not form a 2d surface, you don't get one at $T = 0$.
2.) No. At zero temperature, the probability density of your electron is given by the ground-state wavefunction: $$\varrho(x) = \psi_0^*(x) \psi_0(x)$$ At finite temperature, your system is best described by an ensemble of states. Basically, you get $$\varrho(x) = \sum_i p_i \psi_i^*(x) \psi_i(x)$$ where $p_i$ is the ensemble probability for your system to be in state $\psi_i(x)$. For a canonical ensemble, for example, you have $p_i \sim e^{-E_i/kT}$ if the $\psi_i(x)$ are the energy eigenstates with eigenenergies $E_i$. The same is true for any other expectation value: $$\langle \hat A \rangle = \sum_i p_i \langle \psi_i | \hat A | \psi_i \rangle$$ Note the two different expectation values here: one is $\langle \psi_i | \hat A | \psi_i \rangle$, the quantum mechanical expectation value of $\hat A$ when the system is in state $| \psi_i \rangle$. The sum over these, together with the $p_i$, then gives the thermodynamic expectation value. This framework is used everywhere in physics and has been proven to be mind-bogglingly exact.

Comment: +1. This is a very good statement of the state of affairs, according to standard quantum theory. For completeness, it's probably worth adding that this theory is incredibly well-tested experimentally. For instance, when people do atomic physics experiments, they do them at a very wide range of temperatures. The wavefunctions corresponding to the various atomic energy levels do not vary as functions of temperature. (I mention this only because it seems possible that the questioner is asking whether standard theory might be wrong, as opposed to asking what standard theory says. It isn't.) – Ted Bunn Apr 18 '11 at 22:11
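A small numerical illustration of the answer's formula (my sketch, using a particle in a box with $\hbar = m = L = k_B = 1$; the temperature is purely illustrative): the thermal density is a $p_i$-weighted mixture of eigenstate densities.

```python
import numpy as np

# Particle in a box: psi_n(x) = sqrt(2/L) sin(n pi x / L),
# E_n = (n pi)^2 / 2 in units hbar = m = L = 1.
L, T = 1.0, 50.0
x = np.linspace(0.0, L, 501)
n = np.arange(1, 40)
E = (n * np.pi) ** 2 / 2.0

p = np.exp(-E / T)
p /= p.sum()                           # Boltzmann weights p_i ~ exp(-E_i / kT)

psi_sq = (2.0 / L) * np.sin(np.outer(n, np.pi * x / L)) ** 2
rho = p @ psi_sq                       # rho(x) = sum_i p_i |psi_i(x)|^2
print(rho.sum() * (x[1] - x[0]))       # ~1.0: the mixture stays normalized
```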
Question. Let $(S(t))_{t \ge 0}$ be a continuous semigroup of linear operators on some Banach space $X$. Might there exist $f, g\in X$ and $0<t_0<t_1$ such that \begin{equation}S(t_0)f=S(t_1)g\end{equation} but \begin{equation} S(t_0-\varepsilon_0)f\ne S(t_1-\varepsilon_1)g \end{equation} for all $0<\varepsilon_0\le t_0$ and $0<\varepsilon_1\le t_1$ (in particular, $f\ne g$)? Pictorially, I am wondering if the following configuration is possible:

[figure: two forward orbits that merge at a point but are disjoint at all earlier times]

Of course we know that, when the evolution is given by a group, this is not possible: orbits either coincide or are disjoint. This is the case for autonomous ODE systems or for the Schrödinger equation. But here we have a semigroup, such as the heat one, which only goes forward in time, not backwards. So the only obvious thing that we can say is that, as soon as they touch, orbits merge into one. But in principle I don't see why they should coincide in the past.

Added: After some searches, I have found that for the special case of the heat equation the answer is negative. This is commonly referred to as the backward uniqueness property. Here is a simplified version, which takes into account classical solutions on bounded domains:

Theorem (taken from Evans's book on PDE, 2nd ed., p. 64). Let $U\subset \mathbb{R}^n$ be an open and bounded domain. Suppose $u, \bar{u}$ are classical solutions of \begin{equation} \begin{cases} u_t=\Delta u & \text{in }U\times(0, T) \\ u=0 &\text{on }\partial U \times [0, T]. \end{cases} \end{equation} If at time $T$ we have \begin{equation} u(x, T)=\bar{u}(x, T),\quad \forall x\in U, \end{equation} then $u\equiv \bar{u}$ on the whole parabolic cylinder $U\times (0, T]$.

The property holds in much more general functional settings, as I read here (look for the keyword backward uniqueness). All of this leaves the general question open. Is the backward uniqueness property true for all continuous linear semigroups? I guess that the answer should be negative, otherwise this would not be regarded as a special feature of the heat equation. However, I cannot find an explicit example.

Answer 1 (accepted). I believe this is a counterexample: consider the semigroup on $L_2[0,\infty)$ given by $$ S(t) f(x) = f(x+t).$$ Let $f = I_{[0,1)}$ and $g = 2I_{[0,2)}$. Then $S(1)f = S(2)g = 0$, but $S(1-\epsilon_1)f \ne S(2-\epsilon_2)g$.

Answer 2. I think that it is a good question. To simplify, let us study the equation $$ u_t=-\Lambda u, $$ where $\Lambda=\sqrt{-\Delta}$. This equation is locally well-posed (both forward and backward in time) for analytic initial data in a complex strip around the real axis. Then, at some time $T>0$, two solutions with different initial data (at time $t=0$) cannot coincide. Otherwise the backward problem posed at time $T$ would be ill-posed. I think that for the heat equation the situation is similar.

Comment: This example is a bit too complicated for me. Could you do something more basic? It's fine to discuss the heat equation only. – Giuseppe Negro May 24 '13 at 13:31

Comment: The point of this second equation is that I know how to solve it forward and backward (because it is a first order equation both in space and time, you can apply a Cauchy-Kowalevsky theorem). For the heat equation all this is much more difficult. You have to solve a heat equation backward, knowing that your initial data is an entire function.
If this problem is well-posed, the solutions cannot touch. – guacho May 24 '13 at 13:50

Comment: I agree that it is more difficult. We are lucky that someone already did this for us. See the update to the question. – Giuseppe Negro May 27 '13 at 19:17
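A quick numerical check of the accepted counterexample (my sketch; the grid and interpolation are only an illustration of the exact shift semigroup):

```python
import numpy as np

# Shift semigroup S(t)f(x) = f(x+t) on a grid over [0, 10).
x = np.linspace(0.0, 10.0, 10001)
f = (x < 1.0).astype(float)           # f = indicator of [0, 1)
g = 2.0 * (x < 2.0).astype(float)     # g = 2 * indicator of [0, 2)

def S(t, h):
    # Evaluate h(x + t) on the grid; h is taken to be 0 beyond the grid.
    return np.interp(x + t, x, h, right=0.0)

print(np.allclose(S(1.0, f), S(2.0, g)))   # True: both are identically 0
print(np.allclose(S(0.5, f), S(1.5, g)))   # False: the orbits differ before merging
```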
The Schrödinger equation is one of the most basic formulas of quantum physics. With the Schrödinger equation, you can solve for the wave functions of particles, and that allows you to say everything that can be said about the particle: where it is, what its momentum is, and so on. In the following version of the Schrödinger equation, the first term represents the kinetic energy and the second term represents the potential energy:

$$-\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r})$$
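A minimal numerical sketch of this equation (my illustration, in atomic units $\hbar = m = 1$ and with an assumed harmonic potential $V(x) = x^2/2$, whose exact eigenvalues are $E_n = n + 1/2$):

```python
import numpy as np

# Finite-difference solution of the 1D time-independent equation above.
N, L = 2000, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
V = 0.5 * x ** 2                     # assumed harmonic potential

# H = kinetic (second-difference) term + potential, as a tridiagonal matrix.
diag = 1.0 / dx ** 2 + V
off = np.full(N - 1, -0.5 / dx ** 2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)           # eigenenergies and wave functions
print(E[:3])                         # ~ [0.5, 1.5, 2.5]
```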
General properties
- Appearance: colorless gas with violet glow
- Name, symbol, number: hydrogen, H, 1
- Pronunciation: /ˈhaɪdrɵdʒɨn/,[1] HYE-dro-jin
- Element category: nonmetal
- Group, period, block: 1, 1, s
- Standard atomic weight: 1.00794(7) g·mol−1
- Electron configuration: 1s1
- Electrons per shell: 1

Physical properties
- Color: colorless
- Phase: gas
- Density (0 °C, 101.325 kPa): 0.08988 g/L
- Liquid density at m.p.: 0.07 g·cm−3 (0.0763 solid)[2]
- Melting point: 14.01 K, −259.14 °C, −434.45 °F
- Boiling point: 20.28 K, −252.87 °C, −423.17 °F
- Triple point: 13.8033 K (−259 °C), 7.042 kPa
- Critical point: 32.97 K, 1.293 MPa
- Heat of fusion (H2): 0.117 kJ·mol−1
- Heat of vaporization (H2): 0.904 kJ·mol−1
- Specific heat capacity (25 °C, H2): 28.836 J·mol−1·K−1

Atomic properties
- Oxidation states: 1, −1 (amphoteric oxide)
- Electronegativity: 2.20 (Pauling scale)
- Ionization energy: 1st: 1312.0 kJ·mol−1
- Covalent radius: 31±5 pm
- Van der Waals radius: 120 pm
- Crystal structure: hexagonal
- Magnetic ordering: diamagnetic[3]
- Thermal conductivity (300 K): 0.1805 W·m−1·K−1
- Speed of sound (gas, 27 °C): 1310 m/s
- CAS registry number: 1333-74-0

Most stable isotopes (main article: Isotopes of hydrogen)
- 1H: 99.985% natural abundance; stable with 0 neutrons
- 2H: 0.015%; stable with 1 neutron
- 3H: trace; half-life 12.32 y; β− decay (0.01861 MeV) to 3He

Hydrogen is the chemical element with atomic number 1. It is represented by the symbol H. With an atomic weight of 1.00794 u, hydrogen is the lightest and most abundant chemical element, constituting roughly 75% of the Universe's elemental mass.[4] Stars in the main sequence are mainly composed of hydrogen in its plasma state. Naturally occurring elemental hydrogen is relatively rare on Earth. The most common isotope of hydrogen is protium (name rarely used, symbol H) with a single proton and no neutrons. In ionic compounds it can take a negative charge (an anion known as a hydride and written as H−), or a positive charge as the species H+. The latter cation is written as though composed of a bare proton, but in reality, hydrogen cations in ionic compounds always occur as more complex species. Hydrogen forms compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, with many reactions exchanging protons between soluble molecules. As the simplest atom known, the hydrogen atom has been of theoretical use. For example, as the only neutral atom with an analytic solution to the Schrödinger equation, the study of the energetics and bonding of the hydrogen atom played a key role in the development of quantum mechanics. Hydrogen gas (now known to be H2) was first artificially produced in the early 16th century, via the mixing of metals with strong acids. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance,[5] and that it produces water when burned, a property which later gave it its name, which in Greek means "water-former". At standard temperature and pressure, hydrogen is a colorless, odorless, nonmetallic, tasteless, highly combustible diatomic gas with the molecular formula H2.
Industrial production is mainly from the steam reforming of natural gas, and less often from more energy-intensive hydrogen production methods like the electrolysis of water.[6] Most hydrogen is employed near its production site, with the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals,[7] complicating the design of pipelines and storage tanks.[8]

The Space Shuttle Main Engine burns hydrogen with oxygen, producing a nearly invisible flame at full thrust.

Hydrogen gas (dihydrogen[9]) is highly flammable and will burn in air at a very wide range of concentrations, between 4% and 75% by volume.[10] The enthalpy of combustion for hydrogen is −286 kJ/mol:[11]

2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)[note 1]

Hydrogen gas forms explosive mixtures with air in the concentration range 4–74% (volume per cent of hydrogen in air) and with chlorine in the range 5–95%. The mixtures may be detonated by spark, heat or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F).[12] Pure hydrogen-oxygen flames emit ultraviolet light and are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle main engine compared to the highly visible plume of a Space Shuttle Solid Rocket Booster. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. The destruction of the Hindenburg airship was an infamous example of hydrogen combustion; the cause is debated, but the visible flames were the result of combustible materials in the ship's skin.[13] Because hydrogen is buoyant in air, hydrogen flames tend to ascend rapidly and cause less damage than hydrocarbon fires. Two-thirds of the Hindenburg passengers survived the fire, and many deaths were instead the result of falls or burning diesel fuel.[14] H2 reacts with every oxidizing element. Hydrogen can react spontaneously and violently at room temperature with chlorine and fluorine to form the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride, which are also potentially dangerous acids.[15]

Electron energy levels

Depiction of a hydrogen atom showing the diameter as about twice the Bohr model radius (image not to scale).

The ground state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 92 nm wavelength.[16] The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the electromagnetic force attracts electrons and protons to one another, while planets and celestial objects are attracted to each other by gravity.
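As a small worked example of what that combustion enthalpy implies (my arithmetic, not from the source): dividing 286 kJ/mol by the molar mass of H2 gives hydrogen's energy density per unit mass.

```python
# Energy released per kg of H2 burned, from the 286 kJ/mol figure above.
kj_per_mol = 286.0
molar_mass_g = 2.016                   # g/mol for H2
mj_per_kg = kj_per_mol / molar_mass_g  # kJ/g is numerically MJ/kg
kwh_per_kg = mj_per_kg / 3.6
print(mj_per_kg, kwh_per_kg)           # ~141.9 MJ/kg, ~39.4 kWh/kg
```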
Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.[17] A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation or the equivalent Feynman path integral formulation to calculate the probability density of the electron around the proton.[18]

Elemental molecular forms

First tracks observed in the liquid hydrogen bubble chamber at the Bevatron.

There exist two different spin isomers of hydrogen diatomic molecules that differ by the relative spin of their nuclei.[19] In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular spin quantum number of 1 (½+½); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (½−½). At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form".[20] The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but since the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in Spin isomers of hydrogen.[21] The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal properties.[22] The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly.[23] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel[24] compounds, are used during hydrogen cooling.[25] A molecular form called protonated molecular hydrogen, or H3+, is found in the interstellar medium (ISM), where it is generated by ionization of molecular hydrogen by cosmic rays. It has also been observed in the upper atmosphere of the planet Jupiter. This molecule is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium.[26] Neutral triatomic hydrogen H3 can only exist in an excited form and is unstable.[27]
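The equilibrium figures quoted above (about 75% ortho at room temperature, almost pure para at very low temperature) follow from nuclear-spin statistics. A sketch of the standard statistical-mechanics estimate (my illustration, not from the source; the rotational temperature of 87.6 K for H2 is the textbook value): odd rotational levels (ortho, spin weight 3) compete with even levels (para, weight 1).

```python
import numpy as np

def para_fraction(T, theta_rot=87.6, jmax=30):
    # Equilibrium para fraction of H2: even J <-> para (weight 1),
    # odd J <-> ortho (weight 3).
    J = np.arange(jmax)
    boltz = (2 * J + 1) * np.exp(-theta_rot * J * (J + 1) / T)
    z_para = boltz[J % 2 == 0].sum()
    z_ortho = 3 * boltz[J % 2 == 1].sum()
    return z_para / (z_para + z_ortho)

print(para_fraction(300.0))   # ~0.25: the "normal" 3:1 ortho:para mixture
print(para_fraction(20.0))    # ~1.0: almost pure para near the boiling point
```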
Covalent and organic compounds

While H2 is not very reactive under standard conditions, it does form compounds with most elements. Millions of hydrocarbons are known, but they are not formed by the direct reaction of elementary hydrogen and carbon. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I); in these compounds hydrogen takes on a partial positive charge.[28] When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of strong noncovalent bonding called hydrogen bonding, which is critical to the stability of many biological molecules.[29][30] Hydrogen also forms compounds with less electronegative elements, such as the metals and metalloids, in which it takes on a partial negative charge. These compounds are often known as hydrides.[31] Hydrogen forms a vast array of compounds with carbon. Because of their general association with living things, these compounds came to be called organic compounds;[32] the study of their properties is known as organic chemistry[33] and their study in the context of living organisms is known as biochemistry.[34] By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and since it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry.[32] In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes.[35] Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group I and II salt-like hydrides, was demonstrated by Moers in 1920 with the electrolysis of molten lithium hydride (LiH), which produced a stoichiometric quantity of hydrogen at the anode.[36] For hydrides other than those of group I and II metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group II hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, there are over 100 binary borane hydrides known, but only one binary aluminium hydride.[37] Binary indium hydride has not yet been identified, although larger complexes exist.[38]

Protons and acids

Oxidation of hydrogen, in the sense of removing its electron, formally gives H+, containing no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton. This species is central to the discussion of acids. Under the Bronsted-Lowry theory, acids are proton donors, while bases are proton acceptors. A bare proton, H+, cannot exist in solution or in ionic crystals, because of its unstoppable attraction to other atoms or molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them.
However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species. To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this case, such solvated hydrogen cations are thought more realistically to be organized into clusters that form species closer to H9O4+.[39] Other oxonium ions are found when water is in solution with other solvents.[40] Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the triatomic hydrogen cation.[41]

Protium, the most common isotope of hydrogen, has one proton and one electron. Unique among all stable isotopes, it has no neutrons (see diproton for discussion of why others do not exist).

Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.[42][43]
• 1H is the most common hydrogen isotope, with an abundance of more than 99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium.[44]
• 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus. Essentially all deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since that time. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy.[45] Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.[46]
• 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years.[35] Small amounts of tritium occur naturally because of the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests.[47] It is used in nuclear fusion reactions,[48] as a tracer in isotope geochemistry,[49] and in specialized self-powered lighting devices.[50] Tritium has also been used in chemical and biological labeling experiments as a radiolabel.[51]

Hydrogen is the only element that has different names for its isotopes in common use today. (During the early study of radioactivity, various heavy radioactive isotopes were given names, but such names are no longer used.)
The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol P is already in use for phosphorus and thus is not available for protium.[52] In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred.[53]

Discovery and use

Hydrogen gas, H2, was first artificially produced and formally described by T. Von Hohenheim (also known as Paracelsus, 1493–1541) via the mixing of metals with strong acids.[54] He was unaware that the flammable gas produced by this chemical reaction was a new chemical element. In 1671, Robert Boyle rediscovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.[55] In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by identifying the gas from a metal-acid reaction as "flammable air" and further finding in 1781 that the gas produces water when burned. He is usually given credit for its discovery as an element.[56][57] In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek hydro meaning water and genes meaning creator)[58] when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.[57] Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask.[57] He produced solid hydrogen the next year.[57] Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck.[56] Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932.[57] François Isaac de Rivaz built the first internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. Döbereiner's lamp and the limelight were invented in 1823.[57] The first hydrogen-filled balloon was invented by Jacques Charles in 1783.[57] Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard.[57] German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins, the first of which had its maiden flight in 1900.[57] Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on May 6, 1937.[57] The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. But the damage to hydrogen's reputation as a lifting gas was already done.
In the same year, 1937, the first hydrogen-cooled turbogenerator went into service at Dayton, Ohio, operated by the Dayton Power & Light Co., with gaseous hydrogen as a coolant in the rotor and the stator;[59] because of the high thermal conductivity of hydrogen gas, this is the most common type in its field today. The nickel-hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation Technology Satellite-2 (NTS-2).[60] For example, the ISS,[61] Mars Odyssey[62] and the Mars Global Surveyor[63] are equipped with nickel-hydrogen batteries. When its original batteries were finally changed in May 2009, more than 19 years after launch, the Hubble Space Telescope held the record for the highest number of charge/discharge cycles.

Role in quantum theory

Hydrogen emission spectrum lines in the visible range. These are the four visible lines of the Balmer series.

Because of its relatively simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure.[64] Furthermore, the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ allowed fuller understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s. One of the first quantum effects to be explicitly noticed (but not understood at the time) was a Maxwell observation involving hydrogen, half a century before full quantum mechanical theory arrived. Maxwell observed that the specific heat capacity of H2 unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in H2 because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect.[65]

Natural occurrence

NGC 604, a giant region of ionized hydrogen in the Triangulum Galaxy.

Hydrogen is the most abundant element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms.[66] This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton-proton reaction and CNO cycle nuclear fusion.[67] Throughout the universe, hydrogen is mostly found in the atomic and plasma states, whose properties are quite different from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the interstellar medium.
The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the Universe up to redshift z=4.[68] Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2 (for data see table). However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface.[69] Most of the Earth's hydrogen is in the form of chemical compounds such as hydrocarbons and water.[35] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance.[70] H2 is produced in chemistry and biology laboratories, often as a by-product of other reactions; in industry for the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents in biochemical reactions. In the laboratory, H2 is usually prepared by the reaction of acids on metals such as zinc with Kipp's apparatus.

Zn + 2 H+ → Zn2+ + H2

Aluminium can also produce H2 upon treatment with bases:

2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2

The electrolysis of water is a simple method of producing hydrogen. A low voltage current is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals. (Iron, for instance, would oxidize, and thus decrease the amount of oxygen given off.) The theoretical maximum efficiency (electricity used vs. energetic value of hydrogen produced) is between 80–94%.[71]

2 H2O(l) → 2 H2(g) + O2(g)

In 2007, it was discovered that an alloy of aluminium and gallium in pellet form added to water could be used to generate hydrogen. The process also creates alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used. This has important potential implications for a hydrogen economy, since hydrogen can be produced on-site and does not need to be transported.[72] Hydrogen can be prepared in several different ways, but economically the most important processes involve removal of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[73] At high temperatures (1000–1400 K; 700–1100 °C; 1,300–2,000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2.

CH4 + H2O → CO + 3 H2

This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), since high-pressure H2 is the most marketable product and Pressure Swing Adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon:

CH4 → C + 2 H2

Consequently, steam reforming typically employs an excess of H2O.
Additional hydrogen can be recovered from the steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This reaction is also a common industrial source of carbon dioxide:[73]

CO + H2O → CO2 + H2

Other important methods for H2 production include partial oxidation of hydrocarbons:[74]

2 CH4 + O2 → 2 CO + 4 H2

and the coal reaction, which can serve as a prelude to the shift reaction above:[73]

C + H2O → CO + H2

Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia, hydrogen is generated from natural gas.[75] Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.[76] There are more than 200 thermochemical cycles which can be used for water splitting; around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur-iodine cycle, copper-chlorine cycle and hybrid sulfur cycle, are under research and in the testing phase to produce hydrogen and oxygen from water and heat without using electricity.[77] A number of laboratories (including in France, Germany, Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen from solar energy and water.[78] Large quantities of H2 are needed in the petroleum and chemical industries. The largest application of H2 is for the processing ("upgrading") of fossil fuels, and in the production of ammonia. The key consumers of H2 in a petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking. H2 has several other important uses. H2 is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing agent of metallic ores.[79] Hydrogen is highly soluble in many rare earth and transition metals[80] and is soluble in both nanocrystalline and amorphous metals.[81] Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice.[82] These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas also poses a metallurgical problem, as hydrogen solubility contributes in an unwanted way to the embrittlement of many metals,[7] complicating the design of pipelines and storage tanks.[8] Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding.[83][84] H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies.[85] Since H2 is lighter than air, having a little more than 1/15 of the density of air, it was once widely used as a lifting gas in balloons and airships.[86] In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries.[87] Hydrogen is an authorized food additive (E 949) that allows food package leak testing, among other anti-oxidizing properties.[88] Hydrogen's rarer isotopes also each have specific applications.
Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions.[57] Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects.[89] Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs,[90] as an isotopic label in the biosciences,[51] and as a radiation source in luminous paints.[91] The triple point temperature of equilibrium hydrogen is a defining fixed point on the ITS-90 temperature scale, at 13.8033 kelvins.[92]

Energy carrier

Hydrogen is not an energy resource,[93] except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development.[94] The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth.[95] Elemental hydrogen from solar, biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery. Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable.[93] The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is higher.[93] Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible future carrier of energy on an economy-wide scale.[96] For example, carbon capture and sequestration could be conducted at the point of H2 production from fossil fuels.[97] Hydrogen used in transportation would burn relatively cleanly, with some NOx emissions,[98] but without carbon emissions.[97] However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial.[99]

Semiconductor industry

Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties.[100] It is also a potential electron donor in various oxide materials, including ZnO,[101][102] SnO2, CdO, MgO,[103] ZrO2, HfO2, La2O3, Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3.[104]

Biological reactions

H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its component two protons and two electrons. Creation of hydrogen gas occurs in the transfer of reducing equivalents produced during pyruvate fermentation to water.[105] Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms.
Some such organisms—including the alga Chlamydomonas reinhardtii and cyanobacteria—have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast.[106] Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen.[107] Efforts have also been undertaken with genetically modified alga in a bioreactor.[108]

Safety and precautions

Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxant in its pure, oxygen-free form.[109] In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids.[110] Hydrogen dissolves in many metals and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[111] leading to cracks and explosions.[112] Hydrogen gas leaking into external air may spontaneously ignite. Moreover, hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns.[113] Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. Many physical and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Hydrogen detonation parameters, such as critical detonation pressure and temperature, strongly depend on the container geometry.[109]
Jennifer C. Brookes

Human sensory processes are well understood: hearing, seeing, perhaps even tasting and touch—but we do not understand smell—the elusive sense. That is, for the others we know what stimuli cause what responses, and why and how. These fundamental questions are not answered within the sphere of smell science; we do not know what it is about a molecule that … smells. I report, here, the status quo theories for olfaction, highlighting what we do not know, and explaining why dismissing the perception of the input as 'too subjective' acts as a roadblock not conducive to scientific inquiry. I outline the current and new theory that conjectures a mechanism for signal transduction based on quantum mechanical phenomena, dubbed the 'swipe card', which is perhaps controversial but feasible. I show that such lines of thinking may answer some questions, or at least pose the right questions. Most importantly, I draw links and comparisons towards a better understanding of how small (tens of atoms) molecules can interact so specially with large (tens of thousands of atoms) proteins in a way that is so integral to healthy living. Repercussions of this work are important not just for understanding a basic scientific tool used by us all, and often taken for granted; it is also a step closer to understanding generic mechanisms between drug and receptor, for example.

1. Introduction

Aristotle wrote in his major treatise, 'On the Soul', that 'Generally, about all perception, we can say that a sense is what has the power of receiving into itself the sensible forms of things without the matter, in the way in which a piece of wax takes on the impress of a signet ring without the iron or gold'. To paraphrase: perception is the shadow, the imitator, the model. Perception is a tricky matter, and perhaps, at this moment, beyond the reaches of scientific rationale. However, to scrutinize Aristotle's analogy of the impression by the ring on the wax (the initiation of a sensory process), the interaction humans have with the world at the first stages is very much with the matter. For instance, we absorb stimulating photons, packets (quanta) of light, that activate rods and cones in our eyes. We conduct the compressions and rarefactions of sound waves into our ears. We react with acids (−COOH carboxyl groups) on our tongue. In these first stages of recognition we interact with the world in a way that is much more intimate, though not to belittle the power and impressive nature of memory, than in the stages of perception and recall. It is imperative to realize that, in the first stages of any sensory process, we are physically interacting with the outside world. Smell is arguably the most intimate of all the senses. When we smell an odorant molecule, it is volatile and non-reacting. These molecules are small enough to reach deep into the nose cavity, diffuse through a 10–40 μm thick mucus layer (Graziadei 1971), meet one of tens of thousands of cilia that project from the olfactory sensory neurons, and absorb onto one of the approximately 347 receptor types at the extracellular–intracellular interface (Buck 2004). There are thus approximately 347 related and various olfactory receptors that are presumably 'tuned' to odorants, sometimes exclusively, sometimes not (Malnic et al. 1999). Some receptors are broadly and some are selectively 'tuned'.
A lipid bilayer stabilizes the olfactory receptor and is responsible for its proper orientation; each receptor has seven helices that cross the membrane. It is here, where the initial interactions occur between the outside environment and our central nervous system, that the response receptors sit (figure 1). At recognition, the odorant is the first messenger; the receptors, G-protein-coupled receptors (GPCRs), release a G-protein unit and a second messenger process of transduction ensues that controls a Ca2+ and Na+ ion influx into the cell. Subsequent to this, Ca2+ ions act as third messengers that induce Cl− ions to flow. This in turn causes the olfactory sensory neuron to fire. In this manner an odorant molecule message becomes an electric signal to be interpreted by the brain. Yet an often overlooked step is that the odorant receptor has to be 'turned on' to transmit this electricity, this intimate step being the gate keeper that determines recognition or ignorance, transmission of a signal or not. There is no obvious explanation as to why particular odorants open particular gateways. This is the most curious question in olfaction—'What turns on the receptor?'—what is it about the matter, the odorant, that we are interacting with? What initiates transmission?

Figure 1. The olfactory epithelium, where the odorant meets the central nervous system, is shown. Olfactory receptors are the gate keepers that determine signal firing. They are found at the interface of the olfactory cilia and control whether or not an odorant will initiate a signal transduction process that results in the depolarization of the olfactory sensory neuron. The electric signal generated is projected onto the olfactory bulb (OB). Adapted from a presentation by Simon Gane.

2. Past and present theories of olfaction

Aristotle's figurative description has survived since 350 BC and has even taken literal form in the context of what is named the 'lock and key' description of olfaction, first put forward in 1963 by Amoore. The lock and key description states that to produce a particular scent a particular fit is required between the odorant and receptor (Amoore 1963). As in many enzyme recognition processes, well described for example by Sigala et al. (2008), the receptor recognizes an odorant via shape complementarity as the odorant 'key' fits the receptor 'lock'. This description, however useful elsewhere, does not work in olfaction. Different molecules that could fit the same site of a receptor more often than not do not smell the same (Sell 2006). Further, on the other hand, physiological studies of rodent olfactory receptor neurons have shown that olfactory cells respond to many odorants that are not the same shape (Tareilus et al. 1995; Rawson & Gomez 2002). Therefore, any predictions based on shape alone will give surprising results. Furthermore, the lock and key model does not explain what happens next in the process of smell. What does shape complementarity achieve? How, mechanically, can a small odorant (key) initiate global changes in a much larger and floppier1 protein (lock) when it is physically not comparable to an actual lock and key? Therefore, as a mechanism of signal initiation, lock and key alone cannot provide an explanation of signal transduction. Mori & Shepard (1994) offer an alternative variation on the lock and key model: the 'odotope theory', in which key features (shapes) of the odorant are detected by the receptor rather than the shape as a whole.
It may be one structural feature of a molecule, such as that of the functional group, that a particular receptor responds to, as opposed to the general shape of that molecule. This theory placed the importance on the atoms present rather than on the position of the atoms. Further, this notion better represents the known 'combinatorial code' nature of odorant signalling (Malnic et al. 1999), whereby one odorant will activate several receptor types and one receptor type will respond to many odorants. One main objection to this model is the existence of the many well-documented cases of chiral molecules (handed, mirror image molecules or enantiomer pairs) that smell different in their mirror image forms. If the receptors detect individual groups contained on the molecule as opposed to the molecule as a whole, then the famous right-handed 4R-(−)-carvone ('spearmint, fresh herbal') and left-handed 4S-(+)-carvone ('caraway, fresh herbal') should smell the same. They, however, do not (Leffingwell and associates). Quite different from any shape structure-based theory came the idea that a molecule's vibrational spectrum determines its scent, proposed by Dyson (1938) and Wright (1977). Much like discriminating colours by their wavelengths, unique scents are attributed to a unique spectrum of signals: the combinatorial code. Unfortunately, the case of mirror image molecules refutes this theory too: such molecules would exhibit exactly the same spectra given their symmetry. Furthermore, even if the discrimination of smell were purely vibrational, how would this signal be measured? Like shape, how does the receptor detect the different vibrations between different molecules? Turin (1996) proposed a theory to include quantum mechanics in our understanding of smell. He postulated that the olfactory receptor contains electron donor (D) and electron acceptor (A) units which are separated in energy by a fixed amount that matches a specific odorant's quanta of vibration (a phonon). Upon an odorant binding to its receptor, an electron tunnelling event occurs when the emitted phonon fills this D–A gap, which in turn may initiate the transmission step towards the brain. Once the electron reaches A, the signal is initiated via the G-protein release mechanism. The D and A are electron sources and sinks, respectively, as part of the receptor protein, which provides the tunnelling electron that is the message carrier (figure 2).

Figure 2. The proposed sequence of events according to Turin's theory of signal transduction is shown. The olfactory receptor is pictured here as a cartoon with five cylinders to represent the protein helices (there are typically seven); the odorant is a carborane isomer—a camphoraceous smelling molecule (Turin & Yoshii 2003). (a) Source of electrons available at RD. (b) Electron tunnels to site D (donor) as odorant docks and deforms receptor. (c) Electron tunnels to A (acceptor) mediated by odorant phonon. (d) Odorant is expelled and electron transmission to RA initiates signal.

Neither vibration-based nor shape-based theories of olfaction have fully satisfied scientific scrutiny to date. However, the introduction of quantum mechanics into describing the initial processes of olfaction can no longer be overlooked, as it provides physical validation of Turin's original idea.
This model of olfaction incorporating quantum mechanical formalism manages to detail a receptor's odorant discrimination based on simple oscillations while successfully distinguishing between mirror images of a molecule without violating any fundamental physical rules (Brookes et al. 2007). This model, and the application of the 'swipe card' paradigm to cover generalizations of models like Turin's, is described below.

3. A swipe card model

(a) Good, good, good, good vibrations

Vibrations are everywhere; from the quartz crystal that times the hands on your watch to the spring in a kangaroo's hop. These examples are simple harmonic oscillators (SHOs). The bonds in a molecule, such as an odorant molecule, can also be approximated as SHOs. The nuclei of a molecule with mass m, displaced at small distances from an equilibrium position, tend back to their starting points under the forces of the surrounding electrons. These motions are restorative in a way described by Hooke's Law, F=−kx, the 'force is as the extension'. By integration of this force the potential energy can be determined, $V(x)=\tfrac{1}{2}kx^2$. The first derivative, where F=−∂V/∂x=0, determines the equilibrium position—the most relaxed state with least energy, the state that most of nature wishes to be in. Plotting the potential energy V(x) versus displacement x provides a parabola from which the motion is characterized, and the spring constant k can be found. Thus, simple harmonic motion can be characterized by a simple curve. Using Hooke's Law and solving the time-independent Schrödinger equation for quantum behaviour, the solutions show the atomic modes of motion are quantized by $E_n=(n+\tfrac{1}{2})\hbar\omega$, where $\omega=\sqrt{k/m}$ (the angular frequency) and n indicates the quantum level (the level of phonon excitation). At the simplest level of approximation, and under the application of a coherent driving force, these modes of motion, normal modes, are molecular dances of concerted motion whereby every atom passes through its equilibrium position at the same time. The centre of mass of the molecule never changes and each mode is independent (never exchanging energy with another mode). Though molecular vibrational spectra are not entirely harmonic (there are higher order terms), at small displacements from the equilibrium we can make a useful approximation that each mode of vibration is like a simple harmonic oscillator.
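To get a feel for the size of these vibrational quanta, here is a rough estimate (my numbers: the force constant and reduced mass are generic textbook values for a C–H stretch, not values from the paper):

```python
import math

# Harmonic estimate of a C-H stretch frequency: omega = sqrt(k / mu).
k = 500.0                               # N/m, typical C-H force constant
amu = 1.66054e-27                       # kg per atomic mass unit
mu = (12.0 * 1.0) / (12.0 + 1.0) * amu  # reduced mass of the C-H pair
omega = math.sqrt(k / mu)               # angular frequency, rad/s
c = 2.99792458e10                       # speed of light, cm/s
print(omega / (2 * math.pi * c))        # ~3000 cm^-1: the IR C-H stretch region
```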
A configuration coordinate diagram helps us to put the electron transition in the context of nuclear vibrations. The coordinate diagram uses two parabolas (figure 3) to describe the harmonic motions of all oscillations within the receptor, initially (in state D) and finally (in state A). This approximates all the SHOs as one collective motion. The nuclear modes of motion, not necessarily the normal modes, that describe the reaction pathway (electron on D or A) constitute the reaction coordinate. There are two instances (channels) when an electron can transfer from D to A while satisfying the fundamental law of energy conservation: when εD − εA = ℏω₀ or when εD − εA = 0. The first instance corresponds to receptor discrimination of an odorant and the second does not. In the non-discriminating channel the odorant is not excited. In the discriminating channel, the odorant absorbs the energy ℏω₀. The probability for both events, discriminatory and non-discriminatory, can be calculated with Fermi's golden rule. For quantum mechanics to explain how humans smell, the discriminatory channel must 'win'.

Figure 3. The configuration coordinate diagram to describe events in olfaction is shown. Electron tunnelling from the donor |D〉 to acceptor |A〉 is facilitated by the excitation of an appropriate odorant phonon corresponding to εD − εA = ℏω₀. The change in force as the electron transfers is characterized by the shift in energy, down the vertical axis E, and displacement, along the reaction coordinate Q, that is phonon assisted. The reaction coordinate describes the displacements of nuclear modes that entail the reaction pathway.

The presence of an odorant introduces a non-adiabatic interaction between donor and acceptor whereby an electron may make a quantum jump from one energy state to another: from |D〉 to |A〉. We assume that these states are sharp electronic energy levels that couple only weakly with nuclear transitions. Any strong interaction would increase lifetime broadening and obscure selectivity. In one way the presence of an odorant introduces an electronic state like a 'stepping stone' for an electron to hop from the electronic states D to A. The strength of this hopping is determined by the electronic part of a transition matrix element. This is the electronic contribution to the non-adiabatic component which determines the ease with which the electron can transfer across these states.

As the electron transfers from D to A the odorant feels a change in force which springs the key normal mode into action (excites the relevant odorant vibration). This change in force is measured by the electron–phonon coupling, a Huang–Rhys factor (S). S is the ratio of the relaxation energy λ (or reorganization energy) to the phonon's energy: S = λ/ℏω₀. The relaxation energy is determined from the change in force incurred as the electron moves from D to A in the field of the oscillating odorant. Thus, those IR-active phonons, where there is a change in dipole moment, are those detected in this model. All other vibrations in the protein are either too low in frequency or too far away from the binding sites to contribute to the S of the odorant and interfere with recognition (Brookes et al. 2007).

By analysis, the crucial result given by the application of the golden rule is that the discriminating case has nearly a 600× higher rate of transmission than that of the non-discriminating case.
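A much-simplified toy calculation, not the cited one, can at least illustrate where such channel weights come from. For a single displaced harmonic mode at zero temperature, Franck–Condon theory gives a Poisson distribution of phonon emission during the electron jump; the real ~600× preference also relies on the energy-conservation bottleneck of the elastic channel, which this sketch deliberately ignores.

```python
# Hedged toy model: Franck-Condon weights for emitting n phonons of the
# odorant mode during the D -> A jump,  P(n) = exp(-S) * S**n / n!,
# with S the Huang-Rhys factor (S = lambda / (hbar*omega0)).
import math

def franck_condon_weight(S, n):
    """Relative weight for emitting n phonons of the coupled mode."""
    return math.exp(-S) * S**n / math.factorial(n)

S = 0.1                           # assumed weak electron-phonon coupling
p0 = franck_condon_weight(S, 0)   # zero-phonon (elastic) weight
p1 = franck_condon_weight(S, 1)   # one-phonon (discriminating) weight
print(p0, p1)  # ~0.905 vs ~0.090; the full rate ratio in the paper comes
               # from suppressing the elastic channel by energy conservation
```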
The ~600× preference holds true when: we use harmonic approximations; background oscillations are low frequency and weakly coupled to the electron transfer; there is a low reorganization energy; and one phonon of vibration in the odorant is excited. Under these conditions the inelastic channel is preferred and the odorant with the 'right' frequency will be detected over the 'wrong' one owing to resonance transfer of bond vibrations communicated between the odorant and receptor. This provides a model that defines the gate-keeping nature of olfactory receptors.

(c) A 'swipe card' model

For a fast and discriminating rate of electron transfer, a strong mixing of the right electronic and vibrational states is required. Both states are affected by the structure of the molecule. A compromise of geometrical (shape) factors is required in combination with the right energetic (vibrational) factors. This model, therefore, incorporates principles from the lock and key model whereby the shape of an odorant is important. However, it differs from the lock and key model because it describes the next step in signal transduction, which the lock and key does not. This model may be considered much like a 'swipe card' (or a key card) where an approximate fit between odorant and receptor (shape) is necessary to swipe the key into the lock, but it is the internal message (vibration) that is essential to open the door. Thus, shape is necessary but not sufficient.

(d) Some answers, and more questions

How, then, can we quantify smell? The swipe card model tests the physical feasibility of Turin's postulate and finds that a smell signal can be quantified by the rate of electron transfer. Further, the odorant combinatorial code may be calculated by measuring the electron–phonon coupling (Huang–Rhys factors) for each odorant mode and plotting an odorant-characterizing spectrum. The change in force owing to electron transfer is sensitive to the direction of the oscillating charges in the odorant, and this is captured in the Huang–Rhys factor. Mirror image molecules, while they will have identical vibrational frequencies ω₀, will have differing Huang–Rhys factors owing to the contrast in the directions of the relevant oscillating atoms when they are held in the same chiral (and thus symmetry-breaking) receptor. The odorant combinatorial code has to differ by the activation of only one extra receptor type to drastically redefine a smell. In (4R)-carvone, for example, it may be that the directions of the oscillating atoms maximize the change of force that is measured in the electron–phonon coupling. So the swipe card model even explains the apparent mirror image molecule oddities. Though still untested, the swipe card model at the very least provides explanation and a method of prediction as to how the olfactory receptor gate keepers may respond to certain odorants.

Questions that face scientists in olfaction now must include those that really challenge and test the theory of smellable vibrations. Can humans discriminate isotopes at the receptor stage? Can electron transfer at the receptor be detected by experiment? How well does the Huang–Rhys factor, and the odorant spectrum, define and predict other oddities in smell? What can knowing more about signal transduction in olfaction tell us about generic signal transduction mechanisms?

4. The future

Mirror image molecules are of interest because they exemplify the importance of shape in receptor detection, while still leaving the rules of shape selectivity obscure.
They clearly show that the positions of atoms matter, though we still lack a scientific explanation as to why. A recent study by Brookes et al. (2009), which categorizes a suite of mirror image molecules documented by Leffingwell and associates, finds that, by categorizing the odorants by their scent descriptors and physical attributes, a simple rule can be determined. The rule is that odorant molecules of an enantiomer pair will smell alike (type 1) when they are rigid, and will smell different (type 2) when they are flexible (Brookes et al. 2009). This study of flexibility determined that those odorant molecules containing six-membered rings can twist and pseudo-rotate between 'twist'-, 'boat'- and 'chair'-like configurations, similar to the flexibility seen in cyclohexane (figure 4), or have cis–trans stereo-isomeric flexibilities.

This raises the question: which structure is it that is recognized by the receptor? It is perhaps more relevant to ask which shape turns the receptor 'on' as opposed to which shape allows the odorant (ligand) to get there. Note that the degree of flexibility will affect recognition at the site (affinity) but also the signalling or switching (efficacy/actuation). I propose that the evidence of differentiable mirror image odorants demonstrates the importance of flexibility in olfactory actuation. It is common in the relevant literature these days to propose that flexibility aids the affinity a ligand has for a receptor site. This would imply, however, that two mirror image related molecules, equal in degrees of flexibility, would activate an equal set of receptors and smell the same, when they often do not. The inference that greater flexibility determines a more promiscuous ligand is not valid here. Flexibility could be as much a hindrance as an aid when it comes to ligand–receptor actuation.

Figure 4. (a) (4R)-(−)-carvone with the isopropenyl group axial to the ring. (b) (4R)-(−)-carvone with the isopropenyl group equatorial to the ring; adapted from Brookes et al. (2009). (c) The 'twist', 'boat' and 'chair' states (and deviations in between) in cyclohexane; adapted from Juaristi (1995). Also, the difference (or lack thereof) between two-dimensional structures of (d) 5α-diH- and (e) 5β-diH-progesterone is shown.

Odorants are not the only small molecules that interact unpredictably with large proteins; steroid hormones, anaesthetics and neurotransmitters, to name a few, are examples of ligands that interact specifically with special receptors to produce important biological processes. Steroids, in particular, exhibit similar curiosities to odorants. Compare 5α-diH- and 5β-diH-progesterone, for example, in figure 4. The only structural difference between these ligands is the direction of one carbon–hydrogen σ-bond. Two-dimensionally, the differences in structure are almost undetectable, yet the effect in vivo is quite drastic whereby the two steroids produce different bio-effects (Galigniana et al. 2004). It could not be clearer that minute changes in a ligand's structure can have a profound impact on activity. This is exemplified not only by mirror image odorants, as discussed above, but also in the endocrine system, where even smaller stereo-isomeric changes to a molecule make a difference to the molecule's function.

Research thus far shows that models based on shape or vibrations alone are not enough to describe and predict ligand performance. The 'swipe card' model combines these physical attributes, and does better.
This electron transfer model depends intimately on a Huang–Rhys factor, which in turn depends on the orientation of the critical oscillating mode of vibration. Where conventional inelastic electron tunnelling spectroscopy can determine the orientation of a molecule (Kirtley et al. 1976), so could olfactory-based inelastic electron tunnelling. It is interesting to hypothesize that, given the positions of atoms can certainly be detected by the electron, perhaps any flexibility in the odorant may promote or demote the electron transfer in an actuating step. Thus, we can make testable conjectures and attempt to characterize scent. Also important is that this simple ligand-perspective analysis has shown that it is pressing to consider the physics of a dynamical quantum world, where minute changes have gargantuan effects, as opposed to the useful but static textbook ball and stick models.

5. Discussion and conclusions

What we smell is very curious. When Alice in Wonderland ponders her reflected world, 'I wonder if looking glass milk is as good to drink', she could have been thinking of mirror image molecules: many odorants related by symmetry do not smell the same (Leffingwell and associates). Some odorants change in character with concentration; for example, p-menth-1-en-8-thiol, which turns from grapefruit abruptly to sulphurous (Wilson & Stevenson 2006). Some odorants smell sulphurous even when they do not contain sulphur (Turin 1996). Some odorants may share the same atoms in the same order, and differ only in the direction of one bond. Whether one hydrogen atom is axial or equatorial to the plane of the rest of the molecule, which in turn induces dramatic changes to the other atoms geometrically, may drastically alter the smell of the odorant. Some odorants are very subtle, some are very strong and some conjure vivid memories in ways our other senses cannot.

Perhaps it is curiosities such as these that cause a certain scepticism of smell, demonstrated by an attitude that it is 'all in the brain'. Let it be emphasized here that explaining how particular people interpret a smell is certainly difficult. However, this is true of all perception and feeling. What the brain does with the information obtained may vary from person to person, but, as in sight and hearing, there must be a common way we acquire the information in the first instance. While the neuroscience and psychology surrounding this area is doubtless interesting, this article confines itself to the initial processes of contact with the outside world. That leaves the piquant question: just what are the distinguishing characteristics of smell?

A scientist can determine a colour by measuring the wavelength of the responsible photons. A clothing chain store can match exactly the colours of the suit jacket bought in a London store with the trousers that may have been bought in Edinburgh, and this is done using 16 wavelengths. For smell, there may be many more, adding to the complications scientists already face, but that does not mean it is impossible to match smells. It is put here that the scientist may be able to measure smell by determining the relevant odorant oscillation. Though it is essential that more conjectures and refutations are made to justify this claim and establish the validity of the swipe card theory, at least now in olfactory science the right questions are beginning to be asked.
It is the question of how the odorant communicates with the receptor (and in general the ligand and the protein) that concerns this article, and a question which begins an interesting era of scientific discovery.

Author Profile: Jennifer C. Brookes

From 2001 to 2004, Jennifer Brookes read joint honours physics and philosophy at King's College London (KCL). In the summer of 2003 she held a short studentship at KCL in a laboratory under the supervision of Gordon Davies, examining in silicon the defects, interstitials, substitutionals, doping and impurities that can be exposed using infra-red absorption spectroscopy. For her undergraduate work she won the Perkin Elmer prize for practical physics. Wanting to learn particular theoretical techniques, she began in 2004 an MSc in physics at University College London (UCL), for which she won a Graduate School Masters Award to fund the ambition. It was then, as part of a research project, that she was introduced to future PhD supervisors Andrew Horsfield and Marshall Stoneham and the imaginative ideas of Luca Turin's theory on olfaction. Jennifer began her PhD studies at UCL in 2005, thrilled to be given the opportunity to investigate the physics of this intriguing sense with these scientists. In December 2008 Jennifer successfully defended her PhD thesis and won a departmental prize for Outstanding Postgraduate Research in Condensed Matter and Materials Physics. In 2009, she worked with Marshall Stoneham as research assistant on various projects at the London Centre for Nanotechnology at UCL. These projects began with an interest in olfaction, but have given interesting insights into problem solving in other signal transduction events. In particular, her recent work, in collaboration with Gavin Vinson, of Queen Mary, University of London, finds interesting correlations of ligand dynamics with bio-function in vivo. In June 2009 Jennifer was awarded a Sir Henry Wellcome post-doctoral fellowship that gives her a great opportunity to pursue her interests in olfaction and a wider remit of ligand (drug)–protein (receptor) interactions. Her fellowship will be hosted by UCL for the four-year duration, half of which she shall spend as a visiting fellow at the Massachusetts Institute of Technology (MIT), where she will work in Shuguang Zhang's laboratory at the Center for Biomedical Engineering. In her spare time she loves to travel, dance and swim.

I would like to thank Luca Turin, Gavin Vinson, Simon Gane, Thornton Greenland and Filio Hartoutsiou for the lively discussions that surrounded this work. I am also very grateful to Andrew Horsfield, Marshall Stoneham and Chris Howard for their input on the content. I would like to gratefully acknowledge the EPSRC and the IRC in Nanotechnology for financial support, Flexitral for a case award and of course the Wellcome Trust. Finally, for their support, I would like to thank Andrew Fisher, Gabriel Aeppli and Shuguang Zhang.
Solid-state physics

Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity), thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids, which include metals and ordinary water ice) or irregularly (an amorphous solid such as common window glass).

The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal, its defining characteristic, facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes.

The forces between the atoms in a crystal can take a variety of forms. For example, in a crystal of sodium chloride (common salt), the crystal is made up of ionic sodium and chlorine, and held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding.

The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society.[1][2] Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union.[3] In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance, and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter.[1] Today, solid-state physics is broadly considered to be the subfield of condensed matter physics that focuses on the properties of solids with regular crystal lattices.

Crystal structure and properties

An example of a simple cubic lattice.

Many properties of materials are affected by their crystal structure. This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction.
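As a side sketch of how those diffraction techniques read off crystal structure, the standard Bragg condition n·λ = 2d·sin θ gives the angles at which a beam reflects off a set of lattice planes. The plane spacing below is the textbook value for the (200) planes of rock salt; the wavelength is the common Cu K-alpha laboratory line.

```python
# Bragg diffraction angles for one family of lattice planes:
#   n * lambda = 2 * d * sin(theta)
import math

wavelength = 1.5406e-10   # m, Cu K-alpha X-rays (common lab source)
d_spacing  = 2.82e-10     # m, (200) planes of NaCl (a = 5.64 Angstrom)

n = 1
while n * wavelength <= 2 * d_spacing:          # sin(theta) must stay <= 1
    theta = math.degrees(math.asin(n * wavelength / (2 * d_spacing)))
    print(f"order n={n}: theta = {theta:.1f} degrees")
    n += 1
# prints three allowed orders (~15.9, ~33.1, ~55.0 degrees) for this geometry
```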
The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds) or artificially. Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials.

Electronic properties

Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity.

Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude–Sommerfeld model). Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals; however, it was unable to explain the existence of insulators.

The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors, semiconductors and insulators. The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory.

References

1. Martin, Joseph D. (2015). "What's in a Name Change? Solid State Physics, Condensed Matter Physics, and Materials Science". Physics in Perspective 17 (1). Retrieved 20 April 2015.
2. Hoddeson, Lillian; et al. (1992). Out of the Crystal Maze: Chapters from The History of Solid State Physics. Oxford University Press. ISBN 9780195053296.
3. Hoffmann, Dieter (2013). "Fifty Years of Physica Status Solidi in Historical Perspective". Physica Status Solidi B 250 (4). Retrieved 22 April 2015.

Further reading

• Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976).
• Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 2004).
• H. M. Rosenberg, The Solid State (Oxford University Press: Oxford, 1995).
• Steven H. Simon, The Oxford Solid State Basics (Oxford University Press: Oxford, 2013).
• Out of the Crystal Maze: Chapters from the History of Solid State Physics, ed.
Lillian Hoddeson, Ernest Braun, Jürgen Teichmann, Spencer Weart (Oxford: Oxford University Press, 1992).
• M. A. Omar, Elementary Solid State Physics (Revised Printing, Addison-Wesley, 1993).
Physicists in the US have calculated that a new class of three-body bound states should exist for atoms that experience long-range interactions, even though the interactions themselves are too weak to bind pairs of the same atoms. Such states have previously been seen in bosonic atoms affected by short-range interactions, but the team says that this latest phenomenon is very different, particularly because it can also occur for fermions. Although the researchers have not yet seen the new states, these could be revealed in experiments on ultracold atomic gases.

The idea that three atoms can form a loosely bound quantum state, even if any two of the atoms on their own cannot bind together, was first predicted by the Russian physicist Vitaly Efimov in the early 1970s. Now known as Efimov three-body bound states, they were first spotted in 2006 in a gas of caesium atoms that was cooled to just 10 nK by a team led by Hanns-Christoph Nägerl of Innsbruck University in Austria. Efimov states only occur for atoms that are bosons; that is, atoms that have integer, rather than half-integer, values of spin.

The long and the short of it

One important feature of Efimov states is that the interactions between the atoms are short ranged; in other words, they are described by an attractive potential that falls off faster than the inverse square of the distance between atoms. If the potential has a longer range, then Efimov's calculations do not apply and Efimov states do not exist. Of course, if these long-range potentials happen to be strong, then there will be an infinite number of three-body bound states, but these are not Efimov states. Until now, however, it has not been clear if three-body bound states exist when the potential is so weak that it does not bind together pairs of atoms.

What Brett Esry and colleagues at Kansas State University have found is that bound states of three atoms should occur when they are attracted to each other by a very weak inverse-square potential. The team came to this conclusion by studying numerical solutions to the three-body Schrödinger equation for three identical bosons.

Esry does it

Esry and colleagues then turned their attention to fermions and obtained a second surprising result. When the spins of all three atoms point in the same direction, three-body bound states occur even when atoms in pairs repel each other.

Esry said that it may be possible to see the new states in lab experiments. "Given that they are very weakly bound – much like Efimov states – ultracold gases are the most likely candidates for seeing them," he says. "The most likely scenario for seeing our states is in a mixture of heavy bosonic atoms interacting with light fermionic atoms." In this scenario, the fermions act as force mediators, resulting in an effective attractive inverse-square potential between the bosons. While Esry believes that the effect could reveal itself in a gas of bosonic caesium and fermionic lithium, more ideal candidate systems should have a larger mass ratio between the boson and fermion. Possible systems include ytterbium–hydrogen or erbium–hydrogen, although Esry points out that hydrogen is particularly difficult to work with, so lithium might be a better choice of fermion.

Nägerl says he "would not have expected" this result, adding that it is "surprising how rich the [inverse square] case is".
However, Nägerl believes that it may be difficult to persuade experimentalists to try to confirm the theoretical result because of the challenges associated with tooling up their labs to create and study an appropriate boson–fermion combination. The work is described in Physical Review Letters.
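For context on why weak inverse-square attraction is a special regime, it may help to recall the standard two-body textbook result (this is background, not the three-body calculation reported above). Writing the potential with a dimensionless strength c, the small-r behaviour of the radial wave function u(r) determines whether the pair binds:

```latex
V(r) = -\frac{\hbar^{2}}{2m}\,\frac{c}{r^{2}}, \qquad
u(r) \sim r^{s} \;\Rightarrow\; s(s-1) = \ell(\ell+1) - c .
```

The exponents s become complex, signalling "fall to the centre" and an unbounded ladder of two-body bound states, only when c > ℓ(ℓ+1) + 1/4 (c > 1/4 for s-waves). Below that critical strength the pair potential binds nothing, which is exactly the regime in which the new three-body states are claimed to appear.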
Interaction picture

In quantum mechanics, the interaction picture (also known as the Dirac picture) is an intermediate representation between the Schrödinger picture and the Heisenberg picture. Whereas in the other two pictures either the state vector or the operators carry time dependence, in the interaction picture both carry part of the time dependence of observables.[1] The interaction picture is useful in dealing with changes to the wave functions and observables due to interactions. Most field-theoretical calculations[2] use the interaction representation because they construct the solution to the many-body Schrödinger equation as the solution to the free-particle problem plus some unknown interaction part.

Equations that include operators acting at different times, which hold in the interaction picture, don't necessarily hold in the Schrödinger or the Heisenberg picture. This is because time-dependent unitary transformations relate operators in one picture to the analogous operators in the others.

Operators and state vectors in the interaction picture are related by a change of basis (unitary transformation) to those same operators and state vectors in the Schrödinger picture. To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts,

$H_S = H_{0,S} + H_{1,S}.$

Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that H0,S is well understood and exactly solvable, while H1,S contains some harder-to-analyze perturbation to this system. If the Hamiltonian has explicit time-dependence (for example, if the quantum system interacts with an applied external electric field that varies in time), it will usually be advantageous to include the explicitly time-dependent terms with H1,S, leaving H0,S time-independent. We proceed assuming that this is the case. If there is a context in which it makes sense to have H0,S be time-dependent, then one can proceed by replacing $e^{\pm i H_{0,S} t/\hbar}$ by the corresponding time-evolution operator in the definitions below.

State vectors

A state vector in the interaction picture is defined as[3]

$|\psi_I(t)\rangle = e^{i H_{0,S} t/\hbar} |\psi_S(t)\rangle,$

where |ψS(t)⟩ is the state vector in the Schrödinger picture.

Operators

An operator in the interaction picture is defined as

$A_I(t) = e^{i H_{0,S} t/\hbar} A_S(t) e^{-i H_{0,S} t/\hbar}.$

Note that AS(t) will typically not depend on t, and can be rewritten as just AS. It only depends on t if the operator has "explicit time dependence", for example due to its dependence on an applied, external, time-varying electric field.

Hamiltonian operator

For the operator H0 itself, the interaction picture and Schrödinger picture coincide,

$H_{0,I}(t) = e^{i H_{0,S} t/\hbar} H_{0,S} e^{-i H_{0,S} t/\hbar} = H_{0,S}.$

This is easily seen through the fact that operators commute with differentiable functions of themselves. This particular operator then can be called H0 without ambiguity. For the perturbation Hamiltonian H1,I, however,

$H_{1,I}(t) = e^{i H_{0,S} t/\hbar} H_{1,S} e^{-i H_{0,S} t/\hbar},$

where the interaction picture perturbation Hamiltonian becomes a time-dependent Hamiltonian, unless [H1,S, H0,S] = 0.
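A minimal numerical sketch of the definitions above, assuming a two-level system and ħ = 1 (the matrices are arbitrary illustrative choices): it builds A_I(t) = e^{+iH₀t} A_S e^{−iH₀t} and confirms that H₀ itself is unchanged by the transformation.

```python
# Sketch: transform an operator to the interaction picture and check
# that H0 is invariant, as stated in the text. Two-level system, hbar = 1.
import numpy as np
from scipy.linalg import expm

H0  = np.diag([0.0, 1.0])                 # solvable part, already diagonal
A_S = np.array([[0.0, 1.0],
                [1.0, 0.0]])              # some Schrodinger-picture operator

def to_interaction_picture(A, t):
    U = expm(1j * H0 * t)                 # e^{+i H0 t}
    return U @ A @ U.conj().T             # e^{+i H0 t} A e^{-i H0 t}

t = 0.7
print(np.allclose(to_interaction_picture(H0, t), H0))  # True: H0,I = H0,S
print(to_interaction_picture(A_S, t))     # off-diagonals pick up e^{+/- i t}
```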
It is possible to obtain the interaction picture for a time-dependent Hamiltonian H0,S(t) as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by H0,S(t), or more explicitly with a time-ordered exponential integral.

Density matrix

The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let ρI and ρS be the density matrix in the interaction picture and the Schrödinger picture, respectively. If there is probability pn to be in the physical state |ψn⟩, then

$\rho_I(t) = \sum_n p_n(t) |\psi_{n,I}(t)\rangle\langle\psi_{n,I}(t)| = e^{i H_{0,S} t/\hbar} \rho_S(t) e^{-i H_{0,S} t/\hbar}.$

Summary of the evolution in each picture:

Ket state
• Heisenberg: constant
• Interaction: $|\psi_I(t)\rangle = e^{i H_{0,S} t/\hbar} |\psi_S(t)\rangle$
• Schrödinger: $|\psi_S(t)\rangle = e^{-i H_S t/\hbar} |\psi_S(0)\rangle$

Observable
• Heisenberg: $A_H(t) = e^{i H_S t/\hbar} A_S e^{-i H_S t/\hbar}$
• Interaction: $A_I(t) = e^{i H_{0,S} t/\hbar} A_S e^{-i H_{0,S} t/\hbar}$
• Schrödinger: constant

Density matrix
• Heisenberg: constant
• Interaction: $\rho_I(t) = e^{i H_{0,S} t/\hbar} \rho_S(t) e^{-i H_{0,S} t/\hbar}$
• Schrödinger: $\rho_S(t) = e^{-i H_S t/\hbar} \rho_S(0) e^{i H_S t/\hbar}$

Time-evolution equations in the interaction picture

Time-evolution of states

Transforming the Schrödinger equation into the interaction picture gives

$i\hbar \frac{d}{dt} |\psi_I(t)\rangle = H_{1,I}(t) |\psi_I(t)\rangle.$

This equation is referred to as the Schwinger–Tomonaga equation.

Time-evolution of operators

If the operator AS is time independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for AI(t) is given by

$i\hbar \frac{d}{dt} A_I(t) = \left[ A_I(t), H_0 \right].$

In the interaction picture the operators evolve in time like the operators in the Heisenberg picture with the Hamiltonian H′ = H0.

Time-evolution of the density matrix

Transforming the Schwinger–Tomonaga equation into the language of the density matrix (or equivalently, transforming the von Neumann equation into the interaction picture) gives

$i\hbar \frac{d}{dt} \rho_I(t) = \left[ H_{1,I}(t), \rho_I(t) \right].$

Use of interaction picture

The purpose of the interaction picture is to shunt all the time dependence due to H0 onto the operators, thus allowing them to evolve freely, and leaving only H1,I to control the time-evolution of the state vectors. The interaction picture is convenient when considering the effect of a small interaction term, H1,S, being added to the Hamiltonian of a solved system, H0,S. By utilizing the interaction picture, one can use time-dependent perturbation theory to find the effect of H1,I, e.g., in the derivation of Fermi's golden rule, or the Dyson series in quantum field theory: in 1947, Tomonaga and Schwinger appreciated that covariant perturbation theory could be formulated elegantly in the interaction picture, since field operators can evolve in time as free fields, even in the presence of interactions, now treated perturbatively in such a Dyson series.

References

1. Albert Messiah (1966). Quantum Mechanics, North Holland, John Wiley & Sons. ISBN 0486409244; J. J. Sakurai (1994). Modern Quantum Mechanics (Addison-Wesley). ISBN 9780201539295.
2. J. W. Negele, H. Orland (1988), Quantum Many-particle Systems, ISBN 0738200522.
3. The Interaction Picture, lecture notes from New York University.

• Townsend, John S. (2000).
A Modern Approach to Quantum Mechanics, 2nd ed. Sausalito, California: University Science Books. ISBN 1-891389-13-0.
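As a closing numerical check of the Schwinger–Tomonaga equation above (assumptions: two-level system, ħ = 1, arbitrary small coupling; this is an illustration, not part of the article):

```python
# Verify numerically that i d/dt |psi_I> = H_{1,I}(t) |psi_I>, hbar = 1.
import numpy as np
from scipy.linalg import expm

H0 = np.diag([0.0, 1.0])
H1 = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])
H  = H0 + H1
psi0 = np.array([1.0, 0.0], dtype=complex)

def psi_I(t):
    # interaction-picture state: e^{+i H0 t} applied to the Schrodinger state
    return expm(1j*H0*t) @ expm(-1j*H*t) @ psi0

t, eps = 0.5, 1e-6
lhs = 1j * (psi_I(t+eps) - psi_I(t-eps)) / (2*eps)   # i d/dt |psi_I>
H1I = expm(1j*H0*t) @ H1 @ expm(-1j*H0*t)            # H_{1,I}(t)
print(np.allclose(lhs, H1I @ psi_I(t), atol=1e-6))   # True
```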
Why is the wave function complex?

I've collected some layman explanations but they are incomplete and unsatisfactory. However, in the book by Merzbacher in the initial few pages he provides an explanation that I need some help with: that the de Broglie wavelength and the wavelength of an elastic wave do not show similar properties under a Galilean transformation. He basically says that both are equivalent under a gauge transform and also, separately, by Lorentz transforms. This is accompanied by the observation that $\psi$ is not observable, so there is no "reason for it being real".

Can someone give me an intuitive prelude to what a gauge transform is and why it gives the same result as a Lorentz transformation in a non-relativistic setting? And eventually, how in this "grand scheme" the complex nature of the wave function becomes evident, in a way that a dummy like me can understand?

A wavefunction can be thought of as a scalar field (having a scalar value at every point $(r,t)$, given by $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{C}$) and also as a ray in Hilbert space (a vector). How are these two perspectives the same? (This is possibly something elementary that I am missing out on, or I am getting confused by definitions and terminology; if that is the case I am desperate for help ;)

One way I have thought about the above question is that the wave function can be equivalently written as $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{R}^2$, i.e., since a wave function is complex, the Schrödinger equation could in principle be written equivalently as coupled differential equations in two real functions which satisfy the Cauchy-Riemann conditions. I.e., if $$\psi(x,t) = u(x,t) + i v(x,t)$$ and $u_x=v_t$; $u_t = -v_x$, then we get $$\hbar \partial_t u = -\frac{\hbar^2}{2m} \partial_x^2 v + V v$$ $$\hbar \partial_t v = \frac{\hbar^2}{2m} \partial_x^2 u - V u$$ (in 1-D). If this is correct, what are the interpretations of the $u,v$, and why isn't it useful? (I am assuming that physical problems always have an analytic $\psi(r,t)$.)

Hi Yayu. I've always found interesting a paper by Leon Cohen, "Rules of Probability in Quantum Mechanics", Foundations of Physics 18, 983 (1988), which approaches this question somewhat sideways, through characteristic functions. Cohen comes from a signal processing background, where Fourier transforms are very often a natural thing to do. Fourier transforms and complex numbers are of course pretty much joined at the hip. – Peter Morgan Apr 5 '11 at 18:08

Here are a few straightforward observations that might be helpful. (1) You can describe standing waves with real-valued wavefunctions, e.g., one can almost always get away with this in low-energy nuclear structure physics. (2) The w.f. of a photon is simply the electric and magnetic fields. These are observable and real-valued. (3) If the electron w.f. was real and observable, the wavelength would have to be invariant under a Galilean boost, which would violate the de Broglie relation. (4) Even for real-valued waves, operators are complex, e.g., momentum in the classically forbidden region. – Ben Crowell May 30 '13 at 23:08

@yayu A complex analytic function is a function from complex numbers to complex numbers. And the Cauchy-Riemann equations are about such functions. To pick on x and t as if the t axis were an imaginary axis and the x axis were a real axis and y and z didn't exist is very confusing.
– Timaeus Jan 2 at 4:33

11 Answers

The accepted answer (22 votes):

More physically than a lot of the other answers here (a lot of which amount to "the formalism of quantum mechanics has complex numbers, so quantum mechanics should have complex numbers"), you can account for the complex nature of the wave function by writing it as $\Psi (x) = |\Psi (x)|e^{i \phi (x)}$, where $e^{i \phi (x)}$ is a complex phase factor. It turns out that this phase factor is not directly measurable, but has many measurable consequences, such as the double slit experiment and the Aharonov-Bohm effect.

Why are complex numbers essential for explaining these things? Because you need a representation that both doesn't induce time and space dependencies in the magnitude of $|\Psi (x)|^{2}$ (like multiplying by real phases would), AND that DOES allow for interference effects like those cited above. The most natural way of doing this is to multiply the wave amplitude by a complex phase.

Is there any wave or vibration which cannot/has-to-be described with complex number formalism? – Georg Oct 13 '11 at 10:03

But what are the differences between the sound waves and the wavefunction? Why must the second be complex, while the first also may interfere? And we may write our wavefunction through sines and cosines, so the value $\psi^{T}\psi$ also refers to the invariant in this case. – Andrew McAddams Mar 27 '14 at 5:08

@AndrewMcAddams: the difference is that the amplitude of a sound wave is an observable, while only the modulus squared is an observable in quantum mechanics. I can see the phase of a water wave, but I can only see the phase of an electron wave through interference effects. – Jerry Schirmer Mar 27 '14 at 13:15

But @Jerry Schirmer, you say using complex phase is the most natural way to model quantum behavior, is it the ONLY way? – docscience Nov 9 '14 at 15:07

@docscience: of course not -- you don't even need complex numbers to do the math of complex numbers, after all. It's just a nice, easy way to do them. And people have tried to reformulate quantum mechanics using quaternions, but I don't know how far they've really gotten; that's outside of my field of expertise. – Jerry Schirmer Nov 9 '14 at 23:14

Alternative discussion by Scott Aaronson: http://www.scottaaronson.com/democritus/lec9.html

1. From the probability interpretation postulate, we conclude that the time evolution operator $\hat{U}(t)$ must be unitary in order to keep the total probability equal to 1 at all times. Note that the wavefunction is not necessarily complex yet.

2. From the website: "Why did God go with the complex numbers and not the real numbers? Answer: Well, if you want every unitary operation to have a square root, then you have to go to the complex numbers..." $\hat{U}(t)$ must be complex if we still want a continuous transformation. This implies a complex wavefunction. Hence the operator should be $\hat{U}(t) = e^{i\hat{K}t}$ for hermitian $\hat{K}$, in order to preserve the norm of the wavefunction.

Personally I prefer Jerry Schirmer's answer because it requires fewer postulates and instead uses experimental fact directly. =) – pcr Apr 8 '11 at 4:56

I very much like your answer, as much as Jerry's. But I would add two things: firstly, the square root thing is a bit obtuse: I would put it as follows for those like me who are a bit slow on the uptake: ...(ctd)...
– WetSavannaAnimal aka Rod Vance Aug 5 '13 at 4:44

"All eigenvalues of unitary operators have unit magnitude. So the only nontrivial unitary operator with all real eigenvalues is one with a mixture of +1s and -1s as eigenvalues - say $M$ - otherwise it is the identity operator $I$. Since $U(t)$ and its eigenvalues vary continuously, $U(t)$ cannot reach $M$ from its beginning value $U(0)=I$ unless at least one eigenvalue goes through all values on the unit semicircle to reach the value -1". ...(ctd)... – WetSavannaAnimal aka Rod Vance Aug 5 '13 at 4:44

Secondly, the argument won't quite fly as is: there are nontrivial, real matrix valued unitary groups $\mathbf{SO}(N)$ (whose members have complex eigenvalues but nonetheless are real matrices) that will realise the $U(t)=\exp(i\,K\,t)$ in your argument, so quantum states can still be all real wavefunctions if they are real at $t=0$. I don't quite have a fix for this, maybe you could appeal to an experiment. It is a pretty argument, though, so I'll keep thinking. – WetSavannaAnimal aka Rod Vance Aug 5 '13 at 4:46

Among other things, the OP reprinted a page of a textbook, asking what "it is all about". I think it is impossible to answer this kind of question because what the OP's problem is all about is totally undetermined, and the people who offer their answers could be writing their own textbooks, with no results.

The wave function in quantum mechanics has to be complex because the operators satisfy things like $$[x,p] = xp-px = i\hbar.$$ It's the commutator defining the uncertainty principle. Because the left hand side is anti-Hermitian, $$(xp-px)^\dagger = p^\dagger x^\dagger - x^\dagger p^\dagger = (px-xp) = -(xp-px),$$ it follows that if it is a $c$-number, its eigenvalues have to be pure imaginary. It follows that either $x$ or $p$ or both have to have some non-real matrix elements.

Also, Schrödinger's equation $$i\hbar\,\, {\rm d/d}t |\psi\rangle = H |\psi\rangle$$ has a factor of $i$ in it. The equivalent $i$ appears in Heisenberg's equations for the operators and in the $\exp(iS/\hbar)$ integrand of Feynman's path integral. So the amplitudes inevitably have to come out as complex numbers. That's also related to the fact that eigenstates of energy and momenta etc. have the dependence on space or time etc. $$\exp(Et/i\hbar)$$ which is complex. A cosine wouldn't be enough because a cosine is an even function (and the sine is an odd function) so it couldn't distinguish the sign of the energy. Of course, the appearance of $i$ in the phase is related to the commutator at the beginning of this answer. See also: Why complex numbers are fundamental in physics.

Concerning the second question, in physics jargon, we choose to emphasize that a wave function is not a scalar field. A wave function is not an observable at all while a field is. Classically, the fields evolve deterministically and can be measured by one measurement - but the wave function cannot be measured. Quantum fields are operators - but the wave function is not. Moreover, the mathematical similarity of a wave function to a scalar field in 3+1 dimensions only holds for the description of one spinless particle, not for more complicated systems.

Concerning the last question, it is not useful to decompose complex numbers into real and imaginary parts exactly because "a complex number" is one number and not two numbers.
In particular, if we multiply a wave function by a complex phase $\exp(i\phi)$, which is only possible if we allow the wave functions to be complex and we use the multiplication of complex numbers, physics doesn't change at all. It's the whole point of complex numbers that we deal with them as with a single entity.

Thanks for answering. I have one question, not knowing about Feynman path integrals yet: I take it that what you are saying is the same thing as: if we make the transformation $\psi(r,t) = e^{i\frac{S(r,t)}{\hbar}}$ then the Schrodinger equation reduces to the classical Hamilton-Jacobi equations (if terms containing $i$ and $\hbar$ were negligible)? – yayu Apr 5 '11 at 5:35

Dear yayu, thanks for your question. First, the appearance of $\exp(iS/\hbar)$ in Feynman's approach is not a transformation of variables: the exponential is an integrand that appears in an integral used to calculate any transition amplitude. Second, $\psi$ is complex and $S$ is real, so $\psi=\exp(iS/\hbar)$ cannot be a "change of variables". You may write $\psi=\sqrt{\rho}\exp(i S/\hbar)$, in which case Schrödinger's equation may be (unnaturally) rewritten as two real equations, a continuity equation for $\rho$ and the Hamilton-Jacobi equation for $S$ with some extra quantum corrections. – Luboš Motl Apr 5 '11 at 5:39

I edited my question, removing the reprints and trying to state my problem without them. It will take some time to think about some points you made in the answer already, though. – yayu Apr 5 '11 at 6:00

This year-old question popped up unexpectedly when I signed in, and it's an interesting one. So I guess it's OK just to add an intuition-level "addendum answer" to the excellent and far more complete responses provided long ago.

Your kernel question seems to be this: "Why is the wave function complex?" My intentionally informal answer is this: Because by experimental observation, the quantum behavior of a particle far more closely resembles that of a rotating rope (e.g. a skip rope) than it does a rope that only moves up and down.

If each point in a rope marks out a circle as it moves, then a very natural and economical way to represent each point along the length of the rope is as a complex magnitude. You certainly don't have to do it that way, of course. In fact, using polar coordinates would probably be a bit more straightforward. However, the nifty thing about complex numbers is that they provide a simple and computationally efficient way to represent just such a polar coordinate system. You can get into the gory mathematical details of why, but suffice it to say that when early physicists started using complex numbers for just that purpose, their benefits continued even as the problems became far more complex. In quantum mechanics, their benefits became so overwhelming that complex numbers started being accepted pretty much as the "reality" of how to represent such mathematics.

That conceptual merging of complex quantities with actual physics can throw off your intuitions a bit. For example, if you look at a moving skip rope there is no distinction between the "real" and "imaginary" axes in the actual rotations of each point in the rope. The same is true for quantum representations: It's the phase and amplitude that counts, with other distinctions between the axes of the phase plane being a result of how you use those phases within more complicated mathematical constructions.
So, if quantum wave functions behaved only like ropes moving up and down along a single axis, we'd use real functions to represent them. But they don't. Since they instead are more like those skip ropes, it's a lot easier to represent each point along the rope with two values, one "real" and one "imaginary" (and neither in real XYZ space).

Finally, why do I claim that a single quantum particle has a wave function that resembles that of a skip rope in motion? The classic example is the particle-in-a-box problem, where a single particle bounces back and forth between the two X-axis ends of the box. Such a particle forms one, two, three, or more regions (or anti-nodes) in which the particle is more likely to be found. If you borrow Y and Z (perpendicular to the length of the box) to represent the real and imaginary amplitudes of the particle wave function at each point along X, it's interesting to see what you get. It looks exactly like a skip-rope in action, one in which the regions where the electron is most likely to be found correspond one-for-one to the one, two, three, or more loops of the moving skip rope. (Fancy skip-ropers know all about higher numbers of loops.)

The analogy doesn't stop there. The volume enclosed by all the loops, normalized to 1, tells you exactly what the odds are on finding the electron along any one section of the box in the X axis. Tunneling is represented by the electron appearing on both sides of the unmoving nodes of the rope, those nodes being regions where there is no chance of finding the electron. The continuity of the rope from point to point captures a rough approximation of the differential equations that assign high energy costs to sharp bends in the rope. The absolute rotation speed of the rope represents the total mass-energy of the electron, or at least can be used that way.

Finally, and a bit more complicated, you can break those simple loops down into other wave components by using the Fourier transform. Any simple loop can also be viewed as two helical waves (like whipping a hose around to free it) going in opposite directions. These two components represent the idea that a single-loop wave function actually includes helical representations of the same electron going in opposite directions, at the same time. "At the same time" is highly characteristic of quantum functions in general, since such functions always contain multiple "versions" of the location and motions of the single particle that they represent. That is really what a wave function is, in fact: a summation of the simple waves that represent every likely location and momentum situation that the particle could be in.

Full quantum mechanics is far more complex than that, of course. You must work in three spatial dimensions, for one thing, and you have to deal with composite probabilities of many particles interacting. That drives you into the use of more abstract concepts such as Hilbert spaces. But with regards to the question of "why complex instead of real?", the simple example of the similarity of quantum functions to rotating ropes still holds: All of these more complicated cases are complex because, at their heart, every point within them behaves as though it is rotating in an abstract space, in a way that keeps it synchronized with immediately neighboring points in space.

I'm not sure whether the OP is aware of this, but it emphasises your comment "it doesn't have to be this way".
Real matrices of the form $\left(\begin{array}{cc}a&-b\\b&a\end{array}\right) = I a + i b$, where now $I$ is the $2\times2$ identity and $i= \left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$, form a field wholly isomorphic to $\mathbb{C}$. In particular, a phase delay corresponds to multiplication by the rotation matrix $\exp\left(-i\,\omega\,t\right)=\left(\begin{array}{cc}\cos\omega t&-\sin \omega t\\ \sin\omega t&\cos\omega t\end{array}\right) = I \cos\omega t + i\sin\omega t$. – WetSavannaAnimal aka Rod Vance Jul 29 '13 at 0:47

Rod, yes. A similar trick can be done for quaternions. I'm actually a quaternion bigot: I like to think of many of the complex numbers used in physics as really being overly generalized quaternions, ones in which our built-in 3D bias keeps us from noticing that the imaginary axis of a complex number is actually just a quaternion unit pointer in XYZ space. You lose a lot of representation richness by doing that, since for example you inadvertently abandon the intriguing option of treating changes in the quaternion-view i orientation as a local symmetry of XYZ space. – Terry Bollinger Jul 31 '13 at 22:36

Although I guess from the OP's point of view, it would be wrong to call it a trick - there are many ways to encode the kinds of properties complex numbers do and this one IS complex numbers (an isomorphic field). As for quaternions, yes, it's a shame that Hamilton, Clifford and Maxwell never held sway over Heaviside. – WetSavannaAnimal aka Rod Vance Aug 2 '13 at 0:05

If the wave function were real, performing a Fourier transform in time would lead to pairs of positive-negative energy eigenstates. Negative energies with no lower bound are incompatible with stability. So, complex wave functions are needed for stability.

No, the wave function is not a field. It only looks like it for a single particle, but for N particles, it is a function in 3N+1 dimensional configuration space.

This question has been asked since Dirac. In fact Dirac's answer is available for $100 from JSTOR, in a paper by Dirac from, I think, 1935(?).

A recent answer from James Wheeler is that the zero-signature Killing metric of a new, real-valued, 8-dimensional gauging of the conformal group accounts for the complex character of quantum mechanics. Reference: Why Quantum Mechanics is Complex, James T. Wheeler, arXiv:hep-th/9708088.

EDIT add: My answer is GA-centric, and after the comments I felt the need to say some words about the beauty of Geometric Algebra. On the 2nd page of the Oersted Medal Lecture (link below): "(3) GA reduces 'grad, div, curl and all that' to a single vector derivative that, among other things, combines the standard set of four Maxwell equations into a single equation and provides new methods to solve it." Geometric Algebra (GA) encompasses in a single framework all of this: synthetic geometry, coordinate geometry, complex variables, quaternions, vector analysis, matrix algebra, spinors, tensors, differential forms. It is one language for all physics. Probably Schrödinger, Dirac, Pauli, etc. would have used GA if it existed at the time.

To the question "WHY is the wave function complex?", this answer is not helpful: "because the wave function is complex (or has an i in it)". We have to try something different, not written in your book. In the abstracts I bolded the evidence that the papers are about the WHYs. If someone begs a fish I'll try to give a fishing rod. I'm an old IT analyst who would be unemployed if I had not evolved.
Physics is evolving too. end EDIT

Recently I've found Geometric Algebra: Grassmann, Clifford, and David Hestenes. I will not detail here the subject of the OP because each one of us needs to follow paths, find new ideas and take time to read. I will only provide some paths with part of the abstracts:

Overview of Geometric Algebra in Physics.

Oersted Medal Lecture 2002: Reforming the Mathematical Language of Physics (a good start). In this lecture Hestenes is arguing for a reform of the way in which mathematics is taught to physicists. He asserts that using Geometric Algebra will make it easier to understand the fundamentals of physics, because the mathematical language will be clearer and more uniform.

Hunting for Snarks in Quantum Mechanics. Abstract: A long-standing debate over the interpretation of quantum mechanics has centered on the meaning of Schroedinger's wave function ψ for an electron. Broadly speaking, there are two major opposing schools. On the one side, the Copenhagen school (led by Bohr, Heisenberg and Pauli) holds that ψ provides a complete description of a single electron state; hence the probability interpretation of ψψ* expresses an irreducible uncertainty in electron behavior that is intrinsic in nature. On the other side, the realist school (led by Einstein, de Broglie, Bohm and Jaynes) holds that ψ represents a statistical ensemble of possible electron states; hence it is an incomplete description of a single electron state. I contend that the debaters have overlooked crucial facts about the electron revealed by Dirac theory. In particular, analysis of electron zitterbewegung (first noticed by Schroedinger) opens a window to particle substructure in quantum mechanics that explains the physical significance of the complex phase factor in ψ. This led to a testable model for particle substructure with surprising support by recent experimental evidence. If the explanation is upheld by further research, it will resolve the debate in favor of the realist school. I give details. The perils of research on the foundations of quantum mechanics have been foreseen by Lewis Carroll in The Hunting of the Snark!

Abstract: A reformulation of the Dirac theory reveals that iℏ has a geometric meaning relating it to electron spin. This provides the basis for a coherent physical interpretation of the Dirac and Schrödinger theories wherein the complex phase factor exp(−iϕ/ℏ) in the wave function describes electron zitterbewegung, a localized, circular motion generating the electron spin and magnetic moment. Zitterbewegung interactions also generate resonances which may explain quantization, diffraction, and the Pauli principle.

Universal Geometric Calculus (a course), and follow: III. Implications for Quantum Mechanics; The Kinematic Origin of Complex Wave Functions; Clifford Algebra and the Interpretation of Quantum Mechanics; The Zitterbewegung Interpretation of Quantum Mechanics; Quantum Mechanics from Self-Interaction; Zitterbewegung in Radiative Processes; On Decoupling Probability from Kinematics in Quantum Mechanics; Zitterbewegung Modeling; Space-Time Structure of Weak and Electromagnetic Interactions.

To keep more references together: Geometric Algebra and its Application to Mathematical Physics (Chris Doran's thesis). (What led me to this amazing path was a paper by Joy Christian, 'Disproof of Bell's Theorem'.)

'Bon voyage', 'good journey', 'boa viagem'.

Why the Down votes?
–  Helder Velez Apr 5 '11 at 15:03
@Helder The downvotes are not from me, but I think your answer doesn't much address the question, so I think they are justifiable just on that count. More significantly, citing Hestenes is problematic unless you are very specific about what you are taking from him, in which case you could as easily cite someone else who does not make such inflated claims. Too many of Hestenes' claims are not justifiable enough, and all of them have to be read critically to find what is interesting, which is time-consuming. Keep your wits about you as you follow the Hestenes path. –  Peter Morgan Apr 5 '11 at 18:26
@Helder; I have a great deal of respect for Dr. Hestenes' work, send me an email if you want to talk about it. His work directly bears on the complex nature of QM. I'll +1 your answer when I get my votes back (I always use them up). –  Carl Brannen Apr 6 '11 at 1:57
@Helder Velez I am one of your downvoters, as I saw it as a very broad answer with lots of references and abstracts reproduced which have little to do with the specific context in which I tried to frame my question. Also, I am not interested in the interpretational aspects of quantum mechanics at all, at my stage. –  yayu Apr 6 '11 at 4:52
@Carl Brannen Do you upvote an answer just because it cites the work of someone you respect, despite the fact that it might be of little relevance to the question? –  yayu Apr 6 '11 at 4:57
From the Heisenberg uncertainty principle, if we know a great deal about the momentum of a particle we can know very little about its position. This suggests that our mathematics should have a quantum state that corresponds to a plane wave $\psi(x)$ with a precisely known momentum but entirely unknown position. A natural definition for the probability of finding the particle at the position $x$ is $|\psi(x)|^2$. This definition makes sense for both a real and a complex wave function. For a plane wave to have no position information is to imply that $|\psi(x)|$ does not depend on position and so is constant. Therefore we must have $\psi$ complex; otherwise there would be no way to store the information "what is the momentum of the particle". (A numerical sketch of this point appears at the end of this thread.) So in my view, the complex nature of wave functions arises from the interaction between the necessity for (1) a probability interpretation, (2) the Heisenberg uncertainty principle, and (3) plane waves.
Please clear some doubts for me. 1. The probability interpretation: I think it followed since the wave function was complex and physical meaning could only be attributed to a real value. If we make the construction $\psi^*\psi$ then we arrive at the continuity equation from the Schrödinger equation, and the interpretation can now be made that the quantity $\rho=\psi^*\psi$ is the probability density. Starting from an interpretation like $\rho=\psi^*\psi$, I do not see any way to work backwards and convincingly argue that the amplitude $\psi$ must be complex. –  yayu Apr 6 '11 at 18:09
The uncertainty relations follow from the identification of the free particle with a plane wave. I am guessing your answer points in the right direction. I am working on (2), as suggested in Lubos' answer as well, trying to get why $\psi$ is complex valued as a consequence; however, I fail to see how anything except (2) is relevant for showing it conclusively.
–  yayu Apr 6 '11 at 18:16
@yayu: see my post--there are two essential experimental facts: 1) phase is not directly measurable; 2) interference effects happen in a broad range of quantum materials. It's hard to reconcile these things without using complex numbers. –  Jerry Schirmer Apr 7 '11 at 3:39
From the physical point of view, the wave function needs to be complex in order to explain the double-slit experiment, as mentioned in The Feynman Lectures on Physics, Vol. III. I suggest you review chapters 1 and 3, where it is explained why $\psi$ has to be considered of a probabilistic nature, according to the interference pattern, because "something" has to behave like a wave at the time of crossing through "each one" of the slits. Furthermore, Bohm holds that the path of the particle (electron, photon, etc.) can be considered classical, so as a consequence you may watch this one, as it follows the rules already known at the macroscopic scale... in that sense, you can see the next reference or this one to consider the covariance of the laws of mechanics.
The wave function is formulated as a complex quantity to emphasize that one cannot measure the amplitude and the phase of the wave function simultaneously. If one were to formulate the Schrödinger equation as a system of coupled differential equations, as you do in point 3, this feature of the wave function would not be manifest (see also my answer here: http://physics.stackexchange.com/a/83219/1648).
Dear asmaier, it is usually frowned upon to directly copy-paste identical answers. (The problem is if everybody starts to copy-paste identical answers en masse.) In general in such situations, please consider one of the following options: (i) Delete three of your answers. (ii) Flag for duplicate posts and delete three of your answers. (iii) If you think the four posts are not duplicates, then personalize each answer to address the four different specific questions. –  Qmechanic Nov 2 '13 at 23:13
Dear Qmechanic, isn't it also frowned upon to copy-paste identical comments? ;-) However, I admit that my answers were too similar. So I tried to follow your suggestion (iii) and personalized my answers to address the specific questions in a better way. However, I still believe the quote from Dirac is very relevant and important, so I will refer to it in every answer. –  asmaier Nov 3 '13 at 14:50
Since both the amplitude and the wavelength cannot be known with precision simultaneously, I think of this as meaning that there is some missing information that must still be dealt with continuously. That information is conveniently stored in the imaginary part of a complex number.
This is not nearly substantiated enough to be an answer, and besides, I'm quite sure that's not a good way to think about it. –  ACuriousMind Sep 10 '14 at 0:17
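To make the 2×2 matrix representation of complex numbers from the first comment in this thread concrete, here is a minimal numpy sketch; the names I, J and to_matrix are mine, introduced purely for illustration:

```python
import numpy as np

# The 2x2 real-matrix representation discussed above:
# a + bi  <->  [[a, -b], [b, a]].  I is the identity, J plays the role of i.
I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def to_matrix(z: complex) -> np.ndarray:
    """Map a complex number to its 2x2 real-matrix representative."""
    return z.real * I + z.imag * J

# J squares to -I, just as i^2 = -1:
assert np.allclose(J @ J, -I)

# Multiplying representatives matches complex multiplication:
z, w = 1 + 2j, 3 - 1j
assert np.allclose(to_matrix(z) @ to_matrix(w), to_matrix(z * w))

# A phase factor exp(-i*omega*t) becomes a rotation matrix:
omega, t = 2.0, 0.3
R = np.cos(omega * t) * I - np.sin(omega * t) * J
assert np.allclose(R, to_matrix(np.exp(-1j * omega * t)))
print("2x2 real matrices reproduce complex arithmetic exactly")
```

The point of the exercise: nothing forces the symbol i on us; any structure with the same algebra, such as these rotation matrices, works identically.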
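The plane-wave argument made earlier in this thread (constant modulus, yet recoverable momentum) can also be checked numerically. A small sketch, assuming ħ = 1 and an arbitrary momentum p; all variable names are illustrative:

```python
import numpy as np

# A complex plane wave has constant modulus (no position information)
# yet still encodes its momentum in the phase.
hbar = 1.0
p = 2.5                                   # momentum (arbitrary units)
x = np.linspace(0.0, 10.0, 1001)

psi_complex = np.exp(1j * p * x / hbar)   # plane wave
psi_real    = np.cos(p * x / hbar)        # real wave of the same wavelength

# |psi|^2 for the complex wave is flat -- no preferred position:
print(np.ptp(np.abs(psi_complex)**2))     # ~0 (constant probability density)
# ...whereas the real wave's modulus oscillates, singling out positions:
print(np.ptp(np.abs(psi_real)**2))        # ~1 (density varies with x)

# The momentum is still stored in the phase: p = hbar * d(phase)/dx
phase = np.unwrap(np.angle(psi_complex))
p_recovered = hbar * np.gradient(phase, x)
print(p_recovered[500])                   # ~2.5, away from the endpoints
```

A real wave of the same wavelength cannot do both jobs at once: its modulus necessarily oscillates, so it leaks position information while storing the momentum.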
Atom
(Figure: Helium atom ground state. An illustration of the helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case. The black bar is one angstrom (10⁻¹⁰ m or 100 pm).)
Smallest recognized division of a chemical element.
Mass range: 1.67×10⁻²⁷ to 4.52×10⁻²⁵ kg
Electric charge: zero (neutral), or ion charge
Diameter range: 62 pm (He) to 520 pm (Cs)
Components: electrons and a compact nucleus of protons and neutrons
An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element.[1] Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are very small; typical sizes are around 100 pm (a ten-billionth of a meter, in the short scale).[2] However, atoms do not have well-defined boundaries, and there are different ways to define their size that give different but close values. Atoms are small enough that classical physics gives noticeably incorrect results. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict their behavior.
Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons (none in hydrogen-1). Protons and neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the numbers of protons and electrons are equal, the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion.
The electrons of an atom are attracted to the protons in the atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by a different force, the nuclear force, which is usually stronger than the electromagnetic force repelling the positively charged protons from one another. Under certain circumstances the repelling electromagnetic force becomes stronger than the nuclear force, and nucleons can be ejected from the nucleus, leaving behind a different element: nuclear decay resulting in nuclear transmutation.
The number of protons in the nucleus defines to what chemical element the atom belongs: for example, all copper atoms contain 29 protons. The number of neutrons defines the isotope of the element.[3] The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature, and is the subject of the discipline of chemistry.
History of atomic theory
Main article: Atomic theory
Atoms in philosophy
Main article: Atomism
The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures such as Greece and India. The word "atom" was coined by ancient Greek philosophers.
However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect. They also could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists, when the blossoming science of chemistry produced discoveries that only the concept of atoms could explain.
First evidence-based theory
In the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers (the law of multiple proportions). For instance, there are two types of tin oxide: one is 88.1% tin and 11.9% oxygen and the other is 78.7% tin and 21.3% oxygen (tin(II) oxide and tin dioxide respectively). This means that 100 g of tin will combine either with 13.5 g or 27 g of oxygen (11.9/88.1 × 100 ≈ 13.5, and 21.3/78.7 × 100 ≈ 27). 13.5 and 27 form a ratio of 1:2, a ratio of small whole numbers. This common pattern in chemistry suggested to Dalton that elements react in whole-number multiples of discrete units—in other words, atoms. In the case of the tin oxides, one tin atom will combine with either one or two oxygen atoms.[4]
Dalton also believed atomic theory could explain why water absorbs different gases in different proportions. For example, he found that water absorbs carbon dioxide far better than it absorbs nitrogen.[5] Dalton hypothesized this was due to the differences in mass and complexity of the gases' respective particles. Indeed, carbon dioxide molecules (CO₂) are heavier and larger than nitrogen molecules (N₂).
Brownian motion
In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905 Albert Einstein proved the reality of these molecules and their motions by producing the first statistical-physics analysis of Brownian motion.[6][7][8] French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory.[9]
Discovery of the electron
The physicist J. J. Thomson measured the mass of cathode rays, showing they were made of particles, but were around 1800 times lighter than the lightest atom, hydrogen. Therefore, they were not atoms, but a new particle, the first subatomic particle to be discovered, which he originally called "corpuscle" but which was later named electron, after particles postulated by George Johnstone Stoney in 1874. He also showed they were identical to particles given off by photoelectric and radioactive materials.[10] It was quickly recognized that they are the particles that carry electric currents in metal wires, and carry the negative electric charge within atoms. Thomson was given the 1906 Nobel Prize in Physics for this work. Thus he overturned the belief that atoms are the indivisible, ultimate particles of matter.[11] Thomson also incorrectly postulated that the low-mass, negatively charged electrons were distributed throughout the atom in a uniform sea of positive charge. This became known as the plum pudding model.
Discovery of the nucleus
In 1909, Hans Geiger and Ernest Marsden, under the direction of Ernest Rutherford, bombarded a metal foil with alpha particles to observe how they scattered.
They expected all the alpha particles to pass straight through with little deflection, because Thomson's model said that the charges in the atom are so diffuse that their electric fields could not affect the alpha particles much. However, Geiger and Marsden spotted alpha particles being deflected by angles greater than 90°, which was supposed to be impossible according to Thomson's model. To explain this, Rutherford proposed that the positive charge of the atom is concentrated in a tiny nucleus at the center of the atom.[12]
Discovery of isotopes
While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table.[13] The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J.J. Thomson created a technique for separating atom types through his work on ionized gases, which subsequently led to the discovery of stable isotopes.[14]
Bohr model
Main article: Bohr model
(Figure: The Bohr model of the atom, with an electron making instantaneous "quantum leaps" from one orbit to another. This model is obsolete.)
In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to the absorption or radiation of a photon.[15] This quantization was used to explain why the electron orbits are stable (given that, normally, accelerating charges, including those in circular motion, lose kinetic energy, which is emitted as electromagnetic radiation; see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra.[16]
Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius Van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today.[17]
Chemical bonding explained
Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons.[18] As the chemical properties of the elements were known to largely repeat themselves according to the periodic law,[19] in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus.[20]
Further developments in quantum physics
The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of the atom. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split based on the direction of an atom's angular momentum, or spin. As this direction is random, the beam could be expected to spread into a line. Instead, the beam was split into two parts, depending on whether the atomic spin was oriented up or down.[21] In 1924, Louis de Broglie proposed that all particles behave to an extent like waves.
In 1926, Erwin Schrödinger used this idea to develop a mathematical model of the atom that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa.[22] This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed.[23][24]
Discovery of the neutron
The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule.[25] The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus.[26]
Fission, high-energy physics and condensed matter
In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product.[27] A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result was the first experimental nuclear fission.[28][29] In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized.[30]
In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies.[31] Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed, and so far it has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions.[32]
Subatomic particles
Main article: Subatomic particle
Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron; all three are fermions. However, the hydrogen-1 atom has no neutrons and the hydron ion has no electrons.
The electron is by far the least massive of these particles at 9.11×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques.[33] It is the lightest particle with a positive rest mass yet measured.
Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details.
Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10⁻²⁷ kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton.
Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron,[34] or 1.6749×10⁻²⁷ kg, the heaviest of the three constituent particles, although this mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined.[35] The neutron was discovered in 1932 by the English physicist James Chadwick.
In the Standard Model of physics, electrons are truly elementary particles with no internal structure. However, both protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles.[36][37]
The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.[36][37]
Nucleus
Main article: Atomic nucleus
(Figure: The binding energy needed for a nucleon to escape the nucleus, for various isotopes.)
All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A fm, where A is the total number of nucleons (a numerical sketch of this formula appears below).[38] This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.[39]
Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.[40]
The proton, the electron, and the neutron are classified as fermions.
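As an aside, here is a minimal numerical sketch of the empirical radius formula just quoted, r ≈ 1.07 ∛A fm; the isotope nucleon counts below are standard values:

```python
# Nuclear radius from the empirical formula r = 1.07 * A^(1/3) fm,
# where A is the total nucleon count of the isotope.
def nuclear_radius_fm(A: int) -> float:
    return 1.07 * A ** (1.0 / 3.0)

for name, A in [("hydrogen-1", 1), ("helium-4", 4),
                ("iron-56", 56), ("lead-208", 208), ("uranium-238", 238)]:
    print(f"{name:12s} A={A:3d}  r = {nuclear_radius_fm(A):5.2f} fm")
# Even uranium's nucleus (~6.6 fm) is tens of thousands of times smaller
# than the ~10^5 fm radius of the atom as a whole.
```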
Fermions obey the Pauli exclusion principle, which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. However, a proton and a neutron are allowed to occupy the same quantum state.[41]
For atoms with low atomic numbers, a nucleus that has more neutrons than protons tends to drop to a lower energy state through radioactive decay so that the neutron–proton ratio is closer to one. However, as the atomic number increases, a higher proportion of neutrons is required to offset the mutual repulsion of the protons. Thus, there are no stable nuclei with equal proton and neutron numbers above atomic number Z = 20 (calcium), and as Z increases, the neutron–proton ratio of stable isotopes increases.[41] The stable isotope with the highest neutron–proton ratio is lead-208 (about 1.5).
The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus.[42] Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high-energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.[43][44]
If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate.[45]
The fusion of two nuclei that create larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together.[46] It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means a fusion process producing a nucleus with an atomic number higher than about 26, and an atomic mass higher than about 60, is endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.[41]
Electron cloud
(Figure: A potential well, showing, according to classical mechanics, the minimum energy V(x) needed to reach each position x. Classically, a particle with energy E is constrained to a range of positions between x₁ and x₂.)
The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force.
This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations.
Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured.[47] Only a discrete (or quantized) set of these orbitals exists around the nucleus, as other possible wave patterns rapidly decay into a more stable form.[48] Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation.[49]
(Figure: How atoms are constructed from electron orbitals and link to the periodic table.)
The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom,[50] compared to 2.23 million eV for splitting a deuterium nucleus.[51] Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.[52]
Nuclear properties
By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form,[53] also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element ununoctium.[54] All known isotopes of elements with atomic numbers greater than 82 are radioactive.[55][56]
About 339 nuclides occur naturally on Earth,[57] of which 254 (about 75%) have not been observed to decay, and are referred to as "stable isotopes". However, only 90 of these nuclides are stable to all decay, even in theory. Another 164 (bringing the total to 254) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 80 million years, and are long-lived enough to be present from the birth of the solar system. This collection of 288 nuclides is known as the primordial nuclides.
Finally, an additional 51 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or else as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).[58][note 1]
For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.[59][page needed]
Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 254 known stable nuclides, only four have both an odd number of protons and an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd–odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects.[59][page needed]
Mass
Main articles: Atomic mass and mass number
The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having the dimension of mass), because it expresses a count. An example of the use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons).
The actual mass of an atom at rest is often expressed using the unified atomic mass unit (u), also called the dalton (Da). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg.[60] Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 u.[61] The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 atom is roughly 14 u). However, this number will not be exactly an integer except in the case of carbon-12 (see below).[62] The heaviest stable atom is lead-208,[55] with a mass of 207.9766521 u.[63]
As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram.
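A minimal sketch of the mole bookkeeping just described, assuming the rounded value 6.022×10²³ for the Avogadro constant:

```python
# Mole arithmetic: converting a macroscopic mass into an atom count.
N_A = 6.022e23            # Avogadro constant, atoms per mole (rounded)
M_carbon12 = 12.0e-3      # kg per mole of carbon-12 (exact, by definition)

mass = 1.0e-3             # 1 gram of carbon-12
atoms = mass / M_carbon12 * N_A
print(f"{atoms:.3e} atoms in 1 g of carbon-12")          # ~5.0e22

# Equivalently, the mass of a single atom:
print(f"{M_carbon12 / N_A:.3e} kg per carbon-12 atom")   # ~1.99e-26 kg
```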
Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 u, and so a mole of carbon-12 atoms weighs exactly 0.012 kg.[60]
Shape and size
Main article: Atomic radius
Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus.[2] However, this assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin.[64] On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right).[65] Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.[66]
When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of the outer-shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have recently been shown to occur for sulfur ions[67] and chalcogen ions[68] in pyrite-type compounds.
Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope. However, individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width.[69] A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice the number of hydrogen atoms.[70] A single-carat diamond with a mass of 2×10⁻⁴ kg contains about 10 sextillion (10²²) atoms of carbon.[note 2] If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple.[71]
Radioactive decay
Main article: Radioactive decay
(Figure: This diagram shows the half-life (T½) of various isotopes with Z protons and N neutrons.)
The most common forms of radioactive decay are:[73][74]
• Alpha decay: this process occurs when the nucleus emits an alpha particle, a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number.
• Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron-to-proton transition is accompanied by the emission of an electron and an antineutrino, while the proton-to-neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron.
• Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay.
Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission.
Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth (see the short numerical check below).[72]
Magnetic moment
Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.[75]
In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap, and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field.[76][77]
The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium. However, for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging.[78][79]
Energy levels
(Figure: These electron energy levels (not to scale) are sufficient for the ground states of atoms up to cadmium (5s² 4d¹⁰) inclusive. Note that even the top of the diagram is lower than an unbound electron state.)
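Before moving on to energy levels, here is the short numerical check of the half-life rule promised above; carbon-14's half-life of 5,730 years is used as the example value:

```python
# Each half-life multiplies the surviving fraction of an isotope by 1/2.
def remaining_fraction(t: float, half_life: float) -> float:
    return 0.5 ** (t / half_life)

T_half = 5730.0   # carbon-14's half-life, in years
for n in range(5):
    t = n * T_half
    print(f"after {n} half-lives ({t:7.0f} y): "
          f"{remaining_fraction(t, T_half):.4f}")
# Prints 1.0000, 0.5000, 0.2500, 0.1250, 0.0625 -- matching the
# "only 25% after two half-lives" rule stated above.
```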
The potential energy of an electron in an atom is negative relative to infinity: its dependence on position reaches its minimum (the greatest absolute value) inside the nucleus, and vanishes as the distance from the nucleus goes to infinity, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can only occupy a set of states centered on the nucleus, and each state corresponds to a specific energy level; see the time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. a stationary state, while an electron transition to a higher level results in an excited state.[80] The electron's energy rises when n increases, because the (average) distance to the nucleus increases. The dependence of the energy on ℓ is caused not by the electrostatic potential of the nucleus, but by the interaction between electrons.
For an electron to transition between two different states, e.g. from the ground state to the first excited state, it must absorb or emit a photon at an energy matching the difference in energy between those levels, as in the Bohr model; the energies can be precisely calculated from the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties.
The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum.[81] Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons, and other factors.[82]
(Figure: An example of absorption lines in a spectrum.)
When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a direction that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.[83]
Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron.[84] When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line.
The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines.[85] The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.[86]
If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.[87]
Valence and bonding behavior
Valency is the combining power of an element. It is equal to the number of hydrogen atoms that an atom can combine with, or displace, in forming compounds.[88] The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells.[89] For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. However, many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.[90]
States of matter
Main articles: State of matter and Phase (matter)
(Figure: Snapshots illustrating the formation of a Bose–Einstein condensate.)
Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas.[93] Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond.[94] Gaseous allotropes exist as well, such as dioxygen and ozone.
At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale.[95][96] This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior.[97]
Identification
(Figure: Scanning tunneling microscope image showing the individual atoms making up this gold (100) surface. The surface atoms deviate from the bulk crystal structure and arrange in columns several atoms wide with pits between them; see surface reconstruction.)
Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms.
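As a concrete example of such quantized transitions, here is a sketch computing hydrogen's four visible Balmer lines from the Rydberg formula, 1/λ = R(1/2² − 1/n²), assuming the standard Rydberg constant for hydrogen:

```python
# Visible (Balmer) emission lines of hydrogen: transitions n -> 2.
R = 1.0968e7  # Rydberg constant for hydrogen, in m^-1

for n in (3, 4, 5, 6):
    inv_wavelength = R * (1.0 / 2**2 - 1.0 / n**2)   # in m^-1
    wavelength_nm = 1.0e9 / inv_wavelength
    print(f"n={n} -> 2 : {wavelength_nm:6.1f} nm")
# ~656 nm (red), ~486 nm (blue-green), ~434 nm and ~410 nm (violet):
# the characteristic colors of a hydrogen discharge lamp.
```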
These colors can be replicated using a gas-discharge lamp containing the same element.[102] Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.[103]
Origin and current state
Atoms form about 4% of the total energy density of the observable Universe, with an average density of about 0.25 atoms/m³.[104] Within a galaxy such as the Milky Way, atoms have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³.[105] The Sun is believed to be inside the Local Bubble, a region of highly ionized gas, so the density in the solar neighborhood is only about 10³ atoms/m³.[106] Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's atoms are concentrated inside stars, and the total mass of atoms forms about 10% of the mass of the galaxy.[107] (The remainder of the mass is an unknown dark matter.)[108]
Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron.[109][110][111]
The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei.[112]
Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron;[113] see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation.[114] This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae through the r-process and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei.[115] Elements such as lead formed largely through the radioactive decay of heavier elements.[116]
Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating.[117][118] Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.[119]
There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they results of radioactive decay.
Carbon-14 is continuously generated by cosmic rays in the atmosphere.[120] Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions.[121][122] Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth.[123][124] Transuranic elements have radioactive lifetimes shorter than the current age of the Earth[125] and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust.[126] Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.[127]
The Earth contains approximately 1.33×10⁵⁰ atoms.[128] Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals.[129][130] This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.[131]
Rare and theoretical forms
Superheavy elements
Main article: Transuranium element
While isotopes with atomic numbers higher than lead (82) are known to be radioactive, an "island of stability" has been proposed for some elements with atomic numbers above 103. These superheavy elements may have a nucleus that is relatively stable against radioactive decay.[132] The most likely candidate for a stable superheavy atom, unbihexium, has 126 protons and 184 neutrons.[133]
Exotic matter
Main article: Exotic matter
Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature.[134][135] However, in 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva.[136][137]
Notes
1. ^ For more recent updates see Interactive Chart of Nuclides (Brookhaven National Laboratory).
2. ^ A carat is 200 milligrams. By definition, carbon-12 has 0.012 kg per mole. The Avogadro constant defines 6×10²³ atoms per mole.
References
1. ^ "Atom". Compendium of Chemical Terminology (IUPAC Gold Book) (2nd ed.). IUPAC. Retrieved 2015-04-25.
2. ^ a b Ghosh, D. C.; Biswas, R. (2002). "Theoretical calculation of Absolute Radii of Atoms and Ions. Part 1. The Atomic Radii". Int. J. Mol. Sci. 3: 87–113. doi:10.3390/i3020087.
3. ^ Leigh, G. J., ed. (1990). International Union of Pure and Applied Chemistry, Commission on the Nomenclature of Inorganic Chemistry, Nomenclature of Inorganic Chemistry – Recommendations 1990. Oxford: Blackwell Scientific Publications. p. 35. ISBN 0-08-022369-9.
An atom is the smallest unit quantity of an element that is capable of existence whether alone or in chemical combination with other atoms of the same or other elements.
6. ^ Einstein, Albert (1905). "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" (PDF). Annalen der Physik (in German) 322 (8): 549–560. Bibcode:1905AnP...322..549E. doi:10.1002/andp.19053220806. Retrieved 4 February 2007.
7. ^ Mazo, Robert M. (2002). Brownian Motion: Fluctuations, Dynamics, and Applications. Oxford University Press. pp. 1–7. ISBN 0-19-851567-7. OCLC 48753074.
8. ^ Lee, Y.K.; Hoon, K. (1995). "Brownian Motion". Imperial College. Archived from the original on 18 December 2007. Retrieved 18 December 2007.
9. ^ Patterson, G. (2007). "Jean Perrin and the triumph of the atomic doctrine". Endeavour 31 (2): 50–53. doi:10.1016/j.endeavour.2007.05.003. PMID 17602746.
10. ^ Thomson, J. J. (August 1901). "On bodies smaller than atoms". The Popular Science Monthly (Bonnier Corp.): 323–335. Retrieved 2009-06-21.
11. ^ "J.J. Thomson". Nobel Foundation. 1906. Retrieved 20 December 2007.
12. ^ Rutherford, E. (1911). "The Scattering of α and β Particles by Matter and the Structure of the Atom" (PDF). Philosophical Magazine 21 (125): 669–88. doi:10.1080/14786440508637080.
13. ^ "Frederick Soddy, The Nobel Prize in Chemistry 1921". Nobel Foundation. Retrieved 18 January 2008.
14. ^ Thomson, Joseph John (1913). "Rays of positive electricity". Proceedings of the Royal Society A 89 (607): 1–20. Bibcode:1913RSPSA..89....1T. doi:10.1098/rspa.1913.0057.
15. ^ Stern, David P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA/Goddard Space Flight Center. Retrieved 20 December 2007.
16. ^ Bohr, Niels (11 December 1922). "Niels Bohr, The Nobel Prize in Physics 1922, Nobel Lecture". Nobel Foundation. Retrieved 16 February 2008.
17. ^ Pais, Abraham (1986). Inward Bound: Of Matter and Forces in the Physical World. New York: Oxford University Press. pp. 228–230. ISBN 0-19-851971-0.
19. ^ Scerri, Eric R. (2007). The periodic table: its story and its significance. Oxford University Press US. pp. 205–226. ISBN 0-19-530573-6.
21. ^ Scully, Marlan O.; Lamb, Willis E.; Barut, Asim (1987). "On the theory of the Stern-Gerlach apparatus". Foundations of Physics 17 (6): 575–583. Bibcode:1987FoPh...17..575S. doi:10.1007/BF01882788.
22. ^ What is the Heisenberg Uncertainty Principle? - Chad Orzel, TED-ED talk.
23. ^ Brown, Kevin (2007). "The Hydrogen Atom". MathPages. Retrieved 21 December 2007.
24. ^ Harrison, David M. (2000). "The Development of Quantum Mechanics". University of Toronto. Archived from the original on 25 December 2007. Retrieved 21 December 2007.
25. ^ Aston, Francis W. (1920). "The constitution of atmospheric neon". Philosophical Magazine 39 (6): 449–55. doi:10.1080/14786440408636058.
26. ^ Chadwick, James (12 December 1935). "Nobel Lecture: The Neutron and Its Properties". Nobel Foundation. Retrieved 21 December 2007.
27. ^ "Otto Hahn, Lise Meitner and Fritz Strassmann". Chemical Achievers: The Human Face of the Chemical Sciences. Chemical Heritage Foundation. Archived from the original on 24 October 2009. Retrieved 15 September 2009.
28. ^ Meitner, Lise; Frisch, Otto Robert (1939). "Disintegration of uranium by neutrons: a new type of nuclear reaction". Nature 143 (3615): 239–240. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0.
29. ^ Schroeder, M. "Lise Meitner – Zur 125. Wiederkehr Ihres Geburtstages" (in German). Retrieved 4 June 2009.
30. ^ Crawford, E.; Sime, Ruth Lewin; Walker, Mark (1997). "A Nobel tale of postwar injustice". Physics Today 50 (9): 26–32. Bibcode:1997PhT....50i..26C. doi:10.1063/1.881933.
31. ^ Kullander, Sven (28 August 2001). "Accelerators and Nobel Laureates". Nobel Foundation. Retrieved 31 January 2008.
32. ^ "The Nobel Prize in Physics 1990". Nobel Foundation. 17 October 1990. Retrieved 31 January 2008.
33. ^ Demtröder, Wolfgang (2002). Atoms, Molecules and Photons: An Introduction to Atomic-, Molecular- and Quantum Physics (1st ed.). Springer. pp. 39–42. ISBN 3-540-20631-0. OCLC 181435713.
34. ^ Woan, Graham (2000). The Cambridge Handbook of Physics. Cambridge University Press. p. 8. ISBN 0-521-57507-9. OCLC 224032426.
35. ^ MacGregor, Malcolm H. (1992). The Enigmatic Electron. Oxford University Press. pp. 33–37. ISBN 0-19-521833-7. OCLC 223372888.
36. ^ a b Particle Data Group (2002). "The Particle Adventure". Lawrence Berkeley Laboratory. Archived from the original on 4 January 2007. Retrieved 3 January 2007.
37. ^ a b Schombert, James (18 April 2006). "Elementary Particles". University of Oregon. Retrieved 3 January 2007.
38. ^ Jevremovic, Tatjana (2005). Nuclear Principles in Engineering. Springer. p. 63. ISBN 0-387-23284-2. OCLC 228384008.
39. ^ Pfeffer, Jeremy I.; Nir, Shlomo (2000). Modern Physics: An Introductory Text. Imperial College Press. pp. 330–336. ISBN 1-86094-250-4. OCLC 45900880.
40. ^ Wenner, Jennifer M. (10 October 2007). "How Does Radioactive Decay Work?". Carleton College. Retrieved 9 January 2008.
41. ^ a b c Raymond, David (7 April 2006). "Nuclear Binding Energies". New Mexico Tech. Archived from the original on 11 December 2006. Retrieved 3 January 2007.
42. ^ Mihos, Chris (23 July 2002). "Overcoming the Coulomb Barrier". Case Western Reserve University. Retrieved 13 February 2008.
43. ^ Staff (30 March 2007). "ABC's of Nuclear Science". Lawrence Berkeley National Laboratory. Archived from the original on 5 December 2006. Retrieved 3 January 2007.
44. ^ Makhijani, Arjun; Saleska, Scott (2 March 2001). "Basics of Nuclear Physics and Fission". Institute for Energy and Environmental Research. Archived from the original on 16 January 2007. Retrieved 3 January 2007.
45. ^ Shultis, J. Kenneth; Faw, Richard E. (2002). Fundamentals of Nuclear Science and Engineering. CRC Press. pp. 10–17. ISBN 0-8247-0834-2. OCLC 123346507.
46. ^ Fewell, M. P. (1995). "The atomic nuclide with the highest mean binding energy". American Journal of Physics 63 (7): 653–658. Bibcode:1995AmJPh..63..653F. doi:10.1119/1.17828.
47. ^ Mulliken, Robert S. (1967). "Spectroscopy, Molecular Orbitals, and Chemical Bonding". Science 157 (3784): 13–24. Bibcode:1967Sci...157...13M. doi:10.1126/science.157.3784.13. PMID 5338306.
48. ^ a b Brucat, Philip J. (2008). "The Quantum Atom". University of Florida. Archived from the original on 7 December 2006. Retrieved 4 January 2007.
49. ^ Manthey, David (2001). "Atomic Orbitals". Orbital Central. Archived from the original on 10 January 2008. Retrieved 21 January 2008.
50. ^ Herter, Terry (2006). "Lecture 8: The Hydrogen Atom". Cornell University. Retrieved 14 February 2008.
51. ^ Bell, R. E.; Elliott, L. G. (1950). "Gamma-Rays from the Reaction H¹(n,γ)D² and the Binding Energy of the Deuteron". Physical Review 79 (2): 282–285. Bibcode:1950PhRv...79..282B. doi:10.1103/PhysRev.79.282.
52. ^ Smirnov, Boris M. (2003). Physics of Atoms and Ions. Springer. pp. 249–272.
ISBN 0-387-95550-X.  53. ^ Matis, Howard S. (9 August 2000). "The Isotopes of Hydrogen". Guide to the Nuclear Wall Chart. Lawrence Berkeley National Lab. Archived from the original on 18 December 2007. Retrieved 21 December 2007.  54. ^ Weiss, Rick (17 October 2006). "Scientists Announce Creation of Atomic Element, the Heaviest Yet". Washington Post. Retrieved 21 December 2007.  55. ^ a b Sills, Alan D. (2003). Earth Science the Easy Way. Barron's Educational Series. pp. 131–134. ISBN 0-7641-2146-4. OCLC 51543743.  56. ^ Dumé, Belle (23 April 2003). "Bismuth breaks half-life record for alpha decay". Physics World. Archived from the original on 14 December 2007. Retrieved 21 December 2007.  57. ^ Lindsay, Don (30 July 2000). "Radioactives Missing From The Earth". Don Lindsay Archive. Archived from the original on 28 April 2007. Retrieved 23 May 2007.  58. ^ Tuli, Jagdish K. (April 2005). "Nuclear Wallet Cards". National Nuclear Data Center, Brookhaven National Laboratory. Retrieved 16 April 2011.  59. ^ a b CRC Handbook (2002). 60. ^ a b Mills, Ian; Cvitaš, Tomislav; Homann, Klaus; Kallay, Nikola; Kuchitsu, Kozo (1993). Quantities, Units and Symbols in Physical Chemistry (PDF) (2nd ed.). Oxford: International Union of Pure and Applied Chemistry, Commission on Physiochemical Symbols Terminology and Units, Blackwell Scientific Publications. p. 70. ISBN 0-632-03583-8. OCLC 27011505.  61. ^ Chieh, Chung (22 January 2001). "Nuclide Stability". University of Waterloo. Retrieved 4 January 2007.  62. ^ "Atomic Weights and Isotopic Compositions for All Elements". National Institute of Standards and Technology. Archived from the original on 31 December 2006. Retrieved 4 January 2007.  63. ^ Audi, G.; Wapstra, A.H.; Thibault, C. (2003). "The Ame2003 atomic mass evaluation (II)" (PDF). Nuclear Physics A 729 (1): 337–676. Bibcode:2003NuPhA.729..337A. doi:10.1016/j.nuclphysa.2003.11.003.  64. ^ Shannon, R. D. (1976). "Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides". Acta Crystallographica A 32 (5): 751–767. Bibcode:1976AcCrA..32..751S. doi:10.1107/S0567739476001551.  65. ^ Dong, Judy (1998). "Diameter of an Atom". The Physics Factbook. Archived from the original on 4 November 2007. Retrieved 19 November 2007.  66. ^ Zumdahl, Steven S. (2002). Introductory Chemistry: A Foundation (5th ed.). Houghton Mifflin. ISBN 0-618-34342-3. OCLC 173081482. Archived from the original on 4 March 2008. Retrieved 5 February 2008.  67. ^ Birkholz, M.; Rudert, R. (2008). "Interatomic distances in pyrite-structure disulfides – a case for ellipsoidal modeling of sulfur ions]" (PDF). phys. stat. sol. b 245: 1858–1864. Bibcode:2008PSSBR.245.1858B. doi:10.1002/pssb.200879532.  68. ^ Birkholz, M. (2014). "Modeling the Shape of Ions in Pyrite-Type Crystals". Crystals 4: 390–403. doi:10.3390/cryst4030390.  69. ^ Staff (2007). "Small Miracles: Harnessing nanotechnology". Oregon State University. Retrieved 7 January 2007. —describes the width of a human hair as 105 nm and 10 carbon atoms as spanning 1 nm. 70. ^ Padilla, Michael J.; Miaoulis, Ioannis; Cyr, Martha (2002). Prentice Hall Science Explorer: Chemical Building Blocks. Upper Saddle River, New Jersey USA: Prentice-Hall, Inc. p. 32. ISBN 0-13-054091-9. OCLC 47925884. There are 2,000,000,000,000,000,000,000 (that's 2 sextillion) atoms of oxygen in one drop of water—and twice as many atoms of hydrogen.  71. ^ Feynman, Richard (1995). Six Easy Pieces. The Penguin Group. p. 5. ISBN 978-0-14-027666-4. OCLC 40499574.  72. 
^ a b "Radioactivity". Archived from the original on 4 December 2007. Retrieved 19 December 2007.  73. ^ L'Annunziata, Michael F. (2003). Handbook of Radioactivity Analysis. Academic Press. pp. 3–56. ISBN 0-12-436603-1. OCLC 16212955.  74. ^ Firestone, Richard B. (22 May 2000). "Radioactive Decay Modes". Berkeley Laboratory. Retrieved 7 January 2007.  75. ^ Hornak, J. P. (2006). "Chapter 3: Spin Physics". The Basics of NMR. Rochester Institute of Technology. Archived from the original on 3 February 2007. Retrieved 7 January 2007.  76. ^ a b Schroeder, Paul A. (25 February 2000). "Magnetic Properties". University of Georgia. Archived from the original on 29 April 2007. Retrieved 7 January 2007.  77. ^ Goebel, Greg (1 September 2007). "[4.3] Magnetic Properties of the Atom". Elementary Quantum Physics. In The Public Domain website. Retrieved 7 January 2007.  78. ^ Yarris, Lynn (Spring 1997). "Talking Pictures". Berkeley Lab Research Review. Archived from the original on 13 January 2008. Retrieved 9 January 2008.  79. ^ Liang, Z.-P.; Haacke, E. M. (1999). Webster, J. G., ed. Encyclopedia of Electrical and Electronics Engineering: Magnetic Resonance Imaging. vol. 2. John Wiley & Sons. pp. 412–426. ISBN 0-471-13946-7.  80. ^ Zeghbroeck, Bart J. Van (1998). "Energy levels". Shippensburg University. Archived from the original on 15 January 2005. Retrieved 23 December 2007.  81. ^ Fowles, Grant R. (1989). Introduction to Modern Optics. Courier Dover Publications. pp. 227–233. ISBN 0-486-65957-7. OCLC 18834711.  82. ^ Martin, W. C.; Wiese, W. L. (May 2007). "Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas". National Institute of Standards and Technology. Archived from the original on 8 February 2007. Retrieved 8 January 2007.  83. ^ "Atomic Emission Spectra — Origin of Spectral Lines". Avogadro Web Site. Retrieved 10 August 2006.  84. ^ Fitzpatrick, Richard (16 February 2007). "Fine structure". University of Texas at Austin. Retrieved 14 February 2008.  85. ^ Weiss, Michael (2001). "The Zeeman Effect". University of California-Riverside. Archived from the original on 2 February 2008. Retrieved 6 February 2008.  86. ^ Beyer, H. F.; Shevelko, V. P. (2003). Introduction to the Physics of Highly Charged Ions. CRC Press. pp. 232–236. ISBN 0-7503-0481-2. OCLC 47150433.  87. ^ Watkins, Thayer. "Coherence in Stimulated Emission". San José State University. Archived from the original on 12 January 2008. Retrieved 23 December 2007.  88. ^ oxford dictionary – valency 89. ^ Reusch, William (16 July 2007). "Virtual Textbook of Organic Chemistry". Michigan State University. Retrieved 11 January 2008.  90. ^ "Covalent bonding – Single bonds". chemguide. 2000.  91. ^ Husted, Robert; et al. (11 December 2003). "Periodic Table of the Elements". Los Alamos National Laboratory. Archived from the original on 10 January 2008. Retrieved 11 January 2008.  92. ^ Baum, Rudy (2003). "It's Elemental: The Periodic Table". Chemical & Engineering News. Retrieved 11 January 2008.  93. ^ Goodstein, David L. (2002). States of Matter. Courier Dover Publications. pp. 436–438. ISBN 0-13-843557-X.  94. ^ Brazhkin, Vadim V. (2006). "Metastable phases, phase transformations, and phase diagrams in physics and chemistry". Physics-Uspekhi 49 (7): 719–24. Bibcode:2006PhyU...49..719B. doi:10.1070/PU2006v049n07ABEH006013.  95. ^ Myers, Richard (2003). The Basics of Chemistry. Greenwood Press. p. 85. ISBN 0-313-31664-3. OCLC 50164580.  96. ^ Staff (9 October 2001). "Bose-Einstein Condensate: A New Form of Matter". 
National Institute of Standards and Technology. Archived from the original on 3 January 2008. Retrieved 16 January 2008.  97. ^ Colton, Imogen; Fyffe, Jeanette (3 February 1999). "Super Atoms from Bose-Einstein Condensation". The University of Melbourne. Archived from the original on 29 August 2007. Retrieved 6 February 2008.  98. ^ Jacox, Marilyn; Gadzuk, J. William (November 1997). "Scanning Tunneling Microscope". National Institute of Standards and Technology. Archived from the original on 7 January 2008. Retrieved 11 January 2008.  99. ^ "The Nobel Prize in Physics 1986". The Nobel Foundation. Retrieved 11 January 2008. —in particular, see the Nobel lecture by G. Binnig and H. Rohrer. 100. ^ Jakubowski, N.; Moens, Luc; Vanhaecke, Frank (1998). "Sector field mass spectrometers in ICP-MS". Spectrochimica Acta Part B: Atomic Spectroscopy 53 (13): 1739–63. Bibcode:1998AcSpe..53.1739J. doi:10.1016/S0584-8547(98)00222-5.  101. ^ Müller, Erwin W.; Panitz, John A.; McLane, S. Brooks (1968). "The Atom-Probe Field Ion Microscope". Review of Scientific Instruments 39 (1): 83–86. Bibcode:1968RScI...39...83M. doi:10.1063/1.1683116.  102. ^ Lochner, Jim; Gibb, Meredith; Newman, Phil (30 April 2007). "What Do Spectra Tell Us?". NASA/Goddard Space Flight Center. Archived from the original on 16 January 2008. Retrieved 3 January 2008.  103. ^ Winter, Mark (2007). "Helium". WebElements. Archived from the original on 30 December 2007. Retrieved 3 January 2008.  104. ^ Hinshaw, Gary (10 February 2006). "What is the Universe Made Of?". NASA/WMAP. Archived from the original on 31 December 2007. Retrieved 7 January 2008.  105. ^ Choppin, Gregory R.; Liljenzin, Jan-Olov; Rydberg, Jan (2001). Radiochemistry and Nuclear Chemistry. Elsevier. p. 441. ISBN 0-7506-7463-6. OCLC 162592180.  106. ^ Davidsen, Arthur F. (1993). "Far-Ultraviolet Astronomy on the Astro-1 Space Shuttle Mission". Science 259 (5093): 327–34. Bibcode:1993Sci...259..327D. doi:10.1126/science.259.5093.327. PMID 17832344.  107. ^ Lequeux, James (2005). The Interstellar Medium. Springer. p. 4. ISBN 3-540-21326-0. OCLC 133157789.  108. ^ Smith, Nigel (6 January 2000). "The search for dark matter". Physics World. Archived from the original on 16 February 2008. Retrieved 14 February 2008.  109. ^ Croswell, Ken (1991). "Boron, bumps and the Big Bang: Was matter spread evenly when the Universe began? Perhaps not; the clues lie in the creation of the lighter elements such as boron and beryllium". New Scientist (1794): 42. Archived from the original on 7 February 2008. Retrieved 14 January 2008.  110. ^ Copi, Craig J.; Schramm, DN; Turner, MS (1995). "Big-Bang Nucleosynthesis and the Baryon Density of the Universe". Science 267 (5195): 192–99. arXiv:astro-ph/9407006. Bibcode:1995Sci...267..192C. doi:10.1126/science.7809624. PMID 7809624.  111. ^ Hinshaw, Gary (15 December 2005). "Tests of the Big Bang: The Light Elements". NASA/WMAP. Archived from the original on 17 January 2008. Retrieved 13 January 2008.  112. ^ Abbott, Brian (30 May 2007). "Microwave (WMAP) All-Sky Survey". Hayden Planetarium. Retrieved 13 January 2008.  113. ^ Hoyle, F. (1946). "The synthesis of the elements from hydrogen". Monthly Notices of the Royal Astronomical Society 106: 343–83. Bibcode:1946MNRAS.106..343H. doi:10.1093/mnras/106.5.343.  114. ^ Knauth, D. C.; Knauth, D. C.; Lambert, David L.; Crane, P. (2000). "Newly synthesized lithium in the interstellar medium". Nature 405 (6787): 656–58. doi:10.1038/35015028. PMID 10864316.  115. ^ Mashnik, Stepan G. (2000). 
"On Solar System and Cosmic Rays Nucleosynthesis and Spallation Processes". arXiv:astro-ph/0008382 [astro-ph].  116. ^ Kansas Geological Survey (4 May 2005). "Age of the Earth". University of Kansas. Retrieved 14 January 2008.  117. ^ Manuel 2001, pp. 407–430, 511–519. 118. ^ Dalrymple, G. Brent (2001). "The age of the Earth in the twentieth century: a problem (mostly) solved". Geological Society, London, Special Publications 190 (1): 205–21. Bibcode:2001GSLSP.190..205D. doi:10.1144/GSL.SP.2001.190.01.14. Retrieved 14 January 2008.  119. ^ Anderson, Don L.; Foulger, G. R.; Meibom, Anders (2 September 2006). "Helium: Fundamental models". Archived from the original on 8 February 2007. Retrieved 14 January 2007.  120. ^ Pennicott, Katie (10 May 2001). "Carbon clock could show the wrong time". PhysicsWeb. Archived from the original on 15 December 2007. Retrieved 14 January 2008.  121. ^ Yarris, Lynn (27 July 2001). "New Superheavy Elements 118 and 116 Discovered at Berkeley Lab". Berkeley Lab. Archived from the original on 9 January 2008. Retrieved 14 January 2008.  122. ^ Diamond, H; et al. (1960). "Heavy Isotope Abundances in Mike Thermonuclear Device". Physical Review 119 (6): 2000–04. Bibcode:1960PhRv..119.2000D. doi:10.1103/PhysRev.119.2000.  123. ^ Poston Sr., John W. (23 March 1998). "Do transuranic elements such as plutonium ever occur naturally?". Scientific American.  124. ^ Keller, C. (1973). "Natural occurrence of lanthanides, actinides, and superheavy elements". Chemiker Zeitung 97 (10): 522–30. OSTI 4353086.  125. ^ Zaider, Marco; Rossi, Harald H. (2001). Radiation Science for Physicians and Public Health Workers. Springer. p. 17. ISBN 0-306-46403-9. OCLC 44110319.  126. ^ Manuel 2001, pp. 407–430,511–519. 127. ^ "Oklo Fossil Reactors". Curtin University of Technology. Archived from the original on 18 December 2007. Retrieved 15 January 2008.  128. ^ Weisenberger, Drew. "How many atoms are there in the world?". Jefferson Lab. Retrieved 16 January 2008.  129. ^ Pidwirny, Michael. "Fundamentals of Physical Geography". University of British Columbia Okanagan. Archived from the original on 21 January 2008. Retrieved 16 January 2008.  130. ^ Anderson, Don L. (2002). "The inner inner core of Earth". Proceedings of the National Academy of Sciences 99 (22): 13966–68. Bibcode:2002PNAS...9913966A. doi:10.1073/pnas.232565899. PMC 137819. PMID 12391308.  131. ^ Pauling, Linus (1960). The Nature of the Chemical Bond. Cornell University Press. pp. 5–10. ISBN 0-8014-0333-2. OCLC 17518275.  132. ^ Anonymous (2 October 2001). "Second postcard from the island of stability". CERN Courier. Archived from the original on 3 February 2008. Retrieved 14 January 2008.  134. ^ Koppes, Steve (1 March 1999). "Fermilab Physicists Find New Matter-Antimatter Asymmetry". University of Chicago. Retrieved 14 January 2008.  135. ^ Cromie, William J. (16 August 2001). "A lifetime of trillionths of a second: Scientists explore antimatter". Harvard University Gazette. Retrieved 14 January 2008.  136. ^ Hijmans, Tom W. (2002). "Particle physics: Cold antihydrogen". Nature 419 (6906): 439–40. Bibcode:2002Natur.419..439H. doi:10.1038/419439a. PMID 12368837.  137. ^ Staff (30 October 2002). "Researchers 'look inside' antimatter". BBC News. Retrieved 14 January 2008.  138. ^ Barrett, Roger (1990). "The Strange World of the Exotic Atom". New Scientist (1728): 77–115. Archived from the original on 21 December 2007. Retrieved 4 January 2008.  139. ^ Indelicato, Paul (2004). "Exotic Atoms". Physica Scripta T112 (1): 20–26. 
arXiv:physics/0409058. Bibcode:2004PhST..112...20I. doi:10.1238/Physica.Topical.112a00020.  140. ^ Ripin, Barrett H. (July 1998). "Recent Experiments on Exotic Atoms". American Physical Society. Retrieved 15 February 2008.  • Manuel, Oliver (2001). Origin of Elements in the Solar System: Implications of Post-1957 Observations. Springer. ISBN 0-306-46562-0. OCLC 228374906.
South African Journal of Science, on-line version ISSN 1996-7489

A.E. Botha
Department of Physics, University of South Africa, P.O. Box 392, UNISA 0003, South Africa. E-mail:

Key words: spin-dependent electron transport, semiconductor nanostructures

1. Introduction

Electron tunnelling phenomena associated with the broken-gap1 configuration in type-II semiconductor heterostructures have been studied both theoretically2 and experimentally.3 In these structures, the minimum in the conduction band of the material on one side of the hetero-interface lies at a lower energy than the maximum in the valence band of the material on the other side. Charge transfer between two adjacent layers can lead to the formation of a 2-D electron gas on one side and a 2-D hole gas on the other, even at zero bias. Interband tunnelling may thus occur between the spatially-separated intrinsic carriers situated on opposite sides of the hetero-interface, i.e. between electrons in the conduction band on one side and holes in the higher-lying valence bands on the other side.4 Under suitable conditions, electron-hole bound states (excitons) may be formed. The InAs/GaSb system, for example, is currently the most promising candidate for observing Bose-Einstein condensation of excitons.5

Since the electron states of both conduction and valence bands of the constituent semiconductors participate in the tunnelling process on an equal footing, theories of electron tunnelling in type-II heterostructures must include band-mixing effects.6 Direct numerical solution of the multiband k·p Schrödinger equation, which is frequently employed to model these systems, is not feasible due to the inherent numerical instability that occurs in type-II systems. The numerical instability in these systems arises because of the simultaneous presence of propagating (real wave vector) and evanescent (imaginary wave vector) states. To circumvent the numerical difficulties that arise in such calculations, several innovative techniques have been proposed within the context of multiband k·p theory.7,8

Multiband formulations of the transfer matrix method (TMM) become numerically unstable in the simultaneous presence of evanescent and propagating states. For this reason the TMM is not very useful for studying type-II systems. In cases where the TMM fails, it is possible to propagate the scattering matrix instead.9 This technique was first applied to resonant tunnelling in GaAs/AlxGa1–xAs multilayer systems, with the influence of higher bands included.10 Although the scattering matrix method is numerically stable, it is computationally no more efficient than the TMM. A third method, the multiband quantum transmitting boundary method,11 is slightly more efficient, although with modern processor speeds the time saved on a typical calculation may be minimal in practice.

In Section 2.1 of this article, a new and very general method is proposed for calculating the electronic structure and transport properties of type-II heterostructures. In common with the preceding methods, the theory is based on a realistic, multiband k·p envelope function description of the energy-band structure.
Starting from the general form of the n×n matrix Schrödinger equation, a multiband k·p Riccati equation is derived for the logarithmic derivative of the envelope function matrix. In Section 2.2 a boundary condition is derived for the Riccati equation. This boundary condition is used to integrate the Riccati equation numerically over the entire heterostructure. In Section 2.3 it is shown how to obtain the reflection matrix R, which contains the tunnelling amplitudes for each of the n channels along its diagonal. In Section 2.4 it is shown how the solution to the Riccati equation may be incorporated into a self-consistent calculation of the confining potential profile. The current through a device may then be calculated according to the definition given in Section 2.5.

In Section 3, the room temperature current was calculated as a function of applied voltage for a 10-nm wide InAs/GaSb/InAs quantum well device and the results were compared to an experiment.3 It was found that the calculated current-voltage characteristics were in semi-quantitative agreement with the measured characteristics. This agreement suggests that, contrary to previous claims, inelastic transport mechanisms did not play a significant role in the resonant interband coupling mechanism that was most likely responsible for the negative differential resistance.

2. Theory

2.1. Multiband k·p Riccati equation

Consider a heterostructure with its growth direction along the x-axis. In multiband k·p models the energy-band structure within each layer of the heterostructure is described by a set of coupled differential equations for the components of the slowly varying envelope function, Φ(x). It can be shown that the Schrödinger equation for the envelope function can be decomposed into the following very general matrix form:12

$\left[ H_2 \frac{d^2}{dx^2} + H_1 \frac{d}{dx} + H_0 \right] \Phi(x) = E\,\Phi(x). \qquad (1)$

In Equation 1 the eigenvector Φ(x) is a column vector of length equal to the number of bands (n) which are explicitly included in the k·p model being used. The n × n matrices H0, H1 and H2 are usually derived through second order quasi-degenerate perturbation theory (Löwdin partitioning).13 Examples of the decomposition in Equation 1 can be found in Liu et al.,7 Harrison8 and Eppenga et al.14

Instead of working with an n-component column vector Φ(x), it is advantageous to construct an n × n envelope function matrix, denoted by F(x). This matrix contains in its columns the n linearly independent solution vectors of the stationary problem posed by Equation 1. It can be shown quite generally that H2 is non-singular.15 Equation 1 can therefore be multiplied throughout by $H_2^{-1}$, resulting in

$\frac{d^2 F}{dx^2} + H_2^{-1}H_1 \frac{dF}{dx} + H_2^{-1}\left(H_0 - E\right) F = 0. \qquad (2)$

By substituting the general relation

$\frac{d^2 F}{dx^2}\,F^{-1} = \frac{dY}{dx} + Y^2 \qquad (3)$

(which holds for any non-singular matrix F) into the left-hand side of Equation 2, the multiband k·p Riccati equation

$\frac{dY}{dx} = -Y^2 - H_2^{-1}H_1\,Y - H_2^{-1}\left(H_0 - E\right) \qquad (4)$

is obtained. Here Y has been defined as

$Y = \frac{dF}{dx}\,F^{-1}. \qquad (5)$

In recognition of the logarithmic derivative form of Equation 5, Y will be referred to as the log-derivative of F.

2.2. Boundary condition for the Riccati equation

Consider the multiband potential profile of the heterostructure shown schematically in Fig. 1. The device consists of a central active region sandwiched between two flat-band contact regions. Assume that within the active region (xL, xR) the potential may vary arbitrarily, but that it remains constant (not necessarily zero) in the outer two contact regions, i.e. for x < xL and x > xR. By making the very reasonable assumption that all charge carriers are incident from the left contact region, it follows that there can be no reflected waves in the right contact region.
For x > xR, the matrix solution to Equation 1 can therefore be written in terms of transmitted plane waves as

$F_R(x) = D^+\,e^{iKx}\,T\,C. \qquad (6)$

Here D+, T and C are n × n matrices. D+ is a known matrix whose columns are made up of the n linearly-independent eigenvectors of Equation 1 within the right contact region. Although D+ can be obtained analytically for relatively small values of n (for example, see Botha4), a more general numerical method has recently been suggested by Harrison.8 The numerical method consists of re-writing the non-linear eigenproblem posed by Equation 1 as a linear eigenproblem for the wave vector components (ki)x (i = 1, 2,..., n) (see Equation 10.35 in Harrison8). In Equation 6, T is an (as yet) unknown transmission matrix which depends only on the electron energy E, the transverse component of the carrier wave vector k|| and the applied voltage Va. The matrix C is constant with respect to x and may be chosen arbitrarily. It is most convenient to choose C such that FR(x0) = 1, where 1 is the n × n identity matrix and x0 > xR. By making this choice, and evaluating Equation 6 at x = x0, it follows that

$C = T^{-1}\,e^{-iKx_0}\,(D^+)^{-1}. \qquad (7)$

Because the wave vector components (ki)x and matrices D± only depend on x indirectly through the potentials Vi(x), D± can be treated as constants within the two contact regions. Equation 6 can therefore be differentiated partially with respect to x, yielding

$\frac{\partial F_R}{\partial x} = i D^+ K\, e^{iKx}\, T\, C. \qquad (8)$

Evaluation of Equation 8 at x = x0, with C given by Equation 7, produces the result

$\frac{\partial F_R}{\partial x}\bigg|_{x=x_0} = i D^+ K (D^+)^{-1}, \qquad K = \mathrm{diag}\left[(k_1)_x, (k_2)_x, \ldots, (k_n)_x\right]. \qquad (9)$

Since $F_R(x_0) = 1$ (by choice of C), the boundary condition required to integrate Equation 4 is then clearly given by

$Y(x_0) = i D^+ K (D^+)^{-1}. \qquad (11)$

The main advantage of working with Y, as opposed to F, is that its entries never grow excessively large or small. Equation 4 can therefore be integrated without difficulty over several hundreds of nanometres, even in the presence of strong k·p coupling. Once Y has been obtained by numerical integration of Equation 4 over the entire heterostructure, the tunnelling probabilities for each channel can be obtained from the reflection matrix R, defined next.

2.3. Reflection matrix

In the left contact region (see Fig. 1), for x < xL, the solution to Equation 2 can also be expressed in terms of plane waves. In this region there are both incident and reflected waves and the solution must be written as

$F(x) = \left( D^+ e^{iKx} + D^- e^{-iKx} R \right) C, \qquad (12)$

where D± are defined as before and R is the desired reflection matrix. By evaluating Equation 12 and its first derivative at x = 0, and multiplying both the resulting equations from the right by C–1, it is found that

$F(0)\,C^{-1} = D^+ + D^- R \qquad (13)$

and

$\frac{\partial F}{\partial x}\bigg|_{x=0} C^{-1} = i D^+ K - i D^- K R. \qquad (14)$

Here K is defined as in Equation 9. Multiplication of Equation 14 from the right by the inverse of Equation 13, produces

$Y(0) = \left( i D^+ K - i D^- K R \right)\left( D^+ + D^- R \right)^{-1}. \qquad (15)$

By solving for R in Equation 15, the reflection matrix is obtained as

$R = \left( Y(0)\,D^- + i D^- K \right)^{-1} \left( i D^+ K - Y(0)\,D^+ \right). \qquad (16)$

All quantities on the right of Equation 16 are to be evaluated at x = 0. The matrix Y(0) is obtained by integrating Equation 4 from some point x = x0 > xR back to x = 0, starting with the boundary condition given by Equation 11. Note that the diagonal entries in R contain the reflection amplitudes for each channel. It can be shown that the transmission coefficient Ti for the ith channel is related to R by $T_i = 1 - R_{ii}R_{ii}^{*}$ (no sum over i), where Rii is the ith diagonal entry of R and * denotes complex conjugation.

2.4. Self-consistent Schrödinger-Poisson scheme

Because the above method produces the log-derivative Y(x) it can be used to perform a fully self-consistent calculation of the heterostructure energy-band profile.
This is an important advantage over other methods, such as the scattering matrix method, which only produces the amplitudes of the envelope functions, not the functions themselves. Once Equation 4 has been integrated and Y(x) is known throughout the entire heterostructure, a second integration can be performed to obtain the envelope function matrix F(x) from Equation 5 and the boundary condition F(x0) = 1. By using the envelope functions, the energy-band profile and the corresponding reflection matrix can then be calculated self-consistently in conjunction with Poisson's equation.16

2.5. Current-voltage characteristics

Having obtained the transmission coefficients Ti as functions of the electron energy E, the transverse component of the wave vector k|| and the applied voltage Va, the current I flowing through the device can be calculated as a function of Va. For the InAs/GaSb/InAs quantum well device studied in Section 3, there is only one open channel, and the current I flowing through the structure is defined17,18 by Equation 17, an integral over energy and transverse wave vector of the quantity ρ vx T (FA − FC), scaled by the device area A. Here FA and FC are the respective Fermi distribution functions for the left (anode) and right (cathode) contact regions, T is the transmission coefficient, ρ is the density of states and vx is the incident carrier velocity. The latter two quantities are computed numerically by using the appropriate bulk carrier energy-dispersion relation.

3. Results

In this Section the above theoretical results are implemented numerically by using a six-band matrix Hamiltonian.19–21 In this model the basis has been chosen in order to decouple the spin components of the electron for kz = 0. Each of the two possible spin states of the electron can then be treated separately in terms of two identical 3×3 matrix Hamiltonians. For details concerning this model, including the k·p parameter values for InAs and GaSb, see Botha.4

There are a variety of numerical methods available for integrating coupled systems of first order differential equations such as Equation 4. Although the log-derivative form of Equation 4 is much simpler to integrate than the envelope function equation appearing in Equation 1, the integration subroutine nevertheless has to be quite sophisticated to take into account the discontinuities in the material parameters and potential as the integration proceeds through the heterostructure.22 The computations in this Section make use of an excellent integration subroutine called ODE. This subroutine is freely available and is described fully by Shampine and Gordon.23

The current I as a function of applied voltage Va has been computed for an InAs/GaSb/InAs quantum well device in which the GaSb layer is 10 nm wide. This device has been manufactured and studied experimentally.3 Based on the description of the device, the potential profile was calculated according to the self-consistent method outlined in Section 2.4. Figure 2 shows the calculated potential profile under zero bias and for Va = 0.6 V. In order to compute I, as defined in Equation 17, the calculated potential profile was used to obtain the corresponding transmission coefficient T according to the method described at the end of Section 2.3. Note that T is in general a function of the electron energy E, the perpendicular component of the wave vector ky (a good quantum number) and the applied voltage Va. As an example, the transmission spectrum is shown in Fig. 3 for ky = 0 and various applied voltages.
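As an aside on implementation (this sketch is mine, not the author's; the barrier parameters and function names are illustrative choices): collapsed to a single band, Equation 1 reduces to the scalar Schrödinger equation φ'' = (V − E)φ (in units where ħ = 1 and m = 1/2), Equation 4 reduces to y' = (V − E) − y², and Equation 16 reduces to R = (ik − y(0))/(ik + y(0)). The short Python sketch below integrates this scalar Riccati equation backwards through a square barrier, standing in for the self-consistent profile of Fig. 2, and checks the resulting transmission against the textbook square-barrier formula:

```python
# Minimal single-band sketch of the log-derivative method of Sections 2.1-2.3.
# Units: hbar = 1, m = 1/2, so phi'' = (V - E) phi and y = phi'/phi obeys
# the scalar Riccati equation  y' = (V - E) - y**2.
import numpy as np
from scipy.integrate import solve_ivp

def V(x, V0=2.0, a=0.2, x1=0.4):
    """Square barrier of height V0 on [x1, x1 + a]; flat contacts elsewhere."""
    return V0 if x1 <= x <= x1 + a else 0.0

def transmission(E, x_left=0.0, x_right=1.0):
    k = np.sqrt(E)                       # wave number in the flat contacts
    y0 = 1j * k                          # boundary condition: y(x0) = ik (Eq. 11)
    # Integrate the Riccati equation backwards, from the right contact
    # (transmitted wave only) to the left edge of the active region.
    sol = solve_ivp(lambda x, y: (V(x) - E) - y**2,
                    (x_right, x_left), [y0 + 0j],
                    rtol=1e-10, atol=1e-12)
    y_left = sol.y[0, -1]
    R = (1j * k - y_left) / (1j * k + y_left)   # scalar analogue of Eq. 16
    return 1.0 - abs(R)**2                      # T_i = 1 - R_ii R_ii*

E, V0, a = 1.0, 2.0, 0.2
T_num = transmission(E)
# Analytic square-barrier result for comparison (valid for E < V0):
kappa = np.sqrt(V0 - E)
T_exact = 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4 * E * (V0 - E)))
print(T_num, T_exact)   # both values should come out near 0.96 and agree closely
```

The same backward integration, with n × n matrices in place of scalars, is what the six-band calculation performs; because the log-derivative never grows excessively large, the integration stays stable even where the envelope functions grow exponentially, which is the point of the method.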
The transmission spectrum differs considerably from that predicted by a simple two-band model in which the potential varies linearly.24 Instead of one broad resonance peak, corresponding to a confined light-hole resonant state in the GaSb layer, two narrower peaks are observed in Fig. 3. Physically, this more complicated transmission spectrum is the result of the coupling between a confined light-hole state in GaSb and quasi-bound electron states in the InAs anode. For ky ≠ 0 the transmission spectrum is further complicated by the coupling of heavy-hole confined states in GaSb with quasi-bound electron states in InAs.

It is important to note that, in Fig. 2, for zero applied voltage there are two triangular quantum wells formed in the InAs layers which surround the GaSb barrier. As the applied voltage increases, the potential profile gradually changes so that the triangular quantum well to the right of the barrier eventually disappears completely, as shown in Fig. 2 for Va = 0.6 V. On the other hand, an increase in applied voltage tends to increase the depth of the left triangular quantum well (in the anode). Coupling between the quasi-bound electron states in the InAs anode and the confined hole states in GaSb is responsible for the observed negative differential resistance. This coupling also accounts for the reduction in transmission peak height with increasing applied voltage, as seen in Fig. 3.

The transmission spectrum, calculated as in Fig. 3, produces the correct current-voltage behaviour for the device. Figure 4 shows the calculated room-temperature current through the device as a function of applied voltage. In this figure the calculated peak height has been normalised (using a multiplicative factor of 1.23) to the experimental peak height in the forward biased direction. This normalisation factor is required mainly because of the uncertainty in the device area (which is in the range of 10⁻⁵–10⁻⁴ cm²). In addition, there may be inevitable impurities in the sample, and the value of the valence band offset between the two materials and the position of the Fermi levels may also be uncertain.

The discrepancies between the current predicted by the present model and the much simpler two-band model, which was originally used to interpret the experimental data, do not alter the main conclusion which may be drawn from this experiment, namely that a resonant interband coupling mechanism is responsible for the observed negative differential resistance.3 However, unlike the two-band current-voltage model, the present model provides semi-quantitative predictions of the peak-to-valley ratios. Since the present model does not take into account inelastic transport mechanisms, it may be concluded from the agreement reached in Fig. 4 that, contrary to what was previously thought, inelastic transport mechanisms do not contribute significantly to the observed valley currents in this device, even at room temperature. Whether this conclusion generalises to other similar structures depends on how well the model compares to the other available experimental data (for example, see Collins et al.25). A more detailed investigation of the negative differential transport mechanism in GaSb/InAs heterostructures is currently in progress.

Finally, it should be emphasised that the preliminary results presented in this section are merely an example of how the general theory may be implemented and compared directly to an experiment.
In this article, the main aim was to communicate the very general method that was developed in Sections 2.1–2.4. Since this method was based on the well-established multiband k·p model, it will be useful for theoretical studies of a wide variety of other interesting phenomena which occur in semiconductor heterostructures, for example, anisotropy (band warping)26 and spin-splitting (bulk, structural and due to an applied magnetic field).27

4. Conclusion

An alternative method was developed and implemented numerically to calculate electronic transport properties of type-II heterostructures. As an example, a realistic numerical implementation of the method was made by using a six-band matrix Hamiltonian in which the electron spin components decoupled. The preliminary results from this model have been compared to measured current-voltage characteristics of an InAs/GaSb/InAs quantum well device in which the GaSb layer width is 10 nm. The calculated current as a function of applied voltage was found to be in semi-quantitative agreement with the experiment3 and it was therefore concluded that inelastic transport mechanisms probably did not play a significant role in the device. Given that only the bulk k·p parameters for the relatively simple six-band model were used in these calculations, the semi-quantitative agreement obtained from this model is encouraging and warrants further investigation. Such an investigation is in progress. The minor discrepancies between the model and the experiment considered, in particular the necessity of using a current normalisation factor, can be attributed mainly to the uncertainty in the device area and the value of the valence band offset parameter.

References

1. Zakharova A. and Chao K.A. (2002). Influence of band state mixing on interband magnetotunnelling in broken-gap heterostructures. J. Phys.: Condens. Matter 14, 5003–5016.
2. Edwards G. and Inkson J.C. (1994). A microscopic calculation for hole tunnelling in type-II InAs/GaSb structures. Semicond. Sci. Technol. 9, 178–184.
3. Luo L.F., Beresford R., Longenbach K.F. and Wang W.I. (1990). Resonant interband coupling in single-barrier heterostructures of InAs/GaSb/InAs and GaSb/InAs/GaSb. J. Appl. Phys. 68, 2854–2857.
4. Botha A.E. (2006). Effect of remote band coupling on net recombination current in type-II heterostructures. Int. J. Nanosci. 5, 119–129.
5. De-Leon S. and Laikhtman B. (2000). Excitonic wave function binding energy and lifetime in InAs/GaSb coupled quantum wells. Phys. Rev. B 61, 2874–2887.
6. Beresford R. (2007). Conserved flux in interband tunnelling. Solid State Electronics 51, 136–141.
7. Liu Y.X., Ting D.Z-Y. and McGill T.C. (1996). Efficient, numerically stable multiband k·p treatment of quantum transport in semiconductor heterostructures. Phys. Rev. B 54, 5675–5683.
8. Harrison P. (2005). Quantum Wells, Wires and Dots, 2nd edn, pp. 345–369. Wiley, New York.
9. Ikonić Z., Srivastava G.P. and Inkson J.C. (1992). Ordering of lowest conduction-band states in (GaAs)n/(AlAs)m [111] superlattices. Phys. Rev. B 46, 15150–15155.
10. Yu Kei Ko D. and Inkson J.C. (1988). Matrix method for tunnelling in heterostructures: resonant tunnelling in multilayer systems. Phys. Rev. B 38, 9945–9951.
11. Ting D.Z-Y. (1999). Multiband and multidimensional quantum transport. Microelectronics Journal 30, 985–1000.
12. Odermatt S., Luisier M. and Witzigmann B. (2005). Band structure calculation using the k·p method for arbitrary potentials with open boundary conditions. J. Appl. Phys. 97, 046104.
13. Winkler R. (2003). Spin-Orbit Coupling Effects in Two-dimensional Electron and Hole Systems, pp. 201–205. Springer, Berlin.
14. Eppenga R., Schuurmans M.F.H. and Colak S. (1987). New k·p theory for GaAs/Ga1–xAlxAs-type quantum wells. Phys. Rev. B 36, 1554–1564.
15. Smith D.L. and Mailhiot C. (1986). k·p theory of semiconductor superlattice electron structure I. Formal results. Phys. Rev. B 33, 8345–8359.
16. Tsu R. (2005). Superlattice to Nanoelectronics, pp. 100–103. Elsevier, Oxford.
17. Duke C.B. (1969). Solid State Physics: Advances in Research and Applications, suppl. 10, pp. 124–126. Academic Press, New York.
18. Liu Y.X., Marquardt R.R., Ting D.Z-Y. and McGill T.C. (1997). Magnetotunneling in interband tunnel structures. Phys. Rev. B 55, 7073–7077.
19. Altarelli M. (1983). Electronic structure and semiconductor-semimetal transition in InAs-GaSb superlattices. Phys. Rev. B 28, 842–845.
20. Altarelli M. (1983). Electronic structure of semiconductor superlattices. Physica 117B & 118B, 747–749.
21. Altarelli M. (1983). Electronic structure of semiconductor superlattices. In Applications of High Magnetic Fields in Semiconductor Physics, ed. G. Landwehr, pp. 174–184. Springer, Berlin.
22. Johnson B.R. (1973). The multichannel log-derivative method for scattering calculations. J. Comp. Phys. 13, 445–449.
23. Shampine L.F. and Gordon M.K. (1975). Computer Solution of Ordinary Differential Equations: The Initial Value Problem. W.H. Freeman and Company, San Francisco.
24. Yu E.T., Collins D.A., Ting D.Z-Y., Chow D.H. and McGill T.C. (1990). Demonstration of resonant transmission in InAs/GaSb/InAs interband tunnelling devices. Appl. Phys. Lett. 57, 2675–2677.
25. Collins D.A., Yu E.T., Rajakarunanayake Y., Söderström J.R., Ting D.Z-Y., Chow D.H. and McGill T.C. (1990). Experimental observation of negative differential resistance from InAs/GaSb interface. Appl. Phys. Lett. 57, 683–685.
26. Botha A.E. and Singh M.R. (2002). The effect of anisotropy on resonant tunnelling spin polarization in type-II heterostructures. Phys. Stat. Sol. (b) 231, 437–445.
27. Pfeffer P. and Zawadzki W. (1999). Spin splitting of conduction subbands in III–V heterostructures due to inversion asymmetry. Phys. Rev. B 59, R5312–R5315.

Received 18 May. Accepted 5 June 2009.
Bohr model

In atomic physics, the Bohr model depicts the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits around the nucleus — similar in structure to the solar system, but with electrostatic forces providing attraction, rather than gravity. This was an improvement on the earlier cubic model (1902), the plum-pudding model (1904), the Saturnian model (1904), and the Rutherford model (1911). Since the Bohr model is a quantum-physics based modification of the Rutherford model, many sources combine the two, referring to the Rutherford-Bohr model.

Introduced by Niels Bohr in 1913, the model's key success lay in explaining the Rydberg formula for the spectral emission lines of atomic hydrogen; while the Rydberg formula had been known experimentally, it did not gain a theoretical underpinning until the Bohr model was introduced. Not only did the Bohr model explain the reason for the structure of the Rydberg formula, but it provided a justification for its empirical results in terms of fundamental physical constants.

The Bohr model is a primitive model of the hydrogen atom. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics, and thus may be considered an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics, before moving on to the more accurate but more complex valence shell atom. A related model was originally proposed by Arthur Erich Haas in 1910, but was rejected.

In the early 20th century, experiments by Ernest Rutherford established that atoms consisted of a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. Given this experimental data, it was quite natural for Rutherford to consider a planetary model for the atom, the Rutherford model of 1911, with electrons orbiting a sun-like nucleus. However, the planetary model for the atom has a difficulty. The laws of classical mechanics, specifically the Larmor formula, predict that the electron will release electromagnetic radiation as it orbits a nucleus. Because the electron would be losing energy, it would gradually spiral inwards and collapse into the nucleus. This is a disaster, because it predicts that all matter is unstable. Also, as the electron spirals inward, the emission would gradually increase in frequency as the orbit got smaller and faster. This would produce a continuous smear, in frequency, of electromagnetic radiation. However, late 19th century experiments with electric discharges through various low-pressure gases in evacuated glass tubes had shown that atoms will only emit light (that is, electromagnetic radiation) at certain discrete frequencies.

To overcome this difficulty, Niels Bohr proposed, in 1913, what is now called the Bohr model of the atom. He suggested that electrons could only have certain motions:

1. The electrons travel in orbits that have discrete quantized momenta, and therefore quantized speeds and energies. That is, not every orbit is possible but only certain specific ones, at certain specific distances from the nucleus.
2. The electrons do not continuously lose energy as they travel. They can only gain and lose energy by jumping from one allowed orbit to another.
The significance of the Bohr model is that it states that the laws of classical mechanics do not apply to the motion of the electron about the nucleus. Bohr proposed rather that a new kind of mechanics, or quantum mechanics, describes the motion of the electrons around the nucleus. This model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion about a dozen years later by Werner Heisenberg. Another form of the same theory, modern quantum mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently and by different reasoning.

Other points are:

1. When an electron makes a jump from one orbit to another, the energy difference is carried away (or supplied) by a single quantum of light (called a photon) which has an energy equal to the difference in energy between the two orbits.
2. The frequency of the emitted photon is the classical orbit frequency, since photon emission corresponds to classical emission of radiation. Since there are two orbits involved in emission, this is only exact when both orbits have nearly the same frequency, and this holds only when the orbits are large.

Since the frequency of a photon is proportional to its energy, rule 2 allowed Bohr to calculate the gap in energy between levels: the level spacing is equal to Planck's constant divided by the classical orbit period. Stepping down orbit by orbit, he found that the angular momentum changed by $h/2\pi$ at every step. So he proposed that the angular momentum L is quantized according to the rule

$L = n\hbar = n\,\frac{h}{2\pi},$

where n = 1, 2, 3, … is called the principal quantum number, and h is Planck's constant. The lowest value of n is 1. This corresponds to a smallest possible radius of 0.0529 nm. This is known as the Bohr radius. Once an electron is in this lowest orbit, it can get no closer to the proton.

Bohr's condition, that the angular momentum is an integer multiple of $\hbar$, was later reinterpreted by De Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit:

$n\lambda = 2\pi r.$

Substituting De Broglie's wavelength reproduces Bohr's rule. Bohr justified his rule by appealing to the correspondence principle, without providing a wave interpretation.

Electron energy levels

The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only includes one-electron systems such as the hydrogen atom, singly-ionized helium and doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons.

To calculate the orbits requires two assumptions:

1. The electron is held in a circular orbit by electrostatic attraction. The centripetal force is the Coulomb force:

$\frac{mv^2}{r} = \frac{ke^2}{r^2},$

where m is the mass and e is the charge of the electron, and k is Coulomb's constant. This determines the speed at any radius:

$v = \sqrt{\frac{ke^2}{mr}}.$

It also determines the total energy at any radius:

$E = \frac{1}{2}mv^2 - \frac{ke^2}{r} = -\frac{ke^2}{2r}.$

The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton.
For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, which is true for non-circular orbits too by the virial theorem. For larger nuclei, replace ke² everywhere with Zke², where Z is the number of protons. For positronium, replace m with the reduced mass m/2.

2. The angular momentum of the circular orbit L = mvr is an integer multiple of $\hbar$:

$L = mvr = n\,\frac{h}{2\pi} = n\hbar,$

where n takes the values 1, 2, 3, … and is called the principal quantum number, and h is Planck's constant.

Substituting the velocity appropriate to the radius, we can solve for the radius of orbit number n:

$\sqrt{ke^2\,m\,r_n} = n\hbar, \qquad r_n = n^2\,\frac{\hbar^2}{ke^2\,m}.$

And this gives the energy levels:

$E_n = -\frac{ke^2}{2r_n} = -\frac{1}{n^2}\,\frac{(ke^2)^2\,m}{2\hbar^2} = \frac{-13.6\ \mathrm{eV}}{n^2}.$

So an electron in the lowest energy level of hydrogen (n = 1) has 13.606 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level, at n = 2, is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom.

The combination of natural constants in the energy formula is called the Rydberg energy RE:

$R_E = \frac{(ke^2)^2\,m}{2\hbar^2}.$

This expression is clarified by interpreting it in combinations which form more natural units:

$mc^2$ is the rest energy of the electron;
$\frac{ke^2}{\hbar c} = \alpha \approx \frac{1}{137}$ is the fine structure constant;
$R_E = \frac{1}{2}(mc^2)\,\alpha^2.$

For nuclei with Z protons, the energy levels are:

$E_n = -\frac{Z^2 R_E}{n^2} \quad \text{(heavy nuclei)}.$

When Z is approximately 100, the motion becomes highly relativistic. Then the Z² cancels the α² in RE, so the orbit energy becomes comparable to the rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge.

For positronium, the formula uses the reduced mass. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus:

$E_n = -\frac{R_E}{2n^2} \quad \text{(positronium)}.$

Rydberg formula

The Rydberg formula, which was known empirically before Bohr's formula, is now seen in Bohr's theory as describing the energies of transitions, or quantum jumps, between one orbital energy level and another. Bohr's formula gives the numerical value of the already-known and measured Rydberg constant, but now in terms of more fundamental constants of nature, including the electron's charge and Planck's constant.

When the electron moves from one energy level to another, a photon is emitted. Using the derived formula for the different energy levels of hydrogen one may determine the wavelengths of light that a hydrogen atom can emit. The energy of a photon emitted by a hydrogen atom is given by the difference of two hydrogen energy levels:

$E = E_i - E_f = R_E\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right),$

where nf is the final energy level and ni is the initial energy level.
Since the energy of a photon is

$E = \frac{hc}{\lambda},$

the wavelength of the photon given off is given by

$\frac{1}{\lambda} = R\left(\frac{1}{n_f^2} - \frac{1}{n_i^2}\right).$

This is known as the Rydberg formula, and the Rydberg constant R is RE/hc, or RE/2π in natural units. This formula was known in the nineteenth century to scientists studying spectroscopy, but there was no theoretical explanation for this form or a theoretical prediction for the value of R until Bohr. In fact, Bohr's derivation of the Rydberg constant was one reason that his model was immediately accepted.
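The level formula and the Rydberg formula above are easy to verify numerically. Here is a minimal sketch (mine, for illustration; the constants are rounded):

```python
# Worked example of the formulas above: Bohr energy levels E_n = -13.6 eV / n^2
# and the Rydberg formula for emitted wavelengths (illustrative values only).
R_E = 13.606      # Rydberg energy in eV
h_c = 1239.84     # h*c in eV*nm, so lambda = h*c / E_photon

def level(n):
    """Energy of hydrogen level n, in eV."""
    return -R_E / n**2

def wavelength(n_i, n_f):
    """Wavelength (nm) of the photon emitted in the jump n_i -> n_f."""
    E_photon = R_E * (1.0 / n_f**2 - 1.0 / n_i**2)
    return h_c / E_photon

for n in (1, 2, 3):
    print(n, level(n))          # -13.61, -3.40, -1.51 eV, as quoted above
# Balmer series (n_f = 2): H-alpha, H-beta, H-gamma
for n_i in (3, 4, 5):
    print(wavelength(n_i, 2))   # about 656, 486, 434 nm
```

The printed wavelengths, roughly 656 nm, 486 nm and 434 nm, are the familiar red, blue-green and violet Balmer lines of hydrogen.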
Shell Model of the Atom

Bohr extended the model of hydrogen to give an approximate model for heavier atoms. This gave a physical picture which reproduced many known atomic properties for the first time.

Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr's idea was that each discrete orbit could only hold a certain number of electrons. After that orbit is full, the next level would have to be used. This gives the atom a shell structure, in which each shell corresponds to a Bohr orbit.

This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsion of electrons is taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also orbit the inner electrons, so the effective charge Z that they see is reduced by the number of the electrons in the inner orbit.

For example, the lithium atom has two electrons in the lowest orbit, and these orbit at Z = 2, since they see the whole nuclear charge of Z = 3 minus the screening effect of the other 1s electron. They thus, in simple theory, orbit at half the Bohr radius. The outer electron in lithium orbits at roughly Z = 1, since the two inner electrons reduce the charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. (However, the numbers do not come close to being integers in this fashion, save for large atoms and for the case of the Moseley's-law atom, in which the innermost electrons do see a nuclear charge of close to Z − 1, and the outermost electron sees a charge of nearly 1.) In most other cases only electrons in the very innermost and the very outermost orbitals see a nuclear charge Z modified by a whole number.

The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and the density of pure crystalline solids. Atoms tend to get smaller as you move to the right in the periodic table, becoming much bigger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Elements at the end are chemically inert.

In the shell model, this phenomenon is explained by shell-filling. Successive atoms get smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, and this explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbit contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n = 3 d orbitals produces the 10 transition elements). The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models, and which are difficult to calculate even in the modern treatment.

Moseley's law and calculation of K-alpha X-ray emission lines

Niels Bohr said in 1962, "You see actually the Rutherford work [the nuclear atom] was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley."

In 1913 Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line) and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg and Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models), given two additional assumptions: [1] that this X-ray line comes from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to (Z − 1)².

Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not two. So Moseley published his results without a theoretical explanation.

Later, people realized that the effect was caused by charge screening. In the experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit. This vacancy is then filled by an electron from the next orbit, which has n = 2. But the n = 2 electrons see an effective charge of Z − 1, which is the value appropriate for the charge of the nucleus when one electron remains in the lowest Bohr orbit to screen it. The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines:

$E = h\nu = E_i - E_f = R_E\,(Z-1)^2\left(\frac{1}{1^2} - \frac{1}{2^2}\right),$

or

$f = \nu = R\left(\frac{3}{4}\right)(Z-1)^2 = \left(2.46\times10^{15}\ \mathrm{Hz}\right)(Z-1)^2.$

This latter relationship had been empirically derived by Moseley, in a simple plot of the square root of X-ray frequency against atomic number. Moseley's law not only established the objective meaning of atomic number (see Henry Moseley for detail) but, as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number as nuclear charge. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation.
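Moseley's relation can likewise be checked with a few lines (again an illustrative sketch of mine; copper is an arbitrary choice of element):

```python
# Numerical check of Moseley's law, f = (3/4) * R * (Z - 1)^2, and E = h * f.
h = 4.1357e-15          # Planck's constant in eV*s
R_freq = 3.2898e15      # Rydberg frequency R_E / h in Hz

def k_alpha_energy_eV(Z):
    """Approximate K-alpha photon energy from Moseley's law."""
    f = 0.75 * R_freq * (Z - 1)**2
    return h * f

# Copper (Z = 29): prediction is about 8.0 keV.
print(k_alpha_energy_eV(29))
```

The predicted value of about 8.0 keV is within roughly 1% of the measured Cu Kα1 energy of approximately 8.05 keV.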
The Bohr model gives an incorrect value $L = \hbar$ for the ground state orbital angular momentum: the angular momentum in the true ground state is known to be zero. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum may be thought of as not rotating "around" the nucleus at all, but merely as moving back and forth along an ellipse with zero area. This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability which grows denser near the nucleus. The characteristic decay length of this cloud is equal to the Bohr radius, but since Bohr worked with circular orbits, not zero-area ellipses, the fact that these two numbers exactly agree is a coincidence. The Bohr model also has difficulty with, or else fails to explain:

• Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made (see Moseley's law above). Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empirical electron-nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz-Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom.
• The relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect).
• The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin.
• The Zeeman effect - changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields.

Several enhancements to the Bohr model were proposed, most notably the Sommerfeld model or Bohr-Sommerfeld model, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits. This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Sommerfeld-Wilson quantization condition

$$\oint p \, dq = n h$$

where p is the momentum canonically conjugate to the coordinate q; the integral is the action of action-angle coordinates. This condition is the only one possible, since the quantum numbers are adiabatic invariants. The Bohr-Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The azimuthal quantum number measured the tilt of the orbital plane relative to the x-y plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom can be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and gives answers which are different. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle.
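To make the quantization condition concrete, here is the standard textbook application to the one-dimensional harmonic oscillator (a worked example added for illustration; it is not part of the original article). For $H = \frac{p^2}{2m} + \frac{m\omega^2 q^2}{2}$ at energy $E$, the orbit in phase space is an ellipse with semi-axes $\sqrt{2mE}$ and $\sqrt{2E/m\omega^2}$, so

$$\oint p \, dq = \pi \sqrt{2mE}\,\sqrt{\frac{2E}{m\omega^2}} = \frac{2\pi E}{\omega}$$

Setting $2\pi E/\omega = nh$ gives $E_n = n\hbar\omega$: the old quantum theory reproduces the equally spaced oscillator levels, though it misses the modern zero-point energy $\tfrac{1}{2}\hbar\omega$.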
In the end, the model was replaced by the modern quantum mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr model was without its successes. Calculations based on the Bohr-Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbation, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing explanation of this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr-Sommerfeld quantization conditions lead to questions in modern mathematics. A consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.

See also

• Franck-Hertz experiment provided early support for the Bohr model.
• Moseley's law provided early support for the Bohr model. See also Henry Moseley.
• Inert pair effect is adequately explained by means of the Bohr model.
• Lyman series
• Schrödinger equation
• Theoretical and experimental justification for the Schrödinger equation
• Balmer's Constant
• Quantum Mechanics
• 1913 in science

References

• Niels Bohr (1913). "On the Constitution of Atoms and Molecules (Part 1 of 3)". Philosophical Magazine 26: 1-25.
• Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part II: Systems Containing Only a Single Nucleus". Philosophical Magazine 26: 476-502.
• Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part III". Philosophical Magazine 26: 857-875.
• Niels Bohr (1914). "The spectra of helium and hydrogen". Nature 92: 231-232.
• Niels Bohr (1921). "Atomic Structure". Nature.
• A. Einstein (1917). "Zum Quantensatz von Sommerfeld und Epstein". Verhandlungen der Deutschen Physikalischen Gesellschaft 19: 82-92. Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p.434. (Provides an elegant reformulation of the Bohr-Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.)

Further reading

• Linus Pauling (1985). General Chemistry, Chapter 3 (3rd ed). Dover Publications. A great explainer of chemistry describes the Bohr model; appropriate for high school and college students.
• George Gamow (1985). Thirty Years that Shook Physics, Chapter 2. Dover Publications. A popularizer of physics explains the Bohr model in the context of the development of quantum mechanics; appropriate for high school and college students.
• Walter J. Lehmann (1972). Atomic and Molecular Structure: the development of our concepts, chapter 18. John Wiley and Sons. Great explanations; appropriate for high school and college students.
• Paul Tipler and Ralph Llewellyn (2002).
Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0.  This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Bohr_model". A list of authors is available in Wikipedia.
Weak and strong confinements in prismatic and cylindrical nanostructures

Yuri V. Vorobiev, Bruno Mera, Vítor R. Vieira, Paul P. Horley and Jesús González-Hernández

Nanoscale Research Letters 2012, 7:371. DOI: 10.1186/1556-276X-7-371. Received: 16 April 2012. Accepted: 19 June 2012. Published: 5 July 2012.

Abstract

Cylindrical nanostructures, namely, nanowires and pores, with rectangular and circular cross section are examined using mirror boundary conditions to solve the Schrödinger equation, within the effective mass approximation. The boundary conditions are stated as magnitude equivalence of the electron's Ψ function in an arbitrary point inside a three-dimensional quantum well and the image point formed by mirror reflection in the walls defining the nanostructure. Thus, two types of boundary conditions - even and odd ones - can be applied, when the Ψ functions in a point and its image are equated with the same or the opposite signs, correspondingly. In the former case, the Ψ function is non-zero at the boundary, which is the case of weak confinement. In the latter case, the Ψ function vanishes at the boundary, corresponding to strong quantum confinement. The analytical expressions for the energy spectra of an electron confined within a nanostructure obtained in the paper show a reasonable agreement with the experimental data without using any fitting parameters.

Background

Nanostructures (NS) of different kinds have been actively studied during the last two decades, both theoretically and experimentally. A special interest was focused on quasi-one-dimensional NS such as nanowires, nanorods, and elongated pores that not only modify the main material's parameters, but are also capable of introducing totally new characteristics such as optical and electrical anisotropy, birefringence, etc. In particular, the existence of nanoscale formations on the surface (or embedded into a semiconductor) results in quantum confinement effects. As the motion of the carriers (or excitons) becomes restrained, their energy spectra change, moving the permitted energy levels towards higher energies as a consequence of confinement. In the experimental measurements, such modification would be noticed as a blueshift of energy-related characteristics, such as, for example, the edge of absorption. This paper is dedicated to the theoretical investigation of the confined particle problem, aiming to explain the available experimental data based on the geometry of the corresponding nanoparticles present in the particular material. Here, we focus on elongated NS that can be approximated as prisms or cylinders with different shapes of cross section. The theoretical treatment of NS is based on the solution of the Schrödinger equation, usually within the effective mass approximation [1-4], although for small NS such an approach can be questioned, because the symmetry describing a nanoparticle may not inherit its shape symmetry but would rather depend on atomistic symmetry [5]. In addition, at small scale, it becomes necessary to take into account atomic relaxation and piezoelectric phenomena [6] that may strongly influence the energy states of confined particles and split their energy levels. The detailed consideration of these phenomena can be accounted for using the pseudopotential method [7] introduced by Zunger's group that, after a decade, became a standard energy level model for detailed description of quantum dots.
However, in cases when the dimensions of nano-objects are large enough to validate the effective mass approximation, it is possible to obtain an analytical solution to the problem of a particle confined within a quantum dot. An important element of the quantum mechanical description is the boundary conditions; the traditional impenetrable wall conditions (1) are not always realistic and (2) in many cases (depending on the shape of the NS) could not be written in simple analytical form, thus complicating the further analysis. To overcome these problems, we proposed to use mirror-like boundary conditions [8-10], assuming that the electron confined in an NS is specularly reflected by its walls acting as mirrors. In addition to a significant simplification of the problem solution, this method favors the effective mass approximation. Within the same framework, one can study pores as 'inverted' nanostructures (i.e., a void surrounded by semiconductor material) considering the 'reflection' of the particle's wave function from the surfaces limiting a pore. Thus, one will obtain essentially the same solution of the Schrödinger equation (and the energy spectrum) for both the pore and the NS of the same geometry and size. A previous attempt to treat the walls of a quantum system as mirrors in the quantum billiard problem [11] yielded quite a complicated analytical form of the boundary conditions that made the solution of the Schrödinger equation considerably more difficult. In our treatment of the NS boundary as a mirror, the boundary condition equalizes the absolute values of the particle's Ψ function in an arbitrary point inside the NS and the corresponding image point with respect to a mirror-reflective wall. Thus, depending on the sign of the equated Ψ values, one will obtain even and odd mirror boundary conditions. For the case of odd mirror boundary conditions (OMBC), the Ψ functions in a real point and its images should have the opposite sign, which means that the incident and reflected de Broglie waves cancel each other at the boundary. This case is equivalent to impenetrable walls with a vanishing Ψ function at the boundary, representing a 'strong' confinement case. However, some experimental data (see, e.g., [4]) show evidence that a particle may penetrate the barrier, later returning into the confined volume. Thus, the wave function will not vanish at the boundary, and the system should be considered as a 'weak' confinement case as long as the particle flux through the boundary is absent. This case corresponds to even mirror boundary conditions (EMBC), when the Ψ function in a real point and its images are the same. Below, we analyze solutions of the Schrödinger equation for several cylindrical structures, using mirror boundary conditions of both types and making comparison of the energy spectra obtained with experimental data found in the literature. We start with the simplest case that could be easily treated on the basis of the traditional approach - an NS shaped as a rectangular prism with a square base (with the sides a = b oriented along the axes x and y; the side c > a is set along the z direction).
Assuming, as is usually done in the literature, the absence of a potential inside the NS and separating the variables, we look for the solution of the stationary Schrödinger equation ΔΨ + k²Ψ = 0 (where k² = 2mE/ħ² and m is the particle's effective mass) as a product of plane waves propagating in both directions along the coordinate axes:

$$\Psi = \prod_j \Psi_j(x_j) = \prod_j \left[ A_j \exp(i k_j x_j) + B_j \exp(-i k_j x_j) \right] \qquad (1)$$

For this case, the even mirror boundary conditions are as follows [10]:

$$\Psi(x,y,z) = \Psi(-x,y,z) = \Psi(x,-y,z) = \Psi(x,y,-z) = \Psi(2a-x,y,z) = \Psi(x,2b-y,z) = \Psi(x,y,2c-z) \qquad (2)$$

That renders the following solution (Equation 3) of the Schrödinger equation:

$$\Psi(x,y,z) = A \cos(k_x x)\cos(k_y y)\cos(k_z z) \qquad (3)$$

with wave vector components

$$k_x a = \pi n_x, \qquad k_y b = \pi n_y, \qquad k_z c = \pi n_z \qquad (4)$$

It gives the following energy spectrum:

$$E = \frac{h^2}{8m}\left( \frac{n_x^2}{a^2} + \frac{n_y^2}{b^2} + \frac{n_z^2}{c^2} \right) = \frac{h^2}{8m}\left( \frac{n_x^2 + n_y^2}{a^2} + \frac{n_z^2}{c^2} \right) \qquad (5)$$

where the second form uses a = b. The odd mirror boundary conditions are obtained from Equation 2 by inverting the sign of the left-hand-side function. The solution will then be as follows:

$$\Psi(x,y,z) = B \sin(k_x x)\sin(k_y y)\sin(k_z z) \qquad (6)$$

The wave vector components will be the same as those presented in Equation 4, yielding the same energy spectrum (Equation 5). Using the traditional impenetrable wall boundaries, one will also obtain the solution in the form (Equation 6) that coincides with the OMBC solution that has a vanishing Ψ function at the boundary. Therefore, the energy spectrum is the same for both types of mirror boundary conditions and the impenetrable wall boundary, although the solutions themselves are not equal. In [10], we demonstrated that for an NS of spherical shape, the energy spectrum found with EMBC (weak confinement) is different from that corresponding to impenetrable wall conditions. From Equation 5, it is evident that the energy spectrum of a prismatic (cylindrical) NS is a sum of the spectra corresponding to the two-dimensional cross-section NS (a square with side length a) and the one-dimensional wire of length c. In a similar manner, the spectrum for cylinders with other cross-section shapes can be constructed using the solutions for two-dimensional triangular or hexagonal structures analyzed previously [8, 9]. Below, we present the analysis of cylindrical NS. Let us consider a nanostructure with a circular cross section of diameter a and cylinder height c. The solution of the problem using a traditional approach can be found in [12, 13].
In our case, we make variable separation in cylindrical coordinates:

$$\Psi(r,\varphi,z) = A F(r) \exp(i p \varphi)\left[ B \exp(i k z) + C \exp(-i k z) \right], \quad \text{with integer } p = 0, \pm 1, \pm 2, \dots \qquad (7)$$

We note that the value of p defines the angular momentum: L = pħ. In the case of EMBC, one can apply mirror reflection from the base, which gives B = C, resulting in the following wave function:

$$\Psi(r,\varphi,z) = A F(r) \exp(i p \varphi) \cos(k z) \qquad (7A)$$

Strong confinement (OMBC) gives B = -C, which introduces sin(kz) instead of cos(kz) in Equation 7A. The radial function F(r) is the solution of the following radial equation:

$$\frac{d^2 F(r)}{dr^2} + \frac{1}{r}\frac{dF(r)}{dr} + \left( k^2 - \frac{p^2}{r^2} \right) F(r) = 0 \qquad (8)$$

This is Bessel's differential equation in the variable kr, the solution of which is given by the cylindrical Bessel function of integer order |p|: J_|p|(kr), with k = ħ⁻¹(2mE_n)^{1/2}. Here, m is the effective mass of the particle, and E_n is the quantized kinetic energy corresponding to the motion in the two-dimensional circular quantum well. The total energy consists of the energy contributions for the motion within the cross-section plane and along the vertical axis z: E = E_n + E_z. The energy E_n depends on the value of k and is obtained using the boundary conditions. In the traditional case of impenetrable walls, the Ψ function vanishes at the boundary so that the energy values are determined by the roots (nodes) of the cylindrical Bessel function (see Figure 1 for different order numbers n, and also Table 1). The same situation will take place for OMBC, yielding zero wave function at the boundary so that the nodes q_|p|i of the Bessel function will define the energy values.

Figure 1: Cylindrical Bessel functions J_n(x). Curve numbers correspond to the order n.

If the EMBC are used, the situation becomes different, since the function values in the points approaching the boundary of the nanostructure should match those in the image points, making the boundary correspond to the extremes of the Bessel function (which was strictly proved for spherical quantum dots (QDs) [10]). Table 1 gives several values of the Bessel function argument kr corresponding to the function nodes (q_|p|i) and extremes (t_|p|i) calculated for function orders 0, 1, 2, and 3.

Table 1: Argument values at the nodes q_|p|i and extremes t_|p|i of the cylindrical Bessel function, for i = 1, ..., 4 (numerical values not reproduced here).

At the boundary, r = a/2; therefore, the corresponding value of k is 2q_|p|i/a for OMBC and 2t_|p|i/a for EMBC. The energy spectrum for a particle confined in a circular-shaped quantum well is as follows:

$$E_n = \frac{2\hbar^2}{m a^2}\, s_{|p|i}^2 = \frac{h^2}{2\pi^2 m a^2}\, s_{|p|i}^2 \qquad (9)$$

Here, the parameter s_|p|i takes the values of q_|p|i for OMBC (strong confinement) and t_|p|i for EMBC (weak confinement).
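For readers wanting to reproduce entries like those in Table 1, the nodes and extrema can be computed numerically; the following sketch (added here, not from the paper) uses SciPy. One caveat: the nodes q reproduce the values used in the text (e.g., the smallest q = 2.405 for J₀), but the first extremum of J₁ obtained as a zero of J₁' is about 1.841, while the paper quotes t₁₁ = 1.625, so the paper's convention for counting extrema evidently differs.

```python
# Numerical sketch: nodes and extrema of cylindrical Bessel functions.
import numpy as np
from scipy.special import jn_zeros, jnp_zeros

for p in range(4):
    q = jn_zeros(p, 4)    # first four zeros of J_p   -> OMBC (strong confinement)
    t = jnp_zeros(p, 4)   # first four zeros of J_p'  -> extrema of J_p (EMBC)
    print(p, np.round(q, 3), np.round(t, 3))
```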
The quantization along the z axis for both boundary condition types will be

$$E_z = \frac{h^2}{8m}\frac{n_z^2}{c^2}$$

yielding the total energy

$$E = \frac{h^2}{2m}\left( \frac{s_{|p|i}^2}{\pi^2 a^2} + \frac{n_z^2}{4 c^2} \right) \qquad (10)$$

In the case of EMBC, the ground state (GS) energy will be obtained with t₁₁ = 1.625:

$$E_{GS} = \frac{h^2}{2m}\left( \frac{0.268}{a^2} + \frac{1}{4 c^2} \right) \qquad (11)$$

In the OMBC case, the GS will be determined by the smallest q value of 2.4:

$$E_{GS} = \frac{h^2}{2m}\left( \frac{0.584}{a^2} + \frac{1}{4 c^2} \right) \qquad (11A)$$

Equations 10, 11, and 11A can be used for the analysis of optical processes in the NS discussed. In particular, the blueshift of the exciton ground state can be found from Equations 11 and 11A if one substitutes the reduced exciton mass in place of the particle mass m. Using Equation 10, it is possible to obtain in a similar way the energies corresponding to the higher excited states. For long NS with sufficiently large c, the second term in the energy does not affect the GS. Thus, the solution for cylindrical NS based on even mirror boundary conditions (EMBC, weak confinement) gives a GS shift due to quantum confinement that is (2.4/1.625)² = 2.18 times smaller than the value obtained for the strong confinement case. In the case of the spherical QD [10], the difference was four times. It is reasonable that for strong confinement, the blueshift exceeds that obtained for the weak confinement case. To illustrate this, we present in Figure 2 the comparison of the ground state energies obtained with OMBC and EMBC (using Equations 11 and 11A) as functions of the NS diameter for a cylindrical quantum well with the parameters of silicon (effective masses of 0.26 for an electron and 0.49 for a hole, which corresponds to a reduced exciton mass of 0.17; the bandgap is 1.1 eV at 300 K). As one can see from the figure, the difference of the exciton bandgap scales down with increasing NS diameter, with invariably higher values for the strong confinement case described by OMBC.

Figure 2: Dependence of the ground state energy on the diameter of a cylindrical nanostructure. The plot shows the data obtained with odd and even mirror boundary conditions for an NS with the parameters of silicon.

The choice of OMBC or EMBC has to be made taking into account the probability of electron tunneling through the walls forming the nanostructure. One can expect that in the case of an isolated NS, the strong confinement (OMBC) approximation will be more appropriate, whereas for an NS surrounded by another solid or liquid medium (core-shell QDs [10] and pores in semiconductor media), weak confinement with EMBC should be used.

Results and discussion

Considerable scientific interest has been attracted to semiconductor nanorods (nanowires) and cylindrical pores. Let us mention here publications dealing with arrays of cylindrical pores in sapphire [14], ZnO nanorods grown within these pores [15], as well as CuS and In2O3 nanowires. Usually, the experiments report on relatively large structures measuring 30 nm or more in diameter. As one can see from Equations 11 and 11A, in these cases, the expected blueshift will be about 0.01 eV or less for both the weak and strong confinements.
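As a numerical cross-check of Equations 11 and 11A (a sketch added here, not from the paper), one can evaluate the ground-state shift for a long silicon cylinder, dropping the 1/(4c²) term (i.e., c → ∞) and using the reduced exciton mass of 0.17 m₀ quoted above:

```python
# Ground-state confinement shift for a long silicon cylinder, Eqs. 11/11A.
h = 6.626e-34        # Planck constant, J*s
m0 = 9.109e-31       # electron mass, kg
eV = 1.602e-19       # J per eV
mu = 0.17 * m0       # reduced exciton mass for silicon (from the text)

def gs_shift_eV(a_nm, coeff):
    """coeff = 0.268 for EMBC (weak), 0.584 for OMBC (strong)."""
    a = a_nm * 1e-9
    return (h**2 / (2 * mu)) * coeff / a**2 / eV

for a_nm in (5.0, 30.0):
    weak = gs_shift_eV(a_nm, 0.268)
    strong = gs_shift_eV(a_nm, 0.584)
    print(a_nm, round(weak, 4), round(strong, 4), round(strong / weak, 2))
# At a = 30 nm both shifts come out below ~0.01 eV, and the strong/weak
# ratio is (2.4/1.625)^2 = 2.18, consistent with the statements in the text.
```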
Nevertheless, there exist literature data referring to nanorods of sufficiently small diameter for a pronounced confinement effect. A paper [16] reports on CdS nanorods with a diameter of 5 nm and a length of 40 nm embedded into a liquid crystal. The authors study the optical anisotropy caused by the alignment of the nanorods. To determine it, they measure the polarization of photoluminescence due to electron-hole recombination, reporting that the spectral maximum of luminescence is located at 485 nm (2.56 eV), which exceeds the bandgap of bulk CdS by 0.14 eV. Taking the electron effective mass in CdS [17] as 0.16 m0 and the hole effective mass as 0.53 m0, one can find the reduced mass μ = 0.134 m0 and a blueshift of 0.12 eV using Equation 11, which agrees reasonably with the experiment. As the CdS nanostructure is surrounded by the liquid crystal medium, we were using the EMBC (weak confinement) approximation. Another study [18] is focused on the optical properties of CuS nanorods measuring 6 to 8 nm in diameter and 40 to 60 nm in length; the authors report a definite blueshift of the fundamental absorption edge. Unfortunately, we found no data on the effective masses for CuS, so it was not possible to make a numerical comparison with the theory. A particular example of cylindrical QDs is presented by quasi-circular organic molecules like coronene C24H12 (see Figure 3). In this case c << a, which makes the second term in Equations 10, 11, and 11A very large even for nz = 1, meaning that it has no contribution to the optical properties of the molecule in visible light, because the transitions between the states with different nz will correspond to radiation in the deep ultraviolet. Therefore, the spectrum is defined by the first term in Equations 10 and 11, which essentially replicates the solution obtained for the case of a long cylinder.

Figure 3: Coronene molecule: (a) formula and (b) computer-rendered three-dimensional image.

Another paper [19] presents the experimental data concerning the optical properties of coronene molecules in tetrahydrofuran (THF) solution. Since the molecules are submerged into the medium, we expect that weak confinement/EMBC will be most appropriate for the solution of the problem. Strong absorption lines were registered at photon energies of 4.1 to 4.3 eV, with weaker absorption down to 3.5 eV. To use our methodology, one should first determine the diameter a of a circle embracing the molecule with its 12 outer atoms of carbon (Figure 3). The C-C bond length in coronene is d = 1.4 Å, which corresponds to the side of a hexagon. Thus, one would have a = d√28 = 0.741 nm. Taking m in Equation 11 as the free electron mass and using only the first term, we obtain the ground state energy EGS = 0.73 eV. The higher energy states (Equation 10) will be defined by the values of s|p|i = t|p|i equal to 2.92, 3.713, 4.30, etc. The corresponding energies are 2.353, 3.805, and 5.1 eV, resulting in transition energies of 1.62, 3.1, and 4.37 eV. The first value is out of the spectral range investigated in [19]; the other two could reasonably fit the absorption observed. If we attempt to treat the case on the basis of the strong confinement approximation (OMBC), one should use the q|p|i values in the formulas (Equations 10 and 11A), yielding a ground state of 1.591 eV and excited states at 3.78, 7.21, and 8.35 eV.
Therefore, the transition energies would be 2.19, 5.62, and 6.76 eV, which have nothing in common with the experimental values, proving that the previous conclusion to use EMBC, based on the fact that the coronene molecules are embedded into the THF medium, was the right one. Yet another paper [20] is devoted to studying coronene-like nitride molecules with the composition N12X12H12, where X can be B, Al, Ga or In. Depending on X, the bond length will vary, giving different values of the well diameter a. The authors of [20] give the transition energies between the ground state and the first excited state, corresponding to the HOMO-LUMO transition EHL. For these isolated molecules, the strong confinement case (OMBC) is expected to be appropriate. The bond lengths and EHL values reported in [20] are listed in Table 2 together with the values of a calculated from the bond length and the transition energies ΔE found using the expression (Equation 10) with the corresponding q values. One can see that the ΔE values are reasonably close to the experimental EHL. Solution of the same problem using weak confinement/EMBC results in large discrepancies that fail to explain the experimental data, confirming the correctness of the decision to choose OMBC for isolated molecules.

Table 2: The lowest transition energies in coronene-like molecules: bond lengths, calculated well diameters a, transition energies ΔE, and EHL (eV) from [20] (numerical values not reproduced here).

Conclusions

A theoretical description of prismatic and cylindrical nanostructures (including pores in a semiconductor) is made using two types of mirror boundary conditions for the solution of the Schrödinger equation, resulting in a simple analytical procedure to obtain wave functions that offer a reasonably good description of the optical properties of nanostructures of various shapes. The expressions for the energy spectra are defined by the geometry and dimensions of the nanostructures. The even mirror boundary conditions correspond to weak confinement, which is applicable for the cases when the nanostructure is embedded into another medium (which is especially true for the case of a pore) that enables tunneling through the boundary of the nanostructure. In contrast, odd mirror boundary conditions are more appropriate in the treatment of isolated nanostructures where strong confinement exists. Both cases are illustrated with experimental data, proving good applicability of the corresponding type of boundary conditions.

Acknowledgements

The authors thank the FCT Projeto Estratégico PEst-OE/FIS/UI0091/2011 (Portugal) and CONACYT Basic Science Project 129269 (Mexico).

Authors' Affiliations

CINVESTAV-Querétaro, Libramiento Norponiente 2000, Fracc. Real de Juriquilla
Centro de Física das Interacções Fundamentais, Instituto Superior Técnico, Universidade Técnica de Lisboa, Avenida Rovisco Pais
CIMAV Chihuahua/Monterrey

References

1. Efros AL, Efros AL: Interband absorption of light in a semiconductor sphere. Sov Phys Semicond 1982, 16(7):772-775.
2. Gaponenko SV: Optical Properties of Semiconductor Nanocrystals. Cambridge University Press, Cambridge; 1998.
3. Liu JL, Wu WG, Balandin A, Jin GL, Wang KL: Intersubband absorption in boron-doped multiple Ge quantum dots. Appl Phys Lett 1999, 74:185-187. 10.1063/1.123287
4. Dabbousi BO, Rodriguez-Viejo J, Mikulec FV, Heine JR, Mattoussi H, Ober R, Jensen KF, Bawendi MG: (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. J Phys Chem B 1997, 101:9463-9475.
5.
Bester G, Zunger A: Cylindrically shaped zinc-blende semiconductor quantum dots do not have cylindrical symmetry: atomistic symmetry, atomic relaxation, and piezoelectric effects. Physical Review B 2005, 71:045318.
6. Bester G, Wu X, Vanderbilt D, Zunger A: Importance of second-order piezoelectric effects in zinc-blende semiconductors. Phys Rev Lett 2006, 96:187602.
7. Zunger A: Pseudopotential theory of semiconductor quantum dots. Phys Stat Sol B 2001, 224:727-734. 10.1002/(SICI)1521-3951(200104)224:3<727::AID-PSSB727>3.0.CO;2-9
8. Vieira VR, Vorobiev YV, Horley PP, Gorley PM: Theoretical description of energy spectra of nanostructures assuming specular reflection of electron from the structure boundary. Phys Stat Sol C 2008, 5:3802-3805. 10.1002/pssc.200780107
9. Vorobiev YV, Vieira VR, Horley PP, Gorley PN, González-Hernández J: Energy spectrum of an electron confined in the hexagon-shaped quantum well. Science in China Series E: Technological Sciences 2009, 52:15-18. 10.1007/s11431-008-0348-6
10. Vorobiev YV, Horley PP, Vieira VR: Effect of boundary conditions on the energy spectra of semiconductor quantum dots calculated in the effective mass approximation. Physica E 2010, 42:2264-2267. 10.1016/j.physe.2010.04.027
11. Liboff RL, Greenberg J: The hexagon quantum billiard. J Stat Phys 2001, 105:389-402. 10.1023/A:1012298530550
12. Robinett RW: Visualizing the solutions for the circular infinite well in quantum and classical mechanics. Am J Phys 1996, 64(4):440-446. 10.1119/1.18188
13. Mel'nikov LA, Kurganov AV: Model of a quantum well rolled up into a cylinder and its applications to the calculation of the energy structure of tubelene. Tech Phys Lett 1997, 23(1):65-67. 10.1134/1.1261619
14. Choi J, Luo Y, Wehrspohn RB, Hillebrand R, Schilling J, Gösele U: Perfect two-dimensional porous alumina photonic crystals with duplex oxide layers. J Appl Phys 2003, 94(4):4757-4762.
15. Zheng MJ, Zhang LD, Li GH, Shen WZ: Fabrication and optical properties of large-scale uniform zinc oxide nanowire arrays by one-step electrochemical deposition technique. Chem Phys Lett 2002, 363:123-128. 10.1016/S0009-2614(02)01106-5
16. Wu K-J, Chu K-C, Chao C-Y, Chen YF, Lai C-W, Kang CC, Chen C-Y, Chou P-T: CdS nanorods embedded in liquid crystal cells for smart optoelectronic devices. Nano Lett 2007, 7(1):1908-1913.
17. Singh J: Physics of Semiconductors and Their Heterostructures. McGraw-Hill, New York; 1993.
18. Freeda MA, Mahadevan CK, Ramalingom S: Optical and electrical properties of CuS nanorods. Archives of Physics Research 2011, 2(3):175-179.
19. Xiao J, Yang H, Yin Z, Guo J, Boey F, Zhang H, Zhang Q: Preparation, characterization and photoswitching/light-emitting behaviors of coronene nanowires. J Mater Chem 2011, 21:1423-1427. 10.1039/c0jm02350g
20. Chigo Anota E, Salazar Villanueva M, Hernández Cocoletzi H: Electronic properties of group III-A nitride sheets by molecular simulation. Physica Status Solidi C 2010, 7:2252-2254. 10.1002/pssc.200983499

© Vorobiev et al.; licensee Springer. 2012
Density matrix and its classical counterpart

In my current opinion, each of the three alternative dynamical equations

• equation for the density matrix
• Heisenberg equations for the operators
• Feynman's path integral

offers a better starting point for understanding quantum mechanics than Schrödinger's equation for the wave function. However, first, let me note that Schrödinger is getting way too much credit while Heisenberg, who should be credited for dynamics, is being screwed. Schrödinger found his equation in 1926. Heisenberg found his equations, that were later shown equivalent to Schrödinger's picture by Dirac, in 1925. So Heisenberg was earlier. Moreover, Heisenberg understood what quantum mechanics meant, unlike Schrödinger. Clearly, people talk about Schrödinger's equation because they are able to understand a partial differential equation - not too different from Maxwell's equations, right? - and they actively want to mislead themselves into thinking that the wave function is a classical wave of a sort. It's not. Heisenberg's equations for the operators are a more direct quantum counterpart of Newton's equations of motion. In this picture, the only novelty is the nonzero commutators between the observables - the Heisenberg uncertainty principle etc.

Classical phase space

Fine. Let me now return to the topic I promised in the title - the density-matrix-centered interpretation of the classical-quantum relationship. We may formulate classical mechanics by the differential equations for x(t), p(t), or whatever degrees of freedom we have. They obey Newton's equations or their generalizations for other degrees of freedom, such as Maxwell's equations, and so on. I hope you're still with me. But in reality, we can't know all these observables with complete accuracy and certainty. Measurements of positions and velocities have nonzero error margins; the detailed motion of molecules in some gas is chaotic and we don't know it; the evolution brings a lot of extra uncertainty, and the uncertainty grows bigger with time. For all those and other reasons, it's more realistic to assume that even in classical physics, we don't know the exact x(t), p(t). Instead, we know some probability distribution on the phase space (the space of all initial states, or states defining where the system is at any moment). In this text, x and p will be shortcuts for many coordinates on the phase space. The interpretation of the phase space probability distribution is that

dP = rho(x_i, p_i) d^N x d^N p, i = 1...N

is the infinitesimal probability dP that the system is found around the point given by the x and p coordinates, within a small hypercube of volume given by the measure factor. OK? Of course, if we know how x and p would evolve, we may also determine how the probability distribution would evolve. It would evolve according to Liouville's equation for the Hamiltonian system. It can be written as

∂rho/∂t = {H, rho}

where the curly brackets are the Poisson brackets. You will find everything on that Wikipedia page if you need to refresh your memory. Now, I want to emphasize that in theory, the description by rho also contains the case in which x, p are known accurately and with certainty: in that case, rho is a delta function located on the point x(t), p(t) of the phase space. In classical physics, a strictly sharp delta function will remain strictly sharp. Its location is moving as a function of t just like x, p would. However, the more general, "widely spread" form of rho is much more realistic and universal in its applications.
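To visualize this, here is a minimal numerical sketch (mine, not part of the original argument) of the Liouville transport for a harmonic oscillator with m = omega = 1; each phase-space point of the "cloud" rho just follows its classical orbit:

```python
# Hamilton's equations xdot = p, pdot = -x rotate the (x, p) plane rigidly,
# so rho is simply carried along the classical orbits.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(10000, 2))  # samples of rho

def evolve(points, t):
    c, s = np.cos(t), np.sin(t)
    x, p = points[:, 0], points[:, 1]
    return np.column_stack([x * c + p * s, -x * s + p * c])

for t in (0.0, 1.0, 2.0):
    print(t, evolve(cloud, t).mean(axis=0).round(3))  # center follows x(t), p(t)
```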
Even when it's spread, in classical physics, you could always imagine that there existed an actual x(t), p(t) at each moment - you just didn't know what it was. However, this thesis has several problems: you can't really prove it, because classical physics that uses rho(x,p) as the fundamental object, and encourages you to calculate the probabilities from it, is totally consistent as well. Moreover, you can't learn anything - make any new predictions or explanations - out of the purely ideological opinion that sharp values of x(t), p(t) existed - if you don't actually know what those x(t), p(t) exactly were. So the idea that a sharp x(t), p(t) exists in classical physics is pure philosophy - there is no physics behind it. This assumption isn't needed for physics to work (physics is consistent with its negation as well); and it's not useful to learn anything (which is why the physical laws without this assumption of "realism" are complete according to any operational definition of the word "complete").

Quantum mechanics: density matrix

Now, the object rho(x,p) is replaced by an operator rho in quantum mechanics. You may write a general operator acting on your quantum mechanical space as a function of x, p - the "generating" operators of your Hilbert space that know about all your degrees of freedom. If you worked with spins, you would have to add spins; if you worked with field theory, you would have to put the fields instead, and so on. It's obvious how the classical Liouville equation for rho(x,p) is generalized in quantum mechanics: the Poisson bracket is simply replaced by the commutator with the right normalization. The equation for the operator rho becomes:

iħ ∂rho/∂t = [H, rho] := H rho - rho H

This is easily obtained from the Schrödinger equation for psi if you substitute rho = |psi><psi| and use the Leibniz rule for the derivative of the product, the ordinary Schrödinger equation for psi, and the complex conjugate Schrödinger equation for the bra-vector psi*. In general, rho is a combination of such products, as we will mention, but the equation is linear, so the equation works for the combinations as well. If you think about it for a while, we have made a truly minimalistic change, indeed. In classical physics, the probabilities of different states were given by rho(x,p), which was a function of c-numbers x, p, the coordinates on the phase space. In quantum mechanics, rho is still a general operator, but it's a function of the operators x, p that don't commute with each other. But you may still imagine particular functional prescriptions for rho such as

rho = exp(-(x-x0)^2 - (p-p0)^2)

which is a packet concentrated near x0, p0. Of course, three constants should be added, in front of the exponential (to make it normalized) as well as in both terms in the exponent (to determine the width and obey the dimensional analysis). But I want to keep the formulae simple and comprehensible. It's important to realize that this definition of rho could work for operators rho, x, p, too. In some sense, the number of operators on the Hilbert space is equal to the number of functions on the classical phase space - there exist natural one-to-one maps based on various "orderings" of the operators. The trace of rho is equal to one - it's the total probability. This is the quantum counterpart of the fact that the integral of rho(x,p) over the phase space is one. The corresponding equations - Liouville's equation and the rho-Schrödinger equation - preserve this normalization of rho.
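A tiny sketch of that last statement for a two-level system (an added illustration with an arbitrary Hamiltonian, hbar = 1): the von Neumann evolution rho(t) = U rho U† preserves the trace and keeps a pure state pure:

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])    # a Hermitian 2x2 Hamiltonian
psi = np.array([1.0, 0.0], dtype=complex)
rho = np.outer(psi, psi.conj())            # pure state |psi><psi|

for t in (0.0, 0.5, 1.0):
    U = expm(-1j * H * t)
    rho_t = U @ rho @ U.conj().T
    # trace stays 1; eigenvalues stay (0, 1)
    print(t, np.trace(rho_t).real.round(12), np.linalg.eigvalsh(rho_t).round(6))
```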
The difference between the classical and quantum dynamical equations is just hidden in the fact that rho, x, p are operators in quantum theory and they generally don't commute with each other. But otherwise, the interpretation from classical physics may be pretty much directly extended to the quantum theory! In particular, just like the values of rho(x,p) determine the probabilities, as we announced in the definition of rho at the beginning, the operator rho in quantum mechanics determines the probabilities of any states. How does it work? Well, if you pick a particular state in the Hilbert space, it has a well-defined probability if it's an eigenstate of the density matrix. This is an unusual operation that's not usually talked about - because the density matrix isn't an "observable" in the usual sense - like positions or momenta etc. But it's still an operator on the Hilbert space. I will formally treat the density matrix rho as the "operator for the probability".

Pure states

Imagine that you have a pure state psi. In that case, the density matrix rho is just the tensor product psi.psi* we wrote some time ago. Can you say what is the probability that the system described by this rho is finding itself in a particular state of the Hilbert space such as chi? Yes, but using the analogy between observables and the density matrix, you can say such a thing only if chi is an eigenstate of rho. When is chi an eigenstate of the "tensor square" psi.psi*? Well, it's easy. It is if chi is either proportional to psi, or if it's orthogonal to psi. In the former case, the eigenvalue of rho is 1 (Yes) and in the latter case, the eigenvalue of rho is 0 (No). It's this simple. Of course, it's not hard to see that the eigenvalues of rho = psi.psi* are (1,0,0,0,0...). Just choose an orthonormal basis of the Hilbert space whose first basis vector is psi itself. So I would like you to adopt the following restricted description: you can only "sharply" say what is the probability that the state described by psi - or by rho = psi.psi* - is finding itself in the state chi either if psi, chi are proportional to each other, in which case the probability is 100%, or if they're orthogonal, in which case the probability is 0%. No other linear superpositions have well-defined probabilities because they're not eigenstates of rho. This is different from the conventional treatment that talks about probabilities for any state to be in any other state. But what this conventional treatment really means is the "expectation value of the probability", chi*.rho.chi, or |(psi*.chi)|^2. However, because chi is not an eigenstate of rho for a general chi, you shouldn't say that the probability is sharply defined! ;-) You may also allow a collapse of the wave function onto any of the eigenvectors of such a rho. Why? Because it doesn't do anything at all! ;-) The state psi or its density matrix rho = psi.psi* may collapse into some eigenstates chi of rho with the prescribed probabilities (the eigenvalues of rho in those states). The probabilities are 100% for chi = psi (up to a normalization) and 0% for the orthogonal choices. So the state psi will collapse to psi with probability 100% and nothing changes! The collapse to the orthogonal vector has a vanishing probability, and the collapse to generic linear superpositions isn't allowed because they are not eigenstates of the "probability operator" rho, the density matrix.
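A quick numerical check of these pure-state claims (an added sketch, not part of the original post): for rho = psi.psi*, the spectrum is (1, 0, 0, ...), and the eigenvector with eigenvalue 1 is proportional to psi itself.

```python
import numpy as np

psi = np.array([1.0, 2.0, 1.0j, 0.5], dtype=complex)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())             # rho = |psi><psi|

vals, vecs = np.linalg.eigh(rho)            # rho is Hermitian
print(vals.round(10))                       # -> [0, 0, 0, 1]
print(np.abs(np.vdot(vecs[:, -1], psi)).round(10))   # overlap with psi -> 1
```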
Now, in general, you don't want rho to describe a pure state psi. In general, rho is a combination of the type

rho = sum_{i=1...N} p_i |psi_i><psi_i|

where the states psi_i don't have to be orthogonal, but the density matrix is still required to have its trace equal to one. In general, the density matrix is still a Hermitian operator. It follows that it can be diagonalized. Now, the eigenvectors of rho pick a privileged basis of states that have well-defined probabilities - the corresponding eigenvalues of rho - and they can be measured. In fact, you may always imagine that at any moment t, someone in the Heavens or elsewhere "made" rho collapse so that it was reduced to the form chi.chi* where chi is an eigenvector of rho before the collapse. The probability that the collapse picked a particular chi is the corresponding eigenvalue of rho in this state chi. Most people including many physics PhDs are obsessed with the collapse and because they find the spreading wave function too complicated while their brains are inadequate to deal with quantum mechanics, they're impatiently waiting for the right to make the wave function collapse, referring to their consciousness as the ultimate justification of this right to finally kill the spreading wave function. They're also interested in the other creatures who have consciousness and the same glorious rights to collapse "waves" as they do. ;-) I have good news for you! You're allowed to make the density matrix rho, the generalized state vector, collapse at any moment you wish, without compromising the predictions (e.g. without destroying any interference) whatsoever! The only condition is that you must collapse it to one of the eigenstates of rho at the moment, and the probability imagined by you that the collapse took you to a particular eigenstate chi of the matrix rho has to be given by the eigenvalue of rho in chi. Isn't it great? This erases all your psychological problems with subjectivity etc. In fact, all "realist" observers may agree that the collapse is taking place all the time. (Well, because of relativity, "at a given moment" will still mean different things for differently moving observers, but staunch "realists" don't care about relativity much.) In practice, this may fail to satisfy the emotions of the staunchest anti-quantum zealots. Why? Well, we have already mentioned one reason. If you study the evolution of a pure quantum system that is remaining pure and coherent, the eigenvalues of rho remain (0,0,...,0,1,0,...,0), which means that the collapse doesn't do anything. You're never allowed to collapse into a wrong basis - a pure state may only be collapsed onto itself. So the wave function continues to spread as time goes - the genuine anti-quantum zealots won't like it. Even if rho has many nonzero eigenvalues, the corresponding eigenstates are not necessarily the states that are easy to imagine for the anti-quantum zealots - the eigenstates of rho are typically different than the "intuitively natural" (wrong) states into which the anti-quantum zealots would like to collapse things. However, if you describe a subsystem by its density matrix that has been traced over the other, environmental degrees of freedom, decoherence guarantees that rho for this system will have many nonzero eigenvalues. The corresponding eigenstates of rho will be close to some intuitively "classical" states and in that case, the collapse does something nontrivial.
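Here is a minimal sketch of that tracing-over step (my illustration, using an arbitrary entangled two-qubit state): reducing the pure state to one qubit produces a density matrix with several nonzero eigenvalues - the privileged, measurable states just discussed.

```python
import numpy as np

a, b = np.sqrt(0.7), np.sqrt(0.3)
Psi = np.zeros(4, dtype=complex)            # basis order: |00>, |01>, |10>, |11>
Psi[0], Psi[3] = a, b                       # |Psi> = a|00> + b|11>

rho_full = np.outer(Psi, Psi.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho_full, axis1=1, axis2=3)   # trace over the second qubit
print(np.linalg.eigvalsh(rho_sys).round(6))      # -> [0.3, 0.7]: a mixed state
```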
Collapse is just in your mind

I have formulated my interpretation of quantum mechanics as allowing you to collapse rho into one of its eigenstates, with probabilities given by the corresponding eigenvalues of rho. The essential property of this rule is that if you insert these collapses, with random outcomes obeying this rule, at arbitrary moments of your time, your predictions won't change at all. So the collapse is completely subjective. In the past, you could have thought that when I say that the collapse is just a subjective change of knowledge, it was a philosophical assumption. However, in this case, it is actually a provable theorem! What I mean is that when you insert the collapse of rho onto the rho eigenstates as described above, you will get identical predictions for anything you calculate. Why? Well, it's simple. When you decompose rho into the eigenstates - and in this case, you are required to use the right basis of rho eigenstates psi_i (which always exists) - each of the terms will be evolving independently, and the different states psi_i which are orthogonal at the beginning will stay orthogonal in the future as well, because of the unitarity of the evolution. So if you later, at time T, make a measurement of something, which means that you decompose rho(T) into the eigenstates according to the same formula as the last displayed one, and you ask about the probabilities of different outcomes, it's clear that each individual term in the decomposition of rho(T) has to come from at most one term in the decomposition of rho(t). To calculate the probability, you will only need the p_i extracted from rho(t), while all the other terms will contribute nothing to this evolution and you may forget them. So it doesn't matter whether you calculate the particular probability of an outcome encoded in the decomposition of rho(T) from the full evolution, or whether you insert the imagined collapse at time t; in both cases, the evolution up to the time t will just add a factor p_i from the decomposition of rho(t). The density matrix rho is gaining an ever larger number of nonzero eigenvalues - you may view this process, resulting from decoherence (= the loss of any realistic chance for different outcomes to interfere with each other in the future, i.e. the loss of the information about the relative phase of their amplitudes), as the "splitting of the many worlds". However, it's very clear that you only have one world in the prescription above. Moreover, the very point of this proof was just the opposite - to show that the collapse, when properly defined, is completely subjective and has no detectable consequences. I guess that MWI bigots want to use some related thinking exactly for the opposite cause - to claim that there objectively exist some things that don't objectively exist. In this picture, the procedures used to obtain predictions from quantum mechanics are pretty much identical to the procedures based on the classical phase space distribution function in classical physics. The only difference is that things including rho are operators, and they only have well-defined quantities (such as probabilities in the case of rho) when acting on their eigenstates. The operators don't commute.

Collapse to a bigger subspace

In the text above, the possibility of a nontrivial collapse only existed because we were tracing over the environment, thus obtaining a density matrix with an increasing number of independent eigenstates with nonzero eigenvalues. Can we do it without this tracing over? Yes.
In this case, the density matrix will stay pure if it was pure. As discussed above, a collapse to a pure state is not doing anything. Instead, you may consider a collapse to a set of projection operators P_1...P_K that sum to one. Analogously to the collapse into a pure state, a collapse into projectors may be freely inserted at any moment whenever all the operators P_1...P_K are chosen to commute with the density matrix rho. If that's so, you may imagine that after the collapse, the density matrix becomes rho.P_j = P_j.rho with probability Tr(rho.P_j) - which is exactly what the "reduced" density matrix has to be divided by to normalize it again. The relationship between the projection operators and the pure states that would appear in the "pure collapse" in the previous formalism may be imagined so that e.g. the projection operator is the projection operator on all microstates with a fixed state of the observer system and any state of what we used to call the environment. Again, in this case, one may show that such a collapse - describing a measurement of "a subset of degrees of freedom" - will have no impact on future predictions; it is a purely subjective trick. Needless to say, when I get to this formulation based on projectors that sum up to one, I am "almost" formulating quantum mechanics via the "consistent histories" approach. It's almost the same thing at this moment. Except that I am using a simpler and more explicit rule for the consistency - the vanishing of the commutator between the projection operators and the density matrix rho. Note that the eigenvalues of rho which survive the collapse don't have to be equal to each other - the only condition is that you're collapsing onto a subspace of the Hilbert space spanned by rho eigenstates. Every observer also has the right to use projection operators whose commutator [P_i, rho] is not exactly zero but a small nonzero number. In that case, he will introduce errors and inconsistencies, and it's up to him what is tolerable. That's analogous to small violations of the "consistency condition" in the usual consistent histories approach.
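To close, a small numerical check (again my own sketch) of the bookkeeping behind the projector collapse: when the P_j commute with rho and sum to one, the collapsed pieces P_j rho P_j carry probabilities Tr(rho P_j) and recombine to the original rho - nothing is lost.

```python
import numpy as np

rho = np.diag([0.5, 0.3, 0.2]).astype(complex)   # a diagonal density matrix
P1 = np.diag([1.0, 0.0, 0.0]).astype(complex)    # projector onto state 1
P2 = np.diag([0.0, 1.0, 1.0]).astype(complex)    # projector onto states 2 and 3
assert np.allclose(P1 + P2, np.eye(3))           # the projectors sum to one

pieces = [P @ rho @ P for P in (P1, P2)]
print([np.trace(p).real for p in pieces])        # probabilities: [0.5, 0.5]
print(np.allclose(sum(pieces), rho))             # True: rho is recovered
```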
Sunday, April 30, 2006

The Final Theory: two stars

A short comment: a reader has pointed out that right now, the crackpot book by

• Mark McCutcheon

has an average rating of 2 stars because of 200 one-star reviews that suddenly appeared on the website. The new reviews are a lot of fun: many reviews come from Brian Powell, Jack Sarfatti, Greg Jones, Quantoken, David Tong, me, and many others. Many of the readers have written several reviews - and you can see how they struggled to make their reviews acceptable. ;-) When we first informed you about the strange reviewing system, McCutcheon's book had an average of 5 stars and no bad reviews at all. The previous blog article about this story is here.

Saturday, April 29, 2006

How to spend 6 billion dollars?

Related: What can you buy for 300 billion dollars? What is the best way to spend 6 billion dollars?

• Two weeks of Kyoto
• ILC: the linear collider
• One month of war in Iraq
• Millions of free PCs for kids
• Ten space shuttle flights

Additional comments: the world pays about 6 billion dollars for two weeks of the Kyoto protocol, which cools down the Earth by 0.00006 degrees or so. The International Linear Collider would have the capacity to measure physics at several TeV more accurately than the LHC, but it is also more expensive - about 6 billion dollars. The U.S. pays 6 billion dollars for one month of the military presence in Iraq. One could buy 60 million computers for kids if the price were $100 as MIT promises. Whenever you launch the space shuttle, you pay about 600 million USD.

Klaus meets Schwarzenegger

Figure 1: Californian leader Schwarzenegger with his Czech counterpart during a friendly encounter in Sacramento. Arnold has accepted Klaus' invitation to the Czech Republic.

When Czech president Václav Klaus visited Harvard, he complained that the capitalism of the European Union is not the genuine capitalism that he always believed in - the capitalism as taught by the Chicago school - but rather a kind of distorted, socialized capitalism, something that could be taught here at Harvard. ;-) Finally, he could speak to his peers - at the Graduate School of Business at the University of Chicago. The other speakers over there agree with Klaus' opinions. In his speech, he explained that the Velvet Revolution was done by the people inside the country: it was not imported. Equally importantly, Americans have a naive understanding of the European unification because they don't see the centralized, anti-liberal dimension of this process.

Twenty years after Chernobyl

On Wednesday morning, it's been 20 years since the Chernobyl disaster. The communist regimes could not pretend that nothing had happened (although in the era before Gorbachev, they could have tried to do so), but they had attempted to downplay the impact of the meltdown. At least this is what we used to say for twenty years. You may want to look at how the BBC news about the Chernobyl tragedy looked 20 years ago. Ukraine remembered the event (see the pictures) and Yushchenko wants to attract tourists to Chernobyl. You may see a photo gallery here. Despite the legacy, Ukraine has plans to expand nuclear energy. Today I think that the communist authorities did more or less exactly what they should have done - for example, trying to avoid irrational panic. It seems that only 56 people were killed directly and 4,000 people indirectly. See here.
On the other hand, about 300,000 people were evacuated, which was a reasonable decision, too. And animals are perhaps the best witnesses for my statements: the exclusion zone - now an official national park - has become a haven for wildlife, as National Geographic also explains:
• Reappeared: Lynx, eagle owl, great white egret, nesting swans, and possibly a bear
• Introduced: European bison, Przewalski's horse
• Booming mammals: Badger, beaver, boar, deer, elk, fox, hare, otter, raccoon dog, wolf
• Booming birds: Aquatic warbler, azure tit, black grouse, black stork, crane, white-tailed eagle (the birds especially like the interior of the sarcophagus)
Ecoterrorists in general and Greenpeace in particular are very wrong whenever they say that the impact of technology on wildlife must always have a negative sign. In other words, the impact of that event has been exaggerated for many years. Moreover, it is much less likely that a similar tragedy would occur today. Nuclear power has so many advantages that I would argue that even if the probability of a Chernobyl-like disaster in the next 20 years were around 10%, it would still be worth using nuclear energy.

Friday, April 28, 2006

Yuval Ne'eman died
Because this is a right-wing physics blog, it is necessary to inform you about the saddening news - news I heard from Ari Pakman yesterday - that Yuval Ne'eman (*1925), an eminent Israeli physicist and right-wing politician, died yesterday. If you're interested, you can read the article about him on Wikipedia and Peter Woit's blog, much like the text of Yisrael Medad, Ne'eman's political advisor. News summaries from Google are here. In 1961, Ne'eman published a paper with a visionary title
• Derivation of strong interactions from a gauge invariance
As far as I understand, the symmetry he was talking about was the flavor symmetry, which is not really a gauge symmetry. Ne'eman co-authored the book "The Eightfold Way" with Murray Gell-Mann, contributed tremendously to the development of nuclear and subnuclear physics in Israel (which includes the nuclear weapons), and was the president of Tel Aviv University, among many other organizations.

Science and fundamental science
Chad Orzel did not like the proposals to build the ILC because they are derived from the assumption that high-energy physics is a more fundamental part of physics than other parts - and he disagrees with this assumption. Instead, he argues that technology is what matters and it does not depend on particle physics. Also, Chad explains that one can have a long career without knowing anything about high-energy physics - which seems to be a rather lousy method to determine the fundamental value of different things. There are three main motivations why people stretch their brains and think about difficult things and science. We may describe the corresponding branches of science as follows:
• recreational mathematics
• applied science
• pure science
Recreational mathematics is studied by people to entertain themselves and show others (and themselves) that they are bright. Chess, in flash or without it, may be viewed as a part of this category. People do this sort of activity because it is fun. Comedians are doing similar things although their work requires rather different skills. In this category, entertainment value is probably the main factor that determines the importance. People do whatever makes them happy and excited.
If someone else does things on their behalf, they prefer those with a higher entertainment value. The invisible hand of freedom and the free market pretty much takes care of this activity. The rules of chess depend on many historical coincidences. Other civilizations could have millions of other games with different rules and the details really don't matter: what matters is that you have a game that requires you to turn your brain on.

Applied science is studied because scientific insights can lead to economic benefits. They can improve people's lives, their health, give them new gadgets, and so forth. The practical applications are the driving factor behind applied science. People, corporations, and scientists pay for applied science because it brings them practical benefits. It is often (but not always) the case that the benefits occur at shorter time scales, and it is possible for many corporations and individuals to provide applied scientists with funding. And if you look around, you will see that many fields of applied science are led by laboratories of large corporations - such as IBM, drug companies, and others.

Pure science is studied because human beings have an inherent desire to learn the truth. In our Universe, the truth turns out to be hierarchical in nature. It is composed of a large number of particular statements and insights that can typically be derived from others. For equivalent insights, the derivations can work in both directions. In many other cases, one can only derive A from B but not B from A. The primary axioms, equations, and principles that can be used to derive many others are, by definition, more fundamental. The word "fundamental" means "elementary, related to the foundation or base, forming an essential component or a core of a system, entailing major change". If you respect the dictionaries, the physics of polymers may be interesting, useful, and important - but it is not too fundamental. If Chad Orzel or anyone else offers a contradictory statement, he or she abuses the language. Among the disciplines of physics, high-energy physics is more fundamental than low-energy physics. Moreover, I think that as long as we talk about pure science, being "fundamental" in this sense is a key component of being important. If we want to learn the scientific truth about the world, we want the most fundamental and accurate truth we can get. I am not saying that other fields should be less supported. Nor am I proposing a hierarchical structure between the people who choose different specializations. What I am saying is that other fields that avoid fundamental questions about Nature are being chosen as interesting not only because of their pure scientific value but also because of their practical or entertainment value. You may be trying to figure out what happens with a particular superconductor composed of 150-atom molecules under particular conditions. The number of similar problems may exceed the number of F-theory flux compactifications. How can you decide whether a problem like that - or any other problem in science - is important? As argued above, there are many different factors that decide the answer: entertainment value, practical applications, and the ability to reveal major parts of the general truth. I guess that practical applications will remain the most likely justification of specialized research on one very particular type of superconductor.
People and societies may have different motivations to study different questions of science. If you extend this line of reasoning, you will realize that people can also do many things - and indeed, they do many things - that have no significant relation with science. And they can spend - and indeed, do spend - their money on many things that have nothing to do with science, especially pure science. And it's completely legitimate and many of these things are important or cool. When you think about the support of science in general, what kind of activity do you really have in mind? I think that pure science is the primary category that we consider. Pure science is the most "scientific" part of science - one that is not motivated by practical applications. As we explained above, pure science has a rather hierarchical structure of insights. If something belongs to pure science, it does not mean that it won't have any applications in the future. In the 1910s-1930s, radioactivity was abstract science. By various twists and turns, nuclear energy became pretty useful. There are surely many examples of this kind. The criterion that divides science into pure science and applied science is not the uncertain answer to the question whether the research will ever be practically useful: the criterion is whether the hypothetical practical applications are the main driving force behind the research. Societies may be more interested in pure science or less interested in pure science. The more they are interested in pure science, the more money they are willing to pay for pure science. A part of this money is going to pure science that is only studied as pure science; another part will end up in fields that are partly pure and partly applied. Chad Orzel thinks that if America saves half a billion dollars on the initial stages of the ILC collider, low-energy physics will get an extra half a billion dollars. I think he is not right. The less a society cares about pure science - even about the most fundamental questions in pure science such as those in high-energy physics - the less it is willing to pay for other things without predictable practical applications or entertainment value. Eliminating high-energy experimental physics in the U.S. would be a step towards the suppression of experimental pure science in general.

Thursday, April 27, 2006

Iran may nuke Czechia, Italy, Romania
According to Haaretz, Iran has just received a first batch of BM-25 missiles from its ally in the Axis of Evil, namely North Korea. They are able to carry nuclear warheads and attack countries such as the Czech Republic, Italy, and Romania. Such a conflict is not hard to start. Imagine that sometime in the future, for example on August 22nd, 2006, Iranian troops suddenly attack Romanian oil rigs on their territory. Romania will respond nervously - and the mad president of Iran will have an opportunity to check out his nukes. The Czech Republic is, together with England, one of two European countries on an Iranian black list of countries whose citizens are not allowed to get a 15-day visa for Iran. Some Muslims in the Czech Republic preach that Islamic Shari'a law should be adopted by Czechia. The diplomatic relations between Czechia and Iran cooled down 8 years ago when Radio Liberty (more precisely, in Iran: Radio Tomorrow) started to broadcast anti-government programs in Persian from Prague. See here.
US told to invest in particle physics
The National Academy of Sciences has also recommended that the U.S. invest in neutrino experiments and high-precision tests of the Standard Model to stop the motion of the center of mass of particle physics away from the U.S. Dennis Overbye from the New York Times describes the same story: the ILC must be on American soil. See also Nature.

CERN new tax
Meanwhile, CERN has adopted the digital solidarity principle: 1% of ICT-related transactions must be paid to CERN.

Matt Strassler has just described the fascinating work on the pomeron that he has done with Richard Brower, Chung-I Tan, and Joe Polchinski. Go back 40 years into the past. The research that eventually evolves into string theory is proposed as a theory of strong interactions: something that would be known as a failed theory of strong interactions for the following 30 years. Things only start to slowly change after the 1997 discovery by Juan Maldacena, and a steady flow of new insights eventually leads to a nearly full revival of the description of strong interactions using a "dual" string theory, albeit one more complicated than what was envisioned in the late 1960s. QCD can be equivalently described as the old string theory with some modern updates: higher-dimensional and braney updates. The basic concepts of the Regge physics included the Regge trajectory, a linear relation between the maximum spin "J" that a particle of squared mass "m^2" can have; the slope - the coefficient "alphaprime" of the linear term "alphaprime times m^2" - is comparable to the inverse squared QCD scale. The dependence of "J" could be given by a general Taylor expansion but both experimentally and theoretically, the linear relation was always preferred. Note that "alphaprime" in "the" string theory that unifies all forces is a much, much smaller area than the inverse squared QCD scale (the cross section of the proton). We are talking about a different setup in AdS/QCD where the four-dimensional gravity may be forgotten. This picture is not necessarily inconsistent with the full picture of string theory with gravity as long as you appreciate the appropriately warped ten-dimensional geometry. At this moment, you should refresh your memory about chapter 1 of the Green-Schwarz-Witten textbook. There is an interesting limit of scattering in string theory (a limit of the Veneziano amplitude) called the Regge limit: the center-of-mass energy "sqrt(s)" is sent to infinity but the other Mandelstam variable "t" - which is negative in the physical scattering - is kept finite. The scattering angle "sqrt(-t/s)" therefore goes to zero. In this limit, the Veneziano amplitude is dominated by the exchange of intermediate particles of spin "J". Because the indices from the spin must be contracted, the interaction contains "J" derivatives, and it therefore scales like "Energy^J". Because there are two cubic vertices like that in the simple Feynman diagram of the exchange type, the full amplitude goes like "Energy^{2J} = s^J" where the most important value of the spin "J" is the linear function of "t" given by the linear Regge relation above. The amplitude behaves in the Regge limit like "s^{J(t)}" where "J(t)" is the appropriate linear Regge relation. You can also write it as "exp(J(t).ln(s))". Because "t = -s.angle^2", you see that the amplitude is Gaussian in the "angle". The width of the Gaussian goes like "1/sqrt(ln(s))" in string units.
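To make that counting explicit, here is the schematic Regge-limit algebra in the conventions used above (a sketch only: numerical factors and the precise small-angle relation between "t" and the angle are dropped):

% Schematic Regge-limit behavior of the amplitude
\[
  \mathcal{A}(s,t)\;\sim\; s^{J(t)} \;=\; e^{J(t)\,\ln s},
  \qquad
  J(t) \;=\; \alpha(0) + \alpha'\, t .
\]
% With q_perp^2 = -t ~ s*angle^2, the fixed power s^{alpha(0)}
% multiplies a Gaussian in the transverse momentum:
\[
  \mathcal{A}\;\sim\; s^{\alpha(0)}\,
  e^{-\alpha'\, q_\perp^{2}\,\ln s},
  \qquad q_\perp^{2} = -t \simeq s\,\theta^{2},
\]
% so the Gaussian width in q_perp is 1/sqrt(alpha' ln s),
% i.e. "1/sqrt(ln(s))" in string units, as stated above.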
Correspondingly, the width of the amplitude Fourier-transformed into the transverse position space goes like "sqrt(ln(s))" in string units. That should not be surprising: "sqrt(ln(s))" is exactly the typical transverse size of the string that you obtain by regulating the "integral dsigma x^2" which equals, in terms of the oscillators, "sum (1/n)" whose logarithmic divergence must be regulated. The sum goes like "ln(n_max)" where "n_max" must be chosen proportional to "alphaprime.s" or so. If you scatter two heavy quarkonia (or 7-7 "flavored" open strings in an AdS/CFT context; think about the Polchinski-Strassler N=1* theory) - which is the example you want to consider - the interaction contains a lot of contributions from various particles running in the channel. But the formula for the amplitude can be written as a continuous function of "s,t". So it seems that you are effectively exchanging an object whose angular momentum "J" is continuous. Whatever this "object" is, you will call it a pomeron. In perturbative gauge theory, such a pomeron exchange is conveniently and traditionally visualized in terms of Feynman diagrams that contain the minimum power of "alpha_{strong}" allowed for a given power of "ln(s)": you want to maximize the powers of "ln(s)", minimize the power of the coupling constant, and keep the leading terms. When you think for a little while, this pomeron exchange leads to DNA-like diagrams: the diagrams look like ladder diagrams or DNA. There are two vertical strands - gluons - stretched in between two horizontal external quarks in the quarkonia scattering states. And you may insert horizontal sticks in between these two gluons, to keep the diagrams planar. If you do so, every new step in the ladder adds a factor of "alpha_{strong}.ln(s)". You can imagine that "ln(s)" comes from the integrals over the loops. What is the spin of the object being exchanged for small values of "t"? It is the so-called intercept (the absolute term in the linear Regge relation), a numerical constant between one and two. Matt essentially confirmed my interpretation that you can imagine QCD to be something in between an open string exchange (whose intercept is one) and a closed string exchange (whose intercept is two). The open string exchange with "J=1" is valid at the weak QCD coupling - it corresponds to a gluon exchange. At strong coupling, you are exchanging closed strings with "J=2". For large positive values of "t", you are in the deeply unphysical region because the physical scattering requires negative values of "t" (spacelike momentum exchange). But you can still talk about the analytical structure of the scattering amplitude - Mellin-transformed from "(s,t)" to "(s,J)". For large positive "t", you will discover the Regge behavior which agrees with string theory well. Unfortunately, this is the limit of scattering that can't be realized experimentally. Nevertheless, for every value of "t", you find a certain number of effective "particles" that can be exchanged - with spins up to "J" which is linear in "t". The negative values of "t" can be probed experimentally, and this is where string theory failed drastically in the 1970s: string theory gave a much too soft (exponentially decreasing) behavior of the amplitude at high energies even though the experimental data indicated a much harder (power law) behavior.
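To see schematically why the ladder sum above pushes the intercept slightly above one, resum the leading logarithms. This is only the leading-log caricature; "c" is a constant I am not computing here (in the honest weak-coupling BFKL calculation it comes out as 4 ln 2 times N_c/pi):

% Leading-log resummation of the ladder diagrams (schematic)
\[
  \mathcal{A}_{\rm ladder}(s)\;\sim\;
  s\sum_{n\ge 0}\frac{\bigl(c\,\alpha_s \ln s\bigr)^{n}}{n!}
  \;=\; s^{\,1 + c\,\alpha_s} .
\]
% Each rung contributes one power of alpha_s*ln(s); exponentiating the
% logs shifts the intercept from the one-gluon value J = 1 to
% J = 1 + c*alpha_s, interpolating towards the closed-string value J = 2
% as the coupling grows.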
So now you isolate two different classes of phenomena:
• the naive string theory is OK for large positive "t"
• the old string theory description of strong interactions fails for negative "t"; the linear Regge relation must break down here
But the old string theory only fails for negative "t" if you don't take all the important properties of that string theory into account. The most important property that was forgotten 35 years ago was the new, fifth dimension. The spectrum of particles - eigenvalues of "J" - is related to the Laplacian but it is not just a four-dimensional Laplacian; it also includes a term in the additional six dimensions, especially the fifth holographic dimension of the anti de Sitter space. And this term can become - and indeed, does become - important. What is the spectrum of allowed values of "J" of intermediate states that you can exchange at a given value of "t"? Recall that each allowed value of "J" of the intermediate objects generates a pole in the complex "J" plane - or a cut whenever the spectrum of allowed "J" becomes continuous. For large positive "t", the spectrum contains a few (roughly "alphaprime.t") eigenvectors with positive "J"s, and a continuum with "J" being anything below "J=1". For negative values of "t", you only see the continuum spectrum (a cut) for "J" smaller than one. Don't forget that the value of "J" appears as the exponent of "s" in the amplitude for the Regge scattering. We are talking about something like "s^{1.08}" or "s^{1.3}" - both of these exponents appear in different kinds of experiments and can't be calculated theoretically at this moment. Matt argues convincingly that the Regge behavior for large positive "t", with many poles plus the cut below "J=1", is universal. The "empty" behavior at large negative "t" where you only see the continuum below "J=1" is also universal. It is only the crossover region around "t=0" that is model-dependent and where the details of the string-theoretical background enter. And they can calculate the spectrum of "J" as a function of "t" in toy models from string theory. They assume that the string-theoretical scattering in the AdS space takes place locally in ten dimensions, and just multiply the corresponding amplitudes by various kinematical and warp factors - the usual Polchinski-Strassler business. The spectrum of poles and cuts in the "J" plane reduces to finding the eigenvalues of a Laplacian - essentially to a Schrödinger equation for a particle propagating on a line. You just flip the sign of the energy eigenvalues "E" from the usual quantum mechanical textbooks to obtain the spectrum of possible values of "J". And they can determine a lot of things just from the gravity subsector of string theory - where you exchange particles of spin two (the graviton) plus a small epsilon that arises as a string-theoretical correction. For large positive "t", you obtain a quantum mechanical problem with a locally negative (binding) potential that leads to the discrete states - those that are seen on the Regge trajectory. When all these things are put together, they can explain a lot about the physics observed at HERA. The calculation is not really a calculation from first principles because they are permanently looking at the HERA experiments to see what they should obtain. But they are not the first physicists who use these dirty tricks: in the past, most physicists were constantly cheating and looking at the experiments most of their time. ;-)
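As a toy illustration of that "flip the sign" recipe - this is emphatically not the actual Brower-Polchinski-Strassler-Tan computation; the potential, grid, and all numbers below are invented for the sketch - one can discretize a one-dimensional Schrödinger operator, diagonalize it, and read toy "J" values off the bound states:

import numpy as np

# Toy version of "eigenvalues of a Schrodinger operator on a line"
# giving a discrete set of poles in the J plane.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = -2.0 * np.exp(-x**2)              # a locally negative (binding) potential

# Discrete second derivative with Dirichlet boundary conditions
lap = (np.diag(np.full(N, -2.0)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / dx**2
H = -lap + np.diag(V)                 # H = -d^2/dx^2 + V(x)
E = np.linalg.eigvalsh(H)

bound = E[E < 0]                      # bound states -> discrete poles
print("toy J values:", -bound)        # flip the sign; offsets of the real problem are ignored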
Wednesday, April 26, 2006

Rae Ann: Alien recycling
By Rae Ann, one of the four winners who have seen the #400,000 figure.
My first grader brought home some interesting EPA publications for school children. While I totally support teaching children to recycle and be mindful of wise use of resources, I think it's a little off to tell them that 'garbage leads to climate change'. And what's with the little flying saucers and aliens (graphics in the publications)? What do they have to do with climate change and garbage?? One publication does open with the statement, "Space creatures might think the idea of reusing containers is an alien concept but here on Earth it's easy to keep an old jar out of the trash and give it new life." (That is a direct quote and the missing comma is their punctuation error.) Well, how does the government know that aliens don't recycle? Is it because they have left a bunch of their stuff here? Hmm? Sounds like a very prejudiced and discriminatory attitude to me. What is that teaching our kids about aliens??

The Czech Fabric of the Cosmos
My friend Olda Klimánek has translated Brian Greene's book "The Fabric of the Cosmos" into Czech - well, I was checking him a bit, reading his translation twice - and the book was just released by Paseka, a Czech publisher, under the boring name "Struktura vesmíru" (The Structure of the Universe). The other candidate titles were just far too poetic. I think he is a talented writer and translator and there will surely be many aspects in which his translation is gonna be better than my "Elegantní vesmír" (The Elegant Universe). What I find very entertaining is the different number of pages of this book (in its standard hardcover editions) in various languages:
• Czech: Struktura vesmíru, 488 pages
• Polish: Struktura kosmosu, 552 pages
• English: The Fabric of the Cosmos, 576 pages
• Portuguese: O tecido do cosmo, 581 pages
• Italian: La trama del cosmo, 612 pages
• French: La magie du Cosmos, 666 pages
• Korean: 우주의 구조, 747 pages
• German: Der Stoff, aus dem der Kosmos ist, 800 pages
I am not kidding and as far as I know, Olda's translation is complete. If you need to know, 800/488 = 1.64. ;-) The Czech Elegant Universe was also much shorter than the German one but the ratio was less dramatic. I like the rigid rules of German but this inflation of the volume is simply off base. The Czech language has similar grammar rules but it avoids the articles and it has a much more free word order. A slightly more complex system of declension removes many prepositions. And Olda may simply be a more concise translator. :-)

Tuesday, April 25, 2006

Uncle Al: on the equivalence principle
By Uncle Al, who has submitted a #400,000 screenshot.
Does the Equivalence Principle have a parity violation? Weak interactions (e.g., the Weak Interaction) routinely violate parity conservation. Gravitation is the weakest interaction. Either way, half of contemporary gravitation theory is dead wrong. Gravitation theory can be written parity-even or parity-odd; spacetime curvature or spacetime torsion. Classical gravitation has Green's function Newton and metric Einstein or affine Weitzenböck and teleparallel Cartan. String theory has otherwise and heterotic subsets. Though their maths are wildly different, testable empirical predictions within a class are exactly identical...
...with one macroscopic disjoint exception: Do identical chemical composition local left and right hands vacuum free fall identically? Parity-even spacetime is blind to geometric parity (chirality simultaneously in all directions). Parity-odd spacetime would manifest as a background pseudoscalar field. The left foot of spacetime would be energetically differently fit by a sock or left shoe compared to a right shoe. String theory could be marvelously pruned. Does a single crystal solid sphere of space group P3(1)21 quartz (right-handed screw axes) vacuum freefall identically to an otherwise macroscopically identical single crystal solid sphere of space group P3(2)21 quartz (left-handed screw axes)? Both will fall along minimum action paths. In parity-odd spacetime those local paths will be diastereotopic and measurably non-parallel -- a background left foot fit with left and right shoes.

Frank Wilczek: Fantastic Realities
Technical note: Everyone who is visitor number 400,000 and submits a URL for a screenshot proving the number today will be allowed to post any article on this blog, up to 6 kilobytes. The reader #400,000 was Rae Ann, who just returned from a trip - what timing. :-) Uncle Al still had the page open (after a reload) when it was showing #400,000, much like Doug McNeil. I have no way to tell who was the first one. The others just reloaded the page and obtained the same number because it was not their first visit today, and it thus generated no increase of the counter. Congratulations to all three.
Yes, I just saw this irresistible book cover at Betsy Devine's blog. The book is called "Fantastic Realities" and Frank Wilczek is apparently using a QCD laser. The book's mind journeys include many of Wilczek's award-winning Reference Frame columns. Have you heard of Wilczek's Reference Frame columns in Physics Today? Let me admit that I have not. ;-) Because of the highly positive reviews, your humble correspondent has just decided to double the number of copies that Frank Wilczek is going to sell. Right now, yesterday's rank is 100,000 and today's rank is 130,000. Look at the promotional web pages of the book, buy the book, and see tomorrow what it does with the rank. Remember that the rank is approximately inversely proportional to the rate at which the books sell. Update: At 7:00 p.m., the rank was about 11,000, better than 136,000 in the morning. On Wednesday 8:30 a.m., the rank was 9,367, an improvement by a factor of fifteen from the rank 24 hours earlier. The promotional web pages also reveal that Betsy is proud to be the 4th Betsy found by Google. Congratulations, and I hope she captures the most important Frank Wilczek blog award, too. ;-)

Monday, April 24, 2006

Bruce Rosen: brain imaging
Bruce Rosen started the colloquium by saying that it is useful to have two degrees - PhD and MD - because every time he gives a talk for the physicians, he may impress them with physics, and every time he speaks in front of the physicists, he may impress them with medicine. And he did. Although there are many methods to study the anatomy and physiology of the brain - such as EEG and/or flattening the brain with a hammer, which is what some of Rosen's students routinely do - Rosen considers NMR to be the epicenter of all these methods. (This is a conservative physics blog, so we still refer to these procedures as NMR and not MRI.) This bias should not be unexpected because Rosen's advisor was Ed Purcell.
Some of the results he showed were obtained by George Bush, who is an extremely smart scientist as well as a psychiatrist, besides being a good expert in B-physics. Rosen showed a lot of pictures and video sequences revealing how the activity of the brain depends on time in various situations, on the presence of various diseases, on the age, and on the precise way the brain is being monitored. Many of these pictures were very detailed and methods already exist to extract useful data from the pictures and videos that can't be seen by the naked eye. Human brains are being observed at 10 Tesla or so, and a magnetic field of 15 Tesla is the state-of-the-art environment to scan the brains of smaller animals. The frequency used in these experiments is about half a gigahertz (a quick check of this number appears below). Many tricks for drastically reducing the required amount of drugs that the subject must take before the relevant structures become transparent have been found. Most of the data comes from observations of water, which is a dominant compound in the human body - and not only the human body. It turns out that the blood that carries oxygen and the blood that carries carbon dioxide are diamagnetic and paramagnetic, respectively. That simplifies the NMR analysis considerably. There's a lot of data in the field and fewer ways to draw the right conclusions and interpretations out of the data.

OVV in higher dimensions?
Brett McInnes proposes a generalization to higher dimensions of the Hartle-Hawking approach to the vacuum selection problem pioneered by Ooguri, Vafa, and Verlinde (OVV) and described by this blog article. McInnes identifies the existence of two possible Lorentz geometries associated with one Euclidean geometry as the key idea of the OVV paradigm. He argues that the higher-dimensional geometries must have flat compact sections, which is certainly a non-trivial and possibly incorrect statement.

Everything you wanted to know about Langlands ...
... geometric duality but you were afraid to ask could be answered in this 225-page-long paper by Edward Witten and Anton Kapustin. Previous blog articles about the Langlands program include the following ones. A semi-relevant discussion about related topics occurs at Not Even Wrong.

Translation and related news
Just a technical detail: I've added two utilities to the web pages of individual articles:
• related news and searches, powered by Google (blue box under each article)
• translations of the blog articles to German, French, and Spanish, powered by Google (three flags at the top of the articles)
I apologize to the readers from the remaining 142 countries that also visit this website - according to the Neocounter - besides the three countries indicated above, that their language has yet to be included. :-)

Recent comments
Also, "recent comments" were added to the sidebar of the main page. The recent slow comments in the lower Manhattan (skyscraper) area are sorted according to the corresponding article. You may find out which article a comment belongs to if you hover over the timestamp. You can also click it. There are also ten "recent fast comments" in a scrolling window at the upper portion of the sidebar.

Sunday, April 23, 2006

Leonard Susskind Podcast
I am just listening to a podcast with Leonard Susskind. You can find the link somewhere on this page. I will add it here later. Then you click "Podcasts" on the left side, and the second one is Susskind: the 5.57 MB file is 23:55 long. Entertaining, recommended.
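Incidentally, here is the promised sanity check of the "half a gigahertz" figure from the brain-imaging talk above. Assuming these are ordinary proton-resonance machines, the frequency is just the proton Larmor frequency; the gyromagnetic constant is the only physical input:

# Proton NMR (Larmor) frequency: f = gamma_bar * B,
# where gamma_bar = gamma / 2pi for the proton.
gamma_bar = 42.577e6            # Hz per Tesla
for B in (10.0, 15.0):          # field strengths quoted in the talk
    print(f"B = {B:4.1f} T  ->  f = {gamma_bar * B / 1e9:.2f} GHz")
# prints 0.43 GHz at 10 T and 0.64 GHz at 15 T - "about half a gigahertz"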
Manic Miner
The Manic Miner flash game was removed from the page because it was making a lot of noise. Please click on the second "Manic Miner". How many people used to play such things 20 years ago? Links to previous flash games on this blog can be found here.

PageRank algorithm finds physics gems
Several colleagues from Boston University and from Brookhaven have proposed a method to look for influential papers using the same algorithm that Google uses to rank web pages. This algorithm uses the list of web pages (or papers) and the links between them (or citations) as input. The web pages or papers are nodes of a graph and the citations are oriented links. It works as follows: You have lots of "random walkers". Each of them sits at some paper XY. In each step, each random walker either jumps to a random paper in the full database, with probability "d", or it jumps to a random paper mentioned in the references of the previous paper XY, with probability "1-d". Once the number of random walkers associated with each paper reaches equilibrium (approximately), the algorithm terminates. The number of walkers at each paper gives you the rank.
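A minimal sketch of that random-walker iteration - the five-paper citation graph and the value of "d" below are made up purely for illustration:

import numpy as np

# refs[i] lists the papers cited by paper i (a toy citation graph)
refs = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2], 4: [2, 3]}
N, d = len(refs), 0.15        # d = probability of jumping to a random paper

rank = np.full(N, 1.0 / N)    # start with walkers spread uniformly
for _ in range(100):          # iterate towards (approximate) equilibrium
    new = np.full(N, d / N)   # random jumps deposit walkers everywhere
    for paper, cited in refs.items():
        for target in cited:  # remaining walkers follow the citations
            new[target] += (1 - d) * rank[paper] / len(cited)
    rank = new

print(rank)                   # equilibrium walker density = the paper's rank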
Saturday, April 22, 2006

Illinois: Particle Accelerator Day
Illinois' governor has declared this Saturday (or Friday?) to be the Particle Accelerator Day and everyone must celebrate. Mr. Blagojevich is trying to attract the future linear ILC collider to his state. Congratulations, Argonne and Fermilab. Some of the ILC attempts of these two facilities are illuminated here. Meanwhile, on the same day, the celebrations of the Earth Day, invented by John McConnell, dominate in Massachusetts. Those who are already fed up with the Earth - and with Google Earth - may try Google Mars. Via JoAnne.

Detlev Buchholz: algebraic quantum field theory
Prof. Detlev Buchholz, who is a rather famous researcher in the algebraic quantum field theory community, has given the duality seminar today and we had a tête-à-tête discussion yesterday. He has attempted to convert the string theorists to the belief system of algebraic quantum field theory, which is not a trivial task. Algebraic quantum field theory is a newer version of the older approach of axiomatic quantum field theory. In this approach, the basic mathematical structure is the algebra of bounded operators acting on the Hilbert space. In fact, for every region R, you can find and define a subalgebra of the full algebra of operators, they argue. A goal is to construct - or at least prove the existence of - quantum field theories that do not depend on any classical starting point. This is a nice goal. Because the string theorists know S-dualities and many other phenomena in field theory and string theory which imply that a quantum theory can have many classical descriptions - more precisely, it can have many classical limits - we are certainly open to the possibility that we will eventually be able to formulate our interesting theories without any direct reference to a classical starting point. Instead, we will be able to derive all possible classical limits of string/M-theory from a purely non-classical starting point. On the other hand, the particle physics and string theory communities are deeply rooted in experimental physics and we simply do not want to talk about some abstract concepts without having any particular theory that can at least in principle predict the results of experiments and that respects these concepts. In fact, we want to focus on theories of the same kind that are relevant for observational physics.

Friday, April 21, 2006

Evolving proton-electron mass ratio?
Update: In 2008, a new experiment with ammonia found no time-dependence in the last 6 billion years.
Klaus Lange has pointed out an article that describes a Dutch experiment performed primarily at the European Southern Observatory - hold your breath, this observatory is located in Chile. They measured the spectrum of molecular hydrogen, which depends on the proton-electron mass ratio "mu". Note that this ratio is about 1836.15. Twenty years ago I played with the calculator and it turned out that this number can be written as
• 6.pi^5 = 1836.12.
This agreement has promoted me to the king of all crackpots: with only three characters, namely six, pi, five, I can match around 5 or 6 significant figures of the correct result. Actually my calculator only had 8 significant figures (with no hidden figures) and I exactly matched 8 significant figures of 1836.1515 written in the mathematical tables of that time. Later I learned that someone else had actually published this "discovery" fifty years earlier, and the agreement got worse with better calculators and better measurements in particle physics. More seriously, the Dutchmen now claim that the ratio was 1.00002 times higher twelve billion years ago. The New Scientist immediately speculates that this could prove extra dimensions or string theory. I, for one, have absolutely no idea where this statement comes from. I personally believe that these constants have been constant for the last 12 billion years - and moreover this opinion is completely and naturally compatible with string theory.

George Bush meets Prof. Albert Einstein
As soon as Lee Smolin asked the question, several gifted Korean engineers from a company called Hanson Robotics gave a possible answer by asking a better question: Click the link above to see the final stages of the project. Anatomical pictures are here. The engineers decided that the body was not too important - what matters is Einstein's brain (plus face, to make a good package). They replaced the body by a robot. Everything has worked fine, so Prof. Albert Einstein, nicknamed Albert HUBO, could meet with the president of the United States of America. It is widely believed that Hon. George W. Bush has convinced Prof. Albert Einstein to oppose the attempts of Einstein's colleagues to force Bush to take the nuclear option off the table. HUBO said that the Iranians could be working on the same bomb. Click the photograph above or here to see a directory with many other photographs. Einstein's robotic brother ASIMO, supervised by Koizumi, the prime minister of Japan, met the former Czech prime minister Špidla in 2003 and demonstrated that Špidla was a sourball. See here. So far, Prof. Einstein, much like Honda's ASIMO, only knows how to walk, serve tea, and compute spin foam amplitudes, so it is not terribly useful. But they hope to teach Einstein quantum mechanics and bosonic string theory next week and how to climb stairs in a few years.

Meeting a robot in 1999
In 1999 or so, when I was at Rutgers, I met a robot in the Busch campus dining hall. He came to us, shook my hand and we talked about everything - including Heisenberg's uncertainty principle. His voice was a typical computer voice equipped with a very authentic human intonation. He was so interesting and smart! The debate was much more meaningful than most debates with various loop quantum gravity people and many others. I was stunned: have they already succeeded in creating artificial intelligence that exceeds not only 90% of people but also many senior professors? The answer was mysterious for half a day. Later I could re-check that it was a "synthetic personality" that was remotely controlled by a human being from about 50 meters. The human being could see through the robot's eyes, and he could control the motion and submit his speech that was transformed into the computerized voice color.

Thursday, April 20, 2006

Jefferson Physics Laboratory becomes a historic site
Jefferson Physical Laboratory, where we have offices, has been declared a historic site by the American Physical Society, mostly because it is the first building that was ever built in the U.S. for physics research.
Figure 1: The picture is mine
See the letter that President Lawrence Summers and the department chair John Huth received here. The picture above is from 2002 but you can already see the new attic which is pretty these days. Recall that it is exactly this Jefferson tower where the first gravitational red shift experiment was done by Pound, Rebka, and Snider in the early 1960s. Its 22.6 meters were enough to measure the 4.92 x 10^{-15} relative change of the frequency of 14.4 keV gamma rays from iron-57. The prediction of general relativity - of a red shift factor "(1+gh/c^2)" (verify the numbers!) - was confirmed with a 1% accuracy.
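Taking up the invitation to verify the numbers: the shift for a one-way trip is "gh/c^2", and my reading of the factor of two is that the experiment compared photons running up with photons running down, which doubles the measured effect (the arithmetic itself is uncontroversial):

# Gravitational red shift over the Jefferson tower
g, h, c = 9.81, 22.6, 2.998e8     # SI units
one_way = g * h / c**2            # fractional shift, one direction
print(one_way, 2 * one_way)       # ~2.5e-15 and ~4.9e-15
# the doubled (up-minus-down) value reproduces the quoted 4.92e-15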
Integrability: giant magnons
Diego Hofman and Juan Maldacena - these two physicists should not be confused with Diego Maradona - study the
• excitations of N=4 gauge theory in d=4 in the planar limit.
Recall that according to the gauge-gravity holographic correspondence, the strong coupling limit describes type IIB string theory on the product space "AdS5 x S5". A few years ago, Berenstein, Maldacena, and Nastase showed that the gauge theory is not equivalent to pure supergravity but to the full string theory; they identified the strings with the long traces. This research direction has been transformed into the studies of integrability and spin chains (these are the discretized strings) and we have talked about this topic at various places, for example here. This spin chain itself carries excitations and the most important ones are called magnons: a magnon is an excitation that reverts the direction of a single spin (or the "magnetic moment" if you wish) in the spin chain and propagates as a wave along the chain. In the planar limit, i.e. up to the leading terms in the "1/N" expansion, physics should simplify. Many people have believed for some time that a full exact solution of string theory in this limit should exist. This task is equivalent to a full understanding of the worldsheet of a string propagating in the "AdS5 x S5" background for the simplest choice of its topology. In the variables mentioned above, the question is reduced to the spectrum, the dispersion relations, and the S-matrix of the magnons. Effectively, one needs to study the S-matrix for various polarizations and encounters a "256 x 256" matrix. Its form was recently fixed by Niklas Beisert, up to an overall normalization. Moreover, one month ago, Romuald Janik of Poland showed how the crossing symmetry emerges from the formulae for the S-matrix. Hofman and Maldacena confirm the results but add something extremely interesting: the adjective "giant". In analogy with giant gravitons, you may suspect that there will be a new picture that replaces the original point-like magnon excitations by something big.
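For orientation, the dispersion relation at the center of this story - quoted from memory, so treat it as a reader's note rather than the paper's exact conventions - interpolates between the spin-chain and the "giant" string regimes:

% Magnon dispersion relation in planar N=4 (schematic note from memory)
\[
  E - J \;=\; \sqrt{\,1 + \frac{\lambda}{\pi^{2}}\,\sin^{2}\frac{p}{2}\,}
  \;\;\xrightarrow{\;\lambda \gg 1\;}\;\;
  \frac{\sqrt{\lambda}}{\pi}\,\Bigl|\sin\frac{p}{2}\Bigr| ,
\]
% where p is the magnon momentum along the chain; at strong coupling the
% excitation becomes a macroscopic "giant" arc of string on a great circle
% of the S^5 whose endpoints are separated by the angle p.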
Harvard Crimson: environmentalism is dead
The Harvard Crimson has looked at environmentalism with critical and bright Harvard eyes and concluded that environmentalism is dead and that we have a chance to enter a new era in which the environment itself, not an ideology, is a winner. Piotr Brzezinski '07, a member of the Resource Efficiency Program, argues that all the dire predictions have so far been falsified, and if our care about the environment is supposed to impact reality in the future, those who care must abandon some methods such as the authoritative Soviet style manifested in the Kyoto protocol, open some taboos for debate, and start to publish realistic appraisals of reality even if they lead to less exciting headlines in the newspapers.

Wednesday, April 19, 2006

Michele Papucci: neutrino optics
Michele Papucci from Berkeley gave a talk about neutrino optics. There will be a preprint about it with Gilad Perez, Hitoshi Murayama, and one more author whose name will be completed here if necessary. When we test cosmological models, we rely on regular optics of photons. Are there other "eyes" we could use? They must be weakly interacting, so the only possibilities are
• gravitational waves
• neutrinos
Michele only focused on the latter, the neutrinos. More precisely, it is the electron antineutrinos he is interested in. They are produced by supernovae (yes, there is some neutrino oscillation physics you must take into account); on the other hand, the Sun only creates neutrinos so the solar neutrino background does not affect their proposed experiments. You can't really measure the direction from which they come (pretty fuzzy optics) because the particles created in the inverse beta decay have a momentum that is virtually uncorrelated with the antineutrinos' momenta. So the only thing you can measure is the distribution of energies.

BBC climate software confuses 200,000 computers
This story is a good example of how the climate models work in the most optimistic case. The idle time of most PCs is wasted. About 30,000 people like me run software such as MPRIME - the search for the greatest prime integers in the world. It is a well-defined activity and there are very good reasons to trust this software. Actually, there exist other programs above the BOINC platform, and some of them can be found in this list. However, some people don't like things like LHC at home too much. Instead, they want to save the world and help humanity. So they download the third program in the list above, namely climateprediction.net. You can also join the group of 200,000 enthusiasts, the saviors of the planet, if you click the link above and continue with "Taking part in CPDN". This community will calculate the date of the armageddon. ;-) But wait a minute. The Reference Frame has been saying that the existing climate models are not trustworthy and those who run them often fail to respect basic principles of science.

Ignore bloggers at your peril
Clifford Johnson has pointed out an article in the Guardian. The article discusses some kind of research about the influence of bloggers. It also mentions three companies that were affected by bloggers because the bloggers described the physics of Kryptonite locks, McDonald's abracadabra, as well as Dell, whose last CEO was possibly fired.
Well, the visitor data indicates that very different segments of society are being influenced. For example, many people were looking for Angela Merkel semi-naked today. And of course, people are still interested in Mary Winkler as well as a potential massive nuclear strike. More demanding readers look for "physics blogs uncertainty" as well as the sad story of John Brodie, the physicist.

Tuesday, April 18, 2006

Cosmological breaking of SUSY and the seesaw
Tonight, Michael McGuigan has made a new step in his attempt to make the seesaw mechanism for the cosmological constant realistic. The paper combines the previous work of Michael McGuigan - that we discussed here and that was mostly based on this blog article and/or comments about this article by Sean Carroll - with the brave proposal of my (former) adviser Tom Banks. Recall that Tom has proposed to interpret the cosmological constant - the curvature of empty space - as the primary effect and the supersymmetry breaking in particle physics as its consequence. This changes the question from "why is the cosmological constant so small" to the question "why is the supersymmetry breaking in particle physics so strong". The supersymmetry breaking induced by the tiny curvature of our Universe would normally be negligible, and Tom circumvents this problem by suggesting that an important exponent in his power law is corrected from a classical value of 1/4 to the value of 1/8 by huge effects of virtual black holes whose loops are localized near the de Sitter horizon. The relation with the seesaw mechanism is not quite clear to me - although both methods of course try to obtain the same kind of result for the vacuum energy (but via different effects, I think). Right now I don't have enough time to tell you exactly what I think about the proposal but the paper is rather concrete and tries to apply the Wheeler-DeWitt equation to various string-theoretical backgrounds. He seems to show that the off-diagonal elements of the vacuum energy (transitions) exist in three spacetime dimensions or less. Can you obtain these off-diagonal elements from Coleman-De Luccia-like instantons? I believe that the proposal is interesting enough to be looked at. Incidentally, Apple finally offers Mac users a decent operating system. It is called Windows XP.

Monday, April 17, 2006

Stanislav Petrov supersedes Easter Bunny and Jesus Christ
During the pagan era, people would celebrate Easter as the holidays of spring, fertility, and the Easter bunny. The Christians cleverly overwrote this special season with the anniversary of the resurrection of Jesus Christ, our savior. However, things changed again in 2006. The liberal blogosphere, including Cosmic Variance and In Search of 42, among hundreds of other blogs, has replaced the Easter bunny and Jesus Christ with a Soviet military officer: Easter has become the Stanislav Petrov Day. It is not exactly clear why the Easter season was chosen. Well, Stanislav Petrov (*1939) saved the world on September 26th, 1983. He realized that the Soviet computer system was crappy - because it was a technology developed in a left-wing political system - and discarded the warning of his computers that the American missiles were approaching the Soviet targets. By having failed to inform his superiors, he has arguably saved half a billion lives. ;-) The details had been secret until 1998.
However, the rough story was not. I remember that on Monday, September 26th 1983, when I was in the 4th grade, during Andropov's era, we were just playing volleyball in the gym or something like that when the school radio announced that the international situation had deteriorated and the conflict was imminent. We never learned anything else beyond this single message in the school radio and the worries faded away completely. Today, Petrov lives in relative poverty as a Russian pensioner. A San Francisco peace organization named him the new savior of the world (only one of his two predecessors enjoys the same honor; don't confuse the honor with the true savior of modern music) and awarded him with a breathtaking amount of $1,000. Congratulations. If someone wants to send him more money, let me know.

Back to 2006
But we live in 2006 and the main target right now is not Moscow but Tehran. Professor James Miller, who is a game theory expert and a candidate for the president of Harvard - one that vows to defeat feminism - has offered a smooth scenario for how the U.S. attacks on Iran would be started and justified. The Israeli prime minister will inform Bush that Israel is threatened and it will have to nuke Iran unless the nuclear program of the crazy mullahs is stopped. Because Iran wants to wipe Israel off the map, Israel has a kind of moral right to make such an announcement. The U.S. weapons are much stronger and cleaner than the Israeli weapons. By using both types against Iran, Bush will save not only Israel but he will also save millions of Iranian lives that would otherwise be lost because of the dirty Israeli nukes. Next year, Easter Bunny, Jesus Christ, and Stanislav Petrov will be replaced by George Bush (and James Miller), the new savior.

Mahmoud is probably a nail
Meanwhile, it's been announced that Mahmoud Ahmadinejad is probably a nail. What kind of nail? He is a nail of the Hidden Imam who is secretly the Sovereign of the World and who has been hiding since 941. :-) See here. Mahmoud received the presidency from the Hidden Imam for promising to provoke a clash of civilizations. Mahmoud realizes that the U.S. is the last infidel country whose military is not impotent and Mahmoud, supported by God, will defeat the U.S. in a long asymmetric war. But he will wait until 2008 when Bush is out of office because Bush is clearly an aberration - everyone else since Truman would run away. A divine anthropic coincidence puts the triumph of the Iranian Manhattan project, secretly pursued by Imam Hossein Nuclear University, in the same year 2008, Mahmoud argues. Wow. These people are real nutcases, which is not a good combination with the advanced P-2 centrifuges that, according to the New York Times, are suddenly again being developed in Iran. Mahmoud has just given Hamas the same thing that Harvard has pledged for the feminist programs: 50 million dollars. Finally, Reuel Gerecht from AEI asks a related question. The U.S. and the U.K. have already been training for an occupation of a fictitious Middle East country called "Korona" in 2015 whose territory happens to coincide with Iran and whose citizens are Iranians. Well, obviously, some dynamics is on both sides.

Saturday, April 15, 2006

Carlo Rovelli and graviton propagator
Several readers have asked me what I think about a new paper in loop quantum gravity, an attempt described as a groundbreaking paper by a fellow blogger and included in the unfinished revolution by another blogger.
It would be far too dramatic to say that I am flabbergasted but one thing is clear. The work is so manifestly incorrect that I just can't fully comprehend how someone who has attended at least one quantum field theory course can fail to see it. But of course, yes, I am happy that people are still trying different things and some of them don't get discouraged by decades of failure - and I always open such papers with an enthusiastic hope that a new breakthrough will appear in front of my eyes. ;-) The paper linked above is supposed to be a more complete version of Rovelli's previous graviton propagator paper. Indeed, you can see that several pages in these two papers are identical. Most of these two papers' assumptions are misguided, nearly all the nontrivial steps are erroneous, and the results are incorrect, too.

Semiclassical GR
Let us start with semiclassical gravity. At this level, the graviton propagator is philosophically analogous to the propagators of all other quantum fields you can think of - for example the electromagnetic field. You must start with a background; the simplest background is the flat Minkowski space. This means that you write the full metric as
• g_{mn} = eta_{mn} + sqrt(Gnewton) h_{mn}
Here, eta_{mn} is a background, i.e. a classical vacuum expectation value of the quantum field, while h_{mn} is the fluctuation around this background that remains a quantum field and is treated as a set of small numbers. The full gravitational action can be expanded in "h_{mn}" to get, at quadratic order, a kinetic term whose inversion defines the graviton propagator, as sketched below.
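The expansion was cut short above, so here is a reconstruction of the standard next step - textbook material rather than anything specific to this post, with sign conventions that vary between references:

% Quadratic expansion of the Einstein-Hilbert action around eta_{mn}
% and the resulting graviton propagator in harmonic (de Donder) gauge:
\[
  S \;=\; \frac{1}{16\pi G_N}\int d^4x\,\sqrt{-g}\,R
  \;\;\longrightarrow\;\;
  S_2 \;\sim\; \int d^4x\;\partial h\,\partial h ,
\]
\[
  \langle h_{\mu\nu}(k)\,h_{\rho\sigma}(-k)\rangle
  \;=\;
  \frac{\eta_{\mu\rho}\eta_{\nu\sigma}
      + \eta_{\mu\sigma}\eta_{\nu\rho}
      - \eta_{\mu\nu}\eta_{\rho\sigma}}{2\,k^{2}} ,
\]
% in general D, the coefficient -1 of the trace term becomes -2/(D-2).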
Happy Easter
Something analogous to annihilating letters, jumping frog, shooting frog, and stained glass. Click here for Easter eggs in full screen.

Cuba vs. Czechia 1:1
Meanwhile, Cuba has expelled the Czech diplomat, Mr. Stanislav Kázecký, for spying on behalf of the U.S. - which is most likely not true. The Czech Republic has followed all the decent traditions and refused to extend the visa of a Cuban diplomat, too. :-) While the Czechoslovak Socialist Republic was one of Cuba's closest friends, the Czech Republic is its #2 foe. A large portion of the U.N. resolutions that criticize the situation in Cuba as well as the trade restrictions for the European Union members have been proposed by the Czech Republic. There have been many recent incidents between the two countries. For example, the countrywoman of mine above is a psychologist called Helena Houdová. (In fact, she is from my hometown of Pilsen.) She was the former Miss Czech Republic 1999 and the Dean's World hero of the week. In January 2006, she decided to take pictures of the Cuban slums, something that Fidel Castro pretends not to exist. She was immediately arrested (together with her friend, Mariana Kroftová, who is also a model) - for taking the pictures - and the commies confiscated her film. As you can imagine, those communist morons can't really compete with a modern capitalist young woman from the Czech Republic and her state-of-the-art technologies. She stored a memory card from her digital camera in her bra. Today, she is showing the alarming pictures of the "island of freedom" all around the world. Cuba has canceled various celebrations of the Czech national holidays and expelled or temporarily arrested many Czech citizens - the aristocrat Schwarzenberg and the politician Ivan Pilip (with his friend Filip Bubeník) are the two most well-known examples. You can try to liberate Pilip by shooting 50 Cuban agents here.

Friday, April 14, 2006

La Griffe du Lion: prison ratios
La Griffe du Lion has a new technical analysis of a sociological issue. He asks why the racial disparities behind bars look so different in liberal and conservative states. His answer is based on mathematics that is more or less equivalent to his previous analysis of women in science. The conservative states impose a lower threshold for being arrested - they tolerate less crime. This makes the groups of people behind bars less selective. Because the black crime Gaussian is broader and higher than the white one in the same way as the male math aptitude Gaussian is broader and higher than the female one, smaller selectivity is translated into a less dramatic ratio between the black and white percentages. It is therefore logical and inevitable that the racial disparity is more striking in the left-wing states. The identity of La Griffe du Lion remains a mystery to us.

Is George W. Bush a feminist?
David Goss has sent me an insightful article by Schlafly that starts with the announcement that the Bush administration is going to investigate universities with fewer women in math and science than feminists such as Barbara Boxer would like. Schlafly notices that even though Bush has been the president for more than five years, Bill Clinton's feminist policies are apparently still in force. She asks: Is Bush a feminist or just a gentleman who is intimidated by the feminists? At the physical level of policies, there is no real difference between the two answers. 171 wrestling teams have already been intentionally destroyed by these dumb policies, and math and science may follow. Schlafly explains how this mindless feminist mentality, based on a striking misunderstanding of the differences between men and women, can have a devastating effect on universities and beyond. There is of course not a shred of evidence of any discrimination, she writes: men are simply more interested in competitive sports, math, and science. Moreover, when it comes to muscle growth, testosterone is the key to success. After having explained how unreasonable the feminist approach is, she says that the Bush administration is ignoring one example of increasing gender disparity that can indeed have bad consequences: a decreasing percentage of male schoolteachers. With all my respect for George W. Bush, let me offer an obvious answer to Schlafly's basic question. Yes, Bush is a feminist and he in fact does think that women are brighter in many respects including science and math - and most discussions he has with the First Lady have to reinforce this belief. ;-)

Bert Schroer vs. path integral
Prof. Bert Schroer has publicized his essay in which he argues that there is something wrong with the path integrals and they should be universally replaced by algebraic methods. Because half of the Internet is going to decide that he must be right, at least in some sense, let me also post the correct answers to his doubts - which include a trivial assertion that his statements are nonsensical. The first couple of pages are filled with a content-free bitterness about the path integrals and an unsubstantiated promotion of algebraic quantum field theory: the kind of silly unphysical whining that all of us know very well from "Not Even Wrong" and other places on the Internet. The author is upset about the "string theory caravan" that does not support "great" ideas - such as the "great" idea of Prof. Schroer himself that the path integrals are bad. The first non-trivial statement appears on page 3.
Schroer essentially claims that the path integrals give a wrong result if you use them to describe a spinning top. The critical sentence is the following:

• The paradoxical situation consists in the fact that although the higher fluctuation terms (higher perturbations) are nonvanishing, one must ignore them in order to arrive at the rigorous result.

Wow. The path integral fails at higher orders, he says. Of course, this statement is complete nonsense. Path integrals are better, not worse, at computing loop effects, especially if one has to deal with non-Abelian gauge symmetries. Once the Faddeev-Popov ghosts are introduced, the path integral becomes the best formalism to calculate higher-order effects in such theories. Moreover, the path integral is also a superior approach for obtaining non-perturbative corrections such as instanton corrections. Path integrals also make the Lorentz symmetry of quantum field theories manifest, and they have other advantages, too.

Thursday, April 13, 2006 ...

Google calendar

A new service by Google is Google Calendar. You need to have a Google account - for example a Gmail account. With Gmail, you may also incorporate the calendar - with the list of things you have to do - into the corner of your Gmail inbox. The interface is based on a traditionally fresh, Google-like, no-nonsense environment. See Calendar help for more details. Incidentally, you will also be able to make Google searches using your voice and telephone.

Flux compactifications of M-theory and F-theory

Today we had an oral exam, some minor progress in the calculations of the black hole corrections, and I attended Cumrun Vafa's class, which is always a good opportunity to refresh one's knowledge of various things. He started with the Dijkgraaf-Vafa correspondence and finished with flux compactifications. I will write comments about Dijkgraaf-Vafa later, but let me start with the following:

Flux compactifications

As the Becker sisters explained, the compactification of M-theory on Calabi-Yau four-folds (which are eight-real-dimensional, leaving three large spacetime dimensions) actually requires nonzero values of the four-form field strength G4. It is because the eleven-dimensional action contains the terms of the form

• S = int C3 /\ ( G4 /\ G4 - I8(R) ) + ...

The first term is a tree-level Chern-Simons term needed for the classical supersymmetry of eleven-dimensional supergravity, while the second term, depending on the Riemann tensor R, may be viewed as a one-loop correction. Note that the one-loop terms are often determined independently of the UV details of physics, and M-theory is no exception.

Wednesday, April 12, 2006 ...

Richard Lindzen: Climate of Fear

Prof. Richard Lindzen of MIT is one of the world's most respected climate scientists or, if you at least allow me to use the alarmists' words, he is considered by them to be the world's most respectable climate skeptic. See also: Lindzen 2008: Climate Science: Is it currently designed to answer questions?

Today, in his Wall Street Journal article, he describes not only the reasons why the public should not believe the statements that carbon dioxide emissions are bringing us closer to armageddon, but especially the intense intimidation campaign that scientists who reach politically incorrect conclusions have to face.
One of the topics that Lindzen talks about is the double standards in the journals, where non-alarmist articles about the climate are commonly refused without review as being without interest. I have already learned how it works, which is why I recommended Steve McIntyre not to spend too much time trying to get his articles published in "mainstream" journals. But the main focus of Lindzen's discussion seems to be funding. Funding is something that is cut for all of those who indicate the obvious - namely that science offers no justification for bizarre policies such as the Kyoto protocol.

Harvard energy initiative

On Monday, we had a faculty lunch meeting at the Faculty Club, and one of the topics was the so-called "Harvard energy initiative". The short story is that a large amount of money was given to something described by these three words - and up to 10 new faculty positions are expected to be created - except that no one knows what "Harvard energy initiative" means or which people should be hired.

Tuesday, April 11, 2006 ...

Microsoft: competition for Google Scholar

Everyone knows the search engine that is used for more than 50 percent of the searches in the world. Many of us find another service comparably priceless: Google Scholar. That's a place where you can search through the full text of scientific articles in all fields you can imagine, and get the results sorted according to relevance, a criterion that includes the number of citations. In the 1980s, IBM was a very important company in the computer industry, but Microsoft took over. Is Google going to make Microsoft obsolete in a similar way? What will be the result of the Microsoft vs. Google competition? Well, the guys at Microsoft seem to be smarter than those at IBM 20 years ago, and they don't want to give up. So Microsoft is rolling out its own counterparts, and the counterpart of Google Scholar will be Windows Live Academic. It became available tonight before 9 p.m. Eastern time - but the website so far fails to give any scholarly results. Instead, I get the standard search results. Also, arXiv seems to be absent from their list of journals - the only journal with "arxi" in it is "Rethinking in Marxism".

Monday, April 10, 2006 ...

Elizabeth Lada: stars born in clusters

Elizabeth Lada from the University of Florida is an astronomer who is most famous for defending the statement that most stars are born in clusters. This statement has brought two communities that studied star formation - those who only wanted the overall rate and those who investigated individual cases microscopically - closer together. It was interesting to see a nice colloquium from an adjacent field - a field whose conclusions are slightly less theoretical, quantitative, universal, and principled than ours, but one that can offer nicer pictures. Some of the main messages of the talk are the following:

CDF excitement - press conference

This is what good P.R. looks like at Fermilab. ;-) Or is it more than good P.R.?

From: June Matthews

We've received advance word on some exciting new results from the CDF experiment at Fermilab, where Christoph Paus heads up the MIT effort. This is the wording: Fermilab will hold a press conference at 4:00 pm today (Central Time) with details on the precision measurement of extremely rapid transitions between matter and antimatter.
It has been known for 50 years that very special species of subatomic particles can make spontaneous transitions between matter and antimatter. In this exciting new result, CDF physicists measured the rate of these matter-antimatter transitions for the B sub s meson, which consists of the heavy bottom quark bound by the strong nuclear interaction to a strange anti-quark - a staggering rate that challenges the imagination: 200 billion times per second. There will be a live feed of the press conference available on the web.

Marc Kastner

The online press conference started at 5:00 p.m. Eastern Daylight Time or 2:00 p.m. Californian time and ended one hour later. Main content: they determined that, at the 99.5% confidence level, they have seen oscillations between matter and antimatter with a frequency of 17.33 plus minus something inverse picoseconds. The results are consistent with the Standard Model and place new upper bounds on flavor violation from new physics such as supersymmetry.

Harper under pressure: scrap Kyoto

While many other politicians have experienced pressure from the activists, Stephen Harper, the prime minister of Canada, is under pressure from a group of scientists who urge him to scrap the "pointless" Kyoto protocol. See the full letter and signatories here. They explain that "global climate change" is an emerging science and that the Kyoto treaty would not have been negotiated in the 1990s if the parties knew what we know today. The cliche "climate change is real" is a meaningless phrase used by the activists to fool the public into believing that a climate catastrophe is looming. Climate is changing all the time because of natural reasons, and the human impact still can't be disentangled from the natural noise.

Meanwhile, Rona Ambrose has reviewed the situation and concluded that the targets can't be met by Canada: it's impossible. The Canadian economy has recently been doing very well, which is of course very bad for similar anti-growth policies: the emissions are growing while they should be shrinking according to the protocol. I think that Canada itself should also honestly admit that if we hypothetically face warming, Canada will benefit from it. The goal should be to isolate the countries that are supposed to be the "losers" of the hypothetical warming and help them. And also to help those countries that face problems that are unrelated to warming, which is far more often the case. ;-) But help them not with the crazy egalitarian policies according to which the whole planet must be heated up or cooled down simultaneously, but with rational, focused, meaningful projects. The U.N. should allow Canada to change what its contributions will be, and the whole U.N. framework for climate change should be re-built on new principles.

See also: Kyoto hopes vanish. Rona Ambrose now intends to challenge the international focus on setting emission targets. I am sure she has enough intelligence and charm to do important things. Incidentally, the last link explains some proposed biomass projects that could actually make at least some sense, but even these things should be studied and planned rationally.

Prof. Bob Carter, a paleoclimate geologist, explains in the London Telegraph that the main problem with global warming is that it stopped in 1998.
Meanwhile, Al Gore has admitted that global warming is no longer a scientific or political issue: it is a moral or a religious issue, if you will, and Al Gore is a prophet.

Figure 1: The picture from the Boston Globe shows what the alarmists consider "balanced journalism" and fair reporting about the climate.

John Brodie - sad story

Today, the Rutland Herald offers a very sad story about John Brodie, whom many people, not only at Princeton University, Stanford University, and the Perimeter Institute, knew pretty well. John suffered from bipolar mental illness - the same disorder that Mary Winkler has been treated for - and jumped into a cold river on January 28th, 2006. Technically, his most well-known paper was his work with Amihay Hanany about brane boxes, but the paper one can't forget is the one he co-authored with Bernevig, Susskind, and Toumbas about the construction of the quantum Hall effect from D-branes. Via Not Even Wrong.

Sunday, April 09, 2006 ...

Readying a massive (nuclear) strike on Iran

Update: Iran claims to have shot down an unmanned airplane from Iraq on Sunday. In the Czech Republic, it is the #1 news story at the major news servers, but no one seems to care in the U.S.

According to the April 17th issue of The New Yorker and its investigative journalist Seymour Hersh, the White House is finalizing plans for a major air attack against selected targets in Iran. The situation has developed quite a bit during the last year. The theory behind these plans is that an attack is the only method to stop Mahmoud Ahmadinejad - a modern potential counterpart of Adolf Hitler, as the White House officials describe him in private discussions - from developing nuclear weapons and using them against Israel and, with the help of terrorists, against the whole civilized world.

The attacks are meant to humiliate the Iranian religious government and to make the people overthrow it. I personally don't believe that the bombing would encourage Iranians to follow America. I did not believe similar idealistic predictions in Iraq either. The support of Hussein was clearly significant. Environmentalists who like sustainability should like the bombing campaign because the "coercion" attacks will be "sustained". Another theory is that Ahmadinejad sees the West as "wimps who will cave in". Some sources argue that it is a public misconception that Bush has been mostly thinking about Iraq since 9/11 - the main and more ambitious ideas were always about Iran. Even Quantoken agrees that the real danger is Iran.

The White House is secretly communicating with the members of the U.S. Senate, and no one really objects to the idea of a war. There is no international opposition either, because no one really likes the regime of Iran, Hersh argues. Even ElBaradei agrees that the Iranian leaders are 100% certified nutcases. On the other hand, no other country - not even Great Britain - is going to actively support nuking.

Some plans are already underway. Some of the Iranian nuclear facilities are deep underground (25 meters), and the Pentagon believes that they will require a bunker-buster tactical nuclear weapon such as the B61-11, the "earth penetrating" thermonuclear daughter of the old B61-7 gravity bomb, developed in 1997 under Clinton. The energy from this key nuclear product is able to penetrate up to 100 meters of soil (not rock), and the bomb explodes 6 meters beneath the surface. One of the main targets is Natanz, 300 kilometers south of Tehran.
This particular plan is not technologically new, because the U.S. was thinking about bombing a similar facility near Moscow in the early 1980s. Rather detailed plans already exist for how big a part of the Iranian air force has to be eliminated and for what to do with the mess that would probably emerge in Iran and Southern Iraq. Controversy exists about how many places would have to be bombed and whether the nuclear option is useful. I definitely recommend you to read the article.

What about the Reference Frame? I am always afraid of a war - and I am always repelled by its obvious negative consequences. On the other hand, there seems to be a rather clear danger in the air (although I can't rigorously prove it), and if this operation became necessary and remained a job for the air forces and avoided ground battles, I would be moderately optimistic, because all these operations in the past were rather successful. Incidentally, the U.S. troops in Iran will mark the facilities with lasers to increase the accuracy of the operation and reduce the civilian casualties.

Nuclear weapons have been silent for 60 years, but they're not really a hot new technology. In high school, during the first Gulf War, our classes were often cancelled and we were watching. Most of the boys in our class were truly impressed. Whenever the U.S. technology edge is being displayed, one can always see the natural authority of America, especially if a maximum effort is made to minimize civilian casualties.

The Reference Frame recommends all readers in Iran - and everyone they know - to move at least 50 kilometers away from the neighborhood of the potential targets, especially Natanz (plus the other targets enumerated in Wikipedia in the link at the bottom). We also recommend all citizens of Iran to start a revolution and establish democracy and freedom in Iran.

This blog can't guarantee that the story from The New Yorker is accurate, but there are very good reasons to think that it might be true. Hersh, the author of the article, won a Pulitzer prize in 1970 for uncovering a massacre in Vietnam by the U.S. troops, and he was also the reporter who broke the Abu Ghraib prison scandal. That's a pretty good record, I think. He likes to expose things that look anti-Bush, but whether his new article is really anti-Bush remains to be seen. The contingency planning is obviously what many people in the Pentagon are paid for, but I can tell you neither how many decisions have actually been made, nor whether such a thing could work out as smoothly as some successful operations in the past.

The hypothetical bombing poses many dilemmas - moral, strategic, tactical, psychological, economic - and question marks, but the psychological pressure could not be that bad. The Reference Frame also recommends Mahmoud Ahmadinejad to establish democracy, give up nuclear ambitions, and resign. Such a reasonable decision could hypothetically save millions of lives.
Pseudopotential

Figure: Comparison of a wavefunction in the Coulomb potential of the nucleus (blue) to the one in the pseudopotential (red). The real and the pseudo wavefunction and potentials match above a certain cutoff radius r_c.

In physics, a pseudopotential or effective potential is used as an approximation for the simplified description of complex systems. Applications include atomic physics and neutron scattering. The pseudopotential approximation was first introduced by Hans Hellmann in 1934.[1]

Atomic physics

The pseudopotential is an attempt to replace the complicated effects of the motion of the core (i.e. non-valence) electrons of an atom and its nucleus with an effective potential, or pseudopotential, so that the Schrödinger equation contains a modified effective potential term instead of the Coulombic potential term for core electrons normally found in the Schrödinger equation. The pseudopotential is an effective potential constructed to replace the atomic all-electron potential (full-potential) such that core states are eliminated and the valence electrons are described by pseudo-wavefunctions with significantly fewer nodes. This allows the pseudo-wavefunctions to be described with far fewer Fourier modes, thus making plane-wave basis sets practical to use. In this approach usually only the chemically active valence electrons are dealt with explicitly, while the core electrons are 'frozen', being considered together with the nuclei as rigid non-polarizable ion cores. It is possible to self-consistently update the pseudopotential with the chemical environment that it is embedded in, having the effect of relaxing the frozen-core approximation, although this is rarely done.

First-principles pseudopotentials are derived from an atomic reference state, requiring that the pseudo- and all-electron valence eigenstates have the same energies and amplitudes (and thus densities) outside a chosen core cutoff radius r_c. Pseudopotentials with a larger cutoff radius are said to be softer, that is, more rapidly convergent, but at the same time less transferable, that is, less accurate at reproducing realistic features in different environments.

Motivations:

1. Reduction of basis set size
2. Reduction of the number of electrons
3. Inclusion of relativistic and other effects

Approximations:

1. One-electron picture.
2. The small-core approximation, which assumes that there is no significant overlap between core and valence wavefunctions. Nonlinear core corrections[2] or "semicore" electron inclusion[3] deal with situations where the overlap is non-negligible.

Early applications of pseudopotentials to atoms and solids, based on attempts to fit atomic spectra, achieved only limited success. Solid-state pseudopotentials achieved their present popularity largely because of the successful fits by Walter Harrison to the nearly free electron Fermi surface of aluminum (1958) and by James C. Phillips to the covalent energy gaps of silicon and germanium (1958). Phillips and coworkers (notably Marvin L. Cohen and coworkers) later extended this work to many other semiconductors, in what they called "semiempirical pseudopotentials".[4]

Norm-conserving pseudopotential

Norm-conserving and ultrasoft are the two most common forms of pseudopotential used in modern plane-wave electronic structure codes.
They allow a basis set with a significantly lower cutoff (the frequency of the highest Fourier mode) to be used to describe the electron wavefunctions, and so allow proper numerical convergence with reasonable computing resources. An alternative would be to augment the basis set around the nuclei with atomic-like functions, as is done in LAPW.

The norm-conserving pseudopotential was first proposed by Hamann, Schlüter, and Chiang (HSC) in 1979.[5] The original HSC norm-conserving pseudopotential takes the following form:

V = sum_l V_l(r) P_l

where P_l projects a one-particle wavefunction, such as one Kohn-Sham orbital, onto the angular momentum component labeled by l, and V_l(r) is the pseudopotential that acts on the projected component. Different angular momentum states then feel different potentials; thus the HSC norm-conserving pseudopotential is non-local, in contrast to a local pseudopotential, which acts on all one-particle wavefunctions in the same way.

Norm-conserving pseudopotentials are constructed to enforce two conditions.

1. Inside the cutoff radius r_c, the norm of each pseudo-wavefunction is identical to the norm of its corresponding all-electron wavefunction:[6]

int_0^{r_c} |psi_PS(r)|^2 r^2 dr = int_0^{r_c} |psi_AE(r)|^2 r^2 dr

where psi_AE and psi_PS are the all-electron and pseudo reference states for the pseudopotential on the atom.

2. The all-electron and pseudo wavefunctions are identical outside the cutoff radius r_c.

Ultrasoft pseudopotentials

Ultrasoft pseudopotentials relax the norm-conserving constraint to reduce the necessary basis-set size further, at the expense of introducing a generalized eigenvalue problem.[7] With a non-zero difference in norms,

q_ij = <psi_i_AE|psi_j_AE> - <psi_i_PS|psi_j_PS>   (integrated inside r_c)

a normalised eigenstate of the pseudo Hamiltonian now obeys the generalized equation

H |psi> = eps S |psi>

where the overlap operator S is defined as

S = 1 + sum_ij q_ij |beta_i><beta_j|

and the |beta_i> are projectors that form a dual basis with the pseudo reference states inside the cutoff radius and are zero outside it. A related technique[8] is the projector augmented wave (PAW) method.

Fermi pseudopotential

Enrico Fermi introduced a pseudopotential to describe the scattering of a free neutron by a nucleus.[9] The scattering is assumed to be s-wave scattering, and therefore spherically symmetric. Therefore, the potential is given as a function of radius r:

V(r) = (2 pi hbar^2 / m) b delta(r - R)

where hbar is the Planck constant divided by 2 pi, m is the mass, delta is the Dirac delta function, b is the bound coherent neutron scattering length, and R the center of mass of the nucleus.[10] The Fourier transform of this delta-function leads to the constant neutron form factor.

Phillips pseudopotential

James C. Phillips developed a simplified pseudopotential while at Bell Labs, useful for describing silicon and germanium.

References

1. Schwerdtfeger, P. (August 2011), "The Pseudopotential Approximation in Electronic Structure Theory", ChemPhysChem, doi:10.1002/cphc.201100387
2. Louie, Steven G.; Froyen, Sverre; Cohen, Marvin L. (August 1982), "Nonlinear ionic pseudopotentials in spin-density-functional calculations", Physical Review B 26 (4): 1738-1742, doi:10.1103/PhysRevB.26.1738
3. Reis, Carlos L.; Pacheco, J. M.; Martins, José Luís (October 2003), "First-principles norm-conserving pseudopotential with explicit incorporation of semicore states", Physical Review B 68 (15): 155111, doi:10.1103/PhysRevB.68.155111
4. M. L. Cohen, J. R. Chelikowsky, "Electronic Structure and Optical Spectra of Semiconductors" (Springer Verlag, Berlin 1988)
5. Hamann, D. R.; Schlüter, M.; Chiang, C. (1979), "Norm-Conserving Pseudopotentials", Physical Review Letters
43 (20): 1494-1497, doi:10.1103/PhysRevLett.43.1494
6. Bachelet, G. B.; Hamann, D. R.; Schlüter, M. (October 1982), "Pseudopotentials that work: From H to Pu", Physical Review B 26 (8): 4199-4228, doi:10.1103/PhysRevB.26.4199
7. Vanderbilt, David (April 1990), "Soft self-consistent pseudopotentials in a generalized eigenvalue formalism", Physical Review B 41 (11): 7892-7895, doi:10.1103/PhysRevB.41.7892
8. Kresse, G.; Joubert, D. (1999), "From ultrasoft pseudopotentials to the projector augmented-wave method", Physical Review B 59: 1758, doi:10.1103/PhysRevB.59.1758
9. E. Fermi (July 1936), "Motion of neutrons in hydrogenous substances", Ricerca Scientifica 7: 13-52
10. Squires, Introduction to the Theory of Thermal Neutron Scattering, Dover Publications (1996), ISBN 0-486-69447-X
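To make the norm-conservation condition (1) above concrete, here is a minimal numerical sketch in Python. The radial functions, grid, and cutoff below are illustrative assumptions invented for this sketch - it is not a real pseudopotential construction from any of the cited papers:

```python
# Toy check of the norm-conservation condition: the pseudo-wavefunction must
# carry the same norm as the all-electron one inside the cutoff radius r_c.
# All functional forms below are illustrative assumptions, not real data.
import numpy as np

r = np.linspace(1e-6, 12.0, 6000)   # radial grid (arbitrary units)
dr = r[1] - r[0]
rc = 3.0                            # core cutoff radius

# Toy "all-electron" 2s-like radial function (one node) and a smooth,
# nodeless pseudized replacement that matches it at and beyond r_c.
psi_ae = (1.0 - r / 2.0) * np.exp(-r / 2.0)
value_at_rc = psi_ae[np.argmin(np.abs(r - rc))]
psi_ps = np.where(r < rc, value_at_rc * (r / rc) ** 2, psi_ae)

def norm_inside(psi):
    """Integral of |psi|^2 r^2 dr from 0 to r_c (the quantity the condition fixes)."""
    mask = r <= rc
    return np.sum(psi[mask] ** 2 * r[mask] ** 2) * dr

print("all-electron norm inside r_c:", norm_inside(psi_ae))
print("pseudo       norm inside r_c:", norm_inside(psi_ps))
# A genuine norm-conserving construction would adjust the pseudo form until the
# two numbers coincide; this naive (r/rc)^2 guess will generally violate the
# condition, which is exactly what the HSC-style procedure corrects.
```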
Thursday, 8 July 2010

Why I am here

Hello cyber-world! I decided to start this blog to share my pithy ramblings that are the result of hours spent perusing the web-er-tubes. I do not know where this blog will lead, who will choose to follow or whether anyone will, in fact, read it. I do know, however, what motivated me to start this. Let me share with you... are you settled? Okay.

First of all, I am an avid reader of the ScienceBlogs writer, PZ Myers (not to be confused with his scary alter-ego of misspellings, PZ Meyers) and his blog, Pharyngula. Recently, there was an interesting discussion on why there is a lack of prominent female sceptics. Now, the term 'sceptic' can be applied to many things: whether you are a sceptic of religion, a sceptic of faith, a sceptic of the world, a sceptic of miracles, or of any other possible thing you want to be sceptical of, the current societal infatuation with Twilight included. I like to think that the world of science is the ultimate haven for sceptics. We were generally led to science by a certain curiosity about how the world around us actually works. And I mean, ACTUALLY works. None of this religion-mumbo-jumbo. I do not know about you, but I find the concept of quark-gluon plasmas much more fascinating than a big man with a great beard pointing his finger and saying 'Me want Universe!' (and you want to know something even better? There's this thing called EVIDENCE for plasmas! Shhhh).

So I am here to lend my voice to the world of female sceptics. Indeed, there are few women in my field and I imagine even fewer who want to risk exposing themselves to the cruel world of the blogosphere, but I can take it. If I can be tattooed 15 times I can certainly take on the world of the interwebs. My personal journey with science has led me to researching the existence of gravitational waves, with a dollop of astrophysics and a soupçon of statistics. This was not a journey of enlightenment, of bright lights and shiny epiphanies, but a tiring one that consisted of fits of anger, holes in my wall the size of physics textbooks, morbid cartoons of Cauchy being eaten by a T-Rex, sweat-laced homework assignments, tears, rage, frustration and many other horrible things. However, it also resulted in some amazing bonding moments with fellow students, the beauty of Cauchy's Integral Theorem, the shininess of finally solving the Schrödinger equation, more science fiction marathons than one would think possible and the amazing feeling of finally being handed the diploma by the same professors who made my life miserable (but very educated) for four years. Four years; it flies by.

So that is me and that is why you should trust me. Also, there are not enough female, tattooed scientists out there to show other girls who may not fit the stereotype that we do, actually, exist and would welcome them with open arms into this world! I am here to share my thoughts with you, oh Reader. We are going to have a wild adventure together analysing and tearing apart the world around us, with occasional rants on my behalf about my research (all fellow mourners welcome). I may also attempt to teach you some cool things, if you want to learn. Want to join? It should be fun! So pull up a browser, pour yourself some coffee and let the discussion ensue. To emulate the immortal words of The Doctor, I am most definitely a mad woman with an internet account. Let's have an adventure!

1. You are so wonderful :) :) :)
2. YAY! The intertubes need more sexy skeptics!
Friday, March 31, 2017

A friend challenged my opinion about relativity by quoting time dilation. Here is my response to him.

I am not superstitious, but I apply my mind to whatever I read before accepting or rejecting it. You have resorted to circular logic here: since relativity is true, time dilation must really be happening for the high-energy muons; and then you claim that time dilation proves relativity! But before I explain what you have read and believed, let me tell you something about time dilation itself. The so-called experimental proof by atomic clocks has failed to prove time dilation. Hence a fake report was circulated by forging figures to "prove" time dilation. The readings are still available in the Naval Archives, and you can verify for yourself that the initial reports of the readings were changed in the final report. I challenge you to prove me wrong. The so-called time dilation in GPS systems is not at all based on the time dilation of SR. It is due to refraction. Light is known to slow down or speed up depending upon the density of the medium through which it travels. The atmosphere has different layers with varying density. Thus, light slows down while coming through such layers. This does not prove time dilation.

High-energy muons, or cosmic-ray muons, are produced at about 16000 meters above the ground as cosmic radiation from outer space collides with the atoms of the Earth's atmosphere. Apparently scientists have observed that significant numbers of these cosmic-ray muons (which are produced in the upper crust of the Earth's atmosphere) survive to reach ground level. In other words, a significant proportion of cosmic muons are able to travel a distance of 16000 meters within their life span. But scientists, from their experiments on laboratory muons, found that muons live for only about 2 microseconds and travel at a speed of 0.9c. So according to them, a muon can only travel about 600 meters in its lifetime. Then how could the cosmic-ray muons, produced in the upper crust of the Earth's atmosphere, be able to travel a distance of 16000 meters and reach the ground? The only possible explanation for this scenario, according to the superstitious, is time dilation and length contraction.

But here scientists have only noted the life span and speed of low-energy muons produced in particle accelerators. How can the same be considered true for the high-energy cosmic-ray muons? For example, imagine that we have seen a boy traveling a distance of 100 meters in 10 seconds. We calculate his speed as 10 meters/sec. Now if we see his twin brother traveling 200 meters in 10 seconds, we would not say that the second boy has experienced time dilation or space contraction. Rather, we would simply say that the second boy has run faster than the first boy. Or else we would have to suspect that our measurement of time was not correct in one or both scenarios. And the same must be the case with the muons.

The probability that a muon decays in some small time interval dt is λdt, where λ is a constant "decay rate". The "lifetime" τ of a muon is the reciprocal of λ, τ = 1/λ. This simple exponential relation is typical of radioactive decay. You do not have a single clump of muons whose surviving number can be easily measured. Instead, you detect muon decays from muons that enter your detector at essentially random times - typically one at a time. Their decay time distribution has a simple exponential form.
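Before unpacking what this distribution means, here is a quick Monte Carlo sketch of the exponential decay law. The inputs (τ of about 2 microseconds, v = 0.9c, a 16000-meter production altitude) are simply the figures quoted above, used here for arithmetic only, without taking a side in the argument:

```python
# Monte Carlo sketch of exponential muon decay, using the figures quoted in
# the surrounding text (assumed illustrative values, not new measurements).
import numpy as np

rng = np.random.default_rng(0)
tau = 2.0e-6                 # quoted lifetime in seconds (tau = 1/lambda)
v = 0.9 * 3.0e8              # quoted speed, m/s
distance = 16000.0           # quoted production altitude, m

decay_times = rng.exponential(tau, size=1_000_000)  # D(t) = (1/tau) exp(-t/tau)
travel_time = distance / v                          # ~59 microseconds

print(f"travel time:                  {travel_time * 1e6:.1f} microseconds")
print(f"simulated surviving fraction: {np.mean(decay_times > travel_time):.2e}")
print(f"analytic exp(-t/tau):         {np.exp(-travel_time / tau):.2e}")
# With an unmodified 2-microsecond lifetime, the analytic survival probability
# is of order 1e-13, so a sample of a million simulated muons will typically
# contain none - which is precisely the puzzle the paragraphs above set up.
```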
Decay time distribution D(t) means that the time-dependent probability that a muon decays in the time interval between t and t + dt is given by D(t)dt. Because the muon decay time is exponentially distributed, it does not matter that the muons whose decays we detect are not born in the detector but somewhere above us in the atmosphere. An exponential function always "looks the same" in the sense that whether you examine it at early times or late times, its e-folding time is the same. Muons are stopped in a large block of scintillator, where they subsequently decay into an electron/positron and two neutrinos. A short light pulse is produced by the stopping muon, which is detected and amplified by a photomultiplier tube. When the muon decays, a second pulse is produced by the emitted electron or positron. The signals from the photomultiplier are fed into an electronic circuit which determines the time delay between the two pulses.

Under certain conditions (gravity, energy state, environment, etc.), why could a muon not travel faster or live longer before it decays into smaller particles? We know that electrons can travel at different velocities. Then why do scientists insist that all muons travel at the same speed, and introduce absurd notions like time dilation/space contraction when they see cosmic muons traveling a much longer distance than the laboratory muons in their lifetime? Why do the muons produced in the laboratory not experience the same time dilation and length contraction, if their speed is the same as that of the cosmic-ray muons? And if they did, why have we not seen the laboratory muons travel the same 16000 meters as their cosmic counterparts? And if they travelled the 16000-meter distance in their life span of 2 microseconds, what would be their speed?

Thursday, March 30, 2017

A friend wanted to know what black holes are made of. If they form from the collapse of a huge star which runs out of hydrogen and helium, then after it explodes in a supernova explosion, the rest of the gases is blown away and what remains collapses into a core of heavy elements. What happens to matter under extreme conditions that we cannot create on Earth to watch closely? Very high pressures and very high and low temperatures are generated. Let us keep certain observations in mind.

1) Black holes are known to have intense magnetic fields. It is known that when heated above 176° Fahrenheit (80° Celsius), magnets will quickly lose their magnetic properties. The magnet will become permanently demagnetized if exposed to these temperatures for a certain length of time or heated at a significantly higher temperature (the Curie temperature). Modern magnet materials do lose a very small fraction of their magnetism over time. For Samarium Cobalt materials, for example, this has been shown to be less than 1% over a period of ten years. Thus, to have the intense magnetic field, the interior of a black hole cannot be hot. However, the modern notion is that it must be tremendously hot - millions of degrees Kelvin - which is self-contradictory.

2) The event horizon of a black hole is said to be a one-way filter: anything can enter it, but nothing can leave it. The concept of the event horizon itself is wrong. A light pulse expanding in two dimensions plus time will sketch concentric circles, and not the three-dimensional structure you have shown. If we add the third dimension, then it will sketch concentric spheres of increasing radius, and not a time cone.
This has misguided science for a long time and needs correction. We see when light from a radiating body reaches our eyes. It has no colors. Light itself is not visible unless it meets our eye to show its source. We see only the source and not the light proper. We see color only when reflected light meets our eyes. All radiation moves in curved paths, i.e., waves within a fixed band. But once it is received in an apparatus, including the eye, it behaves as a straight line. In both cases, it does not behave like a cone.

3) But that notion is now changing. Black holes do not have such "event horizons" according to Hawking, confirmed by NASA. In that case, the event horizon would, in theory, become smaller than the apparent horizon. Hawking's new suggestion is that the apparent horizon is the real boundary. The absence of event horizons means that there are no black holes - in the sense of regimes from which light cannot escape to infinity. This was suggested earlier by Abhas Mitra, but his solution - the Magnetospheric Eternally Collapsing Object (MECO) - is again wrong. If it is magnetic, it cannot be hot, and nothing can collapse eternally.

4) Black holes are detected indirectly by the intense x-rays emitted near them. When material falls into a black hole from a companion star, it gets heated to millions of degrees Kelvin and accelerated. The superheated materials emit X-rays, which can be detected by X-ray telescopes. But the difference in origin between x-rays and gamma rays is that x-rays are emitted by the negatively charged outer electron shells of atoms, whereas gamma rays are emitted by the positively charged nucleus. There is no reason to believe that in a black hole it happens otherwise. The nature of negative charge is to confine positive charge. Thus, there must be a positive charge in the core of the black hole, which should not be hot. The only possibility is that it has to be antimatter.

5) A black hole is said to be a very simple object: it has only three properties: mass, spin and electrical charge. Because of the way in which black holes form, their electrical charge is said to be probably zero. But then, charge-neutral objects do not emit x-rays. And if the radiation were coming from the positively charged core, it should be gamma rays and not x-rays.

6) An object with immense mass (hence gravitational pull), like a galaxy or black hole, between the Earth and a distant object could bend the light from the distant object into a focus – gravitational lensing. If a visible star or disk of gas has a "wobbling" or spinning motion, and there is no visible reason for this motion, and the invisible reason has an effect that appears to be caused by an object with a mass greater than three solar masses (too big to be a neutron star), then it is possible that a black hole is causing the motion. Scientists then estimate the mass of the black hole by looking at its effect on the visible object. For example, at the center of the Milky Way, we see an empty spot where all of the stars are circling around as if they were orbiting a really dense mass. That is where the black hole is.

7) Black holes have spin. But since a black hole is constituted of an antimatter core, it cannot have normal spin, but internal spin, which would make entry into a black hole a winding path. Interestingly, our ancients have described some such object, which has a winding path to the core, which is lighted but not hot, and any matter coming into contact with it gets annihilated.
They describe the object by various names like Shilocchaya (meaning compact object) and Guha (meaning visibility from within), whose center was described as negatively charged and called Swayamprabha (meaning self-illuminated). If we compare this description with the fact that the centers of galaxies harbor black holes, we come to an interesting conclusion. Think of a charge-neutral object coming near a positively charged object. The part of the charge-neutral object facing the positively charged object suddenly develops a negative charge and the other end a positive charge, so that the negative charge, which generally confines the positive charge, is now itself confined between positive charges. This in turn leads to a reaction involving the release of high energy and a further realignment restoring balance. Such a thing happens inside atoms continuously, involving protons and neutrons. The extra energy released appears as W bosons with a mass of 80.385 +/- 0.015 GeV/c^2, even though the masses of protons and neutrons are of the order of 938.28 MeV/c^2 and 939.57 MeV/c^2 respectively.

Conclusion: Black holes are macro equivalents of neutrons.

Saturday, March 04, 2017

We are appalled by your great wisdom. Being a Professor devoted to spreading education and eradicating ignorance, we beg you to be gracious enough to consider us as a student and kindly explain to us the basic foundations of "Orch OR" and the meaning of space-time and its fabric. Since you have published the paper with a renowned physicist, your insight into the subject must be great, and it will be a novel experience to learn physics from a Professor of Psychology who is an authority on physics also. Let us discuss "Orch OR" in the first half, after which we will discuss physics.

In your paper "Quantum computation in brain microtubules? The Penrose-Hameroff 'Orch OR' model of Consciousness", you begin with: "Potential features of quantum computation could explain enigmatic aspects of consciousness. The Penrose-Hameroff model (orchestrated objective reduction: 'Orch OR') suggests that quantum superposition and a form of quantum computation occur in microtubules - cylindrical protein lattices of the cell cytoskeleton within the brain's neurons. Microtubules couple to and regulate neural-level synaptic functions, and they may be ideal quantum computers because of dynamical lattice structure, quantum-level subunit states and intermittent isolation from environmental interactions".

Only the scaling up or down of digitized particles, or the result of measurement of their interactions, can be computed. If "potential features of quantum computation could explain enigmatic aspects of consciousness", then do you mean consciousness can be digitized? There is no proof in support of this view. Though our sense organs are digitized, they are individually meaningless - without mixing, there cannot be awareness of any object. Let us take the example of a rose. The eyes see only reflected light, which gives the perception of color. When both eyes see the color from different angles, our sense of touch gives the contrasting features to indicate various depths, which are amalgamated as form. The nose gives the fragrance. Our sense of taste (which can be meaningful only after its object is transformed to a fluid state) gives the sense of glow (with liquid content and not dryness). Finally, our recollection of a past experience where we heard people calling a similar object a rose gives us the perception of a rose.
Without the mixing of all digitized inputs, there will be no perception or awareness, which is called consciousness. Thus, who or what represents the "mixer" in your model? When you talk about the "enigmatic aspects of consciousness", how can you measure or compute them? An enigma is something mysterious or difficult to understand. If it is mysterious, it is not known. If it is difficult to understand, you cannot be sure that what you assume is correct. Then how can you compute something unknown, or something about which you are not sure? Further, what are the other aspects of consciousness? Giving a partial explanation of something is not enough. You must list out all aspects to avoid ambiguity and misinterpretation.

We have explained in later paragraphs that, except for the state measured at a time 't', all other unknown states are combined and are called the superposition of states. Then how can you compute the unknown states, or the "quantum superposition and a form of quantum computation (that) occur in microtubules"? Penrose's quantum gravity formulations are not widely accepted in the scientific community, and to date there is no definite proof of gravity being quantized – gravitons have never been discovered. Thus, the whole thing, including his theory, is reduced to speculation. Can speculation be science?

When you say: "Microtubules couple to and regulate neural-level synaptic functions, and they may be ideal quantum computers because of dynamical lattice structure, quantum-level subunit states and intermittent isolation from environmental interactions", you appear to relate microtubules to consciousness. But in a dead body, the microtubules do not function. Then how could you deduce theories of consciousness from something which by itself is not conscious? All operations involve energy. The moment the vital energy leaves the body, all organs, including the microtubules, DNA/RNA, neurons, genes, etc., become non-functional. Thus, the vital energy would appear to be responsible for consciousness. But energy is mechanical. Thus, it cannot explain consciousness. How do you explain this paradox?

When you talk about "orchestrated objective reduction", you clarify it as "implementing multiple computations simultaneously, in parallel, according to quantum linear superposition". You admit that "The main obstacle to realization of quantum computation is the problem of interfacing to the system (input, output) while also protecting the quantum state from environmental decoherence. If this problem can be overcome, then present day classical computers may evolve into quantum computers".

We have described the "workings of the human mind" in terms of contemporary information technology as follows: Stuart Hameroff says: "feelings drive evolution, not survival of genes". In one of the papers we edited, we had written as follows: There is a proverb: "Prayojanamanuddishya Na Mandopi Pravartate". The meaning is that no one does anything without necessity. What is this necessity? A necessity (प्रयोजन) is that which propels everyone and everything to initiate an effort (येनप्रयुक्तः प्रवर्त्तते). Before someone does anything, he/she feels a deficiency somewhere which needs to be addressed. If he/she has the knowledge (ज्ञानम्) of any mechanism to address the deficiency, he/she feels the need (इच्छा) for that thing to be done (यमर्थमभीप्सन् जिहासन् वा कर्म्मारभते). Then only does he/she put that knowledge to use to execute the needed action (कृति).
This principle applies to everyone, for every action, using every technology (सर्वे प्राणिनः सर्वाणि कर्म्माणि सर्वाश्चविद्या व्याप्ताः). Thus, mind and intelligence, which propel towards action due to freewill, are important in conscious efforts. We have already explained in detail that mind is not a computer, though there are similarities, and that consciousness is different from mind and intelligence. Even for the Jeeva, it is the Observer of the "feelings like happiness, pains, desire, attachment, repulsion, etc". Thus, it has "knowledge" of all these, which propels it into action by releasing or applying the necessary energy (the mechanism of observation), which is confined in the body (the observed), which is the base for such experiences. The sensory agencies, including the objects or processes through which they function, such as DNA, microtubules, etc., are the instrumentalities used in the mechanism of observation. Modern scientists focus only on these instrumentalities, ignoring the Observer. The Jeeva is characterized by its response to desire, repulsion, effort due to freewill (not mechanical motion, which is a sign of inertness), feelings of happiness and pain, and knowledge or awareness (इच्छा-द्वेष-प्रयत्न-सुख-दुःख-ज्ञानान्यात्मनो लिङ्गम्). None of the agencies discussed by Hameroff and others, including DNA, microtubules, etc., can explain these. They do not have a clear idea about "mind", "feelings", etc., which we discussed in detail in our papers for the earlier conferences.

We frequently compare the mind with RAM and the brain with the HDD. The mind supports sensory instruments and reports to intelligence, like RAM supports applications (tasks). RAM has volatile memory and hangs from time to time if overloaded. Similarly, the mind goes to sleep if overworked. Intelligence is like the CPU, which does the processing of all sensory inputs. Just like the CPU cannot execute a program that is "not on the disc" and has not been loaded into RAM, intelligence cannot act without mind. If memory speed is less than the FSB speed, it takes too long to fetch an instruction or an operand. Similarly, the mind shows dullness or brightness based on its species-specific speed. Just like the CPU and RAM differ in processing capability (arithmetic dexterity) and storage capacity respectively, different species show different levels of behaviour. These are input, memory, processing and output related, and not perception related (as in "I know", or happiness, pain, desire, etc.).

The vital energy that starts breathing, which continues perpetually, is like the power supply (electricity provided by a battery). The first breath is like the BIOS chip, which boots the computer and searches for and loads the OS into RAM from ROM that cannot be modified; this is equivalent to the memory content of the newborn (such as crying to draw the attention of others when it is uncomfortable, or suckling the nipple when it is brought near its mouth when hungry, and many such first-time behaviours which have not been experienced by it since birth). First breathing is like the first booting of the computer. Like Consciousness, the OS is the same for all computers, but the BIOS varies from computer to computer. Similarly, consciousness in all living beings exhibits itself through DNA coding, which is species-specific. It is a program semi-permanently stored in one of the main chips. The OS creates virtual memory in the HDD by creating a page-file when the system runs out of RAM.
Similarly, we retrieve more recollections when there is greater connectivity among different regions of the brain. Sometimes "over-clocking" boosts the system's speed. Similarly, we suddenly have bright ideas. More RAM directly increases the number of applications that can run simultaneously, gives faster loading times, faster boot-up, and an overall greater boost in all aspects. Greater brain size and surface area (creases) does the same for living beings. The better the CPU, the more information can be processed at a time. Similarly, better intelligence can take faster decisions. The better the HDD, the faster information can be passed on to the processor. The bigger the HDD, the more information can be stored. The bigger the brain surface area, the faster and better operations can be performed. The CPU processes information in computers using logic gates. Intelligence does the same thing through sensory agencies. The CPU directs RAM to do what is important. RAM can provide inputs, but cannot directly take decisions. Intelligence takes decisions based on inputs provided by the mind only. When switched off, RAM becomes empty. The CMOS battery keeps the CMOS chip alive even when the computer is turned off. Similarly, intelligence remains active even in deep sleep.

This way, macroscopic phenomena are connected to the brain's known neural activity. But when you write in your paper "macroscopic quantum phenomena", we are at a loss. If it is macroscopic, it cannot be quantum. If both are the same, both these terms are superfluous. There are differences between the brain's software and computer software. A computer can simultaneously test for more than one condition or execute multiple commands. But the brain cannot do so. It follows a sequence of logical efforts first and knowledge of such efforts later. Computers run on standard/special programs, which are soft, i.e., flexible enough to be instantly reprogrammed. These are put onto the hardware to become operational. Similarly, the body matter, including the bacteria, neurons, DNA, microtubules, etc., is the hardware that operationalizes life's software. But who writes the program? Only conscious beings can initiate action based on freewill. It is different from motion, which is a mechanical reaction. Thus, we have to admit a super consciousness outside all mechanical devices, including robots. Since the human mind is like a computer, it cannot write its own program.

Self-reproduction is a mechanical process. There is no self-reproduction in consciousness (the object or mechanism of perception may change, but one perception cannot be differentiated from another), though linked information retrieval may give other notions, such as replication. We request you to kindly educate us on this issue. You propose: "microtubules within the brain's neurons are viewed as self-organizing quantum computers". Self-organization is a process where some form of overall order or coordination arises out of the local interactions between smaller component parts of an initially disordered system. The process of self-organization can be spontaneous, and it is not necessarily controlled by any auxiliary agent outside of the system. This would make the neurons function without any auxiliary agent outside of the system. Then how do you explain the dead neurons, and how do you differentiate between the two types of neurons? Without clarifying the basis of your theory, the deductions or details become a meaningless tryst with fiction.
As we understand from our little knowledge of physics, space and time arise out of our perception of sequence and interval. When the sequence of objects and their interval are involved, we call the interval space. When events are involved, we call their interval time. Measurement is a process of comparison between similar quantities. For measuring space and time, we take easily intelligible and fairly repetitive distances and events to devise a unit. For example, a day or a year is a fairly repetitive and easily intelligible event. We take these as time units and sub-divide them to get the second, or multiply them by the speed of light to get the light year. We design clocks to synchronize their ticks with the subdivision called the second. Similarly, we design scales of unit interval between objects and take these as the unit for the measurement of distance. The interval between any two objects is compared – scaled up or down – with this unit. The result depicting the interval is called space through alternative symbolism, since space itself has no identifying characteristics and cannot be measured in the absence of objects.

The sequential arrangements of objects are depicted through coordinates. For example, the actual distance between two objects or points may not always reflect their separation (geodesics). For this purpose, various coordinate systems, such as Cartesian and polar coordinates, are used. Dimension is not the same as direction. It describes the interface between the internal structural space and the external relational space with other objects, which remains invariant under mutual transformation. The result of measurement is always related to a time t, and is frozen for use at later times t1, t2, etc., when the object has evolved further. All other unknown states are combined together and are called the superposition of states. Hence there is an uncertainty inherent in it. Since the interface is not constant in fluids, there can be no fixed result of measurement of their dimension – hence it cannot remain invariant under mutual transformation. For this reason, volume is considered for fluids, which remains constant under mutual transformation. However, since time does not fulfil any of these criteria, it cannot have dimension or volume.

Einstein defined space as what we measure by measuring rods and time as what we measure by the ticks of a clock. Since he did not define "what we measure", we are confused about his meaning. We also have difficulty in understanding measurement in a moving frame using a scale in the fixed frame, but we are not seeking clarification about that now. We beg you to kindly teach us proper physics by clarifying the position, such as "what we measure" for space and time and "how time is considered a fourth dimension". Also, we are confused about the extra dimensions, which remain undetected even after a century, but which every scientist swears by. To us, it appears like the "flower of the sky" – something only heard of but never seen. Since all scientists, including Penrose, use it, we must be ignorant about it. Hence we beg you to kindly educate us.

The wave-function was popularized by Schrödinger. He noted that it may happen in radioactive decay that "the emerging particle is described ... as a spherical wave ... that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot ....".
He observed that one can easily arrange, for example by including a cat in the system, "quite ridiculous cases", with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Thus it is because of the "measurement problem" of macroscopic superposition that Schrödinger found it difficult to regard the wave function as "representing reality". But then what does represent reality? With evident disapproval, Schrödinger describes how the reigning doctrine rescues itself by having recourse to epistemology. We are told that no distinction is to be made between the state of a natural object and what we know about it, or perhaps better, what we can know about it. Actually – it is said – there is intrinsically only awareness, observation, measurement. But what is the proof for the validity of this statement? We request you to kindly educate us on this.

One of the assumptions of quantum mechanics is that any state of a physical system and its time evolution is represented by the wave-function, obtained by the solution of the time-dependent Schrödinger equation. Secondly, it is assumed that any physical state is represented by a vector in a Hilbert space spanned by one set of Hamiltonian eigenfunctions, and that all states are bound together with the help of the superposition principle. However, if applied to a physical system, these two assumptions exhibit a mutual contradiction. It is said that any superposition of two solutions of the Schrödinger equation is also a solution of the same equation. However, this statement can have physical meaning only if the two solutions correspond to the same initial conditions. By superposing solutions belonging to different initial conditions, we obtain solutions corresponding to fully different initial conditions, which implies that significantly different physical states have been combined in a manner that is not allowed. The linear differential equations that hold for general mathematical superposition principles have nothing to do with physical reality, as actual physical states and their evolution are uniquely defined by the corresponding initial conditions. These initial conditions characterize the individual solutions of the Schrödinger equation. They correspond to different properties of a physical system, some of which are conserved during the entire evolution. The physical superposition principle has been deduced from the linearity of the Schrödinger differential equation without any justification. This arbitrary assumption has been introduced into physics without any proof. Solutions belonging to diametrically different initial conditions have been arbitrarily superposed. Statements like "quantum mechanics including superposition rules have been experimentally verified" are absolutely wrong. All tests hitherto have concerned only consequences following from the Schrödinger equation. We request you to kindly educate us on this.

Similarly, the Schrödinger equation in so-called one dimension (it is a second-order equation, as it contains a term x², which is in two dimensions and mathematically implies an area) is converted to three dimensions by the addition of two similar factors for the y and z axes. Three dimensions mathematically imply volume. Addition of three (two-dimensional) areas does not generate (three-dimensional) volume, and x² + y² + z² ≠ x·y·z. We request you to kindly educate us on this.
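Since the mathematical linearity of the Schrödinger equation is what the preceding paragraphs dispute the physical meaning of, here is a minimal numerical illustration of what that linearity says - and nothing more; it takes no side on the interpretation. The setup is an assumed free particle with hbar = m = 1, evolved by the standard split-step Fourier method; the grid, packet widths, and momenta are arbitrary illustrative choices:

```python
# Numerical check of linearity: evolving a sum of two differently-prepared
# wave packets equals the sum of their individual evolutions (free particle,
# hbar = m = 1; all parameters below are illustrative assumptions).
import numpy as np

x = np.linspace(-50, 50, 2048)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])  # angular wavenumbers
dt, steps = 0.01, 500

def gaussian(x0, k0):
    """Gaussian packet centered at x0 with mean momentum k0."""
    return np.exp(-(x - x0) ** 2) * np.exp(1j * k0 * x)

def evolve(psi):
    """Exact free-particle evolution: kinetic phase applied in Fourier space."""
    phase = np.exp(-1j * (k ** 2 / 2) * dt)
    for _ in range(steps):
        psi = np.fft.ifft(phase * np.fft.fft(psi))
    return psi

psi1, psi2 = gaussian(-10, +2.0), gaussian(+10, -2.0)  # different initial conditions
lhs = evolve(psi1 + psi2)            # evolve the superposition
rhs = evolve(psi1) + evolve(psi2)    # superpose the evolutions
print("max deviation:", np.abs(lhs - rhs).max())  # ~1e-13, i.e. machine precision
```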
You have described feelings by giving examples of “Epicurean delight, ‘dopaminergic reward’, Freud’s ‘pleasure principle’, spiritual bliss, and altruism (it feels better to give than to receive)”. These are details, like the different dishes served in a banquet (Epicurean delight). But the question is: what is the basic principle? Why do we need the different dishes, or the banquet itself? While discussing “feelings”, these become important.

We need food for survival, which is a physical and biological necessity as a mechanical function. From our experience with different dishes, we have developed certain tastes for certain combinations of edible ingredients that not only fulfil our need for food, but are also harmonious to the maintenance of our body (good health). While there are various alternatives leading to the same goal, we select only a few, because we “feel” comfortable (the memory of past experience with it was harmonious to our taste – not a dopaminergic reward, which is a manipulated reaction). Some neurotransmitters modulate the activity of specific brain nuclei (such as the nucleus accumbens, putamen, and ventral tegmental area – VTA – among others) and synchronize the activity of these nuclei to establish the neurobiological mechanism that sets the hedonic element of learning. Experimental evidence highlights the activity of different brain nuclei modulating the mechanisms whereby dopamine biases memory towards events that are of motivational significance. Such biased memory cannot be used to formulate a theory.

Thus, “feeling” is related to the memory of objects encountered in the past. If that was harmonious with our composition (did not release free radicals), we feel comfortable. Otherwise we “feel” uncomfortable, or pain. This is in conformity with Freud’s principle that the mind seeks pleasure and avoids pain – the child learns that the environment does not always permit immediate gratification. His “maturity” is based on memory. Yet, in his book Beyond the Pleasure Principle, published in 1920, Freud considered the possibility of “the operation of tendencies beyond the pleasure principle, that is, of tendencies more primitive than it and independent of it”. Through an examination of the role of ‘repetition compulsion’ in potentially over-riding the pleasure principle, Freud ultimately developed his opposition between Eros, the life instinct, and Thanatos, the death drive. Thus, the basic feelings are reduced only to ‘pleasure’ and ‘pain’ (bhoga), with food, shelter, sex, etc. being subsidiary instrumentalities (upabhoga – secondary bhoga). Thus, it is not universal (avidyaa). If we can have knowledge (vidyaa) about the nature of the objects we use, we could choose only what is good for us, leaving out the rest. Then there cannot be any “feeling”, but only knowledge of Nature. It has nothing to do with survival – genetic or otherwise. This is why “scientific approaches to brain function can’t account for feelings or consciousness (‘qualia’, the ‘hard problem’)”.

Regarding dopaminergic reward, one must be careful. In a recent study, “On the Nature and Nurture of Intelligence and Specific Cognitive Abilities: The More Heritable, the More Culture Dependent”, published in Psychological Science (DOI: 10.1177/0956797613493292), researchers investigated how heritability coefficients vary across specific cognitive abilities, both theoretically and empirically. 
They assessed the “cultural load” of various cognitive abilities by taking the average percentage of test items that were adjusted when the test was adapted for use in 13 different countries. The findings suggest that:

1. In adult samples, culture-loaded subtests tend to demonstrate greater heritability coefficients than do culture-reduced subtests; and
2. In samples of both adults and children, a subtest’s proportion of variance shared with general intelligence is a function of its cultural load.

The above finding implies that the extent to which a test of cognitive ability correlates with Intelligence Quotient (IQ) is the extent to which it reflects societal demands and not cognitive demands. “IQ” here refers to the general intelligence factor: technically defined as the first factor derived from a factor analysis of a diverse battery of cognitive tests, representing a diverse sample of the general population, explaining the largest source of variance in the dataset. Further, in adults, higher heritability of a cognitive test reflects greater test-dependence on culture. The effects were medium-to-large and statistically significant. Highly culturally loaded tests such as Vocabulary, Spelling, and Information had relatively high heritability coefficients and were also highly related to IQ.

This counter-intuitive finding is inconsistent with the traditional investment theory and aggravated the nature-nurture debate of intelligence. The question “why did the most culturally-loaded tests have the highest heritability coefficients?” returns many puzzles. The society is a homogeneous learning environment – school systems are all the same; everyone in a class has the same educational experiences; yet cognitive ability varies. If the traditional investment theory is correct and crystallized intelligence (such as vocabulary and general knowledge) is more cognitively demanding than solving the most complex abstract reasoning tests, then tests such as vocabulary would have to depend more on IQ than on fluid intelligence. But why would tests such as vocabulary have a higher cognitive demand than tests that are less culturally loaded but more cognitively complex (such as tests of abstract reasoning)? Also, this theory doesn't provide an explanation for why the heritability of IQ increases linearly from childhood to young adulthood. One way out is to abandon some long-held assumptions in the West.

These findings are best understood in terms of genotype-environment covariance, in which cognitive abilities and knowledge dynamically feed off each other. Those with a proclivity to engage in cognitive complexity will tend to seek out intellectually demanding environments. As they develop higher levels of cognitive ability, they will also tend to achieve relatively higher levels of knowledge. More knowledge will make it more likely that they will eventually end up in more cognitively demanding environments, which will facilitate the development of an even wider range of knowledge and skills. Societal demands influence the development and interaction of multiple cognitive abilities and knowledge, thus causing positive correlations among them, and giving rise to the general intelligence factor. These findings do not mean that differences in intelligence are entirely determined by culture. The structure of cognitive abilities is strongly influenced by genes as well. 
What these findings do suggest is that there is a much greater role of culture, education, and experience in the development of intelligence than mainstream Western theories of intelligence have assumed. Behavioural genetics researchers – who parse out genetic and environmental sources of variation – have often operated on the assumption that genotype and environment are independent and do not co-vary. These findings suggest they very much do co-vary.

Attempts were made to link perception and intelligence – for instance, do intelligent people see more detail in a scene? Now scientists at the University of Rochester and at Vanderbilt University have demonstrated that high IQ may be best predicted by combining what we perceive and what we cannot. In two studies in the journal Current Biology, researchers discovered that performance on a simple visual motion-detection test was more correlated with IQ than any other sensory-intelligence link ever explored – but the high-IQ participants were not simply scoring better overall. Individuals with high IQ indeed detected movement accurately within the smallest frame – a finding that suggests that the ability to rapidly process information contributes to intelligence. More intriguing was the fact that subjects who had higher IQ struggled more than other subjects to detect motion in the largest frame. The authors suggest that the findings underscore how intelligence requires that we think fast but focus selectively, ignoring distractions.

Earlier, analysts of the US Army data claiming a black-white difference invented a “Spearman’s hypothesis” to show that “the magnitude of the black-white differences on tests of cognitive ability is directly proportional to the test’s correlation with IQ”. In “Psychology, Public Policy, and Law”, 2005, Vol. 11, DOI: 10.1037/1076-8971.11.2.235, the authors made the case that this proves that black-white differences must be genetic in origin. But the recent findings discussed above suggest just the opposite: the bigger the difference in cognitive ability between blacks and whites, the more the difference is determined by cultural influences. More study on the role of genotype-environment covariance in the development of cognitive ability needs to be done.

There is a saying in our country: “when speaking about a subject on which you are not an expert, you should be brief and not use it out of context”. Unfortunately, nowadays, people are drawing limited quotes from other fields and using high-sounding words to show off how much they know about everything. Your example of Penrose is one such instance. This is called the age of science. But currently physics is at a cross-roads. There are a large number of different approaches to the foundations of Quantum Mechanics (QM). Each approach is a modification of the theory that introduces some new aspect with new equations which need to be interpreted. Thus there are many interpretations of QM. Every theory has its own model of reality. There is no unanimity regarding what constitutes reality. Quantum Mechanics is not compatible with Relativity. General Relativity does not work beyond the solar system. The “Information Paradox” shows that either Quantum theory is wrong or Relativity is wrong. Both cannot be correct simultaneously. Most of the ‘established theories’ have been questioned, as the latest observations find mind-boggling anomalies between theoretical prediction and actual measurement. In the hierarchy problem of dark energy, theory and observation differ by mind-boggling factors ranging from 10^57 to 10^120. 
Yet this theory got the Nobel Prize. Similarly, fantasies like extra dimensions have not been proved even after more than a century. In short, there is a severe crisis in physics, though no one is publicly admitting it, as they fear that international funding will dry up. We will discuss the fallacies of Penrose separately.

There is a saying in Vedanta, "Yat pinde tat Brahmaande", which implies that the microcosm and the macrocosm replicate each other. Thus, it is no wonder that there will be many such instances. You admit that QM and GR do not commute. Many scientists including Penrose have tried to harmonize both. Penrose's pet theme about quantum gravity does not stand the test of proof because there is no proof that gravity can be quantized – the graviton, its predicted carrier particle, has not been discovered. Thus, it is fiction. For the last three years, the Information Paradox has proved that either QM is correct or GR is correct. Both cannot be simultaneously correct. We have shown that GR cannot be correct. You can ask Dr. Lee Smolin about our paper.

Regarding the gravitational wave, there are many questions. It has not been independently verified by other teams. Hence it could be a misleading inference or chance. We may point out that more than 3 years ago CERN/LHC announced the discovery of the Higgs boson – the so-called "goddamn particle" referred to as the God particle. We were among the first to question it, during September 2012. Even today, you can verify from the web page of CERN that the discovered particle IS NOT THE HIGGS BOSON BUT HIGGS-LIKE. Similarly, even though it was claimed that it provides mass to everything, that is totally wrong. It provides mass through the weak interaction, which is about 0.5% of the total mass. Of course, not being a physicist, you will not understand these. In case you do not agree with our views, we challenge you to PROVE us wrong, not merely brand us wrong.

Even Einstein did not claim that GR is about consciousness. The world over accepts GR as the theory of gravitation which replaced Newtonian theory. Thus, do you mean gravitation is consciousness? Ha. Ha. Good joke. What is gravitising consciousness? Coining new words like Orch OR to impress people? Sorry, we are not impressed. We are not bothered by the claims on GW. We have a different interpretation for it, which fake scientists like you will never understand. Instead of claiming that Orch OR will explain consciousness, please specifically reply to our queries, which challenge the basics of your fiction touted as a theory. Otherwise kindly change your position and please do not waste the time of others. Members of this group are too intelligent not to see through your gimmick.

For “Penrose’s suggestion that gravity affects collapse of the wave-function which gives rise to consciousness” to be acceptable, he or you must give proof and not issue diktats. Till date QM believed that observation by an intelligent agent collapses the wave function. Suddenly you issue your command like “God said let there be light and there was light”. Do you claim yourself to be God? Otherwise prove “how gravity affects collapse of the wave-function” and how it "gives rise to consciousness". For people like you and Penrose, the prestigious scientific magazine Nature had to publish a paper warning scientists to defend the integrity of physics. You can read the full paper at Nature 516, 321–323 (18 December 2014), doi:10.1038/516321a. 
To give examples of the falsification of Einstein's 1905 constant-speed-of-light postulate and of measurement in a moving frame, the following videos clearly show that the frequency measured by the moving observer shifts (Doppler effect) because the speed of the light pulses relative to him shifts: “Doppler effect – when an observer moves towards a stationary source. ...the velocity of the wave relative to the observer is faster than that when it is still”; “Doppler effect – when an observer moves away from a stationary source. ...the velocity of the wave relative to the observer is slower than that when it is still”. Einstein’s relativity can only be saved by assuming that the motion of the observer miraculously changes the wavelength of the incoming light (or the distance between subsequent pulses), as in a picture that accompanied the original argument (not reproduced here).

Ironically, the muon lifetime experiment, correct or not, tests an invalid prediction of Einstein’s relativity. It follows from Einstein’s 1905 two postulates that time dilation is symmetrical – either observer sees the other’s clock running slow. Yet Einstein found it profitable to inform the world that, although time dilation is symmetrical, it is still asymmetrical – the stationary clock runs faster than the travelling one. From ON THE ELECTRODYNAMICS OF MOVING BODIES, A. Einstein, 1905: (a) “The observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod, in just the same way as if all three were at rest”, or (b) “By means of stationary clocks set up in the stationary system and synchronizing with a clock in the moving frame, the observer ascertains at what points of the stationary system the two ends of the rod to be measured are located at a definite time. The distance between these two points, measured by the measuring-rod already employed, which in this case is at rest, is the length of the rod.”

The method described at (b) is misleading. We can do this only by setting up a measuring device to record the emissions from both ends of the rod at the designated time (which is the same as taking a photograph of the moving rod) and then measuring the distance between the two points on the recording device in units of the velocity of light or any other unit. But the picture will not give a correct reading, for two reasons:

• If the length of the rod is small or the velocity is small, then the length contraction will not be perceptible according to the formula given by Einstein.
• If the length of the rod is big or the velocity is comparable to that of light, then light from different points of the rod will take different times to reach the recording device, and the picture we get will be distorted due to different Doppler shifts.

Thus, there is only one way of measuring the length of the rod, as in (a). Einstein goes on to say: “From this there ensues the following peculiar consequence. If at the points A and B of K there are stationary clocks which, viewed in the stationary system, are synchronous; and if the clock at A is moved with the velocity v along the line AB to B, then on its arrival at B the two clocks no longer synchronize, but the clock moved from A to B lags behind the other which has remained at B by tv²/2c² (up to magnitudes of fourth and higher order), t being the time occupied in the journey from A to B.” This is tantamount to saying that, although elephants are unable to fly, they can still do so by just flapping their ears. 
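For readers who want the algebra behind the lag formula quoted above from Einstein's 1905 paper: tv²/2c² is simply the leading term of a Taylor expansion of the exact time-dilation factor. A minimal sketch in standard special-relativity notation follows (this is bookkeeping only, not a verdict on the dispute above; the symbols t and t' are the stationary-frame duration and the moving clock's elapsed time):

% Elapsed time on the moving clock for a journey of duration t
% (as measured in the stationary system):
t' = t\sqrt{1 - v^2/c^2}

% Lag of the moving clock, Taylor-expanded for v << c,
% using \sqrt{1-x} \approx 1 - x/2:
t - t' = t\left(1 - \sqrt{1 - v^2/c^2}\right)
       = \frac{t v^2}{2c^2} + O\!\left(v^4/c^4\right)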
Yet the breathtaking implications of Einstein's invalid conclusion (time travel into the future, etc.) enchanted the world. John Barrow FRS, professor of mathematical sciences at the University of Cambridge: "Einstein restored faith in the unintelligibility of science. Everyone knew that Einstein had done something important in 1905 (and again in 1915) but almost nobody could tell you exactly what it was. When Einstein was interviewed for a Dutch newspaper in 1921, he attributed his mass appeal to the mystery of his work for the ordinary person: “Does it make a silly impression on me, here and yonder, about my theories of which they cannot understand a word? I think it is funny and also interesting to observe. I am sure that it is the mystery of non-understanding that appeals to them and impresses them; it has the colour and the appeal of the mysterious”. Relativity was a fashionable notion. It promised to sweep away old absolutist notions and refurbish science with modern ideas. In art and literature too, revolutionary changes were doing away with old conventions and standards. All things were being made new. Einstein’s relativity suited the mood. Nobody got very excited about Einstein’s Brownian motion or his photoelectric effect, but relativity promised to turn the world inside out".
Written by Dr. Vinod Wadhawan, April 4, 2010.

The complete series, Complexity Explained by Dr. Vinod Wadhawan, can be accessed here. In this concluding part of the series on complexity I recapitulate the basic ideas about complexity, and then revisit the questions about the origin of the universe we live in, the origin of life, and the origin of consciousness. The bottom line is that the word ‘origin’ should be replaced by ‘evolution.’ And what evolves with time is complexity, resulting in the emergence of new properties or phenomena which could not have been anticipated.

17.1 Recapitulation of the Main Ideas in Complexity Science

With reductionism comes the conviction that a court proceeding to try a man for murder is “really” nothing but the movement of atoms, electrons, and other particles in space, quantum and classical events, and ultimately to be explained by, say, string theory. – Stuart Kauffman (2006)

1. Classical microscopic laws of physics are characterized by determinism and time-reversal symmetry. Determinism means that if the position and the momentum of a particle are known at any instant of time, then the laws of classical mechanics determine the position and momentum at all instants of time, both future and past. The success of space missions is an example of the applicability of the deterministic equations of motion to simple (or simplifiable) systems (in contrast to complex systems). Simple systems have the linearity feature: the inevitable imprecision in our knowledge of the physical parameters of such a system does not lead to disastrous or runaway consequences in our predictions about the mechanics of the system.

2. By contrast, chaotic systems, though deterministic, are governed by nonlinear equations of motion, and consequently we cannot predict their behaviour far into the future. Chaos is an example of the fact that determinism does not necessarily imply predictability.

3. The familiar second law of thermodynamics is a striking example of emergence in complex systems. The laws of mechanics (classical or quantum) applicable to any microscopic particle comprising a macroscopic system are time-symmetric; but the macroscopic system has the emergent property of time-asymmetry, embodied in the fact that the entropy of the system cannot decrease with the passage of time.

4. In the macroscopic world, we associate the direction of increasing entropy with the direction of increasing time. Entropy is a measure of disorder, and negative entropy or negentropy is a measure of information.

5. The emergence feature of complex systems makes the reductionistic approach to understanding complex natural phenomena quite inapplicable. But that does not mean that we should swing to the other extreme and adopt only a holistic approach. It is important to understand the distinction between chaotic, random, and complex systems. In a chaotic system there is determinism without predictability. Order and disorder coexist in a complex system. And randomness means a complete lack of structure or order (‘algorithmic irreducibility’). I shall be addressing these issues in a forthcoming book.

6. Complex systems have a hierarchical structure of complexity. The structure at one level leads to the next level of complexity, and each level of complexity often results in the emergence of new laws.

7. The new laws do not violate any of the laws operating at the lower levels of complexity. 
There is no question of ‘downward causality’ because, deep down under, everything interacts with everything else and we only have interactions, rather than actions and reactions (or causes and effects).

8. Physical laws, though always valid, are not always convenient or relevant for explaining, say, the chemical behaviour of a system. Similarly, biology is not always conveniently understood in terms of the laws of chemistry or physics alone. Nevertheless, if we consider only neighbouring or contiguous levels of hierarchical complexity, a reductionistic or constructionistic approach can often be useful.

9. Flow of energy through an open thermodynamic system can take the system so far away from equilibrium that there is a bifurcation in phase space, resulting in self-organization. Such bifurcations can occur repeatedly in a complex system, and there is no way to predict which branch of a bifurcation will be chosen, because the choice depends on random fluctuations at the moment of the bifurcation. This fact lies at the heart of the (unpredictable) emergence of novel features during the time-evolution of a complex system.

10. Simple local rules can lead to the emergence of complex overall patterns, behaviour, or properties. This is how swarm intelligence emerges.

11. The flow of energy through a complex system results in a build-up of the information content of the system. A state of complete order, as also a state of complete randomness, has low information content and a low degree of complexity. The more interesting complex systems usually fall in between these two extremes.

12. Complexity thrives best at the ‘edge’ between order and disorder. Complex adaptive systems tend to self-organize so as to inch towards this so-called ‘edge of chaos.’

13. Per Bak’s notion of self-organized criticality provided important insights into how and why complex systems move to a state at or near the edge of chaos.

14. Positive feedback is an important mechanism of how self-organization can occur. However, it is not the only possible mechanism for this. Often, chain reactions achieve something similar. And negative feedback provides the necessary antidote for maintaining a state of optimal balance and perpetual novelty.

17.2 How did the Universe Emerge out of ‘Nothing’?

Diogenes Laertius IX

This is the toughest of the three questions I revisit in this article. I wrote about cosmic evolution in Part 7 of this series, but want to make up here for some important omissions. What happened immediately before the Big Bang? The answer to this question is important for understanding some observations in astronomy. How can energy be created out of nothing, and how is it continuing to increase as the universe expands? I quoted Seth Lloyd (2006) in Part 7: ‘Quantum mechanics describes energy in terms of quantum fields, a kind of underlying fabric of the universe, whose weave makes up the elementary particles – photons, electrons, quarks. The energy we see around us, then – in the form of Earth, stars, light, heat – was drawn out of the underlying quantum fields by the expansion of our universe. Gravity is an attractive force that pulls things together. . . As the universe expands (which it continues to do), gravity sucks energy out of the quantum fields. The energy in the quantum fields is almost always positive, and this positive energy is exactly balanced by the negative energy of gravitational attraction. 
As the expansion proceeds, more and more positive energy becomes available, in the form of matter and light – compensated for by the negative energy in the attractive force of the gravitational field.’

Apart from quantum-mechanical effects and the gravitational interaction, the other dominant factors in the early stages were the immensely high temperatures and pressures. In the beginning it was all radiation, and no matter. And the energy content and the information content were very small. The energy content and the information content built up as the universe expanded and extracted more and more energy out of the underlying quantum fabric of space and time. According to the current theories, the energy grew very rapidly in the beginning (by a process called inflation), and the amount of information grew less rapidly. Immediately after the Big Bang there was a hot plasma of elementary particles, which expanded and cooled very quickly. In fact, the first structures got formed within a fraction of a second after the explosion. Protons and neutrons were formed from quarks. One minute after the Big Bang, helium nuclei were formed. Soon, a full 24% of all matter was in the form of helium nuclei. Radiation interacts primarily with ions (rather than atoms). A few tens of thousands of years after the Big Bang, the first electrically neutral matter was formed, when protons and electrons combined to form atoms of hydrogen. This marked the separation of electrically neutral matter from radiation. On further cooling, gravitational effects became more and more important, as electrically neutral atoms could now clump together because of gravitational attraction. This clumping went on to produce galaxies ultimately.

There are gaps in our understanding of how structure arose out of what was a structureless field of radiation in the beginning. In particular, we do not yet know whether there are forms of matter other than what we already know. Even as early as the 1930s, it was known that gravitational effects in large galactic clusters are much higher than what can be expected from the known amount of matter there. Apparently, there is another, unknown, form of matter that makes up a full 90% of all matter, as indicated indirectly by the gravitational effects. It is called dark matter because we are unable to observe it; we infer its existence only through its gravitational effects. Perhaps neutrinos have something to do with this dark matter. Or perhaps some still undiscovered elementary particles, including some very heavy (but unobserved) ones, may be involved. These particles might have got formed in the very hot conditions soon after the Big Bang.

The reasons for the occurrence of the Big Bang are still a puzzle. Another puzzle in modern cosmology is the fact that matter and the cosmic background radiation are distributed quite homogeneously throughout the observable universe. Consider a galaxy that is today 5000 million light years away from our galaxy, namely the Milky Way. When the universe was, say, just one million years old, it (the universe) was only a thousandth of its present size. Therefore at that time the two galaxies must have been only 5 million light years apart. But since the age of the universe at that time was only one million years, not enough time was available for the two galaxies to have exchanged signals of any kind (assuming that nothing travels faster than the speed of light). There could not have been any kind of communication between the contents of one galaxy and the other. 
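The arithmetic behind this ‘horizon problem’ is worth writing out explicitly. Here is a back-of-the-envelope sketch using only the numbers quoted above (the notation d_then, d_light is mine, and the sketch deliberately ignores the finer points of distance measures in an expanding space):

% Separation when the universe was 1/1000 of its present size:
d_{\text{then}} = \frac{d_{\text{now}}}{1000}
                = \frac{5000\ \text{million ly}}{1000}
                = 5\ \text{million ly}

% Maximum distance a light signal could have covered by that epoch,
% the universe being only one million years old:
d_{\text{light}} \le c \times 10^{6}\ \text{yr} = 1\ \text{million ly}

% Since d_light < d_then, the two regions could not yet have exchanged
% signals, and yet they show the same temperature today.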
So how did the homogenization of the shock waves associated with the Big Bang occur? There is general agreement that the emergence of matter from the early radiation field was a kind of symmetry-breaking phase transition. This can be likened to the phase transition from liquid water (which is homogeneous, or translation-invariant) to ice (which is not translation-invariant). The radiation field was translation-invariant, and the appearance of matter broke this translational symmetry. A hypothetical field called the Higgs field has been introduced in cosmology to understand these phenomena. This field breaks the symmetries of the interactions among the elementary particles, and gives the particles their mass. The Higgs-field theory predicts the existence of a cosmological constant. Such a constant was indeed introduced much earlier by Einstein, and then withdrawn because it amounted to introducing into his theory of gravitation a parameter ‘by hand,’ with no theoretical justification. Einstein’s cosmological constant was intended to provide the repulsive force needed to compensate for the attractive force of long-distance gravity. In other words, if gravity could be switched off, Einstein’s cosmological constant would result in a rapid inflation of the universe. But once it was known that the universe is expanding, it became unnecessary to try to counterbalance the attractive gravitational force. The Higgs field results in the existence of a new cosmological constant, which turns ‘empty’ space into a space that has an energy content. The problem at present is that the predicted cosmological constant has too large a value for a correct understanding of the observed cosmic evolution. It is believed that perhaps the Higgs cosmological constant had a large value right after the Big Bang, resulting in a violent and very rapid expansion (or inflation) of the universe. At a certain stage of this inflation, a cosmic phase transition occurred, which freed enormous amounts of energy (rather like the release of latent heat when steam condenses to liquid water). In a way, this energy flash or Big Bang marked the actual birth of our cosmos. After this prelude of inflation and cosmic phase transition, the normal (much slower) expansion of the universe set in, and has continued ever since. During the inflation prelude, the universe grew extremely rapidly from a volume smaller than that of the nucleus of an atom to the size of a tennis ball. If we associate the Big Bang with the moment at the end of the (very quick) inflation episode, certain cosmological mysteries get resolved. When the universe was just the size of a tennis ball, regions that are far apart today could have been in contact then, thus resulting in the observed homogenization of the universe. This new model of the Big Bang (i.e. a phase transition after the inflation prelude) answers a few additional perplexing questions as well. The model implies that the observable cosmos is a part of a much bigger system. Our Big Bang occurred in a certain region of the cosmos, leaving other regions untouched. More Big Bangs can keep occurring in other regions of the cosmos, opening up the possibility of parallel universes. There is thus a multiverse, rather than a universe. In a multiverse, Big Bangs occur repeatedly, and each resulting universe has values of fundamental constants that just happen to be what they are. The universe we live in happens to have values of fundamental constants that make our emergence and existence possible. 
Otherwise we would not have emerged and evolved. This brings us to the much-maligned anthropic principle. The principle states that the parameters and the laws of physics in our universe can be taken as fixed; it is simply that we humans have appeared in the universe to ask such questions at a time when the conditions were just right for our life. I have not included a discussion of this principle in the present series because it is covered in another article (on biocentrism) on this website, which I coauthored with Ajita Kamal.

Although there is no law saying that the degree of complexity of the universe must always increase, an empirical observation is that it is increasing, and increasing at an exponential rate. There can be some local decreases in complexity (there is even an anthropocentric angle to this issue), but the overall complexity of our universe is increasing. This has been explained in terms of the fact that our universe is expanding, and thus getting a continuous supply of free energy or negentropy (cf. Part 7). But how long will the universe continue to expand? Did time begin? Will time end? Here are three likely answers given by the noted cosmologist Paul Frampton in a recent (2010) book:

Most likely: The present expansion will end after a finite amount of time; the universe will contract, bounce and repeat the cycle. In this cyclic universe, time had no beginning, and will have no end.

Next most likely: The present expansion will end after a finite time in a Big Rip. Time began in the Big Bang some 13.7 billion years ago, and will end some trillion years in the future.

Least likely: The present expansion will continue for an infinite time. Time began 13.7 billion years ago, and will never end.

In his book Prof. Frampton challenges this prevailing ‘conventional wisdom.’

17.3 How did Life Emerge out of No-Life?

Stuart Kauffman (2006)

As discussed in Part 10, it is not easy to define life. One consequence of this situation is that life must have emerged very, very gradually. Thus it is meaningless to try to identify a point of time which marked the ‘origin’ of life on Earth. As discussed in Parts 8, 9, and 12, a whole lot of chemical evolution of complexity preceded the emergence of what we intuitively understand as life. I discussed only two models of the likely origins of life in Part 12. For a more comprehensive description, please see the 2006 online article by Stuart Kauffman. I described Kauffman’s work on autocatalytic sets of molecules in Part 9, and his RBNs (random Boolean networks) in Part 12. He has been emphasizing the importance of the self-organization feature of complex systems in the evolution of biological complexity. He uses the phrase ‘order for free’ for this non-Darwinian evolution of complexity:

While it may sound as if ‘order for free’ is a serious challenge to Darwinian evolution, it’s not so much that I want to challenge Darwinism and say that Darwin was wrong. I don’t think he was wrong at all. I have no doubt that natural selection is an overriding, brilliant idea and a major force in evolution, but there are parts of it that Darwin couldn’t have gotten right. One is that if there is order for free – if you have complex systems with powerfully ordered properties – you have to ask a question that evolutionary theories have never asked: Granting that selection is operating all the time, how do we build a theory that combines self-organization of complex systems – that is, this order for free – and natural selection? 
There’s no body of theory in science that does this. There’s nothing in physics that does this, because there’s no natural selection in physics – there’s self-organization. Biology hasn’t done it, because although we have a theory of selection, we’ve never married it to ideas of self-organization. One thing we have to do is broaden evolutionary theory to describe what happens when selection acts on systems that already have robust self-organizing properties. This body of theory simply does not exist. (Chapter 20, “Order for Free”, The Third Culture, 1995.)

Kauffman’s work brings out the inevitability of the emergence of life. The prevailing conditions were such that life just had to appear because of the relentless evolution of complexity. A knowledgeable alien would be very surprised if life had not emerged here. Thus, the ‘origin’ of life is the easiest of the three questions I am revisiting in this article. There is nothing miraculous or supernatural about the origin of life.

17.4 How does Consciousness Arise?

Meanwhile, my approximate theory is that mind is acausal, quantum mechanics is acausal on the familiar Born interpretation of the Schrödinger equation (to the grief of Einstein), that consciousness is due to a special state where a system is persistently poised between quantum and classical behaviour, that the emergence of classical behaviour in the mind-brain system, perhaps by decoherence, is the “mind making something actual” happen in the physical world, and – big jump – that consciousness itself consists in this quantum coherent state as lived by the organism. This is a long jump, but not impossible. I don’t even think it is stupider than other theories of consciousness, and may be true. Whatever the case, consciousness is ontologically emergent in this universe. – Stuart Kauffman (2006)

The problem with the word ‘consciousness’ is that it is what Marvin Minsky calls a ‘suitcase word.’ It stands for a whole set of processes. Naturally, it is difficult to discuss it in a scientific manner. From the complexity perspective, consciousness arises from swarm intelligence, the swarm here being that of neurons. In a large swarm, local rules can lead to astonishingly complex behaviour and novel phenomena and sensations. The self-referential nature of consciousness is what makes it look so puzzling. But the fact is that, long ago (in 1931), Kurt Gödel shook the foundations of mathematics by proving that even such an innocuous thing as the formal system of positive integers can have self-referential properties. Self-reference and formal rules can make systems acquire meaning, despite the fact that each constituent of the system is without meaning. Nevertheless, there are difficulties galore: the debate on consciousness is not likely to end anytime soon.

17.5 Acknowledgements

The idea of writing this series of articles was suggested by Mr. Ajita Kamal, Editor of Nirmukta. Ajita has been of great help throughout, and made several useful suggestions. My Ph.D. student Indranil Bhaumik was immensely helpful by sending me several important books in pdf format. Ms. Malgorzata Koraszewska took the trouble of translating these articles into Polish and publishing them; she has done a thorough job indeed, consulting experts when in doubt about the exact Polish equivalent of a technical word in English. The Polish versions of these articles were discussed in a much more lively way than the originals in English. 
Unfortunately I could not take part there because of the language barrier, but was happy to answer some questions forwarded to me by Malgorzata. I not only enjoyed writing these articles, it was also a great learning experience for me because of the comments and questions posted on this and some other websites which picked up some of these articles. I also received a lot of feedback from scientist friends through private emails. I shall feel amply rewarded for the time and effort I have put into the writing of these articles if I have succeeded in inducing even a few of the readers to shun all kinds of irrational belief systems. Science is rational. Science is fun. Science has both a humbling and a liberating influence on those who have imbibed the spirit of the scientific method. The skepticism inherent in the scientific method, and its emphasis on making only falsifiable statements, are essential tools for acquiring knowledge we can trust with a high degree of confidence. Nature is highly creative, and this creativity comes from the relentless evolution of complexity. A flower is a piece of art, and complexity science tells us how this ‘natural art’ can arise (emerge) without the need for the existence of the artist or the creator.

Dr. Vinod Kumar Wadhawan is a Raja Ramanna Fellow at the Bhabha Atomic Research Centre, Mumbai and an Associate Editor of the journal PHASE TRANSITIONS. All parts of Dr. Wadhawan’s series on Complexity Explained can be found here. This post was written by Dr. Vinod Kumar Wadhawan, who has written 36 posts on Nirmukta.

Very interesting. In my opinion, the quintessential question is not why life emerged in this universe (I have a philosophy similar to Kauffman’s), but why the conditions in this universe are such that complexity keeps on ever increasing. Is this due to a chance ordering of the initial conditions of the universe? Or is it tied at a deeper level to the very physical laws at the microscopic level (which, at the moment, we assume to be time-symmetric)? I would like to keep my options open on these questions. Apart from the 2nd law of thermodynamics, which talks about increasing entropy in large systems, there is also the decay of beta particles due to the weak force, which is known to be asymmetric in time. Are these types of asymmetry related at a fundamental level? In that case, we don’t have to suppose these “magical” initial conditions of the universe. There could have been nothing left to chance; the increase of complexity could be something extremely essential to any logically possible set of physical laws in a universe.

I think you should mention the connection of this evolutionist perspective of the universe with the Indian philosophy of Samkhya (enumeration). The fundamental difference where you might disagree with Samkhya is on the notion of Purusha, which is held to be distinct from nature (Prakriti). But the question of Purusha is not related to the notion of consciousness (self-image) but to the notion of qualia (or the so-called hard problem of consciousness). The philosophy of Samkhya puts ego (self-image) within the domain of nature and argues forcefully how it arises automatically from the evolution of nature. So, if you can take the perspective of ignoring qualia altogether (be agnostic about it: whether they exist or not, it doesn’t matter to nature’s evolution), you can use the framework of Samkhya to explain this theory of evolution of complexity. 
• Vinod Wadhawan: My answer is there in Part 7 of this series, and I reproduce it here: 7.7 Why is There so Much Complexity in the Universe?

• Dear Dr. Wadhawan, I have written a blog post recently on the Samkhya system of philosophy and how it is related to the system of counting with zeros. Samkhya literally means enumeration, and it is inevitable that it is tied to the Indian method of enumeration with zeros, so I am very surprised that nobody has analysed it from this perspective so far. The general reaction of Western academics is to couple it with the philosophy of Descartes (and with the Cartesian duality of body and mind), which gives a completely wrong idea about this system. I would be immensely pleased if you read my blog and comment on how and where you disagree with this philosophical system.

• Vinod Wadhawan: Dear Ray Lightning: I read your blog about the philosophy of Kapil Muni. Great job! Gregory Chaitin is a co-founder of the field of algorithmic information theory. One of his recent books deals at length with the problem of enumeration and real numbers. If you will give me your email ID, I can send you the soft copy of that book. Considering how young you are, I am very impressed by your deep understanding of philosophy.

• Thanks very much for your kind words, Vinod Sir :) I didn’t know about the connection between the expansion of the universe and the increase of entropy. I will read your other articles in this series and try to understand it better. Also, I will be very happy to receive the soft copy of the book by Gregory Chaitin. I am currently writing up my PhD thesis, so actually I don’t have time to read much. But I hopefully will find time later in the holidays after my thesis defence. My email is vakibs AT gmail

• Dear Dr. Wadhawan, I was waiting for this epilogue since you told me about it in our personal meeting 10 days back. You have done wonderful work; complexity has been a topic naturally shunned by working scientists. They would rather be interested in simplifying situations they handle in their immediate assignments. The complexity analysis, on the other hand, needs a larger perspective and the desire to rise above immediate concerns, even if the immediate concerns are perfectly valid scientific goals. I will be reading the full series again and again. I am tempted here to quote the following report, not as a distraction, but to underline the fact that no scientific work can ever be termed finished in this complex universe. It is also understood that sitting in a corner and meditating will not resolve the complex scientific questions either. Science, and in particular physics, in which we both have experience, grew out of efforts to ‘quantify’ carefully collected data and observations. Even the abstract theories have ultimately been tested on numerically verified evidence of predictions from those theories. And only then were the theories adopted into the vast body of modern scientific knowledge. In contrast, the ancient Indian knowledge certainly does not involve a similar methodology. And although many mathematically profound statements can be traced scattered within the great epics and various texts etc., no such mathematical modelling and eventual predictive methodology appears therein. The reason most analysts are greatly enamoured of, or vastly disappointed with, ancient Indian writings (depending on their ideological leanings) is that these ancient texts recurrently refer to ideas of cosmological proportions. The scientifically minded rationalists are aghast at this audacity of the Rishis. 
While analysts like me simply want to acknowledge that there is no way we can dictate to past thinkers: on what subject they should or should not have commented is totally beyond our control. And there is also the big question: were these thinkers just an audacious lot – nothing more – and should they be shunned for their audacity alone? Or perhaps they did have some superhuman inkling of the nature of the universe? Were these thoughts too big for their stone-age brains to be able to really handle? There is a definite singularity here, in the way we understand human evolution on planet Earth, and the way the Rishis appear more advanced for their geological age. To me the Sanskrit language itself presents a great singularity. From a primitive, almost non-existent PIE to a most evolved million-word vocabulary – infinitely poetic, rhythmic, and clear (not at all obscure like many of the ancient and archaic languages, including Old English) – the construct is the greatest of mysteries. In my opinion, that is. (I will however not like to contest the opinion of those who know more than me on this subject.)

Therefore, is it possible that proof of the existence of galaxies in the observable universe billions of years old even at the time (13.75 billion years ago) when the big bang occurred could warrant new thinking on our current understanding of the universe? For me, conceptually it is not difficult to understand that one sequence of the universe could have started with a big bang, even when there were galaxies existing from even older big bangs. The idea of the origin of space-time itself being coupled to the origin of the big bang goes kaput with such a thought, but why should that be very sacrosanct either? So the new picture could be … a series of big bangs intertwined in space-time …

Since I am not writing a scientific piece here, for (scientific) sanity's sake I must stop at this point. Suffice it to say that a recurrent idea in Indian thought is that of a universe with no beginning and no end. No other religious or non-religious ancient body of knowledge even attempts a cosmology. For us scientifically minded to be angry at the … ‘audacity’ … is not a very rational response, I think. The possibility exists that new scientific findings and new research efforts could keep validating some ideas from ancient thinkers also. That has been my position. That leaves me the flexibility to remain flexible without being unscientific. When human intelligence undertakes honest pursuits, it can keep surprising itself immensely.

• Vinod Wadhawan: Dear Ravi: Thank you very much for your comments and feedback. As you are aware, I have been doing a lot of writing (research papers, books, articles). And often I express open admiration for the originality and the brilliance of certain scientists from all over the world. As a proud Indian I shall be only too happy to be able to cite the work of great Indians of the past. I am not able to do that often. I have been thinking about why this is so. To get a proper answer, we should first make a clear distinction between science, mathematics, and philosophy. My impression is that, while ancient Indian philosophy and mathematics were great, our ancestors somehow missed the bus regarding science. The essence of the scientific method is foreign to our soil. Of course, I shall be only too happy to be proved wrong. Please correct me if you can.

• Vinod Wadhawan: I had a look at the website link you suggested. Let us wait and see what the LHC experiments will tell us. 
• Ravi, I think there can be times when philosophers and mathematicians stumble onto empirical truths about the universe, owing to their quest for finding beauty and symmetry in everything. Take the example of Pythagoras, who thought that the earth was a sphere. Why? Because, for him, a sphere is the most perfect shape, absolutely symmetric in all directions. Pythagoras’ idea was not popular at that moment in Greece, but he was ultimately found to be right. And the reason why the earth is a sphere is also related to his love for symmetry: the gravitational force is symmetric in all directions. Of course, we now know that the earth is not a “perfect” sphere and is more like an ellipsoid. In a similar way, I think we can rationalize some of the insights of ancient Indians. Not because of some supernatural gifts from the sky, but because of clear and lucid thinking, a few Indians might have stumbled onto truths. However, there is a pitfall to this love for beauty and symmetry. Often, nature will catch us off-guard with properties that are absolutely absurd, with no meaning whatsoever underneath. Or maybe that “meaning” can be comprehended only by minds more lucid than ours. Whether that state of total lucidity, to see the whole universe and make sense out of it all, is within the reach of human beings, I don’t know. But I’d like to dream that it is so. :)

• Dear Dr. Wadhawan, The points are well taken. The advent of modern science has been truly overwhelming. It has also helped drive away the ‘ghosts’, both imaginary and literary types, from the minds of the human population. An average human being is now much more liberated, well informed and able to democratically enjoy the fruits of science – be it flying in aeroplanes or getting the best medical treatment in hospitals, and so on. We all have a stake in propagating scientific advancement as much as real scientific temperament, including myself. That science will ultimately find answers to most problems of existence etc. is also a reasonable expectation. No problems with this. The viewpoint I have developed additionally is this: ‘science’ is not the same as the ‘description of science’. Take for example the paper on human genome research in Nature. Due to the restriction on the number of words for a given type of communication, a large effort is spent on the drafting of scientific papers, and this is common knowledge. In these current times, and in the prevailing scientific environment, certain phrases, abbreviations, implications and stylings of conclusions etc. will normally be used, and this will clearly reflect in the presentation of the paper. So will be the case for most other scientific research and technological achievements. Let me introduce a hypothetical scenario here: suppose at this juncture we are struck by a natural calamity of global proportions, or even by a limited or unlimited nuclear war. Much of the infrastructure is rendered useless or even turned to dust, but written words in digital and/or paper form survive the destruction – let us say because these were kept in safe vaults. A thousand years then pass, in great disarray, and it then again happens that scientific brains of unusual calibre gather to start rebuilding the knowledge base. Fresh, from scratch. Obviously, they do not necessarily speak and write in English. At least they do not use the same sophistication, and use only a remnant language rebuilt totally differently. The conventions in existence today about what makes for a good scientific communication are no longer known to them. 
In short, what we moderns do today can be perceived as very archaic after thousands of years, particularly so if destruction of global proportions were to annihilate much of the working proofs of our modern achievements. Will the printed word alone (without proof of any of the working devices, HPLC machines and techniques of DNA research, no high-purity chemicals and no electrophoresis plates etc.) still have the same sanctity for the greatly handicapped investigators of the future? The same way as it happens today – even for the lay analysts who are more than willing to comment on anything that the human genome may or may not be implying? Whole DNA patterning from African mother to Aryan race to Polynesian braves to apes and missing links – anything goes in the name of science when the going is good. The ‘proofs’ for older knowledge, when much of the information is destroyed, and when we have perforce to work with only random scattered remnants of information, will naturally be missing. What I do, in such a scenario, is to look for coherence of thought in old texts, without ascribing or imposing my preconceived ideas of the presumed intelligence, honesty or capability of the unknown, unnamed authors therein. The reward is not only in terms of some fresh ways of thinking, but is also of a literary kind. Such beautiful poetry, such evolved thoughts on the possibilities of the human mind, and such a penchant for vivid imagery, is in itself the reward. I do hope earnestly that we do not get to suffer any calamities of global scale which force us into oblivion before we unravel the mysteries of the universe using all the current-level scientific knowledge and methods, and that we do not have to suffer the same fate as our ancestral thinkers. (I am not suggesting that a meteor struck the post-Vedic earth here; but as Lord Krishna says in the Gita, HE ‘had’ given his knowledge to the sun, who was also known as Vaivaswat Manu, but in time the knowledge was lost. And HE was then forced to come to the ‘mrityu loka’ again to revive the knowledge and tell this to Arjun….) Ravi Khardekar

• Vinod Wadhawan: Philosophers can certainly stumble upon great truths sometimes. And it is the job of the scientist to verify the verifiable truths. Till the verification occurs, it is only a matter of opinion whether something is true or not. Of course, one can also use logic sometimes to reject certain statements as not true. I said ‘verifiable truths’. Can there be unverifiable truths also? Yes indeed. Gödel and Turing (and more recently Chaitin) showed that a number of statements in a formal system are true for no reason, i.e. it is not possible to prove their truth by using the existing axioms of the formal system. In other words, more axioms have to be added to the formal system. This is a very tantalizing situation indeed. It blurs the distinction between what is done in mathematics and what is done in physics. More later.

• Vinod Wadhawan: Somebody has asked me to give an example of something that is true without any reason. I thought I should do it on this website so that others can also participate in the fun. The most famous example is the Gödel statement: it is either false or true. If it is false, then it is provable, but that is absurd because then it contradicts itself. If it is true, then it is unprovable, meaning that it is true without any reason. It turns out that in any formal system there can be any number of statements or theorems which are true without any reason. 
There are axioms and there are formal rules of logic in a formal system, and a statement or a theorem is said to be true for a reason if it can be derived from the axioms by applying the logic rules. But it turns out that there are many theorems which are true for no reason. Gödel's statement was the culmination of a long succession of historical developments. I shall mention just one example of such a paradoxical situation (from the work of Bertrand Russell). There is a small town with just one barber. He shaves all those, and only those, who do not shave themselves. The troublesome question is: Does he shave himself? He shaves himself if and only if he does not shave himself! This is known as the Russell paradox. • Hello Ray, I should not prolong the discussions here. But your blog site does not allow comments. We had a lucky 3,000-odd years, in which the ideas of the Greek and Roman philosophers not only survived but also formed the basis for modern science. Out of curiosity, I looked in Wikipedia before writing this comment. Pythagoras is well known to the scientific community for his famous theorem, which later mathematicians used to develop the discipline further. Wikipedia also mentions the 'Pythagorean religion' founded by him, and also the fact that he was a mystic. His theorem survived the test of time, but his other work did not. As Dr. Wadhawan rightly pointed out, the scientific method has a built-in scrutiny, according to which the timeless and useful will survive. The esoteric or unfalsifiable may also survive but will have limited use. Kepler's laws did wonders for our understanding of planetary science and the force of gravity. But he himself in his later days was lost in some pursuit that he could not communicate coherently about. Life has its charms, both ways. We should pursue and profess science as the more verifiable common heritage of modern man. If some of us can draw inspiration from earlier esoteric works to create some new work, even this should be acceptable. I will be reading Kapila Muni more seriously, thanks to you. • Vinod Wadhawan Mythology and folklore (even philosophy) make a large number of unfalsifiable statements. What should one do with such statements? As a scientist I can only ignore them, and treat them as not a part of science. It is entirely possible (even permissible or desirable) that some enthusiast takes them very seriously and then recasts them into falsifiable form. They can then become a part of science if verified to be true. I can describe something as sublime philosophy, but then what? Somebody should be able to convert it into science; otherwise it is just a curiosity that somebody could think up such interesting, even profound, things. I am very impressed by what Kapila Muni propounded, but it cannot gain widespread acceptance and importance unless it leads to progress in science. What else can anybody do except follow the scientific method of acquiring knowledge and understanding that one can trust with a high degree of confidence? • Dr. Vinod Wadhawan, your series of articles on complexity is a masterpiece. Very concise, beautifully simple. You should write a book.
Hawking and unitarity The previous blog article about the very same topic was here. However, Hawking's semiclassical calculation leads to an exactly (piecewise) thermal final state. Such a mixed state in the far future violates unitarity - pure states cannot evolve into mixed states unitarily - and it destroys the initial information about the collapsed objects which is why we call it "information loss puzzle". A tension with quantum mechanics emerges. There have been roughly three major groups of answers that people proposed. 1. One of them is essentially dead today; it is the remnant theory. It argued that the black hole does not evaporate completely. Instead, a small light remnant with a large entropy remains after the evaporation process - and this remnant is what preserves the information. This approach is highly disfavored today because such small seeds simply should not be able to carry large entropy (because it violates holography). Moreover, this approach does not save unitarity anyway because the scenario still assumes the thermal radiation to be in a mixed state. 2. The other two general answers are obvious. One of them says that the information is lost, indeed. The qualitative features of Hawking's semiclassical calculations - the evolution into mixed states - survive in the exact analysis, too. Such an approach is popular among the General Relativity fundamentalists who believe that the fabric of spacetime is exactly what we think it is classically; causality in particular must be exact and no information can ever get out from a black hole. I formulated the argument in a way that makes it clear that it looks dumb to me - especially today when we know that topology of space may change and that black holes exist in unitary backgrounds of string theory. The Hawking process itself is an example of a violation of the strict rules of locality and causality by black hole physics! 3. The last answer, the only one that has always respected the principles of the 20th century physics, says that the information is preserved in the same way as in any other process in the world - burning books is an example. (Only later, I noticed that Hawking has independently chosen the very same example.) When we burn books, it looks as though we are destroying information, but of course the information about the letters remains encoded in the correlations between the particles of smoke that remains; it's just hard to read a book from its smoke. The smoke otherwise looks universal much like the thermal radiation of a black hole. But we know that if we look at the situation in detail, using the full many-body Schrödinger equation, the state of the electrons evolves unitarily. The same thing must hold for black holes. And the feeling that such a transfer of information is impossible because of the horizon is just an illusion; it is an artifact of the semiclassical approximation that paints the rules of locality and causality as more strict than they are in the full theory. Locality and causality are, in general, approximate emergent concepts that appear in the (semi)classical limit. The power of the full theory of quantum gravity to violate locality and causality in a subtle way is manifested whenever horizons develop, and it is responsible for the conservation of the information. Note that the conservation of the information is the only answer that can be acceptable for a physicist who treats the postulates of quantum mechanics seriously. 
No doubt, the postulates of quantum mechanics seem rigid and un-modifiable, while the exact degrees of freedom and terms in the Lagrangian that describe general relativity are flexible. The quantum mechanical postulates have a higher priority, and they tell us that the information must be preserved in the details of the nearly thermal Hawking radiation that remains after the black hole disappears. While Stephen Hawking had believed that the information was lost - and he has made bets of this kind - he eventually switched to our side in the summer of 2003 or 2004 (I am uncertain now). As you could hear from CNN and other major global news agencies, he officially admitted that his opinion was incorrect. The deep insights in string theory have convinced him that John Preskill was right and the bet was lost; Hawking gave an encyclopedia to Preskill as promised. Among these insights that have convinced Hawking, you find Matrix theory and especially the AdS/CFT correspondence. Gravity in asymptotically AdS spaces has an equivalent description in terms of a conformal field theory living on its boundary. This conformal field theory is manifestly unitary and has no room for destruction of the information. This answers the equivalent question about gravity, too. This brings most sane physicists to the opinion that the information is preserved and gravitational physics is not that special after all. But it does not give us a quantitative, calculable framework that would explain how the information gets out of the black holes and what the subtle correlations that remember the initial state look like. Hawking's recent solution Hawking has announced that he had solved the problem. The main ideas of his solution are the following ones: • The scattering S-matrix is the main "nice" observable that should be calculated in a theory of quantum gravity. (I fully agree.) • The scattering does not prevent a black hole from being formed, but such a black hole is just like any other intermediate state or resonance. (I fully agree.) • The thermal nature of the resulting radiation is a consequence of an approximation (that becomes accurate for large black holes) but there is no qualitative difference between black hole intermediate states and other intermediate states; the transition is smooth. (It was actually just me who formulated this point in this way.) • Just like in quantum field theory, the Euclidean setup combined with the Wick rotation is an essential technical tool to do the calculations; Hawking refers to Euclidean gravity as the "only sane way" to do quantum gravity. In the gravitational context, this approach was promoted and improved by Hawking and Gibbons. In fact, the Euclidean approach may be even more important in quantum gravity than it is in quantum field theory and its procedures may represent an even larger fraction of the derivations in the gravitational context. (I agree, and as far as I know, the people who disagree - such as Jacques Distler - have not offered any rational and valid arguments so far.) OK, so Hawking tells you to calculate the S-matrix by a Euclidean path integral over topologically trivial configurations (spacetimes) - those that are continuously connected to the empty spacetime. Such a process may involve a production of a large number of particles in the final state which is a hallmark of an intermediate black hole. Once you calculate the Euclidean S-matrix, you Wick rotate the results to get the amplitudes for the Minkowski signature.
Note that we have only included the topologically trivial spacetimes and this is a good choice that preserves unitarity. On the second page, Hawking proceeds with some technical subtleties. He wants to allow strong gravitational fields to occur even in the initial and final states, it seems. (It does not seem necessary when one talks about the generic S-matrix elements but it is conceivable that these strong fields appear in the Euclidean spacetime anyway.) With strong gravitational fields in place, one can't meaningfully define the wavefunction at time "t" because there is no preferred diff-invariant way of slicing the spacetime. Hawking solves this by a seemingly bizarre operation. He calculates a partition sum with periodic Euclidean time instead of the transition amplitude; it is not 100% clear at this point how he will introduce the initial and final states into this setup. (Note that the Euclidean time is spacelike and it should therefore not be interpreted as a source of the usual violation of causality.) Moreover, this partition sum has a volume-extensive divergent factor. Hawking regulates this infrared problem by introducing a small negative (anti-de-Sitter-like) cosmological constant that does not change the local physics of small black holes much. He obviously deforms the picture into an AdS one in order to get a background that is as well-defined as the usual AdS/CFT backgrounds in string theory. Hawking states that because we are making all measurements at infinity, we can never be sure whether a black hole is present inside or not. This looks like cheating to me; equivalently, it suggests that no true solution is being looked for. Of course, if we only work with the boundary degrees of freedom, we will see no unitarity violations and no problems associated with the black hole dynamics. It's simply because all these things are encoded in the CFT which is unitary. The true surviving question is how this unitary description is reconciled with the bulk interpretation in which a macroscopic black hole is demonstrably present and has the potential to cause information loss headaches. Hawking does not have a working convergent path integral beyond the semiclassical approximation, but let us join Hawking and pretend that this problem is absent. He computes the partition sum over geometries whose boundaries are topologically S^2 (the sphere at infinity) times S^1 (the periodic Euclidean time) at infinity; he works in four spacetime dimensions. There are two simple spacetimes with this boundary: B^3 times S^1 is the empty flat (or anti-de-Sitter) spacetime while S^2 times D^2 is the anti-de-Sitter Schwarzschild topology. While the empty spacetime can be foliated, S^2 times D^2 cannot, because it has no S^1 factor, roughly speaking. Because it can't be foliated, you can't even define what the conservation of the information should mean in this topologically non-trivial case. The contributions to the correlators coming from the topologically trivial case are conserved as the Lorentzian time T grows; the contributions from the topologically non-trivial backgrounds decay. On page 3, Hawking confirms that he was inspired by Maldacena's hep-th/0106112 about the eternal black holes in anti de Sitter space. In that case, you also have two - actually three - geometries that fit into the S^1 times S^2 boundary: empty space, small black holes, large black holes (compared to the radius of curvature). The large black holes dominate the ensemble; they have a large negative action.
Nevertheless, using the bulk techniques you may calculate that a correlator of O(x)O(y) on the boundary decays for large separations (while it has the usual flat-space behavior if x,y are nearby). Such a decrease looks much like other cases of information loss; nevertheless, in this case you may argue that there is a unitary CFT behind it and the exponential decrease may in principle be reduced to repeated scattering. Maldacena also showed that the contribution of the empty spacetime does not decay and it has the right magnitude to be consistent with unitarity; Hawking argues that he strengthened this observation by having shown that the path integral over topologically trivial spacetimes only is unitary. (Again, it is not obvious whether his formal argument holds in reality because of the usual loop UV problems of general relativity.) The large black holes are not too interesting because they don't evaporate. Instead, we want to look at the small black holes. Hawking had been trying to find a Euclidean geometry corresponding to an evaporating Lorentzian black hole for years. Now he says that he failed because there is no such geometry. In the Euclidean setup, only the metrics that can be foliated - empty space and eternal black holes - should be added to the path integral. One of the main questions that you must certainly ask is: Why does dynamics over topologically trivial spacetimes look like the creation of a long-lived black hole with horizons in the Lorentzian signature? I believe that Hawking does not fully answer this question; he only says that "thermal fluctuations may occasionally be large enough to cause a gravitational collapse that creates a small black hole". Let me re-iterate that such a short comment is deeply unsatisfactory. What we want to understand in the first place is the bulk description of the process in which we can see that the usual long-lived black hole is there; we want to see how the concepts of locality and causality are corrected so that the information can escape. Hawking only says that this solution of the information loss puzzle is possible. We could have said the same thing just because there is a dual unitary CFT description. But the local bulk dynamical mechanisms that make these things possible remain nearly as cloudy as before. Some of Hawking's conclusions say: • There are no baby universes branching off - which is what Hawking used to think. The information is preserved purely in our Universe. • The black hole can form while remaining topologically trivial because its evaporation may be viewed as a tunnelling process (Hartle-Hawking). Although this comment can't be considered to be a quantitative answer to my main question, I like it, and let me describe an analogy. Imagine quantum mechanics of a particle on a line. The classically inaccessible regions (E smaller than V) may be compared to the black hole interior. Classically, these are qualitatively different regions from the rest. However, quantum mechanically, the qualitative difference disappears because of tunnelling. All points on the line are qualitatively on equal footing. You can get there. This is why the black hole should be thought of as having a trivial topology quantum mechanically. The situation would change for an infinite inaccessible region (an infinite black hole) where you can't tunnel.
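To make the tunnelling analogy slightly more quantitative, here is a toy numerical estimate (my own illustration, not anything from Hawking's paper): in the leading WKB approximation, the transmission through a barrier of height V and width L is suppressed as exp(-2*kappa*L), so a finite classically forbidden region is merely improbable to cross, while an infinite one is strictly inaccessible.

```python
import numpy as np

# Toy WKB estimate of tunnelling through a square barrier (made-up units:
# hbar = m = 1, particle energy E below the barrier height V).
hbar, m, E, V = 1.0, 1.0, 0.5, 1.0
kappa = np.sqrt(2.0 * m * (V - E)) / hbar   # decay rate inside the barrier

for L in (1.0, 5.0, 20.0, 1e3):
    T = np.exp(-2.0 * kappa * L)            # leading WKB transmission factor
    print(f"barrier width {L:>7}: T ~ {T:.3e}")

# T -> 0 as L -> infinity: only an infinite forbidden region (the analogue
# of an infinite black hole) is truly inaccessible.
```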
Let me summarize: Hawking's argument why the evolution is unitary probably works and The Reference Frame agrees with virtually all of Hawking's broader opinions, but such a solution is not much different from the observation that the dual CFT is unitary. The question why these unitary processes look like a small long-lived black hole and how the necessary correlations are created remains mostly unanswered. Hawking has lost a bet but he seems to think that he has made the critical steps to solve the information loss puzzle. While he has given the encyclopedia of baseball to John Preskill, next time he will give him the ashes from a burned book (or the nearly thermal Hawking radiation) because John Preskill can always reconstruct the information out of them. snail feedback (3): reader Quantoken said... Lubos said: "John Preskill can always reconstruct the information out of them." Not so easily. Hawking would have to give not just the ashes, but also every little bit of debris that could be flying away, and every single photon emitted during the burning, as well as preserve the exact direction and time at which the photons fly away. You need every little bit of quantum information to reconstruct the book. Hawking might as well give Preskill a time machine to let him travel back to the moment right before the book was burned, or just give him the book. reader Olias said... Lubos, I believe there is a hidden variable contained within the end-script by Hawking. The Ashes may be a reference to cricket, rather than baseball. There is also a saying in the English language, "It's just not cricket!", which has the meaning that rules have been broken. So, in the context of, say, baseball, one can state that if a player has gained an unfair advantage, by say bending the rule book, English people tend to shout: "It's just not cricket!" Read the Hawking paragraph again, and one can see a little bit of Hawking 'handwaving', tinged with a famous sense of humour? reader esc said... Thanks for an accessible explanation of this situation. I caught part of the recent Discovery Channel program on this tonight in a bar with poorly-written subtitles, and needed a quick catch-up on the current state of things in this world. I can't wait for a book to come out that is as comprehensible to a casual user of high science like myself as Gleick's Chaos was back in high school, but covering more recent events.
non'rel•a•tiv•is'tic quan'tum mechan'ics Pronunciation: (non'rel-u-ti-vis'tik, non"-). Physics. a form of quantum mechanics that excludes relativistic effects and is approximately applicable to low-energy problems, as the structure of atoms and molecules. Cf. matrix mechanics, Schrödinger equation.
Thursday, May 26, 2016 more random quotes: scott aaronson new perspectives So, John Horgan, the End of Science guy, interviewed Scott Aaronson, a theoretical computer scientist interested in quantum computing and computational complexity theory. In the following, some random quotes. On Quantum Mechanics     [Q]uantum mechanics is astonishingly simple—once you take the physics out of it!  In fact, QM isn’t even “physics” in the usual sense: it’s more like an operating system that the rest of physics runs on as application software.     [A]ccepting quantum mechanics didn’t mean giving up on the computational worldview: it meant upgrading it, making it richer than before.  There was a programming language fundamentally stronger than BASIC, or Pascal, or C—at least with regard to what it let you compute in reasonable amounts of time.  And yet this quantum language had clear rules of its own; there were things that not even it let you do (and one could prove that); it still wasn’t anything-goes.  The Computational Universe     If it’s worthwhile to build the LHC or LIGO—wonderful machines that so far, have mostly triumphantly confirmed our existing theories—then it seems at least as worthwhile to build a scalable quantum computer, and thereby prove that our universe really does have this immense computational power beneath the surface.      Firstly, quantum computing has supplied probably the clearest language ever invented—namely, the language of qubits, quantum circuits, and so on—for talking about quantum mechanics itself. Secondly, one of the most important things we’ve learned about quantum gravity—which emerged from the work of Stephen Hawking and the late Jacob Bekenstein in the 1970s—is that in quantum gravity, unlike in any previous physical theory, the total number of bits (or actually qubits) that can be stored in a bounded region of space is finite rather than infinite.  In fact, a black hole is the densest hard disk allowed by the laws of physics, and it stores a “mere” 10^69 qubits per square meter of its event horizon!  And because of the dark energy (the thing, discovered in 1998, that’s pushing the galaxies apart at an exponential rate), the number of qubits that can be stored in our entire observable universe appears to be at most about 10^122. So, that immediately suggests a picture of the universe, at the Planck scale of 10^-33 meters or 10^-43 seconds, as this huge but finite collection of qubits being acted upon by quantum logic gates—in other words, as a giant quantum computation.  The Big Picture     Ideas from quantum computing and quantum information have recently entered the study of the black hole information problem—i.e., the question of how information can come out of a black hole, as it needs to for the ultimate laws of physics to be time-reversible.  Related to that, quantum computing ideas have been showing up in the study of the so-called AdS/CFT (anti de Sitter / conformal field theory) correspondence, which relates completely different-looking theories in different numbers of dimensions, and which some people consider the most important thing to have come out of string theory.      [S]ome of the conceptual problems of quantum gravity turn out to involve my own field of computational complexity in a surprisingly nontrivial way.  The connection was first made in 2013, in a remarkable paper by Daniel Harlow and Patrick Hayden.  
Harlow and Hayden were addressing the so-called “firewall paradox,” which had lit the theoretical physics world on fire (har, har) over the previous year.     In summary, I predict that ideas from quantum information and computation will be helpful—and possibly even essential—for continued progress on the conceptual puzzles of quantum gravity.      If civilization lasts long enough, then there’s absolutely no reason why there couldn’t be further discoveries about the natural world as fundamental as relativity or evolution. One possible example would be an experimentally-confirmed theory of a discrete structure underlying space and time, which the black-hole entropy gives us some reason to suspect is there.      [T]he ocean of mathematical understanding just keeps monotonically rising, and we’ve seen it reach peaks like Fermat’s Last Theorem that had once been synonyms for hopelessness.  I see absolutely no reason why the same ocean can’t someday swallow P vs. NP, provided our civilization lasts long enough.  In fact, whether our civilization will last long enough is by far my biggest uncertainty.      More seriously, it was realized in the 1970s that techniques borrowed from mathematical logic—the ones that Gödel and Turing wielded to such great effect in the 1930s—can’t possibly work, by themselves, to resolve P vs. NP.  Then, in the 1980s, there were some spectacular successes, using techniques from combinatorics, to prove limitations on restricted types of algorithms.  Some experts felt that a proof of P≠NP was right around the corner.  But in the 1990s, Alexander Razborov and Steven Rudich discovered something mind-blowing: that the combinatorial techniques from the 1980s, if pushed just slightly further, would start “biting themselves in the rear end,” and would prove NP problems to be easier at the same time they were proving them to be harder!  Since it’s no good to have a proof that also proves the opposite of what it set out to prove, new ideas were again needed to break the impasse.      This characteristic of quantum mechanics—the way it stakes out an “intermediate zone,” where (for example) n qubits are stronger than n classical bits, but weaker than 2^n classical bits, and where entanglement is stronger than classical correlation, but weaker than classical communication—is so weird and subtle that no science-fiction writer would have had the imagination to invent it.  But to me, that’s what makes quantum information interesting: that this isn’t a resource that fits our pre-existing categories, that we need to approach it as a genuinely new thing.      [I]f scanning my brain state, duplicating it like computer software, etc. were somehow shown to be fundamentally impossible, then I don’t know what more science could possibly say in favor of “free will being real”!     I hate when the people in power are ones who just go with their gut, or their faith, or their tribe, or their dialectical materialism, and who don’t even feel self-conscious about the lack of error-correcting machinery in their methods for learning about the world.     Just in the fields that I know something about, NP-completeness, public-key cryptography, Shor’s algorithm, the dark energy, the Hawking-Bekenstein entropy of black holes, and holographic dualities are six examples of fundamental discoveries from the 1970s to the 1990s that seem able to hold their heads high against almost anything discovered earlier (if not quite relativity or evolution). 
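As a quick sanity check on Aaronson's two numbers, one can plug the Bekenstein-Hawking formula into a few lines of Python: the number of storable bits is roughly the horizon area divided by 4 l_p^2 ln 2. The Hubble-radius value used below for the observable universe is an assumed, order-of-magnitude input for this back-of-the-envelope estimate.

```python
import math

# Bekenstein-Hawking: S = A / (4 l_p^2) in nats, so bits ~ A / (4 l_p^2 ln 2).
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m / s

l_p2 = hbar * G / c**3                            # Planck length squared, ~2.6e-70 m^2
qubits_per_m2 = 1.0 / (4.0 * l_p2 * math.log(2))
print(f"qubits per m^2 of horizon: {qubits_per_m2:.1e}")   # ~1.4e69

# Observable universe: take the horizon radius to be roughly the Hubble
# radius c/H0 (assumed H0 ~ 70 km/s/Mpc).
H0 = 70e3 / 3.086e22                              # Hubble constant in 1/s
A  = 4.0 * math.pi * (c / H0) ** 2                # horizon area in m^2
print(f"qubits in the observable universe: {A * qubits_per_m2:.1e}")  # ~10^122
```

Both outputs land on the quoted orders of magnitude, 10^69 qubits per square meter and about 10^122 qubits in total.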
Wednesday, February 17, 2016 Decoding Financial Networks: Hidden Dangers and Effective Policies  Two changes have ushered in a new era of analyzing the complex and interdependent world surrounding us. One is related to the increased influx of data, furnishing the raw material for this revolution that is now starting to impact economic thinking. The second change is due to a subtler reason: a paradigm shift in the analysis of complex systems. The buzzword "big data" is slowly being replaced by what is becoming established as "data science." While the cost of computer storage is continually falling, storage capacity is increasing at an exponential rate. In effect, seemingly endless streams of data, originating from countless human endeavors, are continually flowing along global information superhighways and being stored not only in server farms and the cloud, but -- importantly -- also in the researcher's local databases. However, collecting and storing raw data is futile if there is no way to extract meaningful information from it. Here, the budding science of complex systems is helping distill meaning from this data deluge. Traditional problem-solving has been strongly shaped by the success of the reductionist approach taken in science. Put in the simplest terms, the focus has traditionally been on things in isolation -- on the tangible, the tractable, the malleable. But not so long ago, this focus shifted to a subtler dimension of our reality, where the isolation is overcome. Indeed, seemingly single and independent entities are always components of larger units of organization and hence influence each other. Our world, while still being comprised of many of the same "things" as in the past, has become highly networked and interdependent -- and, therefore, much more complex. From the interaction of independent entities, the notion of a system has emerged. Understanding the structure of a system's components does not bring insights into how the system will behave as a whole. Indeed, the very concept of emergence fundamentally challenges our knowledge of complex systems, as self-organization allows for novel properties -- features not previously observed in the system or its components -- to unfold. The whole is literally more than the sum of its parts. This shift away from analyzing the structure of "things" to analyzing their patterns of interaction represents a true paradigm shift, and one that has impacted computer science, biology, physics and sociology. The need to bring about such a shift in economics, too, can be heard in the words of Andy Haldane, chief economist at the Bank of England (Haldane 2011): Economics has always been desperate to burnish its scientific credentials and this meant grounding it in the decisions of individual people. By itself, that was not the mistake. The mistake came in thinking the behavior of the system was just an aggregated version of the behavior of the individual. Almost by definition, complex systems do not behave like this. [...] Interactions between agents are what matters. And the key to that is to explore the underlying architecture of the network, not the behavior of any one node. In a nutshell, the key to the success of complexity science lies in ignoring the complexity of the components while quantifying the structure of interactions. An ideal abstract representation of a complex system is given by a graph -- a complex network. 
This field has been emerging in a modern form since about the turn of the millennium (Watts and Strogatz 1998; Barabasi and Albert 1999; Albert and Barabasi 2002; Newman 2003). Underpinning economics with insights from complex systems requires a major culture change in how economics is conducted. Specialized knowledge needs to be augmented with a diversity of expertise. Or, in the words of Jean-Claude Trichet, former president of the European Central Bank (Trichet 2010): I would very much welcome inspiration from other disciplines: physics, engineering, psychology, biology. Bringing experts from these fields together with economists and central bankers is potentially very creative and valuable. Scientists have developed sophisticated tools for analyzing complex dynamic systems in a rigorous way. What's more, scientists themselves have acknowledged this call for action (see, e.g., Schweitzer et al. 2009; Farmer et al. 2012). In what follows, I will present two case studies that provide an initial glimpse of the potential of applying such a data-driven and network-inspired type of research to economic systems. By uncovering patterns of organization otherwise hidden in the data, these studies caught the attention not only of scholars and the general public, but also of policymakers. The network of global corporate control A specific constraint related to the analysis of economic and financial systems lies in an unfortunate relative lack of data. While other fields are flooded with data, in the realm of economics, a lot of potentially valuable information is deemed proprietary and not disclosed for strategic reasons. A viable detour is utilizing a good proxy that is exhaustive and widely available. Ownership data, representing the percentages of equity a shareholder has in certain companies, is such a dataset. The structure of the ownership network is thought to be a good proxy for that of the financial network (Vitali, Glattfelder and Battiston 2011). However, this is not the main reason for analyzing such a dataset. Ownership networks represent an interface between the fields of economics and complex networks because information on ownership relations crucially unlocks knowledge relating to the global power of corporations. As a matter of fact, ownership gives a certain degree of control to the shareholder. In other words, the signature of corporate control is encoded in these networks (Glattfelder 2013). These and similar issues are also investigated in the field of corporate governance. Bureau van Dijk's commercial Orbis database comprises about 37 million economic actors (e.g., physical persons, governments, foundations and firms) located in 194 countries as well as roughly 13 million directed and weighted ownership links for the year 2007. In a first step, a cross-country analysis of this ownership snapshot was performed (Glattfelder and Battiston 2009). A key finding was that the more locally dispersed control was, the more the global concentration of control lay in the hands of a few powerful shareholders. This is in contrast to the economic idea of "widely held" firms in the United States (Berle and Means 1932). In fact, these results show that the true picture can only be unveiled by considering the whole network of interdependence. By simply focusing on the first level of ownership, one is misled by a mirage. In a next step, the Orbis data was used to construct the global network of ownership.
By focusing on the 43,060 transnational corporations (TNCs) found in the data, a new network was constructed that comprised all the direct and indirect shareholders and subsidiaries of the TNCs. Then, this network of TNCs, containing 600,508 nodes and 1,006,987 links, was further analyzed (Vitali, Glattfelder and Battiston 2011). Figure 1 shows a small sample of the network. Analyzing the topology of the TNC network reveals the first signs of an organizational principle at work. One can see that the network is actually made up of many internally connected sub-networks that are not connected among themselves. The cumulative distribution function of the size of these connected components follows a power law, as there are 23,824 such components varying in size from many single isolated nodes to a cluster of 230 connected nodes. However, the largest connected component (LCC) represents an outlier in the power-law distribution, as it contains 464,006 nodes and 889,601 links. This super-cluster contains only 36 percent of all TNCs. In effect, most TNCs "prefer" to be part of isolated components that comprise a few hundred nodes at most. But what can be said about the TNCs in the LCC? By adding a proxy for the value or size of firms, the network analysis can be extended. In the study, the operating revenue was used for the value of firms. Now it is possible to see where the valuable TNCs are located in the network. Strikingly, the 36 percent of TNCs in the LCC account for 94 percent of the total TNC operating revenue. This finding justifies focusing further analysis solely on the LCC. In general, assigning a value v_j to firm j gives additional meaning to the ownership network. As mentioned, a good proxy reflecting the economic value of a company is the operating revenue. Assigning such a non-topological variable to the nodes uncovers a deeper level of information embedded in the network. If shareholder i holds a fraction W_{ij} of the shares of firm j, W_{ij} v_j represents the value that i holds in j. Accordingly, the portfolio value of firm i is given by p_i = sum_j W_{ij} v_j. (1.1) However, in ownership networks, there are also chains of indirect ownership links. For instance, firm i can gain value from firm k via firm j, if i holds shares in j, which, in turn, holds shares in k. Symbolically, this can be denoted as i -> j -> k. Using these building blocks, and the fact that ownership is related to control, a methodology is introduced that estimates the degree of influence that each agent wields as a result of the network of ownership relations. In other words, a network centrality measure is provided that not only accounts for the structure of the shareholding relations, but -- crucially -- also incorporates the distribution of value. This allows for the top shareholders to be identified. As it turns out, 730 top shareholders have the potential to control 80 percent of the total operating revenue of all TNCs. In effect, this measure of influence is one order of magnitude more concentrated than the distribution of operating revenue. These top shareholders consist of financial institutions located in the United States and the United Kingdom (note that holding many ownership links does not necessarily result in a high value of influence). Combining these two dimensions of analysis -- that is, the topology and the shareholder ranking -- finally uncovers yet another pattern of organization.
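To make equation (1.1) and the indirect-ownership chains concrete, here is a minimal numpy/networkx sketch on a made-up four-firm network. Summing the chain contributions W + W^2 + W^3 + ... as (I - W)^{-1} W is one standard way to integrate indirect ownership; it is not necessarily the exact operator used in the cited study, and the firms, weights and values below are purely illustrative.

```python
import numpy as np
import networkx as nx

# Toy ownership network: W[i, j] = fraction of firm j's shares held by firm i.
firms = ["A", "B", "C", "D"]
W = np.array([
    [0.0, 0.6, 0.2, 0.0],   # A holds 60% of B and 20% of C
    [0.0, 0.0, 0.5, 0.0],   # B holds 50% of C
    [0.0, 0.0, 0.0, 0.3],   # C holds 30% of D
    [0.0, 0.0, 0.0, 0.0],
])
v = np.array([10.0, 5.0, 8.0, 2.0])   # operating revenue as a value proxy

# Direct portfolio value, eq. (1.1): p_i = sum_j W_ij v_j
p_direct = W @ v

# Indirect chains i -> j -> k contribute W^2, W^3, ...; the geometric
# series sums to (I - W)^{-1} W when it converges.
W_int = np.linalg.inv(np.eye(len(firms)) - W) @ W
p_integrated = W_int @ v

# Component analysis of the same graph, as in the LCC discussion above.
G = nx.from_numpy_array(W, create_using=nx.DiGraph)
components = sorted(nx.weakly_connected_components(G), key=len, reverse=True)
print("largest component size:     ", len(components[0]))
print("direct portfolio values:    ", p_direct.round(2))
print("integrated portfolio values:", p_integrated.round(2))
```

On real data one would of course replace the toy matrix with the millions of Orbis links, but the operations stay the same.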
A striking feature of the LCC is that it has a tiny but distinct core of 1,318 nodes that are highly interconnected (12,191 links). Analyzing the identity of the firms present in this core reveals that many of them are also top shareholders. Indeed, the 147 most influential shareholders in the core can potentially control 38 percent of the total operating revenue of all TNCs. In other words, a "superentity" with disproportional power is identified in the already powerful core, akin to a fractal structure. This emerging power structure in the global ownership network has possible negative implications. For instance, as will be discussed in the next section, global systemic risk is sensitive to the connectivity of the network (Battiston et al. 2007; Lorenz and Battiston 2008; Wagner 2009; Stiglitz 2010; Battiston et al. 2012a). Moreover, global market competition is threatened by potential collusion (O'Brien and Salop 2001; Gilo, Moshe and Spiegel 2006). Subjecting a comprehensive global economic dataset to a detailed network analysis has the power to unveil organizational patterns that have previously gone undetected. Although the exact numbers in the study should be taken with a grain of salt, they still give a good first approximation. For instance, the very different methods that can be used to estimate control from ownership all provide very similar aggregated network statistics. Finally, although it cannot be proved that the top influencers actually exert their power or are able to leverage their privileged position, it is also impossible to rule out such activities -- especially since these channels for relaying power can be utilized in a covert manner. In any case, the degree of influence assigned to the shareholders can be understood as the probability of achieving one's own interest against the opposition of the other actors -- a notion reminiscent of Max Weber's idea of potential power (Weber 1978). An ongoing research effort aims to extend this analysis to include additional annual snapshots of the global ownership network up to 2012. The focus now lies on the dynamics and evolution of the network. In particular, the stability of the core over time will be analyzed. Preliminary results on a small subset of the data suggest that the structure of the core is indeed stable. If verified, this would imply that the emergent power structure is resilient to forces reshaping the network architecture, such as the global financial crisis. The structure could also potentially be resistant to market reforms and regulatory efforts. In an interconnected system, the notion of risk can assume many guises. The simplest and most obvious manifestation is that of individual risk. The colloquialism "too big to fail" captures the promise that further disaster can be averted by identifying and assisting the major players. This approach, however, does not work in a network. In systems where the agents are connected and therefore codependent, the relevant measure is systemic risk. Only by understanding the architecture of the network's connectivity can the propagation of financial distress through the system be understood. In essence, systemic risk is akin to the process of an epidemic spreading through a population. A naive intuition would suggest that by increasing the interconnectivity of the system, the threat of systemic risk is reduced. In other words, the overall system should be more resilient when agents diversify their individual risks by increasing the shared links with other agents. 
Unfortunately, this can be shown to be false (Battiston et al. 2012a). Granted, in systems with feedback loops, such as financial systems, initial individual risk diversification can indeed start off by reducing systemic risk. However, there is a threshold related to the level of connectivity, and once it has been reached, any additional diversification effort will only result in increased systemic risk. Above this certain value, feedback loops and amplifications can lead to a knife-edge property, in which case stability is suddenly compromised. Now a paradox emerges: Although individual financial agents become more resistant to shocks coming from their own business, the overall probability of failure in the system increases. In the worst-case scenario, the efforts of individual agents to manage their own risk increase the chances that other agents in the system will experience distress, thereby creating more systemic risk than the risk they reduced via risk-sharing. Against this backdrop, the highly interconnected core of the global ownership network looms ominously. To summarize, in the presence of a network, it is not enough to simply identify the big players that have the potential to damage the system should they experience financial distress. Instead, it is crucial to analyze the network of codependency. The phrase "too connected to fail" captures this focus. However, for this approach to be implemented, a full-blown network analysis is required. Insights can only be gained by simulating the dynamics of such a system on its underlying network structure. For instance, one cannot calculate analytically the threshold of connectivity past which diversification has a destabilizing effect. Still, there is a final step that can be taken in analyzing systemic risk in networks. Next to "too big to fail" (which focuses on the nodes) and "too connected to fail" (which incorporates the links), a third layer can be added by utilizing a more sophisticated network measure called "centrality." In a nutshell, a node's centrality simply depends on its neighbors' centrality. For example, PageRank, the algorithm that Google uses to rank websites in its search-engine results, is a centrality measure. A webpage is more important if other important webpages link to it. Recall also that the methodology for computing the degree of influence that was discussed in the previous section is another example of centrality. A study focusing on this "too central to fail" notion of systemic risk has been conducted (Battiston et al. 2012b). The work employed previously confidential data on the 2008 crisis gathered by the US Federal Reserve to assess systemic risk as part of the Fed's emergency loans program. Inspired by the methodology behind the computation of shareholder influence and PageRank, a novel centrality measure for tracking systemic risk, called DebtRank, is introduced. In the study, debt data from the Fed is augmented with the ownership data used in the analysis of the network of global corporate control. As mentioned, the ownership network is a valid proxy for the undisclosed financial network linking banks. The data also includes detailed information on daily balance sheets for 407 institutions that, together, received bailout funds worth $1.2 trillion from the Fed. The data covers 1,000 days from before, during and after the peak of the crisis, from August 2007 to June 2010. The study focuses on the 22 banks that collectively received three-quarters of that bailout money. 
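Since "a node's centrality depends on its neighbors' centrality" may sound circular, here is a minimal power-iteration sketch of eigenvector centrality, the idea underlying PageRank (which adds a damping term) and, with suitable weights, DebtRank-style influence measures. The adjacency matrix is made up for illustration.

```python
import numpy as np

# Toy directed network: A[i, j] = 1 means node i links to node j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
], dtype=float)

# Power iteration: repeatedly set c <- A c and renormalize. The fixed point
# is the leading eigenvector, i.e. each node's score is proportional to the
# sum of the scores of the nodes it links to; the circularity resolves
# itself into a well-defined limit.
c = np.ones(A.shape[0])
for _ in range(1000):
    c_next = A @ c
    c_next /= np.linalg.norm(c_next)
    if np.allclose(c, c_next, atol=1e-12):
        break
    c = c_next
print("eigenvector centrality:", np.round(c, 3))
```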
It is interesting to observe that almost all of these banks were members of the "super-entity." DebtRank computes the likelihood that a bank will default as well as how much this would damage the creditworthiness of the other banks in the network. In essence, the measure extends the notion of default contagion into that of distress propagation. Crucially, DebtRank proposes a quantitative method for monitoring institutions in a network and identifying the ones that are the most important for the stability of the system. Figure 2 shows an "X-ray image" of the global financial crisis unfolding. It is striking to observe how many of the major players are affected and how some individual institutions threaten the majority of the economic value in the network (a DebtRank value larger than 0.5). Indeed, if a bank with a DebtRank value close to one defaults, it could potentially obliterate the economic value of the entire system. And, finally, the issue of "too central to fail" becomes dauntingly visible: Even institutions with relatively small asset size can become fragile and threaten a large part of the economy. The condition for this to happen is given by the position in the network as measured by the centrality. In a forthcoming publication (Battiston et al. 2015), the notion of DebtRank is re-expressed making use of the more common notion of leverage, defined as the ratio between an institution's assets and equity. From this starting point, the authors develop a stress-test framework that allows the computation of a whole set of systemic risk measures. Again, since detailed data on the bilateral exposures between financial institutions is not publicly available, the true architecture of the financial network cannot be observed. In order to overcome this problem, the framework utilizes Monte Carlo samples of networks with realistic topologies (i.e., network realizations that match the aggregate level of interbank exposure for each financial institution). As an illustrative exercise, the authors run the framework on a set of European banks, with the empirical data, comprising aggregated interbank lending and borrowing volumes for 183 EU banks, obtained from Bankscope. The interbank network is reconstructed for the years 2008 to 2013 using the so-called fitness model. Importantly, the attention is placed not only on first-round effects of an initial shock, but also on the subsequent additional rounds of reverberations within the interbank network. A crucial result is given by the following relation: L(2) = l^b S, (1.2) where L(2) represents the total relative equity loss of the second round of distress propagation induced by the initial shock S, and with l^b > 0 being the weighted average of the interbank leverage. In other words, l^b is derived from the interbank assets and equity. In detail, S is computed from the unit shock on the value of external assets and the external leverage, that is, from the leverage related to the assets that do not originate from within the interbanking system. Equation (1.2) implies the highly undesirable conclusion that the second-round effect of distress propagation is at least as detrimental as the initial shock. This result highlights the important fact that waves of financial distress ripple multiple times through the network in a way that intensifies the problem for the individual nodes. This mechanism only truly becomes visible in a network analysis of the system.
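A toy numerical illustration of the second-round relation (1.2): the leverage matrix below is made up, and the linear update is a simplified DebtRank-style rule (losses capped at full equity), not the exact algorithm of the cited papers.

```python
import numpy as np

# Lambda[i, j]: interbank leverage of bank i towards bank j, i.e. i's
# exposure to j divided by i's equity (illustrative numbers only).
Lambda = np.array([
    [0.0, 0.8, 0.4],
    [0.5, 0.0, 0.7],
    [0.3, 0.6, 0.0],
])

# h[i]: relative equity loss of bank i. First round: an external shock S
# wipes out 10% of bank 0's equity.
h1 = np.array([0.10, 0.0, 0.0])
S = h1.sum()

# Second round: each bank's additional loss is its leverage-weighted
# exposure to its counterparties' first-round losses, capped at 1.
h2 = np.minimum(h1 + Lambda @ h1, 1.0)
L2 = (h2 - h1).sum()
print(f"initial shock S = {S:.3f}, second-round loss L(2) = {L2:.3f}")
# Here L(2) = 0.08 = 0.8 * S: the second round is of the same order as the
# initial shock, as the aggregate relation L(2) = l^b S states.
```

With interbank leverages around two, as observed empirically, the same update makes the second round roughly twice the initial shock, which is exactly the point made in the following paragraph.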
In empirical terms, this result is also compelling, as levels of interbank leverage are often around a value of two. In this light, the distress in the second round can be twice as big as the initial distress on the external assets. To conclude, neglecting second-round effects could therefore lead to a severe underestimation of systemic risk. Outlook for policy-making What is the added value of trying to understand the economy as an interconnected complex system? The most important result to mention in this context is the power of such analysis to uncover hidden features that would otherwise go undetected. Stated simply, the intractable complexity of financial systems can be decoded and understood by unraveling the underlying network. A prime example of a network analysis uncovering unsuspected latent features is the detection of the tiny, but highly interconnected core of powerful actors in the global ownership network. It is a novel finding that the most influential companies do not conduct their business in isolation, but rather are entangled in an extremely intricate web of control. Notice, however, that the very existence of such a small, powerful and self-controlled group of financial institutions was unsuspected in the economics literature. Indeed, its existence is in stark contrast with many theories on corporate governance (see, e.g., Dore 2002). However, understanding the structure of interaction in a complex system is only the first step. Once the underlying network architecture is made visible, the resulting dynamics of such systems can be analyzed. Recall that distress spreads through the network like an epidemic, infecting one node after another. In other words, the true understanding of the notion of systemic risk in a financial setting crucially relies on the knowledge of this propagation mechanism, which again is determined by the network topology. As discussed above, in a real-world setting in which feedback loops can act as amplifiers, the second-round effect of an initial shock is at least as big as the initial impact. It should be noted that the notorious "bank stress tests" also aim at assessing such risks. More specifically, these tests analyze whether, under unfavorable economic scenarios, banks have enough capital to withstand the impact of adverse developments. Unfortunately, while commendable, these efforts only emphasize first-round effects and therefore potentially underestimate the true dangers to a significant degree. A recent example is the Comprehensive Assessment conducted by the European Central Bank in 2014, which included the Asset Quality Review. A first obvious application of the knowledge derived from a complex-systems approach to finance and economics is related to monitoring the health of the system. For instance, DebtRank allows systemic risk to be measured along two dimensions: the potential impact of an institution on the whole system as well as the vulnerability of an institution exposed to the distress of others. This identifies the most dangerous culprits, namely, institutions with both high vulnerability and impact. In Figure 3, the whole extent of the financial crisis becomes apparent, as high vulnerability was indeed compounded with high impact in 2008. In 2013, high vulnerability was offset by relatively low impact. In addition to analyzing the health of the financial system at the level of individual actors, an index could be constructed that incorporates and aggregates the many facets of systemic risk.
In this case, sectors and countries could also be scrutinized. A final goal would be the implementation of forecasting techniques. What probable trajectories leading into crisis emerge from the current state of the system? As Haldane (2011) noted in contemplating the idea of forecasting economic turbulence: It would allow regulators to issue the equivalent of weather-warnings -- storms brewing over Lehman Brothers, credit default swaps and Greece. It would enable advice to be issued -- keep a safe distance from Bear Stearns, sub-prime mortgages and Icelandic banks. And it would enable "what-if?" simulations to be run -- if UK bank Northern Rock is the first domino, what will be the next? In essence, a data- and complex systems-driven approach to finance and economics has the power to comprehensively assess the true state of the system. This offers crucial information to policymakers. By shedding light on previously invisible vulnerabilities inherent in our interconnected economic world, the blindfolds of ignorance can be removed, paving the way to policies that effectively mitigate systemic risk and avert future global crises. References and Figures  —  —  —  This was a chapter contribution to “To the Man with a Hammer: Augmenting the Policymaker’s Toolbox for a Complex World”, Bertelsmann Stiftung, 2016: This article collection helps point the way forward. Gathering a distinguished panel of complexity experts and policy innovators, it provides concrete examples of promising insights and tools, drawing from complexity science, the digital revolution and interdisciplinary approaches. Table of contents:  —  —  —  See also "Ökonomie neu denken", February 16, 2016, Frankfurt am Main, and the accompanying panel discussion. Friday, December 11, 2015 At the Dawn of Human Collective Intelligence trusting the universe to reach ever higher levels of complexity The following was a contribution first published on the 30th of November 2015 in “HOW TO SAVE HUMANITY — Essays and answers from the desks of futurists, economists, biologists, humanitarians, entrepreneurs, activists and other people who spend a lot of time caring about, improving, and supporting the future of humanity.” It is an interesting idiosyncrasy of our times that we have become increasingly accustomed to the ongoing success of the human mind in probing reality and understanding the world we live in. Indeed, the relevance of this ever-growing body of knowledge, describing the universe and ourselves in greater and greater detail, cannot be overstated. But today, even the most breathtaking technological breakthroughs, fostered by this knowledge, can hardly capture the collective attention span for long. It is as if we have come to expect our technological abilities to steadily accelerate and reach breakneck speeds. On the other hand, we have also become very accustomed to, and alarmingly indifferent to and unconcerned about, the state of human affairs. As a species, our recent terraforming activities have fundamentally transformed the biosphere we rely on, with considerable consequences for us individually. In a nutshell, we have devised linear systems that extract resources at one end, which, after being consumed, are disposed of at the other end. However, on a finite planet, extraction soon becomes exploitation and disposal results in pollution. Today, this can be witnessed at unprecedented global scales. 
Just consider the following: substantial levels of pesticides and BPA in vast populations and even remote populations (like Inuit women whose breast milk is toxic due to pollutants accumulating in the ocean's food chain), the increase of chronic diseases, antimicrobial resistance, the Great Pacific and the North Atlantic garbage patches, e-waste, exploding levels of greenhouse gases, peak oil and phosphorus, land degradation, deforestation, water pollution, food waste, overfishing, dramatic loss of biodiversity... The list is constantly growing as we await the arrival of the next billion human inhabitants on this planet. Compounding this acute problem is the fact that today's generations are living at the expense of future generations, ecologically and economically. For instance, we reached Earth Overshoot Day in 2015 on the 13th of August. Each year, this day measures when human consumption of Earth's natural resources, or humanity's ecological footprint, approximately reaches the world's biocapacity to generate those natural resources in a year. Since the introduction of this measure in 1970, when the 23rd of December marked Earth Overshoot Day, this tipping point has been occurring earlier and earlier. Moreover, just check the Global Debt Clock, recording public debt worldwide, to see an incomprehensibly and frighteningly high figure, casting an ominous shadow over future prosperity. Yes, the outlook is very dire indeed. The Two Modes of Intelligence In essence, we have an abundance of individual intelligence, fueling knowledge generation and technological proficiency, but an acute lack of collective intelligence, which would allow our species to co-evolve and co-exist in a sustainable manner with the biosphere that keeps it alive. This is the true enigma of our modern times: why does individual intelligence not foster collective intelligence? Take, for instance, a single termite. Its biological capacity for cognition is very limited. However, as a collective swarm, the termites engineer nests equipped with air-conditioning capabilities, ensuring a constant inside temperature that allows the termites to cultivate a fungus which digests food for them that they could otherwise not utilize. Now take any human. Amazing feats of higher cognitive functioning are manifested: self-awareness, sentience, language capability, creativity, abstract reasoning, formation and defense of beliefs, and much, much more. Remarkably but regrettably, multiplying this amazing potential and capacity by a few billion results in our current state of affairs. It is interesting to note that biological systems do not feature centralized decision-making. There are no architect or engineering termites overseeing construction, no CPU in our brains responsible for consciousness. This decentralized and bottom-up approach appears to result in the emergence of collective intelligence, in other words, in self-organization, adaptivity, and resilience. Indeed, this incredible robustness of biological complex systems is most probably the reason why we can still continue with "business as usual" despite the continued devastating blows we have delivered to the biosphere. In stark contrast to these natural systems, human systems, from political to economic, are all characterized by centralized governance. This top-down approach to collective organization appears to systematically lack adaptivity, resilience, and, most importantly, sustainability. The Zeitgeist and Beyond We truly live in tumultuous times. 
Next to the increasing external pressures just outlined, we are also exposed directly to our own destructiveness. In a global environment where ignorance, myopia, denial, cynicism, indifference, callousness, alienation, disenchantment, and superficiality reign, it is not surprising to witness the rise of fundamentalism and violence in all corners of the world. Neither is it really surprising that many people then try to escape this angst in the short term through distracting consumerism and numbing materialism. Which then leads to the next predicament: This is a strange, rather perverse story. Just to put it in very simple terms: it’s a story about us, people, being persuaded to spend money we don’t have, on things we don’t need, to create impressions that won’t last, on people we don’t care about. (Tim Jackson’s 2010 TED talk.) The reality of the society we’re in, is there are thousands and thousands of people out there, leading lives of quiet screaming desperation, where they work long hard hours, at jobs they hate, to enable them to buy things they don’t need, to impress people they don’t like. (Nigel Marsh’s 2011 TED talk.) Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed. The moral and spiritual damage that comes from this situation is profound. It is a scar across our collective soul. Yet virtually no one talks about it. (David Graeber, “On the Phenomenon of Bullshit Jobs”, 2013.) Our collective psyche is suffering under the current zeitgeist. In just a few decades the complexity and uncertainty of the lives we lead have dramatically increased, and we now struggle even harder to find meaning. So, was this it? Are we simply yet another civilization at the precipice of its demise? Are we just a very brief, albeit spectacular, perturbation in the billion-year history of life on Earth, which will undoubtedly adapt and continue for billions of years until our sun runs out of fuel? At the Dawn Perhaps things are not as they seem. Maybe the chaotic paths to destruction or survival really are only separated by the metaphorical flapping of the wings of a butterfly. In the case at hand, a mere flicker in the minds of people — for instance, a radical and contagious thought or idea — could alter the course of history. Indeed, perhaps acquiring collective intelligence is not as hard as we might imagine. What is missing is possibly a subtle change in the way we perceive and think of ourselves and the world we inhabit; a change that would initiate a true shift in our behavior which could lead to adaptive, resilient, and sustainable human systems and interactions. Maybe the difficulty lies in the simple fact that we all first need to focus on ourselves for the common ground to emerge on which global change could flourish. One of the earliest and strongest constraints every one of us is confronted with as a child is the imprinting of local and static sociocultural and religious narratives, mostly emphasizing external authority. To resist this initial molding requires a very critical and open-minded worldview, not something every human child comes equipped with. What would happen if we replaced these obviously dysfunctional foundational stories that we have been telling our children?
What if we, as a species, agreed to convey ideas to the next generation which do not simply depend on the geographic location of birth but represent something more functional, universal, and unifying? Ideas that also stress self-responsibility and self-reliance? Modern neuroscience heavily emphasizes the plasticity of the human brain. This neuroplasticity reflects how the brain’s circuits constantly get rewired due to changes not only in the environment, but crucially also in response to inner changes within the mind. Cultivating different thought patterns results in different neural networks. As a consequence, we should never underestimate how untainted young brains, exposed to novel empowering ideas, could result in a generation of “new” humans, significantly different from the last one. Possibly some of the following ideas could meet this challenge — ideas capable of transforming the inner space of the mind and thus having the power to emanate into the outer world. Cultivating a Responsible, Dynamic, and Inclusive Mindset First, acknowledge that you are not the center of the universe. The local “reality bubble” you live in is arbitrary and infused with ideas relevant to the past. Your way of life is neither representative nor defining for the human species. Foreign ideas, beliefs, and ways of life are as justified as your own. The way you perceive reality depends on the exact levels of dozens of neurotransmitters and the biologically evolved hardwiring in your brain. In effect, what appears as real and true is always contingent and relative. Reality could be vastly richer, bigger, and more complex than anyone ever dared to dream. And never forget to appreciate the amazing string of measurable coincidences that had to conspire for you to read this sentence: from the creation of space, time, and energy, to the formation of the first heavy elements in the burning cores of stars, which then got scattered into the cosmos when they exploded as supernovae and started to assemble into organic matter, which could store information and spontaneously began to replicate, sparking the evolution of life, which gradually reached ever higher and higher levels of complexity until a lump of organic matter, organized as a network of tens of billions of nodes and roughly 100 trillion links, became self-aware. Secondly, place yourself into the center of your universe. You alone are in charge of your life and solely responsible for your actions. You have the freedom in your mind to choose how you respond to internal urges and external influences. You can strive to cultivate a state of happiness and gratitude in your mind, regardless of the circumstances outside of your mind. Embrace change and accept that impermanence is an immutable fact of life. Let go of the illusion of control. Finally, cultivate a dynamic and inclusive mindset. Assume that all people act to the best of their possibilities and capacities. Face the fact that you can be very wrong in the beliefs you deeply cherish and avoid the illusion of knowledge. Be open to the possibility that other people could be right. Allow your beliefs and ideas to be malleable, adaptive, and self-correcting. Try and strike a healthy balance between critical thinking and open-mindedness. Can we dare to imagine a future in which we teach our children to be empathetic but critical thinkers? In which we teach them to be independent and not to seek acknowledgment from others but only from themselves?
When we teach them not to fear and discriminate against what is perceived as different and foreign; not to fear change and frantically cling on to the status quo, but to face the never-ending challenges of life with confidence and trust? Imagine the collective intelligence that could emerge from a “swarm” of such individuals, emphasizing social inclusion next to cultivating a deep feeling of connectedness to the matrix of life and a profound appreciation of being an integral part of the enigma of existence. Simply leaving out one generation’s worth of flawed and harmful imprinting, and filling the arising void with radically functional and dynamic ideas and concepts, has the power to change everything. The First Rays of Light What if we already are in the middle of the transition and have not yet realized that it is happening? Despite the fact that we are still fueling dysfunctional collective ideas, perhaps we are already witnessing the beginning of a profound paradigm shift towards collective intelligence. Take the recent emergence of decentralized financial and economic interactions that are slowly disrupting the status quo. For instance, the nascent rise of the blockchain ledger in a trustless peer-to-peer network, enabling unthinkable new ways of human economic cooperation. Or the impact of free-access and free-content collaborative efforts providing us with unrestricted availability of nearly unlimited knowledge and constantly evolving, cutting-edge software. Or peer-to-peer lending, crowdfunding, and crowd-sourcing with the capacity to leverage the network effect created by a collective of like-minded people. And not to forget the success of shareconomies, offering a radically different blueprint to the way business has been conducted in the past. All these new technologies are based on bottom-up, dynamic, decentralized, networked, unconstrained, and self-organizing human interactions. It is impossible to gauge the future impact of these systems today. Similarly, imagine trying to assess the potential of a new technology, called the Internet, in the early 1990s. No one had the audacity to predict what today has emerged from this initial network, then comprised of a few million computers, now affecting every aspect of modern human life. We are truly living in a brave new world of unprecedented potential, where future utopias or dystopias are only separated by a thought, an idea, a behavior able to replicate and trigger self-organizing and adaptive collective action. So, where will you be at the dawning of human collective intelligence? Wednesday, July 22, 2015 The Consciousness of Reality / The Illusion of Knowledge back in the game;) The following is another iteration of my little hobby (see the "Evolution Section" at the end): This talk is basically based on Parts II and III of the book I'm currently writing, with the working title: The Illusion of Knowledge: Why Uncertainty is Woven into Any Description of Reality—And Why It Does Not Matter Part I focuses on formal thought systems, mathematics, and physics---in great detail, as witnessed by the over 140 equations introduced to the reader. In essence, Part I is a testimony to the human mind's unprecedented understanding of the workings of the reality it finds itself embedded in. Then, in Part II, notions of certainty are deconstructed, with respect to knowledge, truth, and reality. The subjective, context-dependent, and ambiguous nature of every experience and belief is emphasized. So where does this leave us?
Do we really live in a cynical universe, which reveals itself to the human mind just far enough to awaken the false hope of its comprehensibility, leaving us forever in a state of epistemological nihilism? I sincerely believe otherwise: enter Part III. With brave, radical, out-of-the-box thinking, I believe we can advance our knowledge of the most fundamental questions relating to our existence and existence itself. Some such ideas are: the information-theoretic and information-processing foundation of reality (universe as computer, reality as a simulation) next to the primacy and/or universality of consciousness (consciousness creates reality). The TOK so far: The book will be an open access publication with Springer. Yes, at one point I will try and crowd-fund the costs;) The slides are found here and the transcript of the talk reads: What is real? Well, all of this obviously. But what exactly is it? OK, so you all woke up this morning. A sense of self kicked in. [break] Memories returned. [break] And you became aware of an external world. So, you are an entity that exists in a physical reality. But this raises three questions. [break] What can we know about these things? [break] What is the true nature of reality? [break] And what is an “I” anyway? OK, so let’s start with the question of knowledge. “The more you know, the less you understand.” [break] “I know that I know nothing.” To be fair, these quotes are quite old. Surely today people are less uncertain. Well… “Those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt.” This is from the great philosopher and mathematician Bertrand Russell. [break] “While differing widely in the various little bits we know, in our infinite ignorance we are all equal.” Karl Popper was one of the most influential philosophers of science. And in the same vein: [break] “Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” Daniel Kahneman is the father of behavioral economics and a Nobel laureate. OK, so let’s agree that from a philosophical point of view the notion of certainty is a bit tricky. But we have science, which is a knowledge-generation machine. Or not? Ever since the Pythagoreans, people have realized that the book of nature is written in the language of mathematics. Or in the words of the great mathematician David Hilbert: [break] “Mathematics is the foundation of all exact knowledge of natural phenomena.” But this raises a very profound question. “What is it that breathes fire into the equations and makes a universe for them to describe?” [break] So, basically, “Two miracles confront us here: the existence of laws of nature and the human mind's capacity to divine them.” You all know Stephen Hawking; Eugene Wigner received the Nobel prize in physics in the 60s. Physicists have often been puzzled about the general nature of science, because “Fundamentally, we do not know why our theories work so well.” [break] And “The deeper an explanation is, the more new problems it creates.” David Deutsch is one of the pioneers of quantum computing. But the bewilderment does not stop here. “There is no logical path to these laws; only intuition can reach them.” [break] “Perhaps it is culture rather than nature that dictates the content of scientific theories.” Now intuition and culture don’t usually spring to mind when thinking about science.
And they are also not really ideas one would associate with these two great physicists. So, yes, science works and gives us the amazing gift of technology. But what science exactly is and why it works no one really knows. And surprisingly, Kurt Gödel and Gregory Chaitin showed us that at the heart of mathematics lurk incompleteness and randomness. OK, to summarize: What we have been talking about is called epistemology. It is the branch of philosophy concerned with the nature of knowledge. But just because our knowledge of reality turns out to be a bit elusive doesn’t mean that reality itself should be suspect. Now ontology is the word philosophers use when dealing with the nature of reality. OK, so, let’s move on to this question, what about the nature of reality? Well… “Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning.” Ernst von Glasersfeld was a distinguished philosopher who coined the term “radical constructivism”. This is the idea that all knowledge is always subjective. But what exactly is he talking about here? Let’s zoom into the fabric of reality. In other words, let’s enter the quantum world. [break] These are symbols from the book of nature. This equation describes the birth of quantum physics. Something no one saw coming. Indeed, Max Planck introduced it in an act of despair. [break] And it turns out that this new realm of reality is a truly bizarre place. Particles behave as waves and vice versa, depending on how you look at them. [break] There is an intrinsic limit to the amount of information you can have. [break] And everything is instantaneously connected to everything else. This is called entanglement and you can use it to encrypt information. But things get worse. Some quantum experiments are truly mind-boggling: they appear to alter the past or break causality. OK, let’s look at the universe. What is out there? [break] Well, it turns out that of all the stuff there is, only 5% is ordinary matter. 26% is called dark matter, and no one really knows what it is made of. And 69% is dark energy, some mysterious force in the vacuum making our universe expand faster and faster the bigger it gets. This was discovered in 1998 and was awarded a Nobel prize. And even things as innocuous as time can be very problematic on closer inspection. So much so, that some physicists suspect it doesn’t really exist. [break] “The passage of time is simply an illusion created by our brains.” [break] And what about emergence and self-organization? This is a map of the internet. It is as though there is a fundamental force in the universe driving it to ever higher levels of complexity and structure. Just look at ants: where does this collective intelligence come from that allows them to become such an amazingly clever super-organism? And why can’t we humans achieve this? OK, so reality is indeed a very weird place. But perhaps we can find a sanctuary of clarity and regularity within ourselves. Let’s look at our brains and how we perceive the world. [break] “Instead of reality being passively recorded by the brain, it is actively constructed by it.” [break] “You're not perceiving what's out there. You're perceiving whatever your brain tells you.” [break] “What we call normal perception does not really differ from hallucinations, except that the latter is not anchored by external input.” Wow. But at least I am in control of my mind. Or not?
“The conscious mind is not at the center of action, but on a distant edge, hearing but whispers of the activity.” [break] “The exact levels of dozens of neurotransmitters are critical for who you believe yourself to be.” [break] “Beliefs about logic, economics, ethics, emotions, beauty, social interaction, love, are all products of the biologically evolved ‘hardwiring’ in the brain.” These are the words of David Eagleman. He is a neuroscientist and writer. And yes, things get worse. These two books are an embarrassment to any human being who believes in rationality. Countless experiments show how easily we can be manipulated. Without ever suspecting a thing. And don’t fool yourself. We all fail equally at this. Other experiments have shown that the simple expectation of an experience changes how you perceive it. For instance, tasting wine you thought was expensive results in neural activity in your pleasure center. This does not happen for the same wine if you are told it is cheap. The same is true for how you feel pain. And then there are the placebo and nocebo effects, where your beliefs shape your reality. Like overdosing on sugar pills and nearly dying because you thought they were antidepressants. Then there is this phenomenon called false awakening, where you dream that you wake up. To experience this can be quite unsettling. [break] “To wake up twice in a row is something that can shatter many of the intuitions you have about consciousness: [break] that the vividness, the coherence, and the crispness of a conscious experience are evidence that you are really in touch with reality.” Thomas Metzinger is a philosopher of the mind interested in neuroscience. He asks: “Well, how do you know that you actually woke up this morning?” And then things can go terribly wrong in the mind. This book on psychopathology is a frightening 800 pages thick. Jill Bolte Taylor studies brain anatomy. When she had a golf-ball-sized blood clot in her left hemisphere due to a stroke, this is what she experienced. [break] My consciousness shifted away from my normal perception of reality, to some esoteric space where I'm witnessing myself having this experience. [break] I can no longer define the boundaries of my body - I can't define where I begin and where I end. [break] I felt at one with all the energy that was, and it was beautiful there. Now these aren't really words I would expect from someone whose left brain is being damaged, but rather from someone like… …this. This is Christian Rätsch. He is an anthropologist specialized in ethnopharmacology. His book, called “The Encyclopedia of Psychoactive Plants”, is nearly 1’000 pages thick. And remember what Eagleman said about hallucinations: they are just as real. Perhaps this realization prompted the next quote. There are these extraordinary other types of universe. Aldous Huxley was talking about his experiences with LSD. So what does this all mean? Where does it leave us? Well, if we are really honest, the answer is [break] We don’t know. Basically we are back to René Descartes. “I think, therefore I exist.” So, the only thing I cannot deny is that I am having a subjective experience now. That’s all. But perhaps we can do better. Perhaps if we are willing to abandon some of our cherished beliefs about reality we can start to understand more. And there is a glimmer on the horizon. Information is physical. [break] All things physical are information-theoretic. John Wheeler helped develop general relativity and gave us the term black hole.
And Rolf Landauer made important contributions to information processing in the 60s. “The universe is made not of chunks of stuff, but chunks of information — ones and zeros.” [break] “Quantum physics requires us to abandon the distinction between information and reality.” Seth Lloyd and Anton Zeilinger are currently pioneering the field of quantum information. They are helping us build quantum computers. A second theme is that we are in fact involved in creating reality. This is an idea going back to Immanuel Kant and is also found in Buddhism. [break] “This is a participatory universe.” [break] “Reality is something that comes into being through the very act of human cognition.” [break] “Consciousness is all that exists. Space-time and matter never were fundamental denizens of the universe but have always been among the humbler contents of consciousness.” Richard Tarnas is a historian and author of the book “The Passion of the Western Mind”, an epic journey looking at all the ideas that have shaped our modern world view. And Donald Hoffman is a cognitive scientist. So, continuing with this idea: “Our belief that there is a single universe shared by multiple observers is wrong. Instead, each observer has their own universe.” [break] “This cosmic solipsism turns on all of our common sense notions about the world; then again fundamental physics has a long history of disregarding our common sense notions.” Amanda Gefter is a science journalist and author. And solipsism is the view that only one’s own mind exists. Her idea can perhaps also be summarized as follows: “Objectivity is the illusion that observations are made without an observer.” Heinz von Foerster was a physicist and philosopher and one of the pioneers of cybernetics. But finally, a word of caution. Although these last quotes come from very sober and keen thinkers, they still could be wrong. In fact, everything I have been saying could be wrong. But if we all really are in charge of our own universe, made of pure information, it is essential for us to look for wisdom and truth within ourselves. Perhaps looking for reality outside of the mind is the wrong way to go. Thank you. Index under construction (mostly just people for now): • I don't really recall when this all started. Ever since I was 15 I wanted to study physics. After graduating in 1999, I had more questions than answers relating to reality and consciousness. • In 2001: "On the Structure of the Vacuum and the Dynamics of Scalar Fields" (here), looking at some of the limits of modern physics. • My first job (which would last for 12 years, where I was developing trading model algorithms for the FX market at Olsen Ltd) was an intuitive transition from fundamental physics to complex systems. "A New Kind of Science" by S. Wolfram. • In 2005 I was looking for a new challenge in addition to my work and googled the Chair of Systems Design by chance. • Applied for a PhD there (50% next to my finance job) and had to give a talk that summer where I started to think about the analytical/algorithmic and fundamental/complex paradigms; "Alternate Realities: Mathematical Models of Nature and Man" by J. Casti: science as the art of encoding reality domains into abstract representations. • In the summer of 2006 I read "Zen and the Art of Motorcycle Maintenance" by R. Pirsig in a hammock on one of the Andaman islands after visiting Delhi and Varanasi for our charity • I consolidated a lot of stuff between 2006 and 2009.
These blog posts at Olsen Ltd and stuff on my old webpage: 1, 2, 3, 4. • It all got more serious when I took a course on the philosophy of science during my PhD in 2008: G. Brun and D. Kuenzle, ETH Zurich. • All of this now fuelled the contents of Appendix A of my dissertation (2010, PDF), which got updated and published in Springer's Theses series (2013): laws of nature; paradigms of fundamental processes and complex systems; epistemological and ontological challenges; postmodernism, constructivism, and relativism. • Which then prompted these blog posts about certainty, reality, and perception: 1, 2, 3 (2011 - 2012). • Sometime: "Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos" S. Lloyd. • 2011: "The Passion of the Western Mind: Understanding the Ideas That Have Shaped Our World View" R. Tarnas, "Incognito: The Secret Lives of the Brain" D. Eagleman. • Ideas which also flowed into this Ignite talk in 2011 (or as blog post), which represents a rough sketch of the current talk. • 2012: "The Ego Tunnel: The Science of the Mind and the Myth of the Self" T. Metzinger. • Not sure when the ideas of consciousness entered the picture, but happy to see such crazy ideas also being espoused by scientists and philosophers today. • In 2013 I started negotiating with Springer and started writing... Saturday, February 1, 2014 snow, wind, and avalanches I <3 pow Freeriding is arguably the most fun thing to do on a snowboard. But as the proverb has it: no risk, no fun. There is always a looming threat due to avalanches. Although judging avalanche danger is today based on a lot of scientific knowledge, allowing for proper assessments and decision strategies (see, for instance, Werner Munter), there is always a residual risk. Avalanches are very complex phenomena, depending on a web of factors, like temperature, slope orientation and steepness, terrain, vegetation, snowpack, ... A very difficult variable to deal with is wind. Heavy winds during snowfall can pack incredible amounts of snow at very specific exposures. And windy conditions after the last snowfall can result in very local hot spots. Often only experience can help here. Recently, we had to deal with this. In order to reach the side of the mountain we planned on descending, there was some wind-packed powder to deal with. Between the three of us, we triggered four avalanches. Luckily they were all small and superficial - but you never know. Interestingly, the final couloirs greeted us with epic pow, very different in quality to the other slopes... Perhaps the greatest safety accomplishment of the last years has been the introduction of avalanche airbags. A simple idea based on increasing the volume associated with the freerider. In an avalanche, understood as granular media moving under the influence of gravity, larger particles tend to travel to the surface. This is vital for survival, as being rescued within about 20 minutes results in a very good survival rate, which drops significantly after that. One last thing. If you are "lucky" enough to be close to the tear where the avalanche rips away from the slope, you have a few seconds left to do the right thing. Next to deploying the airbag you can actually try and ride out of the avalanche. When the snow silently crumbles around you, it's like surfing! Your board actually carries you and if you are not distracted by the dynamics of everything around you moving, you can focus on a sideways exit.
This happened to me here: Not sure how easy this is on skis though, as you can see here, here, and here (note the effect of the airbag - the last guy didn't have one; those must have been a long 5 1/2 minutes). Watch the pros struggling: 1, 2, 3, 4, 5. And try not to do this, after you decide to gun it. And then there's these guys: 1, 2. Please, don't be one of those people who turn up with no safety equipment or say stuff like, "but I've never seen an avalanche come down on this slope" or "hey, there were already some tracks, no big deal"! And finally, why bother? Why expose yourself to unnecessary risk? Because it is so much fun, that's why:) Safe and awesome freeriding! Wednesday, November 6, 2013 old posts from This is a collection of old blog posts, going back to 2006. For some strange reason I thought it would be a good idea to have two blogs. They have been migrated here. a philosophy of science primer - part III • part I: some history of science and logical empiricism, • part II: problems of logical empiricism, critical rationalism and its problems. After the unsuccessful attempts to found science on common sense notions as seen in the programs of logical empiricism and critical rationalism, people looked for new ideas and explanations. The Kuhnian View Thomas Kuhn’s enormously influential work on the history of science is called The Structure of Scientific Revolutions. He challenged the idea that science is an incremental process accumulating more and more knowledge. Instead, he identified the following phases in the evolution of science: • prehistory: many schools of thought coexist and controversies are abundant, • history proper: one group of scientists establishes a new solution to an existing problem which opens the doors to further inquiry; a so-called paradigm emerges, • paradigm-based science: unity in the scientific community on what the fundamental questions and central methods are; generally a problem-solving process within the boundaries of unchallenged rules (analogy to solving a Sudoku), • crisis: more and more anomalies and boundaries appear; questioning of established rules, • revolution: a new theory and weltbild takes over, solving the anomalies, and a new paradigm is born. Another central concept is incommensurability, meaning that proponents of different paradigms cannot understand the other’s point of view because they have diverging ideas and views of the world. In other words, every rule is part of a paradigm and there exist no trans-paradigmatic rules. This implies that such revolutions are not rational processes governed by insights and reason. In the words of Max Planck (the founder of quantum mechanics; from his autobiography): a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it. Kuhn deals additional blows to a commonsensical foundation of science with the help of Norwood Hanson and Willard Van Orman Quine: • every human observation of reality contains an a priori theoretical framework, • underdetermination of belief by evidence: any evidence collected for a specific claim is logically consistent with the falsity of the claim, • every experiment is based on auxiliary hypotheses (initial conditions, proper functioning of apparatus, experimental setup, …). People slowly started to realize that there are serious consequences in Kuhn’s ideas and the problems faced by the logical empiricists and critical rationalists in establishing a sound logical and empirical foundation of science: • postmodernism, • constructivism or the sociology of science, • relativism.
Modernism describes the development of Western industrialized society since the beginning of the 19th Century. A central idea was that there exist objective true beliefs and that progression is always linear. Postmodernism replaces these notions with the belief that many different opinions and forms can coexist and all find acceptance. Core ideas are diversity, differences and intermingling. In the 1970s it is seen to enter scientific and cultural thinking. Postmodernism has taken a bad rap from scientists after the so-called Sokal affair, where physicist Alan Sokal got a nonsensical paper published in a journal of postmodern cultural studies by flattering the editors' ideology with nonsense that sounds good. Postmodernism has been associated with scepticism and solipsism, next to relativism and constructivism. Notable scientists identifiable as postmodernists are Thomas Kuhn, David Bohm and many figures in the 20th century philosophy of mathematics, as well as Paul Feyerabend, an influential philosopher of science. To quote the Nobel laureate Steven Weinberg on Kuhnian revolutions: If the transition from one paradigm to another cannot be judged by any external standard, then perhaps it is culture rather than nature that dictates the content of scientific theories. Constructivism excludes objectivism and rationality by postulating that beliefs are always subject to a person’s cultural and theological embedding and inherent idiosyncrasies. It also goes under the label of the sociology of science. In the words of Paul Boghossian (in his book Fear of Knowledge: Against Relativism and Constructivism): Constructivism about rational explanation: it is never possible to explain why we believe what we believe solely on the basis of our exposure to the relevant evidence; our contingent needs and interests must also be invoked. The proponents of constructivism go further: […] all beliefs are on a par with one another with respect to the causes of their credibility. It is not that all beliefs are equally true or equally false, but that regardless of truth and falsity the fact of their credibility is to be seen as equally problematic. From Barry Barnes’ and David Bloor’s Relativism, Rationalism and the Sociology of Knowledge. In its radical version, constructivism fully abandons objectivism: • Objectivity is the illusion that observations are made without an observer (from the physicist Heinz von Foerster; my translation), • Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning (from the philosopher Ernst von Glasersfeld; my translation). In addition, radical constructivism proposes that perception never yields an image of reality but is always a construction of sensory input and the memory capacity of an individual. An analogy would be the submarine captain who has to rely on instruments to indirectly gain knowledge of the outside world. Radical constructivists are motivated by modern insights gained from neurobiology. Historically, Immanuel Kant can be understood as the founder of constructivism. On a side note, the bishop George Berkeley even went so far as to deny the existence of an external material reality altogether. Only ideas and thoughts are real. Another consequence of the foundations of science lacking commonsensical elements and the ideas of constructivism can be seen in the notion of relativism.
If rationality is a function of our contingent and pragmatic reasons, then it can be rational for group A to believe P, while at the same time it is rational for group B to believe the negation of P. Although, as a philosophical idea, relativism goes back to the Greek Protagoras, its implications are unsettling for the Western mind: anything goes (as Paul Feyerabend characterizes his idea of scientific anarchy). If there is no objective truth, no absolute values, nothing universal, then a great many of humanity’s centuries-old concepts and beliefs are in danger. It should, however, also be mentioned that relativism is prevalent in Eastern thought systems and is, for example, found in many Indian religions. In a similar vein, pantheism and holism are notions which are much more compatible with Eastern thought systems than Western ones. Furthermore, John Stuart Mill’s arguments for liberalism appear to also work well as arguments for relativism: • fallibility of people’s opinions, • opinions that are thought to be wrong can contain partial truths, • accepted views, if not challenged, can lead to dogmas, • the significance and meaning of accepted opinions can be lost in time. From his book On Liberty. But could relativism possibly be true? Consider the following hints: • Epistemological • problems with perception: synaesthesia, altered states of consciousness (spontaneous, mystical experiences and drug-induced), • psychopathology describes a frightening amount of defects in the perception of reality and one's self, • people suffering from psychosis or schizophrenia can experience a radically different reality, • free will and neuroscience, • synthetic happiness, • cognitive biases. • Ontological • nonlocal foundation of quantum reality: entanglement, delayed-choice experiment, • illogical foundation of reality: wave-particle duality, superpositions, uncertainty, intrinsic probabilistic nature, time dilation (special relativity), observer/measurement problem in quantum theory, • discreteness of reality: quanta of energy and matter, constant speed of light, • nature of time: not present in fundamental theories of quantum gravity, symmetrical, • arrow of time: why was the initial state of the universe very low in entropy? • emergence, self-organization and structure formation. In essence, perception doesn’t necessarily say much about the world around us. Consciousness can fabricate reality. This makes it hard to be rational. Reality is a really bizarre place. Objectivity doesn’t seem to play a big role. And what about the human mind? Is this at least a paradox-free realm? Unfortunately not. Even what appears as a consistent and logical formal thought system, i.e., mathematics, can be plagued by fundamental problems. Kurt Gödel proved that in every consistent (non-contradictory) system of mathematical axioms rich enough to express the elementary arithmetic of whole numbers, there exist statements which can neither be proved nor disproved within the system. So logical axiomatic systems are incomplete. As an example, Bertrand Russell encountered the following paradox: let R be the set of all sets that do not contain themselves as members. Is R an element of itself or not? If you really accede to the idea that reality and the perception of reality by the human mind are very problematic concepts, then the next puzzles are: • why has science been so fantastically successful at describing reality? • why is science producing amazing technology at breakneck speed?
• why is our macroscopic, classical level of reality so well behaved, and why does it appear so normal, although it is based on quantum weirdness? • are all beliefs justified given the believer's biography and brain chemistry? a philosophy of science primer - part II Continued from part I The Problems With Logical Empiricism The programme proposed by the logical empiricists, namely that science is built of logical statements resting on an empirical foundation, faces central difficulties. To summarize: • it turns out that it is not possible to construct pure formal concepts that solely reflect empirical facts without anticipating a theoretical framework, • how does one link theoretical concepts (electrons, utility functions in economics, inflationary cosmology, Higgs bosons, …) to experiential notions? • how to distinguish science from pseudo-science? Now this may appear a little technical and not very interesting or fundamental to people outside the field of the philosophy of science, but it gets worse: • inductive reasoning is invalid from a formal logical point of view! • causality defies standard logic! This is big news. So, just because I have witnessed the sun going up every day of my life (single observations), I cannot say it will go up tomorrow (general law). Observation alone does not suffice, you need a theory. But the whole idea here is that the theory should come from observation. This leads to the dead end of circular reasoning. But surely causality is undisputable? Well, apart from the problems coming from logic itself, there are extreme examples to be found in modern physics which undermine the common sense notion of a causal reality: quantum nonlocality, the delayed-choice experiment. But challenges often inspire people, so the story continues… Critical Rationalism OK, so the logical empiricists faced problems. Can’t these be fixed? The critical rationalists believed so. A crucial influence came from René Descartes’ and Gottfried Leibniz’s rationalism: knowledge can have aspects that do not stem from experience, i.e., there is an immanent reality to the mind. The term critical refers to the fact that insights gained by pure thought cannot be strictly justified but only critically tested with experience. Ultimate justifications lead to the so-called Münchhausen trilemma, i.e., one of the following: • an infinite regress of justifications, • circular reasoning, • dogmatic termination of reasoning. The most influential proponent of critical rationalism was Karl Popper. His central claims were in essence: • use deductive reasoning instead of induction, • theories can never be verified, only falsified. Although there are similarities with logical empiricism (empirical basis, science is a set of theoretical constructs), the idea is that theories are simply invented by the mind and are temporarily accepted until they can be falsified. The progression of science is hence seen as an evolutionary process rather than a linear accumulation of knowledge. Sounds good, so what went wrong with this ansatz? The Problems With Critical Rationalism In a nutshell: • basic formal concepts cannot be derived from experience without induction; how can they be shown to be true? • deduction turns out to be just as tricky as induction, • what parts of a theory need to be discarded once it is falsified? To see where deduction breaks down, a nice story by Lewis Carroll (the mathematician who wrote the Alice in Wonderland stories): What the Tortoise Said to Achilles.
If deduction goes down the drain as well, not much is left to ground science on notions of logic, rationality and objectivity. Which is rather unexpected of an enterprise that in itself works amazingly well employing just these concepts. Explanations in Science And it gets worse. Inquiries into the nature of scientific explanation reveal further problems. This analysis is based on Carl Hempel’s and Paul Oppenheim’s formalisation of scientific inquiry in natural language. Two basic schemes are identified: deductive-nomological and inductive-statistical explanations. The idea is to show that what is being explained (the explanandum) is to be expected on the grounds of these two types of explanations. The first tries to explain things deductively in terms of regularities and exact laws (nomological). The second uses statistical hypotheses and explains individual observations inductively. Albeit very formal, this inquiry into scientific inquiry is very straightforward and commonsensical. Again, the programme fails: • it can’t explain singular causal events, • it is asymmetric (a change in the air pressure explains the readings on a barometer; however, the barometer reading doesn’t explain why the air pressure changed), • many explanations are irrelevant, • as seen before, inductive and deductive logic is controversial, • how to employ probability theory in the explanation? So what next? What are the consequences of these unexpected and spectacular failings of the simplest premises one would wish science to be grounded on (logic, empiricism, causality, common sense, rationality, …)? The discussion is ongoing and isn’t expected to be resolved soon. See part III a philosophy of science primer - part I Naively one would expect science to adhere to two basic notions: • common sense, i.e., rationalism, • observation and experiments, i.e., empiricism. Interestingly, both concepts turn out to be very problematic if applied to the question of what knowledge is and how it is acquired. In essence, they cannot be seen as a foundation for science. But first a little history of science… Classical Antiquity The Greek philosopher Aristotle was one of the first thinkers to introduce logic as a means of reasoning. His empirical method was driven by gaining general insights from isolated observations. He had a huge influence on the thinking within the Islamic and Jewish traditions next to shaping Western philosophy and inspiring thinking in the physical sciences. Modern Era Nearly two thousand years later, not much had changed. Francis Bacon (the philosopher, not the painter) made modifications to Aristotle’s ideas, introducing the so-called scientific method, where inductive reasoning plays an important role. He paved the way for a modern understanding of scientific inquiry. At approximately the same time, Robert Boyle was instrumental in establishing experiments as the cornerstone of the physical sciences. Logical Empiricism So far so good. By the early 20th Century the notion that science is based on experience (empiricism) and logic, and that knowledge is intersubjectively testable, had had a long history. The philosophical school of logical empiricism (or logical positivism) tries to formalise these ideas. Notable proponents were Ernst Mach, Ludwig Wittgenstein, Bertrand Russell, Rudolf Carnap, Hans Reichenbach, Otto Neurath.
Some main influences were: • David Hume’s and John Locke’s empiricism: all knowledge originates from observation, nothing can exist in the mind which wasn’t before in the senses, • Auguste Comte’s and John Stuart Mill’s positivism: there exists no knowledge outside of science. In this paradigm (see Thomas Kuhn a little later) science is viewed as a building comprised of logical terms based on an empirical foundation. A theory is understood as having the following structure: observation -> empirical concepts -> formal notions -> abstract law. Basically a sequence of ever higher abstraction. This notion of unveiling laws of nature by starting with individual observations is called induction (the other way round, starting with abstract laws and ending with a tangible factual description, is called deduction, see further along). And here the problems start to emerge. See part II Stochastic Processes and the History of Science: From Planck to Einstein How are the notions of randomness, i.e., stochastic processes, linked to theories in physics and what have they got to do with options pricing in economics? How did the prevailing world view change from 1900 to 1905? What connects the mathematicians Bachelier, Markov, Kolmogorov, Ito to the physicists Langevin, Fokker, Planck, Einstein and the economists Black, Scholes, Merton? The Setting • Science up to 1900 was in essence the study of solutions of differential equations (Newton’s heritage); • Was very successful, e.g., Maxwell’s equations: four differential equations describing everything about (classical) electromagnetism; • Prevailing world view: • Deterministic universe; • Initial conditions plus the solution of a differential equation yield certain prediction of the future. Three Pillars By the end of the 20th Century, it became clear that there are (at least?) two additional aspects needed for a more complete understanding of reality: • Inherent randomness: statistical evaluations of sets of outcomes of single observations/experiments; • Quantum mechanics (Planck 1900; Einstein 1905) contains a fundamental element of randomness; • In chaos theory (e.g., Mandelbrot 1963) non-linear dynamics leads to a sensitivity to initial conditions which renders even simple differential equations essentially unpredictable; • Complex systems (e.g., Wolfram 1983), i.e., self-organization and emergent behavior, best understood as outcomes of simple rules. Stochastic Processes • Systems which evolve probabilistically in time; • Described by a time-dependent random variable; • The probability density function describes the distribution of the measurements at time t; • Prototype: the Markov process. For a Markov process, only the present state of the system influences its future evolution: there is no long-term memory. Examples: • Wiener process or Einstein-Wiener process or Brownian motion: • Introduced by Bachelier in 1900; • Continuous (in t and the sample path); • Increments are independent and drawn from a Gaussian normal distribution; • Random walk: • Discrete steps (jumps), continuous in t; • Is a Wiener process in the limit of the step size going to zero. To summarize, there are three possible characteristics: 1. Jumps (in the sample path); 2. Drift (of the probability density function); 3. Diffusion (widening of the probability density function). (Figure: probability density function showing drift and diffusion.) But how to deal with stochastic processes?
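Before turning to the theory, the three characteristics are easy to see in a quick simulation. The following is a minimal sketch (my own illustration, not part of the original post; all parameters are arbitrary choices) of an ensemble of drift-diffusion sample paths with occasional jumps:

```python
import numpy as np

# Minimal sketch: an ensemble of drift-diffusion sample paths with rare jumps.
# All parameters (mu, sigma, jump_prob, jump_size) are illustrative choices.
rng = np.random.default_rng(42)

n_paths, n_steps, dt = 10000, 500, 0.01
mu, sigma = 0.5, 1.0               # drift and diffusion coefficients
jump_prob, jump_size = 0.01, 2.0   # rare discrete jumps in the sample path

x = np.zeros((n_paths, n_steps + 1))
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)            # Wiener increments
    jumps = jump_size * (rng.random(n_paths) < jump_prob) # occasional jumps
    x[:, t + 1] = x[:, t] + mu * dt + sigma * dW + jumps

# The ensemble's probability density function drifts (its mean grows linearly
# in time) and diffuses (its variance grows linearly in time).
T = n_steps * dt
print("mean at T:", x[:, -1].mean())  # ~ mu*T, plus the jump contribution
print("var  at T:", x[:, -1].var())   # ~ sigma^2*T, plus the jump contribution
```

Dropping the jump term leaves a discretized Wiener process with drift; the Markov property holds because each increment depends only on the current state of the path.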
The Micro View Einstein: • Presented a theory of Brownian motion in 1905; • New paradigm: stochastic modeling of natural phenomena; statistics as an intrinsic part of the time evolution of a system; • Mean-square displacement of a Brownian particle proportional to time; • Equation for the Brownian particle similar to a diffusion (differential) equation. Langevin: • Presented a new derivation of Einstein’s results in 1908; • First stochastic differential equation, i.e., a differential equation of a “rapidly and irregularly fluctuating random force” (today called a random variable); • Solutions of the differential equation are random functions. However, there was no formal mathematical grounding until 1942, when Ito developed stochastic calculus: • Langevin’s equations interpreted as Ito stochastic differential equations using Ito integrals; • Ito integral defined to deal with non-differentiable sample paths of random functions; • Ito lemma (generalized integration rule) used to solve stochastic differential equations. • The Markov process is a solution to a simple stochastic differential equation; • The celebrated Black-Scholes option pricing formula is derived from a stochastic differential equation employing Brownian motion. The Fokker-Planck Equation: Moving To The Macro View • The Langevin equation describes the evolution of the position of a single “stochastic particle”; • The Fokker-Planck equation describes the behavior of a large population of “stochastic particles”; • Formally: the Fokker-Planck equation gives the time evolution of the probability density function of the system as a function of time; • Results can be derived more directly using the Fokker-Planck equation than using the corresponding stochastic differential equation; • The theory of Markov processes can be developed from this macro point of view. The Historical Context Bachelier: • Developed a theory of Brownian motion (Einstein-Wiener process) in 1900 (five years before Einstein, and long before Wiener); • Was the first person to use a stochastic process to model financial systems; • Essentially his contribution was forgotten until the late 1950s; • Black, Scholes and Merton’s publication in 1973 finally gave Brownian motion the break-through in finance. Planck: • Founder of quantum theory; • 1900 theory of black-body radiation; • Central assumption: electromagnetic energy is quantized, E = h ν; • In 1914 Fokker derives an equation on Brownian motion which Planck proves; • Applies the Fokker-Planck equation as a quantum mechanical equation, which turns out to be wrong. Kolmogorov: • In 1931 Kolmogorov presented two fundamental equations on Markov processes; • It was later realized that one of them was actually equivalent to the Fokker-Planck equation. Einstein: 1905 “Annus Mirabilis” publications. Fundamental paradigm shifts in the understanding of reality: • Photoelectric effect: • Explained by giving Planck’s (theoretical) notion of energy quanta a physical reality (photons), • Further establishing quantum theory, • Winning him the Nobel Prize; • Brownian motion: • First stochastic modeling of natural phenomena, • The experimental verification of the theory established the existence of atoms, which had been heavily debated at the time, • Einstein’s most frequently cited paper, in the fields of biology, chemistry, earth and environmental sciences, life sciences, engineering; • Special theory of relativity: the relative speeds of the observers’ reference frames determine the passage of time; • Equivalence of energy and mass (follows from special relativity): E = m c^2.
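To collect the micro and macro views above in formulas, here is a sketch in one common modern notation (the symbol choices are mine, not the original post's):

```latex
% Langevin equation (micro view): one stochastic particle driven by a
% rapidly and irregularly fluctuating random force \xi(t)
m\,\frac{dv}{dt} = -\gamma\,v + \xi(t)

% Einstein (1905): the mean-square displacement grows linearly in time
\langle x^2(t) \rangle = 2 D t

% Fokker-Planck equation (macro view): time evolution of the probability
% density p(x,t) of a population of such particles, with drift A(x) and
% diffusion B(x)
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\left[ A(x)\,p \right]
  + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\left[ B(x)\,p \right]

% Geometric Brownian motion, the Ito SDE underlying Black-Scholes
dS_t = \mu\,S_t\,dt + \sigma\,S_t\,dW_t
```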
Einstein was working at the Patent Office in Bern at the time and submitted his Ph.D. to the University of Zurich in July 1905. Later Work: • 1915: general theory of relativity, explaining gravity in terms of the geometry (curvature) of space-time; • Planck also made contributions to general relativity; • Although having helped found quantum mechanics, he fundamentally opposed its probabilistic implications: “God does not throw dice”; • Dreams of a unified field theory: • Spent his last 30 years or so trying (unsuccessfully) to extend the general theory of relativity to unite it with electromagnetism; • Kaluza and Klein elegantly managed to do this in 1921 by developing general relativity in five space-time dimensions; • Today there is still no empirically validated theory able to explain gravity and the (quantum) Standard Model of particle physics, despite intense theoretical research (string/M-theory, loop quantum gravity); • In fact, one of the main goals of the LHC at CERN (officially operational on the 21st of October 2008) is to find hints of such a unified theory (supersymmetric particles, higher dimensions of space). laws of nature What are Laws of Nature? Regularities/structures in a highly complex universe Allow for predictions • Dependent on only a small set of conditions (i.e., independent of very many conditions which could possibly have an effect) …but why are there laws of nature and how can these laws be discovered and understood by the human mind? No One Knows! • G.W. von Leibniz in 1714 (Principes de la nature et de la grâce): • Why is there something rather than nothing? For nothingness is simpler and easier than anything • E. Wigner, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences“, 1960: • […] the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and […] there is no rational explanation for it • […] it is not at all natural that “laws of nature” exist, much less that man is able to discover them • […] the two miracles of the existence of laws of nature and of the human mind’s capacity to divine them • […] fundamentally, we do not know why our theories work so well In a Nutshell • We happen to live in a structured, self-organizing, and fine-tuned universe that allows the emergence of sentient beings (anthropic principle) • The human mind is capable of devising formal thought systems (mathematics) • Mathematical models are able to capture and represent the workings of the universe See also this post: in a nutshell. The Fundamental Level of Reality: Physics Mathematical models of reality are independent of their formal representation: invariance and symmetry • Classical mechanics: invariance of the equations under transformations (e.g., time => conservation of energy) • Gravitation (general relativity): geometry and the independence of the coordinate system (covariance) • The other three forces of nature (unified in quantum field theory): mathematics of symmetry and a special kind of invariance See also these posts: fundamental, invariant thinking. Towards Complexity • Physics was extremely successful in describing the inanimate world in the last 300 years or so • But what about complex systems comprised of many interacting entities, e.g., the life and social sciences? • “The rest is chemistry”; C. D.
Anderson in 1932; echoing the success of a reductionist approach to understanding the workings of nature after having discovered the positron • “At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary […]. Psychology is not applied biology, nor is biology applied chemistry”; P. W. Anderson in 1972; pointing out that the knowledge about the constituents of a system doesn’t reveal any insights into how the system will behave as a whole; so it is not at all clear how you get from quarks and leptons via DNA to a human brain… Complex Systems: Simplicity The Limits of Physics • Closed-form solutions to analytical expressions are mostly only attainable if non-linear effects (e.g., friction) are ignored • Not too many interacting entities can be considered (e.g., the three-body problem) The Complexity of Simple Rules • S. Wolfram’s cellular automaton rule 110: neither completely random nor completely repetitive • “[The] results [simple rules give rise to complex behavior] were so surprising and dramatic that as I gradually came to understand them, they forced me to change my whole view of science […]”; S. Wolfram reminiscing on his early work on cellular automata in the 80s (“A New Kind of Science”, pg. 19) Complex Systems: The Paradigm Shift • The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior • The shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers (only this bottom-up approach to simulating complex systems has been fruitful, all top-down efforts have failed: try programming swarming behavior, ant foraging, pedestrian/traffic dynamics, … not using simple local interaction rules but with a centralized, hierarchical setup!) • Understanding the complex system as a network of interactions (graph theory), where the complexity (or structure) of the individual nodes can be ignored • Challenge: how does the macro behavior emerge from the interaction of the system elements on the micro level? See also these posts: complex, swarm theory, complex networks. Laws of Nature Revisited So are there laws of nature to be found in the life and social sciences? • Yes: scaling (or power) laws • “Complex, collective phenomena give rise to power laws […] independent of the microscopic details of the phenomenon. These power laws emerge from collective action and transcend individual specificities. As such, they are unforgeable signatures of a collective mechanism”; J.P. Bouchaud in “Power-laws in Economy and Finance: Some Ideas from Physics“, 2001 Scaling Laws Scaling-law relations characterize an immense number of natural patterns (from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences), prominently in the form of • scaling-law distributions • scale-free networks • cumulative relations of stochastic processes A scaling law, or power law, is a simple polynomial functional relationship f(x) = a x^k  <=>  Y = (X/C)^E Scaling laws • lack a preferred scale, reflecting their (self-similar) fractal nature • are usually valid across an enormous dynamic range (sometimes many orders of magnitude) See also these posts: scaling laws, benford’s law.
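Since a power law is a straight line in log-log coordinates (log f = log a + k log x), the exponent can be read off as the slope of a linear fit on logarithms. A minimal sketch (my illustration, using synthetic data and arbitrary parameters):

```python
import numpy as np

# Minimal sketch: recover the exponent k of a scaling law f(x) = a * x^k
# from noisy synthetic data via a straight-line fit in log-log coordinates.
rng = np.random.default_rng(0)

a, k = 3.0, -0.75                    # illustrative amplitude and exponent
x = np.logspace(0, 4, 200)           # four orders of magnitude in x
f = a * x**k * rng.lognormal(0.0, 0.1, x.size)   # multiplicative noise

# In log-log coordinates the scaling law is a straight line with slope k
# and intercept log10(a).
slope, intercept = np.polyfit(np.log10(x), np.log10(f), 1)
print("estimated exponent :", slope)          # ~ -0.75
print("estimated amplitude:", 10**intercept)  # ~ 3.0
```

Note that this quick regression is only the visual check; for estimating the exponent of a scaling-law distribution from empirical data, maximum-likelihood methods are the statistically sounder route.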
Scaling Laws In FX
• Event counts related to price thresholds
• Price moves related to time thresholds
• Price moves related to price thresholds
• Waiting times related to price thresholds
Figure: an FX scaling law.

Scaling Laws In Biology
So-called allometric laws describe the relationship between two attributes of living organisms as scaling laws:
• The metabolic rate B of a species is proportional to its mass M: B ~ M^(3/4)
• Heartbeat (or breathing) rate T of a species is proportional to its mass: T ~ M^(-1/4)
• Lifespan L of a species is proportional to its mass: L ~ M^(1/4)
• Invariants: all species have the same number of heart beats in their lifespan (roughly one billion)
Figure: allometric scaling laws (G. West).
G. West (et al.) proposes an explanation of the 1/4 scaling exponents, which follows from underlying principles embedded in the dynamical and geometrical structure of space-filling, fractal-like, hierarchical branching networks, presumed optimized by natural selection: organisms effectively function in four spatial dimensions even though they physically exist in three.

• The natural world possesses structure-forming and self-organizing mechanisms leading to consciousness capable of devising formal thought systems which mirror the workings of the natural world
• There are two regimes in the natural world: basic fundamental processes and complex systems comprised of interacting agents
• There are two paradigms: analytical vs. algorithmic (computational)
• There are ‘miracles’ at work:
• the existence of a universe following laws leading to stable emergent features
• the capability of the human mind to devise formal thought systems
• the overlap of mathematics and the workings of nature
• the fact that complexity emerges from simple rules
• There are basic laws of nature to be found in complex systems, e.g., scaling laws

animal intelligence
This is the larger lesson of animal cognition research: It humbles us. We are not alone in our ability to invent or plan or to contemplate ourselves—or even to plot and lie. Many scientists believed animals were incapable of any thought. They were simply machines, robots programmed to react to stimuli but lacking the ability to think or feel. We’re glimpsing intelligence throughout the animal kingdom.
Photo: Vincent J. Musi, National Geographic.
A dog with a vocabulary of 340 words. A parrot that answers “shape” if asked what is different, and “color” if asked what is the same, while being shown two items of different shape and the same color. An octopus with a “distinct personality” that amuses itself by shooting water at plastic-bottle targets (the first reported invertebrate play behavior). Lemurs with basic calculating abilities. Sheep able to recognize faces (of other sheep and humans) long term, and able to discern moods. Crows able to make and use tools (in tests, even out of materials never seen before). Human-dolphin communication via an invented sign language (with simple grammar). Dolphins’ ability to correctly interpret, on the first occasion, instructions given by a person displayed on a TV screen. This may only be the tip of the iceberg…
Read the article Animal Minds in National Geographic’s March 2008 edition. Ever think about vegetarianism?

complex networks
The study of complex networks was sparked at the end of the 90s with two seminal papers, describing their universal
• small-worlds property [1],
• and scale-free nature [2] (see also this older post: scaling laws).
Figure: the same network visualized as a weighted network with vertex values (left) and as an unweighted, binary network (right).
Today, networks are ubiquitous: phenomena in the physical world (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological systems (e.g., neural networks, epidemiology, food webs, gene regulation), and social realms (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) are best understood if characterized as networks. The explosion of this field of research was and is coupled with the increasing availability of
• huge amounts of data, pouring in from neurobiology, genomics, ecology, finance and the World Wide Web, …,
• computing power and storage facilities.
The new paradigm states that a complex system is best understood if it is mapped to a network, i.e., the links represent some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird if you model the flock’s swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.)
Only in recent years has attention shifted from this topological level of analysis (either links are present or not) to incorporating weights of links, giving their strength relative to each other. Although harder to tackle, these networks are closer to the real-world systems they model. However, there is still one step missing: the vertices of the network can also be assigned a value, which acts as a proxy for some real-world property that is coded into the network structure. The two plots above illustrate the difference if the same network is visualized [3] using weights and values assigned to the vertices (left) or simply plotted as a binary (topological) network (right)…
[1] Watts D. J. and Strogatz S. H., 1998, Collective Dynamics of ‘Small-World’ Networks, Nature, 393, 440–442.
[2] Barabasi A.-L. and Albert R., 1999, Emergence of Scaling in Random Networks, Science, 286, 509–512.
[3] Cuttlefish Adaptive NetWorkbench and Layout

cool links…
think statistics are boring, irrelevant and hard to understand? well, think again. two examples of visually displaying important information in an amazingly cool way:
territory size shows the proportion of all people living on less than or equal to US$1 in purchasing power parity a day; the site displays a large collection of world maps, where territories are re-sized on each map according to the subject of interest. sometimes an image says more than a thousand words…
want to see the global evolution of life expectancy vs. income per capita from 1975 to 2003? and additionally display the co2 emission per capita? choose indicators from areas as diverse as internet users per 1′000 people to contraceptive use amongst adult women and watch the animation. gapminder is a fantastic tool that really makes you think…

work in progress…
Some of the stuff I do all week…
Complex Networks
Visualizing a shareholder network: The underlying network visualization framework is JUNG, with the Cuttlefish adaptive networkbench and layout algorithm (coming soon). The GUI uses Swing.
Stochastic Time Series
Scaling laws in financial time series: A Java framework allowing the computation and visualization of statistical properties. The GUI is programmed using SWT.

plugin of the month
The Firefox add-on Gspace allows you to use Gmail as a file server: This extension allows you to use your Gmail Space (4.1 GB and growing) for file storage.
It acts as an online drive, so you can upload files from your hard drive and access them from every internet-capable system. The interface will make your Gmail account look like an FTP host.

tech dependence…
Because technological advancement is mostly quite gradual, one hardly notices it creeping into one’s life. Only if these high-tech commodities were instantly removed would one realize how dependent one has become. A random list of ‘nonphysical’ things I wouldn’t want to live without anymore:
• Google (e.g., news, scholar, maps, webmaster tools, …): everything you ever wanted to know, and much more; basically the internet ;-)
• Web 2.0 communities: your virtual social network
• towards the babel fish
• recommendations from the fat tail of the probability distribution
• Web browsers (e.g., Firefox): your window to the world
• Version control systems (e.g., Subversion): get organized
• CMS (e.g., TYPO3): disentangle content from design on your web page and more
• LaTeX typesetting software (btw, this is not a fetish ;-): the only sensible and aesthetic way to write scientific documents
• Wikis: the wonderful world of unstructured collaboration
• Blogs: get it out there
• Java programming language: truly platform independent and with nice GUI toolkits (SWT, Swing, GWT); never want to go back to C++ (and don’t even mention C# or .net)
• Eclipse IDE: how much fun can you have while programming?
• MySQL: your very own relational database (the next level: db4o)
• PHP: ok, Ruby is perhaps cooler, but PHP is so easy to work with (e.g., integrating MySQL and web stuff)
• Dynamic DNS: let your home computer be a node of the internet
• Web server (e.g., Apache 2): open the gateway
• CSS: ok, if we have to go with HTML, this helps a lot
• VoIP (e.g., Skype): use your bandwidth
• P2P (e.g., BitTorrent): pool your network
• Video and audio compression (e.g., MPEG, MP3, AAC, …): information theory at its best
• Scientific computing (R, Octave, gnuplot, …): let your computer do the work
• Open source licenses (Creative Commons, Apache, GNU GPL, …): the philosophy!
• Object-oriented programming paradigm: think design patterns
• Rich text editors: online WYSIWYG editing, no messing around with HTML tags
• SSH network protocol: secure and easy networking
• Linux shell programming (“grep”, “sed”, “awk”, “xargs”, pipes, …): old school Unix from the 70s
• E-mail (e.g., IMAP): oops, nearly forgot that one (which reminds me of something I really, really could do without: spam)
• Greylisting: reduce spam
• Debian (e.g., Kubuntu): the basis for it all
• apt-get package management system: a universe of software at your fingertips
• Compiz Fusion window manager: just to be cool…
It truly makes one wonder how all this cool stuff can come for free!!!

climate change 2007
Confused about the climate? Not sure what’s happening? Exaggerated fears or impending cataclysm? A good place to start is a publication by Swiss Re. It is done in a straightforward, down-to-earth, no-bullshit and sane manner. The source of the whole document is given at the bottom.
Executive Summary
The Earth is getting warmer, and it is a widely held view in the scientific community that much of the recent warming is due to human activity. As the Earth warms, the net effect of unabated climate change will ultimately lower incomes and reduce public welfare. Because carbon dioxide (CO₂) emissions build up slowly, mitigation costs rise as time passes and the level of CO₂ in the atmosphere increases.
As these costs rise, so too do the benefits of reducing CO₂ emissions, eventually yielding net positive returns. Given how CO₂ builds up and remains in the atmosphere, early mitigation efforts are highly likely to put the global economy on a path to achieving net positive benefits sooner rather than later. Hence, the time to act to reduce these emissions is now.
The climate is what economists call a “public good”: its benefits are available to everyone and one person’s enjoyment and use of it does not affect another’s. Population growth, increased economic activity and the burning of fossil fuels now pose a threat to the climate. The environment is a free resource, vulnerable to overuse, and human activity is now causing it to change. However, no single entity is responsible for it or owns it. This is referred to as the “tragedy of the commons”: everyone uses it free of charge and eventually depletes or damages it. This is why government intervention is necessary to protect our climate.
Climate is global: emissions in one part of the world have global repercussions. This makes an international government response necessary. Clearly, this will not be easy. The Kyoto Protocol for reducing CO₂ emissions has had some success, but was not considered sufficiently fair to be signed by the United States, the country with the highest volume of CO₂ emissions. Other voluntary agreements, such as the Asia-Pacific Partnership on Clean Development and Climate – which was signed by the US – are encouraging, but not binding. Thus, it is essential that governments implement national and international mandatory policies to effectively reduce carbon emissions in order to ensure the well-being of future generations.
The pace, extent and effects of climate change are not known with certainty. In fact, uncertainty complicates much of the discussion about climate change. Not only is the pace of future economic growth uncertain, but also the carbon dioxide and equivalent (CO₂e) emissions associated with economic growth. Furthermore, the global warming caused by a given quantity of CO₂e emissions is also uncertain, as are the costs and impact of temperature increases. Though uncertainty is a key feature of climate change and its impact on the global economy, this cannot be an excuse for inaction. The distribution and probability of the future outcomes of climate change are heavily weighted towards large losses in global welfare. The likelihood of positive future outcomes is minor and heavily dependent upon an assumed maximum climate change of 2° Celsius above the pre-industrial average. The probability that a “business as usual” scenario – one with no new emission-mitigation policies – will contain global warming at 2° Celsius is generally considered negligible. Hence, the “precautionary principle” – erring on the safe side in the face of uncertainty – dictates an immediate and vigorous global mitigation strategy for reducing CO₂e emissions.
There are two major types of mitigation strategies for reducing greenhouse gas emissions: a cap-and-trade system and a tax system. The cap-and-trade system establishes a quantity target, or cap, on emissions and allows emission allocations to be traded between companies, industries and countries. A tax on, for example, carbon emissions could also be imposed, forcing companies to internalize the cost of their emissions to the global climate and economy. Over time, quantity targets and carbon taxes would need to become increasingly restrictive as targets fall and taxes rise.
Though both systems have their own merits, the cap-and-trade policy has an edge over the carbon tax, given the uncertainty about the costs and benefits of reducing emissions. First, cap-and-trade policies rely on market mechanisms – fluctuating prices for traded emissions – to induce appropriate mitigating strategies, and have proved effective at reducing other types of noxious gases. Second, caps have an economic advantage over taxes when a given level of emissions is required. There is substantial evidence that emissions need to be capped to restrict global warming to 2 °C above pre-industrial levels, or a little more than 1 °C compared to today. Given that the stabilization of emissions at current levels will most likely result in another degree rise in temperature and that current economic growth is increasing emissions, the precautionary principle supports a cap-and-trade policy. Finally, cap-and-trade policies are more politically feasible and palatable than carbon taxes. They are more widely used and understood and they do not require a tax increase. They can be implemented with as much or as little revenue-generating capacity as desired. They also offer business and consumers a great deal of choice and flexibility. A cap-and-trade policy should be easier to adopt in a wide variety of political environments and countries.
Whichever system – cap-and-trade or carbon tax – is adopted, there are distributional issues that must be addressed. Under a quantity target, allocation permits have value and can be granted to businesses or auctioned. A carbon tax would raise revenues that could be recycled, for example, into research on energy-efficient technologies. Or the revenues could be used to offset inefficient taxes or to reduce the distributional aspects of the carbon tax.
Source: “The economic justification for imposing restraints on carbon emissions”, Swiss Re, Insights, 2007; PDF

scaling laws
Scaling-law relations characterize an immense number of natural processes, prominently in the form of
1. scaling-law distributions,
2. scale-free networks,
3. cumulative relations of stochastic processes.
A scaling law, or power law, is a simple polynomial functional relationship, i.e., f(x) depends on a power of x. Two properties of such laws can easily be shown:
• a logarithmic mapping yields a linear relationship,
• scaling the function’s argument x preserves the shape of the function f(x), called scale invariance.
See (Sornette, 2006).
Scaling-Law Distributions
Scaling-law distributions have been observed in an extraordinarily wide range of natural phenomena: from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences; see (Newman, 2004). It is truly amazing that such diverse topics as
• the size of earthquakes, moon craters, solar flares, computer files, sand particles, wars and price moves in financial markets,
• the number of scientific papers written, citations received by publications, hits on webpages and species in biological taxa,
• the sales of music, books and other commodities,
• the population of cities,
• the income of people,
• the frequency of words used in human languages and of occurrences of personal names,
• the areas burnt in forest fires,
are all described by scaling-law distributions.
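To get a feel for what such a distribution looks like, one can generate synthetic scaling-law (Pareto) samples by inverse-transform sampling; a minimal sketch, with an assumed tail exponent of 2:

```python
import random

alpha, n = 2.0, 100_000            # assumed tail exponent and sample size

# Inverse-transform sampling: if U ~ Uniform(0,1], then U**(-1/alpha)
# follows a Pareto distribution with survival function P(X > x) = x^-alpha.
samples = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(n)]

# Self-similarity: whatever section of the range one looks at, the
# proportion of events above 10x to events above x is the same (10^-alpha).
for x in (1, 3, 10):
    ratio = sum(s > 10 * x for s in samples) / sum(s > x for s in samples)
    print(x, ratio)                # ~0.01 at every scale for alpha = 2
```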
First used to describe the observed income distribution of households by the economist Pareto in 1897, this universal law has become better understood through the recent advancements in the study of complex systems, which have helped uncover some of the possible mechanisms behind it. However, there is as yet no real understanding of the physical processes driving these systems.
Processes following normal distributions have a characteristic scale given by the mean of the distribution. In contrast, scaling-law distributions lack such a preferred scale. Measurements of scaling-law processes yield values distributed across an enormous dynamic range (sometimes many orders of magnitude), and for any section one looks at, the proportion of small to large events is the same. Historically, the observation of scale-free or self-similar behavior in the changes of cotton prices was the starting point for Mandelbrot’s research leading to the discovery of fractal geometry; see (Mandelbrot, 1963).
It should be noted that although scaling laws imply that small occurrences are extremely common whereas large instances are quite rare, these large events nevertheless occur much more frequently than under a normal (or Gaussian) probability distribution. For Gaussian distributions, events that deviate from the mean by, e.g., 10 standard deviations (called “10-sigma events”) are practically impossible to observe. For scaling-law distributions, extreme events have a small but very real probability of occurring. This fact is summed up by saying that the distribution has a “fat tail” (in the terminology of probability theory and statistics, distributions with fat tails are said to be leptokurtic or to display positive kurtosis), which greatly impacts the risk assessment. So although most earthquakes, price moves in financial markets, intensities of solar flares, … will be very small, the possibility that a catastrophic event will happen cannot be neglected.
Scale-Free Networks
Another modern research field marked by the ubiquitous appearance of scaling-law relations is the study of complex networks. Many different phenomena in the physical (e.g., computer networks, transportation networks, power grids, spontaneous synchronization of systems of lasers), biological (e.g., neural networks, epidemiology, food webs, gene regulation), and social (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) worlds can be understood as network based. In essence, the links and nodes are abstractions describing the system under study via the interactions of the elements comprising it.
In graph theory, the degree of a node (or vertex), k, describes the number of links (or edges) the node has to other nodes. The degree distribution gives the probability distribution of degrees in a network. For scale-free networks, one finds that the probability that a node in the network connects with k other nodes follows a scaling law. Again, this power law is characterized by the existence of highly connected hubs, whereas most nodes have small degrees. Scale-free networks are
• characterized by high robustness against random failure of nodes, but susceptible to coordinated attacks on the hubs, and
• thought to arise from a dynamical growth process, called preferential attachment, in which new nodes favor linking to existing nodes with high degrees.
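A minimal simulation of this growth process (essentially the preferential-attachment mechanism with one new link per node; the network size is an illustrative assumption) shows the hubs and the fat-tailed degree distribution emerge by themselves:

```python
import random
from collections import Counter

N_NODES = 20_000                   # illustrative network size

# 'stubs' holds one entry per edge endpoint, so a node of degree k appears
# k times; drawing uniformly from it *is* degree-proportional selection.
stubs = [0, 1]                     # start from a single edge 0-1
degree = Counter({0: 1, 1: 1})

for new in range(2, N_NODES):
    target = random.choice(stubs)  # preferential attachment
    degree[new] += 1
    degree[target] += 1
    stubs += [new, target]

# Fraction of nodes with degree k: expect a scaling law P(k) ~ k^-3,
# i.e., a straight line of slope ~ -3 on log-log axes.
hist = Counter(degree.values())
for k in sorted(hist)[:8]:
    print(k, hist[k] / N_NODES)
```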
It should be noted that another prominent feature of real-world networks, namely the so-called small-world property, is separate from a scale-free degree distribution, although scale-free networks are also small-world networks; see (Watts and Strogatz, 1998). For small-world networks, although most nodes are not neighbors of one another, most nodes can be reached from every other by a surprisingly small number of hops or steps. Most real-world complex networks - such as those listed at the beginning of this section - show both scale-free and small-world characteristics. Some general references include (Barabasi, 2002), (Albert and Barabasi, 2001), and (Newman, 2003).
Figure: emergence of scale-free networks in the preferential attachment model (Barabasi and Albert, 1999).
An alternative explanation to preferential attachment, introducing non-topological values (called fitness) of the vertices, is given in (Caldarelli et al., 2002).
Cumulative Scaling-Law Relations
Next to distributions of random variables, scaling laws also appear in collections of random variables, called stochastic processes. Prominent empirical examples are financial time series, where one finds empirical scaling laws governing the relationship between various observed quantities. See (Guillaume et al., 1997) and (Dacorogna et al., 2001).
Albert R. and Barabasi A.-L., 2001, Statistical Mechanics of Complex Networks.
Barabasi A.-L., 2002, Linked — The New Science of Networks, Perseus Publishing, Cambridge, Massachusetts.
Caldarelli G., Capocci A., Rios P. D. L., and Munoz M. A., 2002, Scale-free Networks without Growth or Preferential Attachment: Good Get Richer.
Dacorogna M. M., Gencay R., Müller U. A., Olsen R. B., and Pictet O. V., 2001, An Introduction to High-Frequency Finance, Academic Press, San Diego, CA.
Guillaume D. M., Dacorogna M. M., Dave R. D., Müller U. A., Olsen R. B., and Pictet O. V., 1997, From the Bird’s Eye to the Microscope: A Survey of New Stylized Facts of the Intra-Daily Foreign Exchange Markets, Finance and Stochastics, 1, 95–129.
Mandelbrot B. B., 1963, The Variation of Certain Speculative Prices, Journal of Business, 36, 394–419.
Newman M. E. J., 2003, The Structure and Function of Complex Networks.
Newman M. E. J., 2004, Power Laws, Pareto Distributions and Zipf’s Law.
Sornette D., 2006, Critical Phenomena in Natural Sciences, Springer Series in Synergetics, Springer, Berlin, 2nd edition.
Watts D. J. and Strogatz S. H., 1998, Collective Dynamics of ‘Small-World’ Networks, Nature, 393, 440–442.
See also this post: laws of nature.

swarm theory
National Geographic’s July 2007 edition: Swarm Theory

benford’s law
In 1881 a result was published (by the astronomer Simon Newcomb), based on the observation that the first pages of logarithm books, used at that time to perform calculations, were much more worn than the other pages. The conclusion was that computations involving numbers starting with 1 were performed more often than others: if d denotes the first digit of a number, the probability of its appearance is proportional to log(1 + 1/d). The phenomenon was rediscovered in 1938 by the physicist F. Benford, who confirmed the “law” for a large number of random variables drawn from geographical, biological, physical, demographical, economical and sociological data sets. It even holds for randomly compiled numbers from newspaper articles. Specifically, Benford’s law, or the first-digit law, states that a number’s first digit is 1 with 30.1% probability, 2 with 17.6%, 3 with 12.5%, 4 with 9.7%, 5 with 7.9%, 6 with 6.7%, 7 with 5.8%, 8 with 5.1% and 9 with 4.6% probability.
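These figures, and the connection to exponential growth discussed below, are easy to check numerically; a small sketch using powers of 2 as a stand-in for exponentially growing data:

```python
import math

# Benford's law: the first digit d occurs with probability log10(1 + 1/d)
for d in range(1, 10):
    print(d, round(100 * math.log10(1 + 1 / d), 1))  # 30.1, 17.6, ..., 4.6

# Exponentially growing data obey the law: first digits of 2^1 ... 2^999
first = [int(str(2 ** n)[0]) for n in range(1, 1000)]
for d in range(1, 10):
    print(d, round(100 * first.count(d) / len(first), 1))  # ~Benford
```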
In general, the leading digit d ∈ [1, …, b−1] in base b ≥ 2 occurs with probability proportional to log_b(d + 1) − log_b(d) = log_b(1 + 1/d).
First explanations of this phenomenon, which appears to suspend the notions of probability, focused on its logarithmic nature, which implies a scale-invariant or power-law distribution. If the first digits have a particular distribution, it must be independent of the measuring system, i.e., conversions from one system to another don’t affect the distribution. (This requirement that physical quantities are independent of a chosen representation is one of the cornerstones of general relativity and is called covariance.) So the common-sense requirement that the dimensions of arbitrary measurement systems shouldn’t affect the measured physical quantities is summarized in Benford’s law. In addition, the fact that many processes in nature show exponential growth is also captured by the law, which assumes that the logarithms of numbers are uniformly distributed.
So how come one observes random variables following normal and scaling-law distributions? In 1996 the phenomenon was mathematically rigorously proven (by T. Hill): if one repeatedly chooses different probability distributions and then randomly chooses a number according to each distribution, the resulting list of numbers will obey Benford’s law. Hence the law reflects the behavior of distributions of distributions. Benford’s law has been used to detect fraud in insurance, accounting or expenses data, where people forging numbers tend to distribute their digits uniformly.

infinity?
There is an interesting observation or conjecture to be made from the Metaphysics Map in the post what can we know?, concerning the nature of infinity.
The Finite
Many observations reveal a finite nature of reality:
• Energy comes in finite parcels (quantum mechanics)
• The knowledge one can have about quanta is a fixed value (uncertainty)
• Energy is conserved in the universe
• The speed of light has the same constant value for all observers (special relativity)
• The age of the universe is finite
• Information is finite and hence can be coded into a binary language
Newer and more radical theories propose:
• Space comes in finite parcels
• Time comes in finite parcels
• The universe is spatially finite
• The maximum entropy in any given region of space is proportional to the region’s surface area and not its volume (this leads to the holographic principle, stating that our three-dimensional universe is a projection of physical processes taking place on a two-dimensional surface surrounding it)
So finiteness appears to be an intrinsic feature of the Outer Reality box of the diagram. There is in fact a movement in physics subscribing to the finiteness of reality, called Digital Philosophy. Indeed, this finiteness postulate is a prerequisite for an even bolder statement, namely, that the universe is one gigantic computer (a Turing-complete cellular automaton), where reality (thought and existence) is equivalent to computation. As mentioned above, the self-organizing, structure-forming evolution of the universe can be seen to produce ever more complex modes of information processing (e.g., storing data in DNA, thoughts, computations, simulations and perhaps, in the near future, quantum computations). There is also an approach to quantum mechanics focusing on information, stating that an elementary quantum system carries (is?) one bit of information. This can be seen to lead to the notions of quantisation, uncertainty and entanglement.
The Infinite
It should be noted that zero is infinity in disguise: if one lets the denominator of a fraction go to infinity, the result is zero. Historically, zero was discovered in the 3rd century BC in India and was introduced to the Western world by Arabian scholars in the 10th century AD. As ordinary as zero appears to us today, the great Greek mathematicians didn’t come up with such a concept. Indeed, infinity is something intimately related to formal thought systems (mathematics). Irrational numbers have an infinite number of digits. There are two measures of infinity: countability and uncountability. The former refers to infinite sequences such as 1, 2, 3, … For the latter, starting from 1.0 one can’t even reach 1.1, because there are infinitely many numbers in the interval between 1.0 and 1.1. In geometry, points and lines are idealizations of dimension zero and one, respectively. So it appears as though infinity resides only in the Inner Reality box of the diagram.
The Interface
If it should be true that we live in a finite reality, with infinity residing only within the mind as a concept, then there should be some problems if one tries to model this finite reality with an infinity-harboring formalism. Perhaps this is indeed so. In chaos theory, the sensitivity to initial conditions (butterfly effect) can be viewed as the problem of measuring numbers: the measurement can only have a finite degree of accuracy, whereas the numbers have, in principle, infinitely many decimal places. In quantum gravity (the as yet unsuccessful merger of quantum mechanics and gravity) many of the inherent problems of the formalism could be bypassed when a theory was proposed (string theory) that replaced (zero-dimensional) point particles with one-dimensionally extended objects. Later incarnations, called M-theory, allowed for multidimensional objects. In the above-mentioned information-based view of quantum mechanics, the world appears quantised because the information retrieved by our minds about the world is inevitably quantised.
So the puzzle deepens. Why do we discover the notion of infinity in our minds while all our experiences and observations of nature indicate finiteness?

medical studies
medical studies often contradict each other. results claiming to have “proven” some causal connection are confronted with results claiming to have “disproven” the link, or vice versa. this dilemma affects even reputable scientists publishing in leading medical journals. the topics are diverse:
• high-voltage power supply lines and leukemia [1],
• salt and high blood pressure [1],
• heart diseases and sport [1],
• stress and breast cancer [1],
• smoking and breast cancer [1],
• praying and higher chances of healing illnesses [1],
• the effectiveness of homeopathic remedies and natural medicine,
• vegetarian diets and health,
• low frequency electromagnetic fields and electromagnetic hypersensitivity [2].
basically, this is understood to happen for three reasons:
• i.) the bias towards publishing positive results,
• ii.) incompetence in applying statistics,
• iii.) simple fraud.
publish or perish. in order to guarantee funding and secure the academic status quo, results are selected by their chance of being published.
an independent analysis of the original data used in 100 published studies exposed that roughly half of them showed large discrepancies between the original aims stated by the researchers and the reported findings, implying that the researchers simply skimmed the data for publishable material [3]. this proves fatal in combination with ii.), as every statistically significant result can occur (by definition) by chance in an arbitrary distribution of measured data. so if you only look long enough for arbitrary results in your data, you are bound to come up with something [1]. often, due to budget reasons, the numbers of test persons for clinical trials are simply too small to allow for statistical relevance. ref. [4] showed, among other things, that the smaller the studies conducted in a scientific field, the less likely the research findings are to be true. statistical significance - often evaluated by some statistics software package - is taken as proof without considering the plausibility of the result. many statistically significant results turn out to be meaningless coincidences after accounting for the plausibility of the finding [1]. one study showed that one third of frequently cited results fail a later verification [1]. another study documented that roughly 20% of the authors publishing in the magazine “nature” didn’t understand the statistical method they were employing [5].
iii.) a.) two thirds of the clinical biomedical research in the usa is supported by the industry - twice as much as in 1980 [1]. it was shown that in 1000 studies done in 2003, the nature of the funding correlated with the results: 80% of industry-financed studies had positive results, whereas only 50% of the independent research reported positive findings. it could be argued that the industry has a natural propensity to identify effective and lucrative therapies. however, the authors show that many impressive results were only obtained because they were compared with weak alternative drugs or placebos. [6]
iii.) b.) quoted: “Andrew Wakefield (born 1956 in the United Kingdom) is a Canadian trained surgeon, best known as the lead author of a controversial 1998 research study, published in the Lancet, which reported bowel symptoms in a selected sample of twelve children with autistic spectrum disorders and other disabilities, and alleged a possible connection with MMR vaccination. Citing safety concerns, in a press conference held in conjunction with the release of the report Dr. Wakefield recommended separating the components of the injections by at least a year. The recommendation, along with widespread media coverage of Wakefield’s claims, was responsible for a decrease in immunisation rates in the UK. The section of the paper setting out its conclusions, known in the Lancet as the “interpretation” (see the text below), was subsequently retracted by ten of the paper’s thirteen authors. In February of 2004, controversy resurfaced when Wakefield was accused of a conflict of interest. The London Sunday Times reported that some of the parents of the 12 children in the Lancet study were recruited via a UK attorney preparing a lawsuit against MMR manufacturers, and that the Royal Free Hospital had received £55,000 from the UK’s Legal Aid Board (now the Legal Services Commission) to pay for the research. Previously, in October 2003, the board had cut off public funding for the litigation against MMR manufacturers.
Following an investigation of The Sunday Times allegations by the UK General Medical Council, Wakefield was charged with serious professional misconduct, including dishonesty, due to be heard by a disciplinary board in 2007. In December of 2006, the Sunday Times further reported that in addition to the money given to the Royal Free Hospital, Wakefield had also been personally paid £400,000 which had not been previously disclosed by the attorneys responsible for the MMR lawsuit.”
wakefield had always only expressed his criticism of the combined triple vaccination, supporting single vaccinations spaced in time. the british tv station channel 4 exposed in 2004 that he had applied for patents for the single vaccines. wakefield dropped his subsequent slander action against the media company only in the beginning of 2007. as mentioned, he now awaits charges for professional misconduct. however, he has left britain and now works for a company in austin, texas. it has been uncovered that other employees of this us company had received payments from the same attorney preparing the original lawsuit. [7]
should we be surprised by all of this? next to the innate tendency of human beings to be incompetent and unscrupulous, there is perhaps another level that makes this whole endeavor special. the inability of scientists to conclusively and reproducibly uncover findings concerning human beings is maybe better appreciated if one considers the nature of the subject under study. life, after all, is an enigma, and the connection linking the mind to matter is elusive at best (i.e., the physical basis of consciousness). the body’s capability to heal itself, i.e., the placebo effect and the resulting need for double-blind studies, is indeed very bizarre. however, there are studies questioning whether the effect exists at all ;-)
(taken from a linked source; consult it also for the corresponding links to the sources cited below)
[1] This article in the magazine issued by the Neue Zürcher Zeitung, by Robert Matthews
[2] C. Schierz; Projekt NEMESIS; ETH Zürich; 2000
[3] A. Chan (Centre for Statistics in Medicine, Oxford) et al.; Journal of the American Medical Association; 2004
[4] J. Ioannidis; “Why Most Published Research Findings Are False”; University of Ioannina; 2005
[5] R. Matthews, E. García-Berthou and C. Alcaraz, as reported in this “Nature” article; 2005
[6] C. Gross (Yale University School of Medicine) et al.; “Scope and Impact of Financial Conflicts of Interest in Biomedical Research”; Journal of the American Medical Association; 2003
[7] H. Kaulen; “Wie ein Impfstoff zu Unrecht in Misskredit gebracht wurde” (“How a vaccine was wrongly discredited”); Deutsches Ärzteblatt; Vol. 104, No. 4; 26 January 2007

in a nutshell
Science, put simply, can be understood as working on three levels:
• i.) analyzing the nature of the object being considered/observed,
• ii.) developing the formal representation of the object’s features and its dynamics/interactions,
• iii.) devising methods for the empirical validation of the formal representations.
To be precise, level i.) lies more within the realm of philosophy (e.g., epistemology) and metaphysics (i.e., ontology), as notions of origin, existence and reality appear to transcend the objective and rational capabilities of thought. The main problem being: “Why is there something rather than nothing? For nothingness is simpler and easier than anything.”; [1].
In the history of science the above-mentioned formulation made the understanding of at least three different levels of reality possible:
• a.)
the fundamental level of the natural world,
• b.) inherently random phenomena,
• c.) complex systems.
While level a.) deals mainly with the quantum realm and cosmological structures, levels b.) and c.) comprise mostly biological, social and economic systems.
a.) Fundamental
Many natural sciences focus on a.i.) fundamental, isolated objects and interactions, use a.ii.) mathematical models which are a.iii.) verified (falsified) in experiments that check the predictions of the model - with great success: “The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious. There is no rational explanation for it.”; [2].
b.) Random
Often the nature of the object b.i.) being analyzed is in principle unknown. Only statistical evaluations of sets of outcomes of single observations/experiments can be used to estimate b.ii.) the underlying model, and b.iii.) test it against more empirical data. This is often the approach taken in the fields of social sciences, medicine, and business.
c.) Complex
Moving to c.i.) complex, dynamical systems, and c.ii.) employing computer simulations as a template for the dynamical process, unlocks a new level of reality: mainly the complex and interacting world we experience at our macroscopic length scales in the universe. Here two new paradigms emerge:
• the shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers,
• simple rules giving rise to complex behavior: “And I realized, that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge.”; [3].
However, things are not as clear anymore. What is the exact methodology, and how does it relate to underlying concepts of ontology and epistemology, and what is the nature of these computations per se? Or, within the formulation given above, i.e., iii.c.), what is the “reality” of these models: what do the local rules determining the dynamics in the simulation have to say about the reality of the system c.i.) they are trying to emulate?
There are many coincidences that enabled the structured reality we experience on this planet to evolve: exact values of fundamental constants (initial conditions), emerging structure-forming and self-organizing processes, the possibility of (organic) matter to store information (after being synthesized in supernovae!), the right conditions on earth for harboring life, the emergent possibility of neural networks to establish consciousness and sentience above a certain threshold, … Interestingly, there are also many circumstances that allow the observable world to be understood by the human mind:
• the mystery allowing formal thought systems to map to patterns in the real world,
• the development of the technology allowing for the design and realization of microprocessors,
• the bottom-up approach to complexity identifying a micro level of simple interactions of system elements.
So it appears that the human mind is intimately interwoven with the fabric of reality that produced it. But where is all this leading to? There exists a natural extension to science which fuses the notions from levels a.) to c.), namely
• information and information processing,
• formal mathematical models,
• statistics and randomness.
Notably, it comes from an engineering point of view, deals with quantum computers, and comes full circle back to level i.), the question about the nature of reality: “[It can be shown] that quantum computers can simulate any system that obeys the known laws of physics in a straightforward and efficient way. In fact, the universe is indistinguishable from a quantum computer.”; [4].
At first blush the idea of substituting reality with a computed simulation appears rather ad hoc, but in fact it does have potentially falsifiable notions:
• the discreteness of reality, i.e., the notion that continuity and infinity are not physical,
• the reality of the quantum realm should be contemplated from the point of view of information, i.e., the only relevant reality subatomic quanta manifest is that they register one bit of information: “Information is physical.”; [5].
[1] von Leibniz, G. W., “Principes de la nature et de la grâce”, 1714
[2] Wigner, E. P., “Symmetries and Reflections”, MIT Press, Cambridge, 1967
[3] Wolfram, S., “A New Kind of Science”, Wolfram Media, pg. 19, 2002
[4] Lloyd, S., “Programming the Universe”, Random House, pgs. 53–54, 2006
[5] Landauer, R., Nature, 335, 779–784, 1988
See also: “The Mathematical Universe” by M. Tegmark.
Related post: laws of nature.

what can we know?
Put bluntly, metaphysics asks simple albeit deep questions:
• Why do I exist?
• Why do I die?
• Why does the world exist?
• Where did everything come from?
• What is the nature of reality?
• What is the meaning of existence?
• Is there a creator or omnipotent being?
Although these questions may appear idle and futile, they seem to represent an innate longing for knowledge of the human mind. Indeed, children can and often do pose such questions, only to be faced with the resignation or impatience of adults. To make things simpler and tractable, one can focus on the question “What can we know?”.
When you wake up in the morning, you instantly become aware of your self, i.e., you experience an immaterial inner reality you can feel and probe with your thoughts. Upon opening your eyes, a structured material outer reality appears. These two insurmountable facts are enough to sketch a small metaphysical diagram:
Focusing on the outer reality or physical universe, there exists an underlying structure-forming and self-organizing process starting with an initial singularity or Big Bang (an extremely low entropy state, i.e., high order, giving rise to the arrow or direction of time). Due to the exact values of physical constants in our universe, this organizing process yields structures eventually giving birth to stars, which, at the end of their lifecycle, explode (supernovae), allowing for nuclear reactions to fuse heavy elements. One of these heavier elements brings with it novel bonding possibilities, resulting in a new pattern: organic matter. Within a couple of billion years, the structure-forming process gave rise to a plethora of living organisms. Although each organism would die after a short lifespan, the process of life as a whole continued in a sustainable equilibrium state and survived a couple of extinction events (some of which eradicated nearly 90% of all species).
The second law of thermodynamics states that the entropy of the universe is increasing, i.e., the universe is becoming an ever more unordered place. It would seem that the process of life creating stable and ordered structures violates this law.
In fact, complex structures spontaneously appear where there is a steady flow of energy from a high-temperature input source (the sun) to a low-temperature external sink (the earth). So pumping a system with energy leads it to a state far from thermodynamic equilibrium, which is characterized by the emergence of ordered structures.
Viewed from an information-processing perspective, the organizing process suddenly experienced a great leap forward. The brains of some organisms had reached a critical mass, allowing for another emergent behavior: consciousness.
The majority of people in industrialized nations take a rational and logical outlook on life. Although one might think this is an inevitable mode of awareness, it actually is a cultural imprinting, as there exist other civilizations putting far less emphasis on rationality. Perhaps the divide between Western and Eastern thinking illustrates this best. Whereas the former is locked in continuous interaction with the outer world, the latter focuses on the experience of an inner reality. A history of meditation techniques underlines this emphasis on the nonverbal experience of one’s self. Thought is either totally avoided, or the mind is focused on repetitive activities, in effect deactivating it.
Recall from fundamental that there are two surprising facts to be found. On the one hand, the physical laws dictating the fundamental behavior of the universe can be mirrored by formal thought systems devised by the mind. And on the other hand, real complex behavior can be emulated by computer simulations following simple laws (the computers themselves are an example of technological advances made possible by the successful modelling of nature by formal thought systems).
This conceptual map allows one to categorize a lot of stuff in a concise manner. Also, the interplay between the outer and inner realities becomes visible. However, the above-mentioned questions remain unanswered. Indeed, more puzzles appear. So as usual, every advance in understanding just makes the question mark bigger…
Continued here: infinity?

invariant thinking…
Arguably the most fruitful principle in physics has been the notion of symmetry. Covariance and gauge invariance - two simply stated symmetry conditions - are at the heart of general relativity and the standard model (of particle physics). This is not only aesthetically pleasing, it also illustrates a basic fact: in coding reality into a formal system, we should only allow the most minimal reference to be made to this formal system. I.e., reality likes to be translated into a language that doesn’t explicitly depend on its own peculiarities (coordinates, number bases, units, …). This is a pretty obvious idea and allows for physical laws to be universal.
But what happens if we take this idea to the logical extreme? Will the ultimate theory of reality demand: “I will only allow myself to be coded into a formal framework that makes no reference to itself whatsoever”? Obviously a mind twister. But the question remains: what is the ultimate symmetry idea? Or: what is the ultimate invariant? Does this imply “invariance” even with respect to our thinking? How do we construct a system that supports itself out of itself, without relying on anything external? Can such a magical feat be performed by our thinking?
Taken from this newsgroup message. See also: fundamental.

complex
While physics has had amazing success in describing most of the observable universe in the last 300 years or so, the formalism appears to be restricted to the fundamental workings of nature. Only solid-state physics attempts to deal with collective systems, and only thanks to the magic of symmetry is one able to deduce fundamental analytical solutions. In order to approach real-life complex phenomena, one needs to adopt a more systems-oriented focus. This also means that the interactions of entities become an integral part of the formalism. Some ideas should illustrate the situation:
• Most calculations in physics are idealizations and neglect dissipative effects like friction
• Most calculations in physics deal with linear effects, as non-linearity is hard to tackle and is associated with chaos; however, most physical systems in nature are inherently non-linear
• The analytical solution for three gravitating bodies in classical mechanics, given their initial positions, masses, and velocities, cannot be found; it turns out to be a chaotic system which can only be simulated in a computer; however, there are an estimated hundred billion galaxies in the universe
Systems Thinking
Systems theory is an interdisciplinary field which studies relationships of systems as a whole. The goal is to explain complex systems, which consist of a large number of mutually interacting and interwoven parts, in terms of those interactions. A timeline:
• Cybernetics (50s): Study of communication and control, typically involving regulatory feedback, in living organisms and machines
• Catastrophe theory (70s): Phenomena characterized by sudden shifts in behavior arising from small changes in circumstances
• Chaos theory (80s): Describes the behavior of non-linear dynamical systems that under certain conditions exhibit a phenomenon known as chaos (sensitivity to initial conditions, regimes of chaotic and deterministic behavior, fractals, self-similarity)
• Complex adaptive systems (90s): The “new” science of complexity which describes emergence, adaptation and self-organization, employing tools such as agent-based computer simulations
In systems theory one can distinguish three major hierarchies:
• Suborganic: Fundamental reality, space and time, matter, …
• Organic: Life, evolution, …
• Metaorganic: Consciousness, group dynamical behavior, financial markets, …
However, it is not understood how one can traverse the following chain: bosons and fermions -> atoms -> molecules -> DNA -> cells -> organisms -> brains. I.e., how to understand phenomena like consciousness and life within the context of inanimate matter and fundamental theories.
Category Theory
The mathematical theory called category theory is a result of the “unification of mathematics” in the 40s. A category is the most basic structure in mathematics and is a set of objects and a set of morphisms (maps). A functor is a structure-preserving map between categories. This dynamical systems picture can be linked to the notion of formal systems mentioned above: physical observables are functors, independent of a chosen representation or reference frame, i.e., invariant and covariant.
Object-Oriented Programming
This paradigm of programming can be viewed in a systems framework, where the objects are implementations of classes (collections of properties and functions) interacting via functions (public methods); a toy sketch follows below.
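To make this concrete, a hypothetical toy example (in Python for brevity; the class names are invented): objects encapsulate their state and interact only by sending messages (calling public methods), and polymorphism lets the sender ignore the receiver's concrete type.

```python
# Hypothetical toy example: agents interacting via messages (method calls).

class Agent:
    """Base class: encapsulates state behind a public interface."""
    def __init__(self, name):
        self.name = name

    def receive(self, message):
        raise NotImplementedError  # each subclass decides its own behavior

class Ant(Agent):
    def receive(self, message):
        return f"{self.name} follows the pheromone trail: {message}"

class Trader(Agent):
    def receive(self, message):
        return f"{self.name} places an order on the signal: {message}"

# Polymorphism: the sender needs no knowledge of the concrete class.
for agent in (Ant("a1"), Trader("t1")):
    print(agent.receive("stimulus"))
```

The same message produces type-appropriate behavior, which is what lets object-oriented models mirror systems of heterogeneous interacting agents.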
A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys certain rules (encapsulation, inheritance, polymorphism, …). Some advantages of this integral approach to software development:
• Easier to tackle complex problems
• Allows natural evolution towards complexity and better modeling of the real world
• Reusability of concepts (design patterns) and easy modification and maintenance of existing code
• Object-oriented design has more in common with natural languages than other (i.e., procedural) approaches
Algorithmic vs. Analytical
Perhaps the shift of focus in this new weltbild is best understood when one considers the paradigm of complex systems theory:
• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior: emergence, structure formation, self-organization, adaptive behavior (learning), …
This allows a departure from equation-based descriptions to models of dynamical processes simulated in computers. This is perhaps the second miracle involving the human mind and the understanding of nature. Not only does nature work on a fundamental level akin to formal systems devised by our brains, the hallmark of complexity appears to be coded in simplicity (“simple sets of rules give complexity”), allowing computational machines to emulate its behavior.
It is very interesting to note that in this paradigm the focus is on the interaction, i.e., the complexity of the agent can be ignored. That is why the formalism works for chemicals in a reaction, ants in an anthill, humans in social or economical organizations, … In addition, one should also note that simple rules - the epitome of deterministic behavior - can also give rise to chaotic behavior.
The emerging field of network theory (an extension of graph theory, yielding results such as scale-free topologies, small-worlds phenomena, etc., observed in a stunning variety of complex networks) is also located at this end of the spectrum of the formal descriptions of the workings of nature. Finally, to revisit the analytical approach to reality, note that in the loop quantum gravity approach, space-time is perceived as a causal network arising from graph updating rules (spin networks, which are graphs associated with group-theoretic properties), where particles are envisaged as ‘topological defects’ and geometric properties of reality, such as dimensionality, are defined solely in terms of the network’s connectivity pattern.
See also: a list of open questions in complexity theory.

What is science?
• Science is the quest to capture the processes of nature in formal mathematical representations
So “math is the blueprint of reality” in the sense that formal systems are the foundation of science.
In a nutshell:
• Natural systems are a subset of reality, i.e., the observable universe
• Guided by thought, observation and measurement, natural systems are “encoded” into formal systems
• Using logic (rules of inference) in the formal system, predictions about the natural system can be made (decoding)
• Checking the predictions against the experimental outcome gives the validity of the formal system as a model for the natural system
Physics can be viewed as dealing with the fundamental interactions of inanimate matter. For a technical overview, go here.

math models
• Mathematical models of reality are independent of their formal representation
This leads to the notions of symmetry and invariance. Basically, this requirement gives rise to nearly all of physics.
Classical Mechanics
Symmetry, understood as the invariance of the equations under temporal and spatial transformations, gives rise to the conservation laws of energy, momentum and angular momentum. In layman’s terms this means that the outcome of an experiment is unchanged by the time and location of the experiment and the motion of the experimental apparatus. Just common sense…
Mathematics of Symmetry
The intuitive notion of symmetry has been rigorously defined in the mathematical terms of group theory.
Physics of Non-Gravitational Forces
The three non-gravitational forces are described in terms of quantum field theories. These in turn can be expressed as gauge theories, where the parameters of the gauge transformations are local, i.e., differ from point to point in space-time. The Standard Model of elementary particle physics unites the quantum field theories describing the fundamental interactions of particles in terms of their (gauge) symmetries.
Physics of Gravity
Gravity is the only force that can’t be expressed as a quantum field theory. Its symmetry principle is called covariance, meaning that in the geometric language of the theory describing gravity (general relativity) the physical content of the equations is unchanged by the choice of the coordinate system used to represent the geometrical entities.
To illustrate, imagine an arrow located in space. It has a length and an orientation. In geometric terms this is a vector, let’s call it a. If I want to compute the length of this arrow, I need to choose a coordinate system, which gives me the x-, y- and z-axis components of the vector, e.g., a = (3, 5, 1). So starting from the origin of my coordinate system (0, 0, 0), if I move 3 units in the x direction (left-right), 5 units in the y direction (forwards-backwards) and 1 unit in the z direction (up-down), I reach the end of my arrow. The problem is now that, depending on the choice of coordinate system - meaning the orientation and the size of the units - the same arrow can look very different: a = (3, 5, 1) = (0, 23.34, -17). However, every time I compute the length of the arrow in meters, I get the same number, independent of the chosen representation. In general relativity the vectors are replaced by multidimensional equivalents called tensors, and the commonsense requirement that calculations involving tensors do not depend on how I represent them in space-time is covariance.
It is quite amazing, but there is only one more ingredient needed in order to construct one of the most aesthetic and accurate theories in physics. It is called the equivalence principle and states that the gravitational force is equivalent to the forces experienced during acceleration.
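Returning to the arrow example for a moment, the invariance can be checked numerically: rotating the coordinate system changes the components completely while the length stays put. A minimal sketch (restricted to two dimensions and an arbitrary angle for simplicity):

```python
import math

def length(v):
    return math.sqrt(sum(c * c for c in v))

a = (3.0, 5.0)                 # components in the original coordinate system

# The same arrow described in axes rotated by 40 degrees (arbitrary choice):
t = math.radians(40)
a_rot = (math.cos(t) * a[0] + math.sin(t) * a[1],
         -math.sin(t) * a[0] + math.cos(t) * a[1])

print(a_rot)                        # completely different components...
print(length(a), length(a_rot))     # ...identical, invariant length
```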
The equivalence principle may sound trivial, but it has very deep implications.

Physics of Condensed Matter

This branch of physics, also called solid-state physics, deals with the macroscopic physical properties of matter. It is one of physics' first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at such a fundamental level as in the above-mentioned theories, they are a cornerstone of the theory. Namely, the complexity of the problems can be reduced using symmetry in order for analytical solutions to be found. Technically, the symmetry groups are boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors and quasi-crystals (interestingly, they have fractal properties!). In the superconducting phase, the wave function becomes symmetric.

The Success

It is somewhat of a miracle that the formal systems the human brain discovers/devises find their match in the workings of nature. In fact, there is no reason for this to be the case, other than that it is the way things are. The following two examples should underline the power of this fact, where new features of reality were discovered solely on the requirements of the mathematical model:

• In order to unify electromagnetism with the weak force (two of the three non-gravitational forces), the theory postulated two new elementary particles: the W and Z bosons. Needless to say, these particles were hitherto unknown, and it took 10 years for technology to advance sufficiently in order to allow their discovery.
• The fusion of quantum mechanics and special relativity led to the Dirac equation, which demands the existence of an, up to then, unknown flavor of matter: antimatter. Four years after the formulation of the theory, antimatter was experimentally discovered.

The Future…

Despite these successes, modern physics is still far from being a unified, paradox-free formalism describing all of the observable universe. Perhaps the biggest obstacle lies in the last missing step to unification. In a series of successes, forces appearing as independent phenomena turned out to be facets of the same formalism: electricity and magnetism were united in the four Maxwell equations; as mentioned above, electromagnetism and the weak force were merged into the electroweak force; and finally, the electroweak and strong force were united in the framework of the standard model of particle physics. These four forces are all expressed as quantum (field) theories. There is only one observable force left: gravity.

The efforts to quantize gravity and devise a unified theory have taken a strange turn in the last 20 years. The problem is still unsolved; however, the mathematical formalisms engineered for this quest - namely string/M-theory and loop quantum gravity - have had a twofold impact:

• A new level in the application of formal systems is reached. Whereas before, physics relied on mathematical branches that were developed independently from any physical application (e.g., differential geometry, group theory), string/M-theory is actually spawning new fields of mathematics (namely in topology).
• These theories tell us very strange things about reality:
  • Time does not exist on a fundamental level
  • Space and time per se become quantized
  • Space has more than three dimensions
  • Another breed of fundamental particles is needed: supersymmetric matter

Unfortunately no one knows if these theories are hinting at a greater reality behind the observable world, or if they are "just" math. The main problem being the fact that any kind of experiment to verify the claims appears to be out of reach of our technology…
Teaching graduate analysis has inspired me to think about the completeness theorem for Fourier series and the more difficult Plancherel theorem for the Fourier transform on $\mathbb{R}$. There are several ways to prove that the Fourier basis is complete for $L^2(S^1)$. The approach that I find the most interesting, because it uses general tools with more general consequences, is to apply the spectral theorem to the Laplace operator on a circle. It is not difficult to show that the Laplace operator is a self-adjoint affiliated operator, i.e., the healthy type of unbounded operator for which the spectral theorem applies. It's easy to explicitly solve for the point eigenstates of the Laplace operator. Then you can use a Fredholm argument, or ultimately the Arzela-Ascoli theorem, to show that the Laplace operator is reciprocal to a compact operator, and therefore has no continuous spectrum. The argument is to integrate by parts. Suppose that $$\langle -\Delta \psi, \psi\rangle = \langle \vec{\nabla} \psi, \vec{\nabla} \psi \rangle \le E$$ for some energy $E$, whether or not $\psi$ is an eigenstate and even whether or not it has unit norm. Then $\psi$ is microscopically controlled and there is only a compact space of such $\psi$, except for adding a constant. The payoff of this abstract proof is the harmonic completeness theorem for the Laplace operator on any compact manifold $M$ with or without boundary. It also works when $\psi$ is a section of a vector bundle with a connection.

My question is whether there is a nice generalization of this approach to obtain a structure theorem for the Laplace operator, or the Schrödinger equation, in non-compact cases. Suppose that $M$ is an infinite complete Riemannian manifold with some kind of controlled geometry. For instance, say that $M$ is quasi-isometric to $\mathbb{R}^n$ and has pinched curvature. (Or say that $M$ is amenable and has pinched curvature.) Maybe we also have the Laplace operator plus some sort of controlled potential --- say a smooth, bounded potential with bounded derivatives. Then can you say that the spectrum of the Laplace or Schrödinger operator is completely described by controlled solutions to the PDE, which can be interpreted as "almost normalizable" states?

There is one case of this that is important but too straightforward. If $M$ is the universal cover of a torus $T$, and if its optional potential is likewise periodic, then you can use "Bloch's theorem". In other words, you can solve the problem for flat line bundles on $T$, where you always just have a point spectrum, and then lift this to a mixed continuous and point spectrum upstairs. So you can derive the existence of a fancy spectrum that is not really explicit, but the non-compactness is handled using an explicit method. I think that this method yields a cute proof of the Plancherel theorem for $\mathbb{R}$ (and $\mathbb{R}^n$ of course): Parseval's theorem as described above gives you Fourier completeness for both $S^1$ and $\mathbb{Z}$, and you can splice them together using the Bloch picture to get completeness for $\mathbb{R}$.

Only a simple remark. In the non-compact case, the paradigmatic example is the harmonic oscillator $$ -\Delta_{\mathbb R^d}+\frac{\vert x\vert^2}{4} $$ with spectrum $\frac{d}{2}+\mathbb N$.
The eigenvectors are the Hermite functions, with an explicit expression from the so-called Maxwellian $\psi_0=(2\pi)^{-d/4}\exp\left(-\frac{\vert x\vert^2}{4}\right)$ and the creation operators $(\alpha!)^{-1/2}(\frac{x}{2}-\frac{d}{dx})^\alpha \psi_0$. In one dimension the operator $-\frac{d^2}{dx^2}+x^4$ (quartic oscillator) also has a compact resolvent, but nothing explicit is known about the eigenfunctions. – Bazin May 2 '12 at 13:53

More subtle is the compactness of the resolvent of the 2D $$ -\Delta_{\mathbb R^2}+x^2y^2. $$ – Bazin May 2 '12 at 13:54

I just saw this playing around on meta.... Are you asking a question beyond that spectrally almost every solution is polynomially bounded? – Helge Aug 15 '12 at 18:53

@Helge - That's part of the story, but in the ordinary Plancherel theorem, not the hardest part to state or prove. You would also want some statement about the spectral measure (that is, the projection-valued measure produced by the spectral theorem) associated to the Laplace or Schrödinger operator. Again, if you have a Laplace operator on a closed manifold, there is an algorithm to diagonalize it completely. The completeness theorem is considered very important, and not just the fact that you can find eigenfunctions. – Greg Kuperberg Aug 18 '12 at 3:16

2 Answers

Since this has not been mentioned, let me point to the Weyl-Stone-Titchmarsh-Kodaira theorem, which gives the generalized Fourier transform and Plancherel formula of a selfadjoint Sturm-Liouville operator. The ODE section in Dunford-Schwartz II presents this. See also the nice original paper of Kodaira (1949). The (one-dimensional) Schrödinger operator with periodic potential (Hill's operator) is also treated in Kodaira's paper. In several variables, scattering theory provides Plancherel theorems. For the Dirichlet Laplacian in the exterior of a compact obstacle, one can find a result of this kind in chapter 9 of M.E. Taylor's book PDE II. Formula (2.15) in that chapter is the Plancherel theorem of the Fourier transform $\Phi$ defined in (2.8). Stone's formula represents the (projection-valued) spectral measure of a selfadjoint operator as the limit of the resolvent at the real axis. It is a key ingredient in proofs of these results.

Too big to fit well as a comment: There is a seeming technicality which is important not to overlook, the question of whether a symmetric operator is "essentially self-adjoint" or not. As I discovered only embarrassingly belatedly, this "essential self-adjointness" has a very precise meaning, namely, that the given symmetric operator has a unique self-adjoint extension, which then is necessarily given by its (graph-) closure. In many natural situations, Laplacians and such are essentially self-adjoint. But with any boundary conditions, this tends not to be the case, exactly as in the simplest Sturm-Liouville problems on finite intervals, not even getting to the Weyl-Kodaira-Titchmarsh complications. Gerd Grubb's relatively recent book on "Distributions and operators" discusses such stuff. The broader notion of Friedrichs' canonical self-adjoint extension of a symmetric semi-bounded operator is very useful here. At the same time, for symmetric operators that are not essentially self-adjoint, the case of $\Delta$ on $[a,b]$ with varying boundary conditions (to ensure symmetric-ness) shows that there is a continuum of mutually incomparable self-adjoint extensions.
Thus, on $[0,2\pi]$, the Dirichlet boundary conditions give $\sin(nx/2)$ for integer $n$ as an orthonormal basis, while the boundary conditions that values and first derivatives match at the endpoints give the "usual" Fourier series, in effect on a circle, by connecting the endpoints. This most-trivial example already shows that the spectrum, even in the happy-simple discrete case, is different depending on boundary conditions.
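A numerical aside (my addition, not part of the thread): the dependence of the spectrum on boundary conditions is easy to see by diagonalizing finite-difference Laplacians on $[0,2\pi]$. With Dirichlet conditions the low eigenvalues approach $n^2/4$ (the $\sin(nx/2)$ basis above), while with periodic identification they approach $n^2$ with multiplicity two, plus the zero mode.

    import numpy as np

    # -d^2/dx^2 on [0, 2*pi] with Dirichlet conditions (second differences)
    N = 400
    h = 2 * np.pi / (N + 1)
    L_dir = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

    # Same operator with the endpoints identified (periodic boundary conditions)
    hp = 2 * np.pi / N
    L_per = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / hp**2
    L_per[0, -1] = L_per[-1, 0] = -1 / hp**2

    print(np.linalg.eigvalsh(L_dir)[:6])  # ~ 0.25, 1.00, 2.25, 4.00, 6.25, 9.00
    print(np.linalg.eigvalsh(L_per)[:6])  # ~ 0.00, 1.00, 1.00, 4.00, 4.00, 9.00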
I’ve just uploaded to the arXiv my paper “The high exponent limit $p \to \infty$ for the one-dimensional nonlinear wave equation“, submitted to Analysis & PDE.  This paper concerns an under-explored limit for the Cauchy problem

$$-\phi_{tt} + \phi_{xx} = |\phi|^{p-1} \phi; \quad \phi(0,x) = \phi_0(x); \quad \phi_t(0,x) = \phi_1(x) \qquad (1)$$

to the one-dimensional defocusing nonlinear wave equation, where $\phi: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is the unknown scalar field, $p > 1$ is an exponent, and $\phi_0, \phi_1: \mathbb{R} \to \mathbb{R}$ are the initial position and velocity respectively, and the $t$ and $x$ subscripts denote differentiation in time and space.  To avoid some (extremely minor) technical difficulties let us assume that $p$ is an odd integer, so that the nonlinearity is smooth; then standard energy methods, relying in particular on the conserved energy

$$E(\phi)(t) = \int_{\mathbb{R}} \frac{1}{2} |\phi_t(t,x)|^2 + \frac{1}{2} |\phi_x(t,x)|^2 + \frac{1}{p+1} |\phi(t,x)|^{p+1}\ dx, \qquad (2)$$

on finite speed of propagation, and on the one-dimensional Sobolev embedding $H^1(\mathbb{R}) \subset L^\infty(\mathbb{R})$, show that from any smooth initial data $\phi_0, \phi_1$, there is a unique global smooth solution $\phi$ to the Cauchy problem (1).

It is then natural to ask how the solution $\phi$ behaves under various asymptotic limits.  Popular limits for these sorts of PDE include the asymptotic time limit $t \to \pm \infty$, the non-relativistic limit $c \to \infty$ (where we insert suitable powers of $c$ into various terms in (1)), the small dispersion limit (where we place a small factor in front of the dispersive term $+\phi_{xx}$), the high-frequency limit (where we send the frequency of the initial data $\phi_0, \phi_1$ to infinity), and so forth.

Tristan Roy recently posed to me a different type of limit, which to the best of my knowledge has not been explored much in the literature (although some of the literature on limits of the Ginzburg-Landau equation has a somewhat similar flavour): the high exponent limit $p \to \infty$ (holding the initial data $\phi_0, \phi_1$ fixed).  From (1) it is intuitively plausible that as $p$ increases, the nonlinearity gets “stronger” when $|\phi| > 1$ and “weaker” when $|\phi| < 1$; the “limiting equation”

$$-\phi_{tt} + \phi_{xx} = |\phi|^{\infty} \phi; \quad \phi(0,x) = \phi_0(x); \quad \phi_t(0,x) = \phi_1(x) \qquad (3)$$

would then be expected to be linear when $|\phi| < 1$ and infinitely repulsive when $|\phi| > 1$ (i.e. in the limit, the solution should be confined to range in the interval $[-1,1]$, much as is the case with linear wave and Schrödinger equations with an infinite barrier potential; though with the key difference that the nonlinear barrier in (3) is confining the range of $\phi$ rather than the domain).

Of course, the equation (3) does not make rigorous sense as written; we need to formalise what an “infinite nonlinear barrier” is, and how the wave $\phi$ will react to that barrier (e.g. will it reflect off of it, or be absorbed?).  So the questions are to find the correct description of the limiting equation, and to rigorously demonstrate that solutions to (1) converge in some sense to that equation.

It is natural to require that $\phi_0$ stays away from the barrier, in the sense that $|\phi_0(x)| < 1$ for all $x$; in particular this implies that the energy (2) stays (locally) bounded as $p \to \infty$; it also ensures that (1) converges in a satisfactory sense to the free wave equation for sufficiently short times.
For technical reasons we also have to make a mild assumption that either of the null energy densities $\phi_1 \pm \partial_x \phi_0$ vanishes on a set with at most finitely many connected components.  The main result is then that as $p \to \infty$, the solution $\phi = \phi^{(p)}$ to (1) converges locally uniformly to a Lipschitz, piecewise smooth limit $\phi = \phi^{(\infty)}$, which is restricted to take values in $[-1,1]$, with $-\phi_{tt}+\phi_{xx}$ (interpreted in a weak sense) being a negative measure supported on $\{ \phi=+1\}$ plus a positive measure supported on $\{\phi = -1\}$.  Furthermore, we have the reflection conditions

$$(\partial_t \pm \partial_x) |\phi_t \mp \phi_x| = 0.$$

It turns out that the above conditions uniquely determine $\phi$, and one can even solve for $\phi$ explicitly for any given data; such solutions start off smooth but pick up an increasing number of (Lipschitz continuous) singularities over time as they reflect back and forth across the nonlinear barriers $\{\phi=+1\}$ and $\{\phi=-1\}$.  (An explicit example of such a reflection is given in the paper.)  [The above conditions vaguely resemble entropy conditions, as appear for instance in kinetic formulations of conservation laws, though I do not know of a precise connection in this regard.]

In the remainder of this post I would like to describe the strategy of proof and one of the key a priori bounds needed.  I also want to point out the connection to Liouville’s equation, which was discussed in the previous post.

– Strategy –

The top-level strategy is based on compactness methods: first show that the family $\phi^{(p)}$ of solutions is compact in a suitable topology, show that any limit point $\phi$ of these solutions obeys the properties stated above, and then show that these properties uniquely determine the limit $\phi$.

The compactness (in the local uniform topology) is easy, coming from the Arzelà-Ascoli theorem and energy conservation (and relies crucially on the fact that the one-dimensional NLW (1) remains subcritical even as $p \to \infty$; I do not know what to predict in the critical case of two dimensions, or the supercritical case of higher dimensions).  The uniqueness is also not too difficult, as one can solve for $\phi$ obeying these conditions by quite classical means based on the method of characteristics (i.e. following the solution along null rays) and the fundamental theorem of calculus.  The main difficulty is to show that a limit point $\phi$ of the solutions $\phi^{(p)}$ actually does satisfy all of the properties listed above.  For this, it turns out to be necessary to establish a number of a priori estimates on $\phi^{(p)}$ and its first derivatives that will survive the passage to the limit.  Some of these estimates are based on the pointwise conservation laws for Liouville’s equation that I discussed in the previous post; but now I want to turn to another key estimate, namely the pointwise estimate

$$|\phi^{(p)}(t,x)| \leq 1 + \frac{\log p}{p} + O\left( \frac{1}{p} \right) \qquad (4)$$

as $p \to \infty$, which of course will keep the limiting solution $\phi^{(\infty)}$ confined to range in the interval $[-1,1]$.  This bound turns out to be best possible, and is important in bounding the nonlinear effects of (1) (for instance, it implies that the nonlinear term in (1) is pointwise bounded by $O(p)$).
The connection to the Liouville equation arises from the ansatz

$$\phi^{(p)} = p^{1/(p-1)} \left(1 + \frac{1}{p} \psi( p t, p x )\right);$$

then (1) becomes (for positive $\phi$)

$$-\psi_{tt} + \psi_{xx} = \left(1 + \frac{1}{p} \psi\right)^p,$$

which formally converges to the Liouville equation $- \psi_{tt} + \psi_{xx} = e^\psi$ in the limit $p \to \infty$.  For comparison, the estimate (4) corresponds in this ansatz to an upper bound $\psi \leq O(1)$ on $\psi$.

– Confinement –

Now we prove (4).  To simplify the discussion let us assume that $\phi = \phi^{(p)}$ is always non-negative (one can restrict to this case by sign reversal symmetry, finite speed of propagation, and energy conservation).  As in the previous post, it is convenient to work in null coordinates

$$u = t+x, \quad v = t-x,$$

in which case the equation (1) becomes

$$\phi_{uv} = - \frac{1}{4} \phi^p. \qquad (5)$$

In particular, $\phi_u$ is decreasing in the $v$ direction, and $\phi_v$ is decreasing in the $u$ direction.  With fixed initial data, this gives us an upper bound

$$\phi_u, \phi_v \leq O(1) \qquad (6)$$

uniformly in $p$.

Now suppose that $(t_0,x_0)$ is the first time and place at which (4) fails, in the sense that

$$\phi(t_0,x_0) = 1 + \frac{\log p}{p} + \frac{K}{p}$$

for some large constant $K$.  Then from (6) we know that $\phi$ is also large just a bit before $(t_0,x_0)$; in particular, we will have

$$\phi(t,x) \geq 1 + \frac{\log p}{p}$$

on the spacetime diamond

$$\{ (t,x): t_0 + x_0-\tfrac{cK}{p} \leq t + x \leq t_0 + x_0;\ t_0 - x_0-\tfrac{cK}{p} \leq t - x \leq t_0 - x_0 \}$$

where $c$ is a small constant (independent of $K$).  Inserting this into (5), we obtain

$$\phi_{uv} \leq - c' p$$

on this diamond, for some other small constant $c' > 0$.  Integrating this on the diamond and using the various bounds on $\phi$ at the corners, one arrives at

$$-c'' \frac{K^2}{p} = O\left(\frac{K}{p}\right)$$

for some other small constant $c'' > 0$, which leads to a contradiction for $K$ large enough (independently of $p$).
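As a crude numerical illustration of the confinement bound (4) (my own sketch, not from the paper; the data and grid choices are arbitrary): colliding two counter-propagating pulses whose linear superposition would reach amplitude 1.4 shows the overshoot above the barrier staying of size about $\log p / p$.

    import numpy as np

    p = 49                         # large odd exponent
    Nx, L = 1024, 20.0
    dx = L / Nx
    dt = 0.1 * dx                  # comfortably CFL-stable time step
    x = np.arange(Nx) * dx - L / 2

    f = 0.7 * np.exp(-((x + 4) ** 2))   # right-moving pulse
    g = 0.7 * np.exp(-((x - 4) ** 2))   # left-moving pulse
    phi0 = f + g
    phi1 = -np.gradient(f, dx) + np.gradient(g, dx)  # d'Alembert travelling-wave velocities

    def lap(u):                    # periodic second difference
        return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

    phi_old = phi0 - dt * phi1     # leapfrog initialization
    phi = phi0.copy()
    peak = 0.0
    for _ in range(int(8.0 / dt)):
        phi_new = 2 * phi - phi_old + dt**2 * (lap(phi) - np.abs(phi)**(p - 1) * phi)
        phi_old, phi = phi, phi_new
        peak = max(peak, np.abs(phi).max())

    print(peak, 1 + np.log(p) / p)  # overshoot above 1 is of order log(p)/p, not 0.4

The pulses meet near the origin around time 4; instead of stacking to amplitude 1.4 as the free wave equation would allow, the solution flattens against the barrier and reflects, consistent with the limiting picture described above.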
How are Green's functions and quantum mechanics related? Can they be used to solve the Schrödinger equation of a particle subject to some potential that is not a Dirac delta? And does the property of some Green's functions of being symmetric, i.e. $ G(x|\xi) = G(\xi|x)^{\ast} $, have some relation with the property of the inner product $ \langle \alpha \vert \beta \rangle = \langle \beta \vert \alpha \rangle^{\ast} $?

3 Answers

(accepted) The Schrödinger equation is a linear partial differential equation, so sure, you can use the usual formalism of Green's functions to solve it. First let's recall how the stuff works. Suppose $L$ is the linear operator and $D$ are the boundary conditions and we want to solve the equations $Lu = f$ and $Du = 0$ for $u$. Using the identity property of the convolution $g*\delta = g$, one is motivated to solve the simpler equation $LG = \delta$, and then one finds $u = G*f$ because $$L(G*f) = (LG)*f = \delta*f = f.$$

Now, for the time-independent Schrödinger equation the following should be useful. If the operator (understood also with the given boundary conditions) has a complete basis of eigenvectors $\left\{\left|\phi_n\right>\right\}$ corresponding to eigenvalues $\left\{\lambda_n\right\}$, then the Green's function can easily be seen to be $$G(x, x') = \sum_n {\phi_n(x)^* \phi_n(x') \over \lambda_n}$$ (just apply the operator $L$ to it and use that $L \left|\phi_n\right> = \lambda_n \left|\phi_n\right>$). So again we can see that $G$ is in a sense an inverse of $L$ (and indeed it is often written simply as $L^{-1}$).

Now, it turns out there is a deeper connection between Green's functions and quantum mechanics via Feynman's path integral if we pass to the time-dependent Schrödinger equation. I am not going to derive all the stuff here, but suffice it to say that the Green's function takes on the meaning of a propagator of the particle. Namely, the probability amplitude that the particle gets from the event $(t, x)$ to the event $(t', x')$ is a Green's function of the time-dependent Schrödinger equation: $G(x,t;x',t') = \left<x\right| U(t,t') \left|x'\right>$. So yes, the fact that the Green's function is symmetric is precisely because it can be interpreted as an inner product. This stuff generalizes further to quantum field theory, and Green's functions are among the basic objects of study there.

Nice job buddy +1 – user346 Feb 5 '11 at 11:10

In more 'down-to-earth' QM, you use Green's functions to find the density of states. I'm deprived of my books so at a loss for giving a good reference, but the idea is to calculate $$G(x,x';E) = \langle x, (E - H)^{-1} x' \rangle,$$ where $H$ is the system's Hamiltonian. You can then define a spectral function $F(x,x',E) = -\frac{1}{\pi} \lim_{\epsilon \rightarrow 0} \operatorname{Im} G(x,x',E+i\epsilon),$ whose trace is the density of states: $$\mathcal{N}(E) = \int F(x,x,E)\, dx.$$ Finally, you can also use this formalism to calculate other expectation values, with formulas like (modulo an incorrect prefactor) $\langle A \rangle = -\frac{1}{\pi} \operatorname{Im} \operatorname{Tr}(AG).$ So yes, they are symmetrical, but they can not really be used to 'solve' a Schrödinger equation, only on a formal level. That's why they're useful though: they're used all the time in many-body QM/solid-state physics, where you'll never 'solve' the problem but can learn lots of interesting stuff by indirect approaches, as the one used above.
+1, all correct except for your statement that they can not really be used to 'solve' a Schrödinger equation, only on a formal level. For scattering from a potential the Green's function is exactly calculable for many important cases and encodes the full physical content of the solutions. – user346 Feb 5 '11 at 9:36

@space_cadet: forgot about that entirely, good job mentioning it. – Gerben Feb 5 '11 at 17:25

An interesting system to study to understand this is the Poincaré half disk or half plane ${\cal H}^2$. It also illustrates the role of Laplacian operators, Green's functions and resolvents. The Laplace-Beltrami operator on a Riemannian manifold with a Gaussian metric is $$ \Delta~=~\sum_{ij}{1\over\sqrt{g}}{\partial\over{\partial x^i}}\Big(\sqrt{g}g^{ij}{\partial\over{\partial x^j}}\Big), $$ for $g~=~|\det(g_{ij})|$. The Laplacians for the Poincaré half plane and disk are $$ \Delta_{{\cal H}^2}~=~y^2\Big({{\partial^2}\over{\partial x^2}}~+~{{\partial^2}\over{\partial y^2}}\Big),~\Delta_{{\cal D}}~=~(\alpha^2~-~x^2~-~y^2)^2\Big({{\partial^2}\over{\partial x^2}}~+~{{\partial^2}\over{\partial y^2}}\Big). $$ The Laplacian commutes with all group elements $g~\in~Iso({\cal H}^2)$, $\Delta T_g~=~T_g\Delta$, so the metric is invariant under these isometries. The Laplacian satisfies the differential equation $(\Delta~+~\lambda)f(z)~=~0$; the Green's function is defined as the kernel of the resolvent $(\Delta~+~\lambda)^{-1}$ by the equation $$ (\Delta~+~\lambda)^{-1}f(z)~=~\int G(z,~z^\prime,~\lambda)f(z^\prime)\,d\mu(z^\prime), $$ with the harmonic condition $(\Delta~+~\lambda)G(z,~z^\prime,~\lambda)~=~\delta(z,~z^\prime)$. This space is such that the Laplacian $\Delta$ has eigenvalue $-2$, or a negative Gaussian curvature. This space is a model for the $AdS_2$ spacetime. I hope that with this example you can see answers to your questions, such as the symmetry under interchange.

Perhaps, in the interests of clarity, you could make explicit the definition of the Green's function. I can see why someone might find this answer unhelpful. @Rodrigo asks about Green's functions in QM and the Schrödinger equation and you start off with the Poincaré half-plane. Technically, of course, your answer is helpful for someone who actually bothers to read it. +1 – user346 Feb 5 '11 at 8:10

Hm, not that this isn't interesting stuff, but I don't see any connection with the question whatsoever. -1 – Marek Feb 5 '11 at 8:55

@Rodrigo asks "How are Green's functions and quantum mechanics related?" @Lawrence provides a concrete example of how to calculate a Green's function, albeit for a free field. The Laplace-Beltrami operator is nothing more than the kinetic term of the Schrödinger equation. How is that not related to the question? The question has other parts, but every answer does not have to answer every single subpart of a question. @Lawrence's answers tend to have more analytical content and are superior to many "hand-wavy" answers. – user346 Feb 5 '11 at 9:30

@space_cadet: be so kind and point to me the place where there is any quantum mechanics in this example. Also show me the place where there is an explicit connection between QM and GF. – Marek Feb 5 '11 at 9:34

@Marek did you not read my previous comment? – user346 Feb 5 '11 at 9:37
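To tie the answers above together numerically (my own sketch; the system and parameters are illustrative, not from the thread): discretize a particle-in-a-box Hamiltonian, form the resolvent $G(E) = (E + i\epsilon - H)^{-1}$, check that it is symmetric under interchange of its arguments, and read off the density of states from $-\frac{1}{\pi}\operatorname{Im}\operatorname{Tr} G$, whose peaks sit at the box eigenvalues $(n\pi)^2$.

    import numpy as np

    # H = -d^2/dx^2 on [0, 1] with Dirichlet walls (particle in a box, V = 0)
    N = 150
    h = 1.0 / (N + 1)
    H = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

    eps = 4.0                                  # finite broadening epsilon
    E_grid = np.linspace(0.0, 300.0, 300)
    dos = []
    for E in E_grid:
        G = np.linalg.inv((E + 1j * eps) * np.eye(N) - H)
        dos.append(-np.trace(G).imag / np.pi)

    G0 = np.linalg.inv((50.0 + 1j * eps) * np.eye(N) - H)
    print(np.allclose(G0, G0.T))       # the Green's function matrix is symmetric
    print(np.linalg.eigvalsh(H)[:4])   # ~ 9.87, 39.5, 88.8, 157.9

Plotting `dos` against `E_grid` shows Lorentzian peaks (of width set by `eps`) centered at exactly these eigenvalues, which is the density-of-states construction from the second answer in miniature.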
Nuclear physics

Figure: Particle tracks from the collision of an accelerated nucleus of a niobium atom with another niobium nucleus. The single line on the left is the track of the incoming projectile nucleus, and the other tracks are fragments from the collision.

This branch of physics deals with the structure of the atomic nucleus and the radiation from unstable nuclei. About 10,000 times smaller than the atom, the constituent particles of the nucleus, protons and neutrons, attract one another so strongly by the nuclear forces that nuclear energies are approximately 1,000,000 times larger than typical atomic energies. Quantum theory is needed for understanding nuclear structure.

Quantum mechanics

Figure: The Bohr theory sees an electron (left) as a point mass occupying certain energy levels. Wave mechanics sees an electron as a wave washing back and forth in the atom in certain patterns only. The wave patterns and energy levels correspond exactly.

In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behaviour, as well as the spectroscopic, electrical, and other physical properties of atoms, molecules, and condensed matter, can be accounted for by quantum mechanics. Roughly speaking, the electrons in the atom must fit around the nucleus as some sort of standing wave (as given by the Schrödinger equation) analogous to the waves on a plucked violin or guitar string. As the fit determines the wavelength of the quantum wave, it necessarily determines its energy state. Consequently, atomic systems are restricted to certain discrete, or quantized, energies. When an atom undergoes a discontinuous transition, or quantum jump, its energy changes abruptly by a sharply defined amount, and a photon of that energy is emitted when the energy of the atom decreases, or is absorbed in the opposite case.

The methodology of physics

Physics has evolved and continues to evolve without any single strategy. Essentially an experimental science, refined measurements can reveal unexpected behaviour. On the other hand, mathematical extrapolation of existing theories into new theoretical areas, critical reexamination of apparently obvious but untested assumptions, argument by symmetry or analogy, aesthetic judgment, pure accident, and hunch—each of these plays a role (as in all of science). Thus, for example, the quantum hypothesis proposed by the German physicist Max Planck was based on observed departures of the character of blackbody radiation (radiation emitted by a heated body that absorbs all radiant energy incident upon it) from that predicted by classical electromagnetism. The English physicist P.A.M. Dirac predicted the existence of the positron in making a relativistic extension of the quantum theory of the electron. The elusive neutrino, without mass or charge, was hypothesized by the Austrian physicist Wolfgang Pauli as an alternative to abandoning the conservation laws in the beta-decay process.
Maxwell conjectured that if changing magnetic fields create electric fields (which was known to be so), then changing electric fields might create magnetic fields, leading him to the electromagnetic theory of light. Albert Einstein’s special theory of relativity was based on a critical reexamination of the meaning of simultaneity, while his general theory of relativity rests on the equivalence of inertial and gravitational mass.

Although the tactics may vary from problem to problem, the physicist invariably tries to make unsolved problems more tractable by constructing a series of idealized models, with each successive model being a more realistic representation of the actual physical situation. Thus, in the theory of gases, the molecules are at first imagined to be particles that are as structureless as billiard balls with vanishingly small dimensions. This ideal picture is then improved on step by step.

The correspondence principle, a useful guiding principle for extending theoretical interpretations, was formulated by the Danish physicist Niels Bohr in the context of the quantum theory. It asserts that when a valid theory is generalized to a broader arena, the new theory’s predictions must agree with the old one in the overlapping region in which both are applicable. For example, the more comprehensive theory of physical optics must yield the same result as the more restrictive theory of ray optics whenever wave effects proportional to the wavelength of light are negligible on account of the smallness of that wavelength. Similarly, quantum mechanics must yield the same results as classical mechanics in circumstances when Planck’s constant can be considered as negligibly small. Likewise, for speeds small compared to the speed of light (as for baseballs in play), relativistic mechanics must coincide with Newtonian classical mechanics.

Some ways in which experimental and theoretical physicists attack their problems are illustrated by the following examples. The modern experimental study of elementary particles began with the detection of new types of unstable particles produced in the atmosphere by primary cosmic radiation, the latter consisting mainly of high-energy protons arriving from space. The new particles were detected in Geiger counters and identified by the tracks they left in instruments called cloud chambers and in photographic plates. After World War II, particle physics, then known as high-energy nuclear physics, became a major field of science. Today’s high-energy particle accelerators can be several kilometres in length, cost hundreds (or even thousands) of millions of dollars, and accelerate particles to enormous energies (trillions of electron volts). Experimental teams, such as those that discovered the W+, W−, and Z quanta of the weak force at the European Laboratory for Particle Physics (CERN) in Geneva, which is funded by its 20 European member states, can have 100 or more physicists from many countries, along with a larger number of technical workers serving as support personnel. A variety of visual and electronic techniques are used to interpret and sort the huge amounts of data produced by their efforts, and particle-physics laboratories are major users of the most advanced technology, be it superconductive magnets or supercomputers.

Theoretical physicists use mathematics both as a logical tool for the development of theory and for calculating predictions of the theory to be compared with experiment.
Newton, for one, invented integral calculus to solve the following problem, which was essential to his formulation of the law of universal gravitation: Assuming that the attractive force between any pair of point particles is inversely proportional to the square of the distance separating them, how does a spherical distribution of particles, such as the Earth, attract another nearby object? Integral calculus, a procedure for summing many small contributions, yields the simple solution that the Earth itself acts as a point particle with all its mass concentrated at the centre. In modern physics, Dirac predicted the existence of the then-unknown positive electron (or positron) by finding an equation for the electron that would combine quantum mechanics and the special theory of relativity.

Relations between physics and other disciplines and society

Influence of physics on related disciplines

Because physics elucidates the simplest fundamental questions in nature on which there can be a consensus, it is hardly surprising that it has had a profound impact on other fields of science, on philosophy, on the worldview of the developed world, and, of course, on technology. Indeed, whenever a branch of physics has reached such a degree of maturity that its basic elements are comprehended in general principles, it has moved from basic to applied physics and thence to technology. Thus almost all current activity in classical physics consists of applied physics, and its contents form the core of many branches of engineering. Discoveries in modern physics are converted with increasing rapidity into technical innovations and analytical tools for associated disciplines. There are, for example, such nascent fields as nuclear and biomedical engineering, quantum chemistry and quantum optics, and radio, X-ray, and gamma-ray astronomy, as well as such analytic tools as radioisotopes, spectroscopy, and lasers, which all stem directly from basic physics.

Apart from its specific applications, physics—especially Newtonian mechanics—has become the prototype of the scientific method, its experimental and analytic methods sometimes being imitated (and sometimes inappropriately so) in fields far from the related physical sciences. Some of the organizational aspects of physics, based partly on the successes of the radar and atomic-bomb projects of World War II, also have been imitated in large-scale scientific projects, as, for example, in astronomy and space research.

The great influence of physics on the branches of philosophy concerned with the conceptual basis of human perceptions and understanding of nature, such as epistemology, is evidenced by the earlier designation of physics itself as natural philosophy. Present-day philosophy of science deals largely, though not exclusively, with the foundations of physics. Determinism, the philosophical doctrine that the universe is a vast machine operating with strict causality whose future is determined in all detail by its present state, is rooted in Newtonian mechanics, which obeys that principle. Moreover, the schools of materialism, naturalism, and empiricism have in large degree considered physics to be a model for philosophical inquiry. An extreme position is taken by the logical positivists, whose radical distrust of the reality of anything not directly observable leads them to demand that all significant statements must be formulated in the language of physics.
The uncertainty principle of quantum theory has prompted a reexamination of the question of determinism, and its other philosophical implications remain in doubt. Particularly problematic is the matter of the meaning of measurement, for which recent theories and experiments confirm some apparently noncausal predictions of standard quantum theory. It is fair to say that though physicists agree that quantum theory works, they still differ as to what it means.

Influence of related disciplines on physics

Figure: Interior of the U.S. Department of Energy’s National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory, Livermore, California. The NIF target chamber uses a high-energy laser to heat fusion fuel to temperatures sufficient for thermonuclear ignition. The facility is used for basic science, fusion energy research, and nuclear weapons testing.

The relationship of physics to its bordering disciplines is a reciprocal one. Just as technology feeds on fundamental science for new practical innovations, so physics appropriates the techniques and instrumentation of modern technology for advancing itself. Thus experimental physicists utilize increasingly refined and precise electronic devices. Moreover, they work closely with engineers in designing basic scientific equipment, such as high-energy particle accelerators. Mathematics has always been the primary tool of the theoretical physicist, and even abstruse fields of mathematics such as group theory and differential geometry have become invaluable to the theoretician classifying subatomic particles or investigating the symmetry characteristics of atoms and molecules. Much of contemporary research in physics depends on the high-speed computer. It allows the theoretician to perform computations that are too lengthy or complicated to be done with paper and pencil. Also, it allows experimentalists to incorporate the computer into their apparatus, so that the results of measurements can be provided nearly instantaneously on-line as summarized data while an experiment is in progress.

The physicist in society

Figure: Tracks emerging from a proton-antiproton collision at the centre of the UA1 detector at CERN include those of an energetic electron (straight down) and a positron (upper right). These two particles have come from the decay of a Z0; when their energies are added together, the total is equal to the Z0’s mass.

Because of the remoteness of much of contemporary physics from ordinary experience and its reliance on advanced mathematics, physicists have sometimes seemed to the public to be initiates in a latter-day secular priesthood who speak an arcane language and can communicate their findings to laymen only with great difficulty. Yet, the physicist has come to play an increasingly significant role in society, particularly since World War II. Governments have supplied substantial funds for research at academic institutions and at government laboratories through such agencies as the National Science Foundation and the Department of Energy in the United States, which has also established a number of national laboratories, including the Fermi National Accelerator Laboratory in Batavia, Ill., with one of the world’s largest particle accelerators. CERN is composed of 14 European countries and operates a large accelerator at the Swiss–French border.
Physics research is supported in Germany by the Max Planck Society for the Advancement of Science and in Japan by the Japan Society for the Promotion of Science. In Trieste, Italy, there is the International Center for Theoretical Physics, which has strong ties to developing countries. These are only a few examples of the widespread international interest in fundamental physics. Basic research in physics is obviously dependent on public support and funding, and with this development has come, albeit slowly, a growing recognition within the physics community of the social responsibility of scientists for the consequences of their work and for the more general problems of science and society.
I am an eighth grader (please remember this!!!) in need of some guidance in my school project on Quantum Mechanics, Theory, and Logic. I am attempting to create a graph of the Schrödinger Equation given the needed variables. To do this, I need to know what all of the variables mean and stand for. For starters, I get to the point of: $$\Psi \left( x,t \right)=\frac{-\hbar}{2m}\left( i\frac{p}{\hbar} \right)\left( Ae^{ikx-i\omega t} \right)$$ Where $\hbar$ is the reduced Planck constant. And my guess is that k is the kinetic energy of the particle, m is the mass, p is the potential energy, and the Greek w-like variable is the frequency. What are the other variables? Also, am I right so far?

$k$ is the wavenumber: $2\pi/\lambda$. By 'lesser planck constant', do you mean 'reduced planck constant'? In that case the symbol is $\hbar$, typed as \hbar. Also, that's not the schrodinger equation, just a particular solution given some function $u(x)$ for potential, which seems to be constant here. – Manishearth Mar 13 '12 at 14:40

Yes, I meant reduced instead of lesser. And I have no experience in LaTeX, I just created this equation in the Grapher Application that came with my Mac. I am sort of confused with the u(x)… – fr00ty_l00ps Mar 13 '12 at 14:44

I fixed it for you. Anyways, LaTeX (rather MathJax) is down at the moment. $U(x)$ is the potential energy function. Also written as $V(x)$. Could you provide a link to where you got that equation from? It's not the schrodinger equation, rather a specific solution of it. Kind of like how you get a specific solution for $y$ in $x+y=11$ when you substitute a value for $x$. The specific solution is not the whole equation.... – Manishearth Mar 13 '12 at 15:22

Just out of interest, how much Quantum mechanics do you know? It's better to stay away from the schrodinger equation till you know enough calculus as well as general physics. If you want to graph some solutions of it, I would suggest showing electron orbital graphs or something. Also, how are you connecting QM to Theory and Logic? – Manishearth Mar 13 '12 at 15:25

CodeAdmiral: That is a real challenge to have a school project on QM, Theory and Logic. Maybe you can explain a bit what you want to achieve; simply plotting the given equation will look basically like a wave: $f(x)=a*sin(x)$. As Manishearth already pointed out, that is not the Schrödinger Equation. – Alexander Mar 13 '12 at 20:17

This is just a placeholder answer so that this (answered) question does not go into our unanswered backlog and get bumped up every now and then by this obnoxious fellow known as Community ♦. Please accept this answer. The equation you've given is not the Schrödinger equation; rather, it is most probably a specific solution of it.

• $k=2\pi/\lambda$ is the (angular) wavenumber, where $\lambda$ is the wavelength
• $\omega$ is (angular) frequency
• $p$ is probably momentum. In the Schrödinger equation, potential energy is usually represented with $U(x)$ or $V(x)$
• $m$ is the mass of the particle
• $A$ is the amplitude of the wave. This itself may be a function of $x$
• $i=\sqrt{-1}$
• $t$ is time
• $\Psi$ is the wavefunction

There is a full transcript of a discussion which led to the resolution of the dilemma.
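Since the goal is a graph, here is a minimal plotting sketch (my addition; the numbers are illustrative, not from the thread) for the plane-wave solution $\Psi(x,t) = A e^{i(kx - \omega t)}$, using the free-particle relation $\omega = \hbar k^2 / 2m$ to fix the frequency once a wavelength is chosen.

    import numpy as np
    import matplotlib.pyplot as plt

    hbar = 1.054571817e-34     # reduced Planck constant (J s)
    m = 9.109e-31              # electron mass (kg)
    lam = 1e-9                 # chosen wavelength: 1 nm (illustrative)
    k = 2 * np.pi / lam        # wavenumber
    w = hbar * k**2 / (2 * m)  # angular frequency of a free particle
    A = 1.0                    # amplitude (normalization ignored)

    x = np.linspace(0, 5 * lam, 1000)
    psi = A * np.exp(1j * (k * x - w * 0.0))  # snapshot at t = 0

    plt.plot(x, psi.real, label="Re $\\Psi$")
    plt.plot(x, psi.imag, label="Im $\\Psi$")
    plt.xlabel("x (m)")
    plt.legend()
    plt.show()

Note that the wave is complex-valued, so one usually graphs its real and imaginary parts separately; $|\Psi|^2$ for this plane wave is just a constant.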
Lawrence Evans wrote in discussing the work of Lions fils that "there is in truth no central core theory of nonlinear partial differential equations, nor can there be. The sources of partial differential equations are so many - physical, probabilistic, geometric etc. - that the subject is a confederation of diverse subareas, each studying different phenomena for different nonlinear partial differential equations by utterly different methods."

To me the second part of Evans' quote does not necessarily imply the first. So my question is: why can't there be a core theory of nonlinear PDE? More specifically, it is not clear to me why there cannot be a mechanical procedure (I am reminded here by [very] loose analogy of the Risch algorithm) for producing estimates or good numerical schemes or algorithmically determining existence and uniqueness results for "most" PDE. (Perhaps the h-principle has something to say about a general theory of nonlinear PDE, but I don't understand it.)

I realize this question is more vague than typically considered appropriate for MO, so I have made it CW in the hope that it will be speedily improved. Given the paucity of PDE questions on MO I would like to think that this can be forgiven in the meantime.

Are there any Markov or Novikov type theorems for PDEs? I.e., presumably you could encode algorithmically unsolvable problems into the language of PDEs. Meaning, knowledge of some aspect of the solution (bounded orbit, say) is equivalent to knowing the solution to an algorithmically unsolvable problem? If there were such theorems, that would partially address your question. – Ryan Budney Feb 14 '10 at 23:14

Perhaps the kind of negative result you are looking for is the theorem of Pour-el and Richards that the 3-dimensional wave equation has non-computable solutions with computable initial conditions. This is in their book Computability in Analysis and Physics (Springer-Verlag 1989). – John Stillwell Feb 14 '10 at 23:43

@Ryan, John--Good point! (One or both of) You should put this in an answer...I seem to recall hearing something once along the lines of PDEs being Turing-universal. But perhaps there can be a general theory of PDE that correspond to a restricted model of computation? – Steve Huntsman Feb 15 '10 at 1:10

When I teach basic differential equations, I stress analogies with algebraic equations. While this is probably more simple-minded than you were looking for, I point out (without attempting a thorough justification) that although there is a good theory of linear (algebraic) equations, a general theory to solve all algebraic equations, no matter how irregular, is hopelessly out of reach. And we have no right to expect better of differential equations. – Mark Meckes Mar 23 '10 at 16:31

This also brings to mind the preface (books.google.com/…) from "Lectures on Partial Differential Equations" by Arnol'd. Unfortunately the google books version cuts out after the first page, and I can't find another English version online. You can find a Russian version by googling for "Лекции об уравнениях с частными производными". – Josh Guffin Sep 29 '10 at 18:26

9 Answers

I find Tim Gowers' "two cultures" distinction to be relevant here. PDE does not have a general theory, but it does have a general set of principles and methods (e.g. continuity arguments, energy arguments, variational principles, etc.). Sergiu Klainerman's "PDE as a unified subject" discusses this topic fairly exhaustively.
Any given system of PDE tends to have a combination of ingredients interacting with each other, such as dispersion, dissipation, ellipticity, nonlinearity, transport, surface tension, incompressibility, etc. Each one of these phenomena has a very different character. Often the main goal in analysing such a PDE is to see which of the phenomena "dominates", as this tends to determine the qualitative behaviour (e.g. blowup versus regularity, stability versus instability, integrability versus chaos, etc.). But the sheer number of ways one could combine all these different phenomena together seems to preclude any simple theory to describe it all. This is in contrast with the more rigid structures one sees in the more algebraic sides of mathematics, where there is so much symmetry in place that the range of possible behaviour is much more limited. (The theory of completely integrable systems is perhaps the one place where something analogous occurs in PDE, but the completely integrable systems are a very, very small subset of the set of all possible PDE.)

p.s. The remark Qiaochu was referring to was Remark 16 of this blog post.

I wonder: can one not model Turing machines using ODEs? – Mariano Suárez-Alvarez Feb 15 '10 at 2:15

And even the completely integrable systems are full of surprises, such as the Camassa–Holm equation, where the solution concept needs some tweaking in order to make the Cauchy problem well posed. – Harald Hanche-Olsen Feb 15 '10 at 2:20

@Mariano: yes, as covered in your subsequent question: mathoverflow.net/questions/15309 – Steve Huntsman Feb 15 '10 at 4:26

Leave it to Terry Tao to give the most knowledgeable and succinct response to a deep question. His grasp of the Big Picture and relevant publications in any field never ceases to amaze me. – The Mathemagician Jun 4 '10 at 21:13

I agree with Craig Evans, but maybe it's too strong to say "never" and "impossible". Still, to date there is nothing even close to a unified approach or theory for nonlinear PDE's. And to me this is not surprising. To elaborate on what Evans says, the most interesting PDE's are those that arise from some application in another area of mathematics, science, or even outside science. In almost every case, the best way to understand and solve the PDE arises from the application itself and how it dictates the specific structure of the PDE. So if a PDE arises from, say, probability, it is not surprising that probabilistic approximations are often very useful, but, say, water wave approximations often are not. On the other hand, if a PDE arises from the study of water waves, it is not surprising that oscillatory approximations (like Fourier series and transforms) are often very useful but probabilistic ones are often not. Many PDE's in many applications arise from studying the extrema or stationary points of an energy functional and can therefore be studied using techniques arising from the calculus of variations. But, not surprisingly, PDE's that are not associated with an energy functional are not easily studied this way. Unlike other areas of mathematics, PDE's, as well as the techniques for studying and solving them, are much more tightly linked to their applications. There have been efforts to study linear and nonlinear PDE's more abstractly, but the payoff so far has been rather limited.

Further to my comment above, on the theorem of Pour-el and Richards: it originally appeared in Advances in Math.
39 (1981) 215-239, entitled "The wave equation with computable initial data such that its unique solution is not computable." I think it is fair to say that they get the wave to simulate a universal Turing machine, albeit with very complicated encoding. However, this may all be irrelevant to explaining why "nonlinear PDE are hard" because the wave equation is linear!

Yes, I would say that there is a general theory of linear PDE, and Hörmander pretty well captures the basics. – Steve Huntsman Feb 15 '10 at 3:00

Yes, there is a general theory of linear PDE developed largely by Hörmander, but of what use is it? In some sense, the space of all possible linear PDE's can be viewed as a singular algebraic variety, where Hörmander's theory applies only to generic (smooth) points and the most interesting and heavily studied PDE's all lie in a lower-dimensional subvariety and mostly in the singular set of the variety. – Deane Yang Feb 15 '10 at 3:17

Also, even though you can't solve the halting problem for Turing machines, the existence, uniqueness, and computability (by definition!) of solutions to the Turing machine "equations of motion" are all utterly trivial. For PDEs, nothing could be farther from the truth. Similarly for ODEs: the local theory is easy; it's long-term and global behaviour that is difficult. But for PDEs, even the local theory is fiendishly difficult. (Except for the Cauchy-Kowalevskaja theorem, which despite (or because of?) its generality also turns out to be of rather limited use.) – Harald Hanche-Olsen Feb 15 '10 at 3:31

Some more random thoughts: The closest thing I've ever seen to a "general theory of nonlinear PDE's" is Gromov's book, Partial Differential Relations. He does many things in there that I don't understand, but one application that he applied his theory to is isometric embeddings of Riemannian manifolds into Euclidean space or other higher-dimensional Riemannian manifolds (the problem made famous by Nash). Moreover, in a paper by Bryant, Griffiths, and me (but in a section written by the other two and not me), it is shown that in some sense, the linearized PDE corresponding to the isometric embedding of an $n$-dimensional Riemannian manifold into $n(n+1)/2$-dimensional Euclidean space looks like a generic $n$-by-$n$ system of first order linear PDE's. I'm not aware of any other place where a "generic" system of PDE's arises naturally. The results in this paper inspired some efforts by Jonathan Goodman and me (unpublished) as well as Nakamura and Maeda (TAMS 313 (1989) 1-51) to extend Hörmander's theory of linear PDE's (at least those of real principal type) to nonlinear PDE's. (It should be noted that much more interesting work in this direction was done for the 2-dimensional case, starting with the Ph.D. thesis of C. S. Lin.)

But maybe you really meant "the general theory of nonlinear PDE's that are elliptic, hyperbolic, or parabolic" and not really the all-encompassing "general theory of nonlinear PDE's"? There's far too much junk in the latter.

As far as my very limited understanding goes, the h-principle is not really a "general theory of nonlinear PDE's" but mostly applies to underdetermined systems, which happen to arise a lot in geometric applications, but not as much in physics. – Otis Chodosh Jan 11 '12 at 3:54
But it applies to general underdetermined PDE's, and that's probably the broadest class of PDE's that anyone has been able to study using a unified approach. – Deane Yang Jan 11 '12 at 8:39

Here is a 7-page review of Partial Differential Relations by Dusa McDuff: projecteuclid.org/DPubS/Repository/1.0/… – Tom LaGatta Mar 20 '13 at 10:44

Tom, thanks. I've certainly seen that when it first came out. – Deane Yang Mar 21 '13 at 2:41

To elaborate on Steve Huntsman's comment, I remember reading the following on Terence Tao's blog: there exist PDE that can simulate Newtonian mechanics, and using such a PDE and the correct initial conditions it is possible, in principle, to simulate an arbitrary analog Turing machine. So a general-purpose algorithm to determine even the qualitative behavior of an arbitrary PDE cannot exist, because such an algorithm could be used to solve the halting problem.

I think there is something you can call a general theory of PDEs. It started already a long time ago with Méray, Riquier, Janet, and Élie Cartan. There is an important survey article by Donald Spencer: Overdetermined systems of linear partial differential equations, Bull. Amer. Math. Soc. 75 (1969), 179-239. See also the recent book by Seiler: Involution: The Formal Theory of Differential Equations and its Applications in Computer Algebra, Springer, 2010. This book contains lots of references to this topic. It is a bit strange why this line of research is not very well known.

In response to "It is a bit strange why this line of research is not very well known": 1) Actually, this stuff has become much better known through the work and books by Bryant, Chern, Goldschmidt, Griffiths, Ivey, and Landsberg. 2) Most PDE's that arise from other areas of mathematics and the sciences are either scalar or determined systems. For such PDE's, the formal theory tells you nothing more than what the Cauchy-Kovalevski theorem says. 3) The formal theory tells you nothing about the global behavior and regularity of solutions to PDE's. – Deane Yang Jun 4 '10 at 18:47

@Deane. Your comment 2) is irrelevant for several reasons. a) The Cauchy-Kovalevskaia theorem tells you nothing about the Cauchy problem for the heat equation, Navier-Stokes system or Schrödinger equation, because the order with respect to time ($=1$) is smaller than the total order ($=2$). b) Real problems are posed in domains with boundaries, and the boundary conditions can be non-homogeneous. You may need a very much elaborated theory to prove the solvability. Hyperbolic initial-boundary-value problems are notoriously difficult (see the book by S. Benzoni-Gavage and myself); C.-K. is useless. – Denis Serre Nov 18 '10 at 7:51

Denis, your statements are consistent with and provide some details that underlie mine. – Deane Yang Jan 20 '11 at 4:27
Of course, when computers were slower one had to simplify to get a scalar equation and then hope that it gives something reasonable. Of course Cauchy-Kovalevskaia as such is irrelevant, because one wants the solutions in Sobolev spaces. But the whole formal theory started as a generalization of C.-K. 3) This is not true. For example, there are systems which are not elliptic initially but whose involutive forms are elliptic. This gives a priori regularity results and existence results. Also, one could argue that the word "determined" (and over/underdetermined) can't be defined in general without the formal theory.

Here are my reactions: 1) Is the formal theory useful for numerical solutions? Could you provide references for this? 2) There are certainly systems consisting of an evolution equation that is coupled with a constraint or gauge condition. Navier-Stokes is like this. The formal theory provides no new insights for these systems, either. 3) What example do you have in mind? I know this statement as an abstract theorem, but I have never seen it used anywhere. – Deane Yang Jun 4 '10 at 20:26

@DeaneYang: Although this is an old thread, I ran across the following article by Jukka Tuomela: "Involutive upgrades of Navier-Stokes solvers". It indicates a relevance of the formal theory to equations like Navier-Stokes and to numerics, and answers some of your questions. He has some other results in this direction. – Michael Bächtold Apr 15 at 10:03

I will simply quote Heisenberg. (This is an approximate quote from memory.) One can say almost everything about nothing, and almost nothing about everything.

But why is the study of PDE's "everything"? In comparison to, say, the study of polynomials? – Deane Yang Jan 11 '12 at 8:37

Of course, the study of PDEs is not everything, but the point of Heisenberg's quote (and he was referring specifically to nonlinear PDEs) is that within the PDE universe, the statement that something is nonlinear carries zero information. PS: The study of polynomials can be viewed as a subclass of the study of PDE's (think characteristic polynomials of constant-coefficient ODEs). More generally, the theory of D-modules suggests that a substantial chunk of algebraic geometry is closely connected to PDE's. – Liviu Nicolaescu Jan 11 '12 at 11:28

In my limited experience, the furthest you can carry a general theory of PDEs, assuming only smoothness of the PDEs for example, is to describe the characteristic variety and its integrability, or to determine whether the equations are formally integrable (in the sense of the Cartan-Kaehler theorem). Already you find that every real algebraic variety is the characteristic variety of some system of PDE. Even when the characteristic variety is very elementary (a sphere, for example), we know very little about the PDE (it is hyperbolic, but we don't have a complete theory of boundary value problems, initial value problems, long-term existence, uniqueness). So I think that a general theory of PDE would have to be much more difficult than real algebraic geometry, which already has elementary problems that seem to be very difficult.
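To make the last remark concrete, here is the standard computation of a characteristic variety for the simplest hyperbolic example, the wave equation (classical textbook material, added for illustration; it is not part of the original thread):

```latex
% Principal symbol and characteristic variety of the wave equation
% \partial_t^2 u - c^2 \Delta u = 0 on \mathbb{R}_t \times \mathbb{R}^n_x:
\[
p(\tau,\xi) = -\tau^2 + c^2\,|\xi|^2 ,
\qquad
\mathrm{Char} = \{(\tau,\xi) \neq 0 : \tau^2 = c^2\,|\xi|^2\}.
\]
% Slicing this cone at \tau = c gives the sphere |\xi| = 1: even for this
% most elementary characteristic variety, a complete theory of boundary and
% initial value problems is unavailable, as the answer above notes.
```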
Monday, 26 April 2010

Schrödinger's cat. Dead and alive, body and mind.

The cat is in superposition, both dead and alive? A really good explanation here of this complex problem. Life is quantum mechanical? More and more evidence is accumulating. In a sense it would be the most natural explanation for our experiences, the mind-body problem. It would even explain death as a phenomenon. In everyday life there are so many questions that would belong to our quantum mirror of being: perceptions, muscle work, nerve function, cell membrane function, chromosome function, everything else – but not the brain (cortex?). In ancient medicine the brain was not considered important at all, and maybe there is a point to that. Our brain is so very overestimated.

The acupuncture meridians make us living, says Rakovic? The acupuncture system is the only macroscopic quantum system in our body (while the brain still seems not to be). This is the reason that consciousness is related to the MW, ultralow-frequency-modulated EM field of the acupuncture system.

What is life? Life is an ability to be deformed, stressed, and still be able to take back the former shape and function. Life is sensitiveness, perhaps most of all, and an ability to react to the stress. Life is growing, learning, accumulation of stress in memory, and a capacity to create new meaning from nonsense noise, through an ability to recognize patterns and codes (information). If we compare it to a piece of viscoelastic gel with a communicative system, what is the main difference? Both can be deformed and stressed, both can grow, both can perhaps learn something if the accumulation of energy is a 'memory'. But the gel cannot create something new out of old structures? The creeping can perhaps be seen as something new? The inherent self-organization? No, it is seen in viruses, proteins, etc. The replication? Can a replication be non-living? I think so, if crystallization is taken for a simple replication (many would not agree with this). The main difference is the sensitivity to stress, and the fast relaxation, but still with a natural inertia inherent to the system (a dissipative structure), creating a sense of coherence, consciousness, self and time. The coherence favours survival, and it can extend also outside the organism, to a superorganism and a community. The only real big difference is the inertia and the delayed dissipation, together with superconduction-minimized dissipation. 'I can decide, I feel myself a creator.'

Time and consciousness belong together very tightly. Time is measured in frequencies of dissipation? In a sense, time is dissipation. So we have one system for creating dissipation (a string) and one for no dissipation (the bullet) at the same time: a string for the robustness and criticality, a bullet for the possibilities and uncertainty; matter and wave. Is the Schrödinger cat both dead and alive at the same time? Are we real quantum mechanical beings? The creation of complex superpositions in harmonic systems (such as the motional state of trapped ions, microwave resonators or optical cavities) has presented a significant challenge because it cannot be achieved with classical control signals (here). So, is the control then quantal?

Mae Wan Ho writes: The energy is the string, the coherence is the bullet?
She continues: ...instantaneous, noiseless intercommunication that enables the organism to function as a perfectly coordinated whole, because it is liquid crystalline; containing 70% by weight of water, it permeates throughout the connective tissues and into the interior of every single cell. The water molecules are aligned in ordered layers along the extensive surfaces of macromolecules and are an integral part of the liquid crystalline continuum. It makes the living matrix highly responsive to changes in pressure, temperature, pH, and electrical polarization. The ordered layers of water in the matrix also support a special kind of 'jump conduction' of protons (positive ions) that is much faster than nerve conduction and faster than ordinary electrical conduction through wires.

The macromolecules are the string, the coherent water (supraconduction?) the bullet. Ho and Knight were among the first to propose that the dynamically ordered layers of water molecules (bullet) associated with the oriented collagen fibres in the connective tissues (string) correspond to the acupuncture meridians, while acupuncture points correspond to gaps or junctions between the meridians (collagen fibres or connective tissue bundles).

Jumping proton charges, which can be interpreted as magnetic holes? Is the charge induced from outside? These 'waves of holes' have a meaning for the DC electricity, as do 'waves of tops', the electrons. (In spintronics, even 'waves of spin' happen.) The network effect of the tunneling between the 'patterns' is a stochastic process, the well-known 'random walk' process. The tunneling can interfere with the different stochastic processes; it is, e.g., an activation process, relaxing the inertia (velocity/time).

Acupuncture meridians have a double oscillating function. Dejan Rakovic 2001: the biophysical basis for acupuncture-based medicine is its resonance microwave/ultralow-frequency (MW/ULF) electromagnetic/ionic nature, as well as the quantum-holographic 'electro-optical' neural-network-like function of the acupuncture system, which is psychosomatically disordered, at the same time as the non-threshold, gap-junction-based self-assembling of the acupuncture system explains the extreme sensitivity to external MW/ULF EM fields. The ionic acupuncture currents are both MW and ULF, and the longer, latter modulates the MW, exactly as in the windows in tissue interactions with weak EM fields. The resonant ULF (about 4 Hz) stimulates the endorphin mechanism, and the resonant MW (50-80 GHz) is efficient even in serious diseases. Earlier we saw that the meridians were capable of accumulating their resonant frequencies in the alpha region.

Rakovic continues: The acupuncture system is a dynamic structure differentiated at the locations of maxima loci of 3-D standing waves, formed as a result of coherent MW-reflexive Fröhlich excitations (up to about 100 GHz, from 50-70 GHz) of molecular subunits in the cell membranes, proteins, microtubules, etc. Gap junctions are differentiated with a higher density in acupuncture points (through them an evolutionarily older type of inter-cell communication is achieved, including acupuncture; their conductivity can be modulated by the intra-cell pH factor, Ca2+ ions, neurotransmitters and second messengers – and they are even slightly sensitive to voltage). These gap junctions may function together as 'superorganism-like' amplifying junctions. Kind of a 'highway' or a zone.

Two different functions for nerve nets.
Neural and other associative nets can be seen as deterministic systems, which assumes that the information processing (the dynamics) of the neural nets bears the classic-physical reality, determinism and locality. Or they can be modelled quantum-mechanically (solitonic). Or both: as classic-physical systems, with quantum-mechanical corrections to their deterministic dynamics coming from the well-known effect of quantum-mechanical tunnelling. It is the border territory between the quantum world and the classical world.

Quantum-mechanical tunneling in associative neural networks describes the meridians as a kind of 'random walk structure', requiring no energy exchange. Being the relatively stable minima of the 'configuration-energy' space of the networks, the 'patterns' represent the macroscopically distinguishable states of the neural nets. Therefore, the tunneling represents a macroscopic quantum effect, but with some special characteristics. In particular, we investigate the tunneling between minima of approximately equal depth, thus requiring no energy exchange. If there are at least a few such minima, the tunneling represents a sort of 'random walk' process, which implies quantum fluctuations in the system, and therefore 'malfunctioning' in the information processing of the nets. Due to the finite number of the minima, the 'random walk' reduces to a dynamics. The quantum fluctuations due to the quantum-mechanical tunneling can be 'minimized' if the 'pattern' formation is such that there are mutually 'distant' groups of the 'patterns', thus providing the 'zone' structure of the 'pattern' formation.

(Figure caption: the black ball represents the 'particle', oscillating with a frequency around the bottom of the well.) The presented 'configuration energy (q − V)' surface represents the 'patterns' – well-defined, macroscopically distinguishable physical states of the nets. Entropy represents a measure of ignorance about the physical state of the system. The density matrix is the knowledge, representing the mutually distinguishable (i.e., orthogonal) states. That is, entropy is a measure of the classical indeterminism: a system is in a definite state, but the state is not known with certainty, and an increase of entropy coincides with the loss of information concerning the physical state of the system. In what state the system can be found (probabilistically) is measured by the von Neumann entropy, S = −Tr(ρ ln ρ). The zones and wells need not follow each other.

The healthy state. Nature wants to be robust by 'minimizing' the quantum fluctuations due to the tunnelling. No disturbing jumps that distort the cell machinery! But a large number of minima and their high 'density', or zones, are necessary for enhancing the memory capabilities and conscious experience of the nets. The 'zone' structure is mutually distant groups of not-very-numerous, densely packed minima, between which the information can jump. The healthy state might be considered as an absolute minimum (ground state) of the non-local self-consistent macroscopic quantum potential of the organism (the energy potential between the maximum and minimum degrees of freedom, stable and unstable conditions, in Popp's terms); disorders corresponding to higher minima of the spatio-temporally changeable potential hypersurface in energy configuration (stressed condition) explain the higher sensory responses of the more excited acupuncture system.
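To make the 'random walk between minima of equal depth' picture concrete, here is a minimal toy simulation (my own illustrative sketch, not from Dugic and Rakovic's paper; the number of minima and the hop probability are made-up parameters): tunneling is caricatured as random hops among degenerate pattern minima, and the spreading occupancy is exactly the fluctuation that a 'zone' structure is supposed to suppress.

```python
import random
from collections import Counter

# Toy model (illustrative only): N pattern minima of equal depth; quantum
# tunneling is caricatured as a random hop to another minimum with
# probability p per time step. No energy exchange is needed, because the
# minima are degenerate.
N_MINIMA = 5      # hypothetical number of stored 'patterns'
P_HOP = 0.1       # hypothetical hop (tunneling) probability per step
STEPS = 10000

random.seed(0)
state = 0                       # start in one definite 'pattern'
visits = Counter()
for _ in range(STEPS):
    visits[state] += 1
    if random.random() < P_HOP:
        # hop to a uniformly chosen different minimum
        state = random.choice([s for s in range(N_MINIMA) if s != state])

# Over time the occupancy spreads evenly over all degenerate minima,
# i.e. the net 'forgets' which pattern it was in -- the malfunction that
# a 'zone' structure (mutually distant groups of minima) is meant to avoid.
for s in sorted(visits):
    print(f"minimum {s}: fraction of time {visits[s] / STEPS:.3f}")
```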
All this is very close to an associative neural network and to pattern recognition as convergence of the neural networks to the bottoms of the potential hypersurface, these being the attractors of the neural networks' memory patterns. There are two different levels of influence of the environment:

A. The effect of decoherence (decay), which destroys the quantum-mechanical coherence on a very short time scale. Quantum states inevitably decay with time into a probabilistic mixture of classical states.

B. The influence of the environment on the (approximately) deterministic behavior of the system (which has previously 'survived' the decoherence).

A mechanical non-living system is cooled (superconduction) to its quantum ground state, where all classical noise is eliminated. But in living systems noise cannot be eliminated. Lubos also points out an important factor: Von Neumann and Wigner actually argued that everything – including the macroscopic objects – evolves according to the Schrödinger equation, and the "collapse" is only done when you actually want to observe something, which requires consciousness. Alternatively, you may imagine that even other people evolve into strange linear superpositions and it is just you who has the right to make the wavefunction collapse. This has led to the anthropic principle, evidently wrong, but in essence this statement is right. It is consciousness that collapses the random wave. This has been seen in PSI research so many times.

September 10, 2009, a discussion on the Physics arXiv Blog: How to Create Quantum Superpositions of Living Things. Scientists try to create a quantum superposition of a living thing, such as a virus. The experiment will first involve storing a virus in a vacuum and then cooling it to its quantum mechanical ground state in a microcavity. Zapping the virus with a laser then leaves it in a superposition of the ground state and an excited one. What could lead to the quantum ground state of a virus? The virus has the structure for it. Maybe it is because the host has a disorganized 'field' that viruses can attack? But seen in the light of the above, Nature has not gone down this path. In Nature it is the surroundings and the organism that set the constraints. The object is allowed to oscillate and have 'noise'.

The necessity of applying Microwave Resonance Therapy (MRT) at acupuncture points was discovered only in the early 1980's (Sit'ko et al. in Kiev), as the appearance of sharply resonant characteristic eigenfrequencies of the human organism – which successfully stimulated the development of the second generation of coherent MW generators, the third generation of noise-spectrum MW generators, and finally the fourth generation of noise-spectrum Controlled Energy Materials (CEM) MW generators with changeable therapeutic oscillators in the mid-2000's. The coherent-spectrum MW generators with manually changeable frequency (from 52 to 70 GHz) are far less suitable in practice, because of the much longer search for the resonant frequency, which depends on individual properties of the organism and the subjective state of the patient, and can result in therapeutic mistakes and overdosing. On the other hand, the noise-spectrum MW generators enable simultaneous excitation of all possibly therapeutic resonance MW frequencies (52-78 GHz), and the organism continuously and resonantly responds to the currently appropriate (and changeable during therapy) frequency.
Finally, the noise-spectrum CEM MW generators with changeable therapeutic oscillators provide the unique possibility of initially recording a biologically-active-zone (BAZ) MW spectrum (biologically resonant in both frequency and intensity) with a BAZ-influenced changeable oscillator, and of subsequently re-emitting the BAZ MW spectrum by the oscillator in this very zone – thus enabling resonant shallowing of the disordered acupuncture state in favor of deepening of the attracting healthy acupuncture state. By affecting the appropriate acupuncture points with MRT generators, remarkable clinical results are reported in the prevention and therapy of stress, as well as in many psychosomatic disorders (cardiovascular, respiratory, gastro-intestinal, nephro-urologic, endocrine, gynecological, neurological, psychiatric, dermatological, orthopedic and traumatologic, ophthalmologic, ORL, stomatologic, pediatric, addictions...) – with an average efficiency of 82% in chronic and up to 100% in acute diseases, tested on a population of several million patients of different pathologies in several thousand MRT cabinets in Ukraine and Russia. MRT is a practical realization of Prigogine's theory of self-organization of living systems, and in that context an explanation for the efficiency of MRT, as a noninvasive, non-pharmacological medical treatment, should be sought.

So, some disorders in the organism give rise to a deformation of the standing-wave structure of the organism's electrical field in the MW region, which induces corresponding changes in the spatial structure of the acupuncture system, and consequently in its resonant frequencies, resulting in some disease. During the therapy, applying the MW source at the corresponding acupuncture point, the excited acupuncture system of the patient relaxes to the previous healthy condition, reaching normal resonant-frequency responses of its meridians to the wide-spectrum MW source – and following the physiological mechanisms of acupuncture regulation, the organism biochemically overcomes the disease.

Trapping light. Mae Wan Ho: Organisms are thick with spontaneous activities at every level (100,000/min?), right down to the molecules, and the molecules are dancing, even when the organisms sit still. The images obtained give direct evidence of the remarkable coherence (oneness) of living organisms [1, 2, 3]. The macromolecules, associated with lots of water, are in a dynamic liquid crystalline state, where all the molecules are macroscopically aligned to form a continuum that links up the whole body, permeating throughout the connective tissues, the extracellular matrix, and into the interior of every single cell. And all the molecules, including the water, are moving coherently together as a whole. The organism is creating and recreating herself afresh with each passing moment, recoding and rewriting the genes in her cells in an intricate dance of life that enables the organism to survive and thrive. The dance is written as it is performed; every movement is new, as it is shaped by what has gone before. The organism never ceases to experience its environment and to register its experience for future reference. The coordination required for simultaneous multiple tasks, and for performing the most extraordinary feats, depends on a special state of being whole, the ideal description for which is "quantum coherence". Quantum coherence is a paradoxical state that maximises both local freedom and global cohesion.
The organism is, in the ideal, a quantum superposition of coherent activities over all space-times, constituting a pure coherent state towards which the system tends to return on being perturbed. An intuitive picture of the quantum coherent organism is a perfect life cycle coupled to energy (and material) flow. The perfect life cycle represents perpetual return and renewal. It is a domain of coherent energy storage that accumulates no waste or entropy within, because it mobilises energy most efficiently and rapidly to grow and develop and reproduce. Not only does it not accumulate entropy, but the waste or entropy exported outside is also minimised. To be quantum coherent, above all, is to be most spontaneous and free. The wave function that describes the system is also a superposition of all possibilities. It implies that the future is entirely open, and the potentials infinite.

Quantum coherence is the prerequisite for conscious experience. It is why each and every one of us thinks of ourselves as "I" in the singular, even though we are a multiplicity of organs, tissues and cells, and astronomical numbers of molecules. We would have a wave function that evolves, constantly informing the whole of our being, never ceasing to entangle other quantum entities, transforming itself in the process to mobilize energy most rapidly and efficiently, to intercommunicate nonlocally and instantaneously, transcending the usual separations of space and time. That's why a 'being' can be in two places at the same time, and different beings far, far apart can exchange information instantaneously. – No time, no 'meter', all information in the Universe; every quantum jump is saved and the Universe is recreated (for the 'I') all the time, with every jump.

Quantum information processors exploit the quantum features of superposition and entanglement for applications not possible in classical devices. Friedman et al. go some way towards answering this fundamental question by confirming that quantum superposition works as well in the macroscopic world of superconducting rings as it does in the microscopic world of photons, electrons and atoms. A macroscopic system could behave quantum mechanically if it was suitably decoupled from its environment, as in a SQUID. The question was whether a 'persistent' electric current in the ring would decay in a quantum-mechanical way, which has been shown to happen by quantum tunnelling. (Figure caption, from Friedman et al.: at the midpoint, the measured tunnel splitting energy between the two states.) The particle is creating a wave too.

Gianni Blatter in Schrödinger's cat is now fat, 2000: If a macroscopic system can decay by quantum tunnelling, the next question to ask is whether it could also oscillate back and forth between two states. Would the macroscopic state tunnel back and forth sequentially, forgetting about its quantum-mechanical make-up after each hop, or would it oscillate coherently, preserving its quantum state throughout the hops? The second process goes under the name of 'macroscopic quantum coherence' and has been something of a holy grail in this field since the 1980s – it is the coherent superposition between the two current states in the ring that corresponds to the indeterminate state of Schrödinger's dead-and-alive cat. Quantum theory predicts that if such a system is strongly coupled to the environment, it remains localized in one state and so behaves classically.
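The standard two-state algebra behind this picture, and behind the 'coherence gap' discussed below, is worth writing out (textbook material, added here for clarity; Δ denotes the tunneling matrix element between the two current states |L⟩ and |R⟩):

```latex
% Two degenerate wells |L>, |R> coupled by a tunneling matrix element \Delta:
\[
H = \begin{pmatrix} E_0 & -\Delta \\ -\Delta & E_0 \end{pmatrix},
\qquad
|\pm\rangle = \frac{1}{\sqrt{2}}\bigl(|L\rangle \pm |R\rangle\bigr),
\qquad
E_{\pm} = E_0 \mp \Delta .
\]
% The symmetric superposition |+> is pushed below E_0 and the antisymmetric
% partner |-> above it, so the two superposition states are split by an
% energy gap of 2\Delta -- the 'coherence gap' probed in the SQUID experiments.
```

Blatter's 'coherence gap' below is exactly this 2Δ splitting, realized with the two circulating-current states of a superconducting ring.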
At low coupling, the system follows damped, coherent oscillations between the states, with the damping rate vanishing as the coupling to the environment goes to zero (death?). This quantum mixing of the two states leads to a so-called coherence gap separating the energies of the superposition states. The Schrödinger cat is alive because of its entanglement with the environment. It is dead because it detaches from the environment. We know that; everyone knows that. The true 'symmetric state' (the bullet) is quantal? The energy is the string? Can living things be quantal? This subject has been discussed extensively on the web this month. Here is a video. BBC News: a photonic cluster; a tiny resonating strip of metal – only 60 micrometres long, but big enough to be seen without a microscope – can both oscillate and not oscillate at the same time; NewScientist, etc.

The coherence gap. Blatter: One explanation for this coherence gap is as follows. Consider a particle trapped in a potential well with two minima (a double-well potential). As the particle tunnels between the two wells, it lowers its kinetic energy because of the spreading of its wavefunction over both wells. As a result, the new mixed ground state is shifted down with respect to the energy of the individual wells. This 'symmetrical' state always comes with an 'antisymmetric' partner state, which is slightly higher in energy. In fact, this excited state comes to lie at an energy above the original well energy, resulting in an excitation gap of 2Δ between the superposition states. This phenomenon is well known in chemistry, where the two mixed or superposition states correspond to the bonding and antibonding states of a diatomic molecule. This probability peaks when the microwave frequency matches the gap between the two superposition states, allowing them to identify a coherence gap. Then they 'surf the wave'. The challenge in future experiments will be to track the appearance of this coherence gap when probing lower-energy (semiclassical) states deeper in the well. Another possibility would be to make the coherence gap vanish by introducing artificial decoherence into the system, for example by coupling the SQUID to a metallic reservoir.

The coupled qubit-resonator experiment. Proving that all objects, whatever their size, obey the same rules has long been a goal of physicists. But with quantum mechanics it is no trivial matter: the larger an object, the more easily its fragile quantum state is destroyed by the disruptive influence of the world around it. O'Connell's experiments required delicate control and a temperature of just 25 millikelvin to measure the state in the few nanoseconds before it was broken down by disruptive influences from outside. "It was a close call, but sufficient to see a first quantum signature." "The qubit acts as a bridge between the microscopic and the macroscopic worlds," says O'Connell. By tuning the frequency at which the qubit cycled between its two states to match the resonant frequency of the metallic strip, the qubit's quantum state could be transferred to the resonator at will. The resonator was sometimes in its non-oscillating ground state and sometimes in an oscillating "excited" state; the number of times it was measured to be in each state followed the probabilistic rules of quantum mechanics. Now that the spooky influence of quantum physics on visible objects has been proved, can we expect to be putting an object as large as a real child's swing into an indeterminate quantum state any time soon? O'Connell thinks so.
"I'd say in the near future – in the next 20 years." Markus Aspelmeyer Preparation and detection of a mechanical resonator near the ground state of motion. Rocheleau et al. 2010: Cold, macroscopic mechanical systems are expected to behave contrary to our usual classical understanding of reality; the most striking and counterintuitive predictions involve the existence of states in which the mechanical system is located in two places simultaneously. Various schemes have been proposed to generate and detect such states, and all require starting from mechanical states that are close to the lowest energy eigenstate, the mechanical ground state. Here we report the cooling of the motion of a radio-frequency nanomechanical resonator by parametric coupling to a driven, microwave-frequency superconducting resonator. Starting from a thermal occupation of 480 quanta, we have observed occupation factors as low as 3.8 +/- 1.3 and expect the mechanical resonator to be found with probability 0.21 in the quantum ground state of motion. Further cooling is limited by random excitation of the microwave resonator and heating of the dissipative mechanical bath. This level of cooling is expected to make possible a series of fundamental quantum mechanical observations including direct measurement of the Heisenberg uncertainty principle and quantum entanglement with qubits. A widely applicable model of weakly but otherwise arbitrarily coupled two-level systems, and use quantum gate design techniques to derive a simple and intuitive CNOT construction, see Quantum logic with weakly coupled qubits. Islands of life? "There is this question of where the dividing line is between the quantum world and the classical world we know" There is a problem of scaling. Pitkänen suggested a solution with 'Life as islands' in a mathemathical concept. Quantum systems with fluctuating spacetime field configurations map onto field theories in one extra dimension. Several thermodynamic properties are shown to exhibit logarithmic corrections to scaling. In the renormalization group theory of phase transitions, such logarithmic corrections arise only in very special cases. It is not easy to drive experimental systems through a phase transition at T=0 without impurities, doping, or magnetic fields, each of which brings in additional features. Perhaps organic molecular crystals, which are very sensitive to pressure, are the most promising candidates. Feynman put it best: "If you think you understand quantum mechanics, you don't understand quantum mechanics" Gianni Blatter, 2000: Schrödinger's cat is now fat. Nature 406, 25-26(6 July 2000). doi:10.1038/35017670 http://www.nature.com/nature/journal/v406/n6791/full/406025a0.html M. Dugic and D. Rakovic,: Quantum-mechanical tunneling in associative neural networks. Eur. Phys. J. B 13, 781{790 (2000). http://www.dejanrakovicfound.org/papers/2000-EUR-Phys-J-B.pdf Friedman, J. R. , Patel, V. , Chen, W. , Tolpygo, S. K. & Lukens, J. E. Quantum superposition of distinct macroscopic states. Nature 406, 43– 46 (2000). doi:10.1038/35017505 http://www.nature.com/nature/journal/v406/n6791/full/406025a0.html#B2 A. D. O’Connell, M. Hofheinz, M. Ansmann, Radoslaw C. Bialczak, M. Lenander, Erik Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, John M. Martinis & A. N. Cleland. Quantum ground state and single-phonon control of a mechanical resonator. Nature 464, 697-703(1 April 2010) doi:10.1038/nature08967 http://www.nature.com/nature/journal/v464/n7289/full/nature08967.html NPR D. 
Raković, "Acupuncture-based biophysical frontiers of complementary medicine", Proc. 23rd Ann. Int. Conf. IEEE/EMBS, Istanbul, Turkey (2001). http://www.dejanrakovicfound.org/papers/2001-IEEE-MBE-TURKEY.pdf D. Raković, "Quantum medicine: Phenomenology and quantum-holographic implications", Med Data Rev, Vol. 1, No. 2, pp. 71-73 (2009). http://www.dejanrakovicfound.org/papers/2009-MED-DATA-REV.pdf D. Raković, M. Dugić, and M.M. Ćirković, "Macroscopic quantum effects in biophysics and consciousness", NeuroQuantology (www.NeuroQuantology.5u.com) Vol. 2, Issue 4, pp. 237-262 (2004). http://www.dejanrakovicfound.org/papers/2004-NEUROQUANTOLOGY.pdf Dejan Rakoviv, 2009: QUANTUM MEDICINE: PHENOMENOLOGY AND QUANTUM-HOLOGRAPHIC IMPLICATIONS. Medical review UDC: 616. http://www.dejanrakovicfound.org/papers/2009-MED-DATA-REV.pdf Dejan Rakovic, 2003: THINKING AND LANGUAGE: EEG MATURATION AND MODEL OF CONTEXTUAL LANGUAGE LEARNING. Speech and Language 2003, IEFPG, Belgrade. http://www.dejanrakovicfound.org/papers/2003-IEFPG.pdf Wae Wan Ho: Rajiv R. P. Singh, 2010: Does quantum mechanics play a role in critical phenomena? Physics 3, 35 (2010) DOI: 10.1103/Physics.3.35 http://physics.aps.org/articles/v3/35 H. Wang, M. Hofheinz, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, M. Weides, J. Wenner, A. N. Cleland and John M. Martinis, 2009: Decoherence Dynamics of Complex Photon States in a Superconducting Circuit. arXiv:0909.4585v1 måndag 12 april 2010 Stress and relax. The extracellular matrix. Brain modelling VIII b. Living bodies cannot store energy as heat. They are no real thermodynamical things, but they will still have to store the energy in other ways, and the only possible solution is to store it as bondings, organization, coherence (degrees of freedom is collapsed). Then the priority is in stable bondings and things that can ineract and give off their energy at demand. An energy store, more longlasting than the ATP-production, that also is an accommodation and relaxation. Also movement can have a relaxating effect (short time) in this sense, although the tension (especially longterm) often increase the energy. The metabolic energy balance between sugars and lipids are well known. The lipids for vertebrates are very often stored as saturated lipids, with very few C=C bondings. The degree of carbon double bondings is one measure of stable energy storage, also many body systems; another is oxygen, which is stable in reduced form, with no magnetic interference. Water and its chrystal structure, chiral molecules, metal ions etc. are also important. Polarized light and sound even. Water is extremely important and a more complete molecular description of the liquid and solid phases of water, including an accurate universal force field,is described in 'Water clusters: Untangling the mysteries of the liquid, one molecule at a time'. It is clear that the hydrogen bond network and its fluctuations and rearrangement dynamics determine the properties of water. The water dimer exhibits three distinct low barrier quantum tunneling pathways that rearrange the hydrogen bonding pattern. Acceptor switching (AS), having the lowest barrier of all tunneling motions estimated at 157 cm−1 by VRT(ASP-W), is the most facile tunneling motion. This tunneling pathway exchanges the two protons in the hydrogen bond acceptor monomer. After Keutsch & Saykally 2001. The water trimer is a much more rigid structure than the dimer, bound by three strained H-bonds. 
It has a chiral cyclic equilibrium structure. There is also a water tetramer that is highly symmetric; it requires highly concerted tunneling motions, which limits to two the number of degenerate minima that can be connected on the intermolecular potential surface via feasible tunneling motions. The water pentamer is chiral, and the hexamer has a cage structure with the oxygens forming a distorted octahedron. A cyclic hexamer form has also been seen: a vibration-rotation-tunneling band is indicated by a chiral six-membered ring structure, with rapid quantum tunneling occurring between the enantiomers. The observed vibration involves torsional motion of the water subunits about each hydrogen bond axis. The interpretation of recent dielectric relaxation measurements has even suggested that water molecules making only two hydrogen bonds might be of special importance for bulk dynamics. Excitation induces hydrogen bond breaking, and hydrogen bonds are extremely important in biology (dissociation energy).

Impacts on photosynthetic PSI cyclic electron transport rates could account for observed variability in quantum yields for oxygen evolution and some variability in quantum yields for carbon fixation. Similarly, enzymatic processes associated with organic carbon synthesis appeared to be variably dependent on spectral growth irradiances and to contribute to the observed variability in quantum yields for carbon fixation. Reactive oxygen species (ROS) are important players in mediating quantum dot (QD)-induced cellular damage and apoptosis. The mechanism involves QD generation of ROS in the extracellular environment and intracellularly. These ROS can cause plasma membrane damage and intracellular organelle damage; mitochondria are the first to be affected in metabolism (decreased cytochrome c levels), and they appear to be the most sensitive organelles. Different surface molecules provide different degrees of protection, acting as a barrier to oxygen, protons, or hole traps. Oxidative stress is also an important factor in ageing. Excessive NO is an important signalling mediator behind neurological disorders, but the oxidative functions of ROS can maybe also be seen as an energy pool, for instance in the nervous system.

Noise, information and stress. The electromagnetic interference, or noise, comes from our own body, but also from the environment (nature and nurture), and makes up the perceptions too. The excess is experienced as stress: the part that cannot be entangled with anything, that has no meaning. Without entanglement it gives dissipation and maybe de-entanglement (negative negentropy). Coherent storage has a higher priority, as do networks, and highest of all the fractal and topological networks seen in maps ('homunculus' type, but I prefer to talk of body maps in a topological way) that can interfere with each other, as mirror images, also at long distances. This is very often seen in medicine (referred pain). Take a heart attack as an example: it engages the whole body, and the surroundings too. Take a phobia attack, a stroke, even the common flu. It is the whole body that is sick. In medicine we talk of system diseases, or syndromes. The most coherent of all is the whole, synergetic (in time) body that interferes with the surroundings and minimizes the dissipation that will follow. That is true biological holism, a true maximization of the negentropy. The resistance/dissipation is minimized, the same as could be seen in quantum tunnelling in water. The fight is minimized.
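As a quick unit check on the water-dimer tunneling barrier quoted above (my own back-of-the-envelope conversion using standard constants, not from the blog or from Keutsch & Saykally), a spectroscopic wavenumber converts to energy as E = hcν̃:

```python
# Convert the 157 cm^-1 acceptor-switching barrier to other energy units.
h  = 6.62607015e-34    # Planck constant, J*s
c  = 2.99792458e10     # speed of light in cm/s (cm, to match cm^-1)
NA = 6.02214076e23     # Avogadro constant, 1/mol
eV = 1.602176634e-19   # joules per electronvolt

nu = 157.0                       # wavenumber, cm^-1
E  = h * c * nu                  # energy per molecule, J
print(f"{nu} cm^-1 = {E/eV:.4f} eV = {E*NA/1000:.2f} kJ/mol")
# -> about 0.0195 eV, or 1.9 kJ/mol: an order of magnitude below a typical
#    hydrogen-bond energy (~20 kJ/mol), which is why this rearrangement can
#    proceed readily by quantum tunneling without breaking the H-bond network.
```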
How could we even think that Darwin was right? Or, he is partly right. The fight is so unpleasant that we do anything to avoid it – even change the species preferences, the niches, if needed. A way to minimize the resistance is to build networks, and a very important network is the extracellular matrix (ECM). In this way biological systems are capable of joining thermodynamic entropy and informational entropy in a stress-relaxing mechanism. This means growing, and death. I see I will have to look at the entropy aspect more closely in later postings.

Synergism is minimized dissipation. Synergies are produced (as an emergent character?) when many elements or parts combine to produce distinctively new 'wholes'. Indeed, complex living systems represent a multi-leveled, multi-faceted hierarchy of synergistic effects. Synergism is about self-organizing characters; life is creating itself, and also reproducing itself, as a consequence of inherent properties in the energy landscape. Erwin Schrödinger's legendary book "What is Life?" (1945) said the most important physical property of life was its thermodynamic foundation. Living systems are distinctive in that they create thermodynamic order; this is a multi-level phenomenon, and new properties and even new principles arise at higher levels of "agglomeration" and organization. Life evolves. Such are its dynamic goal-directedness, purpose and survival strategies, partially 'independent' of other physical systems. Living systems have an internally defined teleology, kind of a 'black box', and can make conscious choices to adapt and diminish the dissipation. The most advanced is the transcendent cultural achievement of humankind. Corning talks of the Synergism Hypothesis, in effect a 'bioeconomic' theory of complexity. Its result is cooperation and sociability, as opposed to competition in Darwin's evolution. Co-operative/symbiotic processes involve the environment. Operative processes are controlling and regulating processes; control isn't always endogenous. Large-scale, sophisticated cooperative efforts keep bacterial colonies together, direct the motor output of fish schools, synchronize the brain waves of jazz musicians, coordinate the metabolic activities in groups, etc.

Specialization (negentropy) facilitates growth. Learning, too, is growth. Energy is directed towards growth and a more efficient machinery (working better together than alone). Can this be a relation between dissipation (energy demand) and co-operation (energy 'production' or 'saving')? Larger size also means a lower energy demand. Integrated multicellular, hierarchical bodies express a synergy of scale; different hierarchical levels have different characters. Crystallization around a nucleus of critical size (an organizing center) and tool use are such characters, differing only in degree. Language is another. Memory too, in close connection to consciousness. The whole is more effective than its parts, and gives an adaptive unification (entanglement). The boundaries between two systems are more energetic and have a higher dissipation. Unification of boundaries 'produces' energy. The timing of the unification is also important: a very rapid unification 'produces' energy very fast and gives fewer chances for adaptation and differentiation, and consequently the energy level is lower. To get a high energy level there must be a time lag between the energy 'production' and the relaxation. Energy must be stored. This also gives flexibility to the system.
There is a dynamic balance between expansive (growing) and constraining processes, allowing varying degrees of communication between cell and extracellular matrix. Critical to this balance are both the deformability and the permeability of the boundaries, which determine the shapes assumed by cells and the patterns of uptake, loss and passage of resources. Associations related to the boundaries (such as the cell membrane and the microtubules in the cytoskeleton) can change this relation. Degrees of Freedom: Living in Dynamic Boundaries is a good book for studying these things.

Synergism is the result of oscillations that are well controlled. The living system is an open one, and everything that disturbs the balance (energy input) is dissipation. Some of the disturbances are called control signals, because the system needs some energy input, otherwise death follows; and if the dissipation/control signals give still higher dissipation, it is called allostasis (a standing wave?). If the stress is even higher, the living system will break down and disease follows. Disease can in this respect also be seen as an allostatic control signal. Death is also a control signal, a feed-forward signal, as in apoptosis – a destruction of a hierarchy level. Everyday activities such as eating are also dissipation, as are hearing, seeing, touching, etc. All these are perceived electromagnetically, touch too, through mechanoreceptors. Food? I don't know; maybe the cyclotron resonances, the radical production. So we also have perceptions for those things, and their energy can also be too high, or too low, and be perceived as stress, or only as control. There are no fixed threshold values; they move back and forth all the time, following the sensitivity. The pain threshold moving according to mood and stress level is a good example. Also the connective tissue and nerve cells have changing sensitivities. Our tissues accumulate energy and have a memory. In essence these are the anabolic (negentropic) and catabolic (entropic) reactions. Everything depends on your starting point. Everything is energy, but in different forms.

Energetically our body is not isolated from the surroundings, but a part of them. In the environment there are 'transparent windows' for cosmic radiation, and this radiation will reach us; especially much radiation in the mm-range hits us. Magnetic frequencies also traverse our bodies and act by resonance. And we live in a gravity field. Yin and Yang. But we also need this energy influx, as Popp showed, for the organizing, the diminishing of the degrees of freedom, for the quantum criticality. Living systems are constructed to handle dissipation. This also follows traditional Chinese medical philosophy very closely, but there are other elements too, and the environment. Everything must be balanced, at a higher energy level than the surroundings. I am really fond of this picture of Yin and Yang. It can perhaps best be described by interaction, so that Yang (the material bonds, the negentropy) always carries a seed of change into Yin (the energy, the dissipation). Yin turns into Yang by relaxation (organization, homeostasis, energy conservation), and Yang turns into Yin by stress (allostasis; energy production seen in gluconeogenesis, for instance). An entropy-based model is also described by Kang et al. 2007.

Quantum biology. Stress adaptation and accumulation.
The interaction of energy/stress with matter also has immaterial aspects, such as coherence, viability, communication, cognition, memory and consciousness, and these immaterial aspects must somehow be energy-coded. Does the relation between dissipation and frequency express this interaction? Are there 'windows' of 'efficiency' or 'economy'? Is this maximal adaptation or balance? A maximization of entanglement for a smooth result – but also a sensitivity to interactions, otherwise there would be no response to the control signals. Maybe it is the sensitivity (deformations) that most of all characterizes life. Sensitivity is also criticality. A quantum critical universe is, in a well-defined sense, the most intelligent and interesting universe. Criticality is by definition something unstable. The situation changes if the fundamental constant of Nature is analogous to a critical temperature. The coupling strength (geometry) is analogous to the critical temperature, and both types of entropies can interact. This (TGD) is very different from string theory. Besides temperature, there are a lot of other windows determining the criticality; acidity is one of the strongest.

Pitkänen writes: The spin glass analogy could be regarded as one aspect of quantum criticality, and states that the TGD universe can be regarded as a quantum spin glass. A quantum spin glass is phenomenologically characterized by its fractal energy landscape, containing valleys inside valleys inside valleys, giving rise to an extremely complicated system. Quantum self-organization can be described as motion in this kind of energy landscape. p-Adicity can be regarded as one aspect of the quantum spin glassiness. The bio-system as a self-organizing quantum critical spin glass, together with the notion of many-sheeted space-time, provides a rather restrictive general guideline for attempts to construct a general theory of bio-control and -coordination. The spin aspect also gives superconduction and minimized dissipation, seen for instance in the nerve pulse. The spin can be stored by strain.

Optics and chirality. The final explanation is in the form of quantum optics (QED), says Popp 2001. There are factors such as spatial space and momentum space that give 'possibilities' (potential information) in stored energy (memory = stored 'time'), and quantum energy as direction, vector and spin; propagation gives stress adaptation in actual photon intensity. The deviation from thermal equilibrium is the adaptation, the relaxation (= the heat that is not expressed as temperature but fixed otherwise). "Noether's Theorem" says: the charges of matter are the symmetry debts of light. Vertebrates have a higher energy level than amphibians; birds have a higher energy level than other vertebrates (body size also has an impact). Popp expresses the same thing as: 'biological systems use sun energy to build up high-density informational stores, which delay the thermal dissipation' (a time lag). There must be a resonator that can catch the energy. There is also a delayed luminescence from cells, expressed as a relaxation time (in msec to sec). This means either that the reaction can also go backwards, or that there is an active emission of light. This emission correlates with growth (organizational capacity). This means that organization in living systems is not based on 'nearest neighbour interactions' as in solids, but on entanglement: every part is connected to every other part, interpreted as a 'supergenome'. Fluctuations around this organization/entanglement maximum state (organizing centers?)
can be seen as entropy fluctuations and have regulatory activity. In seedlings the logarithmic growth shows strong organization (genetic?), is very sensitive to disturbances (control), and correlates with biophoton emission (genome activity). Tumor cells have strong induced emission. The delayed emission is relaxing too (energy is given off). The biophoton emission displays destructive interference in the extracellular space, and constructive interference within the cells. The cell acts differently from its surroundings.

Charles Shang says: It is well known that all the physiological systems, including the nervous system, are derived from a system of embryogenesis – a growth control system. In growth control, the fate of a larger region is frequently controlled by a small group of cells, which is termed an organizing center or organizer. A gradient of messenger molecules called morphogens forms around organizers. Current acupuncture research suggests a convergence of the neurophysiology model, the connective tissue model and the growth control model. The growth control model of acupuncture set the first example of a biological model in integrative medicine with significant predictive power across multiple disciplines.

Wave character and quantum character of acupuncture systems, from Li et al. 2008, argued that acupuncture meridians are quantal. The scientists looked at the sensation propagation (SQUID) in the acupuncture meridian system and found:

- transport along a meridian shows a wave pattern with three basic waves, namely 250 s, 200 s and 150 s; the wave character of the meridian is as from standing waves;
- the magnetic spectrum on an acupuncture meridian is quite stable, with peaks at 12-14 Hz and 28 Hz, strong and almost time-independent;
- when a person is in some abnormal state, acupuncture meridians make a big jump from their original places to some completely different roads, as big as when electrons jump from their original orbits to other orbits. This also reveals the quantum character of the acupuncture system.

Zhang says: the objective physical measurements of acupuncture are quite successful, with good reproducibility, in particular the electrical and acoustic measurements. A further study into the background of these results reveals an invisible and dynamic dissipative structure of the electromagnetic field, which generally exists in all living systems and has a lot to do with some mysterious old medicine such as acupuncture meridians, chakras and homoeopathy. What is more, a scientific and quantitative way of evaluating the degree of coherence in living systems was developed during the research on electric measurements on the human body.

Memory on the cell level. Memory is stored time, moments, strain. Consciousness is the process that transforms the actual time into potential, stored time – actual information into potential information, as 'stories' that we call memories. Potential information is also about all possibilities, and about stored energy, that is, organization, networks; and that storage must not be allowed to decay rapidly with time (a time lag, stability). Consciousness needs memory and stable organization. If we take a higher level of organization, the networks grow in complexity, until finally every particle of a system is connected to every other part of the system. This part can be a cell, an organ, a body part, etc. Then we have optimal communication. Further increase in complexity needs a decoherence, a breakdown, first.
We have theoretically infinite memory, because the information is kept in its original event, and there is relaxation. Because the events are stored, the origin of the actual event is never forgotten either. These stored moments, as stored energy, accumulate and give stress in the body if the relaxation is not efficient. Memory is important for holography, as a glue.

The living matrix. Chen 1996: An existing model of the electrical properties of the skin has been the accepted scientific standard for decades. But this model is based entirely on mechanistic principles, and it fails to explain many biological phenomena, particularly those relating to acupuncture points and meridians. A new model is developed which, unlike the standard model, includes an active biological response and the fact that the electricity passes through different types of tissue, not just skin.

Oschman (2009) says: So the body does harm to itself? The same happens to muscle cells during forceful movements. Also the acupuncture procedure (needle grasp) does harm to the connective tissue by rearranging the fibers (Langevin & Yandow 2002). This effect is not restricted to the acupuncture points only, but it is stronger/amplified there. Acupuncture points are 'sites of convergence' in a network of connective tissue, where the meridians are 'highways'; any point outside the network is a 'secondary way'. The connective tissue is a fascia arch spanning the whole body, says Upledger. The needling produces cellular changes that propagate along the meridian connective tissue plane. The anatomy of the meridians and acupoints is collagen fiber alignment amplifying the signal. Becker (in The Body Electric) talked of acupoints as boost-function points that amplified the DC signal. Any injury, large or small, results in an oxidative burst in which neutrophils and other white blood cells deliver highly reactive oxygen and nitrogen species to the region to destroy pathogens and to break down damaged cells and tissues. Becker told of a 'current of injury' that made the signal in nerve axonal transport a DC signal, activating the nerve at epidermis-nerve junctional points. Trigger points are focal points (necroses) too. Necroses are also common in toes and fingertips, where the most important signalling acupuncture points end.

There is evidence that at least some of the meridians represent low-resistance pathways for the conduction of electricity, and it happens through an 'electrical synapses' pathway. Electrons can also act as relaxants when they 'neutralize' the diamagnetic-paramagnetic differences in radicals. In that way electrons are antioxidants too? Is this the pi-stack phenomenon seen in chromosomes and the superconduction in the nerve pulse? The EMG emission seen as a phantom? Improved sleep, pain reduction and rapid healing effects on inflammation are seen as negative charges (free electrons) neutralize free radicals that contribute to chronic health problems. Such health issues can arise either from 'silent inflammation' from old injuries that have not fully resolved, or from the observable symptoms of inflammation: heat (measurable by medical infrared imaging), redness, pain, reduced range of motion at joints, and swelling. Conductive pathways from the skin surface to the tissues and organs throughout the body, and in the opposite direction, have significance for the self-healing capacity. While skin has a finite resistance, it is clearly not an insulator, and low-impedance points have been identified on the skin.
Some of these are acupoints, and there is evidence that some of these points are electrically coupled to specific organs (Chen, 1996; Major, 2007). The autonomic nervous system is linked with trigger points (Travell and Simons) and maybe with acupuncture points too. The dura mater and pia mater are especially interesting in this context, as we shall see later. A control current circuitry formed by the collagen network, as suggested by Mae Wan Ho. The need for a faster mechanism of charge transfer shows up dramatically in peak athletic or artistic performances involving perception and movement that is far too rapid to be explained by slow-moving nerve impulses, diffusion of regulatory molecules, and chemical reactions rate-limited by diffusion. Athletic events can result in acute injuries, and there is strong motivation for rapid healing so the athlete can re-enter the competition. Experience with elite athletes has repeatedly documented remarkably quick recovery when the injured part of the body is electrically coupled to the earth, and when the athlete is subsequently connected to the earth during sleep and recovery (earthing). Electrons have super-highways extending through the body?

Stress relaxation in biological tissues. Stress relaxation is a property of biological tissues that is related to their viscoelastic properties. The stress-strain relationships for a spring and for biological tissues such as a blood vessel and the bladder can be compared. When a spring is stretched (increased strain applied), the tension (stress) that is generated is proportional to the change in length, and the developed tension remains constant over time. In a blood vessel held at constant strain, by contrast, the developed tension remains only almost constant, declining a small amount over time. This decline in pressure (stress) over time at a constant volume (strain) is termed "stress relaxation." That pressure falls while the volume remains constant can be explained by the Law of Laplace, where wall tension (T) is proportional to pressure (P) times radius (r). When this expression is rearranged and solved for pressure, then P = T/r. Therefore, pressure can fall at constant radius (or volume) if wall tension decreases over time, and this is what occurs during stress relaxation. If an analogous experiment were performed on the bladder (mostly smooth muscle), the stress relaxation would be much greater and would occur more rapidly, while a tendon, which is primarily composed of collagen, shows virtually no stress relaxation. Therefore, different biological tissues display different degrees of stress relaxation. The associative network is extremely important for the relaxation, seen in the light of muscular dystrophy, where the microtubules are detached from the cell membrane. The ability to display stress relaxation is related to the biological function of the different tissues: the bladder must have a good relaxation that gives only a small increase in pressure, while the aorta needs resistance to be able to keep up the blood pressure.

Because they are viscoelastic, polymers behave in a nonlinear fashion. This nonlinearity is described by both stress relaxation and a phenomenon known as creep, which describes how polymers strain under constant stress. The following non-material parameters all affect stress relaxation in polymers (a minimal numerical sketch of stress relaxation follows after this list):

• Magnitude of initial loading
• Speed of loading
• Temperature (isothermal vs non-isothermal conditions)
• Loading medium
• Friction and wear
• Long-term storage

Time-dependent effects indicate that the stress-strain behavior of a material will change with time.
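Here is the promised sketch of stress relaxation under constant strain, using the simplest viscoelastic idealization, a Maxwell element (a spring of modulus E in series with a dashpot of viscosity η; the parameter values are hypothetical, chosen purely for illustration):

```python
import math

# Maxwell element: a spring (modulus E) in series with a dashpot (viscosity eta).
# Under a constant applied strain eps0, the stress relaxes exponentially:
#   sigma(t) = E * eps0 * exp(-t / tau),  with relaxation time tau = eta / E.
E = 1.0e6      # elastic modulus, Pa (hypothetical value)
eta = 2.0e6    # viscosity, Pa*s (hypothetical value)
eps0 = 0.05    # constant applied strain (5 %)
tau = eta / E  # relaxation time, s

for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    sigma = E * eps0 * math.exp(-t / tau)
    print(f"t = {t:5.1f} s   stress = {sigma:10.1f} Pa")

# A short tau (bladder-like tissue) means fast relaxation; a very long tau
# (collagen-rich tendon) means the stress hardly relaxes at all, matching
# the qualitative comparison in the text.
```

A standard linear solid adds a spring in parallel, so the stress relaxes to a nonzero plateau instead of to zero, which is closer to real tissue; the pure exponential Maxwell decay is just the simplest picture.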
The classic material model for time-dependent effects is viscoelasticity. As the name implies, viscoelasticity incorporates aspects of both fluid behavior (viscous) and solid behavior (elastic). Most notably, we know that elastic materials store 100% of the energy due to deformation. Viscoelastic materials, however, do not store 100% of the energy under deformation, but actually lose or dissipate some of this energy. This dissipation is also known as hysteresis. Hysteresis explicitly requires that the loading portion of the stress-strain curve be higher than the unloading curve. The ability to dissipate energy is one of the main properties of viscoelastic materials. The two other main characteristics associated with viscoelastic materials are stress relaxation and creep. Creep is in some sense the inverse of stress relaxation, and refers to the general characteristic of viscoelastic materials to undergo increased deformation under a constant stress. In comparison, elastic materials do not exhibit energy dissipation or hysteresis, as their loading and unloading curves are the same. Indeed, the fact that all energy due to deformation is stored is a characteristic of elastic materials. Extracellular matrix and adaptation. Michael Kjær wrote in 'Role of Extracellular Matrix in Adaptation of Tendon and Skeletal Muscle to Mechanical Loading' that 'The extracellular matrix (ECM), and especially the connective tissue with its collagen, links tissues of the body together and plays an important role in the force transmission and tissue structure maintenance especially in tendons, ligaments, bone, and muscle. The ECM turnover is influenced by physical activity, and both collagen synthesis and degrading metalloprotease enzymes increase with mechanical loading.' Liboff 2004 ends up with two postulates: 1. Every living organism is completely described by an electromagnetic field vector P0 that is specifically determined by a transformation from the genome. 2. All pathologies, abnormalities and traumas are manifested by deviations from the normal field P0 ('P zero'), and, within limits, these deviations are compensated for by the homeostatic tendency of the system to return to P0.
David Awschalom and Nitin Samarth, 2009: Spintronics without magnetism. Physics 2, 50 (2009). DOI: 10.1103/Physics.2.50. http://physics.aps.org/articles/v2/50
Becker, R.O., 1991: Evidence for a primitive DC electrical analog system controlling brain function. Subtle Energies 2, 71–88.
R. O. Becker and G. Selden, 1990: The Body Electric: Electromagnetism and the Foundation of Life. William Morrow & Company, Inc., New York.
Marco Bischof, 2008: Synchronization and Coherence as an Organizing Principle in the Organism, Social Interaction, and Consciousness. NeuroQuantology, December 2008, Vol 6, Issue 4, pp. 440-451. http://www.neuroquantology.com/journal/index.php/nq/article/view/314/295
Chen, K.-G., 1996: Electrical properties of meridians. IEEE Engineering in Medicine and Biology, May/June, pp. 58–63, 66. http://dx.doi.org/10.1109/51.499759
Peter A. Corning, 2008: What is Life? Among Other Things, It's a Synergistic Effect! Cosmos and History: The Journal of Natural and Social Philosophy, Vol 4, No 1-2 (2008). http://www.cosmosandhistory.org/index.php/journal/article/view/91/182
Ho, M.-W. and Knight, D.P., 1998: The acupuncture system and the liquid crystalline collagen fibers of the connective tissues. American Journal of Chinese Medicine 26, 251–263.
Y HongQin, XIE ShuSen, LI Hui and W YuHua, 2009: On optics of human meridians.
Science in China Series G: Physics, Mechanics and Astronomy. Science China Press, co-published with Springer, 1672-1799 (print), 1862-2844 (online), Volume 52, Number 4, April 2009. DOI 10.1007/s11433-009-0080-7. http://www.scichina.com:8083/sciGe/EN/article/downloadArticleFile.do?attachType=PDF&id=412172
Frank N. Keutsch and Richard J. Saykally, 2001: Water clusters: Untangling the mysteries of the liquid, one molecule at a time. PNAS, September 11, 2001, vol. 98, no. 19, 10533-10540. doi:10.1073/pnas.191266498
Michael Kjær, 2004: Role of Extracellular Matrix in Adaptation of Tendon and Skeletal Muscle to Mechanical Loading. Physiol Rev 84: 649–698, 2004. doi:10.1152/physrev.00031.2003
Langevin, H.M., 2006: Connective tissue: a body-wide signaling network. Medical Hypotheses 66, 1074–1077.
Helene M. Langevin and Jason A. Yandow, 2002: Relationship of Acupuncture Points and Meridians to Connective Tissue Planes. The Anatomical Record (New Anat.) 269: 257-265. DOI 10.1002/ar.10185
Ding-Zhong Li, Wei-Bo Zhang, Shong-Tao Fu, Yu-Ting Liu, Xiu-Zhang Li, Xiao-Yu Wang, Li-Jian Zhang, Shan-Ling Wu, Miao-He Shen and Chang-Lin Zhang, 2008: Wave character and quantum character of acupuncture systems. International Journal of Modelling, Identification and Control, Volume 5, Number 3, 2008: 229-235. http://inderscience.metapress.com/app/home/contribution.asp?referrer=parent&backto=searcharticlesresults,1,1;
Abraham R. Liboff, 2004: Toward an Electromagnetic Paradigm for Biology and Medicine. The Journal of Alternative and Complementary Medicine, February 2004: 41-47. http://www.liebertonline.com/doi/pdfplus/10.1089/107555304322848940
Jasmina Lovrić, Sung Ju Cho, Françoise M. Winnik and Dusica Maysinger, 2005: Unmodified Cadmium Telluride Quantum Dots Induce Reactive Oxygen Species Formation Leading to Multiple Organelle Damage and Cell Death. doi:10.1016/j.chembiol.2005.09.008
Major, D.F., 2007: Electroacupuncture. A Practical Manual and Resource. Churchill Livingstone/Elsevier, Edinburgh.
James L. Oschman and Nora Oschman, 2009: The Development of the Living Matrix Concept and Its Significance for Health and Healing. Science of Healing Conference, Kings College, London, March 13, 2009. http://www.massage.net/articles/pdfs/Oschman_Living-Matrix-Concept.pdf
M. Pitkänen, 2010: Quantum Control and Coordination in Bio-Systems: part I. http://tgd.wippiespace.com/public_html/pdfpool/qcococI.pdf
A. D. M. Rayner, 1997: Degrees of Freedom: Living in Dynamic Boundaries. Imperial College Press.
Charles Shang, 2006?: The Mechanism of Acupuncture - Beyond Neurohumoral Theory. http://www.acupuncture.com/education/theory/mechanismacu.htm
Schrödinger, E., 1945: What is Life? The Physical Aspect of the Living Cell. New York: The Macmillan Co.
Chang Lin Zhang, 2008: Brief history of modern scientific research into acupuncture systems: a path from static anatomic structure of particles to dynamic dissipative structure of electromagnetic field. International Journal of Modelling, Identification and Control, Volume 5, Number 3, 2008, pp. 176-180. http://inderscience.metapress.com/app/home/contribution.asp?referrer=parent&backto=issue,1,13;searcharticlesresults,1,1;

Saturday 10 April 2010
Stress and relax. The cell membrane. Brain modeling VIII a.
As we saw in previous postings, the criticality, or robustness, in biology is the problem. It is too rigid. But the opposite is a problem too: where are the non-locality and the supra-conditions in biology? Where is the mind? Where is the psyche? Where are memory and consciousness?
String theory has exactly the same problems, but the other way round. There are only all the possibilities, but no robustness at all. Only mind without body? Can mind be part of the body? Can the problem be solved, and interactions (measurements) be done? What happens when we make interactions/measurements of our surroundings? If we want to scale the question down we must look at the cell. How does the cell experience its surroundings? There is probably very little difference to the cell whether the measurement comes from its own body or from the environment. How does the cell distinguish meaningful from meaningless (noise) signals? Where can we talk of stress? How does the cell handle too much stress? Let us leave the cell psychology aside for a while, and look at the surroundings, the perceptions, and the magnetic impact. The cell psychology is, for instance, the qualia problem, but we have no tools yet to discuss that. We must be reductionistic in this situation. Membranes in biology. Membranes are bipolar lipids, with a low dielectric constant (due to fat) and a content of other molecules that can be blocked or withdrawn from the membrane when they are not wanted or needed. They are mostly phagocytosed. The lipid membrane is floating, loose, and an energy reservoir for phosphorylation, and at the same time a communication tool for the neighbourhood (endocrine, paracrine and autocrine hormonal signalling), a sense organ for the cell, etc. [Figure: overview of signal transduction pathways, from Wikipedia. Look only at the mess.] Some signals can pass through the membrane without a 'passcode', important ones like oestrogen, insulin and thyroxine; others need a multitude of 'tests' before they are allowed to pass. A second messenger such as Ca++ or cAMP is needed. Then there are other membranes, such as the nuclear envelope, that also control their 'passports'. In principle everything that happens is part of the control output of the cell. There can be no uncontrolled things 'that just happen'. The 'passport' can also be activated in other ways. For example, the neurotransmitter GABA can activate a cell surface receptor that is part of an ion channel. However, for many cell surface receptors, ligand-receptor interactions are not directly linked to the cell's response. The activated receptor must first interact with other proteins inside the cell, in a signal transduction mechanism or pathway. An external signal gives conformational changes in a protein-chain interaction, the mitotic cell cycle is involved, receptors that are kinases start phosphorylation of themselves or others, and induce growth, etc. The adaptor proteins make the choice, the jump. The phosphorylated receptor binds to an adaptor protein, which couples the signal to further downstream signaling processes, for instance attaching phosphate to target proteins, and altering cell cycle progression or output. Complex multi-component signal transduction pathways provide opportunities for feedback, signal amplification, and interactions inside one cell between multiple signals and signaling pathways. What decides the choice made by the cell? Has the cell a free will? As we see, many of these steps are also quantum biological. Has the quantum biology, the holographic body, any meaning for the cell? This is a very simplified picture, where I try to point at essential features only. You can see a sample of communication pathways below. I cannot go into depth on the extremely interesting signalling this time. You only need to know how complicated it is. A. The cell membrane.
The cell membrane is more of a locus only, a place where signals arrive. Adey, one of the pioneers and big names in this field, says in 1988 in 'Cell Membranes: The Electromagnetic Environment and Cancer Promotion': ...the sequence and energetics of events that couple humoral stimuli from surface receptor sites to the cell interior has identified cell membranes as a primary site of interaction with these low frequency fields. Field modulation of cell surface chemical events indicates a major amplification of initial weak triggers associated with binding of hormones, antibodies and neurotransmitters to their specific binding sites. Calcium ions play a key role in this stimulus amplification, probably through highly cooperative alterations in binding to surface glycoproteins, with spreading waves of altered calcium binding across the membrane surface. Protein particles spanning the cell membrane form pathways for signaling and energy transfer. Fields millions of times weaker than the membrane potential gradient of 10^5 V/cm modulate cell responses to surface stimulating molecules. The evidence supports nonlinear, nonequilibrium processes at critical steps in transmembrane signal coupling. Powerful cancer-promoting phorbol esters act at cell membranes to stimulate ornithine decarboxylase, which is essential for cell growth and DNA synthesis. This response is enhanced by weak microwave fields, also acting at cell membranes. Adey says that cell membranes, in coupling humoral stimuli (hormones, neurotransmitters and antibodies) from surface receptor sites to the cell interior, function as a primary site of interaction with weak oscillating EM fields in the pericellular fluid. This would mean that the primary signal comes not from the cell alone, but from the cell environment. And the signal is forcefully amplified in the passage through the membrane. This we know is true, but not in every case. The important GPCR receptors stand out here, making up about half of all receptors. They are most often the targets for drugs. EM fields in the fluid surrounding cells modulate inward and outward signal streams through cell membranes. Is this the real impact? He writes: Careful evaluation of these field actions has revealed subtle effects that betoken mechanisms of interaction based on long range interactions and nonequilibrium processes. Temperature increments are not the primary substrates of the observed biological sensitivities. This was tested for lymphoid cells (leukemia) in 1995. Electromagnetic waves (1 G, 60 Hz; this is quite a high stimulus level) stimulate the protein tyrosine kinases, resulting in tyrosine phosphorylation of multiple electrophoretically distinct substrates, and leading to downstream activation of protein kinase C (PKC). A wave of 'destruction' into the cell. But this destruction is selective: some kinases are stimulated, and a delicate growth regulatory balance might be altered. In fact there are many studies revealing a link between cancer and EM fields. Human-made fields are substantially above the naturally occurring ambient electric and magnetic fields of ~10^-4 V m^-1 and ~10^-13 T, respectively. Several epidemiological studies have concluded that ELF-EMFs may be linked to an increased risk of cancer, particularly childhood leukemia. How might EMFs induce cancer? Magnetic fields can also change opioid levels, by modulating gene expression. Ventura et al.
write: Magnetic fields have been shown to affect cell proliferation and growth factor expression in cultured cells. Although the activation of endorphin systems is a recurring motif among the biological events elicited by magnetic fields, compelling evidence indicating that magnetic fields may modulate opioid gene expression is still lacking. We therefore investigated whether extremely low frequency (ELF) pulsed magnetic fields (PMF) may affect opioid peptide gene expression and the signaling pathways controlling opioid peptide gene transcription in the adult ventricular myocyte, a cell type behaving both as a target and as a source for opioid peptides. Conclusions: The present findings demonstrate that an opioid gene is activated by myocyte exposure to PMF and that the cell nucleus and nuclear-embedded PKC are a crucial target for the PMF action. Due to the wide-ranging importance of opioid peptides in myocardial cell homeostasis, the current data may suggest consideration of potential biological effects of PMF in the cardiovascular system. Ventura, cont.: Magnetic fields may elicit multiple effects in biological systems, including behavioural changes in intact organisms. Effects of MF on opioid-related events may have important implications in cellular homeostasis. Among the regulatory systems that appear to be targeted are endogenous opioid peptides. MF can produce analgesic effects through an opioid receptor-mediated mechanism and are able to affect the spontaneous electrical brain activity by interfering with the action of both exogenous and endogenous opioids. In mice, MF have been found to enhance the duration of pharmacologically induced anaesthesia by releasing endogenous opioids and/or enhancing the activity of opioid signaling pathways. The capability of MF to control the central cholinergic system also appears to depend on the activation of an opioidergic pathway. Opioid receptor antagonism also attenuated MF-induced antiparkinsonian effects in man. Opioid peptides may act as growth modulators and may control both cell differentiation and architecture in a wide variety of tissues. The myocardial cell responds to opioid receptor stimulation with deep changes in cytosolic Ca2+/pH homeostasis and contractility. Dynorphin B released Ca2+ from an intracellular store and acted in an autocrine fashion to stimulate the transcription of its own coding gene (for dynorphin B), involving an impairment of cell growth and differentiation. A delicate growth regulatory balance may be altered following nuclear PKC activation by PMF. PMF-induced prodynorphin gene transcription resulted in an increase of both intracellular and secreted dynorphin B. Dynorphin B is known to bind selectively to opioid receptors, and the stimulation of these receptors in cardiac myocytes has been shown to promote phosphoinositide turnover and depletion of Ca2+ in the sarcoplasmic reticulum, leading to a marked decrease in the amplitude of the cytosolic Ca2+ transient oscillations and in that of the associated contraction of the heart. Ventura et al. ask: why would heart cells have a system capable of reacting to PMF? In isolated nuclei an opioid gene can be independently and fully activated by PMF, as in the intact cell. The property of conveying nuclear signaling to the modulation of gene transcription may disclose new perspectives in the molecular dissection of the biological effects. Magnetic fields may:
- alter human cardiac rhythm (Ventura et al.)
- enhance the occurrence of arrhythmia-related heart problems (Ventura et al.)
- induce stress responses that protect the embryonic myocardium from anoxia damage (chick)
- influence the spontaneous electrical brain activity (rat) (Vorobyov et al. 1998), consistent with the findings of other groups demonstrating that weak magnetic fields may drastically modify the effects of both exogenous and endogenous opioids on different basic functions in vertebrates and invertebrates.
Lithium and dopamine effects. Modification of a brain opioid system may contribute to the clinical response to lithium. Li increases phosphorylation, but not in the presence of Ca2+, or of Ca2+ and calmodulin. Chronic lithium treatment affects some signal transduction mechanisms such as cAMP, cGMP, inositol 1,4,5-P3, Gi protein and protein kinase C, and can also modify gene expression in rat brain. Li affects the adrenoceptors and their half-lives (turnover rate). Administration of Li is associated with a reduction in retinal light sensitivity, but chronic lithium use is not associated with differences in retinal light sensitivity, and no retinal toxicity is feared. Li diminishes neostriatal dopaminergic activity, but the underlying mechanisms do not appear to involve modifications in either the D1 or the D2 receptor primary ligand recognition sites. The hypothesis of an increased dopamine synthesis is not supported, and Li modified the affinity of DA transporters for the radioligand, possibly a consequence of conformational changes induced by the disruption of the nerve terminal membrane environment. Lithium blocks isolation-induced hypersensitivity, especially of the β-adrenergic system. Isolation reduces motor activity, as seen in rats. Lithium has an inhibitory effect on neuroleptic receptors ([3H]spiroperidol binding sites) in the limbic forebrain and on serotonin receptors ([3H]serotonin binding sites) in the hippocampus. Serotonin directs attention, among other things. The electrically stimulated release of 5-[3H]hydroxytryptamine (5-HT) from rat hippocampus decreased when exposed to 5-HT; Li did not affect release on its own, but inhibited it together with serotonin. Li may inhibit the regulation of 5-HT release via presynaptic 5-HT autoreceptors in rat hippocampus. The penile erection response induced by apomorphine, a mixed D1/D2 dopamine receptor agonist (0.05-0.5 mg/kg), was decreased in animals pretreated with chronic lithium, and the inhibitory effect of sulpiride increased too. No bliss with Li. Serotonergic (5-HT) dysfunction has been hypothesized in mania, but the results are inconsistent. The platelet 5-HT2 receptor is neither a state marker nor a trait marker in mania, and maybe the serotonin hypothesis is wrong. There is, though, a clear up- or down-regulation of platelet serotonin receptor responsiveness in bipolar and unipolar depression. The effect on second messengers is interesting too. Lithium reduced the inhibitory ability of carbachol, and reduced the degree of stimulation of inositol phosphate formation induced by noradrenaline. Chronic effects of lithium administration may be related to actions at the G protein level, and different modes of coupling of receptors to G proteins may be responsible for the variety of effects observed. A selective D1 dopamine receptor antagonist blocked the increase in cAMP formation by all of the dopamine agonists investigated.
Is there a relationship between the D1 receptor-stimulated increase in cAMP formation and the induction of dyskinesia in Parkinsonian humans? Robust catalepsy follows from D1 receptor blockade (rat), while dopamine agonists (such as apomorphine) significantly decreased vagal nerve-induced bradycardia (an effect blocked by sulpiride) but not acetylcholine-induced bradycardia, suggesting the presence of presynaptic and/or ganglionic dopamine DA2 receptors in the parasympathetic innervation of the rat heart, stimulation of which inhibits the release of acetylcholine. As a parenthesis I must say Li is mostly used to prevent mania. Stork & Renshaw, 2005, propose a hypothesis of mitochondrial dysfunction in bipolar disorder that involves impaired oxidative phosphorylation, a resultant shift toward glycolytic energy production, a decrease in total energy production and/or substrate availability, and altered phospholipid metabolism. Dopaminergic and opioidergic systems interact in the striatum in the brain to modulate locomotor and motivated behaviors. Dopamine modulates opioid receptor-mediated signal transduction. Repeated activation of D1 receptors attenuates the functional coupling of delta opioid receptors with adenylyl cyclase due to decreased coupling between delta receptors and G proteins. Does Li act through cyclotron resonance frequencies, as Ca and Fe do? Can its secrets not be revealed at the cellular level? Age and cAMP production. Blood vessels from aged animals and humans have impaired relaxation and cAMP production in response to β-adrenergic stimulation, but direct activators of adenylyl cyclase are not affected. Would the effects on cAMP production occur in the membrane? Aortic media membrane was studied in rats. Basal AC activity increased significantly with age, but there was no age-related decrease in responsiveness to G protein activators, or to the receptor agonists β-adrenergic and PGE-1 (prostaglandin). So the membrane itself does not seem to account for the age-related changes in β-adrenergic responsiveness. Cocaine reduces cAMP production, as age does: a functional change in a critical signal transduction pathway that affects the development of the brain. Overall membrane characteristics. So, it seems it is not the membrane that is magnetically active, but the receptors, and they may be activated through a magnetic attraction of the second messengers cAMP, Ca++, GTP, etc. These second messengers then amplify the signal. But the membrane gives a very clear response in magnetic induction fields, as seen in fMRI. The cell membrane, and especially its receptors, acts as a capture for magnetic waves, just as the genes are captures, as seen in the promoter genes. The magnetic field (a weak permanent homogeneous horizontal magnetic field (PMF) of 400 A/m) affects the lipid constitution too. In radish seedlings, Novitskaya et al. found that PMF increased the ratio of phospholipids to sterols by 30–100%, and suppressed the formation of polar lipids in light (by 18%), whereas in darkness it stimulated it by approximately 80%, in a very temperature-dependent way. PMF exerted the strongest effect on the content of erucic acid. PMF behaved as a correction factor affecting lipid metabolism on the background of light and temperature action. Membrane composition also varies between vertebrates, and the degree of polyunsaturation of membrane phospholipids is correlated with cellular metabolic activity, so that more polyunsaturation gives a faster metabolism. Membranes can act as pacemakers for overall metabolic activity.
Such membrane polyunsaturation increases the molecular activity of many membrane-bound proteins and consequently some specific membrane leak-pump cycles and cellular metabolic activity. This may be due to a greater transfer of energy during intermolecular collisions of membrane proteins with the unsaturated two-carbon units (C=C) of polyunsaturates compared to the single-carbon units of saturated acyl chains, as well as to the more even distribution of such units throughout the depth of the bilayer when membranes contain polyunsaturated acyl chains compared to monounsaturated ones. The proposed pacemaker role of differences in membrane bilayer composition has importance for the brain (and sensory cells), the evolution of mammalian endothermic metabolism, etc. When a cell is exposed to a time-varying magnetic field, this leads to an induced voltage on the cytoplasmic membrane, as well as on the membranes of the internal organelles, such as mitochondria. These potential changes in the organelles could have a significant impact on their functionality. The amount of polarization in the organelle was less than its counterpart in the cytoplasmic membrane. This was largely due to the presence of the cell membrane, which "shielded" the internal organelle from excessive polarization by the field. Organelle polarization was largely dependent on the frequency of the magnetic field. [Figure from Ye et al. 2010: regional polarization of the cytoplasmic membrane and the organelle membrane by a time-varying magnetic field; the plot demonstrates an instant polarization pattern on both membranes, with the colour map giving the amount of polarization (in mV) at field frequencies of (A) 10 kHz and (B) 100 kHz.] The effect is also seen in a pattern generation of the molecules in the cell membrane. Distinct 'fields' are clearly seen. This is partly a result of chemical attractions, but also of electric and magnetic ones. The Danish soliton nerve pulse model clearly shows such patterns. [Figure from Vajrala et al. 2008: electric field vector plot and potential distribution near the plasma membrane with mobile surface charges in an alternating electric field; the induced field in the cell is greater at 10^6 Hz than at 10^2 Hz, with an excitation field of 1.0 V/cm.] Observe that this is an electric field. Elastic fibres have important cell adhesion functions. Electron microscopy and biochemical studies have highlighted strong interactions of cells with their subendothelial elastic fibre-containing matrix, and with juxtaposed elastic fibre lamellae at cell surface dense plaques. These interactions are mediated mainly through heterodimeric transmembrane receptors. Many diseases depend on the attachment of the microtubules to the cell membrane. Myopathies such as Duchenne's and Becker's dystrophies are one result. Without a proper cytoskeleton the cell cannot work. B. How might EMFs induce cancer? Lacy-Hulbert et al. write: Free radicals are generated as intermediates in metabolism and may attack lipids, proteins, and DNA. Thus, any elevation in free radical production could increase the rate of chemical damage to DNA, as occurs, for example, as a consequence of sustained activation of the immune system in response to chronic infection. Magnetic fields of more than 1 mT can have measurable effects on the kinetics and yield of chemical reactions that use geminate radical pairs, through their effect on the spin precession rates of unpaired electrons and consequent effects on the lifetime of radicals.
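(As an aside, to put rough numbers on these precession rates, here is a back-of-the-envelope sketch of my own; the field values are illustrative assumptions. For a free electron spin, the Zeeman splitting is ΔE = g·μB·B, and the corresponding resonance frequency ν = g·μB·B/h is about 28 GHz per tesla.)

```python
# Rough scale of the electron-spin Zeeman splitting and resonance
# frequency in weak magnetic fields (illustrative numbers only, not
# taken from the cited studies): nu = g * mu_B * B / h.

G_E  = 2.0023        # free-electron g-factor
MU_B = 9.274e-24     # Bohr magneton, J/T
H    = 6.626e-34     # Planck constant, J*s

fields = [
    ("geomagnetic field",       50e-6),   # ~50 microtesla
    ("cryptochrome experiment", 500e-6),  # 500 microtesla, as below
    ("1 mT effect threshold",   1e-3),
]
for label, B in fields:
    nu = G_E * MU_B * B / H   # spin resonance frequency, Hz
    print(f"{label:24s} B = {B*1e6:7.1f} uT   nu = {nu/1e6:6.2f} MHz")
```

So for sub-millitesla fields the relevant splittings sit in the MHz radio-frequency range, which is the scale on which they compete with the hyperfine interaction in the radical-pair mechanism.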
The magnetic field can increase or decrease precession rates between singlet and triplet spin-correlated states. Hence, a geminate radical pair born in the singlet spin state may rapidly recombine; after precession to the triplet spin state, recombination is prohibited by the Pauli exclusion principle, resulting in a longer radical lifetime. The consequence of this may be, for example, increased enzyme product or release of radicals from the enzyme. Electromagnetic field effects on free radical processes: A) A reaction between two species can generate a pair of radicals in the triplet state with parallel electron spins. If one of the electrons converts to a different state, changing its spin, the radical pair can react to form product. B) This change involves transfer of the electron between the three triplet states: T0, T-1 and T+1. These states are normally degenerate, but in a magnetic field the energies separate. When this separation is less than the hyperfine interaction for the system, the radicals created in the triplet states can be transformed into singlets and react. When the separation is greater than the hyperfine interaction, radicals created in the T-1 and T+1 triplet states cannot interconvert, and hence the reaction cannot take place. However, an alternating magnetic field of frequency ν can excite electron transitions between levels, allowing transition to the singlet states even in high magnetic fields. Alternating magnetic fields superimposed on static magnetic fields can further affect reactions by providing quanta of energy equal to the gap between singlet and triplet states, allowing transitions of radicals and hence increasing the reaction probability. The effect requires both a static magnetic field and a field fluctuating at a resonance frequency. These examples (ROS, neutrophils) represent clear, reproducible effects of magnetic fields on biochemical systems with a firm theoretical basis. Effects are reported from 0.1 mT fields at 60 Hz. Nitric oxide production is also interesting, as are other immune reactions. A free radical basis for magnetic field effects would have some important implications for investigations and epidemiological studies. The processes affected occur very rapidly, and so at the level of simple effects they are independent of frequency; in many cases, the geomagnetic field exposure would far outweigh the alternating field. However, as described above, more complex effects can occur in vitro with specific combinations of static and alternating magnetic fields, and these combinations vary with the free radical species involved. This also correlates with the radical-pair theory of Ritz mentioned in an earlier posting, and with the laser effects found by Tiina Karu. Seedling growth is magnetically sensitive as a result of photoinduced radical-pair reactions in cryptochrome photoreceptors; this was tested by measuring several cryptochrome-dependent responses, all of which proved to be enhanced in a magnetic field of intensity 500 μT, and it shows a way forward. C. The Extracellular Matrix (ECM). Disturbances and dysfunctions are the most evident effects of long-term illness, and medicines can seldom change this. Is the reason somewhere else, at the synchronisation/control levels rather than at the cell level? Does believing something else only mean misuse of medicine? We need to look at the networks, to step up a level in the molecular hierarchies. Maybe the reason for the induced control signals is found there?
We have seen that many of the signals indeed arrive from outside the cell, even gene regulation signals. When we look at the SRP molecule at the promoter gene we find all those loops that will change the magnetic field very strongly and make superpositions, as Lacy-Hulbert et al. suggest. In fact, Nature herself uses networks, as seen in nerves, blood circulation, gap junction systems, nanotubes, hormonal systems, meridians, etc. Maybe there is an oxygen network too, as Mae-Wan Ho said? Oxygen makes more energy available and at greater efficiency; at the same time, it increases the complexity of metabolic networks. A network that takes part very much in the relaxation, in the form of negentropic bindings. A fast relaxation is important for the functions in the organisms. It is as important as the very minute signal is. In fact, it is exactly the relaxation that makes up the robustness and criticality in matter. It is the probabilities that change in different magnetic fields. The relaxation leads to polarization. Adhesion and adsorption make it go faster, and time is very important too. Paluch et al. write in 2006: The shape of animal cells is, to a large extent, determined by the cortical actin network that underlies the cell membrane. Because of the presence of myosin motors, the actin cortex is under tension, and local relaxation of this tension can result in cortical flows that lead to deformation and polarization of the cell. Cortex relaxation is often regulated by polarizing signals, but the cortex can also rupture and relax spontaneously. A similar tension-induced polarization is observed in actin gels growing around beads, and we propose that a common mechanism governs actin gel rupture in both systems. We shall look at yet another network, the extracellular matrix, which performs supramolecular organization and is non-local and very fast. But that will be in a new posting, 'Stress and relax. The Extracellular Matrix. Brain modelling VIII b', coming soon. I will finish with some words from Matti Pitkänen: In this framework the energy feed to the system means that the (quantum) superposition changes in such a manner that the average energy of the positive energy state increases. This excites new degrees of freedom and makes the system more complex. The dissipation caused by quantum jumps reducing entanglement entropy tends to reduce the average energy, and this tendency is compensated by the energy feed, selecting also the most stable self-organization patterns as a flow equilibrium. The hologrammic organization is the ultimate, most stable organization. But in a holistic model gravity ought also to be included. Maybe we will soon know what gravity really is?
W. R. Adey, 1988: Cell Membranes: The Electromagnetic Environment and Cancer Promotion. Neurochemical Research, Vol. 13, No. 7, 1988, pp. 671-677. http://www.springerlink.com/content/h507p8wq85141871/fulltext.pdf?page=1
W. Ross Adey, 1993: Biological Effects of Electromagnetic Fields. Journal of Cellular Biochemistry 51: 410-416 (1993). http://www.energycelltherapy.co.uk/pdfs/biological.pdf
Sue-Re Harris, Kevin B. Henbest, Kiminori Maeda, John R. Pannell, Christiane R. Timmel, P. J. Hore and Haruko Okamoto, 2009: Effect of magnetic fields on cryptochrome-dependent responses in Arabidopsis thaliana. J. R. Soc. Interface, 6 December 2009, vol. 6, no. 41, 1193-1205.
Adam Lacy-Hulbert, James C. Metcalfe and Robin Hesketh, 1998: Biological responses to electromagnetic fields. The FASEB Journal 1998; 12: 395-420.
http://www.fasebj.org/cgi/content/full/12/6/395
E. Paluch, J. van der Gucht and C. Sykes, 2006: Cracking up: symmetry breaking in cellular systems. J. Cell Biol. 175, 687-692. http://jcb.rupress.org/content/175/5/687.abstract
Fatih M. Uckun, Tomohiro Kurosaki, Jizhong Jin, Xiao Jun, Andre Morgan, Minoru Takata, Joseph Bolen and Richard Luben, 1995: Exposure of B-lineage Lymphoid Cells to Low Energy Electromagnetic Fields Stimulates Lyn Kinase. The Journal of Biological Chemistry, November 17, 1995, 270, 27666-27670. doi:10.1074/jbc.270.46.27666
Vijayanand Vajrala, James R. Claycomb, Hugo Sanabria and John H. Miller, Jr., 2008: Effects of Oscillatory Electric Fields on Internal Membranes: An Analytical Model. Biophys J. 2008 March 15; 94(6): 2043–2052. doi:10.1529/biophysj.107.114611. PMCID: PMC2257880. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2257880/?tool=pubmed
Vasily Vasilievitch Vorobyov, Evgeni Alekseevitch Sosunov, Nikolai Ilitch Kukushkin and Valeri Vasilievitch Lednev, 1998: Weak combined magnetic field affects basic and morphine-induced rat's EEG. Brain Research, Volume 781, Issues 1-2, 19 January 1998, pp. 182-187. doi:10.1016/S0006-8993(97)01228-6
Carlo Ventura, Margherita Maioli, Gianfranco Pintus, Giovanni Gottardi and Ferdinando Bersani, 2000: ELF-pulsed magnetic fields modulate opioid peptide gene expression in myocardial cells. Cardiovasc Res (2000) 45(4): 1054-1064. doi:10.1016/S0008-6363(99)00408-3
Hui Ye, Marija Cotic, Eunji E. Kang, Michael G. Fehlings and Peter L. Carlen, 2010: Transmembrane potential induced on the internal organelle by a time-varying magnetic field: a model study. J Neuroeng Rehabil. 2010; 7: 12. doi:10.1186/1743-0003-7-12. PMCID: PMC2836366. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2836366/
Communications on Pure & Applied Analysis
May 2016, Volume 15, Issue 3

Nonexistence of positive solutions for polyharmonic systems in $\mathbb{R}^N_+$
Yuxia Guo and Bo Li
2016, 15(3): 701-713. doi: 10.3934/cpaa.2016.15.701
In this paper, we study the monotonicity and nonexistence of positive solutions for polyharmonic systems $\left\{\begin{array}{rlll} (-\Delta)^m u&=&f(u, v)\\ (-\Delta)^m v&=&g(u, v) \end{array}\right.\;\hbox{in}\;\mathbb{R}^N_+.$ By using the Alexandrov-Serrin method of moving planes combined with integral inequalities and Sobolev's inequality in a narrow domain, we prove the monotonicity of positive solutions for semilinear polyharmonic systems in $\mathbb{R}^N_+$. As a result, the nonexistence of positive weak solutions to the system is obtained.

On Compactness Conditions for the $p$-Laplacian
Pavel Jirásek
2016, 15(3): 715-726. doi: 10.3934/cpaa.2016.15.715
We investigate the geometry and validity of various compactness conditions (e.g. the Palais-Smale condition) for the energy functional \begin{eqnarray} J_{\lambda_1}(u)=\frac{1}{p}\int_\Omega |\nabla u|^p \ \mathrm{d}x- \frac{\lambda_1}{p}\int_\Omega|u|^p \ \mathrm{d}x - \int_\Omega fu \ \mathrm{d}x \nonumber \end{eqnarray} for $u \in W^{1,p}_0(\Omega)$, $1 < p < \infty$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$, $f \in L^\infty(\Omega)$ is a given function and $-\lambda_1<0$ is the first eigenvalue of the Dirichlet $p$-Laplacian $\Delta_p$ on $W_0^{1,p}(\Omega)$.

Well-posedness and ill-posedness results for the Novikov-Veselov equation
Yannis Angelopoulos
2016, 15(3): 727-760. doi: 10.3934/cpaa.2016.15.727
In this paper we study the Novikov-Veselov equation and the related modified Novikov-Veselov equation in certain Sobolev spaces. We prove local well-posedness in $H^s (\mathbb{R}^2)$ for $s > \frac{1}{2}$ for the Novikov-Veselov equation, and local well-posedness in $H^s (\mathbb{R}^2)$ for $s > 1$ for the modified Novikov-Veselov equation. Finally we point out some ill-posedness issues for the Novikov-Veselov equation in the supercritical regime.

A weighted $L_p$-theory for second-order parabolic and elliptic partial differential systems on a half space
Kyeong-Hun Kim and Kijung Lee
2016, 15(3): 761-794. doi: 10.3934/cpaa.2016.15.761
In this article we consider parabolic systems and $L_p$ regularity of the solutions. With zero boundary condition the solutions experience bad regularity near the boundary. This article addresses a possible way of describing the regularity nature. Our space domain is a half space and we adapt an appropriate weight into our function spaces. In this weighted Sobolev space setting we develop a Fefferman-Stein theorem, a Hardy-Littlewood theorem and sharp function estimations. Using these, we prove uniqueness and existence results for second-order elliptic and parabolic partial differential systems in weighted Sobolev spaces.
A class of virus dynamic model with inhibitory effect on the growth of uninfected T cells caused by infected T cells and its stability analysis
Wenbo Cheng, Wanbiao Ma and Songbai Guo
2016, 15(3): 795-806. doi: 10.3934/cpaa.2016.15.795
A class of virus dynamic model with inhibitory effect on the growth of uninfected T cells caused by infected T cells is proposed. It is shown that the infection-free equilibrium of the model is globally asymptotically stable if the reproduction number $R_0$ is less than one, and that the infected equilibrium of the model is locally asymptotically stable if the reproduction number $R_0$ is larger than one. Furthermore, it is also shown that the model is uniformly persistent, and some explicit formulae for the lower bounds of the solutions of the model are obtained.

A Liouville-type theorem for higher order elliptic systems of Hénon-Lane-Emden type
Frank Arthur and Xiaodong Yan
2016, 15(3): 807-830. doi: 10.3934/cpaa.2016.15.807
We prove there are no positive solutions with slow decay rates to the higher order elliptic system \begin{eqnarray} \left\{ \begin{array}{c} \left( -\Delta \right) ^{m}u=\left\vert x\right\vert ^{a}v^{p} \\ \left( -\Delta \right) ^{m}v=\left\vert x\right\vert ^{b}u^{q} \end{array} \text{ in }\mathbb{R}^{N}\right. \end{eqnarray} if $p\geq 1,$ $q\geq 1,$ $\left( p,q\right) \neq \left( 1,1\right)$ satisfies $\frac{1+\frac{a}{N}}{p+1}+\frac{1+\frac{b}{N}}{q+1}>1-\frac{2m}{N}$ and \begin{eqnarray} \max \left( \frac{2m\left( p+1\right) +a+bp}{pq-1},\frac{2m\left( q+1\right) +aq+b}{pq-1}\right) >N-2m-1. \end{eqnarray} Moreover, if $N=2m+1$ or $N=2m+2,$ this system admits no positive solutions with slow decay rates if $p\geq 1,$ $q\geq 1,$ $\left( p,q\right) \neq \left( 1,1\right)$ satisfies $\frac{1}{p+1}+\frac{1}{q+1}>1-\frac{2m}{N}.$

Well-posedness and scattering for fourth order nonlinear Schrödinger type equations at the scaling critical regularity
Hiroyuki Hirayama and Mamoru Okamoto
2016, 15(3): 831-851. doi: 10.3934/cpaa.2016.15.831
In the present paper, we consider the Cauchy problem of fourth order nonlinear Schrödinger type equations with derivative nonlinearity. In the one dimensional case, the small data global well-posedness and scattering for the fourth order nonlinear Schrödinger equation with the nonlinear term $\partial _x (\overline{u}^4)$ are shown in the scaling invariant space $\dot{H}^{-1/2}$. Furthermore, we show that the same result holds for $d \ge 2$ and derivative polynomial type nonlinearity, for example $|\nabla | (u^m)$ with $(m-1)d \ge 4$, where $d$ denotes the space dimension.

A class of generalized quasilinear Schrödinger equations
Yaotian Shen and Youjun Wang
2016, 15(3): 853-870. doi: 10.3934/cpaa.2016.15.853
We establish the existence of nontrivial solutions for the following quasilinear Schrödinger equation with critical Sobolev exponent: \begin{eqnarray} -\Delta u+V(x) u-\Delta [l(u^2)]l'(u^2)u= \lambda u^{\alpha2^*-1}+h(u),\ \ x\in \mathbb{R}^N, \end{eqnarray} where $V(x):\mathbb{R}^N\rightarrow \mathbb{R}$ is a given potential and $l,h$ are real functions, $\lambda\geq 0$, $\alpha>1$, $2^*=2N/(N-2)$, $N\geq 3$. Our results cover two physical models $l(s)=s^{\frac{\alpha}{2}}$ and $l(s) = (1+s)^{\frac{\alpha}{2}}$ with $\alpha\geq 3/2$.
Traveling waves for a diffusive SEIR epidemic model
Zhiting Xu
2016, 15(3): 871-892. doi: 10.3934/cpaa.2016.15.871
In this paper, we propose a diffusive SEIR epidemic model with saturating incidence rate. We first study the well-posedness of the model, and give the explicit formula of the basic reproduction number $\mathcal{R}_0$. We then show that if $\mathcal{R}_0>1$, there exists a positive constant $c^*>0$ such that for each $c>c^*$ the model admits a nontrivial traveling wave solution, and that if $\mathcal{R}_0\leq1$ and $c\geq 0$ (or $\mathcal{R}_0>1$ and $c\in[0,c^*)$), the model has no nontrivial traveling wave solutions. Consequently, we confirm that the constant $c^*$ is indeed the minimal wave speed. The proof of the main results is mainly based on the Schauder fixed point theorem and the Laplace transform.

Qualitative properties of solutions to an integral system associated with the Bessel potential
Lu Chen, Zhao Liu and Guozhen Lu
2016, 15(3): 893-906. doi: 10.3934/cpaa.2016.15.893
In this paper, we study a differential system associated with the Bessel potential: \begin{eqnarray}\begin{cases} (I-\Delta)^{\frac{\alpha}{2}}u(x)=f_1(u(x),v(x)),\\ (I-\Delta)^{\frac{\alpha}{2}}v(x)=f_2(u(x),v(x)), \end{cases}\end{eqnarray} where $f_1(u(x),v(x))=\lambda_1u^{p_1}(x)+\mu_1v^{q_1}(x)+\gamma_1u^{\alpha_1}(x)v^{\beta_1}(x)$, $f_2(u(x),v(x))=\lambda_2u^{p_2}(x)+\mu_2v^{q_2}(x)+\gamma_2u^{\alpha_2}(x)v^{\beta_2}(x)$, $I$ is the identity operator and $\Delta=\sum_{j=1}^{n}\frac{\partial^2}{\partial x^2_j}$ is the Laplacian operator in $\mathbb{R}^n$. Under some appropriate conditions, this differential system is equivalent to an integral system of the Bessel potential type. By the regularity lifting method developed in [4] and [18], we obtain the regularity of solutions to the integral system. We then apply the method of moving planes to obtain radial symmetry and monotonicity of positive solutions. We also establish a uniqueness theorem for radially symmetric solutions. Our nonlinear terms $f_1(u(x), v(x))$ and $f_2(u(x), v(x))$ are quite general, and our results substantially extend the earlier ones, even in the case of a single equation.

On the differentiability of the solutions of non-local Isaacs equations involving $\frac{1}{2}$-Laplacian
Imran H. Biswas and Indranil Chowdhury
2016, 15(3): 907-927. doi: 10.3934/cpaa.2016.15.907
We derive $C^{1,\sigma}$-estimates for the solutions of a class of non-local elliptic Bellman-Isaacs equations. These equations are fully nonlinear and are associated with infinite horizon stochastic differential game problems involving jump-diffusions. The non-locality is represented by the presence of a fractional order diffusion term, and we deal with the particular case of the $\frac{1}{2}$-Laplacian, where the order $\frac{1}{2}$ is known as the critical order in this context. More importantly, these equations are not translation invariant, and we prove that the viscosity solutions of such equations are $C^{1,\sigma}$, making the equations classically solvable.

Oscillatory integrals related to Carleson's theorem: fractional monomials
Shaoming Guo
2016, 15(3): 929-946. doi: 10.3934/cpaa.2016.15.929
Stein and Wainger [21] proved the $L^p$ bounds of the polynomial Carleson operator for all integer-power polynomials without linear term. In the present paper, we partially generalise this result to all fractional monomials in dimension one.
Moreover, the connections with Carleson's theorem and the Hilbert transform along vector fields or (variable) curves are also discussed in detail.

Layer solutions for an Allen-Cahn type system driven by the fractional Laplacian
Yan Hu
2016, 15(3): 947-964. doi: 10.3934/cpaa.2016.15.947
We study entire solutions in $R$ of the nonlocal system $(-\Delta)^{s}U+\nabla W(U)=(0,0)$ where $W:R^{2}\rightarrow R$ is a double well potential. We seek solutions $U$ which are heteroclinic in the sense that they connect at infinity a pair of global minima of $W$ and are also global minimizers. Under some symmetry assumptions on the potential $W$, we prove the existence of such solutions for $s>\frac{1}{2}$, and give the asymptotic behavior as $x\rightarrow\pm\infty$.

Infinitely many solutions for nonlinear Schrödinger system with non-symmetric potentials
Weiwei Ao, Liping Wang and Wei Yao
2016, 15(3): 965-989. doi: 10.3934/cpaa.2016.15.965
Without any symmetry conditions on the potentials, we prove that the following nonlinear Schrödinger system \begin{eqnarray} \left\{\begin{array}{ll} \Delta u-P(x)u+\mu_1u^3+\beta uv^2=0, \quad &\mbox{in} \quad R^2\\ \Delta v-Q(x)v+\mu_2v^3+\beta vu^2=0, \quad &\mbox{in} \quad R^2 \end{array} \right. \end{eqnarray} has infinitely many non-radial solutions for suitable decay rates at infinity of the potentials $P(x)$ and $Q(x)$. This is the continuation of the work [8]. In particular, when $P(x)$ and $Q(x)$ are symmetric, this result has been proved in [18].

Ground state solutions for fractional Schrödinger equations with critical Sobolev exponent
Kaimin Teng and Xiumei He
2016, 15(3): 991-1008. doi: 10.3934/cpaa.2016.15.991
In this paper, we establish the existence of ground state solutions for fractional Schrödinger equations with a critical exponent. The methods used here are based on the $s$-harmonic extension technique of Caffarelli and Silvestre, the concentration-compactness principle of Lions, and methods of Brezis and Nirenberg.

Global regular solutions to two-dimensional thermoviscoelasticity
Jerzy Gawinecki and Wojciech M. Zajączkowski
2016, 15(3): 1009-1028. doi: 10.3934/cpaa.2016.15.1009
A two-dimensional thermoviscoelastic system of Kelvin-Voigt type with strong dependence on temperature is considered. The existence and uniqueness of a global regular solution is proved without small data assumptions. The global existence is proved in two steps. First a global a priori estimate is derived by applying the theory of anisotropic Sobolev spaces with a mixed norm. Then local existence, proved by the method of successive approximations for a sufficiently small time interval, is extended step by step in time. By a two-dimensional solution we mean that all its quantities depend on two space variables only.

Inversion of the spherical Radon transform on spheres through the origin using the regular Radon transform
Sunghwan Moon
2016, 15(3): 1029-1039. doi: 10.3934/cpaa.2016.15.1029
A spherical Radon transform whose integral domain is a sphere has many applications in partial differential equations as well as tomography. This paper is devoted to the spherical Radon transform which assigns to a given function its integrals over the set of spheres passing through the origin.
We present a relation between this spherical Radon transform and the regular Radon transform, and we provide a new inversion formula for the spherical Radon transform using this relation. Numerical simulations were performed to demonstrate the suggested algorithm in dimension 2.

Bogdanov-Takens bifurcation of codimension 3 in a predator-prey model with constant-yield predator harvesting
Jicai Huang, Sanhong Liu, Shigui Ruan and Xinan Zhang
2016, 15(3): 1041-1055. doi: 10.3934/cpaa.2016.15.1041
Recently, we (J. Huang, Y. Gong and S. Ruan, Discrete Contin. Dynam. Syst. B 18 (2013), 2101-2121) showed that a Leslie-Gower type predator-prey model with constant-yield predator harvesting has a Bogdanov-Takens singularity (cusp) of codimension 3 for some parameter values. In this paper, we prove analytically that the model undergoes a Bogdanov-Takens bifurcation (cusp case) of codimension 3. To confirm the theoretical analysis and results, we also perform numerical simulations for various bifurcation scenarios, including the existence of two limit cycles, the coexistence of a stable homoclinic loop and an unstable limit cycle, supercritical and subcritical Hopf bifurcations, and a homoclinic bifurcation of codimension 1.

Traveling wave solutions in a nonlocal reaction-diffusion population model
Bang-Sheng Han and Zhi-Cheng Wang
2016, 15(3): 1057-1076. doi: 10.3934/cpaa.2016.15.1057
This paper is concerned with a nonlocal reaction-diffusion equation of the form \begin{eqnarray} \frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+u\left\{ 1+\alpha u-\beta u^{2}-(1+\alpha-\beta)(\phi\ast u) \right\}, \quad (t,x)\in (0,\infty) \times \mathbb{R}, \end{eqnarray} where $\alpha$ and $\beta$ are positive constants, $0<\beta<1+\alpha$. We prove that there exists a number $c^*\geq 2$ such that the equation admits a positive traveling wave solution connecting the zero equilibrium to an unknown positive steady state for each speed $c>c^*$. At the same time, we show that there are no such traveling wave solutions for speeds $c<2$. For sufficiently large speed $c>c^*$, we further show that the steady state is the unique positive equilibrium. Using the lower and upper solutions method, we also establish the existence of monotone traveling wave fronts connecting the zero equilibrium and the positive equilibrium. Finally, for the specific kernel function $\phi(x):=\frac{1}{2\sigma}e^{-\frac{|x|}{\sigma}}$ ($\sigma>0$), we show by numerical simulations that the traveling wave solutions may connect the zero equilibrium to a periodic steady state as $\sigma$ is increased. Furthermore, by stability analysis we explain why and when a periodic steady state can appear.
Computational chemistry

Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into efficient computer programs, to calculate the structures and properties of molecules and solids. Its necessity arises from the fact that, apart from relatively recent results concerning the hydrogen molecular ion (see references therein for more details), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict hitherto unobserved chemical phenomena. Computational chemistry is widely used in the design of new drugs and materials. Examples of such properties are structure (i.e. the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity or other spectroscopic quantities, and cross sections for collision with other particles. The methods employed cover both static and dynamic situations. In all cases the computer time and other resources (such as memory and disk space) increase rapidly with the size of the system being studied. That system can be a single molecule, a group of molecules, or a solid. Computational chemistry methods range from highly accurate to very approximate; highly accurate methods are typically feasible only for small systems. Ab initio methods are based entirely on quantum mechanics and basic physical constants. Other methods are called empirical or semi-empirical because they employ additional empirical parameters. Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born–Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable. In some cases, the details of electronic structure are less important than the long-time phase space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are employed, as they are computationally less intensive than electronic calculations, to enable longer simulations of molecular dynamics.
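As a flavour of what such a classical approximation to the potential energy surface looks like, here is a minimal sketch (an illustration of my own, not drawn from any particular simulation package) of the Lennard-Jones pair potential, about the simplest force-field ingredient used in molecular dynamics; the parameters are the common textbook values for argon.

```python
# Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6),
# a simple classical approximation to a nonbonded potential energy
# surface. Parameters below are the standard textbook values for argon;
# this is an illustration, not production molecular dynamics.

EPS   = 0.0104   # well depth, eV (Ar-Ar, eps/kB ~ 120 K)
SIGMA = 3.40     # zero-crossing separation, angstrom (Ar-Ar)

def lj_energy(r_angstrom):
    """Energy in eV of one Ar-Ar pair at separation r (angstrom)."""
    sr6 = (SIGMA / r_angstrom) ** 6
    return 4.0 * EPS * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0) * SIGMA   # analytic minimum, ~3.82 angstrom
for r in (3.40, r_min, 4.50, 6.00):
    print(f"r = {r:5.2f} A   V = {1000.0 * lj_energy(r):8.3f} meV")
```

Full molecular-mechanics force fields such as the MM2 method mentioned below add bonded terms (bond stretching, angle bending, torsions) on top of nonbonded terms like this one.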
Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods, such as machine learning based on physicochemical properties. One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target.

History

Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow. With the development of efficient computer technology in the 1940s, the solution of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were carried out. Theoretical chemists became extensive users of the early digital computers. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe.[1] The first ab initio Hartree–Fock calculations on diatomic molecules were carried out in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960.[2] The first polyatomic calculations using Gaussian orbitals were carried out in the late 1950s. The first configuration interaction calculations were carried out in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers.[3] By 1971, when a bibliography of ab initio calculations was published,[4] the largest molecules included were naphthalene and azulene.[5][6] Abstracts of many earlier developments in ab initio theory have been published by Schaefer.[7] In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine the electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) on molecules ranging in complexity from butadiene and benzene to ovalene were generated on computers at Berkeley and Oxford.[8] These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.[9] In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL and POLYATOM began to be used to speed up ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now massively expanded, remains in use, though many other programs have since appeared.
At the same time, the methods of molecular mechanics, such as MM2, were developed, primarily by Norman Allinger.[10] One of the first mentions of the term "computational chemistry" can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality."[11] During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry.[12] The Journal of Computational Chemistry was first published in 1980. Computational chemistry has featured in a number of Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry.[13] Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".[14]

Fields of application

The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. There are two different aspects to computational chemistry:

• Computational studies can be carried out to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
• Computational studies can be used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms that are not readily studied by experimental means.

Thus, computational chemistry can assist the experimental chemist, or it can challenge the experimental chemist to find entirely new chemical objects. Several major areas may be distinguished within computational chemistry:

• The prediction of molecular structures by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the positions of the nuclei are varied.
• Storing and searching for data on chemical entities (see chemical databases).
• Identifying correlations between chemical structures and properties (see QSPR and QSAR).
• Computational approaches to help in the efficient synthesis of compounds.
• Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).

The words exact and perfect do not appear here, as very few aspects of chemistry can be computed exactly. However, almost every aspect of chemistry can be described in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation.
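For reference, the equation being approximated can be written down compactly. Below is the standard textbook form (not quoted from this article) of the time-independent Schrödinger equation together with the clamped-nuclei electronic Hamiltonian in atomic units, where i, j run over electrons and A, B over nuclei with charges Z_A:

```latex
\hat{H}\,\Psi = E\,\Psi,
\qquad
\hat{H}_{\text{el}} =
  -\sum_{i} \tfrac{1}{2}\nabla_i^{2}
  \;-\; \sum_{i,A} \frac{Z_A}{r_{iA}}
  \;+\; \sum_{i<j} \frac{1}{r_{ij}}
  \;+\; \sum_{A<B} \frac{Z_A Z_B}{R_{AB}}
```

The electron–electron term is what prevents an analytic solution for more than one electron and drives the entire hierarchy of approximations discussed below.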
In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved at greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational expense of fully relativistic methods. This complicates the study of molecules containing heavy atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of molecules that contain up to about 40 electrons with sufficient accuracy. Errors for energies can be less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometres and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen electrons is computationally tractable by approximate methods such as density functional theory (DFT). There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods in what is called molecular mechanics. In QM/MM methods, small portions of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).

A single molecular formula can represent a number of molecular isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization. The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies, if harmonic motion is assumed. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains the second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one, and is usually of little interest.
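The classification just described is mechanical enough to state in a few lines of code. The sketch below is a minimal illustration (the 2×2 Hessian is a made-up toy; real chemical Hessians are 3N×3N for N atoms):

```python
import numpy as np

def classify_stationary_point(hessian, tol=1e-8):
    """Classify a stationary point from the signs of its Hessian eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(hessian)  # symmetric matrix -> real eigenvalues
    negatives = int(np.sum(eigenvalues < -tol))
    if negatives == 0:
        return "local minimum"          # all frequencies real
    if negatives == 1:
        return "transition structure"   # exactly one imaginary frequency
    return "higher-order saddle point"  # usually of little chemical interest

# Toy 2x2 Hessian with one negative eigenvalue: a transition structure
h = np.array([[2.0, 0.5],
              [0.5, -1.0]])
print(classify_stationary_point(h))  # -> transition structure
```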
When such a higher-order stationary point is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.

The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclear positions and the repulsion energy of the nuclei. A notable exception is certain approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems, the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are the following:

Ab initio methods

The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theoretical principles, with no inclusion of experimental data – are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods have to be employed, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).

[Figure: Diagram illustrating various ab initio electronic structure methods in terms of energy; spacings are not to scale.]

The simplest type of ab initio electronic structure calculation is the Hartree–Fock (HF) scheme, an extension of molecular orbital theory, in which the correlated electron–electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations (known as post-Hartree–Fock methods) begin with a Hartree–Fock calculation and subsequently correct for electron–electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. In order to obtain exact agreement with experiment, it is necessary to include relativistic and spin–orbit terms, both of which are only really important for heavy atoms. In all of these approaches, in addition to the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the LCAO ansatz. Ab initio methods need to define a level of theory (the method) and a basis set. The Hartree–Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is quite inadequate, and several configurations need to be used.
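As a concrete illustration of what an ab initio calculation looks like in practice, the sketch below runs a restricted Hartree–Fock calculation on the hydrogen molecule using the open-source PySCF package. The choice of package, geometry and basis set are mine for illustration; the article does not prescribe any particular program.

```python
from pyscf import gto, scf

# Build H2 near its equilibrium bond length with a minimal (STO-3G) basis set
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g", unit="Angstrom")

# Restricted Hartree-Fock: electron-electron repulsion enters only on average
mf = scf.RHF(mol)
e_hf = mf.kernel()  # iterate the self-consistent field to convergence

print(f"RHF total energy: {e_hf:.6f} Hartree")
```

Enlarging the basis set (for example basis="cc-pvdz") moves the result toward the Hartree–Fock limit mentioned above, and post-Hartree–Fock corrections would then recover part of the missing electron correlation.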
In such multi-configuration approaches, the coefficients of the configurations and the coefficients of the basis functions are optimized together. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, on the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface. A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.

Density functional methods

Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are known as hybrid functional methods.

Semi-empirical and empirical methods

Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They are very important in computational chemistry for treating large molecules where the full Hartree–Fock method without the approximations is too expensive. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Semi-empirical methods follow what are often called empirical methods, where the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann.

Molecular mechanics

In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The resulting set of parameters and functions is called the force field, and the database of compounds used for its parameterization is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, is expected to be relevant only when describing other molecules of the same class.
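To make the "single classical expression" idea concrete, here is a minimal sketch of one force-field term, the harmonic bond stretch. The parameter values are illustrative stand-ins of mine, not numbers from any published force field; a real force field sums many such bonded terms plus angle, torsion and non-bonded contributions.

```python
def bond_stretch_energy(r, r0, k):
    """Harmonic bond term: E = 0.5 * k * (r - r0)**2, the simplest force-field piece."""
    return 0.5 * k * (r - r0) ** 2

# Illustrative parameters for a C-H-like bond (assumed, not from a real force field)
r0 = 0.109    # equilibrium bond length, nm
k = 290000.0  # force constant, kJ/(mol nm^2)

for r in (0.105, 0.109, 0.115):
    print(f"r = {r:.3f} nm -> E = {bond_stretch_energy(r, r0, k):7.2f} kJ/mol")
```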
These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.[15][16]

Methods for solids

Computational chemical methods can be applied to solid state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate it for the entire list of points in the Brillouin zone.

Chemical dynamics

Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. Several methods exist for propagating the wave packet associated with the molecular geometry.

Molecular dynamics

Molecular dynamics (MD) uses either quantum mechanics, Newton's laws of motion or a mixed model to examine the time-dependent behavior of systems, including vibrations, Brownian motion and reactions. MD combined with density functional theory leads to hybrid models.

Interpreting molecular wave functions

The Atoms in Molecules (QTAIM) model of Richard Bader was developed in order to effectively link the quantum mechanical picture of a molecule, as an electronic wavefunction, to chemically useful concepts such as atoms in molecules, functional groups, bonding, the theory of Lewis pairs and the valence bond model. Bader has demonstrated that these empirically useful chemistry concepts can be related to the topology of the observable charge density distribution, whether measured or calculated from a quantum mechanical wavefunction. QTAIM analysis of molecular wavefunctions is implemented, for example, in the AIMAll software package.

Software packages

There are many self-sufficient software packages used by computational chemists. Some include many methods covering a wide range, while others concentrate on a very specific range or even a single method.

Bibliography

• C. J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons (2002).
• T. Clark, A Handbook of Computational Chemistry, Wiley, New York (1985).
• R. Dronskowski, Computational Chemistry of Solid State Materials, Wiley-VCH (2005).
• A. K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009).
• F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons (1999).
• K. I. Ramachandran, G. Deepa and Krishnan Namboori P. K., Computational Chemistry and Molecular Modeling: Principles and Applications, Springer-Verlag GmbH. ISBN 978-3-540-77302-3.
• D. Rogers, Computational Chemistry Using the PC, 3rd Edition, John Wiley & Sons (2003).
• P. v. R. Schleyer (Editor-in-Chief), Encyclopedia of Computational Chemistry, Wiley (1998). ISBN 0-471-96588-X.
• D. Sherrill, Notes on Quantum Mechanics and Computational Chemistry.
• J. Simons, An Introduction to Theoretical Chemistry, Cambridge (2003). ISBN 978-0-521-53047-7.
• A. Szabo and N. S. Ostlund, Modern Quantum Chemistry, McGraw-Hill (1982).
• D. Young, Computational Chemistry: A Practical Guide for Applying Techniques to Real World Problems, John Wiley & Sons (2001).
• D. Young's Introduction to Computational Chemistry.
• Errol G. Lewars, Computational Chemistry: Introduction to the Theory and Applications of Molecular and Quantum Mechanics, Springer (Heidelberg).

External links

• NIST Computational Chemistry Comparison and Benchmark DataBase – a database of thousands of computational and experimental results for hundreds of systems.
• American Chemical Society Division of Computers in Chemistry – resources for grants, awards, contacts and meetings.
• CSTB report: Mathematical Research in Materials Science: Opportunities and Perspectives.
• 3.320 Atomistic Computer Modeling of Materials (SMA 5107) – free MIT course.
• Chem 4021/8021 Computational Chemistry – free University of Minnesota course.
• Technology Roadmap for Computational Chemistry.
• Applications of molecular and materials modelling.
• Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology – CSTB report.
• MD and Computational Chemistry applications on GPUs.
Higgs Boson

Physicist Daniela Bortoletto on the discovery of the Higgs boson, the Large Electron-Positron collider, and the Standard Model of particle physics (November 17, 2016).

The Higgs boson is a particle that was hypothesized in 1964 by Peter Higgs and other theoretical physicists who were trying to understand the origin of the mass of fundamental particles. We know from elementary physics that the mass of an object is the resistance it offers to having its motion changed, as described by Newton's law F = ma. At a microscopic level, the mass of an object, like an apple, comes from its constituent molecules and atoms, which are themselves built from fundamental particles, electrons and quarks. But where do the masses of these fundamental particles come from? The theoretical work to answer this seemingly simple question led to the Higgs boson.

Higgs Boson and Quantum Field Theory

To describe the behavior of elementary particles, physicists use quantum field theory, a theoretical framework based on both relativity and quantum mechanics. Both theories are needed to describe elementary particle physics, since elementary particles are tiny (in fact we treat them as point-like) and they travel almost at the speed of light. In the 1960s, the theoretical physicists who developed what is called the Standard Model of particle physics encountered a puzzle in trying to describe what is called the weak interaction. The weak interaction is the interaction responsible for nuclear fission and for the hydrogen cycle in the sun that allows life on our planet. They found that if they assumed that all particles were massless, then all the equations worked well and they were able to describe both the electromagnetic and the weak interaction in a unified way. The resulting framework was not only mathematically elegant but also explained the experimental data. Nonetheless, they had a fundamental problem. The electromagnetic interaction is exchanged by the photon, which is massless, and its strength falls off as 1/r^2. Therefore, this interaction can have an impact also on large scales. The weak interaction instead is active only on the nuclear scale, and therefore the particles that exchange this interaction, which we denote as the W and Z bosons, should have a mass of about 100 GeV (100 times the mass of a proton). But when theoretical physicists tried to introduce mass for the W and Z bosons in their equations, they no longer got the correct answer.

[Image: Particles of the Standard Model of Physics / dullhunk (flickr.com)]

In 1964 Peter Higgs and other theorists realized that the solution to this problem was, in fact, to avoid putting these mass terms directly into the equations. Instead, to maintain the symmetry present in the equations, they assumed that all of space is uniformly filled with an invisible substance, which we call the Higgs field. This field exerts a drag force on particles when they accelerate through it, and this resistance to acceleration is the particle's mass. As an analogy, we can think about moving an object in water. When you try to move an object in water, the object feels more massive than it does outside of water. The interaction of the object with the water has the effect of increasing the effective mass of the object. This is what happens to particles in the Higgs field.

[Image: Peter Higgs at the Nobel Laureates press conference at the Royal Swedish Academy of Sciences, December 2013 / Wikimedia Commons]

In 1964, Higgs submitted a paper bringing forward this revolutionary concept to a physics journal, which rejected it. Other papers were published the same year, and by the 1980s the Higgs mechanism was fully accepted by the physics community. Nonetheless, to test this idea and to confirm experimentally the existence of the Higgs field permeating the whole universe, the Higgs boson had to be found. In quantum field theory, every field can be excited, and this excitation can sometimes look like a particle and in other cases act like a wave. A particle like an electron is an excitation of the electron field. Similarly, the Higgs field can be excited in the collision of two protons.

Discovery of the Higgs Boson

We found the Higgs boson in 2012, only 4 years ago. Why did it take so long to discover this particle? Finding the Higgs boson was not an easy task, since the Standard Model predicts all the properties of the Higgs boson except its mass. For any given mass, the properties of the Higgs could be calculated, but the mass itself could only be determined experimentally. We only knew that the Higgs boson could not be arbitrarily heavy and that its mass should be below 2000 GeV. We also knew that since the Higgs gives mass to particles, it tends to interact more with heavier particles. In fact, the most efficient way to excite the Higgs field and produce the Higgs boson would be by colliding top quarks, since the top quark is the most massive known elementary particle (in fact it is as massive as an atom of gold). This of course is not possible, since top quarks decay extremely fast. The Higgs boson also decays preferentially to the heaviest particles it can decay to, once conservation of energy and momentum is taken into account. Therefore, it is quite challenging to produce the Higgs boson, and furthermore, if you do not know its mass you will not know how it would decay. This made the search for this particle quite complex. We were truly looking for a needle in a haystack. Many searches for the Higgs boson took place after 1964. Some of the most important searches were performed at CERN's Large Electron-Positron (LEP) collider. This is a collider in which electrons and positrons, circulating in opposite directions, are smashed together; it operated in the same tunnel that is now used for the Large Hadron Collider (LHC). LEP reached a maximum energy of 209 GeV, while the LHC delivered 13,000 GeV, over 60 times the energy of LEP, during its latest run.

[Image: The Large Hadron Collider / Image Editor (flickr.com)]

Electron-positron colliders present some advantages with respect to proton colliders. In electron-positron colliders, two fundamental particles, the electron and the positron, are smashed together. In proton-proton or proton-antiproton colliders, by contrast, we are smashing together the ensembles of quarks, antiquarks and gluons that make up protons and antiprotons. Therefore, electron-positron collisions provide a cleaner, easier environment for experimentalists than proton-proton collisions. Furthermore, in electron-positron colliders the exact energy of the colliding particles is known, and this provides important information for the analysis of events. At LEP all the energy could be used to create Higgs bosons.
Nonetheless, since the mass of electrons (and positrons) is very low, their probability of interacting with the Higgs field and creating Higgs bosons is negligible. Fortunately, the Higgs boson has a large coupling to Z bosons, which can be copiously produced at LEP. This allowed the Higgs boson to be searched for, first by looking for a Z boson decaying to a Higgs boson and a pair of leptons (electrons or muons). Such decays were not observed at LEP, and therefore we concluded that the mass of the Higgs boson had to be larger than 58 GeV. Sometimes at LEP you can also produce what is called a virtual Z boson, a Z boson whose energy and momentum do not add up to the Z mass of 91 GeV. A virtual Z boson is an excitation of the Z quantum field that decays very rapidly and can produce a Z boson together with a Higgs boson. By 2000, when LEP was turned off, experimentalists were able to determine that the Higgs boson's mass should be larger than 114 GeV. This was very close to the maximum mass that could be probed, which was equal to the peak energy of the machine of 209 GeV minus the Z mass of 91 GeV, which yields 118 GeV. In fact, when LEP was turned off there was a possible hint consistent with a Higgs boson of 115 GeV. Nonetheless, the machine was turned off to allow the construction and installation of the Large Hadron Collider in the same tunnel. At that point, the search for the Higgs moved to the Tevatron, which was the highest-energy collider operating at the time. The Tevatron was located at Fermi National Accelerator Laboratory (Fermilab). At the Tevatron, protons collided with antiprotons at a peak energy of almost 2000 GeV. Both protons and antiprotons are mainly made up of gluons and the lightest quarks (up and down quarks) and antiquarks (anti-up and anti-down). Gluons are massless and therefore do not couple directly to the Higgs field. Nonetheless, sometimes the gluon field can be excited in high-energy collisions to produce a top quark field, which can in turn interact with the Higgs field and create Higgs bosons. This indirect mechanism, called gluon fusion, accounts for most Higgs bosons produced in proton-proton or proton-antiproton colliders. Another mechanism, called vector boson fusion, is due to the fact that high-energy protons (or antiprotons) can create pairs of W and Z bosons, which then interact to create a Higgs boson. In associated production, a Higgs boson is produced together with a Z or a W boson.

[Image: Fermi National Accelerator Laboratory, Main Ring and Main Injector as seen from the air / wikipedia.org]

Since the Higgs has a tiny lifetime of only 10^-22 seconds, to detect it we must reconstruct the particles it decays into. It is interesting to note that if the Higgs boson is heavier than twice the mass of the W and Z bosons, it decays predominantly into a pair of these gauge bosons. In fact, the Tevatron was able to exclude the existence of a Higgs boson with a mass between 165 and 175 GeV since, if the Higgs had such a mass, it should have decayed into a W pair and would have been observed at the Tevatron. If the Higgs were heavier than twice the top quark mass, then it would preferentially decay into a top quark pair. The most difficult range in which to find the Higgs boson is when it has a mass between 115 GeV and about 150 GeV. The highest branching fraction for a Higgs with a mass below 135 GeV is into a pair of b-quarks.
Unfortunately, it is very difficult to reconstruct the decay of a Higgs boson into a pair of b-quarks, since we cannot reconstruct the energy of the b-quarks very precisely, and pairs of b-quarks are produced in enormous quantities through other Standard Model processes. By the end of its run in 2011, the Tevatron had established evidence for Higgs decays into a pair of b-quarks at the 3 sigma level, which was not high enough to claim a discovery. The collider that finally allowed us to find the Higgs is the Large Hadron Collider (LHC). This accelerator is located in a 27-kilometer-long underground circular tunnel near Geneva, crossing the border between Switzerland and France. About 9,000 superconducting magnets steer bunches of protons in this racetrack, cycling around in opposite directions. Each bunch contains 100 billion protons travelling almost at the speed of light. At these speeds, the protons go around the tunnel about 11,000 times each second. They are steered by magnets to collide in caverns equipped with giant detectors which can decide extremely rapidly which collision is interesting and should be recorded. The LHC was built with the aim of discovering the Higgs boson. The collision of protons at the LHC started at an energy of 7 trillion electron volts (TeV), and they are now reaching a record energy of 13 TeV. Two of the detectors operating at the LHC were especially designed to discover the Higgs boson in the mass range predicted by the Standard Model. The LHC had two major advantages over the Tevatron. In its first run, the LHC reached a record energy four times higher than the Tevatron's (and now 6.5 times more). At the higher energies of the LHC the production rate of the Higgs boson is much larger than at the Tevatron. The LHC also achieves higher collision rates, and therefore more Higgs bosons can be produced. Therefore, the LHC was able to probe very rare decay modes of the Higgs. As we now know, the Higgs boson has a mass of 125 GeV. A Higgs boson of this mass can decay to a pair of photons. Photons are massless and do not interact with the Higgs field. Top quarks carry electric charge and therefore can interact with photons. Top quarks also interact with the Higgs field, since they are very massive. Therefore, the Higgs boson can decay indirectly into two photons, through top quarks, but the probability for this decay is only 0.23 per cent.

[Image: Peter Higgs portrait by Ken Currie, University of Edinburgh / dullhunk (flickr.com)]

Nonetheless, this decay is very important since it yields a very clean signature. By measuring the energy and momentum of the two photons, one can determine the existence of the Higgs boson by observing an excess of events at the mass of the Higgs above a smooth background. Higgs bosons in this mass range can also decay into a "real" W or Z and a "virtual" W or Z, but these decays are less frequent than if the Higgs were heavy enough to decay into two real particles. Just to give an order of magnitude, only 1 in 300 Higgs bosons decays to two Z particles, and a Higgs particle lighter than about 130 GeV only decays to two Z particles about 1.5 percent of the time. The Z boson then has a very distinctive decay into an electron and a positron, or a positive and a negative muon, which can easily be detected in the experiment. By combining the two-photon, ZZ and WW decay channels, both ATLAS and CMS were able to reach the 5 sigma significance that is required in particle physics experiments to claim a discovery.
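The needle-in-a-haystack character of the search can be illustrated with a rough back-of-the-envelope count. In the sketch below, the cross-section and integrated luminosity are ballpark figures I am supplying for illustration (roughly 20 pb for gluon fusion at 8 TeV and roughly 20 fb^-1 per experiment in Run 1); only the 0.23 per cent branching ratio comes from the text.

```python
# Order-of-magnitude estimate of Higgs -> two-photon events in LHC Run 1.
# The cross-section and luminosity are assumed ballpark inputs, not measured values.
sigma_higgs_pb = 20.0   # approx. gluon-fusion cross-section at 8 TeV, picobarns
luminosity_fb = 20.0    # approx. integrated luminosity per experiment, fb^-1
br_diphoton = 0.0023    # H -> two photons branching ratio (0.23%, from the text)

n_higgs = sigma_higgs_pb * 1000 * luminosity_fb  # 1 pb = 1000 fb
n_diphoton = n_higgs * br_diphoton

print(f"Higgs bosons produced: ~{n_higgs:,.0f}")
print(f"H -> two photons (before detector efficiency): ~{n_diphoton:,.0f}")
```

A few hundred signal photon pairs must then be picked out from a vastly larger smooth diphoton background, which is why the signal only becomes visible as a small bump in the mass spectrum.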
The discovery of the Higgs boson was announced on July 4, 2012. I was at CERN and witnessed the excitement at the lab, watching the presentations given by the spokespersons of ATLAS and CMS from the council chamber at CERN with members of the press. It was a true triumph for theoretical and experimental particle physics.

ATLAS and CMS Experiments

Since then, the two experiments, ATLAS and CMS, collected 4 times more data at 8,000 GeV during the first run of the LHC, called Run 1. During 2014 and 2015, the LHC was upgraded to reach an energy of 13,000 GeV, and the experiments have already collected more data than they had during Run 1. We have already determined many properties of the Higgs boson. The rate of the Higgs decay into W and Z bosons, which are spin-1 particles, has been measured very precisely, and it agrees with the predictions of the Standard Model. We know that the Higgs couples to fermions, spin-1/2 particles such as quarks and leptons, since it couples indirectly to the top quark through its production by gluon fusion and its decay to photons. We have also observed the decay of the Higgs into tau leptons, but the decay into two b-quarks remains unobserved. Studies of the decays of the Higgs show that it is a spin-zero particle. All measurements made up to now agree with the Standard Model; nonetheless, we have probed only the couplings to quarks and leptons belonging to the third generation. There is still more work to do.

[Image: ATLAS Experiment Searches / berkeleylab (flickr.com)]

LHC and Future Directions of Research

Up to now, we have collected only about 2% of the data that we expect to collect at the LHC and its planned upgrade, called the HL-LHC. In the Standard Model, we have 3 generations of quarks and leptons. By now we have probed only the couplings to quarks and leptons in the third generation, which is the most massive. More data is needed to probe the couplings to quarks and leptons in the second generation. We also hope that measurements of the Higgs can bring surprises. For example, the Higgs might couple to invisible particles and provide a window to the discovery of dark matter particles. With more data we should also be able to measure the interaction of the Higgs boson with itself and therefore determine the shape of the Higgs potential. This is extremely interesting, since it could reveal whether our universe has evolved to a minimum of the Higgs potential or whether there are other minima. This measurement could tell us if the Universe is stable, or if it is "metastable", which means that it is stable on cosmological time scales but ultimately headed towards a collapse. The currently measured masses of the top quark and the Higgs boson seem to indicate that, if there is no new physics up to the Planck scale, the universe is in a metastable state. This means that the minimum of the Higgs potential is a local minimum and not a global minimum. Nonetheless, the most recent analysis concluded that it is possible for the Universe to be fully stable if the true values of the measured parameters are only 1.3 standard deviations away from the current best estimates. More precise measurements of the Higgs boson mass, the top-quark mass, the strong coupling constant and other parameters will help to answer this question.
Open Questions in Particle Physics

Finding the Higgs boson has completed the Standard Model of elementary particles, but many questions remain unanswered. Furthermore, the Higgs boson itself has opened new questions. The most striking question is why the Higgs boson has a mass of only 125 GeV. The Higgs boson is a spin-0 particle which interacts through virtual loops with most particles and gains mass through these interactions. Therefore, it is very difficult for the Higgs boson to remain light without invoking an additional symmetry. This is called the hierarchy problem. Supersymmetry offers an elegant solution to this problem by postulating the existence of a new symmetry between bosons, which have integer spin, and fermions, which have half-integer spin. According to this symmetry, for each spin-1/2 fermion in the Standard Model, like the top quark, there is a supersymmetric particle with integer spin, the stop. Similarly, for each boson in the Standard Model there is a supersymmetric particle with spin 1/2. The contributions of Standard Model particles to the Higgs mass are cancelled by the contributions of their super-partners. Supersymmetry is also interesting because it provides a particle, the lightest supersymmetric particle, which could explain the dark matter observed in the universe.

Daniela Bortoletto, Professor of Physics, ATLAS Oxford Group, University of Oxford
Famous Scientists A-Z

• al-Khwārizmī, Muḥammad ibn Mūsā (c. 780 – c. 850, Iran). Mathematics and astronomy. Al-Khwarizmi formalized the Arabic numerals 0-9, which he transferred from the Indians. The four basic arithmetic operations (+ - × ÷), algebra and algorithms all derive from the Latin spelling of his name. One of the first famous astronomers in history; his astronomical tables contain the movements of the sun, the moon and the five known planets.
• Alberti, Leon Battista (c. 1404 – c. 1472, Italy). Mathematics and physics. Leon Battista Alberti was a versatile talent. He studied physics, mathematics, law and art. He was also an inventor, architect and author in several languages, as well as a talented rider, athlete, musician and composer.
• Alzheimer, Alois (1864-1915, Germany). Medicine and psychology. In 1906 Alois Alzheimer presented his famous study "An unusual illness of the cerebral cortex". He had previously found protein deposits in the brain of his "demented" patient, Auguste, from the insane asylum at Frankfurt am Main, Germany.
• Ampère, André M. (1775-1836, France). Mathematics and physics. André Ampère is regarded as the founder of electrodynamics. He discovered that electric currents exert attractive and repulsive forces on each other, which is the cause of magnetism. The unit for measuring the strength of an electric current, the "amp", is named after him.
• Anaxagoras (c. 510 BC – c. 428 BC, Greece). Astronomy and philosophy. Anaxagoras was an all-rounder. He founded meteorology and found the causes of wind, clouds, thunder and lightning, moon phases and eclipses. Anaxagoras also conducted experiments on the body, the brain and more.
• Anning, Mary (1799-1847, England). Paleontology. Mary Anning was a fossil collector and paleontologist who became famous for her Jurassic fossil finds in Dorset, England, which she sold to collectors to earn a living. Among her important discoveries were an ichthyosaur skeleton and two complete plesiosaur skeletons. She became an authority in geological circles and was often consulted on related issues, but as a woman she was not permitted to join the Geological Society of London and correspondingly lost out on the credit for some of her contributions. She became more famous in the early 20th century and is reputedly the seashell-seller behind the 1908 tongue-twister "She sells seashells on the seashore".
• Archimedes (c. 287 BC – c. 212 BC, Greece). Mathematics, physics and mechanics. Archimedes was a multi-talented genius. He approximated the number pi and anticipated modern integral calculus. Archimedes discovered the principle of buoyancy when a full bath tub overflowed as he stepped into it: the volume of a body is equal to the amount of water it displaces, which is very important when building ships. He invented the "Archimedes screw" to pump water and developed a system for calculating large numbers. Archimedes was also a shipbuilder and designed weapons and war techniques, including catapults. One of the first most famous scientists of all time.
• Aristarchus of Samos (c. 310 BC – c. 230 BC, Greece). Astronomy and mathematics. Aristarchus of Samos calculated the distance from the sun to the moon and their sizes. He was the first to propose the heliocentric system (the earth revolving around the sun) and the sphere of fixed stars. Copernicus later took up his teachings.
• Aristotle (c. 384 BC – c. 322 BC, Greece). Physics, zoology and philosophy. Aristotle defined the method of exact research. His writings have inspired scientists for millennia. He founded the Lyceum in Athens, was a student of Plato and teacher of Alexander the Great. Aristotle is considered the father of zoology and wrote widely on topics such as reproduction and hereditary transmission. One of the first famous scientists of all time.
• Avicenna (980-1037, Iran). Medicine and philosophy. Avicenna was a child prodigy of the Middle Ages. He was a doctor, physician, mathematician, astronomer, chemist, theologian, geologist, lawyer and inventor, and also wrote poetry. Avicenna led a life made for the movies and wrote two encyclopedias of medicine covering diagnoses, treatments, prevention, hygiene, medicinal plants, surgery, cosmetics and drugs.
• Becquerel, Henri (1852-1908, France). Physics. Nobel Prize 1903. Henri Becquerel discovered radioactivity when putting uranium salts on a photographic plate, which then turned black. He found that uranium emits radiation naturally and earned the Nobel Prize in 1903 for his work.
• Behring, Emil von (1854-1917, Germany). Medicine, physiology and immunology. Emil von Behring was a student of Robert Koch's. He found an antitoxin healing agent against diphtheria, initially in the form of iodine trichloride. Treated animals were subsequently immune to diphtheria, and Behring then used their blood to produce a serum.
• Blackburn, Elizabeth Helen (born 1948, Australia / USA). Molecular biology. Nobel Prize 2009. Elizabeth Blackburn co-discovered telomerase, an enzyme that prevents the telomeres of chromosomes from becoming shorter during replication. This earned her and two others the 2009 Nobel Prize in Physiology or Medicine.
• Bohr, Niels (1885-1962, Denmark). Physics. Nobel Prize 1922. Niels Bohr showed that the energy of electrons in atoms is quantized, changing in small jumps (quanta) rather than continuously. Bohr laid down the foundations of modern atomic physics and received the Nobel Prize in 1922. One of the most famous scientists of all time.
• Boltzmann, Ludwig (1844-1906, Austria). Physics. Ludwig Eduard Boltzmann is most famous for developing statistical mechanics, one of the building blocks of modern physics, and his work prepared the ground for quantum theory. Boltzmann's name is also connected to two physical constants (both developed by other scientists), as well as theories, equations and distributions. In 1899 he was elected a Fellow of the Royal Society (FRS). In later years his lectures on natural philosophy were especially well received.
• Bolyai, János (1802-1860, Hungary). Mathematics. János Bolyai led an exciting life alternating between military service, studying mathematics, learning nine languages, playing the violin and numerous duels. Bolyai is most famous for his treatise on geometry, which dispensed with Euclid's parallel postulate.
• Boyle, Robert (1627-1691, Great Britain). Physics and chemistry. Robert Boyle was one of Britain's most famous scientists. Boyle explored the properties of air and gases and discovered that (at a constant temperature) pressure and volume are inversely proportional to one another. He confirmed Galileo's "free fall" law, defined the term "analysis" (Greek: "resolution"), studied acids and bases and, in doing so, discovered acetone and methanol.
His major work was "The Sceptical Chymist" (1661).
• Bradley, James (1693-1762, Great Britain). Astronomy. James Bradley discovered the aberration of light in 1725 (proof of the heliocentric worldview) and calculated from it the speed of light: about 300,000 km per second.
• Braun, Wernher von (1912-1977, Germany). Physics and astronomy. Wernher von Braun was a significant rocket designer. He was launching rockets as early as 1934 and later developed the V2 in Nazi Germany. In 1945 he emigrated to the United States where, as a NASA employee, he and other famous scientists constructed the first moon rockets.
• Brown, Robert (1773-1858, Great Britain). Biology. Robert Brown discovered the cell nucleus in 1831 but did not attach any importance to it. Its significance was recognized later by botanist Matthias Schleiden (1804-1881) and zoologist Theodor Schwann (1810-1882), who discovered that the whole plant consists of cells that form an "independent living community".
• Bunsen, Robert Wilhelm (1811-1899, Germany). Chemistry. Robert Bunsen is best known for improving laboratory burners (the "Bunsen burner"). Together with Gustav Kirchhoff, Bunsen developed spectral analysis, which aids the detection and study of chemical elements.
• Cassini, Giovanni Domenico (1625-1712, Italy / France). Astronomy and engineering. Giovanni Cassini measured the rotational periods of Mars and Jupiter. He discovered four moons of Saturn (Iapetus, Rhea, Tethys and Dione) and the Cassini division, the dark gap in the rings of Saturn. NASA named its 1997 satellite, which orbited Saturn and its moons, after Cassini. In 1672 Cassini, in Paris, and Jean Richer, in French Guiana, made simultaneous observations of Mars. They used the principle of parallax to calculate the distance between Earth and Mars. Together with existing planetary distances and ratios, Cassini calculated the size of the Solar System.
• Cavendish, Henry (1731-1810, Great Britain). Chemistry and physics. Henry Cavendish was a wealthy, eccentric loner and misogynist. He was regarded as a pioneer of modern chemistry. He weighed and measured many gases and elements (before and after combustion) and discovered, among other things, the element hydrogen.
• Celsius, Anders (1701-1744, Sweden). Astronomy. Anders Celsius initially proposed setting the boiling point of water at 0 and the freezing point at 100 degrees. It wasn't until a year after his death, in 1745, that this scale was turned on its head by Carl Linnaeus and the freezing point became zero.
• Chadwick, James (1891-1974, Great Britain). Physics. Nobel Prize 1935. James Chadwick proved the existence of the neutron in 1932 and built Britain's first cyclotron particle accelerator; his work paved the way for the first nuclear chain reaction.
• Châtelet, Émilie du (1706-1749, France). Mathematics and physics. Émilie du Châtelet translated and commented on Isaac Newton's Principia, which detailed the basic laws of physics. With this she made a considerable contribution to Newtonian mechanics. She published her most famous work, Foundations of Physics, in 1740; it was republished in several languages and caused much debate. She was Voltaire's paramour and lived with him from 1733.
• Copernicus, Nicolaus (1473-1543, Germany / Poland). Astronomy and mathematics. Nicolaus Copernicus shattered the old worldview in 1543. Copernicus found that the Earth, which rotates on its own axis every 24 hours, is one of many planets revolving around the fixed sun. He further concluded that the moon rotates in a circular orbit around the Earth and that the fixed stars don't move.
• Curie, Marie (1867-1934, Poland / France). Physics and chemistry. Nobel Prizes 1903 and 1911. Marie Curie (or Marie Skłodowska Curie, born Maria Salomea Skłodowska) was born in Poland and later became French. Her work on radioactivity led, among other things, to the mobile X-ray units of the First World War. The first chemical element she discovered was polonium, which she named after her native country. Marie Curie was the first woman to win a Nobel Prize and the first person to do so in two categories: physics in 1903 and chemistry in 1911. When she died of aplastic anemia, caused by her frequent exposure to radiation, she also became the first woman to be entombed on her own merits in the Panthéon in Paris. One of the most famous scientists of all time.
• Da Vinci, Leonardo (1452-1519, Italy). Medicine, physics and astronomy. Leonardo da Vinci was not only an artistic genius but also a doctor, architect, astronomer and engineer. His irrepressible curiosity drove him to explore (almost) everything, and he is one of the best known famous scientists. Da Vinci studied humankind and nature and drew hundreds of anatomical drawings. He developed hydraulics, supervised the construction of canals, locks and aqueducts, and is considered the inventor of portable bridges, flamethrowers, tanks, submarines and parachutes, as well as tools such as levers, saws, heating and lighting systems.
• Dalton, John (1766-1844, Great Britain). Physics. John Dalton proposed that matter is composed of atoms that are indivisible and indestructible and have a weight; that all atoms of a given element are the same, but that every element has different atoms; and that hydrogen atoms are the lightest, their weight being used to determine all atomic weights.
• Darwin, Charles (1809-1882, Great Britain). Biology. Charles Darwin is the father of the theory of evolution and one of the most famous scientists of all time. Darwin traveled around the world for four years. He studied fossils and concluded that stronger and fitter life forms always prevail and adapt. He described himself as agnostic.
• Davy, Humphry (1778-1829, Great Britain). Physics and chemistry. Humphry Davy was a pioneer of the theory of electricity. Using electric currents he was able to isolate elements such as calcium, barium and strontium for the first time, and he developed, among other things, a safety lamp for miners.
• Descartes, René (1596-1650, France). Mathematics and philosophy. René Descartes was one of France's most famous scientists. He was regarded, along with Pierre de Fermat (1601-1665), as one of the fathers of analytic geometry, and he was a leading figure in the scientific revolution. Descartes set new standards with his work on dynamics, optics and astronomy. Descartes' most famous quote is: I think, therefore I am (cogito ergo sum).
• Dirac, Paul (1902-1984, Great Britain). Physics. Nobel Prize 1933. Dirac was one of the founding fathers of quantum mechanics. He is often said to have been autistic, which might explain his modest, socially awkward and reserved nature. Dirac established the most general theory of quantum mechanics. He predicted the existence of antimatter and discovered the relativistic equation for the wave function of the electron, known as the Dirac equation. In 1930 Dirac published "The Principles of Quantum Mechanics", which remains a standard textbook today. Dirac received the Nobel Prize in Physics in 1933. In 1973 he received the Order of Merit, having earlier refused a knighthood. One of the most famous scientists of the 20th century.
• Doppler, Christian (1803-1853, Austria). Physics and mathematics. Christian Doppler calculated the changes in the frequency of waves from a moving source (the "Doppler effect").
When the observer and source approach each other the frequency increases; when they move away from each other, it decreases.
• Edison, Thomas Alva (1847-1931, USA). Physics. Thomas Edison was self-taught and only attended school for three months. He ran experiments and developed many inventions, including a film recording device, a microphone and the gramophone, all of which he financed himself. Edison became one of the most famous scientists for developing the first commercially practical light bulb, using a thread of carbon.
• Ehrlich, Paul (1854-1915, Germany). Medicine. Nobel Prize 1908. Paul Ehrlich was the founder of chemotherapy and a researcher into immunity and serum therapy. He examined corpuscles and, in his theory of "side-chains", explained the formation of antibodies. Paul Ehrlich also researched sleeping sickness and developed the first effective agent, "Salvarsan", against syphilis.
• Elion, Gertrude Belle (1918-1999, USA). Biochemistry and pharmacology. Nobel Prize 1988. Together with George H. Hitchings and Sir James Black, Gertrude Elion developed various important drugs, work that in the end led to the development of the AIDS drug AZT. In 1988, Elion shared the Nobel Prize in Physiology or Medicine with Hitchings and Black for this work. She also developed azathioprine, an immunosuppressive drug used for organ transplants, and was the first woman to be inducted into the National Inventors Hall of Fame.
• Einstein, Albert (1879-1955, Germany). Mathematics and physics. Nobel Prize 1921. Albert Einstein's two theories of relativity revolutionized the understanding of matter, space, time and gravitation. Everything is relative to the respective observation system, including time; therefore, there is no absolute simultaneity. The only constant is the speed of light, which cannot be exceeded. Einstein concluded that energy is mass times the speed of light (c, from the Latin celeritas, meaning speed) squared: E = mc². Matter is condensed energy, and every gram of mass contains huge amounts of energy. An insight that (unfortunately) also led to the construction of the atomic bomb. One of the most famous scientists of all time.
• Empedocles (c. 495 BC – c. 435 BC, Greece). Medicine and biology. Empedocles was the founder of the four-element doctrine: the four elements (fire, air, water, earth) make everything in the world. Empedocles described the flow of blood to and from the heart and recognized that skin can breathe.
• Eudoxus of Cnidus (c. 408 BC – c. 355 BC, Greece). Mathematics and astronomy. Eudoxus was the creator of the doctrine of ratio equations and of volume calculations for circles, spheres, cones and pyramids. He was the teacher of Menaechmus (380-320 BC), who discovered the ellipse, parabola and hyperbola.
• Euclid of Alexandria (c. 360 BC – c. 280 BC, Greece). Mathematics. Euclid's principal work was the "Elements of Geometry", consisting of 13 books. Among other topics, it deals with formulas, triangles, parallelograms, spheres, cones and circular gauges, many other mathematical theorems, and the parallel postulate, which is equivalent to the statement that the sum of the angles in a triangle is always 180°.
• Euler, Leonhard (1707-1783, Switzerland). Mathematics and physics. Leonhard Euler was one of the most prolific scientists, despite going blind later in life. Euler wrote 866 publications, established new symbols such as the summation sign Σ, and founded the calculus of variations and, in part, analysis. In mechanics, Euler discovered equations for the motion of rigid bodies and fluids (hydrodynamics), and developed a wave theory for calculating lenses in the field of optics.
One of the most famous scientists of all time. Fahrenheit, Gabriel 16861736Germany.Physics.Gabriel Fahrenheit was a German physicist who developed the mercury thermometer in 1714 with a three-point calibration. For the zero point of Fahrenheit’s scale he used the lowest temperature he could produce at the time: minus 17.8 F°. He defined the freezing point of water as +32 F° and water’s boiling point as +212 F°. Faraday, Michael 17911867Great Britain.Chemistry and physics.Michael Faraday was a great experimenter. He discovered electromagnetic induction and rotation. He built the first dynamo, which led to the first electric motor. Faraday discovered diamagnetism is a property of all materials. In 1832 Faraday described the principles of electrolysis and electrostatics and invented the “Faraday cage” to prove his theory. In 1834 Faraday published his laws of electrolysis, based on his electrochemical research. Fermat, Pierre de 16011665France.Mathematics.Pierre de Fermat was a lawyer who only dabbled with mathematics in his spare time. He remained unknown during his lifetime. It was only after his death that other greatest scientists spotted the basics of analytic geometry in his writings which he had found independently of René Descartes. Fermat is also renowned for his “Fermat’s Last Theorem” which says that no triples of whole numbers satisfy the equation xn + yn = zn has no whole number solution when n is greater than 2. Fermi, Enrico 19011954Italy.Physics. Nobel Prize 1938.Enrico Fermi was a significant 20th century nuclear physicist. He bombarded uranium with neutrons and thus prepared the way for nuclear fission. He built the first nuclear reactor in 1944 and received the Nobel Prize for physics in 1938. Feynman, Richard Phillips 19181988USA.Physics. Nobel Prize 1965.Richard Feynman was famous for path integral formulation of quantum mechanics and particle physics. He advanced the theory of quantum electrodynamics and superfluidity of supercooled liquid helium. Feynman received the Nobel Prize in Physics in 1965 for quantum electrodynamics. He shared the prize with Julian Schwinger and Shin’ichirō Tomonaga. Fleming, Alexander 18811955Great Britain.Bacteriology.  Nobel Prize 1945.Sir Alexander discovered the first ever antibiotic by accident in 1928. Returning from holiday he discovered a bacteria-destroying fungus “penicillin” in Petri dishes he’d left lying around. This became an effective remedy for many infections. Fleming shared the 1945 Nobel Prize in Physiology or Medicine with Ernst Boris Chain and Howard Florey. Foucault, Léon Jean Bernand18191868France.Physics.Leon Foucault ascertained the speed of light by bouncing it off a series of rotating mirrors. He showed light in air moves faster than in water. The French physicist also proved Earth’s rotation using what became known as “Foucault’s Pendulum”. He demonstrated the pendulum at the Panthéon, Paris in 1851. Foucault was one of the most famous scientists of all time. Franklin, Melissa Eve Bronwen1956CanadaPhysics.Melissa Franklin is best known for her work on particle physics. She is currently the Mallinckrodt Professor of Physics and before that she had a tenure at Harvard. She was in charge of a team that first found signs that top quarks exist. She often appears as a guest on the CBC radio science program Quirks and Quarks. One of the greatest female scientists alive today. 
Franklin, Rosalind (1920–1958, Great Britain). X-ray crystallography, chemistry. Rosalind Franklin's areas of research were DNA, RNA, graphite, coal and viruses. Her work greatly improved the understanding of molecular structures. It is widely believed that James Watson and Francis Crick's discovery of the structure of DNA was only possible through Franklin's work.
Fraunhofer, Joseph von (1787–1826, Germany). Astronomy. Joseph von Fraunhofer created the telescope with which Friedrich Wilhelm Bessel (1784–1846) was able to measure the parallax of a fixed star. He improved lenses and prisms, and through experiments with light found hundreds of dark spectral lines in sunlight (the Fraunhofer lines).
Freud, Sigmund (1856–1939, Austria). Psychology and neurology. Sigmund Freud is considered the father of "psychoanalysis". In his view, the sexual drive and the death drive primarily shape our behavior; in between lie "displacement", the "subconscious", the "ego" and "superego", as well as the neuroses.
Galen of Pergamon (c. 129–c. 199, Greece). Medicine. Galen was probably the first sports physician of all time. He viewed body and soul as a whole (a precursor of psychosomatic thinking). He wrote 22 books about the organism, pathology, physiology, treatment and pharmacology.
Galilei, Galileo (1564–1642, Italy). Astronomy, physics and chemistry. Galileo Galilei is the founder of the fields of dynamics, mechanics and acoustics. He discovered the laws of falling bodies, ballistics and pendulums, and confirmed Copernicus' heliocentric view of the world through astronomical observations using a telescope he had also improved. In this way, he was the first to see the moon's surface in detail, along with many previously unseen stars. The scientific genius also examined gases and proved that air has a weight of its own and is, therefore, also matter. One of the most famous scientists of all time.
Gauss, Karl Friedrich (1777–1855, Germany). Mathematics and astronomy. Gauss was a versatile genius. Aged 15 he had already deduced a connection between prime numbers and logarithms and discovered "the method of least squares". Gauss influenced the fields of algebra, with his proof of the so-called fundamental theorem (an equation of the nth degree has n roots), stochastics, integral calculus (Gauss's theorem), and magnetism. Ahead of other great scientists, Gauss was among the first to question the Euclidean parallel postulate and explore non-Euclidean geometry. He also found an easy way to represent complex numbers using the coordinate system. This famous mathematician computed planetary orbits and optical laws. Together with Wilhelm Weber (1804–1891) he built the first electromagnetic telegraph system.
Gilbert, William (1540–1603, Great Britain). Physics and medicine. William Gilbert, born in Colchester, Essex, England, realized the Earth itself is magnetic and that our planet has two (and not, as originally thought, one) magnetic poles. Gilbert investigated electricity and developed the first electroscope for measuring it.
Goodall, Jane (born 1934, Great Britain). Anthropology, primatology. Jane Goodall (Dame Valerie Jane Morris-Goodall, and previously Baroness Jane van Lawick-Goodall, to give her her full name) is world famous for her studies of primates and is seen as the leading expert on chimpanzees. She has won numerous awards for her work, the best known being for her 45-year study of the social lives of chimpanzees in Tanzania. Surprising to many, her research revealed that although chimpanzees are largely "nicer than human beings", they can also be brutal, and sometimes have a darker side to their nature.
Göppert, Maria (1906–1972, Germany/USA). Physics. Nobel Prize 1963. Maria Göppert (later Goeppert Mayer, after her marriage to Joseph Mayer) was a theoretical physicist who was awarded the Nobel Prize in Physics in 1963 for her mathematical model of the structure of nuclear shells. After Marie Curie, she was the second female Nobel laureate in physics. Her doctorate was on two-photon absorption by atoms, and today the unit for this absorption, the GM, is named after her.
Grassmann, Hermann (1809–1877, Germany). Mathematics and physics. Hermann Grassmann was an introverted philologist and an autodidact in mathematics. He conducted research on electrical currents, color theory, acoustics, phonetics and harmony. Grassmann's mathematical work "The Theory of Linear Extension" anticipated quaternions, matrix calculus and vector calculus.
Haber, Fritz (1868–1934, Germany). Chemistry. Nobel Prize 1918. Fritz Haber is both an infamous and a famous scientist! In 1918 Haber received the Nobel Prize in Chemistry for his part in the invention of the Haber–Bosch process, which enhanced food production. On the downside he is known as the "father of chemical warfare", because he weaponized the poison gases used in WWI, and his ammonia synthesis was also used to manufacture explosives.
Hahn, Otto (1879–1968, Germany). Chemistry. Nobel Prize 1944. Otto Hahn irradiated uranium with neutrons in 1938, splitting the uranium nucleus and freeing barium. He was awarded the Nobel Prize for this first "nuclear fission". Hahn was a close friend of the physicist Lise Meitner.
Halley, Edmond (1656–1742, Great Britain). Astronomy, mathematics and physics. Edmond Halley was the second Astronomer Royal. His observations were published in "Catalogus stellarum australium" (star maps). In his 1705 "A Synopsis of the Astronomy of Comets", Halley concluded the comets of 1531, 1607 and 1682 were the same comet and that it would return in 1758. It became known as Halley's Comet. He later explained geomagnetic phenomena, including auroras. One of the most famous scientists of all time.
Hamilton, William Rowan (1805–1865, Ireland). Mathematics and physics. William Rowan Hamilton was a prodigy who spoke 13 languages. Hamilton was appointed head of an observatory aged just 23. By 27 he was a well-known scientist, who went on to create the quaternions (hypercomplex numbers a + bi + cj + dk) and vector calculus.
Hawking, Stephen (1942–2018, Great Britain). Astrophysics and mathematics. Despite suffering from ALS (motor neurone disease) for over 50 years, the English physicist Stephen Hawking researched the black holes of our universe. He wrote the 1988 bestseller A Brief History of Time and, in 2001, The Universe in a Nutshell. Hawking was one of the greatest scientists of the 20th century and joins this list of famous scientists in history.
Heisenberg, Werner (1901–1976, Germany). Physics. Nobel Prize 1932. Werner Heisenberg was a German physicist most famous for the "uncertainty principle" he published in 1927, which states that pairs of quantities such as an electron's position and momentum cannot both be known to arbitrary precision. He also concluded that electrons change their state by discontinuous "quantum leaps". Heisenberg received the 1932 Nobel Prize in Physics for the creation of quantum mechanics.
Helmholtz, Hermann von (1821–1894, Germany). Physics and medicine. Helmholtz examined fermentation, putrefaction and the heat production of living beings. In his treatise on the conservation of energy (1847) he showed energy can be transformed, but never lost.
Hero of Alexandria (c. 10 AD–c. 70 AD, Greece/Egypt). Mathematics and mechanics. Hero of Alexandria (also known as Heron of Alexandria) was a mathematician, engineer and inventor, expert at putting science into practice. Hero was among the first to build many machines, including vending machines, force/piston pumps, windwheels/windmills, music machines and a fountain ("Hero's fountain"). He described a design for a steam turbine called an aeolipile, or "Hero engine".
Herophilos of Chalcedon (c. 330 BC–c. 255 BC, Greece). Medicine. Herophilos was the first to perform autopsies on people and animals. In doing so he discovered basic functions of the liver, spleen, intestines, heart, eyes, nerves, brain and bloodstream. Herophilos was also the first to distinguish veins from arteries.
Herschel, Caroline Lucretia (1750–1848, Germany). Astronomy. Caroline Herschel's greatest contributions to astronomy included discovering a number of comets, such as the comet 35P/Herschel-Rigollet, which is named after her. Her brother was William Herschel, a famous astronomer in his own right, with whom she collaborated closely. From 1786 to 1797 she discovered a total of 8 comets. She also played a great role in cataloguing nebulae and clusters of stars. Her work was recognized with various honors, such as a Gold Medal from the Royal Astronomical Society and another from the King of Prussia on her 96th birthday. One of the greatest female scientists of the 18th century.
Herschel, William (1738–1822, Germany/Great Britain). Astronomy and mathematics. William Herschel was a dedicated astronomer who observed the night sky through his home-made telescopes. Herschel discovered the planet Uranus, mapped the structure of the Milky Way, and concluded that the stars are distant suns. Herschel was one of the most famous scientists of all time.
Hertz, Heinrich (1857–1894, Germany). Physics. Heinrich Hertz proved the existence of electromagnetic waves, as predicted by James Maxwell's equations, and conducted ground-breaking research on them. The hertz unit of frequency is named after him.
Hilbert, David (1862–1943, Germany). Mathematics. David Hilbert reduced geometry to a series of axioms. Hilbert is most famous for the list of 23 big "problems of mathematics" he posed in 1900 (a 24th was found later in his notes). Many of the 23 have since been solved by other famous scientists.
Hipparchos (c. 190 BC–c. 120 BC, Greece). Astronomy and mathematics. Hipparchos observed over 1000 stars and recorded them in a catalog and a map of the sky. He calculated the length of the solar year and of the sidereal year, as well as the lunar month. He is considered the founder of trigonometry.
Hippocrates (c. 460 BC–c. 370 BC, Greece). Medicine. Hippocrates is widely considered the father of western medicine. He looked for the causes of disease in lifestyle and diet rather than in punishment by the gods. He was effectively the first general practitioner, surgeon and dietician, and used the first medications derived from nature for healing purposes. Even today, doctors still swear the "Hippocratic Oath".
Hodgkin, Dorothy Mary (1910–1994, Great Britain). Biochemistry and X-ray crystallography. Nobel Prize 1964. Dorothy Hodgkin is known for her research into protein crystallography, the determination of the structures of protein crystals, with wide scientific and industrial applications. Her X-ray crystallography techniques are used to work out the 3D structures of biomolecules. She was awarded the Nobel Prize in Chemistry in 1964 for her work on the structure of vitamin B12.
Hooke, Robert (1635–1703, Great Britain). Chemistry and biology. Together with Robert Boyle, Hooke improved the air pump devised by Otto von Guericke (1602–1686), and developed special microscopes with which he discovered plant cells, coining the term "cell".
Howard, Luke (1772–1864, Great Britain). Chemistry and meteorology. Known as the "father of meteorology", Howard devised a nomenclature system for clouds in 1802 which, with modifications, is still in use today. He gave names to the three main types of clouds (cumulus, stratus and cirrus) and to combinations like stratocumulus and cumulonimbus.
Hubble, Edwin (1889–1953, USA). Astronomy and physics. In 1925, Edwin Hubble proved that the "Andromeda Nebula" M31 lies far beyond our Milky Way, and thereby prepared for the discovery (by Georges Lemaître) of the expansion of the universe. One of the most famous scientists of all time.
Huygens, Christian (1629–1695, The Netherlands). Physics, mathematics and astronomy. A jack of all trades, Huygens discovered the ring of Saturn with a self-made telescope, invented the pendulum clock and improved pocket watches, helped found the theory of probability, described the laws of impact (collisions), founded the wave theory of light, and studied vibration and circular motion (centrifugal force).
Jackson, Shirley Ann (born 1946, USA). Physics. Shirley Jackson is famous for her contributions to the field of nuclear physics and has received numerous awards for her work, along with honorary doctorates. She was the first African-American woman to earn a doctorate at MIT, in nuclear physics.
Jenner, Edward (1749–1823, Great Britain). Virology and medicine. Brave enough to take risks, in 1796 Jenner inoculated a healthy 8-year-old boy with cowpox material to protect him against smallpox, and was successful. Edward Jenner is, therefore, considered the father of the smallpox vaccination.
Joliot-Curie, Irène (1897–1956, France). Chemistry. Nobel Prize 1935. Irène Joliot-Curie, daughter of the famous Marie Curie and Pierre Curie, won the 1935 Nobel Prize for Chemistry together with her husband Frédéric for the discovery of artificial radioactivity. As a result, the Curie family holds the record for the most Nobel laureates to date. Joliot-Curie's two children, Hélène and Pierre, are also respected scientists.
Joule, James Prescott (1818–1889, Great Britain). Physics and chemistry. Joule proved through experimentation that heat is a form of energy, dependent on resistance, time and current strength. James Prescott Joule also investigated the internal energy of gases (the Joule-Thomson effect).
Jump Cannon, Annie (1863–1941, USA). Astronomy. Annie Jump Cannon was a famous astronomer, known for the "Harvard Classification Scheme", which classified stars based on their temperatures and spectral types. She classified over 300,000 stellar bodies, more than any other person, which earned her the nickname "Census Taker of the Sky". In 1925 Cannon became the first female recipient of an honorary doctorate from Oxford University. In 1929 Annie Jump Cannon was chosen by the League of Women Voters as one of the "greatest living American women", and in 1994 Cannon was inducted into the National Women's Hall of Fame.
Kelvin, William Thomson (1824–1907, Ireland). Physics and chemistry. William Thomson, Lord Kelvin, was a specialist in thermodynamics. He developed the absolute temperature scale whose units bear his name. Together with James Joule, Kelvin discovered that gases change temperature when expanding under pressure, and that at "absolute zero" (−273 °C) all classical particle motion ceases.
Kepler, Johannes (1571–1630, Germany). Astronomy and mathematics. Johannes Kepler discovered the laws of planetary motion (ellipses) and recorded tables of star and planetary positions. His calculations were among the first to use logarithms and proto-integral methods. Kepler also confirmed discoveries made by Galileo Galilei.
Kirch, Maria Margarethe (1670–1720, Germany). Mathematics and astronomy. Maria Kirch, born Winkelmann, was one of the first famous female astronomers, thanks to her writings on the conjunctions of the sun with Saturn, Venus and Jupiter. She was educated by her father, a minister who believed girls deserved the same education as boys. Her husband, Gottfried Kirch, was a famous German astronomer and mathematician, 30 years her senior. They worked together as a team and had four children, all of whom also studied astronomy. In 1702, she became the first woman to discover a new comet, now known as the "Comet of 1702", and she published widely on astronomy. When her husband died, she tried to take his place at the Royal Academy of Sciences, but the Academy refused. One of the greatest female scientists of the 17th century.
Kirchhoff, Gustav R. (1824–1887, Germany). Physics. Gustav Kirchhoff discovered spectral analysis together with Robert Bunsen, which made it possible to detect tiny amounts of an element. Kirchhoff defined the laws of electric circuits and investigated the sun's thermal radiation.
Knuth, Donald (born 1938, USA). Mathematics and computer science. A witty Professor Emeritus at Stanford University, Knuth is famous in the world of computer programming and is known by some as the "father of the analysis of algorithms". Having created various programming systems and architectures himself, he is personally against software patents.
Koch, Robert (1843–1910, Germany). Medicine. Nobel Prize 1905. Through painstaking and lengthy (animal) experiments, Robert Koch discovered the spores, bacteria and pathogens of cholera, malaria, tuberculosis, anthrax, sleeping sickness and the plague.
Lagrange, Joseph-Louis (1736–1813, Italy). Astronomy and mathematics. Joseph-Louis Lagrange was a maths professor at just 19 years old. He performed ground-breaking work in almost all areas of pure mathematics: he founded analytical mechanics (the Lagrangian), found special solutions of the three-body problem in celestial mechanics (the Lagrangian points), and advanced the calculus of variations and the theory of complex functions!
Lamarck, Jean de (1744–1829, France). Zoology and biology. Jean Lamarck developed the first "theory of evolution", before Charles Darwin! He introduced the term "invertebrates" and recognized, before Darwin, that species are not immutable.
Laplace, Pierre Simon (1749–1827, France). Physics, mathematics and astronomy. Pierre Laplace lived through the French Revolution, Napoleon and the Bourbons, all at close quarters. He still managed to focus on his probability theory (in games of chance) and "celestial mechanics" (the calculation of planetary orbits), and even anticipated the existence of black holes.
Lavoisier, Antoine (1743–1794, France). Chemistry. Antoine Lavoisier is the father of modern chemistry. He proved water is a compound of hydrogen and oxygen and that air is a mixture of oxygen and nitrogen. Lavoisier's meticulous experiments with sulfur and phosphorus demonstrated that a burnt substance gains as much weight as the oxygen added. Lavoisier provided a nomenclature for chemistry by counting and symbolizing the elements. During the French Revolution he was guillotined, ending the life of one of the most famous scientists of all time.
Leavitt, Henrietta Swan (1868–1921, USA). Astronomy. Henrietta Swan Leavitt was a human "computer" at the Harvard College Observatory. She examined photographic plates to catalog and measure the brightness of stars. Leavitt discovered a relationship between the luminosity and period of Cepheid variables. This made these stars the first "standard candle" in astronomy, known as "Leavitt's law" today. Scientists use Leavitt's law to compute distances to galaxies too remote for parallax observations. Hubble used Leavitt's luminosity-period relationship, together with Vesto Slipher's galactic spectral shifts, to formulate Hubble's law and establish that the universe is expanding.
Leibniz, Gottfried Wilhelm (1646–1716, Germany). Mathematics, physics and philosophy. Wilhelm Leibniz worked intensively on symbolic logic. Independently of Sir Isaac Newton, he developed the differential and integral calculus and introduced the integral sign. He built a calculating machine (in 1672) which could multiply, divide and extract square roots, developed the binary system (precursor of modern computer technology), invented a device to measure wind, and drafted plans for submarines!
Lemaître, Georges (1894–1966, Belgium). Cosmologist and Catholic priest. Georges Lemaître is considered the father of the Big Bang theory. In his 1927 paper he proposed the then-shocking idea that the Universe is expanding, as a solution of the equations of General Relativity. Edwin Hubble validated this with his telescope by showing distant galaxies receding. Lemaître concluded that if the universe is expanding, then it must have originated at a finite point in time.
Levi-Montalcini, Rita (1909–2012, Italy). Medicine and neurology. Nobel Prize 1986. Rita Levi-Montalcini is best known for her work on nerve growth, winning the Nobel Prize in Physiology or Medicine in 1986 for her work on NGF (nerve growth factor). One of the greatest female scientists to live past 100, and one of Italy's most famous scientists.
Liebig, Justus von (1803–1873, Germany). Chemistry. Justus von Liebig was a pioneer of organic chemistry and founder of agricultural chemistry. Liebig founded a chemical laboratory and scientific training center in Giessen, Germany, and undertook many organic elemental analyses with his students. Liebig investigated metabolism, and showed that agriculture withdraws important nutrients from the soil which can only be replaced by adding fertilizers.
Linde, Carl von (1842–1934, Germany). Physics. Carl von Linde developed a technical method (the Linde process) which makes the liquefaction of gases, including oxygen, possible in large quantities. Among other things, this improved refrigeration processes.
Linnaeus, Carl (1707–1778, Sweden). Botany, zoology and medicine. The Swedish naturalist, botanist and zoologist Carl Linnaeus was the first to document and classify minerals, plants and animals into phyla, classes, orders, families, genera and species. His major works include "Species Plantarum" (1753) and "Systema Naturae" (10th edition, 1758).
Lobachevsky, Nikolai Ivanovich (1793–1856, Russia). Mathematics. Nikolai Lobachevsky, a Russian mathematician, developed the first complete system of non-Euclidean geometry, based on the hypothesis of the acute angle (hyperbolic geometry). His work on hyperbolic geometry is also known as "Lobachevskian geometry". His fundamental study on Dirichlet integrals is known as the "Lobachevsky integral formula".
Lorenz, Konrad (1903–1989, Austria). Zoology. Nobel Prize 1973. Konrad Lorenz is still considered one of the most important behavioral researchers (ethologists) of all time. After his experiments, mainly with graylag geese (Anser anser), in particular one goose named "Martina", he established the concept of "imprinting". Lorenz received the Nobel Prize in 1973 for his research.
Mach, Ernst (1838–1916, Austria). Physics and philosophy. The "Mach number", named after him, expresses velocity relative to the speed of sound (343 m/s in air at 20 °C). His contributions to physics included the study of shock waves. Through experimentation, Ernst Mach also confirmed the Doppler effect, which was still controversial in his day.
Magnus, Albert (1193–1280, Germany). Biology and philosophy. Albert Magnus was one of the founders of modern science. He described a large number of plants ("De vegetabilibus") and of animals and insects ("De animalibus"). Magnus was also a bishop and the teacher of the famous philosopher Thomas Aquinas.
Marconi, Guglielmo (1874–1937, Italy). Physics. Nobel Prize 1909. Guglielmo Marconi was a pioneer of radio technology. Building on demonstrations of electromagnetic waves and antennas by the Russian Popov, Marconi built the first wireless radio link. Guglielmo Marconi received the 1909 Nobel Prize in Physics.
Maxwell, James (1831–1879, Great Britain). Physics and mathematics. James Maxwell was famous for his theory of electromagnetism. Maxwell discovered that light is electromagnetic radiation. He made valuable contributions to the theory of gases and heat: Maxwell calculated the distribution of molecular speeds in gases (the Maxwell distribution), along with new insights in optics.
Mayer, Julius Robert (1814–1878, Germany). Physics and chemistry. Julius Mayer provided essential foundations for the field of thermodynamics. Mayer described the principle of the conservation of energy, which still holds true in chemistry, physics and engineering today. Unfortunately, James Joule took most of the credit for his discoveries.
McClintock, Barbara (1902–1992, USA). Genetics. Nobel Prize 1983. Barbara McClintock was a scientist and cytogeneticist who specialized in the development of maize cytogenetics. Her breakthrough findings determined that genes could move within and between chromosomes, which went against the thinking of the time. In 1983 she was awarded the Nobel Prize in Physiology or Medicine, the only woman to receive an unshared Nobel Prize in this category. She was also awarded prestigious fellowships and elected a member of the National Academy of Sciences.
Meitner, Lise (1878–1968, Austria/Sweden). Physics. Lise Meitner worked in the areas of nuclear physics and radioactivity and was in the group that discovered nuclear fission. Her colleague, Otto Hahn, was awarded the Nobel Prize for their work, which has been a controversial decision for the Nobel committee ever since.
Mendel, Gregor (1822–1884, Austria). Biology. Gregor Mendel was an Augustinian monk who conducted cross-breeding experiments on peas and beans. His studies revealed the rules of hereditary transmission. Gregor Mendel's "Mendelian Laws" made him the father of modern genetics.
Mendeleev, Dmitri (1834–1907, Russia). Chemistry. Dmitri Mendeleev brought order to the chaos of the elements by establishing the Periodic Table of the chemical elements. Mendeleev divided the chemical elements into eight groups and arranged them in order of increasing atomic weight. He predicted eight elements, which he labelled using the prefixes eka, dvi and tri (from the Sanskrit for 1, 2 and 3). Eka-boron, eka-aluminium, eka-manganese and eka-silicon turned out to be scandium, gallium, technetium and germanium, which now fill the spots in the periodic table predicted and assigned by Mendeleev. One of the most famous scientists of all time.
Messier, Charles (1730–1817, France). Astronomy. Frenchman Charles Messier discovered some twenty comets and catalogued nebulae, star clusters and galaxies (the Messier catalogue), alongside other famous astronomers of his time, including William Herschel, Pierre Méchain, Jérôme Lalande and Johann Encke. One of the most famous scientists of all time.
Michelson, Albert (1852–1931, USA). Physics. Nobel Prize 1907. Albert Michelson made the most precise measurements of the speed of light of his era, continuing to refine them until around 1930. He developed a system for measuring with light waves, the "Michelson interferometer" named after him, for which he was awarded the 1907 Nobel Prize in Physics.
Mitchell, Maria (1818–1889, USA). Astronomy. Maria Mitchell was the very first American woman to work as a professional astronomer. She discovered a comet in 1847, winning her a gold medal presented by King Frederick VI of Denmark; the comet was then named "Miss Mitchell's Comet". She was the first woman to be elected a Fellow of the American Academy of Arts and Sciences, as well as of the American Association for the Advancement of Science. She later fought for equal pay at Vassar College, where she taught until one year before her death.
Nobel, Alfred Bernhard (1833–1896, Sweden). Physics and chemistry. Alfred Nobel invented dynamite and held 355 patents in all. He introduced the world-famous Nobel Prizes for various fields after a newspaper mistakenly published his obituary while he was still alive; shocked by its content, he set about improving his legacy. Nobelium, a synthetic element, was named after him. One of Sweden's most famous scientists.
Newton, Sir Isaac (1642–1727, Great Britain). Physics, mathematics and astronomy. Isaac Newton was an introverted genius and child prodigy. As a student at Cambridge, Isaac Newton revolutionized the fields of mathematics (calculus), optics (color theory) and mechanics (universal gravitation, which legend says was inspired by an apple falling from a tree). Later Newton derived Kepler's laws of planetary motion, the lunar orbit and the tides, described the "binomial theorem", and devised formulas for calculating the velocity of sound and the penetrative power of missiles. In order to avoid frequent disturbances by his cat, he is even said to have invented the cat flap. Newton's greatest work was the "Principia Mathematica" of 1687. Newton is one of the most famous scientists of all time.
Noether, Amalie Emmy (1882–1935, Germany). Mathematics, physics. Amalie "Emmy" Noether was notable for her work on abstract algebra and theoretical physics, leading Albert Einstein to describe her as the most important woman in the history of mathematics. Her other special fields were the theories of rings, fields and algebras. "Noether's theorem", published in 1918, states the connection between a symmetry and its corresponding conservation law.
Ohm, Georg Simon (1789–1854, Germany). Physics. Georg Simon Ohm examined the relationship between current, voltage and resistance. If two of the three variables are known, the third can be determined by Ohm's law: voltage divided by amperage equals resistance (R = V/I).
Omar Khayyam (1048–1131, Persia). Mathematics, astronomy and philosophy. The Persian Omar Khayyam solved cubic equations in both algebraic and geometric ways. He also examined the so-called "Pascal's triangle" and irrational numbers. Khayyam also designed the reformed Persian (Jalali) calendar, and was a philosopher and a poet.
Oppenheimer, Robert (1904–1967, USA). Physics. Robert Oppenheimer researched quantum mechanics, and led the Los Alamos laboratory of the Manhattan Project, which developed the world's first atomic bomb, tested as "Trinity". He was horrified by its effects and, after he saw what happened at Hiroshima, condemned further use. One of the most famous scientists of the modern era.
Ostwald, Wilhelm (1853–1932, Germany). Chemistry. Nobel Prize 1909. Wilhelm Ostwald experimented with acids, salts and bases, explored their conductivity and reaction rates, and in doing so discovered affinity constants ("Ostwald's Law of Dilution"). The famous chemist also worked extensively with fuel cells and catalysts. In 1909 he was awarded the Nobel Prize in Chemistry for his work.
Paré, Ambroise (1510–1590, France). Medicine. Ambroise Paré is considered the founder of modern surgery. Paré found new ways of treating gunshot wounds, fractures and amputations (through ligation of the vessels). Among other things, Paré was surgeon to four French kings.
Pascal, Blaise (1623–1662, France). Mathematics and physics. Blaise Pascal demonstrated the existence of the vacuum. His experiment known as the "vacuum in the vacuum" placed a mercury barometer in the center of another barometer. The Frenchman also discovered that air pressure decreases with height. Pascal was also a co-founder of probability theory.
Pasteur, Louis (1822–1895, France). Chemistry and bacteriology. Louis Pasteur worked his whole life on fermentation and putrefaction. He discovered that microbes are responsible for these processes and that they are killed by heating ("pasteurization"). Pasteur discovered the anthrax pathogen and a vaccine against rabies. One of the most famous scientists of all time.
Pauli, Wolfgang (1900–1958, Austria). Physics and mathematics. Nobel Prize 1945. Wolfgang Pauli provided important insights into quantum physics, specifically his "exclusion principle", which is closely related to so-called spin. Pauli received the 1945 Nobel Prize in Physics for this principle.
Pauling, Linus (1901–1994, USA). Chemistry and biology. Nobel Prizes 1954 & 1962. Linus Pauling conducted research on electrons and on biological molecules and the chemical bonds found in nature. He is considered one of the fathers of quantum chemistry and, in 1954, was awarded the Nobel Prize for Chemistry, and in 1962, the Nobel Peace Prize. One of the most famous scientists of all time.
Pavlov, Ivan (1849–1936, Russia). Psychology. Nobel Prize 1904. Ivan Pavlov famously conditioned dogs by ringing a bell before giving them food; after a time, they salivated as soon as they heard the bell. Based on his research he formulated the doctrine of the "conditioned reflex", which applies to human nervous activity as well.
Payne-Gaposchkin, Cecilia (1900–1979, Great Britain). Astronomy and astrophysics. Cecilia Payne-Gaposchkin's 1925 doctoral thesis, "Stellar Atmospheres; a Contribution to the Observational Study of High Temperature in the Reversing Layers of Stars", reached the groundbreaking conclusion that stars are composed primarily of hydrogen and helium. This contradicted the scientific wisdom of the time but was independently confirmed in 1929. The astronomer Otto Struve described Cecilia Payne-Gaposchkin's work as "the most brilliant PhD thesis ever written in astronomy". Payne-Gaposchkin became an American citizen in 1931.
Planck, Max (1858–1947, Germany). Physics. Nobel Prize 1918. Max Planck assumed that energy is radiated as so-called quanta (i.e. not as a continuous stream but in packets), and thus founded quantum theory. It states that the energy of a packet is proportional to its frequency of oscillation, multiplied by the constant factor h (E = hν). He was awarded the Nobel Prize in 1918. One of the most famous scientists of all time.
Priestley, Joseph (1733–1804, Great Britain). Chemistry. Joseph Priestley was a theologian who isolated gases by collecting them over mercury. This led to his discoveries of oxygen, hydrogen chloride and laughing gas (nitrous oxide). Priestley also mixed water with carbon dioxide and in the process accidentally invented carbonated water, which is very popular today.
Ptolemy, Claudius (c. 100–c. 169, Greece/Egypt). Astronomy and geography. Claudius Ptolemy was a genius across many disciplines. He wrote extensive works on mathematics and astronomy (his major work: the Almagest), geography (the definition of latitude), music theory, optics (refraction) and philosophy. One of the most famous scientists of all time.
Pythagoras (c. 569 BC–c. 475 BC, Greece). Mathematics, philosophy and astronomy. Pythagoras was a notable (pre-Socratic) philosopher and a famous mathematician, astronomer and scientist. Pythagoras founded a school called "The Semicircle of Pythagoras", which blended science and religion. It is thought that discoveries made by its members were attributed to Pythagoras himself, possibly even Pythagoras' Theorem. Pythagoras is one of the most famous scientists of the late archaic period in Greece.
Ramón y Cajal, Santiago (1852–1934, Spain). Medicine and neuroscience. Nobel Prize 1906. Santiago Ramón y Cajal was a brain researcher who discovered that the central nervous system consists of billions of neurons which communicate via so-called synapses. He was awarded the Nobel Prize for Medicine for this insight in 1906. One of Spain's most famous scientists.
Ramsay, William (1852–1916, Great Britain). Chemistry. Nobel Prize 1904. Ramsay discovered the noble gases argon, krypton, xenon and neon, and, during the decay of radon, observed the formation of helium. He found a method for determining atomic weights. In 1904 he received the Nobel Prize for Chemistry.
Randall, Lisa (born 1962, USA). Physics. Lisa Randall is a theoretical physicist active in the fields of cosmology and particle physics at Harvard University. Her research covers, among other things, elementary particles, supersymmetry, extra dimensions of space, and dark matter. She is the winner of the Andrew Gemant Award, the Lilienfeld Prize and the Klopsteg Memorial Award, among others. One of the most famous scientists living today.
Rhazes, Abu Bakr Muhammad (al-Razi) (844–926, Iran). Medicine. Rhazes was one of the most famous doctors of the Middle Ages and head of a hospital in Baghdad, with much success in healing. He wrote more than 131 books about diseases and their treatment (including smallpox and measles), as well as two encyclopedias of medicine. One of Iran's most famous scientists.
Richter, Charles Francis (1900–1985, USA). Seismology. In 1935 Charles Francis Richter created (jointly with B. Gutenberg) the logarithmic scale of earthquake strength known as the "Richter Scale". The scale is open ended: it has no upper end value for especially strong earthquakes.
Riemann, Bernhard (1826–1866, Germany). Mathematics and physics. Bernhard Riemann was instrumental in non-Euclidean geometry (beyond the "parallel axiom"), the general theory of functions, and differential equations.
Roentgen, Wilhelm Konrad (1845–1923, Germany). Physics. Nobel Prize 1901. Wilhelm Konrad Roentgen discovered a new type of penetrating radiation, X-rays, in 1895. This later led to radiography and computed tomography. In 1901 he received the first Nobel Prize for Physics ever awarded. One of Germany's most famous scientists.
Rutherford, Ernest (1871–1937, New Zealand). Chemistry. Nobel Prize 1908. Ernest Rutherford identified three types of radioactivity in 1903: alpha, beta and gamma rays. Rutherford later discovered the atomic nucleus and performed the first artificial nuclear disintegration. His investigations into the disintegration of the elements earned Rutherford the 1908 Nobel Prize in Chemistry. One of the most famous scientists of all time.
Schrödinger, Erwin (1887–1961, Austria). Physics. Nobel Prize 1933. Schrödinger described wave mechanics as the basis of quantum mechanics (the Schrödinger equation). He also founded a theory of color perception. Schrödinger received the 1933 Nobel Prize for Physics.
Siemens, Werner von (1816–1892, Germany). Physics and mechanics. Werner von Siemens discovered early on that rubber is suitable as an insulator. In 1849 Siemens founded a company to manufacture submarine cables. He also improved the dynamo by using an electromagnet instead of a bar magnet.
Stevin, Simon (1548–1620, The Netherlands). Physics and mathematics. Simon Stevin is the founder of modern statics and hydrostatics. Stevin formulated the law of forces, the "hydrostatic paradox", and other laws such as the relationship between force and displacement on an inclined plane.
Strutt, John William (1842–1919, Great Britain). Physics. Nobel Prize 1904. John William Strutt (Baron Rayleigh) researched optics, electricity, thermodynamics and wave theory. He paved the way for the discovery of the noble gases (see Ramsay). Strutt was the first person to explain why the sky is blue: the scattering of light (Rayleigh scattering).
Szent-Györgyi, Albert (1893–1986, Hungary). Biology and medicine. Nobel Prize 1937. Albert Szent-Györgyi researched vitamins and discovered vitamin C. He also worked on oxidation processes in living organisms, carbon metabolism and muscle biology. For his achievements he was awarded the 1937 Nobel Prize for Medicine. One of Hungary's most famous scientists.
Tesla, Nikola (1856–1943, Austria/USA). Physics, engineering and futurism. Nikola Tesla developed the first alternating current (AC) system. As an inventor, Tesla developed wireless lighting and tried to set up a worldwide wireless electric power distribution network, but it ran out of funds. Tesla was nominated for a Nobel Prize in Physics in 1937.
Theophrastos (c. 372 BC–c. 288 BC, Greece). Biology and botany. Theophrastos was the father of botany and a student of Aristotle. He wrote around 400 books. Theophrastos examined many hundreds of plants in detail, explored their origins and examined their medical properties. Two important botanical works are Enquiry into Plants (Historia Plantarum) and On the Causes of Plants.
Thomson, Sir Joseph (1856–1940, Great Britain). Physics. Nobel Prize 1906. Thomson discovered the electron through his research into cathode rays in 1897. He also discovered that ions and electrons are the charge carriers in electrical discharges in gases. He received the 1906 Nobel Prize for Physics. One of the most famous scientists of all time.
Tinbergen, Nikolaas (1907–1988, The Netherlands). Biology, zoology and medicine. Nobel Prize 1973. Nikolaas Tinbergen investigated the behavior of animals (especially fish and insects) and of humans (childhood autism). He also wrote books, including "The Study of Instinct", on behaviorism. Nikolaas Tinbergen was awarded the 1973 Nobel Prize for Physiology and Medicine.
Tyndall, John (1820–1893, Great Britain). Physics. John Tyndall studied diamagnetism. He made discoveries about infrared radiation and the physical properties of air, and published books about experimental physics. He was professor of physics at the Royal Institution of Great Britain, London, and was also a notable mountaineer!
Van de Graaff, Robert (1901–1967, USA). Physics. Most famous for developing the eponymous Van de Graaff generator between 1931 and 1933. The generator was able to generate millions of volts, which were used to accelerate charged particles.
Vesalius, Andreas (1515–1564, Belgium). Medicine. Vesalius was already conducting dissections as a student, and by the age of 23 was a professor of surgery. He wrote seven books about the anatomy of the human body and was later also the personal physician of Emperor Charles V.
Vieta, François (1540–1603, France). Mathematics. Vieta introduced letters, fraction bars, the root sign and parentheses into mathematics in order to simplify calculations and make formulas more understandable. Thomas Harriot (1560–1621) replaced Vieta's large letters with small ones, thus founding modern algebraic notation.
Volta, Alessandro (1745–1827, Italy). Physics. Alessandro Volta built on Luigi Galvani's (1737–1798) work on electric currents. Among other things, Volta invented the first battery, the "voltaic pile" (1800), which made possible experiments such as the electrolysis of water.
Watt, James (1736–1819, Great Britain). Physics. James Watt perfected the efficiency of steam engines by developing the separate condenser and the use of connecting rods. James Watt invented "Watt's parallelogram" and a land-survey telescope, among other things.
Weierstraß, Karl (1815–1897, Germany). Mathematics. Karl Weierstrass made important discoveries for the further development of general function theory, number theory and power series. His main work dealt with the proper foundation of analysis (for example in the treatment of infinite products). He also coined the term uniform convergence (the "Weierstrass criterion").
Wu, Chien-Shiung (1912–1997, China). Physics. Chien-Shiung Wu contributed greatly to the field of nuclear physics, also working on the Manhattan Project. She is famous for the "Wu experiment", which disproved the conservation of parity; the resulting 1957 Nobel Prize in Physics went to her theorist colleagues Lee and Yang, while Wu herself was controversially passed over. She received the Wolf Prize in Physics in 1978. She was often compared to Marie Curie and given nicknames like "the Chinese Madame Curie" and the "Queen of Nuclear Research". One of China's most famous scientists.
Young, Thomas (1773–1829, Great Britain). Physics and medicine. Thomas Young was gifted in languages and an all-round genius: he could read fluently at the age of two and went on to master more than ten languages. He researched color theory, light waves, the tides, statics and technology. He also made key contributions to deciphering Egyptian hieroglyphics, working on the three scripts of the famous "Rosetta Stone".
Scattering length
From Wikipedia, the free encyclopedia
The scattering length in quantum mechanics describes low-energy scattering. It is defined as the following low-energy limit:
$$\lim_{k \to 0} k \cot \delta(k) = -\frac{1}{a},$$
where $a$ is the scattering length, $k$ is the wave number, and $\delta(k)$ is the phase shift of the outgoing spherical wave. The elastic cross section, $\sigma_e$, at low energies is determined solely by the scattering length:
$$\lim_{k \to 0} \sigma_e = 4\pi a^2.$$
General concept
When a slow particle scatters off a short ranged scatterer (e.g. an impurity in a solid or a heavy particle) it cannot resolve the structure of the object, since its de Broglie wavelength is very long. The idea is that then it should not be important what precise potential one scatters off, but only how the potential looks at long length scales. The formal way to solve this problem is to do a partial wave expansion (somewhat analogous to the multipole expansion in classical electrodynamics), where one expands in the angular momentum components of the outgoing wave. At very low energy the incoming particle does not see any structure, therefore to lowest order one has only a spherical outgoing wave, called the s-wave in analogy with the atomic orbital at angular momentum quantum number $l = 0$. At higher energies one also needs to consider p- and d-wave ($l = 1, 2$) scattering and so on. The idea of describing low energy properties in terms of a few parameters and symmetries is very powerful, and is also behind the concept of renormalization.
As an example of how to compute the s-wave (i.e. angular momentum $l = 0$) scattering length for a given potential, we look at the infinitely repulsive spherical potential well of radius $r_0$ in 3 dimensions. The radial Schrödinger equation ($l = 0$) outside of the well is just the same as for a free particle:
$$-\frac{\hbar^2}{2m}\, u''(r) = E\, u(r),$$
where the hard core potential requires that the wave function vanishes at $r = r_0$: $u(r_0) = 0$. The solution is readily found:
$$u(r) = A \sin(kr + \delta_s).$$
Here $k = \sqrt{2mE}/\hbar$; $\delta_s$ is the s-wave phase shift (the phase difference between incoming and outgoing wave), which is fixed by the boundary condition $u(r_0) = 0$ to be $\delta_s = -k r_0$; and $A$ is an arbitrary normalization constant.
One can show that in general $\delta_s(k) \approx -k a_s + O(k^3)$ for small $k$ (i.e. low energy scattering). The parameter $a_s$ of dimension length is defined as the scattering length. For our potential we have therefore $a_s = r_0$; in other words, the scattering length for a hard sphere is just the radius. (Alternatively one could say that an arbitrary potential with s-wave scattering length $a_s$ has the same low energy scattering properties as a hard sphere of radius $a_s$.)
To relate the scattering length to physical observables that can be measured in a scattering experiment we need to compute the cross section $\sigma$. In scattering theory one writes the asymptotic wavefunction as (we assume there is a finite ranged scatterer at the origin and there is an incoming plane wave along the $z$-axis):
$$\psi(r, \theta) = e^{ikz} + f(\theta)\,\frac{e^{ikr}}{r},$$
where $f(\theta)$ is the scattering amplitude. According to the probability interpretation of quantum mechanics the differential cross section is given by $d\sigma/d\Omega = |f(\theta)|^2$ (the probability per unit time to scatter into the direction $\mathbf{k}$). If we consider only s-wave scattering the differential cross section does not depend on the angle $\theta$, and the total scattering cross section is just $\sigma = 4\pi |f|^2$. The s-wave part of the wavefunction is projected out by using the standard expansion of a plane wave in terms of spherical waves $j_l(kr)$ and Legendre polynomials $P_l(\cos\theta)$:
$$e^{ikz} = \sum_{l=0}^{\infty} (2l+1)\, i^l j_l(kr) P_l(\cos\theta).$$
By matching the $l = 0$ component of $\psi(r, \theta)$ to the s-wave solution (where we normalize such that the incoming wave has a prefactor of unity) one has:
$$f = \frac{1}{2ik}\left(e^{2i\delta_s} - 1\right) \approx -a_s \quad (k \to 0).$$
This gives:
$$\sigma = \frac{4\pi}{k^2} \sin^2 \delta_s \approx 4\pi a_s^2.$$
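A quick numerical sanity check of the hard-sphere result above (a minimal sketch; the radius and the wave numbers are arbitrary illustrative values, not taken from any experiment):

```python
import numpy as np

# Hard-sphere s-wave scattering: the phase shift is exactly delta_s = -k*r0.
# Check that sigma = 4*pi*|f|^2, with f = (exp(2i*delta_s) - 1) / (2ik),
# approaches 4*pi*a_s^2 (a_s = r0) as k -> 0.

r0 = 1.0  # sphere radius; sets the unit of length (illustrative value)

for k in [1.0, 0.3, 0.1, 0.01]:               # wave numbers, in units of 1/r0
    delta_s = -k * r0                          # exact hard-sphere s-wave phase shift
    f = (np.exp(2j * delta_s) - 1) / (2j * k)  # s-wave scattering amplitude
    sigma = 4 * np.pi * abs(f) ** 2            # total s-wave cross section
    ratio = sigma / (4 * np.pi * r0 ** 2)
    print(f"k*r0 = {k * r0:5.2f}   sigma / (4*pi*r0^2) = {ratio:.6f}")
```

As $k r_0 \to 0$ the printed ratio tends to 1, i.e. $\sigma \to 4\pi a_s^2$ with $a_s = r_0$, exactly as the formulas state.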
Silicon is industry's most famous semiconductor. Its electron configuration is the following (1s² 2s² 2p⁶ 3s² 3p²):
[Figure: silicon orbital box diagram]
It means that the highest-energy occupied orbitals are the 3p orbitals (which are only partially filled). Now, silicon's band structure is often represented by the following (simplified) model:
[Figure: simplified silicon band structure]
Here the heavy-hole, light-hole and split-off valence bands are made of p-orbitals, and the conduction band results from the s-orbital. This seems quite inconsistent to me, because the 3s levels are less energetic than the 3p levels (see the orbital box diagram above), and should therefore be the ones which correspond to the valence band (and vice versa for the conduction band and the p-type states). Why is it so?
There are several simplifications or confusing points in your thinking that should be clarified. (One that is not totally relevant to the discussion is that the indirect gap should, of course, be lower in energy than the direct gap in silicon - it isn't in the sketch above.) The transition from localized atomic states to crystal Bloch functions does not have to be, and usually is by no means, one-to-one. Period. The one-electron Schrödinger equation solutions for an atom are not the solutions to the extended Bloch equation. However, one can start with one-electron wave functions (which as you know are a basis set) and mix them to obtain the Bloch functions. The classic band structure calculation methods (such as $k\cdot p$) do just that (ab initio calculations were way too expensive in the 1960s). For your question, I would refer you to Manuel Cardona's 1966 Physical Review paper as one good example for both silicon and germanium. Now, in the general approach of mixing one-electron wave functions to generate Bloch states, you can go back and look at how 's-like' or 'p-like' a given band is (because that, of course, is how your answer is represented, since you started with the one-electron states as a basis set). Well, actually you have to look at how 's-like' or 'p-like' a given band at a particular momentum is. Not surprisingly, the 'mixing' parameters vary with crystal momentum, and Cardona's paper is full of figures showing the relative contributions of the different eigenstates. In addition, while silicon and germanium should be pretty much the same, the fact still remains that the relative mixing that generates the similar-looking bands is different. Not amazingly different, but different nonetheless. It is not a straightforward 'atomic p orbitals become valence bands'. So, bottom line, two different simplifications were made to you that resulted in your confusion (I believe): 1. atomic states 'directly become' crystal states, and 2. the Si valence band is made of just p-like atomic states. Both are, perhaps, reasonable first simplifications, but as with all of physics one has to dig a little deeper when you start to run into trouble.
Thank you Jon Custer for the very detailed answer! I have a handwaving argument for the s- and p-like crystal wave functions. Namely: going from atomic wave functions (s, p, ...) to molecular wave functions, we get $\pi$ and $\sigma$ bonding and antibonding molecular orbitals. Then, we can go further to crystal energy bands. In the molecule, the $\sigma$ bonding orbital goes very deep down into the valence band, whereas the $\pi$ (p-like) bonding orbital (HOMO) goes into the upper valence band. The lowest unoccupied (LUMO), anti-bonding $\sigma$ (i.e. s-like) molecular orbital becomes the conduction band in the crystal.
All this happens in exactly the way Jon Custer already explained above.
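To make the answer's central point concrete (that the s/p character of a band varies with crystal momentum), here is a minimal toy sketch in Python. It is a 1D two-orbital tight-binding model, not silicon itself; all on-site energies and hopping parameters are illustrative assumptions rather than fitted values:

```python
import numpy as np

# Toy 1D chain with one s and one p orbital per site (NOT silicon itself).
# The Bloch Hamiltonian couples s and p through a term odd in k, so the
# orbital character of each band depends on the crystal momentum.
# All numbers below are illustrative assumptions, not fitted to any material.

E_s, E_p = -5.0, 0.0                 # on-site energies (eV, hypothetical)
t_ss, t_pp, t_sp = -1.0, 1.5, 1.2    # nearest-neighbour hopping integrals (eV)
a = 1.0                              # lattice constant

for k in np.linspace(0.0, np.pi / a, 5):
    # Hermitian 2x2 Bloch Hamiltonian in the {|s>, |p>} basis.
    H = np.array([
        [E_s + 2 * t_ss * np.cos(k * a),  2j * t_sp * np.sin(k * a)],
        [-2j * t_sp * np.sin(k * a),      E_p + 2 * t_pp * np.cos(k * a)],
    ])
    energies, states = np.linalg.eigh(H)
    s_weight = np.abs(states[0, :]) ** 2   # |<s|band n>|^2 for each band n
    print(f"k = {k:5.3f}  E = {np.round(energies, 2)}  "
          f"s-character = {np.round(s_weight, 2)}")
```

At $k = 0$ the s-p coupling vanishes and each band is purely s- or p-like; away from the zone center the bands mix, which is the momentum-dependent mixing that Cardona's figures display for the real crystal.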
Science, Mathematics, And Sufism
I know a professor of theoretical physics, with whom I've had many interesting discussions over the years. (Disclosure: I came to Sufism via science.) I wanted to do an interview on the topics we covered with someone who, like me, had progressed from science to Sufism. For those who are the least bit interested in science, physics, and mathematics, the article below will, I believe, prove quite rewarding. The language is simple and no higher mathematics is involved, except only briefly.
Of the "99 Beautiful Names of God," one is al-Muhsi (The Reckoner, Appraiser, or Accountant): The One who possesses all quantitative knowledge, who comprehends everything, small or great, who knows the number of every single thing in existence. In Arabic, the root HSY connotes "to number, count, reckon, compute," "to collect in an aggregate by numbering," "to register or record something," "to take an account of something." I conclude that a more concise rendition in English would be: God the Mathematician. Of course, another of God's Beautiful Names, the Omniscient (al-Alim), is all-inclusive, so that God's Knowledge (ilm) encompasses mathematics, physics, and biology alike. But "the Mathematician" makes it more explicit. In fact, quantity (miqdar) and destiny (qadar) both derive from the root QDR, and thus are inseparably intertwined.
On May 18, 2014, I recorded a lively conversation with my friend, who wishes to remain anonymous. Highlights from that discussion follow. Text in bold, in brackets, and below graphics belongs to me.
The incredibly sophisticated nanotech machine designs within a single cell. Watch it and weep. Then ask yourself: can this be the outcome of any random collocation of atoms? When a cell dies, it has precisely the same components. Why then do they lie motionless in the case of a dead cell?
So… Where shall we start? Well… They say, "When a person comes of age, s/he becomes responsible" [religion-wise]. Why? Because a person can comprehend the existence of God by reason alone. The mind is enough to know that God exists. A flower, a bit of soil, a car. Can these nice things have come about by themselves? We're talking about initial creation, of course. Once the mechanism is in place, after it becomes self-reproducing, things are easier. Order, disorder. What I've seen in life is, unless it's cultivated, nothing tends to improvement. If something has a chance of going wrong, it will. That's Murphy's Law. But there's such an established order that you don't have to be a professor, you could be a mountain peasant. When you look around, you see this exists. Your child is born. If you leave it alone, it won't grow up, the child will die. You have to show it exceptional care. There is no need for intelligence to know that a child has parents. That is, you already know it has parents. And this child that is the universe has a parent too, it has an Owner, a Creator. You go to the moon, you find a color television there. Would anyone in their right mind say, "This TV was formed spontaneously out of the ground"? This is absurd. But they do say that. It's called evolution by random mutation. What do they take refuge in? They take refuge in time. But the law of entropy tells us the exact opposite. Time is more of a negative factor in these matters. Time is something that degenerates, unless there is a driving force supporting the process.
They say that radiation causes the mutations, but in all the examples I know of, radiation has a deleterious effect on living tissue. Radiation is one of the causes of cancer. "A drowning man will grasp at any straw." That's why a child is responsible upon reaching eighteen years of age. Because the child is no longer a child, s/he can analyze and see certain things. As a result, I think the intellect alone is sufficient to comprehend God. Prophethood and so on are something else. They're more specialized matters. Now science has a dead-end of this sort. They used to define the law of entropy as: "Left by themselves, systems tend to disorder." Now they've changed this, they've removed the word "disorder." They're trying to abstract entropy away from disorder, they're trying not to use the words "entropy" and "disorder" together. This is in the newer textbooks. Because otherwise, you ask: "how did this order come about?" Now they write entropy as an equation, they don't mention disorder. Physicists have tried to circumvent this, to find a solution to the question of entropy, and have wound up nowhere. That's the second law of thermodynamics, isn't it? Yes. And a peasant doesn't call this entropy, but he says, "If you don't tend your garden, you'll get weeds." If you were to bring together all the ingredients of a cell and shake them up, the probability that something will come of that is inconceivably less than 1 divided by 10¹³⁰, which is already a vanishingly small number. That is, it's zero. For all practical purposes, this means zero. [See Appendix A. We're talking about the first living, self-replicating cell. A back-of-the-envelope sketch of where a number of this order can come from appears at the end of this passage.] But people usually miss the really important point here. If the probability of something occurring randomly is zero, then the probability that it did not occur by chance is a certainty. 1 − 10⁻¹³⁰ ≈ 1 − 0 = 1. Now they don't emphasize that, of course!
Mind-blowing Animations of Molecular Machines inside Your Body [TED].
To claim that all the intricate mechanisms and processes of life could have arisen from inert matter by blind chance, given no matter how many billions of years, is not just an insult to God's intelligence, but also to our own. It is to elevate the "intelligence" that can emerge from chance to the level of God's, to impute the highest IQ to random events. Is that anything other than "chance-olatry"—the worship of chance?
And if you say it will form into a cell if shaken for umpteen billion years, that's an untestable hypothesis, and hence not science. Actually, quite to the contrary, entropy dictates that not long afterwards, you'll have a homogeneous mixture, and it'll stay that way. Try it with two or three different powders or differently colored liquids, and you'll see. Shaking more vigorously, adding more energy, doesn't change the result. So time is no solution, either. On the contrary, time has an adverse effect. Hence, a mind that can't perceive this shouldn't be considered responsible. Because from the point of view of religion, there's no responsibility when there's a problem with the intellect. A sacred verse says, "God casts defilement on those who don't use their reason" (10:100). So you have to use your intellect. There are so many verses that say "men possessed of minds," "do you not reflect?" But we use our mind for other things. We know very well how to use it for diabolical stuff. What do scientists do when they're desperate? They resort to time. Whereas entropy tells us the exact opposite. So a cause remains an unavoidable problem.
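[As promised above, here is one back-of-the-envelope way to arrive at a number of the order of 10¹³⁰. It is a sketch under stated assumptions only: one specific chain of 100 amino acids, each drawn at random from the 20 standard types. The precise assumptions behind the figure in Appendix A may differ.]

```python
from math import log10

# Probability of hitting one specific 100-link chain by chance, each link
# drawn at random from 20 types (a hypothetical stand-in for a minimal
# protein; the assumptions behind Appendix A's exact figure may differ).

n_types, chain_length = 20, 100
log10_p = -chain_length * log10(n_types)   # log10 of (1/20)**100
print(f"P = 10^({log10_p:.1f})")           # -> P = 10^(-130.1)

# The complement, the probability that this did NOT happen in one random
# trial, is 1 - 10^-130: indistinguishable from certainty.
print(1.0 - 10.0 ** log10_p)               # prints 1.0 in floating point
```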
What do you do to get rid of it? You say there was a big bang, and before the big bang there was something else, and before that… you look for a way to wiggle out. Even if you didn’t know about the big bang, I think one ought to know that this can’t be of itself when one beholds this order. One has to see. This is insight. [For more on this see Appendix B, taken from another discussion.]

[Images: gear structures in insects.] Interacting Gears Synchronize Propulsive Leg Movements in a Jumping Insect (Science, 13 September 2013): gear technology designed into the legs (and hence the genes and DNA) of young planthoppers. The mechanical gear was invented by humans around 300 B.C. For millions of years, a 3-millimeter long hopping insect known as Issus coleoptratus has had intermeshing gears on its legs with 10 to 12 tapered teeth, each about 80 micrometers (80 millionths of a meter) wide. The gears enable the creature to jump straight. The teeth even have filleted curves at the base, a design also used in human-made mechanical gears since it reduces wear over time. A second example: the screw-and-nut system in the hip joint of the weevil Trigonopterus oblongus, with a screw thread half a millimeter in size. Weevils, of which there are 50 thousand species, are a kind of beetle, and have been around for 100 million years. These are examples of God’s handiwork in His aspect of Engineer.

You pose a problem in mathematics. One person sees the solution in a second, another sees it in an hour, a third doesn’t see it at all. I think this is like that, with the difference that psychology plays no role in a mathematical problem. Psychology does have a role when you look at nature and infer God. The way you were raised, what your parents taught you, what you received from your surroundings, can prevent you at that point. Because there’s a phenomenon called hypnotism, and this is a form of hypnosis. I hypnotize someone, I plant the suggestion: “When you wake up, you won’t see that phone.” After they wake up, I ask for the phone. They just can’t find the phone. These experiments have been performed. And human beings are hypnotized like that, only they’re not aware of it. So that person can’t ever find God, because they’ve been hypnotized since childhood. They’ve been conditioned. Conditioning takes time. The Prophet said, “Every child is born a Moslem; their parents turn them into something else.” How do they do that? Just so, by conditioning.

So the intellect is very important. But intellect is not enough by itself. Until about the year 1700, we talked trusting our intellect. Science didn’t advance much. We talked for thousands of years. Physics was like history, like geography. Everybody was a physicist. How did this change? With Newton. Prior to the twentieth century, there were three great scientists: Newton, Galileo, and Maxwell. Maxwell isn’t emphasized that much, but he did something of paramount importance. He’s the one who solidified the mathematization of physics. Newton started the mathematics. He laid down the “method of fluxions” (differential calculus)… He introduced mathematics to mechanics. Galileo emphasized the importance of experiment. But Maxwell is the person who wrote down all electromagnetic phenomena in the form of differential equations. So there’s a solid mathematization there. And at that point, a discrepancy in the equations presented itself: a conceptual discrepancy.
Maxwell resolved the discrepancy according to his own lights: he balanced the equations by adding another term. That’s when it emerged mathematically that electromagnetic waves exist. And so, we actually owe the foundations of our present technology to Maxwell. The mathematization there is as significant as Newton’s.

The physicists of his time objected. One of the protesters was Faraday. Maxwell mathematized Faraday’s Law, as well. Faraday’s objection at that time was: by itself, mathematics does not include any laws of physics. In other words, he’s saying, you’re doing this, but you’re doing it in vain. He objects, he says it won’t contribute much to physics. But Maxwell mathematizes these laws. Now this is very important in present-day physics. You pose a problem, you build a mathematical model of it. Writing the math is a skill all its own. Maxwell did this, and then the objections ceased.

When Newton did it, they said, “You’ve done this, but physics has become a specialized science.” We were all physicists before that. You’ve done this math, but it’s a specialized field. So you’ve reduced it to a very small scope, they said, and the objections continued. The principle of gravitation, for example: you say it’s mathematical, but you don’t explain how it occurs. But after Maxwell, because there was that prediction, the objections ceased. Hence, mathematics accomplishes a very great thing. Looking at it from the viewpoint of classical mechanics, Galileo says, “Let’s check it with experiment.”

The mathematical mind may be beautiful, but it’s not everything. The superiority of the mathematical mind to other kinds of mind is that it is a very concrete form of mind. For instance, there’s water vapor and then there’s ice. But the second is concrete. Water vapor exists, too, but it’s not as tangible as ice. Now you have your way of thinking, I have mine, she has her own. And the logic of each of us has internal weaknesses which we can’t perceive. But mathematics prevents that. Mathematics has become concrete, that is, it has been tested, formulated, thought through by thousands of people. When you apply mathematics, you’re automatically freed of the weaknesses, the fallacies of your personal logic. So mathematics is a more concrete form of logic, of the mind.

I’m saying this in terms of its application to physics. Otherwise, there are fields where it can’t be applied. It can’t be applied that much to psychology. I don’t know to what extent it will prove applicable to neuroscience, to modeling the brain. But much that is useful has come of this. We knew seven planets; it was thanks to mathematics that the existence of the eighth planet was proved. Mathematics predicts. You do the calculations, they don’t agree. The coordinates don’t match, they diverge. Either our model is wrong, or something else is afoot that we don’t know about. What is required for this to occur? You say, there has to be a planet of this mass in such-and-such a position. They say, look at this point on this day, at this hour, and you’ll see a planet. That’s how the eighth planet was first sighted. Two astronomers, one French and the other British, are involved. Lo and behold, on that day at that hour at that point, a planet [Neptune] is observed.

Now this invalidates Faraday’s claim. He was saying that mathematics could not make physical predictions on its own. What did it do? It predicted. That is, mathematics is usually regarded as a tool. But it’s slowly going beyond being a tool.
It’s becoming a means of discovery. It’s becoming something of a trailblazer, a pioneer. A tool is a thing that helps you do something; it’s passed beyond that. And the same with the ninth planet, too. This time, perturbations in the orbit of the eighth planet led to the discovery of the ninth [Pluto]. But the ninth planet was discovered with more difficulty. And then it was demoted from the status of being a planet. They call them “dwarf planets.” Because of the tenth planet, the ninth was demoted.

Now, back to Maxwell: he says there’s a discrepancy, a mathematical, a logical discrepancy. As he gets rid of that, he finds a wave equation there. Hence he says, electromagnetic waves exist. He calculates their velocity, and it turns out to be the speed of light. Therefore, says he, light is an electromagnetic wave. And these are all things that were subsequently verified experimentally. Hertz, Marconi… The basis of today’s technology and communications lies there. This is one of the major breakthroughs. What did mathematics do? It paved the way for something. It led to a new discovery. After being confirmed by experiment, of course. In physics, one should never forget that principle of Galileo.

Examples of this abound. We now come to quantum mechanics. For instance, in quantum mechanics, Dirac’s equation. Dirac’s equation renders quantum mechanics and relativity compatible with each other. The solutions of this equation are more accurate than those of the Schrödinger equation. But here, too, there is a discrepancy, just as there was in the case of electromagnetics. Then Dirac says, there has to be a particle with the same mass as an electron, but with opposite charge. Within a year or two, the positron is discovered. It was so unexpected that the discoverer was awarded the Nobel Prize. Now what has mathematics done? It has again led to a new discovery, it has served as the means to finding a new physical entity. Again, it has passed beyond being a mere tool. And there are many more examples like this.

Now, physicists are amazed by this. Eugene Wigner wrote an article on “The Unreasonable Effectiveness of Mathematics in the Natural Sciences.” Stephen Hawking has a saying like that, too. He asks: “What breathes fire into the equations?” This confusion arises from the assumption that the system excludes the God concept. Otherwise, they wouldn’t be amazed. Because such precision… everywhere there is a logic, a mind, an Infinite Mind at work. Entropy is the cause of our amazement: how can such order exist? The assumptions are wrong. These phenomena clearly tell us that these things can’t happen by themselves, there is an Infinite Mind here. And as far as physicists are concerned, this is a real conversation-stopper.

From here, scientists and philosophers go on to other things. They say [with mathematician David Hilbert]: “Mathematics is a game.” Well, if it is a game, how come it’s so effective in physics? Mathematics is real. But there is no mathematics in nature. The numbers 2, 3, … don’t exist as objects in nature. You infer these yourself. For instance, half-integers. Irrational numbers. Rational numbers. Complex numbers. These are entirely constructs of the mind. For example, complex numbers were invented completely independently of physics, so that certain mathematical equations could have a solution. And what do we find, centuries after they were invented? Without complex numbers, quantum mechanics cannot be formulated.
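[The “other term” and the prediction can be made explicit; I add the standard modern form here for reference. Maxwell supplemented Ampère’s law with the displacement current:

∇ × B = μ₀J + μ₀ε₀ ∂E/∂t.

In empty space (J = 0), his equations then combine into a wave equation whose propagation speed is

c = 1/√(μ₀ε₀) ≈ 3 × 10^8 m/s,

which matched the measured speed of light; hence his conclusion that light itself is an electromagnetic wave.]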
There are four or five formulations of quantum mechanics; all of them require complex numbers. There’s just no way to avoid them.

Isn’t it the same with electricity?

No. Complex numbers provide simplicity there. But you can do the calculations without resorting to complex numbers at all. Here, on the other hand, you can’t do anything without complex numbers. You don’t have that luxury. And here again, the question arises: weren’t numbers a construct of the mind? Why are mind and nature such an inseparable whole? These are presumably surprising questions for physicists. Also, there is intellect there, but not every intellect. That’s why Galileo is so important. You have to test it against nature, to check whether that intellect is there or not.

For instance, there are four kinds of what are called “division algebras”: real numbers, complex numbers, quaternions, and octonions. (In a division algebra, every nonzero number has an inverse.) As you move from the first to the last, you lose a property at each stage. Real numbers have the property of ordering: for instance, 5 is greater than 3. With complex numbers, you can no longer say which is greater, 3 + 5i or 5 − 6i. With quaternions, you lose the property of commutativity, and with octonions, you also lose the property of associativity. Now real numbers and complex numbers are used in nature, but quaternions and octonions are not. A group of physicists tried to formulate quantum mechanics in terms of quaternions, and nothing came of it. And the same holds for octonions. So that’s why experimentation is so important: you have to check the applicability of your mathematics to reality.

In conclusion, the effectiveness of mathematics is unreasonable only if you exclude God. If you include that concept, then it becomes eminently reasonable.

Now Plato says that mathematics has a reality independent of us. He says we access it by extensions of the mind, and project it on the physical world. That’s why it’s called a Platonic reality. And the same with love: you love another, that person doesn’t know anything about it, it’s all in the lover’s mind. That’s why that love is Platonic love. But this Platonic reality is a peculiar kind of reality.

Where would physics be without mathematics? We would still be talking. We would be in the situation that existed prior to 1600-1700. There would still be a physics: crude, experimental, somewhat like meteorology. In meteorology you make forecasts. But is it like that now? I launch a rocket; thanks to my calculations I know where it’s going to fall, down to the centimeter. With our calculations, we can predict the exact time and duration of a solar or lunar eclipse that will happen 100 years from now down to the second. Now these are not trivial things. Mathematics equates with the mind, an intelligence that pervades the entire universe. Now we have trouble admitting this. So we don’t want to see or hear certain things.

The question of entropy remains unresolved. The formation of the first living cell remains unresolved. It cannot be resolved, because there’s the law of entropy. Those experiments have been performed, that organic soup has been made. Stanley Miller did one experiment, Sidney Fox did another. You place the gases you imagine composed the atmosphere at that time, you give the electric current, that corresponds to lightning strokes. You get amino acids. Amino acids are the building blocks of proteins, so you conclude that life emerged from there.
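[A quick aside on the division algebras mentioned above. The loss of commutativity in the quaternions is easy to see numerically; here is a minimal sketch in Python, with the Hamilton product written out by hand rather than taken from any particular library:]

def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))   # (0, 0, 0, 1):  i*j = k
print(qmul(j, i))   # (0, 0, 0, -1): j*i = -k, so i*j != j*i

# Complex numbers, by contrast, still commute:
print((2 + 3j) * (1 - 1j) == (1 - 1j) * (2 + 3j))   # True

[For quaternions the order of multiplication matters, just as it does for the operators of quantum mechanics discussed further on.]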
But it’s not merely a giant step, it’s an impossible step, from amino acids to proteins, if you’re going by chance. OK, how are these organized? Sidney Fox did that experiment. Nothing came of it. By that time, ten years had passed. And nothing would come of it if they were to remain there for ten million years more, because there’s the law of entropy. We say that given time, we’ll solve this. And that’s just kicking the can down the road.

Now, why is mathematics so effective? Because nature is the product of a mind. There’s an Infinite Mind in the universe, a Mind that beggars our minds, that makes us look like simpletons. Moreover, that Mind also has to possess infinite power, in order to enforce those laws all across the universe, from the macrocosmos down to the microcosmos, at every level. Take a single cell, a single human, a single life form. There’s a phenomenal mechanism there, there’s a monumental set of laws. We’ve understood little bits and pieces of these; that is, what we understand doesn’t amount to much. And that, we understand by isolating. For example, we try to understand an atom, a hydrogen atom. We act from the principle of linear superposition. We dismantle things like a clock and assume that, like a clock, they’ll work in the same way when they’re reassembled. Of course, because our approach is atomistic. We haven’t seen any other kind, we don’t know. And we can’t wrap our minds around it, because it’s nothing comprehensible.

Now a holistic approach, that’s something else. It’s the outcome of a different state of consciousness. Since we’re in atomistic states of consciousness, our minds too are atomistic. If we had holistic states of consciousness, perhaps we would have holistic minds. There are people with holistic consciousness. We don’t always understand what they say, because they’re talking from a different state of consciousness. A butterfly has a consciousness of its own, a mind of its own. A human has a consciousness of his own, a mind of his own. It’s like that, that is. There’s a relationship between consciousness and mind.

You always say that “quantum physics is holistic”…

Not many people realize this. Before Newton, mathematics was at the level of arithmetic. Until quantum mechanics, in classical physics, we understood events atomistically, that is, we understood them one at a time. We draw diagrams, those diagrams have correlates. The resultant of two forces, and so on. In quantum mechanics, the dose of mathematics is stepped up even more. But our understanding diminishes. We have difficulty in comprehending the phenomena. In classical physics, we thought we understood the phenomena. We could take events on a piecemeal basis. In quantum mechanics, there’s a helium atom: it has 2 electrons and a nucleus, and the nucleus has 2 protons and 2 neutrons. But we deal with it as a system. When we speak of the energy level of the helium atom, we don’t mean the energy level of the electron, the nucleus, or the proton; we consider the energy level of the system. The phenomenon is approached as a whole. What happens then? We can’t draw a diagram. The diagrams we draw are abstract. Hence, they have no pictorial representation. Pictures are out.

So, three stages: first, arithmetic. Next, a physics at the level of calculus. Third, again physics at the level of calculus, but depiction is lost. Because our assumptions changed. We approached the phenomenon holistically. Why did we do that? Not because we wanted to. We were forced to do so.
In order to make sense of the experiments. We can’t comprehend the results of experiments. The experiment is there, but its results don’t make any sense. We had to derive this formulation in spite of ourselves. The experiments forced this on us. And what is essential in physics is the experiment. Then we sat down and thought about what it was we had discovered. We had found something holistic.

How about a definition of “holistic,” while we’re at it?

First, let’s clarify what we mean by “atomistic.” Let’s say there’s an event in the solar system. We take the sun separately, the moon separately, this planet separately. Then we do our calculations. Each component has an identity of its own. The values of every component are important. Now, for example, in the helium atom, the hydrogen atom, the individual states of protons, of electrons, are no longer of importance. We’re looking at it as a system, that is, as a whole. That’s what “holistic” is. In other words, not to go from the parts to the whole, but to deal only and directly with the whole.

To make a jump, could we deal with the universe in the same way?

The wave function of the universe. There have been studies like that. [Everett-Wheeler-Graham (EWG), “The Many-Worlds Interpretation of Quantum Mechanics.”] Here’s what this means: let there be a wave function, and let all that can be known in the universe be in that wave function. Just as in the representation of the hydrogen atom, there’s all the information related to the system. Now this is a significant jump.

First, it places us in a more helpless situation. It’s like Gödel’s theorems in mathematics. What do Gödel’s theorems do? They undermine the foundations of mathematics, they make it more insecure. We used to be determinists, we used to know everything. Now, we don’t know everything. We don’t know what we’re going to find when we conduct an experiment. We can only say, you’ll find this with this probability and that with that probability. And I don’t know how correct that is, because in order to say that with certainty, you’d have to conduct an infinite number of experiments. Only the menu I’m offering you is definite. But I can’t tell you which item you’ll discover. Because this is a holistic matter, there’s an indeterminacy there. There’s always this in holistic things: a lack of certainty. We can’t understand it, but in the end, we can know the energy levels. And we can do this with great accuracy. We can observe them in experiments. And this has been a very great success.

[Quantum electrodynamics, or QED, has been tested to an accuracy of one part in 100 billion (more recently, in 2006, eight parts in a trillion). The famous American physicist Richard Feynman compared this degree of accuracy to mathematically calculating the distance between New York and Los Angeles to within a hair’s breadth. In other words, this is equivalent to predicting the width of North America with the precision of plus or minus one human hair.]

There’s no such thing in classical physics. But actually, there’s a parallel between classical physics and quantum mechanics. Classical mechanics has four or five different formalisms, and quantum mechanics has four or five different formalisms. The parallel does not hold for every formalism, but the Poisson bracket formalism of classical mechanics is almost the same as the formalism of quantum mechanics, with one difference. The general appearance of the equations is the same.
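[The “one difference” can be written down explicitly. Dirac’s correspondence rule replaces the classical Poisson bracket {A, B} by the quantum commutator divided by iħ:

{A, B} → (1/iħ)(AB − BA).

For example, the classical relation {x, p} = 1 becomes the canonical commutation relation xp − px = iħ. Everything else about the equations keeps its shape; this single substitution is the difference being referred to.]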
To me, this looks like the following: in the Koran, they say Ibn Abbas gave a verse’s hidden meaning by interpreting it differently. That’s not what you understand when you read the verse. And I say, that’s what the equation states, but you have to take it as a commutator. That is, there can be different approaches like that in reading the book of nature. There’s actually a one-to-one correspondence, so you penetrate to a deeper level of meaning. But you can’t logically prove one from the other. That is, you can’t prove the equations of quantum mechanics starting from the equations of classical mechanics. You see the similarity, but there’s no direct proof.

That sounds like pattern recognition, doesn’t it? That is, there’s a formal similarity.

It’s not just a morphological similarity. For instance, the values of the commutators are identical. So it’s not only a matter of form. Give me the Poisson bracket of anything, I’ll write down its quantum mechanical equivalent for you. This goes beyond the formal. I know the Poisson bracket of a hydrogen atom, of a harmonic oscillator; I can write down the corresponding equation in quantum mechanics, because of this similarity. And the results are phenomenal. This is a different meaning of “a book with twin verses” [the Koran], that is, they have dual meanings. [The book of the universe is here being compared to the Koran.]

Taking the meaning of “verse” (ayah) as “sign” here…

Of course, not as words, but as God’s universe, God’s signs. That is, there’s a signifier in everything. In fact, there are even deeper meanings, and that happens in quantum field theory. Then you give a slightly different meaning. Now there are operators, and the things they operate on. If you assume commutation relations in the operated-on (the operand), it becomes quantum field theory, and that yields even more accurate results. In other words, there are nested meanings. Maybe that’s the case with everything, I don’t know. I’m saying this in terms of physics. But mathematics has an extraordinary role in our discovery of these.

From the viewpoint of physics, however, not every mathematics is always useful. If the assumptions are valid, if you base your mathematics on those, the result is sensational. If the assumptions are wrong, nothing will come of it even if the math is correct.

That is, mathematics is actually a kind of gardening. Seed, cultivation, result. If the seed is the seed of a thorn, no matter how well you cultivate, you won’t get apples from it. Your seed simply has to be the right seed. And that seed is your assumptions. Why, for example, can’t we reach a result in the case of entropy? We can’t sow the right seed there, due to psychological reasons. That’s our problem. So we continue to be surprised. “The unreasonable effectiveness of mathematics” is not unreasonable at all. Why should you be surprised about the mind of God? [Nothing lies beyond its ken.]

You mean it’s not so hard to pass from science to religion?

You can pass to religion from anything, even from art. Perhaps you’ve heard of the joke: “I used to believe in no God, until I saw her. That’s when my opinion changed.” That is, such beauty can’t be accidental. This art can’t happen of itself. This rose doesn’t grow of itself. This scent doesn’t emerge by itself. This beauty, this intricate design, can’t exist of itself. You don’t have to be a physicist to understand this. Take any phenomenon. After you see the balance, the beauty there, you’ll say, this can’t happen on its own.
Of course, there’s the matter of faith here. Anything can be a cause of faith. But there’s also the verse: “Nobody can have faith unless God desires it” (10:100). Some come to faith easily, others just can’t. But if there has to be an occasion for it, it doesn’t have to be mathematics or physics. But mathematics and physics make it crystal clear. So does medicine. A doctor. If the diagnosis is wrong, you can’t heal no matter what the therapy is, right? But for the diagnosis to be correct, you have to have a firm grasp of the processes. And you have to know that nothing is accidental, you have to know the mechanisms, to be able to reach the right diagnosis. Feynman explains all this elegantly. There were two objections against Newton: 1. You mathematized physics, you made it specialized. 2. You didn’t explain how gravitation occurs, you called it “action at a distance.” This is magic, and it has to remain so. The sun attracts the earth. How does it do this? The mechanism isn’t described. This was also Einstein, Podolsky and Rosen’s (EPR) objection to quantum theory. Einstein opposed quantum entanglement on the grounds that it was “spooky action at a distance” (spukhafte Fernwirkungen). It was everyone’s objection. Einstein turns gravitation into the curvature of spacetime, that has problems of its own. For two hundred years, people tried to devise a mechanism for it. There isn’t any. According to Feynman, there’s no difference between saying that gravitation attracts and that “the angel of gravitation” performs the attraction, because we don’t know what it is. For example, how does a proton attract an electron? Via an “electric field.” These are just words. Are these empty concepts, or can they be filled with meaning? That’s what we have to look at. Except that in quantum field theory, there’s an exchange of photons. We call them “virtual photons.” This tosses a photon to that, and vice versa. That’s how attraction occurs. Mathematically, many nice things have emerged from this. In the weak interaction (weak nuclear force), there is an exchange of W and Z bosons instead of photons. And in the strong interaction (strong nuclear force), gluons are exchanged. All these are by analogy. But there’s nothing there. There is no impressive prediction. Those in the know don’t say it out loud, but they know and feel it in their hearts. Because the assumptions are wrong, nothing comes of it. It’s the same in every science. Science is an activity performed by humans, and human beings have egos. How did you pass from science to religion? From the intellect rather than from science. But science refines this further. You see the accuracy more clearly. Let’s say that a human with a mind, anyone intelligent enough, can comprehend that all this can’t happen by itself when s/he looks at these relationships, this order, this art. But when you go deeper into the relationships, you discover how finely tuned, how delicate, how highly ordered the relationships are, with such great precision, and that cements it. That’s the real contribution of science. For instance, a doctor. When a doctor goes into that, s/he begins to see things on more of a micro level. They see much deeper than you or I do. So what happens? That cements it. And the same in other places, as well. For instance, if the distance between the sun and the earth were not what it is, there would be no life on earth. There are a thousand things like that. 
These things wouldn’t be if the ratio between gravitation and the electromagnetic force were not what it is. You perceive that so many coincidences just can’t coincide by themselves. I actually found God before science, but science riveted it.

For example, human beings, couples. There’s a man, a woman. God created them compatible with each other. From that union, a child is born. He gave affection so that that child could live, He created that environment. The male seed, the female seed, there’s an extraordinary design. At that stage, there’s no need to be a physicist to see this. What’s really important here is the patent. Once the factory is in place and working, things are a bit easier. I built up the shop, I left it to my child and went off. The child’s task is a bit easier. Forming it is more difficult. But if the children can’t take it forward, it’ll degenerate and get closed. “It happened by itself.” If so, why can’t I take it forth? Why can’t the child, who took it over in a ready state, take it forward? Therefore, it didn’t happen of itself. Now, this logic is all very clear, very simple. But you won’t see it if you don’t want to. That’s the real issue. One has to be blind to not see it. Or you have to have grown up blinded. For me, it’s impossible not to see.

Now, it’s possible to pass to the concept of God from science or art or something else. But how do we go from the God concept to religion? There, a vehicle is needed. The mind sees: OK, there’s something here. Why is experiment important in physics? The mind can’t solve everything. Reason has to be tested against reality. Experience is more important. Sometimes we know something from experience, we construct its reasoning later. To understand religion completely, experience is very important. The phenomenon of prophethood. You can’t understand that with physics, with mathematics. The phenomenon of sainthood, you can’t understand it with the intellect. Our Prophet inspired such a sense of trust in everyone, but in spite of that, not everyone believed in him.

Either that, or you have to be able to reach great conclusions from small experiences you live. You saw something in your dream, the next day it took place, it came true. This happened once, twice, three times, … There’s no place for this in science. Well then, hold on, friend, there’s something here that eludes your intellect. Now of course, this gives way to listening, to heeding. Why don’t literate people take religion seriously? Because they trust their own mind and do not listen. They don’t listen, they don’t feel the need to listen. First, they were raised that way. Second, they haven’t had experiences like that to astound them. Even if they have, they feel the immediate need to rationalize it. They bypass it. Otherwise, if only they were to start researching, the place to be reached is clear.

There’s a world you don’t know, a whole range of experiences you don’t know about. It’s all here. We call it the world of light. There’s the Realm of Power (Jabarut), the Stage of Nondetermination (La ta’ayyun), right? If you ask when that was, they say it’s all simultaneous. That is, they’re all here, and they’re here according to the level of consciousness you’re in.

You mean they’re not in any temporal sequence.

They’re not. Not anywhere else either, they’re actually here [and now]. That doesn’t mean nobody sees them. And that’s our main error. For example, I study mathematics, but I don’t understand it. That doesn’t mean nobody understands it.
Or, there’s going to be an earthquake: a dog hears it, I can’t hear it. In other words, there are things I can’t perceive. For example, elephants can hear a sound from a distance of ten kilometers. The elephant’s ears are designed that way; its trunk is designed to emit that sound. That is, both its transmitter and its receiver are suited to the task. My ears and mouth haven’t been designed for that. So the sizes and frequencies tally: because the wavelength is greater, the frequency is lower. I would be wrong to claim it doesn’t exist.

This has also been said of vision: of the electromagnetic spectrum, we see only a tiny sliver.

Of course, of course. Now they can photograph the same place in every spectrum. [Image: the Crab Nebula photographed in different spectral bands.] This is used in science, it’s even used in daily life. A thing that can’t be seen at one frequency can be seen at another. Why didn’t this exist before? It wasn’t done until now because we said, this can’t be. In the infrared spectrum, you see something there that you don’t normally see. So we shouldn’t trust our own perceptions too much, just as we shouldn’t trust our intellect too much.

This is also an ego problem. The stronger your sense of self, the more heedless you are, the more you trust yourself. And the greatest catastrophes occur because of that. It’s also true in daily life: you trust yourself too much, your company folds. And suchlike. Either you have to have nonordinary experiences, or you have to have experienced people by your side. They explain certain things to us. But of course, in order to understand these events, holistic concepts are needed. This makes comprehension even more difficult.

Do we need to think holistically in order to understand religion?

Religion [Islam] has its own kind of classical mechanics, that’s the Divine Law. It has its quantum mechanics, that’s Paths and Schools. For example, religion tells us, “Do this and this,” “Don’t do that and that.” These are things at the atomistic level. You have to do them yourself; you’re not exonerated if someone else does them. To understand other concepts, holistic things enter: “He who kills one person, kills entire humankind. He who saves one person, saves entire humankind” (5:32). Or, “Don’t gossip, you’ll put that person’s spirit in pain.” You find that you are no longer yourself, everything is interlocked, everything is connected with everything else. Holistic concepts are less well-understood, more delicate things. One reads them in one way, another in another. Like in classical physics versus quantum physics. The second taxes you from a holistic viewpoint; you understand with difficulty unless you’re used to it in terms of experience. If not, you shouldn’t deny, you shouldn’t take risks. That’s what the great Sufi saint Ibn Arabi says: “Even if you don’t believe, don’t deny.” Don’t say, how can this be? “What is in the universe, that is in man.” Don’t say this is impossible. You don’t have that, but don’t say nobody can have it, don’t take that risk. Now this is entirely holistic. Everything is in the human being. “In man there’s a mountain,” as the Master said. Well, I see no such thing? I can’t reconcile a mountain with a human being. Neither my intellect nor my spiritual condition are up to the task. I can’t understand quantum mechanics, either. Nothing in a high school student is ready for quantum mechanics. And those who understand aren’t entirely there either, but at least we agree that there’s truth in it. There’s a similar situation here.
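[A note on the elephant example above: for any wave, frequency times wavelength equals the wave’s speed, f × λ = v. An infrasonic elephant rumble of roughly 20 Hz in air (v ≈ 343 m/s) has a wavelength of about 343/20 ≈ 17 meters; the 20 Hz figure is a typical published value for elephant calls, used here only for illustration. Efficient transmitters and receivers for such long waves have to be large, which is the point about sizes and frequencies tallying.]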
You can’t explain everything to everyone, because they won’t understand. Plus, maybe there’s nothing to be understood, only something to be experienced. Mind alone is not sufficient to discover religion. The mind that comprehends the existence of God is responsible religiously. In order to go beyond that, you need an extra grace from God. Belief in God is a must. For that, the mind is enough. But believing in religion, believing in the Prophet, is a grace from God. There’s a verse to the effect: Noah says, “I’m telling you these things, but they’re no use if God doesn’t wish it.”  As Joseph’s brothers are going on their second visit to him, their father Jacob says, “Enter through separate gates. But if God doesn’t desire it, it won’t work.” No matter what you do, it’ll make no difference. Now we don’t understand this. We don’t understand the will of God. The Master once said, “God scattered a light. It struck some and didn’t strike others.” We don’t know the reason why. In particular, faith in the Prophet rests with God. That is, it’s a very special grace, believing in him is very difficult. Because when you say “God,” you bow to a superior authority. But the Prophet? “Well, he’s human and so am I.” There, the ego enters at once. “He could only have been an ordinary man. The conditions then were such-and-such, he said this, he administered, he was wild,” in the end there’s nothing there. “There was a clever man,” you say. And with that, you miss a lot. You need a special favor to believe that our Prophet was very special, that he was very different, that he was “a mercy to the worlds.” There’s no other way. Or else, God has to have given you the aptitude to derive great conclusions from small experiences. Then it’s possible. The Master riveted this. I reached that faith only with difficulty: the Prophet is a prophet. But the Master riveted down that faith in place. Our Prophet is very special. Now, this is very hard to believe. He is the best locus of manifestation the world has ever known. To believe like that is very difficult. Why is that true? Because all the Names of God were manifested in him. There’s no need for someone else. Why is there no need for another Book? Everything is in it [the Koran], even if we can’t understand this. So it’s not necessary. Whereas with the others, it wasn’t like that. Now it’s hard to accept it like this for our mind-dominated human beings. The ego is strong. Even at birth, children are princes or queens. Those egos won’t bend when they grow up. Here, you need to bend. You need to believe that God gave a mind-boggling boon to someone other than you. But I’m the king… In the language of his state, he says, “If He were to give it to someone, He’d give it to me, I’m king.” But God favors some human beings. Now, we look at the Koran. What’s there that’s bad about it? It says: “Do good, don’t do evil, don’t harm your neighbor, don’t charge interest on money, don’t be a burden to others.” It counsels all that is good. It says, “Don’t hurt anyone.” It also defines what is good. It says “This is good, do it, that is bad, don’t do it.” Otherwise, goodness is a relative thing. Thieves think what they’re doing is good. And that is like abandoning your mind to mathematics. Before Newton, everyone had intelligence. They still do, but everyone does things according to their own lights. In science, you receive guidance from mathematics, in religion you receive guidance from the Koran. You have to have a reference. Otherwise, everyone has their own reference point. 
Take morality. Everyone’s ethics is good from their own standpoint. Why are saints necessary? They hold a mirror to you. They show you yourself, they make you know yourself. Otherwise, nobody is aware of themselves. The Master shows you your error with extraordinary finesse. These things are entirely beyond the ken of contemporary human beings, even conceptually. They can’t even conceive of them, they can’t even conceive what they’re missing. These university professors, these people who think they’re clever, they don’t even know what they’re missing. Meeting the Master, I regard as God’s grace. There’s no other explanation. That is, the mind is at sea here. Everybody’s smart. Many university professors are more intelligent than I am. So this can’t be solely a matter of intelligence, there’s something else. I’m not smarter than they are just because I was graced with the presence of the Master. I realized that the world is not as I thought it was. This left me shaken. From that I passed on to other things. I already had faith in God, I believed in the Prophet, too.

Scientists need experiences that will stagger them, experiences that will shake their belief that they know everything. That’s the only way. Because these are matters of consciousness. In its essence, religion has to do with consciousness. You have to observe changes in your consciousness. You’ll realize then that things are different. There are different states of consciousness: your present state of consciousness, there’s hypnosis, there are different levels in hypnosis, there’s the consciousness of sleep, there’s dream consciousness, there’s lucid dream consciousness. Each is different from the other. And there are who-knows-what-other states of consciousness that I don’t know about.

Would you define religion as consciousness alteration?

Here’s how I view religion: religion is the process of becoming worthy of God by changing one’s morality. But as you alter your ethics, that has an impact on your consciousness. That’s of secondary importance. Being moral is more important than being in a different state of consciousness. The person whose ethics, whose character traits, are closer to the Prophet’s, that person is the winner. This is the primary criterion that I’ve come to understand in the long run. Morality is very important.

For example, we read in the Koran: “I chose him for Myself.” This is about Moses: “I chose you for Myself.” And the same for Abraham: “God chose Abraham as His friend.” Many of Abraham’s morals, character traits, are recounted in the Koran: “Abraham was of mild-mannered mien.” It also tells what God looks at: “He looks at your heart.” “God loves these, God does not love those,” right? “God does not love misers,” “God loves the generous,” God has given all the codes. Those things all pertain to morality. It doesn’t say, “God loves those who go to Mars in one leap.” It doesn’t say, “God loves those who do Spacefolding.” Nor does it mean that God doesn’t love those who do Spacefolding, but it’s important only in the second, third, or fourth degree. It’s not important if it’s not there. The Koran states very clearly: “God loves these, God does not love those.” If we were to list these, that’s where religion is. Because this is a matter of love. The heart of religion is love. Justice, that’s the Divine Law. Conscience, that’s the Paths. Love is the Reality. [The reference here is to the Master’s pamphlet: “The Secret That is Love.”] The main task is love. In other words, He created human beings out of love.
That’s how I understand it. He loves human beings very much. The Master stated that clearly: “God loved human beings very much.” (Teachings of A Perfect Master, p. 56.) The “Secret of Islam” is Love, nothing else. But if I remain at the level of a dog or some other animal, how is God going to love me? That is, religion is more a matter of changing one’s state of morality than of changing one’s state of consciousness. The focus is always on ethics. After the New Age philosophies, this all became: “Let’s change our state of consciousness.” But without a change in one’s state of morality, a permanent change in one’s state of consciousness can’t be obtained. You go up in a helicopter; five minutes later it comes down when it runs out of gas.

For example, let’s get top grades in the exam. How? Let’s cheat. But the means are more important than the ends. To obtain those credentials legitimately. This is actually stated very clearly in the books of the great Sufis. For instance, in the “Holy Bestowal” [by Abdulqader Geylani]. Worshipers: worship is very important. Scientists/scholars: knowledge is very important. The wise: the secret and maybe the state of consciousness are very important. But most important of all is the love of God. Then, the question becomes: “How can we attain that love?” And that’s not possible except by ethics, and that’s a very hard thing to do. If only our ethics were beautified by our saying so, my ethics would have improved long ago. No, that happens by suffering. By suffering hardships. It’s not easy for a rock to become earth. It happens in time, by suffering hardships. It happens by paying careful attention to principles. It happens by paying careful attention to the Prohibited and the Permitted. Religion is a matter of ethics, a matter of becoming worthy of God by this means. First things first. That’s what God wants. He says, “First fix your ethics, then come to Me.”

Intelligence is also important in these matters. There’s a Tradition of the Prophet: “Who has no mind has no religion.” Someone said: “My friend is highly moral.” The Prophet asked: “How is his intelligence?” “Not that much.” “Then he can’t progress very far.” On the other hand, if you’re not straight inwardly, the more intelligent you are, the more harmful you are. But the Master posits courtesy. Why? Because courtesy is actually morality. Courtesy is the refined form of morality. If you want the Owner, you have to fix your ethics.

At first, I didn’t understand that. I’m reading the Koran, it says “those who want Paradise,” but it also says “those who want God.” So there is such a concept as desiring God. What is this? It’s in the Koran. So some people desire God more than Paradise. [The Turkish Sufi poet] Yunus Emre said that, and the expression is in the verses of the Koran. But it’s hard to discern it there. He sang, “I need You and You alone.” It’s been said, “When God is present, neither heaven nor hell exist,” right? That is something amazing. Because we want to re-establish our severed link with God [re-ligio]. That’s our real quest. Heaven and hell pale in comparison. When you’re dealing with God, everything pales in comparison. Of course. Compared to infinity, every finite thing is zero.

It’s like this in our lives, too. How so? When our friends come visiting, we prepare a treat. But our friends don’t come for that bounty, they come for a reunion. The reunion is the important thing, not bounties or Paradise. Now suppose that some come for the food. Well, let them! Let no one remain hungry.
But the main point is not the bounty. Paradise is a boon, a wonderful boon. But in the end, it’s a boon. The phenomenon of Union is very different. What’s important for us is Union, just as it is for God. I see this in Sufi writings. What God desires is Union. God created human beings for Himself. And He said: “Fix your ethics, and come.” There’s something that will put “blessings such as no eye has seen and no ear has heard” to shame. That must be what they mean by “the Truth of Certainty.” You reach the highest level of proximity. Beyond “the Knowledge of Certainty” and “the Eye of Certainty.” That’s how we see the Master; he’s at the level of the Truth of Certainty.

We’re going to perform the Prayer, we’re going to Fast. But what does the Master say? “Even if your head doesn’t rise from prostration, it won’t happen without these.” So it’s a matter of ethics. Actually, this is religion: religion is the task of making yourself worthy of God. Can we achieve that? That’s another matter entirely. But that’s the purpose. We don’t know if we can go to Mars, but that’s our calling: to go to Mars. It’s not a matter of knowledge, of consciousness. You can have those too, but there’s a ranking in terms of importance. The important thing is to display praiseworthy conduct. A man rescues a kitten from the rain; that night he dreams that the Prophet is stroking his beard. So it pleased him. And what’s pleasing to the Prophet is pleasing to God as well. He couldn’t have dreamt that if he had spent that whole night in worship. Let him worship, by all means, but the thing is beauteous conduct. That is, God’s pleasure, something that pleases Him.

Mathematics is important because it represents the mind. Physics plus mathematics proves God’s existence. For it is by mathematics that we best analyze nature. The root of the matter is there. Nothing is accidental. Everything is calculated, programmed, precise. And this is a very clear indicator of God’s existence. If the seed is right, it will yield results. God attaches great importance to the intellect. If you have no mind, you’re not responsible. Because you can deduce the existence of God based purely on reason. If you accept the Prophet too, that’s awesome. And mathematics is important because it has become a means of discovery. But if your assumptions are wrong, mathematics won’t help you. If they’re correct, unexpected things can emerge from that. The mind, mathematics, and experiment have brought us to a place in three hundred years that we hadn’t been able to reach in the previous three thousand. It’s magnificent.

Great scientists, and Dirac is one of them, have arrived at the point that from now on, we need to study consciousness. We don’t know how to study it yet. The Sufi masters have been studying it for centuries. So where Dirac ends, the Sufi masters begin. Dirac arrived at that point. So did [Roger] Penrose. And that’s where everyone will arrive, sooner or later. That’s the point where the masters enter the loop. And then, you have to understand the importance of religion better. You have to perceive that religion is important, that morality is important, that things are not as you imagine them, that the intellect alone is not sufficient, in order to come to that door.

A small protein may typically contain 100 amino acids, each drawn from 20 possible varieties. For example, the protein histone-4 has a chain of 102 amino acids.
The probability of even one small enzyme/protein molecule of 100 amino acids being arranged randomly in a useful (and hence, necessarily specific) sequence would be 1 part in 20^100 ≈ 10^130. For comparison, there are ~10^80 protons in the entire observable universe. Even the smallest catalytically active protein molecules of the living cell consist of at least a hundred amino acid residues, and they thus already possess more than 10^130 sequence alternatives. Getting a useful configuration of amino acids from the zillions of useless combinations is an exercise in futility. A primitive organism has about the same chance of arising by pure chance as a general textbook of biochemistry has of arising by the random mixing of a sufficient number of letters. And the moment you say that non-chance events are involved, such as the folding and fitting of molecules, you fall outside the field of randomness. You implicitly admit the presence of order.

It appears that some people lack an adequate understanding of either the mathematical law of large numbers, or the physical law of entropy, or both. The law of large numbers (LLN) guarantees that, over many trials, observed frequencies converge to the underlying probabilities; if an event is improbable to begin with, an extremely large number of trials will only certify that improbability. Actually, the two are linked: “The law of the increase of entropy is guaranteed by the law of large numbers… order is an exception in a world of chance” (Hans Reichenbach, pp. 54-55), and the LLN is at the core of the second law of thermodynamics.

It would be unfair to one of the great names in quantum physics, Erwin Schrödinger, if we were to neglect mention here of his monograph, What Is Life? (1944). There, he explicitly associated life with negative entropy, or “negentropy” for short. This also ties in with Information Theory: information is a measure of order, entropy is a measure of disorder, so information is the negative of entropy.

The “randomists”—that’s what I call people who try to explain the origin and development of life by random events occurring over eons—claim that there are highly improbable events which nevertheless occur every once in a while. For instance, winning the lottery is a highly improbable event, yet somebody does win the lottery. And getting a royal flush in a card game is an extremely improbable event, yet it does happen every now and then. Starting from such examples, they argue that highly improbable events can become possible, probable, and even actual, given billions of years.

First, I should perhaps clarify that I’m not opposed to evolution as such. There’s the fossil record and all that. Natural selection exists. Mutations are a fact of life. What I’m against is supposing that extremely highly ordered phenomena, such as we witness everywhere in life, can be the outcome of chance events. Order does not arise spontaneously out of disorder. [To be more explicit: directed evolution is a possibility, random evolution is not. Nature cannot produce blueprints that have not been encoded into it.]

Now like I said, the reason for this can’t be found in logic. Rather, it’s psychological. Those who make this claim, the “randomists” as you’ve called them, are in a hypnotic state that makes them Godproof. They don’t want to see. These people who impute the most important things to chance: observe them and you’ll see, in their own lives they leave nothing to chance. Because deep in their hearts, they know that chance alone won’t get you there.
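[The arithmetic above is easy to verify with exact integer arithmetic. A minimal check in Python, using the round numbers from the text (100 residues, 20 amino acids, ~10^80 protons):]

import math

sequences = 20 ** 100         # possible 100-residue chains built from 20 amino acids
print(math.log10(sequences))  # ~130.1, i.e. 20^100 is about 1.3 x 10^130

protons = 10 ** 80            # rough estimate for the observable universe
print(sequences // protons)   # ~1.3 x 10^50 distinct sequences per proton

[Even if every proton in the universe were itself a trial chain, the useless combinations would outnumber the trials by some fifty orders of magnitude.]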
The lottery is designed so that at least one person will win. And you need not one, but a run of at least a thousand consecutive royal flushes to even begin to approximate the complexity of life processes.

You know Murphy’s law. It says: “If anything can go wrong, it will.” This is actually the law of entropy. And you need, not only intelligence, but also will, to counteract this. Consider a TV set. One component in the wrong place, and the device won’t work. Now put all the components of a TV set in a sack and start shaking. Do you actually expect that after a sufficient number of shakes, they will all fall into the right place and the TV will assemble itself? First you need a plan, a blueprint. For that, intelligence is needed. And then, you need an iron will and constant, diligent supervision at every step of the way, to ensure that the thing actually gets done. Otherwise, it’s hopeless. Without that, everything tends to disorder, as anyone who’s ever accomplished anything knows firsthand.

Let’s say you’re a Martian, and you see the Mars Rover moving about doing things. There’s no human being around, there’s nothing around, and yet it’s doing those things. It seems to be doing everything by itself, but it’s not. Someone has built it and is guiding it from millions of miles away. A chick lives and dies, but someone has to have programmed it, to have arranged it that way. We now have pilotless planes, but they were planned and developed over time. It didn’t happen all of a sudden.

That reminds me of what a friend once said about the “infinite monkey theorem,” as it’s called. There’s even a jingle about it, which I can’t resist quoting here:

There once was a brassy baboon
Who used to breathe down a bassoon
He said: “It appears,
In millions of years,
I’m certain to hit on a tune.”

In its simplest form, the infinite monkey theorem states that a monkey randomly punching at the keys of a typewriter (or keyboard) will, given infinite time, type out the complete works of William Shakespeare, without a single error, punctuation marks included. This is one of the arguments set forth to support the idea of evolution by random mutation. Now this friend was a doctor, and he said this when he was a medical student, when they were studying the intricate workings of the human body. He said: “OK, I’ll accept that a monkey can actually do that, given infinite time. What I cannot accept is that this human body, with its millions of processes going on simultaneously, can ever be the work of chance.”

How do those who insist on randomness get around all this? They defer to infinite time. Because you can’t test it. Or they invoke higher dimensions. You can’t test that, either. Or they call it a “quantum jump” [punctuated equilibrium]. That is, they throw the issue into untestable territory.

Feynman’s principles here are great. I like his approach. He says no theory can be proved right. For it to be certainly correct, it would have to pass an infinite series of experiments. A theory passes an experiment, that means it has passed that experiment; it has not yet been falsified. [The concept of falsifiability was developed by philosopher of science Karl Popper.] Today, there’s the situation that when a theory doesn’t conform with experimental facts, you go back and mathematically tweak the theory until it does, and hence you remove the possibility of falsifying it. And that’s an illusion.
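[To put rough numbers on the monkey theorem, here is a back-of-the-envelope sketch in Python; the 27-key alphabet, the 18-character phrase, and the typing speed are all illustrative assumptions:]

# Expected time for a random typist to produce one short phrase.
phrase_len = 18                    # e.g. "to be or not to be"
keys = 27                          # 26 letters plus the space bar
expected_attempts = keys ** phrase_len
seconds = expected_attempts / 10   # at an assumed 10 attempts per second
years = seconds / (3600 * 24 * 365.25)
print(f"{years:.2e} years")        # ~1.8e+17 years

[That is about ten million times the age of the universe, for a mere 18 characters; the 1-in-10^130 odds discussed earlier are worse by roughly a hundred further orders of magnitude.]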
There’s a couplet by the famous Turkish Sufi poet, Niyazi Misri, that expresses all this in a nutshell:

Nothing is more apparent than God
He is hidden only to the eyeless.

3 comments on “Science, Mathematics, And Sufism”

1. Dear Imran Khan,

You have asked: “How do you find the two related, I mean physics to Sufism?”

The two are related through quantum mechanics. Not through its mathematics, but through the interpretation of that mathematics. Of course there have been various interpretations of QM, but one thing that is not in doubt is that QM is “holistic.” In the words of physicist David Bohm, it treats the world as an “undivided whole.” In the interview, it is said that it treats its scope of investigation as a “system.” A collection of fifty atoms or particles is not treated as some kind of sum of fifty separate atoms or particles, but as a single, indivisible system. For this reason, it is difficult to understand, because pictorial representation is not possible. In fact, the observer/subject and observed/object themselves constitute a single whole.

Now Sufism, too, is holistic. In the Koran it says: “Who kills one innocent person (is like one who) has killed all humankind” (5:32). It treats all humanity as a single entity. This is a holistic worldview. And it has been articulated by the famous Sufi Ibn Arabi in particular. Sometimes he sounds as if he is talking about quantum physics. Though not widely known, Sufism’s and Ibn Arabi’s affinity with quantum physics has been noted by various researchers. Google “Ibn Arabi quantum physics” and you will find various examples of this.

NOTE: Modern quantum field theory conceives of physical phenomena as fluctuations of the underlying quantum vacuum. A 2015 Physics Today article described the quantum vacuum as “a turbulent sea, roiling with waves…” This has its exact counterpart in Sufism, which hundreds of years ago conceived of phenomena as waves on the surface of a sea. “The best credo of all times is that of modern physics — that everything is an unbroken, undivided wholeness.” —Pir Vilayat Inayat Khan, echoing Ibn Arabi’s famous doctrine of the Unity of Being (wahdat al-wujûd).

2. Rukhsan ul Haq said:

Dear Henry Bayman, I read your articles with a lot of interest, and they always give joyful insights into the wisdom of Islam. What I like about them is the modern language, based mostly on physics. I am a theoretical physicist myself, so they appeal to me in that vein as well… With lots of love, Bangalore, India

3. Rukhsan ul Haq said:

I have the privilege to have known you through the articles and books available from your website. I feel blessed to have the opportunity to cherish the wisdom you share with us, and which you have inherited directly from a Sufi master in Turkey. I am a theoretical physicist by profession and a Sufi at heart. So there is no wonder that your articles and writings resonate with me, because I see that you present Sufi wisdom in a scientific idiom… I will always behold you with love in my heart. With best wishes and regards, Rukhsan ul Haq, Bangalore, India
Monday, July 08, 2019

The Trouble with Many Worlds

That paper is entitled "The Structure of the Multiverse" and its abstract is delightfully succinct. I quote it here in its entirety:

The structure of the multiverse is determined by information flow.

Those of you who have been following my quantum adventures know that I am a big fan of information theory, so I was well primed to resonate with Deutsch's theory. And I did resonate with it (and still do). Deutsch's argument was compelling (and still is). Nonetheless, I never wrote a followup, for two reasons. First, something was still bothering me about the argument, though I couldn't really put my finger on it. Yes, Deutsch's argument was compelling, but on the other hand, so was my argument (at least to me). The difference seemed to me (as many things in QM interpretations do) a matter of taste, so it seemed pointless to elaborate. And second, I didn't think anyone reading this blog would really care. So I tabled it.

But last May the comment thread in the original post was awakened from its slumber by a fellow named Elliot Temple. The subsequent exchange led me to this paper, of which I was previously unaware. Here's the abstract, again, in its entirety:

The "special probabilistic axiom" to which Deutsch refers is called the Born rule (named after Max Born). The "remaining, non-probabilistic axioms of quantum theory" comprise mainly the Schrödinger equation. (To condense things a bit I'll occasionally refer to these as the BR and the SE.)

The process of applying quantum mechanics to real-world situations consists of two steps: first you solve the SE. The result is something called a "wave function". Then you apply the BR to the wave function, and what pops out is a set of probabilities for the various possible results of the experiment you're doing. Following this procedure yields astonishingly accurate results: no experiment has ever been done whose outcome is at odds with its predictions. The details don't matter. What matters is: there's this procedure. It yields incredibly accurate predictions. It consists of two parts. One part is deterministic, the other part isn't.
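To make the two-step procedure concrete, here is a minimal sketch in Python (my own illustration, not from the post; the Hamiltonian is an invented toy example): step one evolves a two-state system deterministically with the Schrödinger equation, and step two applies the Born rule to the resulting wave function.

import numpy as np
from scipy.linalg import expm

# Step 1: solve the Schrodinger equation for a toy two-level system.
# H is an invented example Hamiltonian; hbar is set to 1 for simplicity.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                   # couples the two basis states
psi0 = np.array([1.0, 0.0], dtype=complex)   # start in state |0>
t = 0.3
psi_t = expm(-1j * H * t) @ psi0             # deterministic evolution: the "SE" part

# Step 2: apply the Born rule to the wave function.
# Probability of each outcome = squared magnitude of its amplitude.
probs = np.abs(psi_t) ** 2                   # the "BR" part: probabilities, not certainties
print(probs, probs.sum())                    # e.g. [0.9127 0.0873], sums to 1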
This naturally raises the question of why this procedure works as well as it does. In particular, why does the procedure have two parts? And why does it only yield probabilities? Answering these questions is the business of "interpretations" of quantum mechanics. Wikipedia lists almost twenty of these. The fact that after nearly 100 years no consensus has emerged as to which one is correct gives you some idea of the thorniness of this problem.

So the paper that Elliot referred me to was potentially a Big Deal. It is hard to overstate the magnitude of the breakthrough this would be. It would show that there are not in fact two disparate parts to the theory, there is only one: the SE. Such a unification would be of the same order of magnitude as the discovery of relativity. It would be headline news. David Deutsch would be a Nobel Laureate, on a par with Newton and Einstein. But the fact that there is still an active debate over the issue shows that Deutsch's claim has not been universally accepted. So there would seem to be only two possibilities: either Deutsch is wrong, or he's right and the rest of the physics community has failed to recognize it.

Normally when a claim of a major result like this fails to be recognized by the community, it's because the claim is wrong. In fact, more than 99% of the time it's because the claimant is a crackpot. But Deutsch is no crackpot. He's a foundational figure in quantum computing. He discovered the first quantum algorithm. Even if he got something wrong, he very likely got it wrong in a very interesting way.

So I decided to do a deep dive into this. It led me down quite the little rabbit hole. There are a number of published critiques of Deutsch's work, and counter-critiques critiquing the critiques, and counter-counter-critiques. They're all quite technical. It took me a couple of months of steady effort to sort it all out, and that only with the kind help of a couple of people who understand all this stuff much better than I do. (Many thanks to Tim Maudlin, David Wallace, and especially the patient, knowledgeable, and splendidly-pseudonymed /u/ididnoteatyourcat on Reddit.)

In the rest of this post I'm going to try to describe the result of going down that rabbit hole in a way that is accessible to what I think is the majority of the audience of this blog. The TL;DR is that Deutsch's argument depends on at least one assumption that is open to legitimate doubt. Figuring out what that assumption is isn't easy, and whether or not the assumption is actually untrue is arguable. That's the reason Deutsch hasn't won his Nobel yet.

I have to start with a review of the rhetoric of the many-worlds interpretation of quantum mechanics (MWI). The rhetoric says that when you do a quantum measurement it is simply not the case that it has a single outcome. Instead, what happens is that the universe "splits" into multiple parts when a measurement is performed, and so all of the possible outcomes of an experiment actually happen as a matter of physical fact. The reason you only perceive a single outcome is that you yourself split into multiple copies. Each copy of you perceives a single outcome, but the sum total of all the "you's" that have been created collectively perceive all the possible outcomes.

I used the word "rhetoric" above because, as we shall see, there is a disconnect between what I have just written and the math. To be fair to Deutsch, his rhetoric is different from what I have written above, and it more closely matches the math. On Deutsch's view the universe "peels apart" (that's my terminology) in "waves of differentiation" (that is Deutsch's terminology) rather than "splitting" (that is everyone else's terminology), but this is a detail. The point is that at the end of a process that involves you doing a quantum measurement with N possible outcomes, there are, again in point of actual physical fact, N "copies" of you (Deutsch uses the word "doppelgänger").

Again, to be fair to Deutsch, he acknowledges that this is not quite correct:

Universes, histories, particles and their instances are not referred to by quantum theory at all – any more than are planets, and human beings and their lives and loves. Those are all approximate, emergent phenomena in the multiverse. [The Beginning of Infinity, p292, emphasis added.]

All of the difficulty, it will turn out, hinges on the fidelity of the approximation. But let us ignore this for now and look at Deutsch's argument.
Deutsch attempts to capture the idea of probability in a deterministic theory using game theory, that is, by looking at how a rational agent should act, applying a few reasonable-looking assumptions about the utility function, and showing that a rational agent operating under the MWI would act exactly as if they were using the Born rule. The argument is long and technical, but it can be summarized very simply.

[Note to nit-pickers: this simplified argument is in fact a straw man, because it is based on the assumption that branch counting is a legitimate rational strategy, which is actually false on the Deutsch-Wallace view. But since the conclusion I am going to reach is the same as Deutsch's, I consider this legitimate rhetorical and literary license, because the target audience here is mainly non-technical.]

For simplicity, let's consider only the case of doing an experiment with two possible outcomes (let's call them A and B). The game-theoretical setup is this: you are going to place a bet on either A or B and then do the experiment. If the outcome matches your choice, you win $1, otherwise you lose $1.

If the experiment is set up in such a way that the quantum-mechanical odds of each outcome are the same (i.e. 50-50) then there is no conflict between the orthodox Born-rule-based approach and the MWI: in both cases, the agent has no reason to prefer betting on one outcome over the other. The only difference is the rationale that each agent would offer: one would say, "The Born rule says the odds are even, so I don't care which I choose," and the other would say, "I am going to split into two, and one of me is going to experience one outcome (and win $1) and the other of me is going to experience the other outcome (and lose $1), and that will be the situation no matter whether I choose A or B, so I don't care which I choose."

[Aside: Deutsch goes through a much more complicated argument to prove this result, because the simple argument above is based on an assumption that he rejects. In fact, he goes on from there to put in a great deal more effort to extend this result to an experiment with N possible outcomes, all of which have equal probabilities under the Born rule. He has to do this because my argument is based on a tacit assumption that Deutsch rejects. We'll get to that. My goal at this point is not to reproduce Deutsch's reasoning, only to convince you that this intermediate result is plausibly true.]

Now consider a case where the odds are not even. Let's arrange for the probabilities to be 2:1 in favor of A (i.e. A happens 2/3 of the time, B happens 1/3 of the time, according to the Born rule). Now we have a disconnect between the two world-views. The Bornian would obviously choose A. But what possible reason could the many-worlder have for doing the same? After all, the situation is unchanged from before: again the many-worlder is going to split into two (because there are still only two possible outcomes). What possible basis could they have for preferring one outcome over the other that doesn't assume the Born rule and hence beg the question?
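To make the disconnect vivid, here is a tiny back-of-the-envelope sketch (mine, not the post's) comparing the two decision rules on the 2:1 experiment. The Born-rule agent weights each outcome by its probability; the naive branch counter weights each branch equally, so both bets look the same to him.

# The 2:1 experiment: Born weights 2/3 for A, 1/3 for B.
# Payoff: +1 if your bet matches the outcome, -1 otherwise.
born = {'A': 2/3, 'B': 1/3}

def expected_payoff(bet, weights):
    # Sum the payoff over branches, weighted by each branch's weight.
    return sum(w * (1 if outcome == bet else -1)
               for outcome, w in weights.items())

# Born-rule agent: betting on A is clearly better.
print(expected_payoff('A', born))      # +1/3
print(expected_payoff('B', born))      # -1/3

# Naive branch counter: one branch each way, so both bets look equal.
counting = {'A': 1/2, 'B': 1/2}
print(expected_payoff('A', counting))  # 0
print(expected_payoff('B', counting))  # 0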
Deutsch's argument is based on an assumption called branching indifference. Deutsch himself did not make this explicit in his original paper; it was made explicit by David Wallace in a follow-up paper. Branching indifference says that a rational agent doesn't care about branching per se. In other words, if an agent does a quantum experiment that doesn't have a wager associated with it, then the agent has no reason to care whether or not the experiment is performed.

The reasoning then proceeds as follows: suppose that the many-worlder who ends up on the A branch does a follow-up experiment with two outcomes and even odds, but without placing a bet. Now there are three copies of him, two of which have won $1 and one of which has lost $1. But (and this is the crucial point) all of these copies are now on branches that have equal probabilities: the A branch, of weight 2/3, has subdivided into two branches of weight 1/3 each, matching the weight-1/3 B branch. Because of branching indifference, this situation is effectively equivalent to one where there was a single experiment with three outcomes, each with equal probability, but two of which result in winning $1, and where the agent had the opportunity to place the bet on both winning branches.

So that sounds like a reasonable argument. In fact, it is a correct argument, i.e. the conclusions really do follow from the premises.

But are the premises reasonable? Well, many many-worlders think so. But I don't. In particular, I cast a very jaundiced eye on branching indifference. There are two reasons for this. But first, let's look at Wallace's argument for why branching indifference is reasonable:

Solution continuity and branching indifference — and indeed problem continuity — can be understood in the same way, in terms of the limitations of any physically realisable agent. Any discontinuous preference order would require an agent to make arbitrarily precise distinctions between different acts, something which is not physically possible. Any preference order which could not be extended to allow for arbitrarily small changes in the acts being considered would have the same requirement. And a preference order which is not indifferent to branching per se would in practice be impossible to act on: branching is uncontrollable and ever-present in an Everettian universe.

If that didn't make sense to you, don't worry, I'll explain it. But first I want to take a brief diversion. Trust me, I'll come back to this.

Remember how I said earlier that my simplified argument for Deutsch's conclusion was based on a premise that Deutsch would reject? That premise is called branch counting. It is the idea that the number of copies of me that exist matters. This seems like an odd premise to dispute. How could it possibly not matter if there is one of me winning $1 or a million of me each winning $1? The latter situation might not have a utility that is a million times higher than the former, but if I'm supposed to care about "copies of me" at all, how can it not matter how many there are?

Here is Wallace's answer:

Why it is irrational: The first thing to note about branch counting is that it can't actually be motivated or even defined given the structure of quantum mechanics. There is no such thing as "branch count": as I noted earlier, the branching structure emergent from unitary quantum mechanics does not provide us with a well-defined notion of how many branches there are.

Wait, what??? There is no "well-defined notion of how many branches there are"? No, there isn't. Wallace reiterates this over and over:

...the precise fineness of the grain of the decomposition is underspecified

There is no "real" branching structure beyond a certain fineness of grain...
...agents branch all the time (trillions of times per second at least, though really any count is arbitrary)

...[in] the actual physics there is no such thing as a well-defined branch number

Remember how earlier I told you that there was a disconnect between the rhetoric and the math? That the idea of "splitting" or "peeling apart" or whatever you want to call it was an approximation? Well, this is where the rubber meets the road on that approximation. Branching indifference is necessary because branching is not a well-defined concept.

So what about the rhetoric of MWI, that when you do an experiment with N possible outcomes you split/peel-apart/whatever-you-want-to-call-it into N copies of yourself? That is an approximation to the truth, but like classical reality itself, it is not the truth. The actual truth is much more complex and subtle, and it hinges on what the word "you" means.

If by "you" you mean your body, which is to say all the atoms that make up your arms and legs and eyes and brain etc., then it's true that there is no such thing as a well-defined branch count. This is because every atom — indeed, every electron and every other sub-atomic particle — in your body is constantly "splitting" by virtue of its interactions with other nearby particles, including photons that are emitted by the sun and your smart phone and all the other objects that surround you. These "splits" propagate out at the speed of light and create what Deutsch calls "waves of differentiation", what I call the "peeling apart" of different "worlds". (If you are a regular reader you will have heard me refer to this phenomenon as creating "large systems of mutually entangled particles". Same thing.) This process is a continuous one. There is never a well-defined "point in time" where the entire universe splits into two, and no point in time where you (meaning your body) split into two. There is a constant and continuous process of "peeling apart". Actually many, many (many!) peelings-apart, all of which are happening continuously. To call it mind-boggling would be quite the understatement.

On the other hand, if by "you" you mean "the entity that has subjective experiences and makes decisions based on those experiences", then things are much less clear. I don't know about you, but my subjective experience is that there is exactly one of me at all times. I consider this aspect of my subjective experience to be an essential component of what it means to be me. I might even go so far as to say that my subjective experience of being a single unified whole defines what it is to be "me". So the only way that there could be a "copy of me" is if there is another entity that has a subjective experience that is bound to the same past as my own, but whose present subjective experience is somehow different from my own, e.g. my experiment came out A and theirs came out B. An entity whose subjective experience is indistinguishable from my own isn't a copy of me, it's me.

The mathematical account of universes "peeling apart" has nothing to say about when the peeling process has progressed far enough to be considered a fully-fledged universe in its own right, and so it has nothing to say about when I have "peeled apart" sufficiently to be considered a copy. That is why branch count is not a coherent concept.
And yet, if I am going to apply the notion of branching to myself (which is to say, to the entity having the subjective experience of being a coherent and unified whole) then branch count must be a coherent concept. It might not be possible to know the branch count, but at any point in time, whatever underlying physical processes are really going on, it has to either qualify as me branching or not. There is no middle ground.

So we are faced with this stark choice: we can either believe the math, or we can believe our subjective experiences, but we can't do both, at least not at the same time. We can take a "God's-eye view" and look at the universal wave function, or we can take a "mortal's-eye view" and see our unified subjective experience as real. But we can't do both simultaneously. It's like a Necker cube. You can see it one way or the other, but not both at the same time.

Interestingly, this is all predicted by the math! In fact, the math tells us why there is this dichotomy. Subjective experience is necessarily classical because it requires copying information. In order to be conscious, you have to be conscious of something. In order to make decisions, you have to obtain information about your environment and take actions that affect your environment. All of these things require copying information into and out of your brain. But quantum information cannot be copied. Only classical information can be copied. And the only way to create copyable classical information out of a quantum system is to ignore part of the quantum system. Classical behavior emerges from quantum systems (mathematically) when you trace over parts of the system. Specifically, it emerges when you consider a subset of an entangled system in isolation from the rest of the system. When you do that, the mathematical description of the system switches from being a pure state to being a mixed state. Nothing physical has changed. It's purely a question of the point of view you choose to take. You can either look at the whole system (in which case you see quantum behavior) or you can look at part of the system (in which case you see classical behavior) but you can't do both at the same time.
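Here is a small numerical sketch of that pure-to-mixed transition (my illustration, not the post's), using a maximally entangled pair of qubits. The joint state is pure, but tracing out the second qubit leaves the first in the maximally mixed state, i.e. classical coin-flip statistics.

import numpy as np

# A maximally entangled two-qubit (Bell) state: (|00> + |11>) / sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())    # density matrix of the whole system

# Purity tr(rho^2) equals 1 for a pure state.
print(np.trace(rho @ rho).real)      # 1.0 -- the whole is pure

# Now "ignore" qubit 2: trace it out to get qubit 1's reduced state.
rho4 = rho.reshape(2, 2, 2, 2)       # indices: (q1, q2, q1', q2')
rho1 = np.einsum('ikjk->ij', rho4)   # partial trace over qubit 2

print(rho1.real)                     # [[0.5, 0], [0, 0.5]] -- maximally mixed
print(np.trace(rho1 @ rho1).real)    # 0.5 -- purity < 1: a mixed state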
As a practical matter, in our day-to-day lives we have no choice but to "look" only at "part" of the system, because "the system" is the entire universe. (In fact, it's an interesting puzzle how we can observe quantum behavior at all. Every photon has to be emitted by, and hence be entangled with, something. So why does the two-slit experiment work?) We can take a "God's-eye view" only in the abstract. We can never actually know the true state of the universe. And, in fact, neither can God.

Classical reality is what you get when you slice-and-dice the wave function in a particular way. It turns out that there is more than one way to do the slicing-and-dicing, and so if you take a God's-eye view you get more than one classical universe. An arbitrary number, in fact, because the slicing-and-dicing is somewhat arbitrary. (It is only "somewhat" arbitrary because there are only certain ways to do the slicing-and-dicing that yield coherent classical universes. But even with that constraint there are an infinite number of possibilities, hence "no well-defined branch count".) But the only way you can be you, the only way to become aware of your own existence, indeed the only way to become aware of anything, is to descend from Olympus, ignore parts of the wave function, and become classical.

That leaves open the question of which parts to ignore. To me, the answer is obvious: I ignore all of it except the parts that measurably affect the "branch" that "I" am on. To me, that is the only possible rational choice.
Bell's theorem
From Wikipedia, the free encyclopedia

Bell's theorem is a "no-go theorem": an inequality that addressed the concerns raised in the EPR paradox of Einstein, Podolsky and Rosen about the incompleteness of quantum mechanics. EPR argued that the superpositions of the quantum-mechanical Schrödinger equation result in entanglement, and took this as a sign that the theory is incomplete. John Stewart Bell was intrigued by this argument and sympathetic to hidden-variable theories, and he created his inequality in the course of disputing von Neumann's proof that a hidden-variable theory could not exist. However, he discovered something new by rephrasing the problem: either quantum mechanics is correct and non-local (entanglement exists), or quantum mechanics is incorrect because entanglement does not exist. Contrary to popular opinion, Bell did not prove that hidden-variable theories cannot exist; he proved that they must satisfy certain constraints, in particular that entanglement (non-locality) is necessary.[1][2] Such non-local hidden-variable theories are at variance with the Copenhagen interpretation, in which Bohr famously stated, "There is no quantum world,"[3] and in which the measurement instrument is differentiated from the quantum effects being observed. This has been called the measurement problem and the observer-effect problem. In its simplest form, Bell's theorem rules out local hidden variables as a viable explanation of quantum mechanics[4] (though it still leaves the door open for non-local alternatives, such as De Broglie–Bohm theory, the many-worlds interpretation, and Ghirardi–Rimini–Weber theory).

[Figure: Example of a simple Bell-type inequality and its violation in quantum mechanics. Top: assuming any probability distribution among the 8 possibilities for the values of 3 binary variables A, B, C, we always get the inequality. Bottom: example of its violation using the quantum Born rule: probability is the normalized square of the amplitude.]

Historical background

In the early 1930s, the philosophical implications of the current interpretations of quantum theory troubled many prominent physicists of the day, including Albert Einstein. In a well-known 1935 paper, Boris Podolsky and co-authors Einstein and Nathan Rosen (collectively "EPR") sought to demonstrate by the EPR paradox that quantum mechanics was incomplete. This provided hope that a more complete (and less troubling) theory might one day be discovered. But that conclusion rested on the seemingly reasonable assumptions of locality and realism (together called "local realism" or "local hidden variables", often interchangeably). In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed. These assumptions were hotly debated in the physics community, notably between Einstein and Niels Bohr.

In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox",[6][7] physicist John Stewart Bell presented an analogy (based on spin measurements on pairs of entangled electrons) to EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting here should not affect the outcome of a measurement there (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of quantum mechanics.
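The inequality in that figure caption can be checked by brute force. Assuming it is the standard three-variable form $P(A=B) + P(A=C) + P(B=C) \ge 1$ (my reading of the caption, since the figure itself is not reproduced here), the following sketch enumerates all 8 assignments: among three binary variables at least two must agree, so every deterministic row satisfies at least one equality, and hence so does any probability mixture of rows.

from itertools import product

# Enumerate all 8 joint assignments of three binary variables A, B, C.
# For each, count how many of the events {A=B}, {A=C}, {B=C} occur.
for a, b, c in product([0, 1], repeat=3):
    agreements = (a == b) + (a == c) + (b == c)
    assert agreements >= 1      # at least two of three binaries must agree
    print((a, b, c), agreements)

# Since every deterministic assignment has at least one agreement, any
# probability distribution over the 8 rows gives
#     P(A=B) + P(A=C) + P(B=C) >= 1.
# Quantum spin measurements along axes at 120 degrees can give
# 1/4 + 1/4 + 1/4 = 3/4, violating the bound.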
In experimental tests following Bell's example, now using quantum entanglement of photons instead of electrons, John Clauser and Stuart Freedman (1972) and Alain Aspect et al. (1981) demonstrated that the predictions of quantum mechanics are correct in this regard, although relying on additional unverifiable assumptions that open loopholes for local realism. In October 2015, Hensen and co-workers[8] reported that they performed a loophole-free Bell test which might force one to reject at least one of the principles of locality, realism, or freedom of choice (rejecting the last could lead to alternative superdeterministic theories).[9] Two of these logical possibilities, non-locality and non-realism, correspond to well-developed interpretations of quantum mechanics and have many supporters; this is not the case for the third logical possibility, non-freedom. Conclusive experimental evidence of the violation of Bell's inequality would drastically reduce the class of acceptable deterministic theories but would not falsify absolute determinism, which was described by Bell himself as "not just inanimate nature running on behind-the-scenes clockwork, but with our behaviour, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined". However, Bell himself considered absolute determinism an implausible solution. In July 2019, physicists reported, for the first time, capturing an image of a strong form of quantum entanglement, called Bell entanglement.[10][11]

Bell's theorem states that any physical theory that incorporates local realism cannot reproduce all the predictions of quantum mechanics. For a hidden-variable theory, if Bell's conditions are correct, the results that agree with quantum mechanics appear to indicate superluminal (faster-than-light) effects, in contradiction to the principle of locality. The theorem is usually proved by consideration of a quantum system of two entangled qubits, with the original tests, as stated above, done on photons. The most common examples concern systems of particles that are entangled in spin or polarization. Quantum mechanics allows predictions of correlations that would be observed if these two particles have their spin or polarization measured in different directions. Bell showed that if a local hidden-variable theory holds, then these correlations would have to satisfy certain constraints, called Bell inequalities.

Following the argument in the Einstein–Podolsky–Rosen (EPR) paradox paper (but using the example of spin, as in David Bohm's version of the EPR argument[6][12]), Bell considered a Gedankenexperiment or thought experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions."[6] The two particles travel away from each other to two distant locations, at which measurements of spin are performed, along axes that are independently chosen. Each measurement yields a result of either spin-up (+) or spin-down (−); that is, spin in the positive or negative direction of the chosen axis. When the spins of these entangled particles are measured along anti-parallel directions (i.e., facing in precisely opposite directions, perhaps offset by some arbitrary distance), the set of all results is perfectly correlated.
On the other hand, if measurements are performed along parallel directions (i.e., facing in precisely the same direction, perhaps offset by some arbitrary distance) they always yield opposite results, and the set of measurements shows perfect anti-correlation. This is in accord with the above stated probabilities of measuring the same result in these two cases. Finally, measurement at perpendicular directions has a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the table below. Columns should be read as examples of pairs of values that could be recorded by Alice and Bob, with time increasing to the right.

Anti-parallel       1    2    3    4    5   ...   n
Alice, 0°           +    −    +    −    +   ...
Bob, 180°           +    −    +    −    +   ...
Correlation: ( +1  +1  +1  +1  +1  ... ) / n = +1    (100% identical)

Parallel            1    2    3    4    5   ...   n
Alice, 0°           +    −    +    −    +   ...
Bob, 0° or 360°     −    +    −    +    −   ...
Correlation: ( −1  −1  −1  −1  −1  ... ) / n = −1    (100% opposite)

Orthogonal          1    2    3    4    5   ...   n
Alice, 0°           +    +    −    −    +   ...
Bob, 90° or 270°    −    +    −    +    +   ...
Correlation: ( −1  +1  +1  −1  +1  ... ) / n = 0    (50% identical, 50% opposite)

[Figure: The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at 0° and perfect correlation at 180°. Many other possibilities exist for the classical correlation subject to these side conditions, but all are characterized by sharp peaks (and valleys) at 0°, 180°, and 360°, and none has more extreme values (±0.5) at 45°, 135°, 225°, and 315°. These values are marked by stars in the graph, and are the values measured in a standard Bell-CHSH type experiment: QM allows ±1/√2 = ±0.7071…, while local realism predicts ±0.5 or less.]

With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables would be consistent with a linear dependence of the correlation on the angle but, according to Bell's inequality (see below), could not agree with the dependence predicted by quantum mechanics, namely, that the correlation is the negative cosine of the angle. Experimental results match the curve predicted by quantum mechanics.[4]

Over the years, Bell's theorem has undergone a wide variety of experimental tests. However, various common deficiencies in the testing of the theorem have been identified, including the detection loophole[13] and the communication loophole.[13] Over the years experiments have been gradually improved to better address these loopholes. In 2015, the first experiment to simultaneously address all of the loopholes was performed.[8]

To date, Bell's theorem is generally regarded as supported by a substantial body of evidence and there are few supporters of local hidden variables, though the theorem is continually the subject of study, criticism, and refinement, and the popularity of non-local alternatives such as the many-worlds interpretation has been on the rise.[14][15][16][17]

Bell's theorem, derived in his seminal 1964 paper titled "On the Einstein Podolsky Rosen paradox",[6] has been called, on the assumption that the theory is correct, "the most profound in science".[18] Perhaps of equal importance is Bell's deliberate effort to encourage and bring legitimacy to work on the completeness issues, which had fallen into disrepute.[19] Later in his life, Bell expressed his hope that such work would "continue to inspire those who suspect that what is proved by the impossibility proofs is lack of imagination."[19]
N. David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".[20] Henry Stapp declared: "Bell's theorem is the most profound discovery of science."[21]

In two respects Bell's 1964 paper was a step forward compared to the EPR paper: firstly, it considered more hidden variables than merely the element of physical reality in the EPR paper; and secondly, Bell's inequality was, in part, experimentally testable, thus raising the possibility of testing the local realism hypothesis. Limitations on such tests to date are noted below. Whereas Bell's paper deals only with deterministic hidden-variable theories, Bell's theorem was later generalized to stochastic theories[23] as well, and it was also realised[24] that the theorem is not so much about hidden variables as about the outcomes of measurements that could have been taken instead of the one actually taken. The existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.

Local realism

The concept of local realism is formalized in order to state, and prove, Bell's theorem and its generalizations.

Bell inequalities

Details on the calculation of $C_q(\mathbf{a}, \mathbf{b})$: the two-particle spin space is the tensor product of the two-dimensional spin Hilbert spaces of the individual particles. Each individual space is an irreducible representation space of the rotation group SO(3). The product space decomposes as a direct sum of irreducible representations with definite total spins 0 and 1, of dimensions 1 and 3 respectively (full details may be found in the Clebsch–Gordan decomposition). The total-spin-zero subspace is spanned by the singlet state

$|A\rangle = \frac{1}{\sqrt{2}}\left(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\right).$

Single-particle operators act on the product space through tensor products: if $\Pi$ and $\Omega$ are single-particle operators, then $\Pi^{(1)} = \Pi \otimes I$ and $\Omega^{(2)} = I \otimes \Omega$, where the superscript in parentheses indicates on which Hilbert space in the tensor product the action is intended. The singlet state has total spin 0, as may be verified by applying the operator of total spin $\mathbf{J}\cdot\mathbf{J} = (\mathbf{J}_1 + \mathbf{J}_2)\cdot(\mathbf{J}_1 + \mathbf{J}_2)$.

The expectation value of the spin-correlation operator in the singlet state can then be calculated straightforwardly: by the definition of the Pauli matrices, for measurement directions $\mathbf{a}$ and $\mathbf{b}$ one obtains

$C_q(\mathbf{a}, \mathbf{b}) = \langle A|\,(\mathbf{a}\cdot\boldsymbol{\sigma})\otimes(\mathbf{b}\cdot\boldsymbol{\sigma})\,|A\rangle = -\,\mathbf{a}\cdot\mathbf{b}.$

• Theoretically, there exist directions $\mathbf{a}$, $\mathbf{b}$ for which no hidden-variable correlation $C_h(\mathbf{a}, \mathbf{b})$ can equal $C_q(\mathbf{a}, \mathbf{b})$, whatever the particular characteristics of the hidden-variable theory, as long as it abides by the rules of local realism as defined above. That is to say, no local hidden-variable theory can make the same predictions as quantum mechanics.
• Experimentally, correlations agreeing with $C_q$ have been found (whatever the hidden-variable theory), but correlations obeying the local-realist bounds while contradicting quantum mechanics have never been found. That is to say, the predictions of quantum mechanics have never been falsified by experiment. These experiments include ones that can rule out local hidden-variable theories. But see below on possible loopholes.
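As a numerical sanity check on that last formula (my own sketch, not part of the article), the following builds the singlet state and verifies $C_q(\mathbf{a}, \mathbf{b}) = -\mathbf{a}\cdot\mathbf{b}$ for a few measurement directions.

import numpy as np

# Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(n):
    # Spin observable along the unit vector n = (nx, ny, nz).
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state (|01> - |10>) / sqrt(2) in the z-basis.
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def C_q(a, b):
    # Expectation of (a.sigma) (x) (b.sigma) in the singlet state.
    op = np.kron(spin_along(a), spin_along(b))
    return (singlet.conj() @ op @ singlet).real

for a, b in [((0, 0, 1), (0, 0, 1)),                    # same axis: -1
             ((0, 0, 1), (1, 0, 0)),                    # perpendicular: 0
             ((0, 0, 1), (np.sin(1), 0, np.cos(1)))]:   # angle 1 rad: -cos(1)
    a, b = np.array(a), np.array(b)
    print(C_q(a, b), -a @ b)   # the two columns agree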
Original Bell's inequality

The inequality that Bell derived can be written as[6]

$1 + C(\mathbf{b}, \mathbf{c}) \ge |C(\mathbf{a}, \mathbf{b}) - C(\mathbf{a}, \mathbf{c})|,$

where $C(\mathbf{a}, \mathbf{b})$ denotes the correlation of the outcomes along directions $\mathbf{a}$ and $\mathbf{b}$. This simple form has an intuitive explanation. It is equivalent to the following elementary result from probability theory: consider three (highly correlated, and possibly biased) coin-flips X, Y, and Z, with the property that X and Y give the same outcome at least 99% of the time and Y and Z give the same outcome at least 99% of the time; then X and Z must give the same outcome at least 98% of the time.

CHSH inequality

Generalizing Bell's original inequality,[6] John Clauser, Michael Horne, Abner Shimony and R. A. Holt introduced the CHSH inequality,[25] which puts classical limits on a set of four correlations in Alice and Bob's experiment, without any assumption of perfect correlations (or anti-correlations) at equal settings:

$|E(a, b) + E(a, b') + E(a', b) - E(a', b')| \le 2.$

Derivation of the classical bound

With the abbreviated notation $A = A(a, \lambda)$, $A' = A(a', \lambda)$, $B = B(b, \lambda)$, $B' = B(b', \lambda)$, each of the four quantities is $\pm 1$ and each depends on $\lambda$. It follows that for any $\lambda$, one of $B + B'$ and $B - B'$ is zero, and the other is $\pm 2$. From this it follows that

$AB + AB' + A'B - A'B' = A(B + B') + A'(B - B') = \pm 2,$

and therefore, averaging over $\lambda$,

$-2 \le E(a, b) + E(a, b') + E(a', b) - E(a', b') \le 2.$

At the heart of this derivation is a simple algebraic inequality concerning four variables $A, A', B, B'$, which take the values $\pm 1$ only:

$|AB + AB' + A'B - A'B'| \le 2.$

The realism assumption is actually somewhat idealistic, and Bell's theorem only proves non-locality with respect to variables that only exist for metaphysical reasons[citation needed]. However, before the discovery of quantum mechanics, both realism and locality were completely uncontroversial features of physical theories.

Quantum mechanical predictions violate CHSH inequalities

The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings, labeled $a$ and $a'$; these settings correspond to measurement of spin along the $z$ or the $x$ axis. Bob can choose between two detector settings, labeled $b$ and $b'$; these correspond to measurement of spin along axes rotated 135° relative to Alice's coordinate system. The spin observables are represented by the 2 × 2 self-adjoint Pauli matrices

$S_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad S_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$

which are known to have eigenvalues $\pm 1$. Consider now the singlet state

$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\right).$

According to quantum mechanics, the choice of measurements is encoded into the choice of Hermitian operators applied to this state. Take, for example,

$A = S_z, \quad A' = S_x, \qquad B = -\frac{S_z + S_x}{\sqrt{2}}, \quad B' = \frac{S_x - S_z}{\sqrt{2}},$

where $A, A'$ represent the two measurement choices of Alice and $B, B'$ the two measurement choices of Bob. To obtain the expectation value for a given pair of measurement choices, one computes the expectation of the corresponding pair of operators (for example $\langle\psi| A \otimes B |\psi\rangle$) over the shared state $|\psi\rangle$. Using $C_q(\mathbf{a}, \mathbf{b}) = -\mathbf{a}\cdot\mathbf{b}$ from above, each of $\langle A \otimes B\rangle$, $\langle A \otimes B'\rangle$ and $\langle A' \otimes B\rangle$ equals $1/\sqrt{2}$, while $\langle A' \otimes B'\rangle = -1/\sqrt{2}$. It follows that the value of the CHSH combination in this particular experimental arrangement is

$E(a, b) + E(a, b') + E(a', b) - E(a', b') = \frac{4}{\sqrt{2}} = 2\sqrt{2} > 2.$

Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism.

Note that $2\sqrt{2}$ is indeed the upper bound for quantum mechanics, called Tsirelson's bound. The operators giving this maximal value are always isomorphic to the Pauli matrices.[26]
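Both halves of the CHSH argument are easy to check by machine. This sketch (again mine, not the article's) first verifies the classical bound by enumerating all 16 sign assignments of $A, A', B, B'$, then evaluates the quantum CHSH combination for the operators given above.

import numpy as np
from itertools import product

# Classical bound: for A, A', B, B' in {-1, +1},
# AB + AB' + A'B - A'B' is always exactly +/-2.
for A, Ap, B, Bp in product([-1, 1], repeat=4):
    assert abs(A*B + A*Bp + Ap*B - Ap*Bp) == 2

# Quantum value for the settings in the text.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(Aop, Bop):
    # Expectation of Aop (x) Bop in the singlet state.
    return (singlet.conj() @ np.kron(Aop, Bop) @ singlet).real

A, Ap = sz, sx
B, Bp = -(sz + sx) / np.sqrt(2), (sx - sz) / np.sqrt(2)
S = E(A, B) + E(A, Bp) + E(Ap, B) - E(Ap, Bp)
print(S, 2 * np.sqrt(2))   # both ~2.828: the classical bound of 2 is violated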
Testing by practical experiments

[Figure: Scheme of a "two-channel" Bell test.]

When the polarization of both photons is measured in the same direction, both give the same outcome: perfect correlation. When measured at directions making an angle of 45° with one another, the outcomes are completely random (uncorrelated). Measuring at directions at 90° to one another, the two are perfectly anti-correlated. In general, when the polarizers are at an angle θ to one another, the correlation is cos(2θ). So, relative to the correlation function for the singlet state of spin-half particles, we have a positive rather than a negative cosine function, and angles are halved: the correlation is periodic with period π instead of 2π.

Bell test experiments to date overwhelmingly violate Bell's inequality.

Two classes of Bell inequalities

The fair sampling problem was faced openly in the 1970s. In early designs of their 1973 experiment, Freedman and Clauser[27] used fair sampling in the form of the Clauser–Horne–Shimony–Holt (CHSH[25]) hypothesis. However, shortly afterwards Clauser and Horne[23] made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires that we compare certain coincidence rates in two separated detectors with the singles rates of the two detectors. Nobody needed to perform the experiment, because singles rates with all detectors in the 1970s were at least ten times all the coincidence rates. So, taking into account this low detector efficiency, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates the IBI, we require detectors whose efficiency exceeds 82.8% for singlet states,[28] but which have a very low dark rate and short dead and resolving times. This is now within reach.

Practical challenges

Because, at that time, even the best detectors didn't detect a large fraction of all photons, Clauser and Horne[23] recognized that testing Bell's inequality required some extra assumptions. They introduced the No Enhancement Hypothesis (NEH). The experiment was performed by Freedman and Clauser,[27] who found that Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden-variables model.

While early experiments used atomic cascades, later experiments have used parametric down-conversion, following a suggestion by Reid and Walls,[29] giving improved generation and detection properties. As a result, recent experiments with photons no longer suffer from the detection loophole. This made the photon the first experimental system for which all main experimental loopholes were surmounted, although at first only in separate experiments. From 2015, experimentalists were able to surmount all the main experimental loopholes simultaneously; see Bell test experiments.

Metaphysical aspects

Interpretive options consistent with the theorem include:
• Non-local hidden variables
• The transactional interpretation of quantum mechanics
• The many-worlds interpretation of quantum mechanics
• Absolute determinism

Bell himself summarized one of the possible ways to address the theorem, superdeterminism, in a 1985 BBC Radio interview:

There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the 'decision' by the experimenter to carry out one set of measurements rather than another, the difficulty disappears.
There is no need for a faster-than-light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already 'knows' what that measurement, and its outcome, will be.[32]

A few advocates of deterministic models have not given up on local hidden variables. For example, Gerard 't Hooft has argued that the aforementioned superdeterminism loophole cannot be dismissed.[33][34]

There have also been repeated claims that Bell's arguments are irrelevant because they depend on hidden assumptions that, in fact, are questionable. For example, E. T. Jaynes[35] argued in 1989 that there are two hidden assumptions in Bell's theorem that limit its generality. According to him:

1. Bell interpreted the conditional probability P(X|Y) as a causal inference, i.e. Y exerted a causal influence on X in reality. This interpretation is a misunderstanding of probability theory. As Jaynes shows,[35] "one cannot even reason correctly in so simple a problem as drawing two balls from Bernoulli's Urn, if he interprets probabilities in this way."

Richard D. Gill claimed that Jaynes misunderstood Bell's analysis. Gill points out that in the same conference volume in which Jaynes argues against Bell, Jaynes confesses to being extremely impressed by a short proof by Steve Gull, presented at the same conference, that the singlet correlations could not be reproduced by a computer simulation of a local hidden-variables theory.[36] According to Jaynes (writing nearly 30 years after Bell's landmark contributions), it would probably take us another 30 years to fully appreciate Gull's stunning result.

In 2006 a flurry of activity about the implications for determinism arose with the paper The Free Will Theorem,[37] which stated that "the response of a spin 1 particle to a triple experiment is free—that is to say, is not a function of properties of that part of the universe that is earlier than this response with respect to any given inertial frame."[38] This theorem raised awareness of a tension between determinism fully governing an experiment (on the one hand) and Alice and Bob being free to choose any settings they like for their observations (on the other).[39][40] The philosopher David Hodgson supports this theorem as showing that determinism is unscientific, and that quantum mechanics allows observers (at least in some instances) the freedom to make observations of their choosing, thereby leaving the door open for free will.[41]

General remarks

The violations of Bell's inequalities, due to quantum entanglement, provide near-definitive demonstrations of something that was already strongly suspected: that quantum physics cannot be represented by any version of the classical picture of physics.[42] Some earlier elements that had seemed incompatible with classical pictures included complementarity and wavefunction collapse. The Bell violations show that no resolution of such issues can avoid the ultimate strangeness of quantum behavior.[43]

What is powerful about Bell's theorem is that it doesn't refer to any particular theory of local hidden variables. It shows that nature violates the most general assumptions behind classical pictures, not just details of some particular models. No combination of local deterministic and local random hidden variables can reproduce the phenomena predicted by quantum mechanics and repeatedly observed in experiments.[44]
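To close the circle with the "best local realist imitation" figure described earlier, here is a small simulation (my own construction, not from the article) of a simple local hidden-variable model of the kind Bell himself considered: each pair carries a shared random unit vector λ, Alice outputs the sign of a·λ and Bob outputs the opposite sign of b·λ. The model reproduces perfect anti-correlation at equal settings but yields a straight-line dependence on the angle, not the quantum −cos θ; at 45° it gives ±0.5 where quantum mechanics gives ±0.7071.

import numpy as np

rng = np.random.default_rng(0)

def lhv_correlation(theta, n=200_000):
    # Hidden variable: a random unit vector shared by each pair.
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    a = np.array([0.0, 0.0, 1.0])                       # Alice's axis
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])   # Bob's axis
    A = np.sign(lam @ a)       # Alice outputs sign(a . lam)
    B = -np.sign(lam @ b)      # Bob outputs -sign(b . lam)
    return np.mean(A * B)

for theta in [0, np.pi/4, np.pi/2, 3*np.pi/4, np.pi]:
    print(round(lhv_correlation(theta), 3),   # linear: 2*theta/pi - 1
          round(-np.cos(theta), 3))           # quantum: -cos(theta)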
References

1. Becker, Adam, What is Real? Basic Books, 2018, pp. 142–151.
2. Mermin, N. David, "Hidden Variables and the Two Theorems of John Bell", Reviews of Modern Physics 65, 803–815 (1993).
3. Quoted by Aage Petersen, Bulletin of the Atomic Scientists, Sep. 1963, Vol. 19, Issue 7, p. 12.
6. Bell, John (1964). "On the Einstein Podolsky Rosen Paradox". Physics. 1 (3): 195–200. doi:10.1103/PhysicsPhysiqueFizika.1.195.
8. Hensen, B.; Bernien, H.; Dréau, A. E.; Reiserer, A.; Kalb, N.; Blok, M. S.; Ruitenberg, J.; Vermeulen, R. F.; Schouten, R. N.; Abellán, C.; Amaya, W.; Pruneri, V.; Mitchell, M. W.; Markham, M.; Twitchen, D. J.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R. (2015). "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres". Nature. 526 (7575): 682–686. arXiv:1508.05949. doi:10.1038/nature15759. PMID 26503041.
9. Merali, Zeeya (2015-08-27). "Quantum 'spookiness' passes toughest test yet". Nature. 525 (7567): 14–15. doi:10.1038/nature.2015.18255. PMID 26333448.
10. University of Glasgow (13 July 2019). "Scientists unveil the first-ever image of quantum entanglement". Retrieved 13 July 2019.
11. Moreau, Paul-Antoine; et al. (12 July 2019). "Imaging Bell-type nonlocal behavior". Science Advances. 5 (7). doi:10.1126/sciadv.aaw2563.
12. Bohm, David (1951). Quantum Theory. Prentice-Hall.
16. Lisa Randall, Curiosity Retreat Lecture, 2014, Episode 2, "Discovery in Cosmology & Particle Physics", Q&A.
17. Becker, Adam, What is Real? Basic Books, 2018, p. 253.
18. Stapp 1975.
19. Bell, J. S. (1982). "On the impossible pilot wave". Foundations of Physics. 12 (10): 989–999. doi:10.1007/bf01889272. Reprinted in Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy. CUP, 2004, p. 160.
20. Mermin, David (April 1985). "Is the moon there when nobody looks? Reality and the quantum theory". Physics Today. 38 (4): 38–47. doi:10.1063/1.880968.
23. Clauser, John F. (1974). "Experimental consequences of objective local theories". Physical Review D. 10 (2): 526–535. doi:10.1103/PhysRevD.10.526.
24. Eberhard, P. H. (1977). "Bell's theorem without hidden variables". Nuovo Cimento B. 38 (1): 75–80. arXiv:quant-ph/0010047. doi:10.1007/BF02726212.
26. Werner, Reinhard F.; Wolf, Michael M. (2001). "Bell inequalities and entanglement". Quantum Information & Computation. 1 (3): 1–25. arXiv:quant-ph/0107093. (Sect. 5.3, "Operators for maximal violation".) Summers, Stephen J.; Werner, Reinhard F. (1987). "Bell inequalities and quantum field theory. I. General setting". Journal of Mathematical Physics. 28 (10): 2440–2447. doi:10.1063/1.527733. (Page 2442.) See also: Tsirelson, Boris (1987). "Quantum analogues of the Bell inequalities. The case of two spatially separated domains". Journal of Soviet Mathematics. 36 (4): 557–570. doi:10.1007/BF01663472. (Sect. 3, "Representation of extremal correlations".)
29. Reid, M. D.; Walls, D. F. (1986). "Violations of classical inequalities in quantum optics". Physical Review A. 34 (2): 1260–1276. doi:10.1103/PhysRevA.34.1260.
35. Jaynes, E. T. (1989). "Clearing up Mysteries — The Original Goal". Maximum Entropy and Bayesian Methods. pp. 1–27. doi:10.1007/978-94-015-7860-8_1. ISBN 978-90-481-4044-2.
36. Gill, Richard D. (2003). "Time, Finite Statistics, and Bell's Fifth Position". Proc. of "Foundations of Probability and Physics - 2", Ser. Math. Modelling in Phys., Engin., and Cogn. Sc. 5/2002: 179–206. arXiv:quant-ph/0301059.
40. Esfeld, Michael (2015). "Bell's Theorem and the Issue of Determinism and Indeterminism". Foundations of Physics. 45 (5): 471–482. arXiv:1503.00660. doi:10.1007/s10701-015-9883-8.
42. Penrose, Roger (2007). The Road to Reality. Vintage Books. p. 583. ISBN 978-0-679-77631-4.
44. Lerner, R. G.; Trigg, G. L. (1991). Encyclopaedia of Physics (2nd ed.). VHC Publishers. p. 495. ISBN 978-0-89573-752-6.

Further reading

The following are intended for general audiences.
• Goldstein, Sheldon; et al. (2011). "Bell's theorem". Scholarpedia. 6 (10): 8378. doi:10.4249/scholarpedia.8378.
Discrete & Continuous Dynamical Systems - A
August 2010, Volume 27, Issue 3

Decoration invariants for horseshoe braids
André de Carvalho and Toby Hall
2010, 27(3): 863-906. doi: 10.3934/dcds.2010.27.863

The Decoration Conjecture describes the structure of the set of braid types of Smale's horseshoe map ordered by forcing, providing information about the order in which periodic orbits can appear when a horseshoe is created. A proof of this conjecture is given for the class of so-called lone decorations, and it is explained how to calculate associated braid conjugacy invariants which provide additional information about forcing for horseshoe braids.

A nonlinear partial integro-differential equation from mathematical finance
Frederic Abergel and Remi Tachet
2010, 27(3): 907-917. doi: 10.3934/dcds.2010.27.907

Consistently fitting vanilla option surfaces is an important issue when it comes to modeling in finance. As far as local and stochastic volatility models are concerned, this problem boils down to the resolution of a nonlinear integro-differential PDE. The non-locality of this equation stems from the quotient of two integral terms and is not defined for all bounded continuous functions. In this paper, we use a fixed point argument and suitable a priori estimates to prove short-time existence of solutions for this equation.

Boundary stabilization of the wave and Schrödinger equations in exterior domains
Lassaad Aloui and Moez Khenissi
2010, 27(3): 919-934. doi: 10.3934/dcds.2010.27.919

In this paper we complete our works on the local energy decay for the evolution damping problem in exterior domains. We consider the wave and Schrödinger equations in an exterior domain with dissipative boundary condition. We study the distribution of resonances under some natural assumptions on the behavior of the geodesics in order to deduce the uniform local energy decay.

Baire category and extremely non-normal points of invariant sets of IFS's
In-Soo Baek and Lars Olsen
2010, 27(3): 935-943. doi: 10.3934/dcds.2010.27.935

We prove that if $K$ is the invariant set of an IFS in $\mathbb{R}^d$ satisfying the Strong Open Set Condition, then the set of extremely non-normal points of $K$ is a comeagre subset of $K$.

Discrete orbits in topologically transitive cylindrical transformations
Jan Kwiatkowski and Artur Siemaszko
2010, 27(3): 945-961. doi: 10.3934/dcds.2010.27.945

In this paper we provide a few recipes for constructing topologically transitive cocycles over an arbitrary odometer possessing discrete orbits. It is shown that for every odometer there exists a topologically transitive cocycle such that the set of points with discrete orbits starting from zero level has the cardinality of the continuum.
Cyclicity of unbounded semi-hyperbolic 2-saddle cycles in polynomial Liénard systems
Magdalena Caubergh, Freddy Dumortier and Stijn Luca
2010, 27(3): 963-980. doi: 10.3934/dcds.2010.27.963

The paper deals with the cyclicity of unbounded semi-hyperbolic 2-saddle cycles in polynomial Liénard systems of type $(m,n)$ with $m<2n+1$, $m$ and $n$ odd. We generalize the results in [1] (case $m=1$), providing a substantially simpler and more transparent proof than the one used in [1].

Optimal interior partial regularity for nonlinear elliptic systems
Shuhong Chen and Zhong Tan
2010, 27(3): 981-993. doi: 10.3934/dcds.2010.27.981

We consider interior regularity for weak solutions of nonlinear elliptic systems with subquadratic growth under a controllable growth condition. By the $\mathcal{A}$-harmonic approximation technique, we obtain a general criterion for a weak solution to be regular in the neighborhood of a given point. In particular, the regularity result is optimal.

A note on two approaches to the thermodynamic formalism
Vaughn Climenhaga
2010, 27(3): 995-1005. doi: 10.3934/dcds.2010.27.995

Inducing schemes provide a means of using symbolic dynamics to study equilibrium states of non-uniformly hyperbolic maps, but necessitate a solution to the liftability problem. One approach, due to Pesin and Senti, places conditions on the induced potential under which a unique equilibrium state exists among liftable measures, and then solves the liftability problem separately. Another approach, due to Bruin and Todd, places conditions on the original potential under which both problems may be solved simultaneously. These conditions include a bounded range condition, first introduced by Hofbauer and Keller. We compare these two sets of conditions and show that for many inducing schemes of interest, the conditions from the second approach are strictly stronger than the conditions from the first. We also show that the bounded range condition can be used to obtain Pesin and Senti's conditions for any inducing scheme with sufficiently slow growth of basic elements.

Microdynamics for Nash maps
William Geller, Bruce Kitchens and Michał Misiurewicz
2010, 27(3): 1007-1024. doi: 10.3934/dcds.2010.27.1007

We investigate a family of maps that arises from a model in economics and game theory. It has some features similar to renormalization and some similar to intermittency. In a one-parameter family of maps in dimension 2, when the parameter goes to 0, the maps converge to the identity. Nevertheless, after a linear rescaling of both space and time, we get maps with attracting invariant closed curves. As the parameter goes to 0, those curves converge in a strong sense to a certain circle. We call these phenomena microdynamics. The model can also be understood as a family of discrete time approximations to a Brown-von Neumann differential equation.

Well-posedness and blow-up phenomena for the interacting system of the Camassa-Holm and Degasperis-Procesi equations
Ying Fu, Changzheng Qu and Yichen Ma
2010, 27(3): 1025-1035. doi: 10.3934/dcds.2010.27.1025

In this paper, the well-posedness and blow-up phenomena for the interacting system of the Camassa-Holm and Degasperis-Procesi equations are studied. We first establish the local well-posedness of strong solutions for the system. Then the precise blow-up scenarios for the strong solutions to the system are derived.
On very weak solutions of semi-linear elliptic equations in the framework of weighted spaces with respect to the distance to the boundary
Jesús Ildefonso Díaz and Jean Michel Rakotoson
2010, 27(3): 1037-1058. doi: 10.3934/dcds.2010.27.1037

We prove the existence of an appropriate function (very weak solution) $u$ in the Lorentz space $L^{N',\infty}(\Omega)$, $N'=\frac{N}{N-1}$, satisfying $Lu - Vu + g(x,u,\nabla u) = \mu$ in $\Omega$, an open bounded set of $\mathbb{R}^N$, and $u=0$ on $\partial\Omega$, in the sense that

$(u, L\varphi)_0 - (Vu, \varphi)_0 + (g(\cdot,u,\nabla u), \varphi)_0 = \mu(\varphi) \quad \forall \varphi \in C^2_c(\Omega).$

The potential $V \le \lambda < \lambda_1$ is assumed to be in the weighted Lorentz space $L^{N,1}(\Omega,\delta)$, where $\delta(x) = \mathrm{dist}(x,\partial\Omega)$, $\mu \in M^1(\Omega,\delta)$, the set of weighted Radon measures containing $L^1(\Omega,\delta)$, $L$ is an elliptic linear self-adjoint second order operator, and $\lambda_1$ is the first eigenvalue of $L$ with zero Dirichlet boundary conditions. If $\mu \in L^1(\Omega,\delta)$ we only assume that the potential $V$ is in $L^1_{\mathrm{loc}}(\Omega)$, $V \le \lambda < \lambda_1$. If $\mu \in M^1(\Omega,\delta^\alpha)$, $\alpha \in [0,1[$, then we prove that the gradient of the very weak solution, $|\nabla u|$, is in the Lorentz space $L^{\frac{N}{N-1+\alpha},\infty}(\Omega)$. We apply these results to the existence of the so-called large solutions with a right hand side datum in $L^1(\Omega,\delta)$. Finally, we prove some rearrangement comparison results.

Countable inverse limits of postcritical $\omega$-limit sets of unimodal maps
Chris Good, Robin Knight and Brian Raines
2010, 27(3): 1059-1078. doi: 10.3934/dcds.2010.27.1059

Let $f$ be a unimodal map of the interval with critical point $c$. If the orbit of $c$ is not dense then most points in $\varprojlim\{[0,1], f\}$ have neighborhoods that are homeomorphic with the product of a Cantor set and an open arc. The points without this property are called inhomogeneities, and the set $I$ of inhomogeneities is equal to $\varprojlim\{\omega(c), f|_{\omega(c)}\}$. In this paper we consider the relationship between the limit complexity of $\omega(c)$ and the limit complexity of $I$. We show that if $\omega(c)$ is more complicated than a finite collection of convergent sequences then $I$ can have arbitrarily high limit complexity. We give a complete description of the limit complexity of $I$ for any possible $\omega(c)$.

Non topologically weakly mixing interval exchanges
Hadda Hmili
2010, 27(3): 1079-1091. doi: 10.3934/dcds.2010.27.1079

In this paper, we prove a criterion for the existence of continuous non-constant eigenfunctions for interval exchange transformations which are not topologically weakly mixing. We first construct, for any $m>3$, uniquely ergodic interval exchange transformations of $\mathbb{Q}$-rank $2$ with irrational eigenvalues associated to continuous eigenfunctions which are not topologically weakly mixing; this answers a question of Ferenczi and Zamboni [5]. Moreover we construct, for any even integer $m \geq 4$, interval exchange transformations of $\mathbb{Q}$-rank $2$ with both irrational eigenvalues (associated to continuous eigenfunctions) and non-trivial rational eigenvalues (associated to piecewise continuous eigenfunctions); these examples can be chosen to be either uniquely ergodic or non-minimal.
Well-posedness and existence of standing waves for the fourth order nonlinear Schrödinger type equation
Jun-ichi Segata
2010, 27(3): 1093-1105. doi: 10.3934/dcds.2010.27.1093
We consider the fourth order nonlinear Schrödinger type equation (4NLS). The first purpose is to revisit the well-posedness theory of (4NLS). In [8], [9], [20] and [21], the time-local well-posedness of (4NLS) in $H^s(\mathbb{R})$ with $s>1/2$ was proved by using the Fourier restriction method. In this paper we give another proof of the above result using a simpler approach than the Fourier restriction method. The second purpose is to construct the exact standing wave solution to (4NLS).

Quasi-invariant measures, escape rates and the effect of the hole
Wael Bahsoun and Christopher Bose
2010, 27(3): 1107-1121. doi: 10.3934/dcds.2010.27.1107
Let $T$ be a piecewise expanding interval map and $T_H$ be an abstract perturbation of $T$ into an interval map with a hole. Given a number $\epsilon$, $0<\epsilon<1$, we compute an upper bound on the size of a hole needed for the existence of an absolutely continuous conditionally invariant measure (accim) with escape rate not greater than $-\ln(1-\epsilon)$. The two main ingredients of our approach are Ulam's method and an abstract perturbation result of Keller and Liverani.

Classes of singular $pq$-Laplacian semipositone systems
Eun Kyoung Lee, R. Shivaji and Jinglong Ye
2010, 27(3): 1123-1132. doi: 10.3934/dcds.2010.27.1123
We consider the positive solutions to classes of $pq$-Laplacian semipositone systems with Dirichlet boundary conditions. In particular, we study strongly coupled reaction terms which tend to $-\infty$ at the origin and satisfy a combined sublinear condition at $\infty$. We establish our results by using the method of sub-super solutions.

Hyperbolicity of $C^1$-stably expansive homoclinic classes
Keonhee Lee and Manseob Lee
2010, 27(3): 1133-1145. doi: 10.3934/dcds.2010.27.1133
Let $f$ be a diffeomorphism of a compact $C^\infty$ manifold, and let $p$ be a hyperbolic periodic point of $f$. In this paper we introduce the notion of $C^1$-stable expansivity for a closed $f$-invariant set, and prove that (i) the chain recurrent set $\mathcal{R}(f)$ of $f$ is $C^1$-stably expansive if and only if $f$ satisfies both Axiom A and the no-cycle condition, (ii) the homoclinic class $H_f(p)$ of $f$ associated to $p$ is $C^1$-stably expansive if and only if $H_f(p)$ is hyperbolic, and (iii) $C^1$-generically, the homoclinic class $H_f(p)$ is $C^1$-stably expansive if and only if $H_f(p)$ is $C^1$-persistently expansive.

Rotating modes in the Frenkel-Kontorova model with periodic interaction potential
Wen-Xin Qin
2010, 27(3): 1147-1158. doi: 10.3934/dcds.2010.27.1147
Employing a homotopy argument and the Leray-Schauder degree theory, we show the existence of rotating modes for the Frenkel-Kontorova model with periodic interaction potential. The solutions describing rotating modes are periodic and called rotating oscillating solutions, in which the phase of a fixed rotator increases by $2\pi$ per period, while its neighbors oscillate with small amplitudes around their equilibrium positions. We also discuss a fundamental difference between the Frenkel-Kontorova model with periodic interaction potential and that with convex interaction potential by demonstrating the nonexistence of rotating modes in the latter case.
Boundary feedback stabilization of the two dimensional Navier-Stokes equations with finite dimensional controllers
Jean-Pierre Raymond and Laetitia Thevenet
2010, 27(3): 1159-1187. doi: 10.3934/dcds.2010.27.1159
We study the boundary stabilization of the two-dimensional Navier-Stokes equations about an unstable stationary solution by controls of finite dimension in feedback form. The main novelty is that the linear feedback control law is determined by solving an optimal control problem of finite dimension. More precisely, we show that, to stabilize the Navier-Stokes equations locally, it is sufficient to look for a boundary feedback control of finite dimension able to stabilize the projection of the linearized equation onto the unstable subspace of the linearized Navier-Stokes operator. The feedback operator is obtained by solving an algebraic Riccati equation in a space of finite dimension, that is to say a matrix Riccati equation.

Quasistatic evolution for plasticity with softening: The spatially homogeneous case
Francesco Solombrino
2010, 27(3): 1189-1217. doi: 10.3934/dcds.2010.27.1189
The spatially uniform case of the problem of quasistatic evolution in small-strain associative elastoplasticity with softening is studied. Through the introduction of a viscous approximation, the problem reduces to determining the limit behaviour of the solutions of a singularly perturbed system of ODEs in a finite-dimensional Banach space. We see that the limit dynamics presents, for a generic choice of the initial data, the alternation of three possible regimes (elastic regime, slow dynamics, fast dynamics), which is determined by the sign of two scalar indicators whose explicit expression is given.

Measures of intermediate entropies for skew product diffeomorphisms
Peng Sun
2010, 27(3): 1219-1231. doi: 10.3934/dcds.2010.27.1219
In this paper we study a skew product map $F$ preserving an ergodic measure $\mu$ of positive entropy. We show that if the maps on the fibers are $C^{1+\alpha}$ diffeomorphisms with nonzero Lyapunov exponents, then $F$ has ergodic measures of arbitrary intermediate entropies. To construct these measures we find a set on which the return map is a skew product with horseshoes along fibers. We can control the average return time and show that the maximal entropy of these measures can be arbitrarily close to $h_\mu(F)$.

On the spatial asymptotics of solutions of the Toda lattice
Gerald Teschl
2010, 27(3): 1233-1239. doi: 10.3934/dcds.2010.27.1233
We investigate the spatial asymptotics of decaying solutions of the Toda hierarchy and show that the asymptotic behaviour is preserved by the time evolution. In particular, we show that the leading asymptotic term is time independent. Moreover, we establish infinite propagation speed for the Toda lattice.

Homoclinic orbits for superlinear Hamiltonian systems without Ambrosetti-Rabinowitz growth condition
Jun Wang, Junxiang Xu and Fubao Zhang
2010, 27(3): 1241-1257. doi: 10.3934/dcds.2010.27.1241
In this paper we prove the existence of homoclinic orbits for the first order non-autonomous Hamiltonian system $\dot{z}=\mathcal{J}H_{z}(t,z)$, where $H(t,z)$ depends periodically on $t$. We establish some existence results for the homoclinic orbits in weakly superlinear cases. To this purpose, we apply a new linking theorem to provide bounded Palais-Smale sequences.
[Figure: Complete acetylene (H–C≡C–H) molecular orbital set. The left column shows MOs which are occupied in the ground state, with the lowest-energy orbital at the top. The white and grey line visible in some MOs is the molecular axis passing through the nuclei. The orbital wave functions are positive in the red regions and negative in the blue. The right column shows virtual MOs which are empty in the ground state, but may be occupied in excited states.]

In chemistry, a molecular orbital (MO) is a mathematical function describing the wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region. The term orbital was introduced by Robert S. Mulliken in 1932 as an abbreviation for one-electron orbital wave function.[1] At an elementary level, it is used to describe the region of space in which the function has a significant amplitude. Molecular orbitals are usually constructed by combining atomic orbitals or hybrid orbitals from each atom of the molecule, or other molecular orbitals from groups of atoms. They can be quantitatively calculated using the Hartree–Fock or self-consistent field (SCF) methods.

A molecular orbital (MO) can be used to represent the regions in a molecule where an electron occupying that orbital is likely to be found. Molecular orbitals are obtained from the combination of atomic orbitals, which predict the location of an electron in an atom. A molecular orbital can specify the electron configuration of a molecule: the spatial distribution and energy of one (or one pair of) electron(s). Most commonly a MO is represented as a linear combination of atomic orbitals (the LCAO-MO method), especially in qualitative or very approximate usage. They are invaluable in providing a simple model of bonding in molecules, understood through molecular orbital theory. Most present-day methods in computational chemistry begin by calculating the MOs of the system.

A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. In the case of two electrons occupying the same orbital, the Pauli principle demands that they have opposite spin. Necessarily this is an approximation, and highly accurate descriptions of the molecular electronic wave function do not have orbitals (see configuration interaction).

Molecular orbitals are, in general, delocalized throughout the entire molecule. Moreover, if the molecule has symmetry elements, its nondegenerate molecular orbitals are either symmetric or antisymmetric with respect to any of these symmetries. In other words, application of a symmetry operation S (e.g., a reflection, rotation, or inversion) to molecular orbital ψ results in the molecular orbital being unchanged or reversing its mathematical sign: Sψ = ±ψ. In planar molecules, for example, molecular orbitals are either symmetric (sigma) or antisymmetric (pi) with respect to reflection in the molecular plane.
If molecules with degenerate orbital energies are also considered, a more general statement holds: molecular orbitals form bases for the irreducible representations of the molecule's symmetry group.[2] The symmetry properties of molecular orbitals mean that delocalization is an inherent feature of molecular orbital theory and make it fundamentally different from (and complementary to) valence bond theory, in which bonds are viewed as localized electron pairs, with allowance for resonance to account for delocalization.

In contrast to these symmetry-adapted canonical molecular orbitals, localized molecular orbitals can be formed by applying certain mathematical transformations to the canonical orbitals. The advantage of this approach is that the orbitals will correspond more closely to the "bonds" of a molecule as depicted by a Lewis structure. As a disadvantage, the energy levels of these localized orbitals no longer have physical meaning. (The discussion in the rest of this article will focus on canonical molecular orbitals. For further discussions on localized molecular orbitals, see: natural bond orbital and sigma-pi and equivalent-orbital models.)

Formation of molecular orbitals
Molecular orbitals arise from allowed interactions between atomic orbitals, which are allowed if the symmetries (determined from group theory) of the atomic orbitals are compatible with each other. The efficiency of atomic orbital interactions is determined from the overlap (a measure of how well two orbitals constructively interact with one another) between two atomic orbitals, which is significant if the atomic orbitals are close in energy. Finally, the number of molecular orbitals formed must be equal to the number of atomic orbitals in the atoms being combined to form the molecule.

Qualitative discussion
For an imprecise, but qualitatively useful, discussion of the molecular structure, the molecular orbitals can be obtained from the "linear combination of atomic orbitals molecular orbital method" ansatz. Here, the molecular orbitals are expressed as linear combinations of atomic orbitals.[3]

Linear combinations of atomic orbitals (LCAO)
Molecular orbitals were first introduced by Friedrich Hund[4][5] and Robert S. Mulliken[6][7] in 1927 and 1928.[8][9] The linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones.[10] His ground-breaking paper showed how to derive the electronic structure of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory is part of the start of modern quantum chemistry. Linear combinations of atomic orbitals (LCAO) can be used to estimate the molecular orbitals that are formed upon bonding between the molecule's constituent atoms. Similar to an atomic orbital, a Schrödinger equation, which describes the behavior of an electron, can be constructed for a molecular orbital as well. Linear combinations of atomic orbitals, or the sums and differences of the atomic wavefunctions, provide approximate solutions to the Hartree–Fock equations which correspond to the independent-particle approximation of the molecular Schrödinger equation.
For simple diatomic molecules, the wavefunctions obtained are represented mathematically by the equations

$\Psi_+ = c_a \psi_a + c_b \psi_b$
$\Psi_- = c_a \psi_a - c_b \psi_b$

where $\Psi_+$ and $\Psi_-$ are the molecular wavefunctions for the bonding and antibonding molecular orbitals, respectively, $\psi_a$ and $\psi_b$ are the atomic wavefunctions from atoms a and b, respectively, and $c_a$ and $c_b$ are adjustable coefficients. These coefficients can be positive or negative, depending on the energies and symmetries of the individual atomic orbitals. As the two atoms become closer together, their atomic orbitals overlap to produce areas of high electron density, and, as a consequence, molecular orbitals are formed between the two atoms. The atoms are held together by the electrostatic attraction between the positively charged nuclei and the negatively charged electrons occupying bonding molecular orbitals.[11]

Bonding, antibonding, and nonbonding MOs
When atomic orbitals interact, the resulting molecular orbital can be of three types: bonding, antibonding, or nonbonding.
Bonding MOs:
• Bonding interactions between atomic orbitals are constructive (in-phase) interactions.
• Bonding MOs are lower in energy than the atomic orbitals that combine to produce them.
Antibonding MOs:
• Antibonding interactions between atomic orbitals are destructive (out-of-phase) interactions, with a nodal plane where the wavefunction of the antibonding orbital is zero between the two interacting atoms.
• Antibonding MOs are higher in energy than the atomic orbitals that combine to produce them.
Nonbonding MOs:
• Nonbonding MOs are the result of no interaction between atomic orbitals because of lack of compatible symmetries.
• Nonbonding MOs will have the same energy as the atomic orbitals of one of the atoms in the molecule.

Sigma and pi labels for MOs
The type of interaction between atomic orbitals can be further categorized by the molecular-orbital symmetry labels σ (sigma), π (pi), δ (delta), φ (phi), γ (gamma), etc. These are the Greek letters corresponding to the atomic orbitals s, p, d, f and g respectively. The number of nodal planes containing the internuclear axis between the atoms concerned is zero for σ MOs, one for π, two for δ, three for φ and four for γ.

σ symmetry
A MO with σ symmetry results from the interaction of either two atomic s-orbitals or two atomic p_z-orbitals. An MO will have σ-symmetry if the orbital is symmetric with respect to the axis joining the two nuclear centers, the internuclear axis. This means that rotation of the MO about the internuclear axis does not result in a phase change. A σ* orbital, the sigma antibonding orbital, also maintains the same phase when rotated about the internuclear axis. The σ* orbital has a nodal plane that is between the nuclei and perpendicular to the internuclear axis.[12]

π symmetry
A MO with π symmetry results from the interaction of either two atomic p_x orbitals or p_y orbitals. An MO will have π symmetry if the orbital is asymmetric with respect to rotation about the internuclear axis. This means that rotation of the MO about the internuclear axis will result in a phase change. There is one nodal plane containing the internuclear axis, if real orbitals are considered. A π* orbital, the pi antibonding orbital, will also produce a phase change when rotated about the internuclear axis. The π* orbital also has a second nodal plane between the nuclei.[12][13][14][15]

δ symmetry
A MO with δ symmetry results from the interaction of two atomic $d_{xy}$ or $d_{x^2-y^2}$ orbitals.
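To make the bonding/antibonding splitting concrete, here is a minimal numerical sketch of the two-level LCAO problem described above. The on-site energy `alpha` and the resonance integral `beta` are illustrative placeholder values of our choosing (not values from this article), and overlap between the atomic orbitals is neglected; diagonalizing the 2x2 Hamiltonian then yields the familiar pair of levels alpha ± beta.

```python
import numpy as np

alpha = -13.6  # illustrative on-site (atomic) energy in eV -- placeholder value
beta = -3.0    # illustrative resonance integral in eV -- placeholder value

# Two-level LCAO Hamiltonian for identical atoms a and b, overlap neglected
H = np.array([[alpha, beta],
              [beta,  alpha]])

energies, coeffs = np.linalg.eigh(H)  # eigenvalues in ascending order
print(energies)   # [alpha + beta, alpha - beta]: bonding lies below antibonding
print(coeffs.T)   # rows ~ (c_a, c_b): symmetric, then antisymmetric combination
```

The lower eigenvector has c_a = c_b (the in-phase, bonding combination) and the upper has c_a = -c_b (the out-of-phase, antibonding combination), mirroring the Ψ+ and Ψ- wavefunctions above.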
Because these molecular orbitals involve low-energy d atomic orbitals, they are seen in transition-metal complexes. A δ bonding orbital has two nodal planes containing the internuclear axis, and a δ* antibonding orbital also has a third nodal plane between the nuclei.

φ symmetry
Suitably aligned f atomic orbitals overlap to form a phi molecular orbital (a phi bond). Theoretical chemists have conjectured that higher-order bonds, such as phi bonds corresponding to overlap of f atomic orbitals, are possible. As of 2005 there is only one known example of a molecule purported to contain a phi bond (a U−U bond, in the molecule U2).[16]

Gerade and ungerade symmetry
For molecules that possess a center of inversion (centrosymmetric molecules) there are additional labels of symmetry that can be applied to molecular orbitals. Centrosymmetric molecules include, for example, homonuclear diatomics; heteronuclear diatomics are non-centrosymmetric. If inversion through the center of symmetry in a molecule results in the same phases for the molecular orbital, then the MO is said to have gerade (g) symmetry, from the German word for even. If inversion through the center of symmetry in a molecule results in a phase change for the molecular orbital, then the MO is said to have ungerade (u) symmetry, from the German word for odd. For a bonding MO with σ-symmetry the orbital is σg (s' + s'' is symmetric), while for an antibonding MO with σ-symmetry the orbital is σu, because inversion of s' − s'' is antisymmetric. For a bonding MO with π-symmetry the orbital is πu, because inversion through the center of symmetry produces a sign change (the two p atomic orbitals are in phase with each other, but the two lobes have opposite signs), while an antibonding MO with π-symmetry is πg, because inversion through the center of symmetry does not produce a sign change (the two p orbitals are antisymmetric by phase).[12]

MO diagrams
The qualitative approach of MO analysis uses a molecular orbital diagram to visualize bonding interactions in a molecule. In this type of diagram, the molecular orbitals are represented by horizontal lines; the higher a line, the higher the energy of the orbital, and degenerate orbitals are placed on the same level with a space between them. Then the electrons to be placed in the molecular orbitals are slotted in one by one, keeping in mind the Pauli exclusion principle and Hund's rule of maximum multiplicity (only 2 electrons, having opposite spins, per orbital; place as many unpaired electrons on one energy level as possible before starting to pair them). For more complicated molecules, the wave mechanics approach loses utility in a qualitative understanding of bonding (although it is still necessary for a quantitative approach). Some properties:
• A basis set of orbitals includes those atomic orbitals that are available for molecular orbital interactions, which may be bonding or antibonding.
• The number of molecular orbitals is equal to the number of atomic orbitals included in the linear expansion, i.e. the basis set.
• If the molecule has some symmetry, the degenerate atomic orbitals (with the same atomic energy) are grouped in linear combinations (called symmetry-adapted atomic orbitals (SO)), which belong to the representation of the symmetry group, so the wave functions that describe the group are known as symmetry-adapted linear combinations (SALC).
• The number of molecular orbitals belonging to one group representation is equal to the number of symmetry-adapted atomic orbitals belonging to this representation.
• Within a particular representation, the symmetry-adapted atomic orbitals mix more if their atomic energy levels are closer.

The general procedure for constructing a molecular orbital diagram for a reasonably simple molecule can be summarized as follows:
1. Assign a point group to the molecule.
2. Look up the shapes of the SALCs.
3. Arrange the SALCs of each molecular fragment in increasing order of energy, first noting whether they stem from s, p, or d orbitals (and put them in the order s < p < d), and then their number of internuclear nodes.
4. Combine SALCs of the same symmetry type from the two fragments; from N SALCs, N molecular orbitals are formed.
5. Estimate the relative energies of the molecular orbitals from considerations of overlap and relative energies of the parent orbitals, and draw the levels on a molecular orbital energy level diagram (showing the origin of the orbitals).
6. Confirm, correct, and revise this qualitative order by carrying out a molecular orbital calculation using commercial software.[17]

Bonding in molecular orbitals

Orbital degeneracy
Molecular orbitals are said to be degenerate if they have the same energy. For example, in the homonuclear diatomic molecules of the first ten elements, the molecular orbitals derived from the p_x and the p_y atomic orbitals result in two degenerate bonding orbitals (of low energy) and two degenerate antibonding orbitals (of high energy).[11]

Ionic bonds
When the energy difference between the atomic orbitals of two atoms is quite large, one atom's orbitals contribute almost entirely to the bonding orbitals, and the other atom's orbitals contribute almost entirely to the antibonding orbitals. Thus, the situation is effectively that one or more electrons have been transferred from one atom to the other. This is called a (mostly) ionic bond.

Bond order
The bond order, or number of bonds, of a molecule can be determined by combining the number of electrons in bonding and antibonding molecular orbitals. A pair of electrons in a bonding orbital creates a bond, whereas a pair of electrons in an antibonding orbital negates a bond. For example, N2, with eight electrons in bonding orbitals and two electrons in antibonding orbitals, has a bond order of three, which constitutes a triple bond (see the short numerical sketch at the end of this section). Bond strength is proportional to bond order (a greater amount of bonding produces a more stable bond), and bond length is inversely proportional to it (a stronger bond is shorter). There are rare exceptions to the requirement of a molecule having a positive bond order. Although Be2 has a bond order of 0 according to MO analysis, there is experimental evidence of a highly unstable Be2 molecule having a bond length of 245 pm and a bond energy of 10 kJ/mol.[12][18]

The highest occupied molecular orbital and lowest unoccupied molecular orbital are often referred to as the HOMO and LUMO, respectively. The difference of the energies of the HOMO and LUMO is called the HOMO-LUMO gap. This notion is often a matter of confusion in the literature and should be considered with caution. Its value is usually located between the fundamental gap (the difference between ionization potential and electron affinity) and the optical gap. In addition, the HOMO-LUMO gap can be related to a bulk material band gap or transport gap, which is usually much smaller than the fundamental gap.
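As a quick illustration of the bond-order bookkeeping just described, here is a minimal sketch. The electron counts for N2 and Be2 come from the text above; the function name is our own.

```python
def bond_order(n_bonding: int, n_antibonding: int) -> float:
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2

print(bond_order(8, 2))  # N2: eight bonding, two antibonding -> 3.0 (triple bond)
print(bond_order(2, 2))  # Be2: formally 0.0, matching its marginal stability
```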
Homonuclear diatomics
Homonuclear diatomic MOs contain equal contributions from each atomic orbital in the basis set. This is shown in the homonuclear diatomic MO diagrams for H2, He2, and Li2, all of which contain symmetric orbitals.[12]

[Figure: Electron wavefunctions for the 1s orbital of a lone hydrogen atom (left and right) and the corresponding bonding (bottom) and antibonding (top) molecular orbitals of the H2 molecule. The real part of the wavefunction is the blue curve, and the imaginary part is the red curve. The red dots mark the locations of the nuclei. The electron wavefunction oscillates according to the Schrödinger wave equation, and orbitals are its standing waves. The standing wave frequency is proportional to the orbital's kinetic energy. (This plot is a one-dimensional slice through the three-dimensional system.)]

As a simple MO example, consider the electrons in a hydrogen molecule, H2 (see molecular orbital diagram), with the two atoms labelled H' and H". The lowest-energy atomic orbitals, 1s' and 1s", do not transform according to the symmetries of the molecule. However, the following symmetry-adapted atomic orbitals do:
1s' − 1s" (antisymmetric combination): negated by reflection, unchanged by other operations
1s' + 1s" (symmetric combination): unchanged by all symmetry operations

The symmetric combination (called a bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (called an antibonding orbital) is higher. Because the H2 molecule has two electrons, they can both go in the bonding orbital, making the system lower in energy (hence more stable) than two free hydrogen atoms. This is called a covalent bond. The bond order is equal to the number of bonding electrons minus the number of antibonding electrons, divided by 2. In this example, there are 2 electrons in the bonding orbital and none in the antibonding orbital; the bond order is 1, and there is a single bond between the two hydrogen atoms.

On the other hand, consider the hypothetical molecule of He2 with the atoms labeled He' and He". As with H2, the lowest-energy atomic orbitals are the 1s' and 1s", and do not transform according to the symmetries of the molecule, while the symmetry-adapted atomic orbitals do. The symmetric combination (the bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (the antibonding orbital) is higher. Unlike H2, with two valence electrons, He2 has four in its neutral ground state. Two electrons fill the lower-energy bonding orbital, σg(1s), while the remaining two fill the higher-energy antibonding orbital, σu*(1s). Thus, the resulting electron density around the molecule does not support the formation of a bond between the two atoms; without a stable bond holding the atoms together, the molecule would not be expected to exist. Another way of looking at it is that there are two bonding electrons and two antibonding electrons; therefore, the bond order is 0 and no bond exists (the molecule has one bound state supported by the Van der Waals potential).

Dilithium Li2 is formed from the overlap of the 1s and 2s atomic orbitals (the basis set) of two Li atoms. Each Li atom contributes three electrons for bonding interactions, and the six electrons fill the three MOs of lowest energy, σg(1s), σu*(1s), and σg(2s). Using the equation for bond order, it is found that dilithium has a bond order of one, a single bond.
Noble gases
Considering a hypothetical molecule of He2: since the basis set of atomic orbitals is the same as in the case of H2, we find that both the bonding and antibonding orbitals are filled, so there is no energy advantage to the pair. HeH would have a slight energy advantage, but not as much as H2 + 2 He, so the molecule is very unstable and exists only briefly before decomposing into hydrogen and helium. In general, we find that atoms such as He that have full energy shells rarely bond with other atoms. Except for short-lived Van der Waals complexes, there are very few noble gas compounds known.

Heteronuclear diatomics
While MOs for homonuclear diatomic molecules contain equal contributions from each interacting atomic orbital, MOs for heteronuclear diatomics contain different atomic orbital contributions. Orbital interactions to produce bonding or antibonding orbitals in heteronuclear diatomics occur if there is sufficient overlap between atomic orbitals, as determined by their symmetries and similarity in orbital energies. In hydrogen fluoride (HF), overlap between the H 1s and F 2s orbitals is allowed by symmetry, but the difference in energy between the two atomic orbitals prevents them from interacting to create a molecular orbital. Overlap between the H 1s and F 2p_z orbitals is also symmetry allowed, and these two atomic orbitals have a small energy separation. Thus, they interact, leading to the creation of σ and σ* MOs and a molecule with a bond order of 1. Since HF is a non-centrosymmetric molecule, the symmetry labels g and u do not apply to its molecular orbitals.[19]

Quantitative approach
To obtain quantitative values for the molecular energy levels, one needs molecular orbitals such that the configuration interaction (CI) expansion converges fast towards the full CI limit. The most common method to obtain such functions is the Hartree–Fock method, which expresses the molecular orbitals as eigenfunctions of the Fock operator. One usually solves this problem by expanding the molecular orbitals as linear combinations of Gaussian functions centered on the atomic nuclei (see linear combination of atomic orbitals and basis set (chemistry)). The equation for the coefficients of these linear combinations is a generalized eigenvalue equation known as the Roothaan equations, which are in fact a particular representation of the Hartree–Fock equation. There are a number of programs in which quantum chemical calculations of MOs can be performed, including Spartan and HyperChem.

Simple accounts often suggest that experimental molecular orbital energies can be obtained by the methods of ultraviolet photoelectron spectroscopy for valence orbitals and X-ray photoelectron spectroscopy for core orbitals. This, however, is incorrect, as these experiments measure the ionization energy, the difference in energy between the molecule and one of the ions resulting from the removal of one electron. Ionization energies are linked approximately to orbital energies by Koopmans' theorem. While the agreement between these two values can be close for some molecules, it can be very poor in other cases.

References
1. Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Physical Review. 41 (1): 49–71. Bibcode:1932PhRv...41...49M. doi:10.1103/PhysRev.41.49.
2. Cotton, F. Albert (1990). Chemical Applications of Group Theory (3rd ed.). New York: Wiley. p. 102. ISBN 0471510947. OCLC 19975337.
3. Albright, T. A.; Burdett, J. K.; Whangbo, M.-H. (2013). Orbital Interactions in Chemistry. Hoboken, N.J.: Wiley. ISBN 9780471080398.
4. Hund, F. "Zur Deutung einiger Erscheinungen in den Molekelspektren" [On the interpretation of some phenomena in molecular spectra]. Zeitschrift für Physik, vol. 36, pages 657-674 (1926).
5. Hund, F. "Zur Deutung der Molekelspektren". Zeitschrift für Physik, Part I, vol. 40, pages 742-764 (1927); Part II, vol. 42, pages 93–120 (1927); Part III, vol. 43, pages 805-826 (1927); Part IV, vol. 51, pages 759-795 (1928); Part V, vol. 63, pages 719-751 (1930).
6. Mulliken, R. S. "Electronic states. IV. Hund's theory; second positive nitrogen and Swan bands; alternate intensities". Physical Review, vol. 29, pages 637–649 (1927).
7. Mulliken, R. S. "The assignment of quantum numbers for electrons in molecules". Physical Review, vol. 32, pages 186–222 (1928).
8. Kutzelnigg, Werner. "Friedrich Hund and Chemistry" (on the occasion of Hund's 100th birthday). Angewandte Chemie International Edition, 35, 573–586 (1996).
9. Mulliken, Robert S. Nobel Lecture. Science, 157, no. 3785, 13-24.
10. Lennard-Jones, Sir John. "The electronic structure of some diatomic molecules". Transactions of the Faraday Society, vol. 25, pages 668-686 (1929).
11. Miessler, Gary L.; Tarr, Donald A. Inorganic Chemistry. Pearson Prentice Hall, 3rd ed., 2004.
12. Housecroft, Catherine E.; Sharpe, Alan G. Inorganic Chemistry. Pearson Prentice Hall, 2nd ed., 2005, pp. 29-33.
13. Atkins, Peter; De Paula, Julio. Atkins' Physical Chemistry. Oxford University Press, 8th ed., 2006.
14. Jean, Yves; Volatron, François. An Introduction to Molecular Orbitals. Oxford University Press, 1993.
15. Munowitz, Michael. Principles of Chemistry. Norton & Company, 2000, pp. 229-233.
16. Gagliardi, Laura; Roos, Björn O. (2005). "Quantum chemical calculations show that the uranium molecule U2 has a quintuple bond". Nature. 433: 848–851. Bibcode:2005Natur.433..848G. doi:10.1038/nature03249. PMID 15729337.
17. Atkins, Peter; et al. (2006). Inorganic Chemistry (4th ed.). New York: W.H. Freeman. p. 208. ISBN 978-0-7167-4878-6.
18. Bondybey, V.E. (1984). "Electronic structure and bonding of Be2". Chemical Physics Letters. 109 (5): 436–441. Bibcode:1984CPL...109..436B. doi:10.1016/0009-2614(84)80339-5.
19. Housecroft, Catherine E.; Sharpe, Alan G. Inorganic Chemistry. Pearson Prentice Hall, 2nd ed., 2005, ISBN 0-13-039913-2, pp. 41-43.
Electronic band structure

Why bands occur in materials
The electrons of a single free-standing atom occupy atomic orbitals, which form a discrete set of energy levels. If several atoms are brought together into a molecule, their atomic orbitals split, as in a coupled oscillation. This produces a number of molecular orbitals proportional to the number of atoms. When a large number of atoms (of order 10^20 or more) are brought together to form a solid, the number of orbitals becomes exceedingly large, and the difference in energy between them becomes very small, so the levels may be considered to form continuous "bands" of energy rather than the discrete energy levels of the atoms in isolation. However, some intervals of energy contain no orbitals, no matter how many atoms are aggregated, forming "band gaps".

Within an energy band, energy levels are so numerous as to be a near continuum. First, the separation between energy levels in a solid is comparable with the energy that electrons constantly exchange with phonons (atomic vibrations). Second, it is comparable with the energy uncertainty due to the Heisenberg uncertainty principle, for reasonably long intervals of time. As a result, the separation between energy levels is of no consequence. Several approaches to finding band structure are discussed below.

Basic concepts
"Metals" contain a band that is partly empty and partly filled regardless of temperature. Therefore they have very high conductivity. The lowermost, almost fully occupied band in an "insulator" or "semiconductor" is called the "valence band", by analogy with the valence electrons of individual atoms. The uppermost, almost unoccupied band is called the "conduction band", because only when electrons are excited to the conduction band can current flow in these materials. The difference between insulators and semiconductors is only that the forbidden band gap between the valence band and conduction band is larger in an insulator, so that fewer electrons are found there and the electrical conductivity is lower. Because one of the main mechanisms for electrons to be excited to the conduction band is thermal energy, the conductivity of semiconductors is strongly dependent on the temperature of the material.

A more complete view of the band structure takes into account the periodic nature of a crystal lattice, using the symmetry operations that form a space group. The Schrödinger equation is solved for the crystal and has Bloch waves as solutions:

$\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r})$

where k is called the wavevector and is related to the direction of motion of the electron in the crystal, and n is the band index, which simply numbers the energy bands. The wavevector k takes on values within the Brillouin zone (BZ) corresponding to the crystal lattice, and particular directions/points in the BZ are assigned conventional names like Γ, Δ, Λ, Σ, etc. These directions are shown for the face-centered cubic lattice geometry in Figure 2. The available energies for the electron also depend upon k, as shown for silicon in the energy band diagram of Figure 3. In this diagram the topmost energy of the valence band is labeled E_v and the bottom energy in the conduction band is labeled E_c. The top of the valence band is not directly below the bottom of the conduction band (E_v is for an electron traveling in direction Γ, E_c in direction X), so silicon is called an indirect gap material.
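The claim that N coupled atoms give N levels whose spacing collapses into a quasi-continuous band can be checked numerically. Below is a minimal sketch assuming a chain of identical sites with an illustrative nearest-neighbour coupling t; the site energy and coupling strength are our placeholder values, not quantities from the text.

```python
import numpy as np

def chain_levels(n_sites, eps=0.0, t=1.0):
    """Energy levels of n identical sites coupled to nearest neighbours."""
    H = (np.diag([eps] * n_sites)
         + np.diag([t] * (n_sites - 1), 1)
         + np.diag([t] * (n_sites - 1), -1))
    return np.linalg.eigvalsh(H)  # sorted eigenvalues

for n in (2, 10, 200):
    levels = chain_levels(n)
    width = levels.max() - levels.min()
    spacing = np.diff(levels).max()
    print(n, round(width, 3), round(spacing, 5))
# The total width saturates near 4|t| while the largest level spacing shrinks:
# the discrete levels merge into a quasi-continuous band.
```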
For an electron to be excited from the valence band to the conduction band, it needs something to give it energy E_c − E_v and a change in direction/momentum. In other semiconductors (for example GaAs) both are at Γ, and these materials are called direct gap materials (no momentum change required). Direct gap materials benefit the operation of semiconductor laser diodes. Anderson's rule is used to align band diagrams between two different semiconductors in contact.

Band structures in different types of solids
Although electronic band structures are usually associated with crystalline materials, quasi-crystalline and amorphous solids may also exhibit band structures. However, the periodic nature and symmetrical properties of crystalline materials make it much easier to examine the band structures of these materials theoretically. In addition, the well-defined symmetry axes of crystalline materials make it possible to determine the dispersion relationship between the momentum (a 3-dimensional vector quantity) and energy of a material. As a result, virtually all of the existing theoretical work on the electronic band structure of solids has focused on crystalline materials.

Density of states
While the density of energy states in a band can be very large for some materials, it may not be uniform. It approaches zero at the band boundaries and is generally highest near the middle of a band. The density of states for the free electron model in three dimensions is given by

$D(\epsilon) = \frac{V}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2} \epsilon^{1/2}$

Filling of bands
Although the number of states in all of the bands is effectively infinite, in an uncharged material the number of electrons is equal only to the number of protons in the atoms of the material. Therefore not all of the states are occupied by electrons ("filled") at any time. The likelihood of any particular state being filled at any temperature is given by Fermi-Dirac statistics. The probability is given by

$f(E) = \frac{1}{1 + e^{(E-E_F)/k_B T}}$

where
• k_B is Boltzmann's constant,
• T is the temperature,
• E_F is the Fermi energy (or "Fermi level").

The Fermi level is the energy at which the occupation probability equals one half; in an uncharged material it is fixed by the requirement that the number of electrons balances the number of protons. At T = 0, the distribution is a simple step function:

$f(E) = 1$ for $E \le E_F$, and $f(E) = 0$ for $E > E_F$.

At nonzero temperatures, the step "smooths out", so that an appreciable number of states below the Fermi level are empty, and some states above the Fermi level are filled. (A short numerical sketch of this occupation function follows at the end of this section.)

Band structure of crystals

Theory of band structures in crystals
The ansatz is the special case of electron waves in a periodic crystal lattice, using Bloch waves as treated generally in the dynamical theory of diffraction. Every crystal is a periodic structure which can be characterized by a Bravais lattice, and for each Bravais lattice we can determine the reciprocal lattice, which encapsulates the periodicity in a set of three reciprocal lattice vectors $(\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3)$. Now, any periodic potential $V(\mathbf{r})$ which shares the same periodicity as the direct lattice can be expanded out as a Fourier series whose only non-vanishing components are those associated with the reciprocal lattice vectors. So the expansion can be written as

$V(\mathbf{r}) = \sum_{\mathbf{K}} V_{\mathbf{K}}\, e^{i\mathbf{K}\cdot\mathbf{r}}$

where $\mathbf{K} = m_1\mathbf{b}_1 + m_2\mathbf{b}_2 + m_3\mathbf{b}_3$ for any set of integers $(m_1, m_2, m_3)$.
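Here is the numerical sketch of the Fermi-Dirac occupation function promised in the "Filling of bands" subsection above. The function names and the sampled energies are illustrative choices of ours; the formula is the one given in the text.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def fermi_dirac(E, E_F=0.0, T=300.0):
    """Occupation probability f(E) = 1 / (1 + exp((E - E_F) / (k_B T)))."""
    if T == 0:
        return np.where(E <= E_F, 1.0, 0.0)  # the step-function limit
    return 1.0 / (1.0 + np.exp((E - E_F) / (K_B_EV * T)))

E = np.linspace(-0.2, 0.2, 5)  # energies in eV, measured from E_F
print(fermi_dirac(E))          # smooth step, ~k_B*T (~26 meV) wide at 300 K
print(fermi_dirac(E, T=0))     # sharp step: 1 below E_F, 0 above
```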
Nearly-free electron approximation
In the nearly-free electron approximation, interactions between electrons are completely ignored. This approximation allows use of Bloch's theorem, which states that electrons in a periodic potential have wavefunctions and energies which are periodic in wavevector up to a constant phase shift between neighboring reciprocal lattice vectors. The consequences of periodicity are described mathematically by the Bloch wavefunction

$\Psi_{n,\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u_n(\mathbf{r})$

where the function $u_n(\mathbf{r})$ is periodic over the crystal lattice, that is,

$u_n(\mathbf{r}) = u_n(\mathbf{r}-\mathbf{R})$.

Here the index n refers to the n-th energy band, the wavevector k is related to the direction of motion of the electron, r is the position in the crystal, and R is the location of an atomic site [Kittel, Introduction to Solid State Physics, 7th ed., 1996, p. 179]. (For more detail see the nearly-free electron model and the pseudopotential method.)

Tight-binding model
The opposite extreme to the nearly-free electron approximation assumes the electrons in the crystal behave much like an assembly of constituent atoms. This tight-binding model assumes the solution $\Psi$ to the time-independent single electron Schrödinger equation is well approximated by a linear combination of atomic orbitals $\psi_n(\mathbf{r})$ [Kittel, op. cit., pp. 245-248]:

$\Psi(\mathbf{r}) = \sum_{n,\mathbf{R}} b_{n,\mathbf{R}}\, \psi_n(\mathbf{r}-\mathbf{R})$,

where the coefficients $b_{n,\mathbf{R}}$ are selected to give the best approximate solution of this form. The index n refers to an atomic energy level and R refers to an atomic site. A more accurate approach using this idea employs Wannier functions, defined by [Kittel, op. cit., Eq. 42, p. 267; Mattis, The Many-Body Problem: Encyclopaedia of Exactly Solved Models in One Dimension, World Scientific, 1994, p. 340]:

$a_n(\mathbf{r}-\mathbf{R}) = \frac{V_C}{(2\pi)^3} \int_{BZ} d\mathbf{k}\; e^{-i\mathbf{k}\cdot(\mathbf{R}-\mathbf{r})}\, u_{n\mathbf{k}}$;

in which $u_{n\mathbf{k}}$ is the periodic part of the Bloch wave and the integral is over the Brillouin zone. Here the index n refers to the n-th energy band in the crystal. The Wannier functions are localized near atomic sites, like atomic orbitals, but being defined in terms of Bloch functions they are accurately related to solutions based upon the crystal potential. Wannier functions on different atomic sites R are orthogonal. The Wannier functions can be used to form the Schrödinger solution for the n-th energy band as

$\Psi_{n,\mathbf{k}}(\mathbf{r}) = \sum_{\mathbf{R}} e^{-i\mathbf{k}\cdot(\mathbf{R}-\mathbf{r})}\, a_n(\mathbf{r}-\mathbf{R})$.

(A short numerical sketch of a one-dimensional tight-binding band follows below.)

Density-functional theory
In the present-day physics literature, the large majority of electronic structures and band plots are calculated using density-functional theory (DFT), which is not a model but rather a theory, i.e. a microscopic first-principles theory of condensed matter physics that tries to cope with the electron-electron many-body problem via the introduction of an exchange-correlation term in the functional of the electronic density.
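As promised above, here is a sketch of the standard one-band, nearest-neighbour tight-binding dispersion in one dimension. This closed form, E(k) = eps − 2t cos(ka), is the textbook result for the model just described (it is not derived in the text above), and the lattice constant, on-site energy, and hopping amplitude are illustrative values of ours.

```python
import numpy as np

# One-dimensional nearest-neighbour tight-binding band: E(k) = eps - 2 t cos(k a)
a, eps, t = 1.0, 0.0, 1.0                    # illustrative lattice constant, on-site energy, hopping
k = np.linspace(-np.pi / a, np.pi / a, 201)  # wavevectors across the first Brillouin zone
E = eps - 2.0 * t * np.cos(k * a)
print(E.min(), E.max())                      # the band spans [eps - 2t, eps + 2t]
```

Note that the bandwidth 4|t| matches the saturation value seen in the finite-chain diagonalization sketched earlier: the analytic dispersion is the infinite-chain limit of that calculation.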
DFT-calculated bands are in many cases found to agree with experimentally measured bands, for example by angle-resolved photoemission spectroscopy (ARPES). In particular, the band shape seems well reproduced by DFT. But there are also systematic errors in DFT bands with respect to experiment. In particular, DFT seems to systematically underestimate the band gap in insulators and semiconductors by 30-40%.

It must be said that DFT is in principle an exact theory to reproduce and predict ground-state properties (e.g. the total energy, the atomic structure, etc.). However, DFT is not a theory for excited-state properties, such as the band plot of a solid, which represents the excitation energies of electrons injected into or removed from the system. What in the literature is quoted as a DFT band plot is a representation of the DFT Kohn-Sham energies, that is, the energies of a fictitious non-interacting system, the Kohn-Sham system, which has no physical interpretation at all. The Kohn-Sham electronic structure must not be confused with the real, quasiparticle electronic structure of a system, and there is no Koopmans' theorem holding for Kohn-Sham energies, as there is for Hartree-Fock energies, which can be truly considered as an approximation for quasiparticle energies. Hence, in principle, DFT is not a band theory, i.e. not a theory suitable for calculating bands and band plots.

Green's function methods and the ab initio GW approximation
To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground-state (the total energy) and excited-state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and usually approximations are needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = GW of the Green's function G and the dynamically screened interaction W. This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely ab initio way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.

Mott insulators
• Bands may also be viewed as the large-scale limit of molecular orbital theory. A solid creates a large number of closely spaced molecular orbitals, which appear as a band.
• Hubbard model

Each model describes some types of solids very well, and others poorly. The nearly-free electron model works well for metals, but poorly for non-metals. The tight-binding model is extremely accurate for ionic insulators, such as metal halide salts (e.g. NaCl).

Further reading
1. Kotai no denshiron (The theory of electrons in solids), by Hiroyuki Shiba, ISBN 4-621-04135-5
2. "Microelectronics", by Jacob Millman and Arvin Gabriel, ISBN 0-07-463736-3, Tata McGraw-Hill Edition.
3. "Solid State Physics", by Neil Ashcroft and N. David Mermin, ISBN 0-03-083993-9
4. "Elementary Solid State Physics: Principles and Applications", by M.
Ali Omar, ISBN 0-20-160733-6
5. "Introduction to Solid State Physics", by Charles Kittel, ISBN 0-471-41526-X
6. "Electronic and Optoelectronic Properties of Semiconductor Structures - Chapters 2 and 3", by Jasprit Singh, ISBN 0-521-82379-X

See also
• Bloch waves
• Nearly-free electron model
• Fermi surface
• Band gap
• Effective mass
• Kronig-Penney model
• Anderson's rule
• k·p perturbation theory
• Tight binding model
• Local-density approximation
• Dynamical theory of diffraction
• Solid state physics
Eigenequation and eigenvalue

1. Jan 27, 2005, #1
What is an eigenequation? What is the purpose of the eigenvalue? How does this fit into the Schrödinger equation (particle in a box problem)?

2. Jan 27, 2005, #2
An eigenequation is, for example, the following: M x = b x, where M is a matrix (for example a 3x3), x is a vector (3 components) and b is a real number (it could also be a complex number). You see that the matrix doesn't change the direction of x, only its length (right-hand side of the equation). x is called an eigenvector and b an eigenvalue of M. Now in quantum mechanics you have operators (instead of matrices) and so-called state vectors, for example:
H |Psi> = E |Psi>    (compare M x = b x)
H is the Hamilton operator, |Psi> is your eigenvector and E the eigenvalue. What's the meaning of the equation above? It just says that you have a system represented by the vector |Psi> (for example the electron in the hydrogen atom). And then you want to measure the energy. This is done by 'throwing' the operator H on your vector |Psi>. What comes out is your eigenvalue E, which is the energy. Now what's the Schrödinger equation? Suppose you want to examine the energy of the electron in the hydrogen atom. So you just apply H on |Psi> and get the energy E on the right-hand side of the eigenequation. The PROBLEM is, you don't know what your |Psi> looks like. So here's where the SCHRÖDINGER equation comes into play. The Schrödinger equation is a differential equation, which you have to solve in order to get your |Psi>. (Solving the differential equation means you get a solution |Psi>.) You put your potential (square well potential for the particle in a box, or Coulomb potential for the hydrogen atom) into the Schrödinger equation and solve it. You get your |Psi> from it. I hope I could help you.

3. Jan 28, 2005, #3
Thanks a lot!
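The matrix eigenequation described in post #2 can be checked in a few lines. The 3x3 matrix below is an arbitrary illustrative choice of ours, standing in for the operator M; the check confirms that M merely rescales each eigenvector by its eigenvalue.

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix standing in for the operator M
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

vals, vecs = np.linalg.eigh(M)
for b, x in zip(vals, vecs.T):
    # M x = b x: M only rescales each eigenvector x by its eigenvalue b
    print(round(b, 4), np.allclose(M @ x, b * x))
```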
Chameleon-Like Behavior of Neutrino Confirmed
Posted by Soulskill from the yes,-neutrinos-eat-bugs dept.

by Steve Max (1235710) on Monday May 31, 2010 @06:03PM (#32411522) Journal
You'd need a pretty complex theory to get non-mass oscillations to match all the data we got over the past 12 years, which is very compatible with a three-state, mass-driven oscillation scenario. Besides, you'd have to explain more than what the current "new standard model" (the SM with added neutrino masses) does if you want your theory to be accepted. If two theories explain the same data equally well, the simplest is more likely.

• by dumuzi (1497471) on Monday May 31, 2010 @06:17PM (#32411680) Journal
I agree. In QCD quarks and gluons can undergo colour changes; this would be "chameleon-like behavior". Neutrinos on the other hand change flavour; this would be "Willy Wonka like behavior".

• Re: What if... (Score:3, Informative)
by Black Parrot (19622) on Monday May 31, 2010 @06:22PM (#32411716)
"If two theories explain the same data equally well, the simplest is more likely." Make that "more preferred". In general we don't know anything about likelihood. The thing about Occam's Razor is that it filters out "special pleading" type arguments. If you want your pet in the show, you've got to provide motivation for including it.

• by pz (113803) on Monday May 31, 2010 @06:23PM (#32411738) Journal
Neutrinos are thought to have a very small mass. So exceedingly small that they barely interact with anything (they also have no charge, so they are even less likely to interact). But zero mass and really, really, really small but not zero mass are two different things.

• by BitterOak (537666) on Monday May 31, 2010 @06:49PM (#32411982)
The fact that they barely interact with anything has nothing to do with the fact that they are nearly massless. Photons are massless and they interact with anything that carries an electric charge. Electrons are much lighter than muons, but they are just as likely to interact with something. The only force that gets weaker as the mass goes down is gravity, which is by far the weakest of the fundamental forces.

by BitterOak (537666) on Monday May 31, 2010 @08:31PM (#32412774)
The situation you describe with the EM field is an example of wave-particle duality. Light can behave like both a wave and a particle, but it doesn't make sense to analyze it both ways at the same time. As a wave, it manifests itself as oscillating electric and magnetic fields, and as a particle, it manifests itself as a photon, which doesn't change into a different type of particle. (There's no such thing as an "electric photon" and a "magnetic photon".) Neutrinos, too, are described quantum mechanically by wavefunctions, and these wavefunctions have frequencies associated with them, related to the energy of the particle. But these have nothing to do with the oscillation frequencies described here, in which a neutrino of one flavor (e.g. mu) can change into a different flavor (e.g. tau). Quantum mechanically speaking, we say the mass eigenstates of the neutrino (states of definite mass) don't coincide with the weak eigenstates (states of definite flavor, i.e. e, mu, or tau). Without mass, there would be no distinct mass eigenstates at all, and so mixing of the weak eigenstates would not occur as the neutrino propagates through free space.
• by Steve Max (1235710) on Monday May 31, 2010 @09:15PM (#32413138) Journal
Light doesn't oscillate in this way. A photon is a photon, and remains a photon. Electric and magnetic fields oscillate, but the particle "photon" doesn't. Neutrinos start as one particle (say, as muon-neutrinos) and are detected as a completely different particle (say, as a tau-neutrino). The explanation for that is that what we call "electron-neutrino", "muon-neutrino" and "tau-neutrino" aren't states with a definite mass; they're a mixture of three neutrino states with definite, different masses (one of those masses can be zero, but at most one). Then, from pure quantum mechanics (and nothing more esoteric than that: the pure Schrödinger equation) you see that, if those three defined-mass states have slightly different masses, you will have a probability of creating an electron neutrino and detecting it as a tau neutrino, and every other combination. Those probabilities follow a simple expansion, based on only five parameters (two mass differences and three angles), and depend on the energy of the neutrino and the distance in a very specific way. We can test that dependency, and use very different experiments to measure the five parameters; and everything fits very well. Right now (especially after MINOS saw the energy dependency of the oscillation probability), nobody questions neutrino oscillations. This OPERA result only confirms what we already knew.

• Re:What if... (Score:3, Informative)
by khayman80 (824400) on Monday May 31, 2010 @09:18PM (#32413160) Homepage Journal
Thanks. I just found some equations that appear to reinforce what you said. Since the oscillation frequency is proportional to the difference of the squared masses of the mass eigenstates, perhaps it's more accurate to say that neutrino flavor oscillation implies the existence of several mass eigenstates which aren't identical to flavor eigenstates. Since two mass eigenstates would need different eigenvalues in order to be distinguishable, this means at least one mass eigenvalue has to be nonzero. There's probably some sort of "superselection rule" which prevents particles from oscillating between massless and massive eigenstates, so both mass eigenstates have to be non-zero. Cool.

• by Anonymous Coward on Tuesday June 01, 2010 @12:03AM (#32414422)
Photons are massless and chargeless, right?

• by Young Master Ploppy (729877) on Tuesday June 01, 2010 @08:30AM (#32417022) Homepage Journal
I'm not a "real" physicist, but I did study this at undergrad level, so here goes: Heisenberg's uncertainty principle states that there must always be a minimum uncertainty in certain pairs of related variables, e.g. position and momentum, i.e. the more accurately you know the position of something, the less accurately you know how it's moving. Another related pair is energy and time: the more accurately you know the energy of something, the less accurately you know when the measurement was taken. (Disclaimer: this makes perfect sense when expressed mathematically; it only sounds like handwavery when you translate it into English, as words are ambiguous and mean different things to different people.) Anyway, this uncertainty means that there is a small but non-zero probability of a higher-energy event occurring in the history of a lower-energy particle (often mis-stated as "particles can borrow energy for a short time"; check the wiki page for a more accurate statement).
It sounds nuts, I know, but it has many real-world implications that have no explanation in non-quantum physics. Particles can "tunnel" through barriers that they shouldn't be able to cross, for instance - this is how semi-conductors work. By implication, there is a small probability of the neutrino acting as if it had a higher energy, and *this* is how neutrino-flipping occurs without violating conservation of energy.

• Re:What if... (Score:4, Informative) by Steve Max (1235710) on Tuesday June 01, 2010 @10:41AM (#32418340) Journal
No. All flavour eigenstates MUST be massive: they are superpositions of the three mass eigenstates, one of which can have zero mass. Calling the three mass eigenstates n1, n2 and n3, and the three flavour eigenstates ne, nm and nt, we'd have ne = Ue1·n1 + Ue2·n2 + Ue3·n3, and similarly for nm and nt. So, if any of n1, n2 or n3 has a non-zero mass (and at least two of them MUST have non-zero masses, since we know two different and non-zero mass differences), all three flavour eigenstates have non-zero masses. Also, remember that the limit for the neutrino mass is at about 1 eV, while it's hard to have neutrinos travelling with energies under 10^6 eV. In other words, the gamma factor is huge, and they're always ultrarelativistic, travelling practically at "c". Another point is that the mass differences are really, really small; of the order of 0.01 eV. This is ridiculously small; so small that the uncertainty principle makes it possible for one state to "tunnel" to the other. I really can't go any deeper than that without resorting to quantum field theory. I can only say that standard QM is not compatible with relativity: Schrödinger's equation comes from the classical Hamiltonian, for example. To take special relativity into account, you need a different set of equations (Dirac's), which use the relativistic Hamiltonian. In this particular case, the result is the same using Dirac, Schrödinger or the full QFT, but the three-line Schrödinger solution becomes a full-page Dirac calculation, or ten pages of QFT. In this particular case, unfortunately, the best I can do is say "trust me, it works; you'll see it when you get more background".

by Steve Max (1235710) on Tuesday June 01, 2010 @10:52AM (#32418462) Journal
The time-dependent Schrödinger equation doesn't apply for massless particles. It was never intended to. It isn't relativistic. Try to apply a simple boost and you'll see it's not Poincaré invariant. The main point is that you get the same probabilities if you use a relativistic theory, but you need A LOT of work to get there. Oscillations work and happen in QFT, which is Poincaré-invariant and assumes special relativity. I can't find any references in a quick search, but I've done all the (quite painful) calculations a long time ago to make sure it works. It's one of those cases where the added complexity of relativistic quantum field theory doesn't change the results from a simple Schrödinger solution.
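The "gamma factor is huge" point above is easy to make quantitative. A quick sketch using the comment's own round numbers (a generous 1 eV mass bound and a deliberately low 10^6 eV energy):

```python
import math

# How ultrarelativistic is a neutrino? Take the comment's own numbers:
m_eV = 1.0        # generous upper bound on the mass, in eV/c^2
E_eV = 1.0e6      # a deliberately *low* energy, in eV

gamma = E_eV / m_eV                        # from E = gamma * m * c^2
beta = math.sqrt(1.0 - 1.0 / gamma**2)     # v/c
print(f"gamma   = {gamma:.0e}")            # 1e+06
print(f"1 - v/c = {1.0 - beta:.1e}")       # ~5.0e-13: practically at c
```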
Atomic units

Atomic units (au or a.u.) form a system of natural units which is especially convenient for atomic physics calculations. There are two different kinds of atomic units, Hartree atomic units[1] and Rydberg atomic units, which differ in the choice of the unit of mass and charge. This article deals with Hartree atomic units. In atomic units, the numerical values of four fundamental physical constants (listed below) are all unity by definition. Atomic units are often abbreviated "a.u." or "au", not to be confused with the same abbreviation used also for astronomical units, arbitrary units, and absorbance units in different contexts.

Use and notation

Atomic units, like SI units, have a unit of mass, a unit of length, and so on. However, the use and notation is somewhat different from SI. Suppose a particle with a mass of m has 3.4 times the mass of the electron. The value of m can be written in three ways:

• "m = 3.4 m_e". This is the clearest notation (but least common), where the atomic unit is included explicitly as a symbol.[2]
• "m = 3.4 a.u." ("a.u." means "expressed in atomic units"). This notation is ambiguous: here, it means that the mass m is 3.4 times the atomic unit of mass. But if a length L were 3.4 times the atomic unit of length, the equation would look the same, "L = 3.4 a.u." The dimension needs to be inferred from context.[2]
• "m = 3.4". This notation is similar to the previous one, and has the same dimensional ambiguity. It comes from formally setting the atomic units to 1, in this case m_e = 1, so 3.4 m_e = 3.4.[3][4]

Fundamental atomic units

These four fundamental constants form the basis of the atomic units (see above). Therefore, their numerical values in the atomic units are unity by definition.

• mass: electron rest mass, m_e = 9.10938291(40)×10⁻³¹ kg[5]
• charge: elementary charge, e = 1.602176565(35)×10⁻¹⁹ C
• action: reduced Planck constant, ħ = h/(2π) = 1.054571726(47)×10⁻³⁴ J·s
• electric constant⁻¹: Coulomb force constant, 1/(4πε₀) = 8.9875517873681×10⁹ kg·m³·s⁻²·C⁻²

Related physical constants

Dimensionless physical constants retain their values in any system of units. Of particular importance is the fine-structure constant α = e²/((4πε₀)ħc) ≈ 1/137. This immediately gives the value of the speed of light, expressed in atomic units:

• speed of light: c = 1/α ≈ 137
• classical electron radius: r_e = (1/(4πε₀))·e²/(m_e c²) = α² ≈ 5.32×10⁻⁵
• proton mass: m_p/m_e ≈ 1836

Derived atomic units

Below are given a few derived units. Some of them have proper names and symbols assigned, as indicated.

• length (bohr): a₀ = 4πε₀ħ²/(m_e e²) = ħ/(m_e c α) = 5.2917721092(17)×10⁻¹¹ m[5] = 0.52917721092(17) Å
• energy (hartree): E_h = m_e e⁴/(4πε₀ħ)² = α² m_e c² = 4.35974417(75)×10⁻¹⁸ J = 27.211 eV = 627.509 kcal·mol⁻¹
• time: ħ/E_h = 2.418884326505(16)×10⁻¹⁷ s
• velocity: a₀E_h/ħ = αc = 2.1876912633(73)×10⁶ m·s⁻¹
• force: E_h/a₀ = 8.2387225(14)×10⁻⁸ N = 82.387 nN = 51.421 eV·Å⁻¹
• temperature: E_h/k_B = 3.1577464(55)×10⁵ K
• pressure: E_h/a₀³ = 2.9421912(19)×10¹³ Pa
• electric field: E_h/(ea₀) = 5.14220652(11)×10¹¹ V·m⁻¹ = 51.4220652(11) V·Å⁻¹
• electric dipole moment: ea₀ = 8.47835326(19)×10⁻³⁰ C·m = 2.541746 D

SI and Gaussian-CGS variants, and magnetism-related units

There are two common variants of atomic units, one where they are used in conjunction with SI units for electromagnetism, and one where they are used with Gaussian-CGS units.[6] Although the units written above are the same either way (including the unit for electric field), the units related to magnetism are not. In the SI system, the atomic unit for magnetic field is ħ/(ea₀²) = 2.35×10⁵ T = 2.35×10⁹ G, and in the Gaussian-CGS unit system, the atomic unit for magnetic field is e/a₀² = 1.72×10³ T = 1.72×10⁷ G. (These differ by a factor of α.) Other magnetism-related quantities are also different in the two systems. An important example is the Bohr magneton: in SI-based atomic units,[7] μ_B = eħ/(2m_e) = 1/2 a.u., and in Gaussian-based atomic units,[8] μ_B = eħ/(2m_e c) = α/2 ≈ 3.6×10⁻³ a.u.

Bohr model in atomic units

Atomic units are chosen to reflect the properties of electrons in atoms. This is particularly clear from the classical Bohr model of the hydrogen atom in its ground state. The ground state electron orbiting the hydrogen nucleus has (in the classical Bohr model):

• Orbital velocity = 1
• Orbital radius = 1
• Angular momentum = 1
• Orbital period = 2π
• Ionization energy = 1/2
• Electric field (due to nucleus) = 1
• Electrical attractive force (due to nucleus) = 1

Non-relativistic quantum mechanics in atomic units

The Schrödinger equation for an electron in SI units is

−(ħ²/2m_e)∇²ψ(r, t) + V(r)ψ(r, t) = iħ ∂ψ/∂t (r, t).

The same equation in au is

−(1/2)∇²ψ(r, t) + V(r)ψ(r, t) = i ∂ψ/∂t (r, t).

For the special case of the electron around a hydrogen atom, the Hamiltonian in SI units is

H = −(ħ²/2m_e)∇² − (1/(4πε₀))·(e²/r),

while atomic units transform the preceding equation into

H = −(1/2)∇² − 1/r.

Comparison with Planck units

Both Planck units and au are derived from certain fundamental properties of the physical world, and are free of anthropocentric considerations. It should be kept in mind that au were designed for atomic-scale calculations in the present-day universe, while Planck units are more suitable for quantum gravity and early-universe cosmology. Both au and Planck units normalize the reduced Planck constant. Beyond this, Planck units normalize to 1 the two fundamental constants of general relativity and cosmology: the gravitational constant G and the speed of light in a vacuum, c. Atomic units, by contrast, normalize to 1 the mass and charge of the electron, and, as a result, the speed of light in atomic units is a large value, 1/α ≈ 137.
The orbital velocity of an electron around a small atom is of the order of 1 in atomic units, so the discrepancy between the velocity units in the two systems reflects the fact that electrons orbit small atoms much slower than the speed of light (around 2 orders of magnitude slower). There are much larger discrepancies in some other units. For example, the unit of mass in atomic units is the mass of an electron, while the unit of mass in Planck units is the Planck mass, a mass so large that if a single particle had that much mass it might collapse into a black hole. Indeed, the Planck unit of mass is 22 orders of magnitude larger than the au unit of mass. Similarly, there are many orders of magnitude separating the Planck units of energy and length from the corresponding atomic units.

Notes and references

1. ^ Hartree, D. R. (1928). "The Wave Mechanics of an Atom with a Non-Coulomb Central Field. Part I. Theory and Methods". Mathematical Proceedings of the Cambridge Philosophical Society 24 (1): 89–110. doi:10.1017/S0305004100011919.
2. ^ Pilar, Frank L. (2001). Elementary Quantum Chemistry. Dover Publications. p. 155. ISBN 978-0-486-41464-5.
3. ^ Bishop, David M. (1993). Group Theory and Chemistry. Dover Publications. p. 217. ISBN 978-0-486-67355-4.
4. ^ Drake, Gordon W. F. (2006). Springer Handbook of Atomic, Molecular, and Optical Physics (2nd ed.). Springer. p. 5. ISBN 978-0-387-20802-2.
5. ^ "The NIST Reference on Constants, Units and Uncertainty". National Institute of Standards and Technology. Retrieved 1 April 2012.
6. ^ "A note on Units". Physics 7550 — Atomic and Molecular Spectra. University of Colorado lecture notes.
7. ^ Chis, Vasile. "Atomic Units; Molecular Hamiltonian; Born-Oppenheimer Approximation". Molecular Structure and Properties Calculations. Babes-Bolyai University lecture notes.
8. ^ Budker, Dmitry; Kimball, Derek F.; DeMille, David P. (2004). Atomic Physics: An Exploration through Problems and Solutions. Oxford University Press. p. 380. ISBN 978-0-19-850950-9.
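As a numerical cross-check of the tables above, the following sketch rebuilds the bohr, the hartree, and the atomic units of time and velocity from the quoted SI constants, and confirms that the speed of light comes out at 1/α ≈ 137 atomic units:

```python
# SI values as quoted in the tables above
me   = 9.10938291e-31      # electron rest mass, kg
e    = 1.602176565e-19     # elementary charge, C
hbar = 1.054571726e-34     # reduced Planck constant, J*s
k    = 8.9875517873681e9   # Coulomb force constant 1/(4*pi*eps0)
c    = 2.99792458e8        # speed of light, m/s

a0 = hbar**2 / (me * k * e**2)       # bohr: atomic unit of length
Eh = me * (k * e**2 / hbar)**2       # hartree: atomic unit of energy
t0 = hbar / Eh                       # atomic unit of time
v0 = a0 * Eh / hbar                  # atomic unit of velocity

print(f"a0 = {a0:.6e} m")            # ~5.291772e-11 m
print(f"Eh = {Eh:.6e} J")            # ~4.359744e-18 J
print(f"t0 = {t0:.6e} s")            # ~2.418884e-17 s
print(f"c  = {c / v0:.3f} au")       # ~137.036 = 1/alpha
```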
Linear algebra

[Figure: The three-dimensional Euclidean space R³ is a vector space, and lines and planes passing through the origin are vector subspaces in R³.]

Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces, but is also concerned with properties common to all vector spaces. The set of points with coordinates that satisfy a linear equation forms a hyperplane in an n-dimensional space. The conditions under which a set of n hyperplanes intersect in a single point is an important focus of study in linear algebra. Such an investigation is initially motivated by a system of linear equations containing several unknowns. Such equations are naturally represented using the formalism of matrices and vectors.[1][2]

Linear algebra is central to both pure and applied mathematics. For instance, abstract algebra arises by relaxing the axioms of a vector space, leading to a number of generalizations. Functional analysis studies the infinite-dimensional version of the theory of vector spaces. Combined with calculus, linear algebra facilitates the solution of linear systems of differential equations. Techniques from linear algebra are also used in analytic geometry, engineering, physics, natural sciences, computer science, computer animation, and the social sciences (particularly in economics). Because linear algebra is such a well-developed theory, nonlinear mathematical models are sometimes approximated by linear models.

History

The study of linear algebra first emerged from the study of determinants, which were used to solve systems of linear equations. Determinants were used by Leibniz in 1693, and subsequently, Gabriel Cramer devised Cramer's Rule for solving linear systems in 1750. Later, Gauss further developed the theory of solving linear systems by using Gaussian elimination, which was initially listed as an advancement in geodesy.[3]

The study of matrix algebra first emerged in England in the mid-1800s. In 1844 Hermann Grassmann published his "Theory of Extension", which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for "womb". While studying compositions of linear transformations, Arthur Cayley was led to define matrix multiplication and inverses. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".[3] In 1882, Hüseyin Tevfik Pasha wrote the book titled "Linear Algebra".[4][5] The first modern and more precise definition of a vector space was introduced by Peano in 1888;[3] by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra first took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The use of matrices in quantum mechanics, special relativity, and statistics helped spread the subject of linear algebra beyond pure mathematics.
The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modelling and simulations.[3] The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination.

Educational history

Linear algebra first appeared in graduate textbooks in the 1940s and in undergraduate textbooks in the 1950s.[6] Following work by the School Mathematics Study Group, U.S. high schools asked 12th grade students to do "matrix algebra, formerly reserved for college" in the 1960s.[7] In France during the 1960s, educators attempted to teach linear algebra through affine dimensional vector spaces in the first year of secondary school. This was met with a backlash in the 1980s that removed linear algebra from the curriculum.[8] In 1993, the U.S.-based Linear Algebra Curriculum Study Group recommended that undergraduate linear algebra courses be given an application-based "matrix orientation" as opposed to a theoretical orientation.[9]

Scope of study

Vector spaces

The main structures of linear algebra are vector spaces. A vector space over a field F is a set V together with two binary operations. Elements of V are called vectors and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The operations of addition and multiplication in a vector space must satisfy the following axioms.[10] In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.

• Associativity of addition: u + (v + w) = (u + v) + w
• Commutativity of addition: u + v = v + u
• Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
• Inverse elements of addition: for every v ∈ V, there exists an element −v ∈ V such that v + (−v) = 0.
• Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
• Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
• Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
• Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv

The first four axioms are those of V being an abelian group under vector addition. Vector spaces may be diverse in nature, for example, containing functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.

Linear transformations

Similarly as in the theory of other algebraic structures, linear algebra studies mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear transformation (also called linear map, linear mapping or linear operator) is a map T: V → W that is compatible with addition and scalar multiplication:

T(u + v) = T(u) + T(v),  T(av) = aT(v)

for any vectors u, v ∈ V and a scalar a ∈ F. Equivalently, for any vectors u, v ∈ V and scalars a, b ∈ F:

T(au + bv) = T(au) + T(bv) = aT(u) + bT(v).

When a bijective linear mapping exists between two vector spaces (that is, every vector from the second space is associated with exactly one in the first), we say that the two spaces are isomorphic. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view. One essential question in linear algebra is whether a mapping is an isomorphism or not, and this question can be answered by checking if the determinant is nonzero. If a mapping is not an isomorphism, linear algebra is interested in finding its range (or image) and the set of elements that get mapped to zero, called the kernel of the mapping. Linear transformations have geometric significance.
For example, 2 × 2 real matrices denote standard planar mappings that preserve the origin.

Subspaces, span, and basis

Again, in analogue with theories of other algebraic objects, linear algebra is interested in subsets of vector spaces that are themselves vector spaces; these subsets are called linear subspaces. For example, both the range and kernel of a linear mapping are subspaces, and are thus often called the range space and the nullspace; these are important examples of subspaces. Another important way of forming a subspace is to take a linear combination of a set of vectors v1, v2, …, vk:

a1v1 + a2v2 + ⋯ + akvk,

where a1, a2, …, ak are scalars. The set of all linear combinations of vectors v1, v2, …, vk is called their span, which forms a subspace. A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, …, vk then these vectors are linearly independent. Given a set of vectors that span a space, if any vector w is a linear combination of other vectors (and so the set is not linearly independent), then the span would remain the same if we remove w from the set. Thus, a set of linearly dependent vectors is redundant in the sense that there will be a linearly independent subset which will span the same subspace. Therefore, we are mostly interested in a linearly independent set of vectors that spans a vector space V, which we call a basis of V. Any set of vectors that spans V contains a basis, and any linearly independent set of vectors in V can be extended to a basis.[11] It turns out that if we accept the axiom of choice, every vector space has a basis;[12] nevertheless, this basis may be unnatural, and indeed, may not even be constructible. For instance, there exists a basis for the real numbers considered as a vector space over the rationals, but no explicit basis has been constructed.

Any two bases of a vector space V have the same cardinality, which is called the dimension of V. The dimension of a vector space is well-defined by the dimension theorem for vector spaces. If a basis of V has a finite number of elements, V is called a finite-dimensional vector space. If V is finite-dimensional and U is a subspace of V, then dim U ≤ dim V. If U1 and U2 are subspaces of V, then

dim(U1 + U2) = dim U1 + dim U2 − dim(U1 ∩ U2).[13]

One often restricts consideration to finite-dimensional vector spaces. A fundamental theorem of linear algebra states that all vector spaces of the same dimension are isomorphic,[14] giving an easy way of characterizing isomorphism.

Matrix theory

A particular basis {v1, v2, …, vn} of V allows one to construct a coordinate system in V: the vector with coordinates (a1, a2, …, an) is the linear combination

a1v1 + a2v2 + ⋯ + anvn.

The condition that v1, v2, …, vn span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v1, v2, …, vn assures that these coordinates are unique (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space Fⁿ. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in Fⁿ.
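In concrete terms, assigning coordinates with respect to a chosen basis is a linear solve: stack the basis vectors as the columns of a matrix B and solve B·coords = v. A small sketch (the basis vectors are arbitrary illustrative choices):

```python
import numpy as np

# An illustrative basis of R^3 (any three linearly independent vectors work)
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
b3 = np.array([1.0, 1.0, 1.0])
B = np.column_stack([b1, b2, b3])   # basis vectors as the columns of B

v = np.array([2.0, 3.0, 4.0])
coords = np.linalg.solve(B, v)      # unique since det(B) != 0
print(coords)                       # [-1. -1.  4.]

# Check: the coordinates reproduce v as a linear combination of the basis
assert np.allclose(B @ coords, v)
```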
Furthermore, if V and W are an n-dimensional and m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T: V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Two matrices that encode the same linear transformation in different bases are called similar. Matrix theory replaces the study of linear transformations, which were defined axiomatically, by the study of matrices, which are concrete objects. This major technique distinguishes linear algebra from theories of other algebraic structures, which usually cannot be parameterized so concretely.

There is an important distinction between the coordinate n-space Rⁿ and a general finite-dimensional vector space V. While Rⁿ has a standard basis {e1, e2, …, en}, a vector space V typically does not come equipped with such a basis and many different bases exist (although they all consist of the same number of elements equal to the dimension of V).

One major application of the matrix theory is calculation of determinants, a central concept in linear algebra. While determinants could be defined in a basis-free manner, they are usually introduced via a specific representation of the mapping; the value of the determinant does not depend on the specific basis. It turns out that a mapping has an inverse if and only if the determinant has an inverse (every non-zero real or complex number has an inverse[15]). If the determinant is zero, then the nullspace is nontrivial. Determinants have other applications, including a systematic way of seeing if a set of vectors is linearly independent (we write the vectors as the columns of a matrix, and if the determinant of that matrix is zero, the vectors are linearly dependent). Determinants could also be used to solve systems of linear equations (see Cramer's rule), but in real applications, Gaussian elimination is a faster method.

Eigenvalues and eigenvectors

In general, the action of a linear transformation may be quite complex. Attention to low-dimensional examples gives an indication of the variety of their types. One strategy for a general n-dimensional transformation T is to find "characteristic lines" that are invariant sets under T. If v is a non-zero vector such that Tv is a scalar multiple of v, then the line through 0 and v is an invariant set under T, and v is called a characteristic vector or eigenvector. The scalar λ such that Tv = λv is called a characteristic value or eigenvalue of T. To find an eigenvector or an eigenvalue, we note that

Tv − λv = (T − λI)v = 0,

where I is the identity matrix. For there to be nontrivial solutions to that equation, det(T − λI) = 0. The determinant is a polynomial, and so the eigenvalues are not guaranteed to exist if the field is R. Thus, we often work with an algebraically closed field such as the complex numbers when dealing with eigenvectors and eigenvalues, so that an eigenvalue will always exist. It would be particularly nice if, given a transformation T taking a vector space V into itself, we can find a basis for V consisting of eigenvectors. If such a basis exists, we can easily compute the action of the transformation on any vector: if v1, v2, …, vn are linearly independent eigenvectors of a mapping of n-dimensional spaces T with (not necessarily distinct) eigenvalues λ1, λ2, …, λn, and if v = a1v1 + ⋯
+ anvn, then

T(v) = T(a1v1) + ⋯ + T(anvn) = a1T(v1) + ⋯ + anT(vn) = a1λ1v1 + ⋯ + anλnvn.

Such a transformation is called a diagonalizable matrix since in the eigenbasis, the transformation is represented by a diagonal matrix. Because operations like matrix multiplication, matrix inversion, and determinant calculation are simple on diagonal matrices, computations involving matrices are much simpler if we can bring the matrix to a diagonal form. Not all matrices are diagonalizable (even over an algebraically closed field).

Inner-product spaces

Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map ⟨·,·⟩: V × V → F that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:[16][17]

• Conjugate symmetry: ⟨u, v⟩ = ⟨v, u⟩* (complex conjugation). Note that in R, it is symmetric.
• Linearity in the first argument: ⟨au, v⟩ = a⟨u, v⟩ and ⟨u + w, v⟩ = ⟨u, v⟩ + ⟨w, v⟩.
• Positive-definiteness: ⟨v, v⟩ ≥ 0, with equality only for v = 0.

We can define the length of a vector v in V by ‖v‖² = ⟨v, v⟩, and we can prove the Cauchy–Schwarz inequality:

|⟨u, v⟩| ≤ ‖u‖·‖v‖.

In particular, the quantity |⟨u, v⟩|/(‖u‖·‖v‖) ≤ 1, and so we can call this quantity the cosine of the angle between the two vectors. Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional vector space, an orthonormal basis could be found by the Gram–Schmidt procedure. Orthonormal bases are particularly nice to deal with, since if v = a1v1 + ⋯ + anvn, then ai = ⟨v, vi⟩.

The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying ⟨Tu, v⟩ = ⟨u, T*v⟩. If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.

Some main useful theorems

• A matrix is invertible, or non-singular, if and only if the linear map represented by the matrix is an isomorphism.
• Any vector space over a field F of dimension n is isomorphic to Fⁿ as a vector space over F.
• Corollary: any two vector spaces over F of the same finite dimension are isomorphic to each other.
• A linear map is an isomorphism if and only if the determinant is nonzero.

Applications

Because of the ubiquity of vector spaces, linear algebra is used in many fields of mathematics, natural sciences, computer science, and social science. Below are just some examples of applications of linear algebra.

Solution of linear systems

Linear algebra provides the formal setting for the linear combination of equations used in the Gaussian method. Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

2x + y − z = 8      (L1)
−3x − y + 2z = −11  (L2)
−2x + y + 2z = −3   (L3)

The Gaussian-elimination algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2.
This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for. In the example, x is eliminated from L2 by adding (3/2)L1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

L2 + (3/2)L1 → L2
L3 + L1 → L3

The result is:

2x + y − z = 8
(1/2)y + (1/2)z = 1
2y + z = 5

Now y is eliminated from L3 by adding −4L2 to L3:

L3 + (−4)L2 → L3

The result is:

2x + y − z = 8
(1/2)y + (1/2)z = 1
−z = 1

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

z = −1   (L3)

Then, z can be substituted into L2, which can then be solved to obtain

y = 3   (L2)

Next, z and y can be substituted into L1, which can be solved to obtain

x = 2   (L1)

The system is solved. We can, in general, write any system of linear equations as a matrix equation:

Ax = b.

The solution of this system is characterized as follows: first, we find a particular solution x0 of this equation using Gaussian elimination. Then, we compute the solutions of Ax = 0; that is, we find the null space N of A. The solution set of this equation is given by x0 + N = {x0 + n : n ∈ N}. If the number of variables is equal to the number of equations, then we can characterize when the system has a unique solution: since N is trivial if and only if det A ≠ 0, the equation has a unique solution if and only if det A ≠ 0.[18]

Least-squares best fit line

The least squares method is used to determine the best fit line for a set of data.[19] This line will minimize the sum of the squares of the residuals.

Fourier series expansion

Fourier series are a representation of a function f: [−π, π] → R as a trigonometric series:

f(x) = a0/2 + Σ_{n=1}^{∞} [an cos(nx) + bn sin(nx)].

This series expansion is extremely useful in solving partial differential equations. In this article, we will not be concerned with convergence issues; it is nice to note that all Lipschitz-continuous functions have a converging Fourier series expansion, and nice enough discontinuous functions have a Fourier series that converges to the function value at most points. The space of all functions that can be represented by a Fourier series forms a vector space (technically speaking, we call functions that have the same Fourier series expansion the "same" function, since two different discontinuous functions might have the same Fourier series). Moreover, this space is also an inner product space with the inner product

⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x)g(x) dx.

The functions gn(x) = sin(nx) for n > 0 and hn(x) = cos(nx) for n ≥ 0 are an orthonormal basis for the space of Fourier-expandable functions. We can thus use the tools of linear algebra to find the expansion of any function in this space in terms of these basis functions. For instance, to find the coefficient ak, we take the inner product with hk:

⟨f, hk⟩ = (a0/2)⟨h0, hk⟩ + Σ_{n=1}^{∞} [an⟨hn, hk⟩ + bn⟨gn, hk⟩],

and by orthonormality, ⟨f, hk⟩ = ak; that is,

ak = (1/π) ∫_{−π}^{π} f(x) cos(kx) dx.
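The coefficient formula lends itself to a quick numerical check. The sketch below evaluates ak = (1/π)∫ f(x)cos(kx) dx for the illustrative choice f(x) = |x|, whose cosine coefficients are known in closed form (a0 = π, and ak = (2/(πk²))((−1)^k − 1) for k ≥ 1):

```python
import numpy as np

# Check a_k = (1/pi) * integral of f(x)*cos(k*x) over [-pi, pi] for f(x) = |x|
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]
f = np.abs(x)

def integrate(y, dx):
    # simple trapezoid rule
    return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

for k in range(4):
    a_k = integrate(f * np.cos(k * x), dx) / np.pi
    print(k, round(a_k, 5))   # pi, -4/pi, 0, -4/(9*pi)
```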
Quantum mechanics

Quantum mechanics is highly inspired by notions in linear algebra. In quantum mechanics, the physical state of a particle is represented by a vector, and observables (such as momentum, energy, and angular momentum) are represented by linear operators on the underlying vector space. More concretely, the wave function of a particle describes its physical state and lies in the vector space L² (the functions φ: R³ → C such that ∫∫∫ |φ|² dx dy dz is finite), and it evolves according to the Schrödinger equation. Energy is represented as the operator

H = −(ħ²/2m)∇² + V(x, y, z),

where V is the potential energy. H is also known as the Hamiltonian operator. The eigenvalues of H represent the possible energies that can be observed. Given a particle in some state φ, we can expand φ into a linear combination of eigenstates of H. The component of φ in each eigenstate determines the probability of measuring the corresponding eigenvalue, and the measurement forces the particle to assume that eigenstate (wave function collapse).

Geometric introduction

Many of the principles and techniques of linear algebra can be seen in the geometry of lines in a real two dimensional plane E. When formulated using vectors and matrices, the geometry of points and lines in the plane can be extended to the geometry of points and hyperplanes in high-dimensional spaces. Point coordinates in the plane E are ordered pairs of real numbers, (x, y), and a line is defined as the set of points (x, y) that satisfy the linear equation[20]

λ: ax + by + c = 0,

where a, b and c are not all zero. Then,

λ: [a b c][x y 1]ᵀ = 0,

where x = (x, y, 1) is the 3 × 1 set of homogeneous coordinates associated with the point (x, y).[21] Homogeneous coordinates identify the plane E with the z = 1 plane in three dimensional space. The x–y coordinates in E are obtained from homogeneous coordinates y = (y1, y2, y3) by dividing by the third component (if it is nonzero) to obtain y = (y1/y3, y2/y3, 1). The linear equation, λ, has the important property that if x1 and x2 are homogeneous coordinates of points on the line, then the point αx1 + βx2 is also on the line, for any real α and β.

Now consider the equations of the two lines λ1 and λ2,

λ1: a1x + b1y + c1 = 0,  λ2: a2x + b2y + c2 = 0,

which form a system of linear equations. The intersection of these two lines is defined by x = (x, y, 1) that satisfy the matrix equation

λ1,2: [a1 b1 c1; a2 b2 c2][x y 1]ᵀ = [0 0]ᵀ,

or, using homogeneous coordinates, Bx = 0, where B is the 2 × 3 coefficient matrix. The point of intersection of these two lines is the unique non-zero solution of these equations. In homogeneous coordinates, the solutions are multiples of the following solution:[21]

x1 = |b1 c1; b2 c2|,  x2 = −|a1 c1; a2 c2|,  x3 = |a1 b1; a2 b2|,

if the rows of B are linearly independent (i.e., λ1 and λ2 represent distinct lines). Divide through by x3 to get Cramer's rule for the solution of a set of two linear equations in two unknowns.[22] Notice that this yields a point in the z = 1 plane only when the 2 × 2 submatrix associated with x3 has a non-zero determinant.
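The three 2 × 2 minors given above are exactly the components of the cross product of the two coefficient vectors (a1, b1, c1) and (a2, b2, c2), so the intersection point can be computed in one line. A sketch with two illustrative lines:

```python
import numpy as np

# Two illustrative lines: x + y - 3 = 0 and x - y - 1 = 0
l1 = np.array([1.0, 1.0, -3.0])   # (a1, b1, c1)
l2 = np.array([1.0, -1.0, -1.0])  # (a2, b2, c2)

p = np.cross(l1, l2)   # the three 2x2 minors, i.e. the homogeneous solution
print(p)               # [-4. -2. -2.]
print(p[:2] / p[2])    # [2. 1.]  -> the lines meet at (x, y) = (2, 1)
```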
It is interesting to consider the case of three lines, λ1, λ2 and λ3, which yield the matrix equation

λ1,2,3: [a1 b1 c1; a2 b2 c2; a3 b3 c3][x y 1]ᵀ = [0 0 0]ᵀ,

which in homogeneous form yields Cx = 0. Clearly, this equation has the solution x = (0, 0, 0), which is not a point on the z = 1 plane E. For a solution to exist in the plane E, the coefficient matrix C must have rank 2, which means its determinant must be zero. Another way to say this is that the columns of the matrix must be linearly dependent.

Introduction to linear transformations

Another way to approach linear algebra is to consider linear functions on the two dimensional real plane E = R². Here R denotes the set of real numbers. Let x = (x, y) be an arbitrary vector in E and consider the linear function λ: E → R, given by

λ: [a b][x y]ᵀ = c,

or, more compactly, Ax = c with A = [a b]. This transformation has the important property that if Ay = d, then

A(αx + βy) = αAx + βAy = αc + βd.

This shows that the sum of vectors in E maps to the sum of their images in R. This is the defining characteristic of a linear map, or linear transformation.[20] For this case, where the image space is a real number, the map is called a linear functional.[22]

Consider the linear functional a little more carefully. Let i = (1, 0) and j = (0, 1) be the natural basis vectors on E, so that x = xi + yj. It is now possible to see that

Ax = A(xi + yj) = xAi + yAj = [Ai Aj][x y]ᵀ = [a b][x y]ᵀ = c.

Thus, the columns of the matrix A are the image of the basis vectors of E in R. This is true for any pair of vectors used to define coordinates in E. Suppose we select a non-orthogonal non-unit vector basis v and w to define coordinates of vectors in E. This means a vector x has coordinates (α, β), such that x = αv + βw. Then, we have the linear functional

λ: Ax = [Av Aw][α β]ᵀ = [d e][α β]ᵀ = c,

where Av = d and Aw = e are the images of the basis vectors v and w. This is written in matrix form as

[a b][v1 w1; v2 w2] = [d e].

Coordinates relative to a basis

This leads to the question of how to determine the coordinates of a vector x relative to a general basis v and w in E. Assume that we know the coordinates of the vectors x, v and w in the natural basis i = (1, 0) and j = (0, 1). Our goal is to find the real numbers α, β, so that x = αv + βw, that is

[x y]ᵀ = [v1 w1; v2 w2][α β]ᵀ.
To solve this equation for α, β, we compute the linear coordinate functionals σ and τ for the basis v, w, which are given by[21]

σ = [σ1 σ2] = (1/(v1w2 − v2w1))[w2 −w1],
τ = [τ1 τ2] = (1/(v1w2 − v2w1))[−v2 v1].

The functionals σ and τ compute the components of x along the basis vectors v and w, respectively, that is,

σx = α,  τx = β,

which can be written in matrix form as

[σ1 σ2; τ1 τ2][x y]ᵀ = [α β]ᵀ.

These coordinate functionals have the properties

σv = 1, σw = 0, τw = 1, τv = 0.

These equations can be assembled into the single matrix equation

[σ1 σ2; τ1 τ2][v1 w1; v2 w2] = [1 0; 0 1].

Thus, the matrix formed by the coordinate linear functionals is the inverse of the matrix formed by the basis vectors.[20][22]

Inverse image

The set of points in the plane E that map to the same image in R under the linear functional λ define a line in E. This line is the image of the inverse map, λ⁻¹: R → E. This inverse image is the set of the points x = (x, y) that solve the equation

Ax = [a b][x y]ᵀ = c.

Notice that a linear functional operates on known values for x = (x, y) to compute a value c in R, while the inverse image seeks the values for x = (x, y) that yield a specific value c. In order to solve the equation, we first recognize that only one of the two unknowns (x, y) can be determined, so we select y to be determined, and rearrange the equation:

by = c − ax.

Solve for y and obtain the inverse image as the set of points

x(t) = [0, c/b]ᵀ + t[1, −a/b]ᵀ = p + th.

For convenience the free parameter x has been relabeled t. The vector p defines the intersection of the line with the y-axis, known as the y-intercept. The vector h satisfies the homogeneous equation

Ah = [a b][1, −a/b]ᵀ = 0.

Notice that if h is a solution to this homogeneous equation, then th is also a solution. The set of points of a linear functional that map to zero define the kernel of the linear functional. The line can be considered to be the set of points h in the kernel translated by the vector p.[20][22]

Generalizations and related topics

Since linear algebra is a successful theory, its methods have been developed and generalized in other parts of mathematics. In module theory, one replaces the field of scalars by a ring. The concepts of linear independence, span, basis, and dimension (which is called rank in module theory) still make sense. Nevertheless, many theorems from linear algebra become false in module theory. For instance, not all modules have a basis (those that do are called free modules), the rank of a free module is not necessarily unique, not every linearly independent subset of a module can be extended to form a basis, and not every subset of a module that spans the space contains a basis.

In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f: V → F where F is the field of scalars.
Multilinear maps T: Vⁿ → F can be described via tensor products of elements of V*. If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).

Functional analysis mixes the methods of linear algebra with those of mathematical analysis and studies various function spaces, such as Lp spaces.

Representation theory studies the actions of algebraic objects on vector spaces by representing these objects as matrices. It is interested in all the ways that this is possible, and it does so by finding subspaces invariant under all transformations of the algebra. The concept of eigenvalues and eigenvectors is especially important.

Algebraic geometry considers the solutions of systems of polynomial equations.

Several related topics in computer programming also make heavy use of the techniques and theorems of linear algebra.

References

1. ^ Strang, Gilbert (July 19, 2005), Linear Algebra and Its Applications (4th ed.), Brooks Cole, ISBN 978-0-03-010567-8
2. ^ Weisstein, Eric. "Linear Algebra". From MathWorld — A Wolfram Web Resource. Retrieved 16 April 2012.
3. ^ a b c d Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics, University of Oregon. Archived from the original on 2012-09-10. Retrieved 2014-07-08.
4. ^
5. ^
6. ^ Tucker, Alan (1993). "The Growing Importance of Linear Algebra in Undergraduate Mathematics". College Mathematics Journal 24 (1): 3–9. doi:10.2307/2686426.
7. ^ Goodlad, John I.; von Stoephasius, Reneta; Klein, M. Frances (1966). "The changing school curriculum". U.S. Department of Health, Education, and Welfare: Office of Education. Retrieved 9 July 2014.
8. ^ Dorier, Jean-Luc; Robert, Aline; Robinet, Jacqueline; Rogalski, Marc (2000). Dorier, Jean-Luc, ed. The Obstacle of Formalism in Linear Algebra. Springer. pp. 85–124. ISBN 978-0-7923-6539-6. Retrieved 9 July 2014.
9. ^ Carlson, David; Johnson, Charles R.; Lay, David C.; Porter, A. Duane (1993). "The Linear Algebra Curriculum Study Group Recommendations for the First Course in Linear Algebra". The College Mathematics Journal 24 (1): 41–46. doi:10.2307/2686430.
10. ^ Roman 2005, ch. 1, p. 27
11. ^ Axler (2004), pp. 28–29
12. ^ The existence of a basis is straightforward for countably generated vector spaces, and for well-ordered vector spaces, but in full generality it is logically equivalent to the axiom of choice.
13. ^ Axler (2004), p. 33
14. ^ Axler (2004), p. 55
15. ^ If we restrict to integers, then only 1 and −1 have an inverse. Consequently, the inverse of an integer matrix is an integer matrix if and only if the determinant is 1 or −1.
18. ^ Gunawardena, Jeremy. "Matrix algebra for beginners, Part I" (PDF). Harvard Medical School. Retrieved 2 May 2012.
19. ^ Miller, Steven. "The Method of Least Squares" (PDF). Brown University. Retrieved 1 May 2013.
20. ^ a b c d Strang, Gilbert (July 19, 2005), Linear Algebra and Its Applications (4th ed.), Brooks Cole, ISBN 978-0-03-010567-8
21. ^ a b c J. G. Semple and G. T. Kneebone, Algebraic Projective Geometry, Clarendon Press, London, 1952.
22. ^ a b c d E. D. Nering, Linear Algebra and Matrix Theory, John Wiley, New York, NY, 1963
Did BICEP2 Detect Gravitational Waves Directly or Indirectly?

I'll describe

1. a past observation of gravitational waves that everyone agrees is indirect;
2. a future observation of gravitational waves that we expect to happen fairly soon, one that I believe everyone will agree is direct;
3. BICEP2, and how you can view it either way, depending on your perspective.

A Past Indirect Observation of Gravitational Waves

First, let me describe what everyone agrees was the first observation of gravitational waves, and was definitely indirect.  In 1974, two scientists (Joseph Taylor and his graduate student Russell Hulse) discovered a pulsar.  A pulsar is a city-sized neutron star (made entirely from neutrons and resulting from a core-collapse supernova) that spins rapidly — rotating many times per second — and, due to its powerful magnetic field, sends strong radio beams into space, which sweep past the Earth as the pulsar spins.  We observe this as a pulsing radio signal from the location of the star.

Pulsars are common, but this one was special.  Its frequency of pulsing (i.e. how many times per second it pulses) varied slightly, growing and shrinking every 7 hours and 45 minutes.  It quickly became clear this was due to the Doppler effect for radio waves; the pulsar was sometimes moving toward us, and sometimes away, because it was in orbit around something else.  Detailed study (using the Newton/Einstein laws of gravity) allowed Hulse and Taylor to infer that what they were seeing was a pulsar orbiting a second neutron star.  They could even figure out the orientation and size of the orbit!

Having figured this out, they could do one more thing.  Einstein's laws of gravity predict that gravitational waves — waves in space itself — are created by these two stars as they orbit one another, and that these waves carry energy out into space, reducing the energy available to the two stars.  The effect of this loss of energy would be a very mild reduction in the time (or "period") that it takes for the two stars to orbit each other — but not by very much!  The period of the orbit, about 28,000 seconds, is predicted by Einstein's equations to be shrinking by a bit more than one second per year.

Fortunately, pulsars are stable enough, and Hulse and Taylor's measurements were easily accurate enough, that this change of about a second per year was relatively easy for them to measure during the ensuing decade.  And they could compare their measurements of the change in the period with the predictions of Einstein's theory of gravity.  Remarkably, the agreement of the theory with the data is excellent!  For this confirmation of Einstein's theory's prediction of gravitational waves, Hulse and Taylor received the Nobel Prize in 1993.

[Figure: Graph showing the cumulative shift of periastron time for PSR 1913+16, reflecting the decrease of the orbital period as the two stars spiral together. Hulse and Taylor's data (black dots) for the reduction in the period of the neutron stars' orbit compared to Einstein's theory's prediction (solid line) of how the orbit should change due to the emission of gravitational waves. (General Relativity is the name of Einstein's theory of gravity.) Notice how remarkably precise is the agreement over decades!]

Hulse and Taylor had thus observed the effect of gravitational waves for the first time in human history.
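For the numerically inclined: the prediction plotted above comes, at leading order, from the quadrupole (Peters–Mathews) formula. The sketch below evaluates it using published PSR B1913+16 parameters as illustrative inputs; it gives dPb/dt ≈ −2.4×10⁻¹² s/s, i.e. the orbital period itself shrinks by only about 76 microseconds per year (the seconds-per-year figure refers to the cumulative shift of periastron time, a point also raised in the comments below):

```python
import numpy as np

# Leading-order (quadrupole / Peters-Mathews) decay rate of a binary's
# orbital period, evaluated with published PSR B1913+16 parameters.
G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

m1, m2 = 1.4398 * Msun, 1.3886 * Msun       # pulsar and companion masses
Pb = 27906.98                               # orbital period, s
e = 0.6171                                  # orbital eccentricity

# Eccentricity enhancement factor
f = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5

dPdt = (-192 * np.pi / 5) * (2 * np.pi * G / Pb) ** (5 / 3) \
       * m1 * m2 / (m1 + m2) ** (1 / 3) / c**5 * f

print(f"dPb/dt = {dPdt:.2e} s/s")                             # about -2.4e-12
print(f"       = {dPdt * 3.156e7 * 1e6:.0f} us per year")     # about -76
```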
But they hadn’t observed the waves themselves; they’d observed the loss of energy, in the neutron star pair, due to the waves, but not the waving of space, compressing and expanding as the waves move by. Clearly, this detection of gravitational waves was indirect. A Future, Likely Direct Detection of Gravitational Waves A direct search for gravitational waves is underway now, at experiments known as LIGO and VIRGO.  When a gravitational wave passes by the Earth, space itself grows and shrinks a little bit, and the distances between objects increases and decreases.  It’s an incredibly tiny effect even for powerful gravitational waves; you and I would never notice it.  But this shrinking and growing of space can potentially be observed with extremely stable, carefully designed lasers looking for the distance between two mirrors to shift by less than the radius of a proton, which itself is 100,000 times smaller than the radius of an atom!  [The principles involved are not so different from those used in the famous Michelson-Morley experiment --- but the experimental requirements are vastly greater!] When the repeated changing of the distance between mirrors due to a stretching and compression of space is actually observed, that will clearly be direct observation of waves of space itself — gravitational waves.  This hasn’t happened yet, but the “Advanced” phase of LIGO is coming up very soon, starting this year.  We may well see LIGO make discoveries within the decade. BICEP2: Direct or Indirect? I think it’s very clear that BICEP2 — IF the experiment’s results are correct (they have not been confirmed by another experiment yet) and IF they are correctly interpreted as due to gravitational waves (which is still an open question) —represents an advance over the Hulse-Taylor discovery.  But it’s not as direct as LIGO, either. BICEP2’s measurement [see here for some details]  is actually of the polarization of light  that was released 380,000 years after the Hot Big Bang began, at the time when the universe cooled enough to become transparent. This light has now become the “cosmic microwave background” [CMB] which we observe today coming from all directions in the sky.  So really they’re directly observing light (microwaves rather than visible light), not waves in space itself — gravitational waves. But the nature and size of the polarization effect they observe (“B-mode” polarization, across large swathes of sky) is believed to have only one possible source: gravitational waves, created in the early universe and ringing for 380,000 years, and then interacting with the light that is now the CMB.  It is the squeezing and stretching of space within which the light is moving that causes the light to end up polarized in a unique way. In this sense, you could say that the CMB is providing a sort of unusual photograph of gravitational waves, taken at 380,000 years post-Big-Bang.  It gives far, far more detail about their nature than does the Hulse-Taylor measurement; it confirms more and different things that Einstein predicted, such as the fact that these gravitational waves have “spin two”, which is necessary for them to give B-mode polarization.  If you think of it as a photograph, BICEP2’s measurement seems pretty direct. But on the other hand, it’s nowhere near as direct as LIGO would be, where mirrors that humans have set up will actually move back and forth as a gravitational wave’s crests and troughs pass by.  
Far, far more detail will be available when that happens — and there will be little or no ambiguity about the interpretation of the data.  For BICEP2, it's still conceivable (though no one has thought of anything specific) that the B-mode polarization actually is not due to gravitational waves but is due to something else.  The very fact that this is conceivable — that maybe the polarization comes from something other than waves in space itself — reflects the fact that the BICEP2 data involves looking at something that happened billions of years ago in very distant locations, and drawing inferences.  BICEP2 isn't itself seeing space shrink and expand; it's observing polarized light created long ago, and then scientists are inferring that the pattern of its polarization is due to space shrinking and expanding.  From that point of view, BICEP2's detection is still rather indirect.

So call it what you will; it's clearly (if correct and correctly interpreted) more direct than Hulse and Taylor's measurement, and less direct than a detection at LIGO would be.  Maybe we should call it "(...nnnn)direct"?  In any case, what we call it isn't important; what's important is to figure out whether it's correct, and what it means.

60 responses to "Did BICEP2 Detect Gravitational Waves Directly or Indirectly?"

1. If space is the medium for those waves, then space is a thing... is it possible that spaceness is not fundamental? Does the status of space as a continuous or discrete medium affect what we see? If space is quantized, what does that mean for those waves?

2. Correction: "IF space is ....."

• The characteristics of "empty space" suggest to me that this space or "vacuum" is indeed quantized. And that it is the dynamic structure of "empty space" that determines the universal basis for time and the Higgs field. The "CMB" is evidence of the dynamic nature and "temperature" of the structure of empty open space; variations in the "CMB" are indicative of the shape of this structure.

3. How come gravitational waves cause light passing through them to end up with a particular polarization?

• This is due to geometry and statistics. It is similar to how a lens can focus light from many points into one focal point; the original light comes from random directions while the focused light heads towards a specific point. Gravitational waves alter space that light is passing through and (very, very roughly speaking) 'lens' the polarization in a specific kind of way. This is a poor explanation but I hope it suffices.

4. At this point we've left physics and are only talking about nomenclature. If you want to be pedantic about it, all experimental measurements are indirect. We take physical phenomena and convert them to other types of physical phenomena which we're better at measuring. The simplest example is converting a force on a spring into a displacement. A spring scale is a force-displacement transducer. A more modern version uses piezoelectric elements to convert (force) into (frequency shift) into (voltage) into (bits). Now the indirect BICEP2 measurement: (gravitational waves) into (quadrupolar radiation fields) into (polarized radiation) into (current in a phased array antenna) into (raising the temperature of a load on a bolometer) into (resistivity of a TES) into (magnetic flux through a SQUID) into (volts) into (bits). Compare that to what LIGO will do: (gravitational waves) into (electric field phase difference in an interferometer) into (power incident on a photodiode) into (volts) into (bits).
Still indirect, just fewer steps in the process. But again, this is now nomenclature, not physics.

• Jorma Reinikainen You’re not even wrong here. The crucial difference between an indirect and a direct measurement is how well the physics of the intermediate steps is known. The early universe might do bizarre things we have no idea about; this is even to be expected. Whereas, how a gravitational wave might cancel out its own signal in an interferometer would require a very contrived explanation or new physics. Slipping into solipsism is the result if your logic is followed.

• Torbjörn Larsson, OM You are not completely wrong here. =D The number of steps isn’t crucial, just pointing out the problem of quantifying “direct” in a measurable way. Your “how well the physics of the intermediate steps is known” has a similar problem. Ordinarily, knowledge is what goes into specifying uncertainty based on random and relative errors. But you allude to “unknown unknowns” and how contrived they can be. By definition those are unmeasurable, and contrivance is impossible to specify, again raising the head of solipsism. Kudos for “new physics” vs “no new physics” though. But that would go into the area of open field/possible not yet eliminated alternative theories (physics)/possible not yet eliminated ingressing mechanisms, which by definition is the constraint a new measurement always works with.

• Torbjörn Larsson, OM Also, about “new physics”. That (possible ingressing mechanisms) is peculiar to this new measurement. Others may take place in more familiar territory.

5. I’m pretty sure vector modes sourced by cosmic strings can also produce low-ℓ primordial B-mode polarization. See arXiv:1403.6105. Those aren’t tensor perturbations of the metric, so we don’t usually think of them as gravitational waves. But we might just have a semantic difference somewhere.

6. The BICEP2 results actually deal with the only gravitational waves thinkable in the context of 4D general relativity: i.e., the stationary ones. From this perspective the BICEP2 observation is about true gravitational waves.

7. Marshall Eubanks To me, a direct measurement of gravitational waves requires in some fashion measuring or observing the change in spacetime due to the waves. The binary pulsar decay doesn’t do that, but the pulsar timing array, should it have success, would, and thus would be a direct observation of gravitational waves. From that perspective, the BICEP2 results come close, but they are still not there, and so are indirect.

8. I heard someone from BICEP2 (I think it was Chao-Lin Kuo) describe the detection as semi-direct. I think that is fair. He made the comparison with detecting waves on the surface of the ocean. You could measure them by watching a floating object move up and down (what A-LIGO will do) or you could take a still picture and see the wavy pattern on the surface (sort of like BICEP2, although of course the analogy breaks down in that specific polarization patterns require tensors).

9. Let’s hope dust is not causing this polarization.

10. Harald Fillinger Hi Matt, As a layman I’m a great fan of your website. I read recently something about newly discovered galactic foreground structures that are not included in BICEP2 foreground models and hence may have an impact on B-mode polarization measurement results… (galactic foreground emission caused by magnetic dipole radiation from dust grains enriched by metallic iron etc.)
See: http://arxiv.org/abs/1404.1899 Well, you’d pointed out repeatedly in earlier posts to be cautious with BICEP2 results (and interpretation) unless they are confirmed by other experiments… Is this paper just ‘experts’ talk’ about ‘details’, or can this have a major impact on BICEP2 results? Any idea?

11. Pingback: Allgemeines Live-Blog ab dem 27. April 2014 | Skyweek Zwei Punkt Null

12. Minor correction: the period of the binary pulsar is not changing by 1 s per year. That’s the cumulative change in periastron time.

13. It’s my understanding that gravitational lensing is another source of B-mode polarization; however, its contribution to polarized light can be predicted (or measured?). Or maybe it is more correct to say that one can place an upper bound on the contribution of gravitational lensing to B-mode polarization? In any case, I believe that the success of the BICEP2 experiment hinges on establishing (in the 5-sigma sense) that the amount of B-mode polarization that was experimentally detected exceeds whatever contribution gravitational lensing could possibly provide.

14. Very well, Matt, just to be sure, I subscribe to all three parts of your appraisal.

• Who cares?

• That’s just rude. You may not think much of Lubos Motl, but he’s a well-known figure in HEP theory, having taught at places like Harvard that mostly don’t tend to hire doofuses, having obtained the right academic degrees to have an opinion worth considering (versus we’ve no idea about you), he works in this area (versus we’ve no clue whatsoever what, if anything, you do), and I expect Matt Strassler appreciates his reaction to this article.

15. Folks, I show in my paper http://www.worldscientific.com/doi/abs/10.1142/S0219887814500595 that gravitons are propagated through spacetime in the same manner as heat. This implies that LIGO is actually detecting the thermal activity of gravitons. If my research is correct we will never detect an undulation propagating at light speed, which is what we are expecting to see in all gravity wave detectors.

16. kashyap vasavada Matt: Question about the interpretation of BICEP2. Some people say these gravitational waves follow from classical GR, so it has nothing to do with quantum gravity. Some others say that actually quantum fluctuations took place in the gravitational field, so it proves quantum gravity. What is your opinion?

• Yes, I would love to see a discussion of this. I think I know the answer, but so many people have said strange and contradictory things about it that I feel that there is an urgent need for clarification from someone who knows what he is talking about…

• Torbjörn Larsson, OM Isn’t that a moot question? Krauss and Wilczek have found that it can only be quantized waves. “Thus the gravitational radiation background, measured invariantly, is proportional to ħ². Since this is a positive power of ħ, we infer the essentially quantum-mechanical nature of that phenomenon. Since no field other than gravity is involved, we infer that quantization of the gravitational field is an essential ingredient.” [ http://arxiv.org/abs/1309.5343 ]

• Torbjörn Larsson, OM Oh, I forgot: They also ask for consistency checks, the ones BICEP2 has avoided so far.

17.
Richard Goldhor Matt, you say, “But the…polarization effect they observe … is believed to have only one possible source: gravitational waves, created in the early universe and ringing for 380,000 years….” When I think of “ringing” waves I think of standing waves, which implies a structure with reflective boundaries – which in this case would be ??? Or when you said “ringing” did you mean a traveling gravitational wave that was still propagating across the universe 380,000 years after the Big Bang?

18. Hello Matt, I read in your post that the fact that the gravitational waves have “spin two” is necessary for them to give B-mode polarization. Is it possible for you to give us some explanation or hint about why spin two is necessary for B-mode polarization? Thank you very much for your time.

19. I have a maybe pretty hairy question: Is there anything like fine-tuning of the universe for intelligent life? I know that there is the theory that a multiverse allows for all possible options to be realized and we just happen to be in this version of a universe where human life is supported. Is that the final answer?

• This comes in two flavors of the ‘anthropic principle’: the weak, which says ‘Isn’t it interesting that the universe is just right for us to exist’, and the strong, which says ‘The universe can be no other way than it is now’. For the universe to support human life specifically, a number of variables need to have rather specific values. Gravity is remarkably weak, for example. So it can be argued the universe is ‘fine tuned’, but this begs the questions ‘why?’ and ‘by whom?’ As a Christian you can guess my take on the matter. But there are many objections to this. Firstly, if the universe were different it is quite possible life would be different. There might be living stars the size of basketballs sitting and marveling at how gravity was just strong enough to allow their existence. We simply do not have a good enough grasp at present as to what other options are out there. Secondly, scientists hate a ‘just because’: all values of all variables should arise ‘naturally’ out of a theory, and there are a lot at present that ‘just are’ with no explanation as to why they take those values. It is the hope of physics that at some future point a ‘theory of everything’ will be found that shows how our universe isn’t special at all and in fact could not be anything else. Multiverse theories are one attempt at doing this (as well as tackling a number of other problems/ideas); in this case the values aren’t fine tuned, our universe would *have* to appear *somewhere*, so it’s not special. But these theories are mostly speculation with no definitive evidence at present.

• Thank you, Kudzu. I apologize for being so late in responding. Too caught up with other things… In the meantime I saw that Luke Barnes, Sydney Institute for Astronomy, has posted a reading list and an article on the topic: http://letterstonature.wordpress.com/2013/09/10/what-to-read-the-fine-tuning-of-the-universe-for-intelligent-life/ I might look into it.

• Torbjörn Larsson, OM Well, “gap theory” makes your magical agency very powerless, doesn’t it? From “making universes” to hiding in our not yet elaborated science. I would say that the gaps have gone, since inflation both made our observable universe from a spontaneous end of inflation going back to blown up quantum fluctuations and populated it with structure from similar quantum fluctuations.
Indeed, the “magical seed” of a spacetime with physics that you propose has been diluted by at least 10^100, which is far more insanely wrong (or weak, in your interpretation) than the standard 10^30 dilution of the “magical seed” in homeopathy! Mostly, I wish that a science site wouldn’t have this unnecessary discussion of non-functional magical ideas. Why is it seen as acceptable? We aren’t pre-enlightenment pre-science pre-functional anymore…

• I find it a strength of this blog that such ideas are discussed. In many places there are strong limits imposed on what can be discussed. And these ideas are far from some lunatic fringe; they are real questions and issues people have. They should be addressed. Dismissing them as ‘pre-science pre-functional’ is about as useful as trying to convert atheists by saying ‘I’ll pray for you.’

• Torbjörn Larsson, OM If the universe was fine-tuned for life, it obviously should have more than the 0.00000000001 % or so habitable volume that it has. =D There are one or two possibly fine-tuned parameters (cosmological constant, lifetime of the vacuum); all else has an open parameter space. E.g. Victor Stenger, and others, find that covarying parameters leave ~50 % or so livable universes. Then you can ask why those correlations, but no one has pinned such to anthropic (environmental) theory yet.

• Stenger isn’t reliable. Who are the others you refer to?

• I don’t see logically what the percentage habitable volume of the universe has to do with anything. A knife was meant to cut things but only a tiny percentage of it is cutting edge; the rest is uncutting support for that edge, or a handle. Of course maybe most of the universe *is* habitable… for dark matter photino birds, and we’re just some weird accident.

20. I have an unrelated question concerning the masses of particles. I have heard that the reason the masses of particles are fixed and not a mushy mix of energy is due to the Schrödinger equation; for example, it takes 0.511 MeV to make one electron, and only one, because 0.511 MeV is the electron’s rest mass. But does this apply to photons, or gluons? I mean, the electromagnetic spectrum suggests that photons have absolutely no quantization in terms of how much energy a photon can have. (What I mean is there is no specific amount of energy that you require to make a photon; any amount of energy seems to do something to the electromagnetic field, unlike the electron field.) The main question is: does this have something to do with the Higgs field? What I mean is that if the electron interacted with the Higgs field more than it does now (therefore having more mass), then it must take more energy to create an electron from the electron field. The idea that I have basically fabricated is that the Higgs field is responsible for the energy in particles when they’re at rest, and that the amount of energy it takes to make a particle from its underlying field is the energy that it takes to have a wave in that field that can exist while it interacts with the non-zero Higgs field. And the reason it can take an arbitrary amount of energy to make a photon is that the photon particle does not interact with the Higgs field. Please correct me if I am wrong, because the whole quantization of electrons, quarks, etc., and the seeming unquantization of photons (in terms of energy) has been very confusing.

• A particle is a ‘stable, long-lived’ wave in a field.
In order to create a particle you need to fulfill several conditions. (For example, you need to ‘wiggle’ the field in a regular way so the resulting wave is nice and neat.) If you do not meet these requirements you will get a ‘virtual particle’ which can ‘fall apart’ and give you your energy back. For massless particles, energy is not a condition needed to make a particle. (There are some minor issues with uncertainty; two particles of similar energy will be indistinguishable because the uncertainties in their measurements will overlap.) In theory you could have photons and gluons of infinitely low energy. The Higgs field can be considered as taking two ‘photon-like’ fields and connecting them so that a stable wave in one creates a stable wave in the other. But this requires ‘extra’ energy if the waves are to both be stable; it adds another condition. So while I can excite the electron-left field with any amount of energy, the wave I create cannot be stable unless I give it enough energy (0.511 MeV or more) to stably excite the electron-right field too. It may help then to view the Higgs field as a ‘destabilizer’ that requires extra energy to overcome.

• Many thanks, Mr. Kudzu. “There are some minor issues with uncertainty; two particles of similar energy will be ‘indistinguishable’ because the uncertainties in their measurements will overlap.” So one of the “photon-like fields” was polarized (distinguished) during inflation by gravitational waves? In the spontaneity of spontaneous symmetry breaking (weak mixing angle), the W± and Z0 bosons, and the photon, are produced by the spontaneous symmetry breaking of the electroweak symmetry, by the Higgs mechanism. “I spent the years 1965-67 happily developing the implications of spontaneous symmetry breaking for the strong interactions” - Weinberg. Does the Higgs field stabilize the “rest mass” or shield the containment - unlike gluons, which have their own antiparticles to give the short-distance force? The weak interaction’s short-distance force needs the Higgs mechanism. The Higgs mechanism was used to unite the weak and electromagnetic interactions - making the weak interaction short-distance, but not the photon, due to its zero mass. This means “gauge bosons can get rest mass through the Higgs mechanism”. But due to polarization, photons can take an arbitrary amount of energy or give energy back (radiation) - meaning the photon field is massless (ZERO) and stable, but the Higgs field is massive and non-ZERO (a destabilizer)?

• A few points: When I was talking about ‘photon-like’ fields I meant those of particles affected by the Higgs. Gravitational waves polarize light by a different mechanism. It is rather tricky and I am not sure I understand it entirely, but basically, since the waves alter space itself in a certain way as they pass by, they can ‘tweak’ the polarization of light in that space (which is initially undefined). It is similar to how a lens can focus light from everywhere onto one point. (And gravitational lenses can also change the polarization of light.) Electroweak symmetry breaking is different from the Higgs mechanism. (If you like, the Higgs mechanism happens with what is left of the Higgs particles AFTER electroweak symmetry breaking.) It does not unite the weak and electromagnetic forces; instead it MAKES them from two totally different forces (isocharge and isospin). Both of these forces *were* long range with massless bosons, but now only the electromagnetic one is. Symmetry breaking in effect created a ‘crippled’ weak force.
The Higgs field doesn’t ‘stabilize’ rest mass, it *causes* it. (And not all of it either; the Higgs boson, for example, has a rest mass, but it doesn’t give itself that rest mass.) It can do this because it is nonzero all across the universe. Imagine if the electron field were like that; the universe would be filled with electricity, protons would not repel each other, all their positive charge would be blocked, and so on. Polarization is a property of waves; they don’t need to be massless. If you wiggle a piece of string up and down, that is polarization. Photons are massless because that is how ALL particles are ‘at the start’ unless something changes them, and nothing changed the photon. (Well, it’s a little more tricky than that actually, but close enough.)

21. I have heard that particles having a fixed rest mass is just a consequence of quantum mechanics. But then why are photons completely free from such fixed energies? This must have something to do with the Higgs field, because interaction with the Higgs field is what gives rest mass and thus sets the bar for how much energy it takes to make a particle with that rest mass. This is just a clarification of my previous post; I don’t want to cause confusion.

22. The ensembled phenomenon of the Higgs mechanism is unphysical scalar particles called the Goldstone bosons, which have a derivative (thermodynamical axioms) mixing with the massive gauge bosons. Once the axioms are fixed, you can discuss the existence of undecidable propositions - like spontaneity in spontaneous symmetry breaking. The set of equations designed to describe some aspect of nature may have multiple ‘vacua’ (i.e. multiple solutions that each represent different ways that the universe could be configured — what empty space could be like, and what types of fields, forces and particles could be found in the universe). “It seemed obvious that the strong interactions are not mediated by massless particles” - Weinberg. Does the photon interact with the Higgs field to make the photoelectric effect (particle nature with physical force)? Incomplete ZERO mass into complete non-ZERO?

23. I’m sorry, but this does not answer my question; please read it through again if you wish to give it another go.
26. Marshall Eubanks A related question is, could the CMB provide a direct detection of gravitational waves? I think in principle it could. The scale at that time was about 300,000 light years, and of course gravitational waves move at c, so in principle the pattern would move at ~ 3 x 10^-6 radians / year, EXCEPT that these patterns are (of course) redshifted by a factor of ~ 1500, so the pattern would move (for us) at ~ 2 x 10^-9 radians / year. I think that’s beyond the resolution of our instruments, but if you waited (say) a million years, it would be pretty observable. I think that would qualify as a direct detection.

27. Last Friday (May 2) Marc Kamionkowski (Johns Hopkins Univ.), who developed the concepts underlying the BICEP experiment, gave a colloquium at the University of Maryland. His talk was extremely clear and cautious. First, about the mechanism. Consider a gwave (from inflation or not) having a long wavelength at the time of emission of the radiation that we now receive as the Cosmic Microwave Background. At one moment, a single linearly polarized gwave compresses physical distances along one axis, while expanding those along the perpendicular axis (where both axes are perpendicular to the wave’s travel). Half a cycle later, the formerly compressed axis becomes the expanded axis, and vice versa. This produces a quadrupole perturbation in the local temperature.
The light that scatters from the electrons having that perturbed distribution of velocities is thereby linearly polarized, and we detect that polarization when the light enters our instrument. It follows directly from the mechanism described above that the pattern of the directions of the linear polarization of the scattered light will curve around, so that the field of little polarization vectors will have a curl. Approximately quoting Kamionkowski, “Density perturbations have no handedness, so they cannot produce a polarization pattern with curl. The perturbations produced by long wavelength gwaves do have a handedness, so they can and do produce a polarization with curl.” Based on models of inflation, Kamionkowski and colleagues had predicted in the 1990s that the angular scale of the curled patterns would be about half a degree. (Their power spectrum would peak at ℓ = 200.) Several other distinctive features were predicted. They all seem to be borne out by the BICEP2 data. But, as noted in the comment by Terry Ambiel, before concluding that the observed curling patterns of linear polarization are due to gwaves, it is essential to be sure that they are not mainly an artifact of scattering by the uneven distribution of dust in our own Galaxy. (The mere existence of that possibility verifies the indirectness of BICEP2’s indication of gwaves.) The BICEP2 team has worked very hard on that possibility. Depending on how you count them, four to six tests all suggest that dust does not dominate the production of the observed pattern. None of the tests is individually conclusive, because anything concerning interstellar dust suffers from significant uncertainties. It is reassuring – but not conclusive – that none of the tests suggest dust as the cause. Other experiments now in progress should reduce the ambiguities considerably. One of them is named CLASS.

28. It seems obvious that the quality of “direct/indirect” is used to label observations and theories that people find likable/strong or unlikable/weak. See the BICEP2 analysis here, or the problem that “direct” detection of gravitational waves is done in instruments that use light to read off changes in length caused by the waves, ultimately converting them to descriptive data after arduous analysis. If I ask for a measure, or better yet a test, of the quality, I don’t seem to get an answer. I don’t necessarily agree with the idea that if something isn’t quantifiable, it isn’t science, a physics theory, an idea described here previously. In a weakened sense of science, anything that can be used as a constraint, even qualitative properties, should be amenable. That is, if the core of science is hypothesis testing descriptions of observations and theories. But if “direct/indirect” isn’t quantifiable, it seems to point to a pre-science history. Scientists seem to find it useful, so maybe it shouldn’t be dumped outright. But [why] isn’t there a measurement theory describing “directness” or strongness?

30. Curious Mayhem It’s hard to see what the controversy is here. The detection, if that is what it was, was indirect. No one has detected gravitational waves. What has been detected, both in the CBR polarization measurement and in the neutron star orbital decay, is the indirect effect of otherwise unseen gravitational waves.

35. Pingback: BICEP2 Redux: How the Sausage is Made | Whiskey…Tango…Foxtrot?
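To put rough numbers on the “shift by less than the radius of a proton” claim in the post above: the mirror displacement is just the strain times the arm length. Here is a minimal back-of-envelope sketch; the strain value, arm length, and proton radius used below are illustrative assumptions, not official experiment figures.

    # Back-of-envelope numbers for the interferometer claim above:
    # a gravitational wave of strain h changes an arm of length L by dL = h * L.
    h = 1e-21                  # illustrative strain of a strong gravitational wave at Earth
    L_arm = 4e3                # meters; roughly the scale of a LIGO arm
    proton_radius = 0.88e-15   # meters; approximate proton charge radius
    atom_radius = 1e-10        # meters; typical atomic radius

    dL = h * L_arm
    print(f"mirror shift: {dL:.1e} m")                         # ~ 4e-18 m
    print(f"proton radius / shift: {proton_radius / dL:.0f}")  # the shift is ~200x smaller than a proton
    print(f"atom / proton radius: {atom_radius / proton_radius:.0f}")  # ~ 100,000, as stated in the post

With these illustrative numbers, the required sensitivity is a few hundred times below the proton radius, which is why the post calls the experimental requirements “vastly greater” than Michelson-Morley’s.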
Wednesday, June 22, 2016

Is Research a Skill or a Talent? Or Both?

1. No question in my mind that it is both. There is also a subtlety that you did not address when it comes to asking questions: whether the person asking the questions has the intellectual ability to understand the answers given. The reference you make to knowing "too much" about a subject to be able to assist someone is something that resonates with me. I know a great deal about IT concepts and it can be very easy for me to make someone's eyes glaze over when talking about the subject by assuming too much knowledge and putting too much jargon into an explanation. To use a genealogical example, if I were to refer to vital records, a non-genealogist would reasonably assume that I am referring to records that are particularly important to a particular task or subject area, whereas a genealogist would know that I mean birth, marriage and/or death records.

However, it is not just whether the explanation is pitched at the correct level for someone's knowledge of a subject. What is also important is the capacity of a given person to actually understand an explanation. Take two people A and B with identical starting knowledge levels of a given subject. Assume that A and B have an identical learning style, such that a given way of explaining something will work equally well with either A or B. It is still perfectly possible for A to pick up the new information and concepts much, much better than B if A has a greater ability in that particular area than B. For a given individual there are thresholds which they simply cannot cross, however much instruction they are given. Those thresholds vary by subject area and pursuit for a given person, of course. For example, I will never be a brilliant painter, as my talents don't lie in that direction, but I am a good amateur singer, both chorally and as a soloist, as my talents lie in that direction and I have practiced and worked on my singing over many years.

To shift back to the example you give, there are two possibilities as to what happened:

1. Despite trying to rephrase and alter what you were trying to say, you did not manage to find an appropriate way to talk to that patron such that they would pick up what you were saying.
2. Despite trying to rephrase and alter what you were trying to say, you would never have managed to find any appropriate way to talk to that patron such that they would pick up what you were saying.

Both are failures to communicate, but I suspect that case number 2 may well have been in play. In case number 2 the patron would never understand what you were trying to explain because the person in question simply lacks the ability to grasp the concept itself. To take an extreme example, if someone has the intellectual level of an average ten-year-old child, they will never be able to understand the concept of the probability density wavefunction that the Schrödinger equation's solution produces in quantum mechanics, because the level of understanding required is so far beyond what an average ten-year-old can deal with.

So we have three factors in play when considering how good someone is at something and how well they can improve:

1. A person's innate ability to undertake a task
2. The ability of an instructor or teacher to communicate a new concept
3.
The overall experience level of a person in a task or competence

Points two and three are fungible, but point one is not, and thus it represents the ultimate overall destination point if a person has good teachers/instructors and lots of practice in an area. Personally speaking, I tend to pick up new concepts very quickly, and once I have assimilated them it is as if I had never not known them. That makes teaching others very hard for me, as I can get frustrated if people don't pick up concepts as fast as I do, and I also find it difficult to pitch things at an appropriate level for the person I am speaking to. I suspect that I would have given up trying to instruct that patron an awful lot sooner than you!

1. Thanks for this very interesting commentary. I am putting a note at the beginning of the blog post to make sure the readers read your comment.

2. Good mothers tend to have a bit of an advantage here. They have to adapt their teaching styles to all ages as their children grow. It irks me that someone, somewhere told you and me both that we "know too much" to be good teachers. I'd like to know what that person knows! I have not only a Master's in gifted education and taught high school for 20 years, but I also obtained an elementary teaching certificate, and taught preschool for 11 years and lower levels for 3 years. I have 7 children and 35 grandchildren, from ages under 1 to 25, and they love my lessons of life, family history, and more. I say nonsense to knowing too much. Humbug!
I know that $re^{i\theta} = x + iy$ for any complex number $x + iy$ by Euler's formula. How do you calculate relative and global phase?

As is clearly evident from Euler's form $z=re^{i\theta}$, a phase has something to do with rotation in the Argand plane but does not affect the magnitude of a complex number. You can make a set of infinitely many complex numbers with the same magnitude. The phase can just be regarded as an extra degree of freedom for a given complex number. From the perspective of Quantum Information/Computing, the observable quantities are the probabilities, which are given by the squared magnitudes of the complex amplitudes, $|z|^2=|re^{i\theta}|^2=(re^{i\theta})(re^{-i\theta})=r^2$, which clearly doesn't care about the phase $\theta$.

Let's consider the simplest non-trivial example. For any quantum state with two degrees of freedom (a qubit):
\begin{equation} |\psi\rangle=r_1e^{i\theta_1}|0\rangle+r_2e^{i\theta_2}|1\rangle \end{equation}
This is described by two complex numbers with phases $\theta_1$ and $\theta_2$ respectively. It can be rewritten as:
\begin{equation} |\psi\rangle=e^{i\theta_1}(r_1|0\rangle+r_2e^{i(\theta_2-\theta_1)}|1\rangle) \end{equation}
Now, if you calculate the amplitude $|\psi|^2$, the factor $e^{i\theta_1}$ in front will vanish by the argument above. This is called a global phase, which is an overall phase in front. The relative phase is the quantity $\theta_2-\theta_1$ or $\theta_1-\theta_2$, however defined. The relative phase is an observable quantity in Quantum Theory, and it can change when a state evolves in accordance with the Schrödinger equation $i\hbar\frac{d}{dt}|\psi\rangle=\hat{H}|\psi\rangle$.

The relative phase is also of great importance when we consider the density matrix for a state, defined as $\rho=|\psi\rangle \langle \psi|$, which for the example above is:
\begin{equation} \rho=r_1^2|0\rangle\langle0|+r_1r_2e^{i(\theta_1-\theta_2)}|0\rangle\langle1|+r_2r_1e^{i(\theta_2-\theta_1)}|1\rangle\langle0|+r_2^2|1\rangle\langle1| \end{equation}
where only the relative phase appears, and not the global phase. From the Quantum Information point of view, this relative phase appearing in the off-diagonal terms of the above matrix carries the coherence information of the system, which is one of the most unique properties of quantum systems.

These are some general points about relative and global phases. It does not make any sense to talk about a relative phase for a single complex number $z$. Also, please see the wiki articles on these concepts; they have clear enough content to serve as a good start. Here you can refer to https://en.wikipedia.org/wiki/Qubit, mainly the Bloch sphere section.

• Can you please explain how $\vert \psi \vert^2$ is calculated to make $e^{i\theta_1}$ vanish? – Jul 15 '20 at 21:46
• If $|\psi\rangle=e^{i\theta_1}C$, where $C$ is any complex number, then $|\psi|^2=\langle\psi|\psi\rangle=e^{-i\theta_1}C^* \times e^{i\theta_1}C=C^*C=|C|^2$. – Jul 15 '20 at 21:50
• Thank you so much for the explanation and the fast reply! – Jul 15 '20 at 21:56
• Sorry for digging up this old answer, but it is a reference question! You say that the relative phase arises from states evolving in accordance with the Schrödinger equation. Do you have any work to cite for this? As I wrote it in a piece of work and would like to know where to find this information. – Dec 4 '20 at 21:26
• @Eenoku He mixed the complex number and quantum states.
Better to write it like this for quantum states: $|\psi\rangle=e^{i\theta}|\phi\rangle$, $\langle \psi |\psi\rangle =e^{-i\theta}\langle\phi| e^{i\theta}|\phi\rangle=\langle \phi |\phi\rangle$. For a complex number $C'= e^{i\theta}C$, you can check $|C'|^2=|C|^2$. – RicknJerry Oct 6 at 6:29

From a physical point of view, there couldn't be a bigger difference. Global phases are artefacts of the mathematical framework you are using, and have no physical meaning. Two states differing only by a global phase represent the same physical system. Indeed, a more careful treatment of quantum mechanics would involve defining quantum states as elements of a projective Hilbert space, in which all elements differing only by a phase are identified as equal.

On the other hand, relative phases are in some sense the core of quantum mechanics. States differing by a relative phase are different systems that evolve in different ways, although they will appear identical if measured only in the basis in which they differ only by that relative phase.
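To make the two statements above concrete, here is a minimal numerical check (a sketch using NumPy; the particular moduli and phases are arbitrary illustrative choices):

    import numpy as np

    # An illustrative qubit state r1*e^{i*theta1}|0> + r2*e^{i*theta2}|1>
    r1, r2 = np.sqrt(0.3), np.sqrt(0.7)   # moduli (normalized: r1^2 + r2^2 = 1)
    theta1, theta2 = 0.4, 1.9             # arbitrary phases
    psi = np.array([r1 * np.exp(1j * theta1), r2 * np.exp(1j * theta2)])

    # Measurement probabilities |amplitude|^2 are unchanged by a global phase
    global_phase = np.exp(1j * 2.5)
    assert np.allclose(np.abs(psi) ** 2, np.abs(global_phase * psi) ** 2)

    # The density matrix rho = |psi><psi| is also invariant under a global phase...
    rho = np.outer(psi, psi.conj())
    rho_shifted = np.outer(global_phase * psi, (global_phase * psi).conj())
    assert np.allclose(rho, rho_shifted)

    # ...but its off-diagonal element carries the relative phase theta1 - theta2
    print(np.angle(rho[0, 1]))   # prints theta1 - theta2 = -1.5

Changing $\theta_2-\theta_1$ changes the off-diagonal terms of $\rho$ (and hence the measurement statistics in other bases), while changing the global phase changes nothing observable, exactly as argued above.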
Nonlinear Schrödinger equation

Absolute value of the complex envelope of exact analytical breather solutions of the nonlinear Schrödinger (NLS) equation in nondimensional form. (A) The Akhmediev breather; (B) the Peregrine breather; (C) the Kuznetsov–Ma breather.[1]

In theoretical physics, the (one-dimensional) nonlinear Schrödinger equation (NLSE) is a nonlinear variation of the Schrödinger equation. It is a classical field equation whose principal applications are to the propagation of light in nonlinear optical fibers and planar waveguides[2] and to Bose-Einstein condensates confined to highly anisotropic cigar-shaped traps, in the mean-field regime.[3] Additionally, the equation appears in the studies of small-amplitude gravity waves on the surface of deep inviscid (zero-viscosity) water;[2] the Langmuir waves in hot plasmas;[2] the propagation of plane-diffracted wave beams in the focusing regions of the ionosphere;[4] the propagation of Davydov's alpha-helix solitons, which are responsible for energy transport along molecular chains;[5] and many others. More generally, the NLSE appears as one of the universal equations that describe the evolution of slowly varying packets of quasi-monochromatic waves in weakly nonlinear media that have dispersion.[2] Unlike the linear Schrödinger equation, the NLSE never describes the time evolution of a quantum state (except hypothetically, as in some early attempts, in the 1970s, to explain the quantum measurement process[6]). The 1D NLSE is an example of an integrable model.

In quantum mechanics, the 1D NLSE is a special case of the classical nonlinear Schrödinger field, which in turn is a classical limit of a quantum Schrödinger field. Conversely, when the classical Schrödinger field is canonically quantized, it becomes a quantum field theory (which is linear, despite the fact that it is called the ″quantum nonlinear Schrödinger equation″) that describes bosonic point particles with delta-function interactions — the particles either repel or attract when they are at the same point. In fact, when the number of particles is finite, this quantum field theory is equivalent to the Lieb–Liniger model. Both the quantum and the classical 1D nonlinear Schrödinger equations are integrable. Of special interest is the limit of infinite-strength repulsion, in which case the Lieb–Liniger model becomes the Tonks–Girardeau gas (also called the hard-core Bose gas, or impenetrable Bose gas). In this limit, the bosons may, by a change of variables that is a continuum generalization of the Jordan–Wigner transformation, be transformed to a system of one-dimensional noninteracting spinless[nb 1] fermions.[7]

The nonlinear Schrödinger equation is a simplified 1+1-dimensional form of the Ginzburg–Landau equation, introduced by Ginzburg and Landau in 1950 in their work on superconductivity, and was written down explicitly by R. Y. Chiao, E. Garmire, and C. H. Townes (1964, equation (5)) in their study of optical beams. The nonlinear Schrödinger equation is a partial differential equation, applicable to classical and quantum mechanics.

Classical equation

The classical field equation (in dimensionless form) is:[8]

i\partial_t\psi=-{1\over 2}\partial^2_x\psi+\kappa|\psi|^2 \psi

for the complex field ψ(x,t).
This equation arises from the Hamiltonian[8]

H=\int \mathrm{d}x \left[{1\over 2}|\partial_x\psi|^2+{\kappa \over 2}|\psi|^4\right]

with the Poisson brackets

\{\psi(x),\psi(y)\}=\{\psi^*(x),\psi^*(y)\}=0, \quad \{\psi^*(x),\psi(y)\}=i\delta(x-y).

Quantum mechanics

To get the quantized version, simply replace the Poisson brackets by commutators

[\psi(x),\psi(y)] = [\psi^*(x),\psi^*(y)] = 0, \quad [\psi^*(x),\psi(y)] = -\delta(x-y)

and normal order the Hamiltonian

H=\int dx \left[{1\over 2}\partial_x\psi^\dagger\partial_x\psi+{\kappa \over 2}\psi^\dagger\psi^\dagger\psi\psi\right].

The quantum version was solved by Bethe ansatz by Lieb and Liniger. Thermodynamics was described by Chen-Ning Yang. Quantum correlation functions were also evaluated; see [7]. The model has higher conservation laws; an expression in terms of local fields can be found in [1].

Solving the equation

The nonlinear Schrödinger equation is integrable: Zakharov and Shabat (1972) solved it with the inverse scattering transform. The corresponding linear system of equations is known as the Zakharov–Shabat system:

\phi_x = J\phi\Lambda+U\phi
\phi_t = 2J\phi\Lambda^2+2U\phi\Lambda+(JU^2-JU_x)\phi,

where

\Lambda = \begin{pmatrix} \zeta_1 & 0 \\ 0 & \zeta_2 \end{pmatrix}, \quad J = i\sigma_z = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}, \quad U = i\begin{pmatrix} 0 & q \\ r & 0 \end{pmatrix}.

The nonlinear Schrödinger equation arises as the compatibility condition of the Zakharov–Shabat system:

\phi_{xt}=\phi_{tx} \quad \Rightarrow \quad U_t=-JU_{xx}+2JU^2U \quad \Leftrightarrow \quad iq_t=q_{xx}+2qrq, \quad ir_t=-r_{xx}-2qrr.

By setting q = r* or q = −r*, the nonlinear Schrödinger equation with attractive or repulsive interaction is obtained.

An alternative approach uses the Zakharov–Shabat system directly and employs the following Darboux transformation:

\phi \to \phi[1]=\phi\Lambda-\sigma\phi
U \to U[1]=U+[J,\sigma]
\sigma = \varphi\Omega\varphi^{-1}

which leaves the system invariant. Here, φ is another invertible matrix solution (different from ϕ) of the Zakharov–Shabat system with spectral parameter Ω:

\varphi_x = J\varphi\Omega+U\varphi
\varphi_t = 2J\varphi\Omega^2+2U\varphi\Omega+(JU^2-JU_x)\varphi.

Starting from the trivial solution U = 0 and iterating, one obtains the solutions with n solitons.

Computational solutions are found using a variety of methods, like the split-step method (an illustrative sketch is given at the end of this article).

Galilean invariance

The nonlinear Schrödinger equation is Galilean invariant in the following sense: given a solution ψ(x, t), a new solution can be obtained by replacing x with x + vt everywhere in ψ(x, t) and by appending a phase factor of e^{-iv(x+vt/2)}:

\psi(x,t) \mapsto \psi_{[v]}(x,t)=\psi(x+vt,t)\; e^{-iv(x+vt/2)}.

The nonlinear Schrödinger equation in fiber optics

In optics, the nonlinear Schrödinger equation occurs in the Manakov system, a model of wave propagation in fiber optics. The function ψ represents a wave and the nonlinear Schrödinger equation describes the propagation of the wave through a nonlinear medium. The second-order derivative represents the dispersion, while the κ term represents the nonlinearity. The equation models many nonlinearity effects in a fiber, including but not limited to self-phase modulation, four-wave mixing, second-harmonic generation, stimulated Raman scattering, etc.

The nonlinear Schrödinger equation in water waves

A hyperbolic secant (sech) envelope soliton for surface waves on deep water. Blue line: water waves. Red line: envelope soliton.

For water waves, the nonlinear Schrödinger equation describes the evolution of the envelope of modulated wave groups. In a paper in 1968, Vladimir E.
Galilean invariance

The nonlinear Schrödinger equation is Galilean invariant in the following sense: given a solution ψ(x, t), a new solution can be obtained by replacing x with x + vt everywhere in ψ(x, t) and by appending a phase factor of e^{-iv(x+vt/2)}:

    \psi(x,t) \mapsto \psi_{[v]}(x,t) = \psi(x+vt,\,t)\; e^{-iv(x+vt/2)}.

The nonlinear Schrödinger equation in fiber optics

In optics, the nonlinear Schrödinger equation occurs in the Manakov system, a model of wave propagation in fiber optics. The function ψ represents a wave and the nonlinear Schrödinger equation describes the propagation of the wave through a nonlinear medium. The second-order derivative represents the dispersion, while the κ term represents the nonlinearity. The equation models many nonlinearity effects in a fiber, including but not limited to self-phase modulation, four-wave mixing, second-harmonic generation, stimulated Raman scattering, etc.

The nonlinear Schrödinger equation in water waves

[Figure: a hyperbolic secant (sech) envelope soliton for surface waves on deep water. Blue line: water waves. Red line: envelope soliton.]

For water waves, the nonlinear Schrödinger equation describes the evolution of the envelope of modulated wave groups. In a paper in 1968, Vladimir E. Zakharov describes the Hamiltonian structure of water waves. In the same paper Zakharov shows that for slowly modulated wave groups, the wave amplitude satisfies the nonlinear Schrödinger equation, approximately.[9] The value of the nonlinearity parameter κ depends on the relative water depth. For deep water, with the water depth large compared to the wave length of the water waves, κ is negative and envelope solitons may occur. For shallow water, with wavelengths longer than 4.6 times the water depth, the nonlinearity parameter κ is positive and wave groups with envelope solitons do not exist. Note that in shallow water, surface-elevation solitons (or waves of translation) do exist, but they are not governed by the nonlinear Schrödinger equation. The nonlinear Schrödinger equation is thought to be important for explaining the formation of rogue waves.

The complex field ψ, as appearing in the nonlinear Schrödinger equation, is related to the amplitude and phase of the water waves. Consider a slowly modulated carrier wave with water surface elevation η of the form:

    \eta = a(x_0,t_0)\; \cos\left[ k_0\, x_0 - \omega_0\, t_0 - \theta(x_0,t_0) \right],

where a(x0, t0) and θ(x0, t0) are the slowly modulated amplitude and phase. Further, ω0 and k0 are the (constant) angular frequency and wavenumber of the carrier waves, which have to satisfy the dispersion relation ω0 = Ω(k0). Then

    \psi = a\; \exp\left( i\theta \right).

So its modulus |ψ| is the wave amplitude a, and its argument arg(ψ) is the phase θ. The relation between the physical coordinates (x0, t0) and the (x, t) coordinates, as used in the nonlinear Schrödinger equation given above, is given by:

    x = k_0 \left[ x_0 - \Omega'(k_0)\; t_0 \right], \quad t = k_0^2 \left[ -\Omega''(k_0) \right]\; t_0.

Thus (x, t) is a transformed coordinate system moving with the group velocity Ω'(k0) of the carrier waves. The dispersion-relation curvature Ω''(k0) is always negative for water waves under the action of gravity. For waves on the water surface of deep water, the coefficients of importance for the nonlinear Schrödinger equation are:

    \kappa = -2k_0^2, \quad \Omega(k_0) = \sqrt{g k_0} = \omega_0
    \Omega'(k_0) = \frac{1}{2}\frac{\omega_0}{k_0}, \quad \Omega''(k_0) = -\frac{1}{4}\frac{\omega_0}{k_0^2}

where g is the acceleration due to gravity at the Earth's surface.
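Plugging numbers into these coefficient formulas is straightforward; a small Python sketch (the chosen k0 is an arbitrary example of mine, not a value from the article):

    import numpy as np

    g, k0 = 9.81, 0.1                   # gravity (m/s^2), carrier wavenumber (1/m)
    omega0 = np.sqrt(g * k0)            # Omega(k0), the deep-water dispersion relation
    group_velocity = 0.5 * omega0 / k0  # Omega'(k0): half the phase velocity
    curvature = -0.25 * omega0 / k0**2  # Omega''(k0): negative for gravity waves
    kappa = -2.0 * k0**2                # negative kappa -> envelope solitons possible
    print(omega0, group_velocity, curvature, kappa)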
Gauge equivalent counterpart

NLSE (1) is gauge equivalent to the following isotropic Landau-Lifshitz equation (LLE), or Heisenberg ferromagnet equation:

    \vec{S}_t = \vec{S} \wedge \vec{S}_{xx}.

Note that this equation admits several integrable and non-integrable generalizations in 2 + 1 dimensions, like the Ishimori equation and so on.

Relation to vortices

Hasimoto (1972) showed that the work of da Rios (1906) on vortex filaments is closely related to the nonlinear Schrödinger equation.

Notes

nb 1. A possible source of confusion here is the spin-statistics theorem, which demands that fermions have half-integer spin; however, it is a theorem of relativistic 3+1-dimensional quantum field theories, and thus is not applicable in this 1D, nonrelativistic case.

References

1. Figure 1 from: Onorato, M.; Proment, D.; Clauss, G.; Klein, M. (2013), "Rogue Waves: From Nonlinear Schrödinger Breather Solutions to Sea-Keeping Test", PLoS One 8 (2): e54629, doi:10.1371/journal.pone.0054629, PMC 3566097.
2. Malomed, Boris (2005), "Nonlinear Schrödinger Equations", in Scott, Alwyn, Encyclopedia of Nonlinear Science, New York: Routledge, pp. 639–643.
3. Pitaevskii, L. & Stringari, S. (2003), Bose-Einstein Condensation, Oxford, U.K.: Clarendon.
4. Gurevich, A. V. (1978), Nonlinear Phenomena in the Ionosphere, Berlin: Springer.
5. Balakrishnan, R. (1985). "Soliton propagation in nonuniform media". Physical Review A 32 (2): 1144–1149. doi:10.1103/PhysRevA.32.1144. PMID 9896172.
6. Bassi, A.; Lochan, K.; Satin, S.; Singh, T. P.; Ulbricht, H. (2013). "Models of wave-function collapse, underlying theories, and experimental tests". Reviews of Modern Physics 85 (2): 471. doi:10.1103/RevModPhys.85.471.
7. Korepin, V. E.; Bogoliubov, N. M.; Izergin, A. G. (1993). Quantum Inverse Scattering Method and Correlation Functions. Cambridge, U.K.: Cambridge University Press. doi:10.2277/0521586461. ISBN 978-0-521-58646-7.
8. V. E. Zakharov; S. V. Manakov (1974). "On the complete integrability of a nonlinear Schrödinger equation". Journal of Theoretical and Mathematical Physics 19 (3): 551–559. Bibcode:1974TMP....19..551Z. doi:10.1007/BF01035568. Originally in: Teoreticheskaya i Matematicheskaya Fizika 19 (3): 332–343, June 1974.
9. V. E. Zakharov (1968). "Stability of periodic waves of finite amplitude on the surface of a deep fluid". Journal of Applied Mechanics and Technical Physics 9 (2): 190–194. Bibcode:1968JAMTP...9..190Z. doi:10.1007/BF00913182. Originally in: Zhurnal Prikladnoi Mekhaniki i Tekhnicheskoi Fiziki 9 (2): 86–94, 1968.
Web essay: www.zeh-hd.de -- (2006 - last revised: May 2011)

Quantum nonlocality vs. Einstein locality
H. D. Zeh

Quantum theory is kinematically nonlocal, while the theory of relativity (including relativistic quantum field theory) requires dynamical locality ("Einstein locality"). How can these two elements of the theory (well based on experimental results) be simultaneously meaningful and compatible? How can dynamical locality even be defined in terms of kinematically nonlocal concepts? Dynamical locality in conventional terms means that there is no action at a distance: states "here" cannot directly influence states "there". Relativistically this has the consequence that dynamical effects can only arise within the forward light cones of their causes. However, generic quantum states are "neither here nor there", nor are they simply composed of "states here and states there" (with a logical "and" that would in the quantum formalism be represented as a direct product). Quantum systems at different places are usually entangled, and thus do not possess any states of their own. Therefore, quantum dynamics must in general describe the dynamics of global states. It may thus appear to be necessarily nonlocal. This discrepancy is often muddled by insisting that reality is made up of local events or phenomena only. However, quantum entanglement does not merely represent statistical correlations that would represent incomplete information about a local reality. Individually observable quantities, such as the total angular momentum of composed systems, or the binding energy of the He atom, cannot be defined in terms of local quantities. This nonlocality has been directly confirmed by the violation of Bell's inequalities or the existence of Greenberger-Horne-Zeilinger relations. If there were kinematically local concepts completely describing reality, they would indeed require some superluminal "spooky action at a distance" (in Einstein's words). Otherwise, however, such a picture is questionable or meaningless. In particular, nothing has to be teleported in so-called quantum teleportation experiments. In terms of nonlocal quantum states, one has to carefully prepare an appropriate entangled state that contains, among its components, all states to be possibly teleported (or their dynamical predecessors) already at their final destination – similar to the hedgehog's wife in the Grimm brothers' story of Der Hase und der Igel (see Quantum teleportation and other quantum misnomers). These kinematical properties characterize quantum nonlocality. But what about Einstein locality in this description? Why does the change of a global quantum state not allow superluminal signals, for example? The concept of locality in quantum theory requires more than a formal Hilbert space structure (relativistically as well as non-relativistically). It presumes a local Hilbert space basis (for example consisting of spatial fields and/or particles). Dynamical locality then means that the Hamiltonian is a sum over local terms, or an integral over a local Hamiltonian density in space, while all dynamical propagators for these local elements must relativistically obey the light cone structure. This framework is most successfully represented by quantum field theory. It may be characterized by the following program:

(1) Define an underlying set of local "classical" fields (including a spatial metric) on a three-dimensional (or more general) manifold.
(2) Define quantum states as wave functionals of these fields (that is, nonlocal superpositions of different spatial fields).
(3) Assume that the Hamiltonian operator H (acting on wave functionals) is defined as an integral over a Hamiltonian density, written in terms of these fields at each space point.
(4) Using this Hamiltonian, write down a time-dependent Schrödinger equation for the wave functionals, or, in order to allow the inclusion of quantum gravity, a Wheeler-DeWitt equation: H\Psi = 0.

The dynamics is then local (in the classical sense) for all local basis elements, which, according to this construction, must span the space of all states. This concept defines the quantum version of Einstein locality. (I have here not discussed complications resulting from nonlocal gauge degrees of freedom.) The local (additive) form of the Hamiltonian has an important dynamical consequence for nonlocal states. If two distant systems \phi and \psi are entangled, assuming the form \sum_n \sqrt{p_n}\,\phi_n\psi_n in their Schmidt decomposition, all matrix elements of H between components with different n must vanish, since the individual, local terms of H can only act on \phi or \psi. Such "dislocalized superpositions" arise unavoidably by means of decoherence, while their relocalization ("recoherence") would require an improbable accident in a causal universe (see The Physical Basis of the Direction of Time). The factorizing Schmidt components thus describe dynamically autonomous "worlds", which must contain separate observers, and which permanently branch by means of measurement-like processes. This dynamical argument, based on nothing else but the Schrödinger equation with its local Hamiltonian, justifies Everett's collapse-free interpretation of quantum theory – see How Decoherence can solve the measurement problem. (Note that the linearity of dynamics by itself would not be sufficient for this purpose, since it would not be able to describe quantum measurements and other processes that lead to nonlocal entanglement.) If, in the case of a Wheeler-DeWitt equation, a WKB approximation (based on a Born-Oppenheimer expansion in terms of the Planck mass) applies, orbit-like "wave tubes" in the "superspace" of spatial geometries (the configuration space of general relativity) may define quasi-classical spacetimes (such as solutions of the Einstein equations). The corresponding matter states obey a derived time-dependent Schrödinger equation with respect to a "WKB time" parameter along these quasi-classical orbits of spatial geometries (see C. Kiefer: Quantum Gravity, Cambridge UP, 2007). Wave tubes on the configuration space of geometry are then decohered from one another by the matter states (which thereby act as an environment to quantum geometry) according to the Wheeler-DeWitt equation. This decoherence along quasi-trajectories in superspace may lead to further quasi-classical fields, and possibly other quasi-local variables, which are robust in the sense that their different values define dynamically autonomous components ("branches"). Einstein locality then holds up to remaining quantum uncertainties of the spacetime metric (resulting from the non-vanishing widths of the wave packets in superspace). In "effective" (phenomenological) quantum field theories, dynamical locality is often formulated by means of a condition of microcausality. It requires that commutators between field operators at spacelike-separated spacetime points vanish.
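In symbols, for a single scalar field (my formulaic rendering of the condition Zeh states in words), microcausality reads:

    [\hat\phi(x), \hat\phi(y)] = 0 \quad \text{whenever} \quad (x-y)^2 < 0,

i.e. whenever the two spacetime points are spacelike separated (with metric signature +---, so that spacelike intervals are negative).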
This condition is partially kinematical (as it presumes a local reference basis of quantum states), partially dynamical (as it uses the Heisenberg picture for field operators), and partially a matter of definition (as it requires a decomposition of the field operators in terms of "particles and antiparticles", which may depend on the effective vacuum, for example). The dynamical consistency of this microcausality condition is nontrivial. In principle, the properties of (anti-)commutators of (effective) field operators at different times should represent a deterministic consequence of those on an arbitrary simultaneity, t = t', caused by the given relativistic dynamics (Hamiltonian). They cannot be independently postulated for all times. In his foundation of quantum field theory, Steven Weinberg derived microcausality and the locality of the Hamiltonian from his cluster decomposition principle. This is a phenomenological constraint on the S-matrix, which requires that "distant experiments give uncorrelated results". However, such a principle cannot form a fundamental element of the quantum theory, since (a) observable correlations may exist or controllably be prepared either as statistical correlations or as entanglement between distant systems, and (b) the concept of an S-matrix is (approximately) applicable only to sufficiently isolated (microscopic) systems. Macroscopic systems never cease to interact uncontrollably with their environment; this fact is known as the source of decoherence, and hence of the classical phenomena and the appearance of "quantum events" (see How decoherence may solve the measurement problem). Only such apparent events justify the probability interpretation of the S-matrix – even for microscopic objects. So I feel that instead of going beyond the empirically founded effective theories when searching for mathematical consistency of hypothetical theories (in the hope of finding the final universal theory), physicists should first analyze the physical consistency and meaning of effective field theories (see also Chap. 6 of The Physical Basis of the Direction of Time).
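Returning to the Schmidt decomposition used in the branching argument above, here is a minimal numerical illustration in Python/NumPy (the toy state, dimensions, and variable names are my own choices, purely for illustration):

    import numpy as np

    # Write a bipartite pure state as a matrix C[i, j] = <phi_i psi_j | Psi>;
    # its singular value decomposition IS the Schmidt decomposition.
    C = np.array([[0.6, 0.0],
                  [0.0, 0.8]])
    U, s, Vh = np.linalg.svd(C)
    p = s**2                   # Schmidt weights p_n; they sum to 1 for a normalized state
    print(p)                   # -> [0.64 0.36]
    # Psi = sum_n sqrt(p_n) |phi_n>|psi_n>, with |phi_n> the columns of U
    # and |psi_n> the conjugated rows of Vh.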
Quantum physics with Green's functions

Nov 13, 2007 #1
The density of states can be expressed in terms of Green's functions and is defined as the imaginary part of the Green function (multiplied by 1/Pi). Can someone explain to me what happens if the imaginary part is just 0? It would be the case if all Eigenvalues of the Schrödinger equation are real, which should happen very often in my understanding of quantum systems.

Nov 13, 2007 #2
The imaginary part wouldn't really be zero at the eigenvalues of the Hamiltonian, because those will be the poles of the Green's function and it will be undefined at that point. You get a more sensible spectral function if you add a small imaginary part to the denominator of your Green's function. Starting with a Green's function of the form:
[tex]G(\omega) = \left[ \omega + \mu - H + i\delta \right]^{-1}[/tex]
then the imaginary part is:
[tex]\Im G(\omega) = \frac{-\delta}{ (\omega + \mu - H)^2 + \delta^2}[/tex]
This pushes the poles off the real axis so you don't have singularities there, and also broadens the peaks (important for numerical studies where you are evaluating the Green's function on a regular grid). You will notice, if you take the limit as [tex]\delta[/tex] goes to zero, that you get delta functions at the points where [tex]\omega + \mu - H = 0[/tex].

Nov 14, 2007 #3
Thanks. You did help a lot. I didn't know that there will remain a delta function after delta went to zero. Do you know a derivation of this? You seem to have a background on Green's functions. Can you help me in one more question, please? Is it possible to get a solution of the Green's function without solving the Schrödinger equation to get the Eigenvalues? If that is not the case, I don't really understand why the Green's function method is such an advantage. I mean, solving the Schrödinger equation for Eigenvalues and Eigenvectors already results in having the necessary components for the density matrix. Ok, there must be a reason to use Green's functions, otherwise so many people wouldn't use them. But I don't really understand the advantage.

Nov 15, 2007 #4
Off the top of my head, a derivation might go like this: as [tex]\delta[/tex] goes to zero, for [tex]x = \omega + \mu - H[/tex] non-zero, the denominator stays non-zero and the numerator goes to zero, so you have a zero. For x = 0, it goes like [tex]1/\delta[/tex] as [tex]\delta[/tex] goes to zero, so this is divergent. The only trick from there would be to show that the integral of [tex]\delta/(x^2 + \delta^2)[/tex] over all x is a constant value (I think it might be pi) independent of [tex]\delta[/tex]. My experience with Green's functions is fairly limited. So the only ways I know of to evaluate Green's functions require knowledge of the full spectrum of the Hamiltonian. But I will list the advantages I can think of, all of which are condensed matter topics. The density of states you calculate from a non-interacting (or mean-field) theory doesn't correspond very well with experimental photo absorption/emission, because these experiments involve exciting a particle to a state with a finite lifetime that the whole system can have a response to. If the physical system were non-interacting it makes sense that the density of states would be an adequate description, because the excited particle wouldn't affect any of the other particles. But in a system with strong interactions, this isn't the case.
The Green's function calculated via [tex]G(t) = \langle c(t) c^\dagger \rangle[/tex] is a direct calculation of particle excitation, so from this argument, one would expect the Green's function to give a more physically realistic picture. If you calculate the band structure (i.e. eigenvalues of H_k) via LDA, you will have a set of energy bands which are sharp delta-function peaks at specific energy values, all having the weight of one (ignoring degeneracy). This doesn't reflect experiment well because a variety of factors will broaden these peaks, or shift their weight to other bands. Some of these factors are: impurities, disorder, temperature, correlation effects. (In many cases these can be safely neglected, but there are certainly interesting cases where they are relevant.) Impurities can be treated with the Coherent Potential Approximation, which uses a Green's function formalism (and that is the extent of my knowledge on CPA). My knowledge of Green's functions comes from doing Dynamical Mean Field Theory (DMFT) calculations. In the DMFT formalism, we take the Green's function at each k-value:
[tex]G_{k}(\omega) = \frac{1}{\omega + \mu - H_k - \Sigma(\omega)}[/tex]
where you can see a frequency-dependent potential has been added to the Hamiltonian (the so-called self-energy). The spectrum of these Green's functions gives the usual k-space band structure, if the self-energy is zero. The self-energy is calculated via some other means, often with some self-consistency condition with the Green's function. The self-energy is non-Hermitian and may have a substantial imaginary part at some frequency. This can broaden the spectrum of H, create additional peaks, shift weight around, etc. to create the kinds of effects that appear in experiment. In DMFT the self-energy is used to include correlation effects. I'd imagine it's difficult or impossible to include any frequency-dependent potential without using Green's functions.
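To make the broadened density of states from this thread concrete, here is a minimal NumPy sketch (the random toy Hamiltonian and the parameter values are my own, purely illustrative):

    import numpy as np

    # A(w) = -(1/pi) Im Tr G(w), with G(w) = [w + mu - H + i*delta]^(-1),
    # evaluated from the eigenvalues of a random Hermitian toy Hamiltonian.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(6, 6))
    H = (H + H.T) / 2
    eps = np.linalg.eigvalsh(H)
    mu, delta = 0.0, 0.05
    w = np.linspace(-5, 5, 4000)
    dos = ((delta / np.pi) / ((w[:, None] + mu - eps[None, :])**2 + delta**2)).sum(axis=1)

    # Each Lorentzian integrates to 1 (the integral of delta/(x^2 + delta^2) over
    # all x is indeed pi, confirming the hunch in post #4), so the total spectral
    # weight approaches the number of states:
    print(np.trapz(dos, w))   # close to 6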
Exact calculations of nuclear systems with realistic forces

A collaboration of Argonne National Laboratory, University of Illinois at Urbana-Champaign, Universidade de Lisboa, Los Alamos National Laboratory, Old Dominion University, and Thomas Jefferson National Accelerator Facility

[Figure: constant-density surfaces for a polarized deuteron in the Md = ±1 (left) and Md = 0 (right) states.]

The deuteron, or 2H nucleus, contains one proton and one neutron and has a total angular momentum of 1. It can be oriented in a specific direction, for example by an external magnetic field, with possible spin projections of Md = +1 (parallel), -1 (antiparallel), or 0 (perpendicular). The force between two nucleons, which can be attributed to the exchange of pi-mesons at long range, has a strong tensor character which leads to these unusual shapes. The length of the dumbbell and the diameter of the doughnut are both about 1.5 femtometer (1.5 x 10^-15 meter).

We wish to understand the stability, structure, and reactions of nuclei as a consequence of the interactions between individual nucleons. In quantum mechanics, the state of a many-body system is described by its wave function, Ψ, while the motion and interaction of the particles is determined by the Hamiltonian, H. We are trying to build a consistent description of nuclear systems ranging in size from the deuteron to neutron stars using a single Hamiltonian. This requires finding accurate solutions to the many-body Schrödinger equation, HΨ = EΨ.

Realistic nuclear forces, which accurately describe nucleon-nucleon (NN) scattering and bound states, are very complicated. The most basic forces include central, spin-spin, tensor, and spin-orbit components, all with and without isospin dependence. Even more components are required to get a really good description of NN data. We have constructed several force models over the years as new data have become available, and as the accuracy of the many-body calculations has improved. Our most recent force model, the Argonne v18 nucleon-nucleon potential [1], uses eighteen operator components to fit over 4,300 NN scattering data. There is also strong evidence for many-nucleon forces, and special relativity can also be important. Solving the many-nucleon Schrödinger equation is consequently a very challenging theoretical problem.

We have been using quantum Monte Carlo (QMC) methods to study few-body nuclei. We start with a trial guess for the form of the wave function, Ψ_T, and then systematically improve on it using the Green's function Monte Carlo (GFMC) algorithm to approach the true ground state (the standard projection relation is sketched after the table below). These calculations are computationally intensive, and our recent progress has required the use of massively parallel supercomputers. In fact, very rapid progress has been made in recent years, as detailed in the following table.

Progress in exact ground-state calculations
(Year denotes the date when the first solutions accurate within 1% of the binding energy were obtained. FLOPS is the number of FLoating point OPerations required.)

Nucleus   Method          Year   FLOPS       Computer    Time
2H        Diff Equation   1953   50x10^3     Illiac-I    15 min
3H        34-ch Faddeev   1984   100x10^9    Cray XMP    30 min
4He       GFMC            1987   15x10^12    Cray 2      40 hr
5He       GFMC            1993   100x10^12   Cray C90    100 hr
6Li       GFMC            1995   300x10^12   IBM SP1     6000 node-hr
7Li       GFMC            1996   4x10^15     IBM SP2     1000 node-hr
8Be       GFMC            1997   17x10^15    IBM SP2     1300 node-hr

To study larger systems, we use cluster variational Monte Carlo methods for closed-shell nuclei, and variational chain summation methods for nuclear and neutron matter.
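The GFMC projection just mentioned did not survive as text in the original page (it was an equation image), so the following is a reconstruction of the standard relation rather than a verbatim restoration:

    \Psi_0 \;\propto\; \lim_{\tau\to\infty} e^{-(H - E_0)\,\tau}\, \Psi_T

Expanding Ψ_T in eigenstates of H shows why this works: each excited component is suppressed by a factor e^{-(E_n - E_0)\tau} relative to the ground state, so propagation in imaginary time τ filters everything but Ψ_0 out of the trial wave function.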
Recent Progress
• QMC has been used to calculate the ground states and many low-lying excited states for all A<=8 nuclei [2,3,4], demonstrating for the first time the microscopic origin of nuclear shell structure. This is illustrated by the excitation spectra for A=6-8 nuclei.
• QMC calculations can explain both elastic and inelastic electromagnetic form factors observed in electron-scattering experiments, without the use of effective charges [5].
• The strong nuclear tensor force has been shown to produce novel toroidal correlations in nuclei [6], as seen in the figure at the top of this page; several experiments have been proposed to search for these structures.
• Similar considerations lead to the prediction of pion-condensed phases in both nuclear and neutron matter near twice the saturation density, which has interesting implications for neutron star structure [7].
• The effects of special relativity on nuclear forces and nuclear binding are being studied in the framework of relativistic quantum mechanics [8,9].
• QMC is being used to study neutron drops, collections of neutrons bound by an external well, thus providing benchmarks for more schematic methods used to study nuclei far from stability and nuclei in the crusts of neutron stars [10,11].
• Studies have been made [12] or are in progress for several radiative and weak capture reactions of astrophysical interest.
• The QMC wave functions are being used to study the feasibility of constructing polarized helium and lithium targets for electron-scattering experiments.

References
1. Accurate nucleon-nucleon potential with charge-independence breaking, R. B. Wiringa, V. G. J. Stoks, and R. Schiavilla, Phys. Rev. C 51, 38 (1995).
2. Quantum Monte Carlo calculations of A<=6 nuclei, B. S. Pudliner, V. R. Pandharipande, J. Carlson, and R. B. Wiringa, Phys. Rev. Lett. 74, 4396 (1995).
3. Quantum Monte Carlo calculations of nuclei with A<=7, B. S. Pudliner, V. R. Pandharipande, J. Carlson, S. C. Pieper, and R. B. Wiringa, Phys. Rev. C 56, 1720 (1997).
4. Quantum Monte Carlo calculations for light nuclei, R. B. Wiringa, Nucl. Phys. A631, 70c (1998).
5. Microscopic calculation of 6Li elastic and transition form factors, R. B. Wiringa and R. Schiavilla, Phys. Rev. Lett. 81, 4317 (1998).
6. Femtometer toroidal structures in nuclei, J. L. Forest, V. R. Pandharipande, S. C. Pieper, R. B. Wiringa, R. Schiavilla, and A. Arriaga, Phys. Rev. C 54, 646 (1996).
7. Spin-isospin structure and pion condensation in nucleon matter, A. Akmal and V. R. Pandharipande, Phys. Rev. C 56, 2261 (1997).
8. Relativistic nuclear Hamiltonians, J. L. Forest, V. R. Pandharipande, and J. L. Friar, Phys. Rev. C 52, 568 (1995).
9. Variational Monte Carlo calculations of 3H and 4He with a relativistic Hamiltonian, J. L. Forest, V. R. Pandharipande, J. Carlson, and R. Schiavilla, Phys. Rev. C 52, 576 (1995).
10. Neutron drops and Skyrme energy-density functionals, B. S. Pudliner, A. Smerzi, J. Carlson, V. R. Pandharipande, S. C. Pieper, and D. G. Ravenhall, Phys. Rev. Lett. 76, 2416 (1996).
11. Neutron drops and neutron pairing energy, A. Smerzi, D. G. Ravenhall, and V. R. Pandharipande, Phys. Rev. C 56, 2549 (1997).
12. Weak capture of protons by protons, R. Schiavilla, V. G. J. Stoks, et al., Phys. Rev. C 58, 1263 (1998).
50 Year Old Quantum Physics Problem Solved
Posted by Hemos from the interesting-aspect-of-science dept.
notsosilentbob writes "This story about a 50 year old unsolved Quantum Physics problem at Eurekalert.org is interesting, if just for the discussion about the computing power required (SGI/Cray machines). Unlike the blowhard from BlacklightPower, this sounds like an important breakthrough." The problem solved is that of the scattering effects of three charged particles. This is important, as this event occurs in everything from fluorescent lights to the ion etching of silicon chips.

Comments:
• This is the kind of story that I like to see. Just when we think there is nothing new to know and that unsolved equals unsolvable, someone cracks an enigma like this and shows that a new perspective is often the only thing required to make significant breakthroughs.
• It's just amazing the amount of computing power it takes to solve some of these problems... I wonder when games with physics engines are going to be able to simulate the universe to this detail? 100 years? 1000 years? It's just amazing how far we have come since the dawn of the information age!
• here [eurekalert.org] and here. [eurekalert.org]
• Plenty of people out there would cheer this breakthrough, not for its obvious worth as a furthering of scientific thought, but as a further entrenchment of quantum physics as a dominant theory for the machinations of the universe, because frankly, it suits their personal philosophies of how the universe should remain somehow mystical. Newtonian physics and its Euclidean geometries is far too cold, too exact, too exacting. Bring on the theories that tell us we live in worlds of probabilities: I want to win the lottery, dammit. My ancestors read the tea leaves before me, and soon I'll have a nice quantum computer in a cup of coffee. How much can anyone truly know for sure? Certainly I don't know much, so give me a theory that says no one else can be much more certain. Now that appeals to my insecurities and warms my cockles. It's quite fitting that such breakthroughs be made on the threshold of a new era of unprecedented cultural return to mysticism. I'm still betting in science's corner, myself.
• Newtonian physics is already in use in many games today. For example, in Quake, when your connection lags, Newtonian physics is used to extrapolate your future position based on your current speed and trajectory. I'm sure this isn't the only example, but it's the first one that comes to mind.
• http://www.sciam.com/1998/0698issue/0698gershenfeld.html [sciam.com] a good Scientific American article, basically a quantum particle can exist in more than one state at once (based on its probability of being in a certain state). the state is a way of saying that there are finite energy levels a particle (electron) can have, these can each represent a state. it also talks about action at a distance. all of this is interesting but unfortunately very hard to understand. it's based on wacky math and probability functions. the ones and zeros are basically the same thing... a high level and low level of energy. this is just on a smaller level
• I am trying to understand the importance of this discovery. Although the article mentions the ionization process that leads to the glow of fluorescent tubes and to the engraving of silicon chips, we have done all that WITHOUT understanding exactly how these things are done.
Can anyone tell me what this discovery for the "scattering problem" may yield, that is, apart from the Quantum Physics discipline? Thanks in advance for any pointer. Merry Christmas !!
• quantum computing will never be common, but one or two quantum computers will be able to solve a couple of the really tough problems we never thought we could solve
• > Unlike the blowhard from BlacklightPower, this sounds like an important breakthrough
You're editorialising again, Hemos! An assessment of the majority reaction to the Blacklight Power story might make it seem safe to do so on this occasion, but public opinion would change pretty quickly if Randall Mills was vindicated. Mills' claims are certainly outrageous but he's only raised enough capital from hardened venture capitalists to fund his research, and is turning would-be investors away in droves. He's obviously not a fraud. Even his critics in the physics community don't deny he is at least sincere. And don't forget he appears to have a better grasp of maths, chemistry and physics than most people - he's not ignorant or even unqualified. His enthusiasm for his own theory isn't really enough to warrant labelling him a blowhard. It's not as if he's gone around badmouthing everybody who disagrees with him. If you believed you'd made a breakthrough that would turn science on its head, wouldn't you have something to say about it? Would that make you a blowhard? Don't get me wrong, I'm not jumping on Mills' bandwagon either (yet). But if his ideas were completely without credibility then he'd surely have been forced out of business by now. I think we ought to give him the benefit of the doubt until his work has been properly peer reviewed by people who are qualified to assess it.
Consciousness is not what it thinks it is
Thought exists only as an abstraction
• > quantum computing will never be common, > but one or two quantum computers will be able to > solve a couple of the really tough problems we > never thought we could solve
You remind me of that quote where someone said that there would only ever be a few computers in the world, mostly in specialised research... What makes you think they won't become mainstream in the same fashion that current technology did?
• > You're editorialising again, Hemos!
Hemos did not make that statement. notsosilentbob is the one who made the comment, Hemos just let the comment through unedited. In a Slashdot news post, text in italics is written by the submitter. Plain text is written by the Slashdot crew.
• I don't really see a problem with /. editorializing. I mean, I come here expecting a news service with some sort of humans behind it, and I get it. If I don't agree with it, I say so in the comments. Leave the plain facts to the news services /. links to - here, I want opinions to knock down!
• > ...too unable to explain too many phenomena
> Now that appeals to my insecurities and warms my cockles.
Speak for yourself.
> I'm still betting in science's corner, myself.
And which 'science' did you use for your psychoanalysis of practically the entire Physics community? From what I've gathered, it tends to be more the people enamored of mysticism/religion who are offended by quantum theory.
• Quantum particles used to store information ("Qubits") can be either on, off, or they can be in superposition between on and off. It's sort of hard to explain, but what it basically means is it's BOTH on AND off at the same time. Not sorta anything. Very weird stuff.
I like to believe the Universe is a little more organized than that, but who knows...
• Is this the sort of thing which could benefit from distributed computing? Or is it one of those things, like protein folding computations, which have to be done on special, ultra-powerful machines?
• Third-person perspective narrated in a first-person format for the purpose of understated satire. Naturally, I don't myself believe any of it -- Of people I know, I'm the least enamored with mystical thinking. But I guess it goes over some people's heads sometimes.
• This being Xmas, I was home having an argument with my father today about Canada's adoption of metric 30 years ago. He (age 53) is rather offended by this still today. I was trying to come up with ways to convince him that his personal discomfort was not enough reason to stay Imperial -- and now I've found one. This discovery has nothing to do with metric specifically, and (rather amusingly) happened in an Imperial (and imperious, sometimes) country. But it's still representative: countries using common systems (metric) allow many to work together, across borders, to solve problems that we could not work out alone. In this case it was three American schools, but in other cases it has been schools or researchers from separate continents. Yes, NASA messed up the metric thing. But that was based on one country not matching *all* the others, right? So imagine if this discovery is recanted in 3 weeks: "Oops, we were using inches and gallons, not centimeters and litres." This happens too often (even once is too often). I'm not sure what my point is. I think it's a combination of "cool" and "why isn't everyone metric yet?" - Cam MacLeod
• You mean the following? "I think there's a world market for about five computers." -- attr. Thomas J. Watson (Chairman of the Board, IBM), 1943
• Ummm... I understood that. That's why I was saying speak for yourself. Snigger. The history of con artists shows this not to be the case. I wish you were right, but too many people want to believe the claims of snake oil salesmen.
• Ooops :-( Now I get it: I parsed the 'their' as the people who subscribe to quantum theory, and that threw off my interpretation of the sarcasm, i.e. I knew you were being sarcastic, but I mis-identified the target of the sarcasm. My apologies.
• Sir, this is, in essence, better than a beowulf cluster. I do not wish to waste my time, or anybody else's - but a little explanation may be in order. A quantum processor would work in parallel with itself, checking as many possibilities as it has capacity for all at once - in a really short time. This could find the solution to an equation much faster by merely recognizing which state of the superposition is the correct answer rather than trying them all in sequence. You will not get a faster quantum processor - merely a bigger one. What would a cluster configuration do but split up the task and make the chips talk to each other unnecessarily? just my two cents
• by Signal 11 (7608) Hemos, why must you make slams and inject your own commentary to stories? For that matter, why must slashdot? Calling people "blowhards" on a site that is making a genuine attempt to be taken seriously by mainstream is at the least shooting yourself in the foot. If you want to make a comment, put it in the forums with everybody else's and let moderation take its course.
• Hemos didn't say that, the person who submitted the story did.
• Thanks for the pointer.
Call me dumb as you must, but I do have difficulty connecting Quantum Computer with the solving of the "scattering" problem. I thought someone had already prototyped some sort of "quantum" computer before the "scattering" problem was solved. That goes back to my original question - that we have done things like fluorescent tubes and engraving chips with ion beams _before_ anyone had a definite answer to the "scattering" problem, and my original question is - what will that discovery yield for us, apart from making the Quantum Physicists feel much better? Again, thanks for your pointer.
• It's the difference between being able to sink the 8-ball some of the time by "feel", and being able to calculate the proper angle and energy to sink the 8-ball. If this holds up (and it appears to be holding up so far) it will assist us in making predictions about what happens to very small or very high-energy things. As the article said, we make fluorescent tubes and play with plasma, but up to now it mostly has worked by accident. The breakthrough is the solution of the simplest case, but it's a step towards manipulating plasma properly on purpose.
• Now if only the moderators were as willing to see the light and erase that "overrated" moderation. :) C'mon people: even if you don't like the substance of what I have to say, you have to reward the posters who actually take the time to string together a complicated assortment of syllables with correct spelling and without the aid of a thesaurus, right? Forget an aibo. All I want at this time of year is massive moderation reform.
• My point is that perhaps the "discovery" of the "scattering solution" may not be yielding much practical effect, like the one you have mentioned - distributed computing. Perhaps the "discovery" itself may be used for predicting when and where the "scattering effect" may occur, and with the ability to predict, new branches of science may finally be able to mushroom.
• by Anonymous Coward Beowulf clusters need lots of power. By understanding these particle interactions better, physicists will understand plasma physics better. Greater knowledge of plasma physics will lead to affordable power from fusion. Affordable power will reduce the costs of operating Beowulf clusters. Yeah, it's a stretch, but you asked for it.
• In stories I've submitted with similar comments - those were stripped prior to posting, or otherwise modified. Hemos didn't have to post that, but he did anyway. In my book if you censor anything, you're responsible for everything. Hemos posted it.
• I see you've been reading the Slashdot Karma HOWTO. seen it? hell I helped write it! ;)
• And just when I thought the Data Encryption Standard was absolutely uncrackable...
• by Anonymous Coward No scientist worth his/her salt will back a theory because it matches their philosophy. We are just trying to come up with models to describe how the world works. It is up to religious leaders and the philosophers to figure out what it all means. I am sorry if you don't like quantum mechanics because it goes against your religion or whatever. The theory wasn't meant to offend; it was meant to describe how small particles work. However, speaking as a philosopher and not as a scientist, I would say: if your beliefs don't fit the world around you, maybe you should change your beliefs and not try to change the way the world works (the world works the way it works -- you can't change that; if you don't like it then tough!)
• this is not at all what i meant.
they said 5 computers would solve all the world's problems. i only say a few really tough problems that require immense computational power can be solved. they won't be mainstream because of the large amount of temperature control and other controls that will be required to keep the quanta in a certain state. the difference in energy levels here is very small, there is just no practical way to achieve this in a home. now universities, government institutions (dept of energy - weather) and a few corporations (lockheed and boeing maybe) on the other hand would really be able to put these to use. there is simply no reason a home user would need the incredible amount of computational power these could provide... it would be like every home having 2 crays. i hate when people say 'we don't need more memory, we have enough' or speed or whatever, but this is not even close to comparable.
• Unless you're planning on playing games on the atomic and sub-atomic level, what would be the point of utilizing these new developments in quantum mechanics in game engines? While the world we live in is a quantum world, Newtonian mechanics holds fine for just about every aspect I can think of in a game. What type of game might be so detailed that we would have to understand the quantum interactions between particles?
• Well, it'll probably make it much more efficient to engrave chips, show us cheaper and easier ways to do what we're doing. Let's look at a caveman who clubs someone over the head with a wooden club. When his foe becomes dead or unconscious, he might not know *why*, but he knows the effect. He might deduce things like removing branches and leaves that soften the blow make it more effective. But if he learned the reasons behind it, he could make a more effective club using rock, or maybe even metal. Sure, wooden clubs work fine, but isn't a stone axe just so much more convenient and stylish? :)
• This result is interesting because previously this problem has been treated by using approximations. The many different "solutions" given by wildly differing methods did not agree - and the errors introduced by the approximations also were impossible to find. Numerical methods are very good in that you know the degree of error. Increase the number of grid points, and the error will decrease... (but the computation time will increase accordingly.) This one fact means that the results produced are meaningful - they can be compared with experiment. Now why are these scattering events interesting? Well, there is a slightly more complicated collision where the incoming electron knocks out an electron - leaving the atom in an excited state. The excited atom then de-excites itself by emitting yet another electron. (Auger emission.) You can't do this with hydrogen (not enough electrons.) - However, the noble gases work well... This second type of collision is very interesting, in that the distribution of outgoing electrons is related to the Fourier transform of the wavefunctions of the electrons in the atom... You can "map" the distribution of an electron in an orbital with this technique. This in turn provides tests on the quantum theory... This also happens in the ionisation events that form aurorae.
• Oops, my bad. Apologies to Hemos. Mild rebuke to notsosilentbob instead.
Consciousness is not what it thinks it is
Thought exists only as an abstraction
• If you feel that God's word is to be found in a book written by men rather than in the study of the miraculous universe around you, I feel sorry for you -- you're missing a lot of spiritual wonder and beauty.
• knowing what to do with it
• this could be done with much less computing power. much more likely would be optical computers. now those, someday, might be commonplace
• yea actually etching is surprisingly inaccurate, something like 2 out of 10 wafers are good, the rest are tossed. can you imagine the price change if the supply went up 3 or 4 times?
• > It would be like every home having 2 crays.
Well, your top end Pentium III/Athlon probably has more computing power than an average mid-80's Cray (maybe not necessarily the same I/O throughput, although they probably aren't too far off on that either). Since last decade's high-end CPU cores often get migrated into this year's embedded processors, I would expect that in the next decade most homes and cars will contain more than 2 processors which are equivalent to mid-80's Crays. I would agree that the class of problems which can use the capabilities of quantum computing is currently limited and few seem to be applicable to the average home. However, that may change after we have had access to quantum computers for ten years. Around 1987, I took a class in Biophysics with Dr. Hoffmann (who is better known for his work in immunology). At the time I told him that I figured in a little over a decade we might have massively parallel processors which would be able to tackle the protein folding problem. He basically told me I didn't really understand the magnitude of the problem. Recently, IBM have announced their project, Blue Gene, whose stated goal is the creation of a computer capable of fully solving the protein folding problem within five years. I was off by a few years but was still fairly accurate as software engineering or physics estimates go :-) So, you won't need a Cray to run your microwave, but you may want one (or two for backups) 80's-Cray equivalent to run your house and have it respond intelligently to your voice commands. High-end cars already have very powerful computers running their active suspensions. Who knows what applications we may come up with for quantum computers. Currently you can't even get one working in a university lab, but if we have molecular nanotechnology in 40 years, it may be quite conceivable for every house to have one. In the latter case the only question is: will there be household applications (distributed RC5 doesn't qualify) which require one?
• yea this is what i was trying to say, once we get a couple, we'll be able to figure out quantum physics, meaning more and/or better quantum computers. i really didn't know how much computing power crays have, just trying to make the point that once we get a few, many of the interesting hard problems (mostly quantum physical/dna/neural net) will fall, and we'll all end up with some completely different kind of computer on our desktop than a pc (if we still have desktop pcs at all), but it's highly unlikely to be quantum
• While you might be right in saying that we ought to improve on the "Feel" thing -
There are times I have done things by "feel" alone, and those are the times I could have done extensive calculations and such, but there is always that little voice (call it instinct if you may) that tells me to go by "Feel" - yea, sounds like Obiwan's "Feel the force, Luke" thing, doesn't it? :) - and so far (fingers crossed) I haven't had my "Feel" betray me yet. I have tried to explain what "Feel" is, but I just can't. It's something you gotta have within yourself. Anyway, Merry Christmas !!
• Salesman, bullshit. The guy came up with a theory that predicted experimental results that were confirmed by experimental data from independent labs. That, my boy, is sterling science. No one accepts his theory yet but at least the guy has gone out of his way to attempt to get independent experimental confirmation. Getting 25 million out of conservative utilities and retired investment bankers from Morgan Stanley is gonna take a *little* more than a nice smile and shiny shoes.
• This is my own opinion - I personally do not think the journalist who wrote the piece actually gets it. Most things that we have here, today, from gunpowder to electronic wonders, the ideas behind them all originated not from tweaking equations, but from intuition and inspiration. Sometimes it requires "clicks" in the mind's eye to find a true "EUREKA!". Tweaking equations, IMHO, just doesn't make it. After all, tweaking equations requires _prior_ equations to exist, or there won't be anything to be "tweaked", right? And most of those prior equations owed their existence to the "clicks" of somebody's mind's eye. Sorry, I've wandered too far off topic. Gotta stop while I'm still able to. Merry Christmas !
• > Their breakthrough employs a mathematical transformation of the Schrödinger wave equation that makes it possible to treat the outgoing particles not as if their wave functions extend to infinity -- as they must be treated conventionally -- but instead as if they simply vanish at large distances from the nucleus.
I'm confused by this: how did they find an exact solution to the scattering problem if they are using a finite version of the wave function? Wouldn't that be an approximation of the true wave function, which extends to infinity?
• Which means our intuition works only for day-to-day experiences. When you get to absurdly small or absurdly large things, you have to invent new reasoning methods, like relativity and quantum mechanics. What's really amazing is that mathematics is the tool that makes both possible.
• Haven't been to the movies lately, right? Only 007 can beat 666!
• I think what he meant is to have the capability to have a real-time simulation of the Universe from the quantum particles to macroscale galaxies et al... which, I think, is never.
• ..I just want to know if that damn cat is dead or alive. Funny how so many otherwise well-informed people still think that "mysticism" must necessarily be opposed to science. I agree that the current trend towards unquestioning acceptance of "crystals", horoscopes, channeling, creationism and so forth is alarming, but to extend this trend to argue that "since these things are 'mystic', mysticism is 100% wrong" doesn't follow. Read some of Fritjof Capra's work to see how scientific insights can have a mystic aspect.
Yes, many people jump on that and just start mixing "quantum" and other buzzwords into their tea leaves, but I think it's perfectly possible to study the workings of the universe in a rigorous scientific way and keep a sense of marvel about the whole thing.
• I don't know how to implement that, though. Seems to me you could just put a redirect on the story, instead of a simple anchor. but then, all one would have to do is click it, then quickly click back without reading. oh well
• I join you in bemoaning a return to mysticism, but must part ways with you in your condemnation of quantum physics. I feel, with all due respect, that rather than admitting that quantum physics is very difficult to understand, you have dismissed it all as bunk. All societies are woefully susceptible to all manner of trickery and pseudo-science as a substitute for thinking things out for themselves and/or admitting once in a while a simple "I don't know". If Newtonian physics can tell me why light appears as both a particle *and* a wave, then we can chuck quantum physics. Newtonian physics is *lo-res* and works well in that realm. As the resolution gets finer, Newtonian physics breaks down. Don't blame the scientist for looking for answers elsewhere.
• Don't blame the moderators. You'll have to be more skillful in your execution of satire if you don't want your 'masterpieces' to be moderated down as junk.
• Except that it tends to break down within Schwarzschild singularities, and that problem's being worked on by people much smarter than I. My previous comment was much more facetious than others seem to recognize.
• Sorry, QM is successful because of its breathtaking predictive power. And if you're familiar with experimental results like the Aspect Experiment, it should be clear to you that no theory with the "common sense" deterministic appeal of Newtonian Mechanics can correctly mirror reality. Your social scientific explanations for theories confirmed again and again by empirical results are, as US people like to say, way off base.
• I was talking to a friend of mine about this, and I didn't think it could be done.. I am not really into QM, but this was my argument: To simulate the entire universe on the quantum level, you would need to simulate the state of every quark(?). To simulate a single quark, you would at least need one quark.. Thus to simulate the whole universe you would need at least every quark in the universe, and the universe would be its own simulation.. If this is bullshit, please let me know.. It sounds pretty solid to me (and my now convinced friend)..
• > A better grasp of math, physics, and chemistry indeed. Can you find a single PhD physicist, chemist, or mathematician who thinks Mills is up to snuff???
The academic establishment tends to make things very difficult for anyone who breaks ranks. Sensible scientists will keep quiet until there is irrefutable evidence to support Mills' theory.
> He is basically a very obsessed individual who went WAY off track a long time ago. If he were properly schooled in mathematics, physics, or chemistry he would have been a handful to get back on track.
This is pure speculation.
> Instead, look at what he is proposing. He is reinventing particle physics, without advanced training in particle physics.
So? Chemists need to know quite a bit of physics (especially including quantum physics). They can't even win their degree without it. Ditto higher maths.
Quantum physics itself isn't particularly difficult to master anyway; it's certainly no harder than any other branch of chemistry. Particle physics today is just tedious (it's like zoology) and is still 90% speculation.
> He is reinventing single hydrogen chemistry, without substantial training in hydrogen chemistry.
Well, he is trained as a chemist, and so am I. Are you? What is "hydrogen chemistry", anyway? As far as the mainstream is concerned, "hydrogen chemistry" is very straightforward, hardly deserving of a whole branch of chemistry all to itself. It's only got one damn electron for heaven's sake! It's the only element for which solutions have been found to its wave equations. What's more, your remarks suggest strongly that you haven't even read his published work, from which it's abundantly clear that he does have a very good grasp of "hydrogen chemistry" as it's generally understood. He just happens to have something new to add to it.
> And the comments of at least one mathematics professor (see book comments at Amazon) indicate his mathematics is merely good enough to prevent his investors from personally double checking him.
I read Ulrich Gerlach's assassination piece too. His criticisms deserve serious consideration. I'm not really able to assess the criticisms about the maths as it'd take more time than I have. But some of it may well be a failure of interpretation. Both holes in the maths and misinterpretation are likely to occur at this stage as it's a new theory and it hasn't even been submitted to referees yet. The criticisms may not be wholly significant; they don't necessarily kill the theory even if they're valid. If they did, then quantum mechanics, supersymmetry, string theory and inflation theory would never have got past first base (and the Linux kernel would never have got past version 0.1 either ;o). Michio Kaku doesn't swallow Mills' theory either, and I respect Kaku (I have a couple of his books). But even eminent scientists are sometimes wrong, especially when defending something. And Kaku doesn't attack the maths. I'd be surprised if he's even bothered to look. BTW, there's another comment [amazon.com] there now written by former Assistant Secretary of Energy Shelby T Brewer, who is also now involved with BlackLight Power. It lists Mills' impressive credentials as a scientist, which must be genuine whatever you think of Brewer's objectivity. If Mills turns out to be right it will set the whole of 20th Century physics and chemistry on its head. It would mean that people like Kaku have been wrong all their lives. So you have to expect that the establishment would fight it anyway. Remember that Einstein wasn't believed either until he had verified experimental results.
> One thing is clear - he is one heck of a salesman. Persons with such personalities can often convince large groups of relatively uneducated people to follow them. He smells just like a snake oil salesman to me.
I might believe you if Mills showed signs of raking in all the cash he could before someone exposed him. But he's not, he's just taken enough to fund the business. This suggests he expects to make money out of his discovery in a more conventional manner. Also note that some of the scientists who've criticised his theory have gone out of their way to state that they believe he is sincere, just misguided.
Scientists generally don't do that if they think someone is a fraud; they tend to come out and say so, or else just leave it unsaid. PT Barnum was right. From the sublime to the ridiculous. Perhaps you understand Barnum's theory better than you understand Mills' theory and thus place more faith in it. Personally I don't think it's valid to compare Mills with Barnum. • Get serious. No one cares about how many Barbies there are, for crying out loud. But Pokemon are important. Yours, Snorlax - WKiernan@concentric.net • It's a step forward in computational physics, but it's not a surprise: the theory agrees with experiment. Quantum electrodynamics is right yet again. The compute power required was large. They had to use Blue Pacific, probably the unclassified machine, which has 1344 PowerPC 604 CPUs. I wonder how much machine time was required, and how tightly coupled the computation is. • Re: how about keeping Imperial for consumer stuff, metric for science applications? "Just fine"? Actually, I'm not so sure about that. I mean yes, obviously, any system would be fine, so long as it's consistent and makes some sense. But much like different countries working together, there are other issues that might appear. For instance, when people move from being youths learning to drive to university students in science, this puts an extra learning curve on their backs. Or when students in elementary school are learning about ... I dunno, maybe percentages or something, and they see ".2 oz fat" on the milk carton (guessing) and their textbook has nothing but examples of "5 g fat", and their answer must be in metric... again, why force this onto their heads? What is actually gained, long term, from maintaining two separate systems? Short term, those who grew up with Imperial will be comfortable, but long term we'd be maintaining the status quo -- people complaining about metric not being what they're used to. In perpetuity. That just doesn't make sense to me. - Cam MacLeod • I'm unsatisfied with other replies to this posting, and wanted to put in my two cents without stepping on toes. Sue me for discontinuity. Until now, the Schrödinger equations have been beyond the ability of physicists to solve for anything but very simple situations. The exceptions have been where they've played tricks on the math and got solutions for more complicated, but very specific, situations. This article describes a more general approach to the problem, which could conceivably make the Schrödinger equations not just predictive but actually useful. So, the real importance is that QM can now be used much more generally as a predictive tool, which has a number of incredible applications, mostly in high-energy chemistry. Watch for advances in materials, especially in displays, lights, and, I'd wager, explosives. The mentions of etching silicon and fluorescent tubes are more to make the ideas real, AFAICT. Other posts about the feel of pool and whatnot miss the point, which is the unfortunate consequence of the "real life" examples that get added to pop-sci articles. • The result of this solution is a set of probability functions describing the likely locations of the outbound electrons within the calculated domain.
The approximation used (limiting the calculation domain with an arbitrary boundary) is most likely done by describing the areas outside the boundary in terms of the probability wave approaching the boundary (once the electrons are far enough from the proton, treat it as a QED two-electron problem) and removing the region outside the boundary from the calculation domain. The result of this hack is an arbitrarily close approximation of the actual electron probability functions. In QED, you don't generally look for an exact prediction of the electron's location. Most of the time you are looking for a usefully accurate model. The breakthrough is finding a way to make the problem computationally tractable (which is done by the "large distance" approximation). Finding a way to calculate the large distance in terms of the near distance in all cases is the big deal here. There are two difficulties with any Newtonian three-body model (where gravity is the dominant force). The first is gathering complete information (where are all of the interacting objects?). The second is computer error, including the position rounding error (at 32 bits, or whatever) and the sampling error (how often does the computer recalculate *all* of the vectors based on updated positions?). Ballistic models that describe interacting particles in terms of probability functions can be much more successful, but run into difficulty during interpretation (the electron really can be in five different places; the spaceship cannot). By reducing distant bodies to planar gravity fields (the large-distance approximation), we end up with spacecraft like the Galileo probe, which made it to Jupiter with only a few small course corrections to make up for the slight inaccuracies in the approximated model. But beware: it's still just a useful model. Don't expect to hit the center ring halfway across the solar system with your eyes closed based on any model. You'll need to correct (or update) your model with empirical data to make it actually work. So, to finally answer your question: the breakthrough is going the other way (from Newtonian three-body to QED). Before this, the QED models didn't have any way to reduce the large-distance wave functions to a useful approximation. Now they do. If the large-distance approximation used can be applied or extended to more complex interactions, our models of quantum interactions will be dramatically improved and our ability to describe complex probabilistic events will become correspondingly more confident. Regards, Ross
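[The sampling-error point above is easy to see numerically. Here is a minimal sketch (my own toy example, not from the article): a body on a circular orbit around a fixed centre, integrated with explicit Euler. Recomputing the force vectors more often, i.e. shrinking the step, shrinks the energy drift. Units are hypothetical, with GM = 1.]

import numpy as np

def energy(r, v):
    # total energy per unit mass for GM = 1
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(r)

def drift(dt, t_end=10.0):
    r = np.array([1.0, 0.0])              # start on a circular orbit of radius 1
    v = np.array([0.0, 1.0])              # circular-orbit speed for GM = 1
    e0 = energy(r, v)
    for _ in range(int(t_end / dt)):
        a = -r / np.linalg.norm(r) ** 3   # gravity recomputed at this sample
        r, v = r + v * dt, v + a * dt     # explicit Euler step from the old state
    return abs(energy(r, v) - e0)

for dt in (1e-2, 1e-3, 1e-4):
    print(f"dt = {dt:g}  energy drift = {drift(dt):.2e}")

[The drift falls roughly in proportion to the step size, which is the sampling error the poster describes; the rounding error is the floor you hit once the steps get small enough.]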
• Did you look at the independent test result links? Those documents are all on his site, posted and authored by him. Not one is hosted by another institution. I don't know enough math to say one way or another whether his theories are based on sound reasoning, but the independent confirmation is suspect, to say the least. • If you look again at the article, you'll see that the submitter used the 'blowhard' comment, not Hemos. If anything, Hemos is guilty of not editing out a derogatory comment. • It's been more than 10 years since the original Pons and Fleischmann announcement of cold fusion. These guys (BlackLight) are one of the myriad branches from that bizarre root. Personally I find it amusing that they deny [blacklightpower.com] a relationship with cold fusion, while the cold fusion advocates point to them [padrak.com] as a success story. • Well, off the top of my head, the following comment seems currently unprovable (certainly not proven by your comments). That is of course the basis of your statement. I personally do not see the inherent truth of it. Additionally, I believe the original poster would be satisfied with a simulation of a subset of the universe, or of some small fictional universe following the same laws as our own. After all, poor word choice aside, it sounds more like a plea for more realistic games, not for a simulation of a galaxy 5 million light-years away from the game's setting. • I give ALL consideration within reason to all sorts of hypotheses. If I think something is at ALL possible I will let it alone and see how it develops. I would NEVER attack something from my field in public if there were even a remote chance it was correct. My livelihood is my reputation amongst my peers. That's what I meant when I said: sensible scientists will keep quiet until there is irrefutable evidence to support Mills' theory. Or irrefutable evidence that he's wrong. I don't think there's either, yet. • I can follow the math. It is poorly written and makes leaps and bounds far over and beyond those pointed out by Gerlach. It is either obtuse INTENTIONALLY, or because Mills never understood the necessity for others to follow his work in order for it to be accepted. Or because there are misconceptions embodied within it that Mills himself is missing. Can you be more explicit? Or am I just supposed to believe an AC who may or may not be able to follow the maths? • "Mystical" is an entirely subjective judgement. The spirit of true science, if confronted with the inescapable fact that the world rested on the back of a giant turtle, would be expected to ask, "OK, so what does it consume to stay alive?" Science is not about supporting what you think is "the right way the universe should work"; it is about making observations and constructing possible explanations based on those observations. If the universe really does look random, blurry, and oddly mystical, oh well. People make value judgements; science shouldn't. • Because a qubit can be "in between" on and off, one qubyte can hold a superposition of all 256 numbers between 0 and 255, or just a couple of them. This way, you could multiply all the numbers in one qubyte by another qubyte, which on a 32-bit quantum computer would let you operate on up to 4294967296 numbers in a single operation, although you would have to use every number between one and 4294967296 to get that many. As you can see, a working quantum computer would blow conventional computers out of the water. The only problem is that when you're working this small, just about anything can interfere with the processor, and as far as I know the most advanced working quantum computer can AND and OR two qubits together.
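[The superposition claim above can be made concrete with a plain state-vector simulation. This is a hedged sketch in NumPy, not a real quantum computer: Hadamard gates on 8 qubits give amplitudes on all 256 basis states at once, yet reading the register out returns a single value.]

import numpy as np

n = 8
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |00000000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

for q in range(n):                              # put every qubit in superposition
    t = np.moveaxis(state.reshape([2] * n), q, 0)
    t = np.tensordot(H, t, axes=1)              # apply H along qubit q's axis
    state = np.moveaxis(t, 0, q).reshape(2 ** n)

probs = np.abs(state) ** 2                      # uniform: 1/256 for each value
probs /= probs.sum()                            # guard against float round-off
sample = np.random.default_rng(0).choice(2 ** n, p=probs)
print(probs[0], sample)                         # 0.00390625 and one single outcome

[So the 256 values are all "there" in the amplitudes, and a unitary acts on all of them simultaneously, but a measurement collapses the register to one result. That gap is exactly why quantum speedups need cleverer tricks than brute-force parallelism.]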
Problem with solving the colatitude equation

1. Apr 24, 2007 #1
Show that the principal quantum number n limits the values of the angular momentum quantum number l such that l <= n-1. Calculate the degeneracy of the energy levels of the hydrogen atom when only the Coulomb force is taken into account.

The attempt at a solution: I wrote the 3D Schrödinger equation (in spherical coordinates) and solved it by separating the variables in the form Psi(r, phi, theta) = R(r) * P(theta) * F(phi), so I have 3 equations for 3 quantum numbers. Then I solved the azimuthal equation to get the constant Cphi, which is Cphi = -ml^2 (ml is the magnetic quantum number). So I now have the colatitude equation:

(sin(theta)/P) * d/dtheta [ sin(theta) dP/dtheta ] + Cr sin^2(theta) = -Cphi, with Cphi = -ml^2.

So far I'm right, I think. I know that if I solve this equation it will show that l <= n-1. I also know that I have to do this by using a polynomial expansion, but I don't know how to do that here. I hope somebody can help me! Thx
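[Not a full solution, but a sketch of the counting for the degeneracy half of the question, ignoring spin since only the Coulomb force is mentioned: once l <= n-1 is established, each l contributes the 2l+1 values m_l = -l, ..., l at the same energy E_n, so

$$ g(n) = \sum_{l=0}^{n-1} (2l+1) = 2\,\frac{n(n-1)}{2} + n = n^2 . $$

The bound l <= n-1 itself comes out of the polynomial (series) expansion mentioned above: the series solving the radial equation must terminate for the solution to be normalizable, and termination forces n >= l+1.]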
Classical Mechanics WITHOUT determinism?

1. May 24, 2005 #1
First of all, a disclaimer. I am making NO vouch for the validity of this paper. I just found it an amusing read and thought it's a twist on what normally happens. Typically, we tend to think that classical mechanics, even classical statistics, is completely deterministic, and that only when we get to the quantum scale would such a thing be an issue for debate. But here, it seems to also apply to classical mechanics, where you get rather "predictable" dynamics even when one does not start off with a deterministic system. Have a go at it and see what you think...

2. May 24, 2005 #2
A note. Many people tend to think that classical mechanics is deterministic and quantum mechanics is indeterministic. Yet it is exactly the reverse. The Schrödinger equation is purely deterministic, and indeterminism arises only when there is contact with a measurer. The measurer is a classical object, as emphasized by Bohr. If the measuring apparatus is a quantum object, the whole system is a quantum one and is perfectly described by a deterministic Schrödinger equation. It is precisely for large systems that the deterministic formulation fails (some are called LPS). Usual classical dynamics (e.g. Newtonian) is a formulation for classical systems in which the random components of the equations are omitted: whenever one uses F = ma, one is really using <F> = m <a>. This is the theory of physical ensembles (not to be confused with the theory of Gibbs ensembles). And another note: the typical interpretation of classical statistical mechanics here sounds very poor. It appears that you are supporting the old "coarse-grained" interpretation of statistical mechanics, which is really outdated in the framework of current physical and mathematical research on the topic. About the paper: it is another paper about a very old idea, inspired by the supposition from the first decades of QM that a wave function is a kind of wave in the sense of classical physics. Any attempt to derive QM from a supposed underlying classical determinism, which has never been found, is condemned to failure. All such attempts omit a lot of important stuff. Note the emphasis on a single particle to obtain psi(x,t). People abandoned that kind of ugly approach, for conceptual and philosophical reasons, decades ago (around 1950, if I remember correctly). Not to mention that this approach contradicts special relativity (as required in relativistic quantum field theory), does not accommodate the existence of spin, violates the dual representations (for example, in the momentum eigenbasis), and so forth. The role of the position operator conflicts with Landau's uncertainty rules for photons and relativistic electrons, etc. And finally we obtain a rare, rather incorrect, restricted formulation that makes no new predictions; the assumption of the existence of underlying deterministic trajectories is therefore simply an act of faith.
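[The <F> = m <a> remark is easy to illustrate numerically. Below is a minimal sketch, my own toy example with made-up parameters, not taken from the paper under discussion: an ensemble of trajectories driven by a constant force plus a zero-mean random force. Each trajectory is erratic, but the ensemble mean tracks the deterministic Newtonian result.]

import numpy as np

rng = np.random.default_rng(0)
m, F = 1.0, 2.0                       # mass and constant force (arbitrary units)
dt, steps, n_traj = 1e-3, 1000, 5000

x = np.zeros(n_traj)
v = np.zeros(n_traj)
for _ in range(steps):
    xi = rng.normal(0.0, 5.0, n_traj)   # zero-mean random force component
    v += (F + xi) / m * dt
    x += v * dt

t = steps * dt
print(x.mean())                       # ensemble average of the noisy dynamics
print(0.5 * (F / m) * t ** 2)         # deterministic x(t) = (1/2)(F/m) t^2
print(x.std())                        # individual trajectories scatter around it

[The first two printed numbers agree to within the sampling error, while the per-trajectory spread stays finite; dropping the random component and writing F = ma is exactly the ensemble-averaged description the post refers to.]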
Assume that we have massless spin-1/2 particles. The Dirac spinor is a solution of the Dirac equation: $$ p^\mu \gamma_\mu u_\pm(p) = 0, \quad p^2 = 0$$ The subscripts $\pm$ denote two different solutions, belonging to two different helicities. Is it possible to find a representation for the $\gamma_\mu$ so that the following relation holds true? $$ u_+ = \left( u_- \right)^* $$ I know this is possible for (polarization) vectors, i.e. one may choose $\epsilon^\mu_+ = \left( \epsilon^\mu_- \right)^* $, but I guess it is not for spinors (in the Weyl representation it obviously is not). So here is my question: is it possible to find a representation of the Dirac gamma matrices so that the spinors of different helicities are related by complex conjugation?

For massless particles, helicity coincides with chirality, thus you ask to find the basis such that $$ \psi_{\pm}=\left( \psi_{\mp}\right) ^{\star},\quad\gamma_{5}\psi_{\pm}=\pm\psi_{\pm}. $$ Using the decomposition of the hermitian operator: $$ \left( \gamma_{5}\right) _{ij}=\left( \psi_{+}\right) _{i}\left( \psi_{+}^{\star}\right) _{j}-\left( \psi_{-}\right) _{i}\left( \psi_{-}^{\star}\right) _{j}=\left( \psi_{+}\right) _{i}\left( \psi_{-}\right) _{j}-\left( \psi_{-}\right) _{i}\left( \psi_{+}\right) _{j}, $$ we find that $\gamma_{5}$ should be an antisymmetric matrix. Since $\gamma_{5}$ is a hermitian operator, this implies that all components of $\gamma_{5}$ should be pure imaginary. In the Majorana basis all $\gamma$-matrices are pure imaginary, and since $$ \gamma_{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}, $$ it follows that $\gamma_{5}$ is also pure imaginary (and thus antisymmetric): $$ \gamma_{5}=\left( \begin{array}{cc} \sigma_{2} & \\ & -\sigma_{2} \end{array} \right) . $$ Update. Off topic. From my personal point of view, I would never use a basis for the polarization wave function such that $\psi_{j,m}^{\star}=\psi_{j,-m}$. The reason is the following: sometimes it is extremely convenient (see the explanation below) to associate $\psi^{\star}$ with the "time-reversed" wave function, as is suggested by the Schrödinger equation and the anti-unitary nature of the operation of time reversal. The polarization wave function of a particle of spin $s>0$ is a contravariant rank-$2s$ symmetric spinor: $$ \psi_{s,\sigma}=\phi^{i_{1}\ldots i_{2s}}, $$ in whose indices $1$ occurs $s+\sigma$ times and $2$ occurs $s-\sigma$ times. Complex conjugation leads to the covariant spinor: $$ \phi_{i_{1}\ldots i_{2s}}^{\left( rev\right) }=\left( \phi^{i_{1}\ldots i_{2s}}\right) ^{\star}. $$ Using the antisymmetric tensor $\epsilon^{ij}$ (the metric tensor for $SU\left( 2\right) $) one can construct again a contravariant spinor, so that the sign changes as many times as there are twos among the indices: $$ \psi_{s,-\sigma}^{\left( rev\right) }=T\left( \psi_{s,\sigma}\right) =\psi_{s,\sigma}^{\star}\left( -1\right) ^{s-\sigma}. $$ For example, when the time-reversal operation is repeated: $$ T^{2}\left( \psi_{s,\sigma}\right) =\left( -1\right) ^{2s}\psi_{s,\sigma}, $$ which immediately leads to the well-known Kramers theorem: for a system with a half-integral sum of the spins of particles, in an arbitrary electric field, all the levels must be doubly degenerate, and complex conjugate spinors correspond to two different states with the same energy. For details, see § 60, L.D. Landau and E.M. Lifshitz, Quantum Mechanics: Non-relativistic Theory.
Therefore a (polarization) wave function is usually normalized by the condition: $$ \psi_{s,\sigma}^{\star}=\psi_{s,-\sigma}\left( -1\right) ^{s-\sigma}.\quad\quad(1) $$ For example, for $s=1$ the polarization vectors have the form: $$ \varepsilon_{\pm1}=\frac{\mp i}{\sqrt{2}}\left( 1,\pm i,0\right) ,\quad\varepsilon_{0}=i\left( 0,0,1\right) , $$ see also Eqs. (28.7-28.9) in L.D. Landau and E.M. Lifshitz, Quantum Mechanics: Non-relativistic Theory. However, some people find the normalization (1) too complicated.

This is called a Majorana representation: in this convention for the gamma matrices, you make sure that all the gamma matrices are pure imaginary.
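[A quick numerical check of the answers above. The sketch below uses plain NumPy; the particular Majorana representation chosen is one common choice among several, all related by unitary changes of basis. It verifies the Clifford algebra, that all four matrices are pure imaginary, and that gamma_5 comes out Hermitian, antisymmetric, and equal to diag(sigma_2, -sigma_2).]

import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

# One common Majorana representation (signature +---); every entry is imaginary.
g = [np.block([[Z, s2], [s2, Z]]),
     np.block([[1j * s3, Z], [Z, 1j * s3]]),
     np.block([[Z, -s2], [s2, Z]]),
     np.block([[-1j * s1, Z], [Z, -1j * s1]])]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        anti = g[mu] @ g[nu] + g[nu] @ g[mu]          # {gamma^mu, gamma^nu}
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
    assert np.allclose(g[mu].real, 0)                 # pure imaginary

g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
assert np.allclose(g5, g5.conj().T)                   # Hermitian
assert np.allclose(g5, -g5.T)                         # antisymmetric
print(np.round(g5, 12))                               # prints diag(sigma_2, -sigma_2)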
Syllabus for physical science June 2011 I. Mathematical Methods of Physics Dimensional analysis. Vector algebra and vector calculus. Linear algebra, matrices, Cayley-Hamilton Theorem. Eigenvalues and eigenvectors. Linear ordinary differential equations of first & second order, Special functions (Hermite, Bessel, Laguerre and Legendre functions). Fourier series, Fourier and Laplace transforms. Elements of complex analysis, analytic functions; Taylor & Laurent series; poles, residues and evaluation of integrals. Elementary probability theory, random variables, binomial, Poisson and normal distributions. Central limit theorem. II. Classical Mechanics Newton's laws. Dynamical systems, Phase space dynamics, stability analysis. Central force motions. Two body Collisions - scattering in laboratory and Centre of mass frames. Rigid body dynamics- moment of inertia tensor. Non-inertial frames and pseudoforces. Variational principle. Generalized coordinates. Lagrangian and Hamiltonian formalism and equations of motion. Conservation laws and cyclic coordinates. Periodic motion: small oscillations, normal modes. Special theory of relativity- Lorentz transformations, relativistic kinematics and mass–energy equivalence. III. Electromagnetic Theory Electrostatics: Gauss's law and its applications, Laplace and Poisson equations, boundary value problems. Magnetostatics: Biot-Savart law, Ampere's theorem. Electromagnetic induction. Maxwell's equations in free space and linear isotropic media; boundary conditions on the fields at interfaces. Scalar and vector potentials, gauge invariance. Electromagnetic waves in free space. Dielectrics and conductors. Reflection and refraction, polarization, Fresnel's law, interference, coherence, and diffraction. Dynamics of charged particles in static and uniform electromagnetic fields. IV. Quantum Mechanics Wave-particle duality. Schrödinger equation (time-dependent and time-independent). Eigenvalue problems (particle in a box, harmonic oscillator, etc.). Tunneling through a barrier. Wave-function in coordinate and momentum representations. Commutators and Heisenberg uncertainty principle. Dirac notation for state vectors. Motion in a central potential: orbital angular momentum, angular momentum algebra, spin, addition of angular momenta; Hydrogen atom. Stern-Gerlach experiment. Time-independent perturbation theory and applications. Variational method. Time dependent perturbation theory and Fermi's golden rule, selection rules. Identical particles, Pauli exclusion principle, spin-statistics connection. V. Thermodynamic and Statistical Physics Laws of thermodynamics and their consequences. Thermodynamic potentials, Maxwell relations, chemical potential, phase equilibria. Phase space, micro- and macro-states. Micro-canonical, canonical and grand-canonical ensembles and partition functions. Free energy and its connection with thermodynamic quantities. Classical and quantum statistics. Ideal Bose and Fermi gases. Principle of detailed balance. Blackbody radiation and Planck's distribution law. VI. Electronics and Experimental Methods Semiconductor devices (diodes, junctions, transistors, field effect devices, homo- and hetero-junction devices), device structure, device characteristics, frequency dependence and applications. Opto-electronic devices (solar cells, photo-detectors, LEDs). Operational amplifiers and their applications. Digital techniques and applications (registers, counters, comparators and similar circuits). A/D and D/A converters. 
Microprocessor and microcontroller basics. Data interpretation and analysis. Precision and accuracy. Error analysis, propagation of errors. Least squares fitting. I. Mathematical Methods of Physics Green's function. Partial differential equations (Laplace, wave and heat equations in two and three dimensions). Elements of computational techniques: roots of functions, interpolation, extrapolation, integration by trapezoid and Simpson's rule, solution of first order differential equations using the Runge-Kutta method. Finite difference methods. Tensors. Introductory group theory: SU(2), O(3). II. Classical Mechanics Dynamical systems, phase space dynamics, stability analysis. Poisson brackets and canonical transformations. Symmetry, invariance and Noether's theorem. Hamilton-Jacobi theory. III. Electromagnetic Theory Dispersion relations in plasma. Lorentz invariance of Maxwell's equations. Transmission lines and wave guides. Radiation from moving charges and dipoles and retarded potentials. IV. Quantum Mechanics Spin-orbit coupling, fine structure. WKB approximation. Elementary theory of scattering: phase shifts, partial waves, Born approximation. Relativistic quantum mechanics: Klein-Gordon and Dirac equations. Semi-classical theory of radiation. V. Thermodynamic and Statistical Physics First- and second-order phase transitions. Diamagnetism, paramagnetism, and ferromagnetism. Ising model. Bose-Einstein condensation. Diffusion equation. Random walk and Brownian motion. Introduction to nonequilibrium processes. VI. Electronics and Experimental Methods Linear and nonlinear curve fitting, chi-square test. Transducers (temperature, pressure/vacuum, magnetic fields, vibration, optical, and particle detectors). Measurement and control. Signal conditioning and recovery. Impedance matching, amplification (op-amp based, instrumentation amp, feedback), filtering and noise reduction, shielding and grounding. Fourier transforms, lock-in detector, box-car integrator, modulation techniques. High frequency devices (including generators and detectors). VII. Atomic & Molecular Physics Quantum states of an electron in an atom. Electron spin. Spectrum of helium and alkali atoms. Relativistic corrections for energy levels of the hydrogen atom, hyperfine structure and isotopic shift, width of spectral lines, LS & JJ couplings. Zeeman, Paschen-Back & Stark effects. Electron spin resonance. Nuclear magnetic resonance, chemical shift. Franck-Condon principle. Born-Oppenheimer approximation. Electronic, rotational, vibrational and Raman spectra of diatomic molecules, selection rules. Lasers: spontaneous and stimulated emission, Einstein A & B coefficients. Optical pumping, population inversion, rate equation. Modes of resonators and coherence length. VIII. Condensed Matter Physics Bravais lattices. Reciprocal lattice. Diffraction and the structure factor. Bonding of solids. Elastic properties, phonons, lattice specific heat. Free electron theory and electronic specific heat. Response and relaxation phenomena. Drude model of electrical and thermal conductivity. Hall effect and thermoelectric power. Electron motion in a periodic potential, band theory of solids: metals, insulators and semiconductors. Superconductivity: type-I and type-II superconductors. Josephson junctions. Superfluidity. Defects and dislocations. Ordered phases of matter: translational and orientational order, kinds of liquid crystalline order. Quasicrystals. IX.
Nuclear and Particle Physics Basic nuclear properties: size, shape and charge distribution, spin and parity. Binding energy, semi-empirical mass formula, liquid drop model. Nature of the nuclear force, form of nucleon-nucleon potential, charge-independence and charge-symmetry of nuclear forces. Deuteron problem. Evidence of shell structure, single-particle shell model, its validity and limitations. Rotational spectra. Elementary ideas of alpha, beta and gamma decays and their selection rules. Fission and fusion. Nuclear reactions, reaction mechanisms, compound nuclei and direct reactions. Classification of fundamental forces. Elementary particles and their quantum numbers (charge, spin, parity, isospin, strangeness, etc.). Gell-Mann-Nishijima formula. Quark model, baryons and mesons. C, P, and T invariance. Application of symmetry arguments to particle reactions. Parity non-conservation in weak interaction. Relativistic kinematics.
Nondiffracting Accelerating Wave Packets of Maxwell's Equations
Phys. Rev. Lett. 108, 163901

Figure 1: Accelerating beams of the Maxwell equations. (a) The forward-accelerating beam of TE polarization (α=150) reaches a trajectory which is almost vertical, while exhibiting nondiffracting (shape-preserving) acceleration. (b) The paraxial approximation yields an Airy beam which accelerates only for a short distance before breaking up. (c), (d) The forward propagating beam of TM polarization (α=150). The power transfers from the x polarization (c) to the z polarization (d). All figures are simulated with λ=1μm and in a square of 35μm×35μm. The diagram in the center shows the Fourier plane where the beam is confined in a circle describing the propagating plane waves. The top (bottom) half stands for the forward (backward) beam.

The research on accelerating beams has been growing rapidly since it was brought into the domain of optics in 2007 [1]. An ideal paraxial accelerating beam propagates along a parabolic trajectory while preserving its amplitude structure indefinitely, being a nondiffracting wave packet. The effect is caused by interference: the waves emitted from all points on the Airy beam profile maintain a propagation-invariant Airy structure, which shifts laterally along a parabola. This beautiful phenomenon has led to many intriguing ideas, ranging from guiding particles along a curve [2] and generating self-bending plasma channels [3] to recent studies on shape-preserving accelerating beams in nonlinear optics [4–7]. In addition, it is possible to find beams accelerating along arbitrary curves, at the expense of these beams not being shape-preserving [8]. All of these beams are solutions of the paraxial wave equation, where the beam trajectory is fundamentally limited to small (paraxial) angles, and when it bends to larger angles the beam is no longer shape preserving. As a result, the transverse acceleration of Airy beams is always restricted to small angles. This restriction is a serious limitation, because spatial acceleration means that the propagation angle continuously increases and eventually, after physically relevant distances, the beam trajectory inevitably reaches a steep angle, and the beam dynamics as a whole always goes into the nonparaxial regime. That is, a paraxial accelerating beam is moving along a curve which bends ever faster, and eventually it is bound to break its own domain of existence. Several attempts to find an accelerating beam beyond the paraxial regime have shown a complete breakup: part of the beam becomes evanescent while the other part quickly deforms while exhibiting only very small trajectory bending [9]. Most notable is a recent pioneering work [10] where the caustics method is stretched from (paraxial) ray optics to the nonparaxial regime, predicting beams that bend to large angles. However, as we discuss below, the caustics method cannot provide shape-preserving (nondiffracting) solutions. The recent interest in nonparaxial accelerating beams brings about a series of fundamental questions: Can a beam bend itself to large nonparaxial angles? If it does, would such a nonparaxial accelerating beam be nondiffracting (shape preserving) as it is in the paraxial limit?
Such a beam should not be restricted by any physical parameters, and should be able to bend from a launch angle of zero all the way to angles close to 90°, perpendicular to the original direction of propagation. The dynamics of light is governed by the Maxwell equations; are there any accelerating nondiffracting solutions to the Maxwell equations? Here, we present nonparaxial spatially accelerating shape-preserving beams. These accelerating beams are the complete set of general solutions to the full Maxwell equations, for any monochromatic fields. These nonparaxial accelerating beams propagate along a circular trajectory, therefore asymptotically reaching 90° angles, completing a quarter of a circle, after which diffraction broadening takes over and the beams spread out. As a validity test, we prove that taking these beams to the paraxial limit recovers the known paraxial Airy beams. Thus, an accelerating beam of electromagnetic waves (a wave packet satisfying Maxwell's equations) is the beam we present here, and the Airy beam found in [1] is actually our solution taken to the paraxial limit. We find the solutions for both TE and TM polarizations, thereby generalizing to arbitrary polarization. Importantly, what we present here is a new class of nondiffracting solutions to the Helmholtz equation: solutions that self-bend, unlike all previously known nondiffracting solutions of the Helmholtz equation (Bessel beams), which propagate on a straight trajectory [11,12]. Generally, the beams we find exhibit shape-preserving bending with subwavelength features, and the Poynting vector of their main lobe displays a turn of more than 90°. We show that these accelerating beams are self-healing, and analyze their properties when they are emitted from finite apertures. Additionally, we show that any given circular trajectory can support an entire family of accelerating solutions, whereby their superpositions form periodic accelerating beams. Finally, the fact that our self-bending beams are solutions to the full wave equation makes this work applicable beyond optics, practically for any time-harmonic wave obeying the simple (Helmholtz-type) wave equation: from sound and acoustic waves to surface waves in fluids, and more. We begin from Maxwell's equations in vacuum, for a TE-polarized electric field E¯=EY(x,z,t)y^, obeying the Helmholtz equation

∂²E/∂x² + ∂²E/∂z² + (ω²/c²)E = 0.   (1)

Equation (1) has full symmetry between the x and z coordinates. Hence, it is logical to seek a shape-preserving beam whose trajectory resides on a circle. As the initial condition, we use E(x,z=0,t) and let the beam propagate in the forward +z direction. Of course, such a beam cannot turn back to propagate in the -z direction; hence, the largest bending expected is a trajectory parallel to the x direction. That is, the beam will asymptotically complete circular motion on a quarter of a circle. To seek such motion which is also shape-invariant (diffraction-free), it is convenient to transform Eq. (1) to the rest frame of the beam. Since the motion is on a circle, we transform to polar coordinates r, θ, by taking z=r sin(θ), x=r cos(θ), and seek shape-preserving solutions of the form E=U(r)e^{iαθ-iωt}, where α is some real number and ω is the temporal frequency. The result is a monochromatic beam which is shape preserving along any circular curve. The radial function U(r) must satisfy

r² U''(r) + r U'(r) + [(ω/c)² r² − α²] U(r) = 0.   (2)

The exact solutions of Eq. (2) are the Bessel functions U=Jα[(ω/c)r].
[Actually, there is an additional family of solutions, also from the Bessel family, but those diverge at the origin; hence, we will not discuss them here.] A related method was used recently by Hacyan [13] to find electromagnetic waves that accelerate relativistically in time. To unravel the physics of our solution, it must be transformed back to the coordinates x, z, and separated into forward and backward propagating waves. We do it through the Fourier transform of the beam, which is confined to reside on a circle of radius k=ω/c=2π/λ in the kx-kz plane (see the diagram in the center of Fig. 1). The top half of the diagram (positive kz) gives the forward propagating part of the beam, while the bottom half (negative kz) gives the backward propagating part. The forward propagating part is the actual accelerating beam from a single source at z=0. Importantly, this beam does not have the Bessel structure, but only asymptotically half of it [14]. This can be calculated by integration over the top half of the circle (angles 0 to π), as sketched on the top half of the diagram in Fig. 1:

E(x,z) = ∫₀^π dθ' e^{iαθ'} e^{ik(x cos θ' + z sin θ')},   (3)

where the result, Jα⁺, is "half a Bessel" (because integrating the expression from -π to π yields the α-order Bessel function Jα(kr)). Here, α can be any real number (not necessarily an integer) since we are not restricted to periodic boundary conditions (as the beam never completes a full circle). Figure 1(a) shows the accelerating solution of Eq. (3), for λ=1μm and α=150. It is important to emphasize that the axes have the same scale in all figures in this Letter, unlike the usual representation of Airy beams, as appears in many papers, where the curvatures are usually exaggerated to highlight the beam bending by making use of unequal scales. Figure 1(a) shows that the beam is indeed nondiffracting (shape preserving), but only up to an angle close to 90°. We should discuss the reason for this limit: mathematically, it is clear that a Bessel solution is exact and shape preserving. However, the physical beam, generated by the initial condition at the plane z=0, is only "half a Bessel". In what sense is this beam nondiffracting? When α>0, the exact Bessel beam is antisymmetric at z=0 with respect to the origin: two main lobes are positioned at opposite sides of x=0, and their oscillating tails stretch toward plus and minus infinity on their right and left sides, respectively. The phase of the beam is what makes the beam antisymmetric: it follows from the anticlockwise rotation of the beam. This rotation makes the right half of the beam propagate forward and to the left [Fig. 1(a)], while the left half propagates backward and to the right. The latter contradicts the physical boundary conditions. When cutting the backward propagating waves in Fourier space, we are left with an almost-exact Bessel shape confined to the right side of the x axis. Only when the bending gets close to 90° does the nondiffracting property break, where the two "half Bessel" wave packets were supposed to meet and interfere [14]. Mathematically, larger α (larger angular momentum) gives a better separation, hence also a more accurate nondiffracting propagation.
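[The half-circle integral in Eq. (3) is simple enough to evaluate numerically. The sketch below is my own illustration; the discretization and the sampled trajectory points are arbitrary choices. It superposes forward-propagating plane waves with the e^{iαθ} weighting and tracks the field magnitude along the circle r = α/k, where the main lobe is expected to sit.]

import numpy as np

lam = 1.0                               # wavelength (μm), as in the figures
k = 2 * np.pi / lam
alpha = 150

theta = np.linspace(0.0, np.pi, 4001)   # forward half of the k-space circle
dtheta = theta[1] - theta[0]
weight = np.exp(1j * alpha * theta)

def E(x, z):
    # Riemann-sum version of Eq. (3): plane waves e^{ik(x cosθ + z sinθ)}
    phase = 1j * k * (x * np.cos(theta) + z * np.sin(theta))
    return np.sum(weight * np.exp(phase)) * dtheta

r0 = alpha / k                          # expected radius of the circular path
for phi in (0.2, 0.5, 0.8, 1.1, 1.4):   # polar angle along the quarter circle
    x, z = r0 * np.cos(phi), r0 * np.sin(phi)
    print(f"phi = {phi:.1f}  |E| = {abs(E(x, z)):.3f}")

[Away from the edges of the quarter circle the printed magnitudes stay nearly constant, which is the shape-preserving bending described in the text; pushing phi toward 0 or π/2 shows the degradation where the two half-Bessel packets would have to meet.]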
The solution for the TM polarization is found through a similar procedure for the magnetic field, and from it the TM electric field components follow (Eq. (4)). This TM solution is of special interest: each of its polarization components is not shape preserving on its own, as shown in Figs. 1(c) and 1(d), but the total intensity of the TM beam does preserve its shape. That is, as the beam bends by 90°, the power is transferred from the x component to the z component of the field. This shows that the accelerating beam not only bends but actually rotates, similar to the phase front of the beam, which also rotates by 90°, staying normal to the beam trajectory at all times. A natural extension comes from the superposition of the TE and TM beams, which yields a vectorial solution of general polarization.

Figure 2: Properties of the Bessel-like accelerating beams. (a) The Poynting vector of the TE/TM polarization, when the beam is launched with an initial angle and bends by 130°. (b) The self-healing property "reviving" the first lobe that was initially cut out. The dashed black curve describes the original trajectory of the first lobe of a perfect accelerating beam. (c) A periodic accelerating beam composed of two jointly accelerating components α=150 and α=165 with equal coefficients. All figures are simulated with α=150, λ=1μm and in a square of 30μm×45μm (a) or 35μm×35μm (b), (c).

Generalizing Eq. (3) to arbitrary polarization gives the full family of vectorial 3D accelerating beams. We still choose the trajectory of acceleration in the xz plane, but allow a plane wave in y. This leaves three functions in k space that relate to the electric field via Eqs. (5) [similar to Eq. (3)], where k_xz, k_y must satisfy the conditions k_xz cos(k_θ) f_x(k_θ) + k_y f_y(k_θ) + k_xz sin(k_θ) f_z(k_θ) = 0 and k_xz² + k_y² = k². Each polarization is therefore composed of a superposition of solutions of Eq. (3) in the TE and the TM polarization, multiplied by a plane wave which only changes the effective wave number from k to k_xz. Superpositions of fields with different k_y should give beams which are confined in the y direction, extending the solutions to 3D. To highlight the impressive angle of bending, we note that it is actually possible to double the angle by launching the beam at an angle opposite to the direction of bending. See Fig. 2(a) for an example of a beam that is launched at an angle of -65° and subsequently bends all the way to 65°, completing a turn of 130°. In theory, the maximal bending is limited to 180° asymptotically, because the boundary conditions allow only forward propagating waves. In practice, we measure the bending in Fig. 2(a) by the difference between the Poynting vectors of the main lobe at the incoming plane and at the outgoing plane. One can prove that the Poynting vector of the TM polarization is exactly the same. Having found the accelerating solutions of Maxwell's equations, it is interesting to examine the small-angle limit of the expression in Eq. (3), and see if it recovers the paraxial Airy solution. To do that, we recall a property of the Bessel function stating that the maximum of the main lobe occurs close to x=α/k. Thus, to make the approximation in the correct range, we take x=α/k+Δx and assume Δx and z to be small. We also assume α to be very large, so that the exponent oscillates very fast and cancels out most of the contribution of the nonparaxial regime. In the limit of large α, we expand the cosine and sine in a Taylor series around π/2, up to third order. The result is an integral that is solved analytically to yield an expression proportional to the Airy function Ai.
Note the z³ term characteristic of the paraxial Airy beam, indicating acceleration along a parabolic trajectory. A direct consequence is that there is a unique relation between the parameter α and the acceleration (from the trajectory Δx = -gz²/2), which is simply g=k/α. Hence, the acceleration g is smaller for larger α, which makes sense because higher orders of α give circular motion with larger radii, so the radial acceleration is indeed smaller. Another conclusion is that if we try to approximate an accelerating beam which is a superposition of several α's, we get a superposition of Airy beams with different accelerations. This is why the paraxial accelerating beam must be a single Airy function, with a uniquely defined acceleration, whereas the nonparaxial accelerating beams can support a family of beams of different shapes (different α's) which all accelerate along the same trajectory. Finally, for small values of α, we find accelerating beams that cannot exist in the paraxial regime at all. Those beams are mainly made up of very high spatial frequencies; hence, trying to construct an Airy beam with these spatial frequency constituents gives rise to a beam that breaks up after a very short propagation distance. See Fig. 1(b) for an example, with α=150, where the parabolic acceleration survives for a very short distance only. It is now worthwhile to compare the known features of the Airy beam to our new nonparaxial accelerating beam. To this end, we notice that the self-healing effect [15] also exists in our nonparaxial beams: see Fig. 2(b) for an example, where the main lobe is initially cut out (blocked). The second lobe gets more power and its trajectory bends more, to replace the first lobe; see the dotted black line in Fig. 2(b) marking the trajectory of the original first lobe. When this "replacement" occurs, each lobe is shifted by a steeper bending to replace the lobe on its left. Another property common to both the Airy beam and the nonparaxial accelerating beam is that both are not square integrable; hence, they carry infinite power. In this context, launching either of them from a finite aperture yields a beam that accelerates over a finite propagation range only. As with the Airy beam, a longer tail in the nonparaxial case allows more lobes to exhibit shape-preserving propagation for larger distances. Interestingly, the reason that the nonparaxial accelerating beam carries infinite power comes from only two singular points in k space, which are placed at the edges (-k and k) bordering the regime of evanescent waves. Removing these singular points leaves an accelerating beam of finite power, since then the k-space spectrum becomes square integrable (unlike the paraxial spectrum of the Airy beam, which is unbounded). Physically, some part of the spatial spectrum will always be removed, since the edges of the k space represent waves moving transversely, perpendicular to z. At the same time, a rather short tail (of about twice the radius of the trajectory) is sufficient to make the first lobes bend to a deep nonparaxial angle (more than 50°). Consequently, a nonparaxial accelerating beam launched from a finite aperture (thus carrying finite power) will bend on a circular curve while maintaining a virtually propagation-invariant shape for the majority of the physically accessible quarter of a circle.
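[Two of the statements above are easy to sanity-check numerically with SciPy; this is a hedged check with my own choice of scan range: that the main lobe of Jα(kr) peaks close to r = α/k, and hence that the trajectory radius grows with α exactly as the relation g = k/α requires.]

import numpy as np
from scipy.special import jv

k = 2 * np.pi                    # k for λ = 1, same units as the figures
for alpha in (50, 150, 450):
    r = np.linspace(0.9 * alpha / k, 1.3 * alpha / k, 200001)
    r_peak = r[np.argmax(jv(alpha, k * r))]   # first (main-lobe) maximum
    print(f"alpha = {alpha:4d}  r_peak = {r_peak:8.3f}  alpha/k = {alpha / k:8.3f}")

[The printed peak positions sit a few percent above α/k, with a relative offset that shrinks like α^(-2/3), so the circular trajectory radius is essentially α/k, and the radial acceleration k/α falls as α grows, just as the paraxial-limit argument concludes.]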
Finally, another difference between the Airy beam and the nonparaxial accelerating beam is that, unlike the Airy beam, the Bessel-like nonparaxial accelerating beams cannot be simply scaled (by squeezing or stretching the x axis) to control the acceleration curve. Rather, in the nonparaxial case different values of α imply different orders of the Bessel-like function, which affects the widths of the lobes indirectly. When coming to examine the nonparaxial accelerating solutions of Eqs. (3) and (4), we note that any superposition of these solutions, with different values of α, also gives an accelerating beam which is propagating on the same curved trajectory. Such superposition accelerates in unison, but it is not shape preserving: it is a breather, with a periodicity depending on the difference between the values of α. Thus, an infinite family of periodic accelerating beams can be generated from superpositions. Figure 2(c) displays such a periodic accelerating beam. Note that some of the periodic accelerating beams have finite power, due to the destructive interference at the tails. Mathematically, this happens when the singular points in k space ( kθ=0, π) are canceled by summing two or more waves, as in the case of the x component of the TM polarization (which is also a legitimate periodic solution for TE). Many other examples of finite power periodic accelerating beams can be generated from Eq. (5) for mixed polarizations. Before closing, we note that accelerating beams can also be found through methods relying on caustics [8,10,16]. A recent paper [10] proposed using the caustic method to generate nonparaxial accelerating beams. This method is based on ray-optic principles but taken into nonparaxial angles. This way, accelerating beams moving along an arbitrary curve can reach large bending angles. However, while this method constrains the main lobe to accelerate along the predesigned curve, it does not determine how the rest of the beam is propagating. In practice, the beam is considered accelerating, but it is not nondiffracting: after some distance, diffraction effects smear the beam structure and acceleration stops [8,10]. Hence, such “caustic-designed accelerating beams” are different from the paraxial Airy beams, from nonlinear accelerating beams [4–7], and from the nonparaxial accelerating beams described here, which are all nondiffracting: for all of these, the entire beam is accelerating with a propagation-invariant amplitude, whereas caustic-designed accelerating beams are not meant to be propagation invariant. To summarize, we have found nonparaxial accelerating beams and nonparaxial periodically oscillating accelerating beams. These beams are the full vector solutions of Maxwell’s equation for shape-preserving accelerating beams. Moreover, in their scalar form, these beams are the exact solutions for nondispersive accelerating wave packets of the simple and most common wave equation describing time-harmonic waves. As such, the work presented in this Letter has profound implications to almost any linear wave system in nature, ranging from sound waves and surface waves in fluids to many kinds of classical waves. In this spirit, it is now clear that the phenomenon of accelerating waves is not the result of a specific unusual behavior of the Schrödinger equation (which is equivalent to the paraxial wave equation), as one may think from reading the first paper pioneering this subject [17]. 
In a similar vein, this work shows that nonparaxial nondiffracting beams are no longer necessarily Bessel-like beams [11,12], which always propagate on a straight line, but now include also self-bending beams. To complete the picture, future work should study the possibility of 3D accelerating beams, including those with trajectories that do not lie in a single plane. In practical terms, this work brings accelerating-beam optics into the subwavelength regime, through the less-than-wavelength features of our solutions, facilitating higher resolution for particle manipulation.

1. G. A. Siviloglou and D. N. Christodoulides, Opt. Lett. 32, 979 (2007); G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, Phys. Rev. Lett. 99, 213901 (2007).
3. P. Polynkin, M. Kolesik, J. V. Moloney, G. A. Siviloglou, and D. N. Christodoulides, Science 324, 229 (2009).
4. I. Kaminer, M. Segev, and D. N. Christodoulides, Phys. Rev. Lett. 106, 213903 (2011).
5. A. Lotti, D. Faccio, A. Couairon, D. G. Papazoglou, P. Panagiotopoulos, D. Abdollahpour, and S. Tzortzakis, Phys. Rev. A 84, 021807 (2011).
6. I. Dolev, I. Kaminer, A. Shapira, M. Segev, and A. Arie, Phys. Rev. Lett. 108, 113903 (2012).
7. R. Bekenstein and M. Segev, Opt. Express 19, 23706 (2011).
8. E. Greenfield, M. Segev, W. Walasik, and O. Raz, Phys. Rev. Lett. 106, 213902 (2011).
9. A. V. Novitsky and D. V. Novitsky, Opt. Lett. 34, 3430 (2009); L. Carretero, P. Acebal, S. Blaya, C. García, A. Fimia, R. Madrigal, and A. Murciano, Opt. Express 17, 22432 (2009).
12. J. A. Stratton, Electromagnetic Theory (Classic Reissue, IEEE Press, New Jersey, 2007).
13. S. Hacyan, J. Opt. 13, 105710 (2011).
14. Similar to a temporally accelerating pulse, which is also split into two halves corresponding to positive and negative group velocities; see I. Kaminer, Y. Lumer, M. Segev, and D. N. Christodoulides, Opt. Express 19, 23132 (2011).
15. J. Broky, G. A. Siviloglou, A. Dogariu, and D. N. Christodoulides, Opt. Express 16, 12880 (2008).
16. Y. Kaganovsky and E. Heyman, Opt. Express 18, 8440 (2010).
18. F. Courvoisier, A. Mathis, L. Froehly, R. Giust, L. Furfaro, P.-A. Lacourt, M. Jacquot, and J. M. Dudley, arXiv:1202.3318v1.
The Book of Universes by John D. Barrow (2011) This book is twice as long and half as good as Barrow’s earlier primer, The Origin of the Universe. In that short book Barrow focused on the key ideas of modern cosmology – introducing them to us in ascending order of complexity, and as simply as possible. He managed to make mind-boggling ideas and demanding physics very accessible. This book – although it presumably has the merit of being more up to date (published in 2011 as against 1994) – is an expansion of the earlier one, an attempt to be much more comprehensive, but which, in the process, tends to make the whole subject more confusing. The basic premise of both books is that, since Einstein’s theory of relativity was developed in the 1910s, cosmologists and astronomers and astrophysicists have: 1. shown that the mathematical formulae in which Einstein’s theories are described need not be restricted to the universe as it has traditionally been conceived; in fact they can apply just as effectively to a wide variety of theoretical universes – and the professionals have, for the past hundred years, developed a bewildering array of possible universes to test Einstein’s insights to the limit 2. made a series of discoveries about our actual universe, the most important of which is that a) it is expanding b) it probably originated in a big bang about 14 billion years ago, and c) in the first few milliseconds after the bang it probably underwent a period of super-accelerated expansion known as the ‘inflation’ which may, or may not, have introduced all kinds of irregularities into ‘our’ universe, and may even have created a multitude of other universes, of which ours is just one If you combine a hundred years of theorising with a hundred years of observations, you come up with thousands of theories and models. In The Origin of the Universe Barrow stuck to the core story, explaining just as much of each theory as is necessary to help the reader – if not understand – then at least grasp their significance. I can write the paragraphs above because of the clarity with which The Origin of the Universe explained it. In The Book of Universes, on the other hand, Barrow’s aim is much more comprehensive and digressive. He is setting out to list and describe every single model and theory of the universe which has been created in the past century. He introduces the description of each model with a thumbnail sketch of its inventor. This ought to help, but it doesn’t because the inventors generally turn out to be polymaths who also made major contributions to all kinds of other areas of science. Being told a list of Paul Dirac’s other major contributions to 20th century science is not a good way for preparing your mind to then try and understand his one intervention on universe-modelling (which turned, in any case, out to be impractical and lead nowhere). Another drawback of the ‘comprehensive’ approach is that a lot of these models have been rejected or barely saw the light of day before being disproved or – more complicatedly – were initially disproved but contained aspects or insights which turned out to be useful forty years later, and were subsequently recycled into revised models. It gets a bit challenging to try and hold all this in your mind. In The Origin of the Universe Barrow sticks to what you could call the canonical line of models, each of which represented the central line of speculation, even if some ended up being disproved (like Hoyle and Gold and Bondi’s model of the steady state universe). 
Given that all of this material is pretty mind-bending, and some of it can only be described in advanced mathematical formulae, less is definitely more. I found The Book of Universes simply had too many universes, explained too quickly, and lost amid a lot of biographical bumf summarising people's careers or who knew who or contributed to whose theory. Too much information. One last drawback of the comprehensive approach is that quite important points – which are given space to breathe and sink in in The Origin of the Universe – are lost in the flood of facts in The Book of Universes. I'm particularly thinking of Einstein's notion of the cosmological constant, which was not strictly necessary to his formulations of relativity, but which Einstein invented and put into them solely in order to counteract the force of gravity and ensure his equations reflected the commonly held view that the universe was in a permanent steady state. This was a mistake, and Einstein is often quoted as admitting it was the biggest mistake of his career. In 1965 scientists discovered the cosmic background radiation, which proved that the universe began in an inconceivably intense explosion, that the universe was therefore expanding, and that the explosive, outward-propelling force of this bang was enough to counteract the contracting force of the gravity of all the matter in the universe without any need for a hypothetical cosmological constant. I understand this (if I do) because in The Origin of the Universe it is given prominence and carefully explained. By contrast, in The Book of Universes it was almost lost in the flood of information, and it was only because I'd read the earlier book that I grasped its importance.

The Book of Universes

Barrow gives a brisk recap of cosmology from the Sumerians and Egyptians, through the ancient Greeks' establishment of the system named after Ptolemy, in which the earth is the centre of the solar system, on through the revisions of Copernicus and Galileo, which placed the sun firmly at the centre of the solar system, on to the three laws of Isaac Newton, which showed how the forces which govern the solar system (and more distant bodies) operate. There is then a passage on the models of the universe generated by the growing understanding of heat and energy acquired by Victorian physicists, which led to one of the most powerful models of the universe, the 'heat death' model popularised by Lord Kelvin in the 1850s, in which, in the far future, the universe evolves to a state of complete homogeneity, where no region is hotter than any other and therefore there is no thermodynamic activity, no life, just a low buzzing noise everywhere. But this all happens in the first 50 pages and is just preliminary throat-clearing before Barrow gets to the weird and wonderful worlds envisioned by modern cosmology, i.e. from Einstein onwards. In some of these models the universe expands indefinitely; in others it will reach a peak expansion before contracting back towards a Big Crunch. Some models envision a static universe; in others it rotates like a top, while other models are totally chaotic without any rules or order. Some universes are smooth and regular, others characterised by clumps and lumps. Some are shaken by cosmic tides, some oscillate. Some allow time travel into the past, while others threaten to allow an infinite number of things to happen in a finite period. Some end with another big bang, some don't end at all.
And in only a few of them do the conditions arise for intelligent life to evolve.

The Book of Universes then goes on, in 12 chapters, to discuss – by my count – getting on for a hundred types or models of hypothetical universes, as conceived and worked out by mathematicians, physicists, astrophysicists and cosmologists from Einstein's time right up to the date of publication, 2011.

A list of names

Barrow namechecks and briefly explains the models of the universe developed by the following (I am undertaking this exercise partly to remind myself of everyone mentioned, partly to indicate to you the overwhelming number of names and ideas the reader is bombarded with):

• Aristotle
• Ptolemy
• Copernicus
• Giovanni Riccioli
• Tycho Brahe
• Isaac Newton
• Thomas Wright (1711-86)
• Immanuel Kant (1724-1804)
• Pierre Laplace (1749-1827) devised what became the standard Victorian model of the universe
• Alfred Russel Wallace (1823-1913) discussed the physical conditions of a universe necessary for life to evolve in it
• Lord Kelvin (1824-1907) material falls into the central region of the universe and coalesces with other stars to maintain power output over immense periods
• Rudolf Clausius (1822-88) coined the word 'entropy' in 1865 to describe the inevitable progress from ordered to disordered states
• William Jevons (1835-82) believed the second law of thermodynamics implies that the universe must have had a beginning
• Pierre Duhem (1861-1916) Catholic physicist, accepted the notion of entropy but denied that it implied the universe ever had a beginning
• Samuel Tolver Preston (1844-1917) English engineer and physicist, suggested the universe is so vast that different 'patches' might experience different rates of entropy
• Ludwig Boltzmann and Ernst Zermelo suggested the universe is infinite and is already in a state of thermal equilibrium, but just with random fluctuations away from uniformity, and our galaxy is one of those fluctuations
• Albert Einstein (1879-1955) his discoveries were based on insights, not maths: thus he saw that the problem with Newtonian physics is that it privileges an objective outside observer of all the events in the universe; one of Einstein's insights was to abolish the idea of a privileged point of view and emphasise that everyone is involved in the universe's dynamic interactions; thus gravity does not pass through a clear, fixed thing called space; gravity bends space. The American physicist John Wheeler once encapsulated Einstein's theory in two sentences: Matter tells space how to curve. Space tells matter how to move.
(quoted on page 52)

• Marcel Grossmann provided the mathematical underpinning for Einstein's insights
• Willem de Sitter (1872-1934) inventor of, among other things, the de Sitter effect, which represents the effect of the curvature of spacetime, as predicted by general relativity, on a vector carried along with an orbiting body – de Sitter's universe gets bigger and bigger for ever but never had a zero point; but then de Sitter's model contains no matter
• Vesto Slipher (1875-1969) astronomer who discovered the red shifting of distant galaxies in 1912, the first ever empirical evidence for the expansion of the universe
• Alexander Friedmann (1888-1925) Russian mathematician who produced purely mathematical solutions to Einstein's equations, devising models where the universe started out of nothing and expanded a) fast enough to escape the gravity exerted by its own contents and so will expand forever, or b) will eventually succumb to the gravity of its own contents, stop expanding and contract back towards a big crunch. He also speculated that this process (expansion and contraction) could happen an infinite number of times, creating a cyclic series of bangs, expansions and contractions, then another bang, etc.

A graphic of the oscillating or cyclic universe (from Discovery magazine)

• Arthur Eddington (1882-1944) most distinguished astrophysicist of the 1920s
• George Lemaître (1894-1966) first to combine an expanding-universe interpretation of Einstein's equations with the latest data about redshifting, and to show that the universe of Einstein's equations would be very sensitive to small changes – his model is close to Eddington's, so that it is often called the Eddington-Lemaître universe: it is expanding, curved and finite but doesn't have a beginning
• Edwin Hubble (1889-1953) provided solid evidence of the redshifting (moving away) of distant galaxies, a main plank in the whole theory of a big bang, inventor of Hubble's Law:
• Objects observed in deep space – extragalactic space, 10 megaparsecs (Mpc) or more – are found to have a redshift, interpreted as a relative velocity away from Earth
• This Doppler-shift-measured velocity of various galaxies receding from the Earth is approximately proportional to their distance from the Earth for galaxies up to a few hundred megaparsecs away
• Richard Tolman (1881-1948) took Friedmann's idea of an oscillating universe and showed that the increased entropy of each universe would accumulate, meaning that each successive 'bounce' would get bigger; he also investigated what 'lumpy' universes would look like where matter is not evenly spaced but clumped: some parts of the universe might reach a maximum and start contracting while others wouldn't; some parts might have had a big bang origin, others might not have
• Arthur Milne (1896-1950) showed that the tension between the outward exploding force posited by Einstein's cosmological constant and the gravitational contraction could actually be described using just Newtonian mathematics: 'Milne's universe is the simplest possible universe with the assumption that the universe is uniform in space and isotropic', a 'rational' and consistent geometry of space – Milne labelled the assumption of Einsteinian physics that the universe is the same in all places the Cosmological Principle
• Edmund Fournier d'Albe (1868-1933) posited that the universe has a hierarchical structure from atoms to the solar system and beyond
• Carl Charlier (1862-1934) introduced a mathematical description of a never-ending hierarchy of clusters
• Karl Schwarzschild (1873-1916) suggested that the geometry of the universe is not flat as Euclid had taught, but might be curved as in the non-Euclidean geometries developed by the mathematicians Riemann, Gauss, Bolyai and Lobachevski in the early 19th century
• Franz Selety (1893-1933) devised a model for an infinitely large hierarchical universe which contained an infinite mass of clustered stars filling the whole of space, yet with a zero average density and no special centre
• Edward Kasner (1878-1955) a mathematician interested solely in finding mathematical solutions to Einstein's equations, Kasner came up with a new idea, that the universe might expand at different rates in different directions, in some parts it might shrink, changing shape to look like a vast pancake
• Paul Dirac (1902-84) developed a Large Number Hypothesis, that the really large numbers which are taken as constants in Einstein's and other astrophysics equations are linked at a deep undiscovered level, among other things abandoning the idea that gravity is a constant: soon disproved
• Pascual Jordan (1902-80) suggested a slight variation of Einstein's theory which accounted for a varying constant of gravitation as though it were a new source of energy and gravitation
• Robert Dicke (1916-97) developed an alternative theory of gravitation
• Nathan Rosen (1909-95) young assistant to Einstein in America with whom he authored a paper in 1936 describing a universe which expands but has the symmetry of a cylinder, a theory which predicted the universe would be washed over by gravitational waves
• Ernst Straus (1922-83) another young assistant to Einstein with whom he developed a new model, an expanding universe like those of Friedmann and Lemaître but which had spherical holes removed like the bubbles in an Aero, each hole with a mass at its centre equal to the matter which had been excavated to create the hole
• Eugene Lifshitz (1915-85) in 1946 showed that very small differences in the uniformity of matter in the early universe would tend to increase, an explanation of how the clumpy universe we live in evolved from an almost but not quite uniform distribution of matter – as we have come to understand that something like this did happen, Lifshitz's calculations have come to be seen as a landmark
• Kurt Gödel (1906-78) posited a rotating universe which didn't expand and, in theory, permitted time travel!
• Hermann Bondi, Thomas Gold and Fred Hoyle collaborated on the steady state theory of a universe which is growing but remains essentially the same, fed by the creation of new matter out of nothing
• George Gamow (1904-68)
• Ralph Alpher and Robert Herman in 1948 showed that the ratio of the matter density of the universe to the cube of the temperature of any heat radiation present from its hot beginning is constant if the expansion is uniform and isotropic – they calculated the current radiation temperature should be 5 degrees Kelvin – 'one of the most momentous predictions ever made in science'
• Abraham Taub (1911-99) made a study of all the universes that are the same everywhere in space but can expand at different rates in different directions
• Charles Misner (b.1932) suggested 'chaotic cosmology', i.e. that no matter how chaotic the starting conditions, Einstein's equations prove that any universe will inevitably become homogeneous and isotropic – disproved by the smoothness of the background radiation.
Misner then suggested the Mixmaster universe, the most complicated interpretation of the Einstein equations, in which the universe expands at different rates in different directions and the gravitational waves generated by one direction interfere with all the others, with infinite complexity

• Hannes Alfvén devised a matter-antimatter cosmology
• Alan Guth (b.1947) in 1981 proposed a theory of 'inflation', that milliseconds after the big bang the universe underwent a swift process of hyper-expansion: inflation answers at a stroke a number of technical problems prompted by conventional big bang theory; but it had the unforeseen implication that, though our region is smooth, parts of the universe beyond our light horizon might have grown from other areas of inflated singularity and have completely different qualities
• Andrei Linde (b.1948) extrapolated that the inflationary regions might create sub-regions in which further inflation might take place, so that a potentially infinite series of new universes spawn new universes in an 'endlessly bifurcating multiverse'. We happen to be living in one of these bubbles which has lasted long enough for the heavy elements and therefore life to develop; who knows what's happening in the other bubbles?
• Ted Harrison (1919-2007) British cosmologist, speculated that super-intelligent life forms might be able to develop and control baby universes, guiding the process of inflation so as to promote the constants required for just the right speed of growth to allow stars, planets and life forms to evolve. Maybe they've done it already. Maybe we are the result of their experiments.
• Nick Bostrom (b.1973) Swedish philosopher: if universes can be created and developed like this then they will proliferate until the odds are that we are living in a 'created' universe and, maybe, are ourselves simulations in a kind of multiverse computer simulation

Although the arrival of Einstein and his theory of relativity marks a decisive break with the tradition of Newtonian physics, and comes at page 47 of this 300-page book, it seemed to me the really decisive break comes on page 198 with the publication of Alan Guth's theory of inflation. Up till the Guth breakthrough, astrophysicists and astronomers appear to have focused their energy on the universe we inhabit. There were theoretical digressions into fantasies about other worlds and alternative universes, but they appear to have been personal foibles, and everyone agreed they were diversions from the main story. However, the idea of inflation, while it solved half a dozen problems caused by the idea of a big bang, seems to have spawned a literally fantastic series of theories and speculations.

Throughout the twentieth century, cosmologists grew used to studying the different types of universe that emerged from Einstein's equations, but they expected that some special principle, or starting state, would pick out one that best described the actual universe. Now, unexpectedly, we find that there might be room for many, perhaps all, of these possible universes somewhere in the multiverse. (p.254)

This is a really massive shift, and it is marked by a shift in the tone and approach of Barrow's book. Up till this point it had jogged along at a brisk rate, namechecking a steady stream of mathematicians and physicists and explaining how their successive models of the universe followed on from or varied from each other. Now this procedure comes to a grinding halt while Barrow enters a realm of speculation.
He discusses the notion that the universe we live in might be a fake, evolved from a long sequence of fakes, created and moulded by super-intelligences for their own purposes. Each of us might be mannequins acting out experiments, observed by these super-intelligences. In which case what value would human life have? What would be the definition of free will? Maybe the discrepancies we observe in some of the laws of the universe have been planted there as clues by higher intelligences? Or maybe, over vast periods of time, and countless iterations of new universes, the laws they first created for this universe where living intelligences could evolve have slipped, revealing the fact that the whole thing is a facade. These super-intelligences would, of course, have computers and technology far in advance of ours, etc.

I felt like I had wandered into a prose version of The Matrix and, indeed, Barrow apologises for straying into areas normally associated with science fiction (p.241).

Imagine living in a universe where nothing is original. Everything is a fake. No ideas are ever new. There is no novelty, no originality. Nothing is ever done for the first time and nothing will ever be done for the last time… (p.244)

And so on. During this 15-page-long fantasy the handy sequence of physicists comes to an end as he introduces us to contemporary philosophers and ethicists who are paid to think about the problem of being a simulated being inside a simulated reality. Take Robin Hanson (b.1959), a research associate at the Future of Humanity Institute of Oxford University, who, apparently, advises us all that we ought to behave so as to prolong our existence in the simulation or, hopefully, ensure we get recreated in future iterations of the simulation.

Are these people mad? I felt like I'd been transported into an episode of The Outer Limits, or was back with my schoolfriend Paul, lying in a summer field getting stoned and wondering whether dandelions were a form of alien life that were just biding their time till they could take over the world. Why not, man?

I suppose Barrow has to include this material, and explain the nature of the anthropic principle (p.250), and go on to a digression about the search for extra-terrestrial life (p.248), and discuss the 'replication paradox' (in an infinite universe there will be infinite copies of you and me in which we perform an infinite number of variations on our lives: what would happen if you came face to face with one of your 'copies'? p.246) – because these are, in their way, theories – if very fantastical theories – about the nature of the universe, and his stated aim is to be completely comprehensive.

The anthropic principle

Observations of the universe must be compatible with the conscious and intelligent life that observes it. The universe is the way it is, because it has to be the way it is in order for life forms like us to evolve enough to understand it.

Still, it was a relief when he returned from vague and diffuse philosophical speculation to the more solid territory of specific physical theories for the last forty or so pages of the book. But it was very noticeable that, as he came up to date, the theories were less and less attached to individuals: modern research is carried out by large groups. And increasingly he is describing the swirl of ideas in which cosmologists work, which often don't have or need specific names attached.
And this change is denoted, in the texture of the prose, by an increase in the passive voice, the voice in which science papers are written: 'it was observed that…', 'it was expected that…', and so on.

• Edward Tryon (b.1940) American particle physicist, speculated that the entire universe might be a virtual fluctuation from the quantum vacuum, governed by the Heisenberg Uncertainty Principle that limits our simultaneous knowledge of the position and momentum, or the time of occurrence and energy, of anything in Nature.
• George Ellis (b.1939) created a catalogue of 'topologies' or shapes which the universe might have
• Dmitri Sokolov and Victor Shvartsman in 1974 worked out what the practical results would be for astronomers if we lived in a strange-shaped universe, for example a vast doughnut shape
• Yakov Zeldovich and Andrei Starobinsky in 1984 further explored the likelihood of various types of 'wraparound' universes, predicting the fluctuations in the cosmic background radiation which might confirm such a shape
• In 1967 the Wheeler-DeWitt equation – a first attempt to combine Einstein's equations of general relativity with the Schrödinger equation that describes how the quantum wave function changes with space and time
• The 'no boundary' proposal – in 1982 Stephen Hawking and James Hartle used 'an elegant formulation of quantum mechanics' introduced by Richard Feynman to calculate the probability that the universe would be found to be in a particular state. What is interesting is that in this theory time is not important; time is a quality that emerges only when the universe is big enough for quantum effects to become negligible; the universe doesn't technically have a beginning because the nearer you approach to it, time disappears, becoming part of four-dimensional space. This 'no boundary' state is the centrepiece of Hawking's bestselling book A Brief History of Time (1988). According to Barrow, the Hartle-Hawking model was eventually shown to lead to a universe that was infinitely large and empty, i.e. not our one.
• In 1986 Barrow proposed a universe with a past but no beginning, because all the paths through time and space would be very large closed loops
• In 1997 Richard Gott and Li-Xin Li took the eternal inflationary universe postulated above and speculated that some of the branches loop back on themselves, giving birth to themselves

The self-creating universe of J. Richard Gott III and Li-Xin Li

• In 2001 Justin Khoury, Burt Ovrut, Paul Steinhardt and Neil Turok proposed a variation of the cyclic universe which incorporated string theory and which they called the 'ekpyrotic' universe, 'ekpyrotic' denoting the fiery flame into which each universe plunges only to be born again in a big bang. The new idea they introduced is that two three-dimensional universes may approach each other by moving through the additional dimensions posited by string theory. When they collide they set off another big bang. These 3-D universes are called 'braneworlds', short for membrane, because they will be very thin
• If a universe existing in a 'bubble' in another dimension 'close' to ours had ever impacted on our universe, some calculations indicate it would leave marks in the cosmic background radiation, a stripey effect.
• In 1998 Andy Albrecht, João Magueijo and Barrow explored what might have happened if the speed of light, the most famous of cosmological constants, had in fact decreased in the first few milliseconds after the bang?
There is now an entire suite of theories known as 'Varying Speed of Light' cosmologies.

• Modern 'String Theory' only functions if it assumes quite a few more dimensions than the three we are used to. In fact some string theories require there to be more than one dimension of time. If there are really ten or 11 dimensions then, possibly, the 'constants' all physicists have taken for granted are only partial aspects of constants which exist in higher dimensions. Possibly, they might change, effectively undermining all of physics.
• The Lambda-CDM model is a cosmological model in which the universe contains three major components: 1. a cosmological constant denoted by Lambda (Greek Λ) and associated with dark energy; 2. the postulated cold dark matter (abbreviated CDM); 3. ordinary matter. It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos:
• the existence and structure of the cosmic microwave background
• the large-scale structure in the distribution of galaxies
• the abundances of hydrogen (including deuterium), helium, and lithium
• the accelerating expansion of the universe observed in the light from distant galaxies and supernovae

He ends with a summary of our existing knowledge, and indicates the deep puzzles which remain, not least the true nature of the 'dark matter' which is required to make sense of the expanding universe model. And he ends the whole book with a pithy soundbite. Speaking about the ongoing acceptance of models which posit a 'multiverse', in which all manner of other universes may be in existence, but beyond the horizon of what we can see, he says:

Copernicus taught us that our planet was not at the centre of the universe. Now we may have to accept that even our universe is not at the centre of the Universe.
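A number quoted early in the review – that the big bang happened about 14 billion years ago – can be roughly motivated from Hubble's Law alone: if recession velocity is proportional to distance (v = H0 · d), then 1/H0 has units of time and sets the age scale of the expansion. The short Python sketch below is my own back-of-the-envelope check, not anything taken from Barrow's book, and the value H0 = 70 km/s/Mpc is just a representative round figure.

    # Hubble time 1/H0: the rough age scale of a uniformly expanding universe.
    # H0 = 70 km/s/Mpc is an illustrative round value, not a measured result.
    KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
    SEC_PER_GYR = 3.156e16      # seconds in a billion years

    H0 = 70.0                   # km/s per Mpc
    H0_per_sec = H0 / KM_PER_MPC            # convert to units of 1/s
    hubble_time = 1.0 / H0_per_sec / SEC_PER_GYR

    print(f"Hubble time: about {hubble_time:.1f} billion years")  # ~14.0

The true Lambda-CDM age differs from 1/H0 by an order-one factor that depends on the matter and dark-energy densities, but for the parameters we actually measure the two happen to land very close together.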
Durham University Department of Physics

PHYS3621 Foundations of Physics 3A (2018/19)

Details of the module's prerequisites, learning outcomes, assessment and contact hours are given in the official module description in the Faculty Handbook - follow the link above. A detailed description of the module's content is given below, together with book lists and a link to the current library catalogue entries. For an explanation of the library's categorisation system see

Quantum Mechanics 3
Dr N. Gidopoulos
14 lectures + 5 workshops in Michaelmas Term

The course is defined by material contained in this book, and in particular the material defined in the syllabus below, where the numbers refer to the sections in the book.

Additional: Introduction to Quantum Mechanics, D. J. Griffiths (2nd edition, Pearson, 2005)
The course is partly defined by material contained in Chapter 6 of this book, which has been placed on duo.

Additional: Physics of Atoms and Molecules, B. H. Bransden and C. J. Joachain (2nd edition, Prentice Hall, 2003)
The course is partly defined by material contained in Chapter 7 of this book, which has been placed on duo.

Additional: Quantum Mechanics - An Experimentalist's Approach, E. D. Commins (Cambridge University Press, 2014)

1. Introduction to many-particle systems (wave function for systems of several particles, identical particles, bosons and fermions, Slater determinant) [10.1, 10.2]
2. The variational method (ground state, excited states, trial functions with linear variational parameters) [8.3]
3. The ground state of two-electron atoms [10.4]
4. The excited states of two-electron atoms (singlet and triplet states, exchange splitting, exchange interaction written in terms of spin operators) ["Atoms and Molecules", Ch. 7]
5. Complex atoms (electronic shells, the central-field approximation) [10.5]
6. Time-dependent perturbation theory [9.1]
7. Fermi's Golden Rule [9.2]
8. Periodic perturbations [9.3]
9. The Schrödinger equation for a charged particle in an EM field [11.1]
10. The dipole approximation [11.1]
11. Transition rates for harmonic perturbations [11.2]
12. Absorption and stimulated emission [11.2]
13. Einstein coefficients and spontaneous emission [11.3]
14. Selection rules for electric dipole transitions [11.4]
15. Lifetimes [11.5]
16. The interaction of particles with a static magnetic field (spin and magnetic moment, particle of spin one-half in a uniform magnetic field, charged particles in uniform magnetic fields; Larmor frequency; Landau levels) [12.2]
17. One-electron atoms in magnetic fields [12.3, Griffiths 6.4]

Nuclear and Particle Physics
Dr D. Maitre and Dr M. Bauer
29 lectures + 12 workshops in Michaelmas and Epiphany Terms

Required: Particles and Nuclei: An Introduction to the Physical Concepts, B. Povh, K. Rith, C. Scholz and C. Zetsche (Springer-Verlag, 6th Edition)
The course is defined by material contained in this book, in particular Chapters 1-17.
Syllabus: Fundamental interactions, symmetries and conservation laws, global properties of nuclei (nuclides, binding energies, semi-empirical mass formula, the liquid drop model, charge independence and isospin), nuclear stability and decay (beta-decay, alpha-decay, nuclear fission, decay of excited states), scattering (elastic and inelastic scattering, relativistic kinematics, cross sections, Fermi's golden rule, Feynman diagrams), geometric shapes of nuclei (kinematics, Rutherford cross section, Mott cross section, nuclear form factors), elastic scattering off nucleons (nucleon form factors), deep inelastic scattering (nucleon excited states, structure functions, the parton model), quarks, gluons, and the strong interaction (quark structure of nucleons, quarks in hadrons), particle production in electron-positron collisions (lepton pair production, resonances), phenomenology of the weak interaction (weak interactions, families of quarks and leptons, parity violation), exchange bosons of the weak interaction (real W and Z bosons), the Standard Model, quarkonia (analogy with the hydrogen atom and positronium, charmonium, quark-antiquark potential), hadrons made from light quarks (mesonic multiplets, baryonic multiplets, masses and decays), the nuclear force (nucleon-nucleon scattering, the deuteron, the nuclear force), the structure of nuclei (Fermi gas model, shell model, predictions of the shell model).

2 or 3 lectures in Easter Term, including one by each lecturer.

Teaching Methods
Lectures: 2 or 3 one-hour lectures per week
Workshops: These provide an opportunity to work through and digest the course material by attempting exercises, assisted by direct interaction with the workshop leaders. They also provide an opportunity for you to obtain further feedback on the self-assessed formative weekly problems. Students will be divided into four groups, each of which will attend one one-hour class every week. The workshops for this module are not compulsory.
Progress test: One compulsory formative progress test (to be completed over the Christmas break)
Problem exercises: See
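Since the syllabus opens with binding energies, the semi-empirical mass formula and the liquid drop model, a small illustrative sketch may help. The Python below implements the standard Bethe-Weizsäcker form of the formula; note that coefficient values differ between textbook fits (Povh et al. quote their own set), so the numbers here are one common choice rather than the course's official parameters.

    # Semi-empirical (Bethe-Weizsaecker) mass formula: nuclear binding energy.
    # Coefficients (MeV) are one common textbook fit; published fits vary slightly.
    A_V, A_S, A_C, A_SYM, A_P = 15.8, 18.3, 0.714, 23.2, 12.0

    def binding_energy(Z, A):
        """Binding energy in MeV for a nucleus with Z protons and mass number A."""
        N = A - Z
        volume = A_V * A                          # bulk term of the liquid drop
        surface = A_S * A ** (2 / 3)              # surface nucleons are less bound
        coulomb = A_C * Z * (Z - 1) / A ** (1 / 3)
        asymmetry = A_SYM * (N - Z) ** 2 / A
        if Z % 2 == 0 and N % 2 == 0:             # even-even: extra pairing binding
            pairing = A_P / A ** 0.5
        elif Z % 2 == 1 and N % 2 == 1:           # odd-odd: reduced binding
            pairing = -A_P / A ** 0.5
        else:                                     # odd A: no pairing term
            pairing = 0.0
        return volume - surface - coulomb - asymmetry + pairing

    # Iron-56 (Z = 26): should come out near the measured ~8.8 MeV per nucleon.
    print(binding_energy(26, 56) / 56)

Running this gives roughly 8.8 MeV of binding energy per nucleon for iron, near the peak of the binding-energy curve, which is the liquid-drop way of seeing why fusion releases energy below iron and fission above it.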
(Image: Darren Tunnicliff/Flickr)

Physicists Confirm That Time Moves Forward Even in The Quantum World

4 DEC 2015

For the first time, an experiment has confirmed that the laws of thermodynamics hold true even at the quantum level – which means that even in the quantum world, you can't unspill that glass of milk.

The reason time runs the way it does in our everyday lives is because of the second law of thermodynamics, which states that over time all systems become more disordered, or increase in entropy. And that process is irreversible, which is why time only moves forward. But theoretical physicists had predicted that on the quantum level, the process might go both ways. That's because when you start dealing with really, really small particles, the laws of physics – such as the Schrödinger equation – are 'time-symmetric', or reversible. "In theory, forward and backward microscopic processes are indistinguishable," writes Lisa Zyga for

Now physicists led by the Federal University of ABC in Brazil have performed an experiment that confirms that those theories don't match up with the reality, with thermodynamic processes remaining irreversible even in quantum systems. But they still don't understand why that's the case. "Our experiment shows the irreversible nature of quantum dynamics, but does not pinpoint, experimentally, what causes it at the microscopic level, what determines the onset of the arrow of time," one of the researchers, Mauro Paternostro from Queen's University Belfast, told "Addressing it would clarify the ultimate reason for its emergence."

So how do you go about testing the laws of thermodynamics in a quantum system? Basically, scientists need to be able to set up an isolated quantum system and observe the reversal of a natural process – which is trickier than it sounds.

For this experiment, the researchers used a bunch of carbon-13 atoms in liquid chloroform, and flipped their nuclear spins using an oscillating magnetic field. They then used another magnetic pulse to reverse the spins again. "If the procedure were reversible, the spins would have returned to their starting points – but they didn't," writes Zyga. Instead what they saw was that the alternating magnetic pulses were applied so quickly that sometimes the atoms' spin couldn't keep up, which led to the isolated system getting out of equilibrium.

The physicists confirmed that after the experiment the entropy had indeed increased, which shows that the thermodynamic process was irreversible, regardless of how small the particles involved were. All of that basically means that the one-way arrow of time exists even for the tiniest particles in the Universe, defying the microscopic laws of physics. And it suggests that something else is getting involved to stop quantum systems from being reversible. The physicists are now interested in figuring out what that is, and they believe the new insight into quantum systems could help advance the march towards quantum computers and other quantum devices.

"Any progress towards the management of finite-time thermodynamic processes at the quantum level is a step forward towards the realisation of a fully fledged thermo-machine that can exploit the laws of quantum mechanics to overcome the performance limitations of classical devices," said Paternostro.

For now though, we can take away from this research the knowledge that we can't move backwards in time, as much as we might want to. The past really has passed… even on the atomic scale.
The research has been published in Physical Review Letters.
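The NMR spin experiment itself can't be reproduced in a few lines, but the statistical bookkeeping behind the arrow of time can be sketched with a toy classical model (my own construction, not the researchers'): any 'mixing' dynamics described by a doubly stochastic matrix can only raise the Shannon entropy of a probability distribution, never lower it.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_doubly_stochastic(n, k=20):
        # Average of k random permutation matrices: every row and every column
        # sums to 1, which is what makes the entropy increase provable.
        m = np.zeros((n, n))
        for _ in range(k):
            m[np.arange(n), rng.permutation(n)] += 1.0 / k
        return m

    def shannon_entropy(p):
        q = p[p > 0]
        return float(-(q * np.log(q)).sum())

    p = np.zeros(8)
    p[0] = 1.0                       # perfectly ordered start: entropy zero
    M = random_doubly_stochastic(8)
    for step in range(6):
        print(step, round(shannon_entropy(p), 4))
        p = M @ p                    # one mixing step; entropy never decreases

The quantum case reported above is subtler, because there the microscopic law really is reversible and the entropy climbs anyway; the toy only illustrates why, once states get mixed, the original order is statistically unrecoverable.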
Thursday, April 25, 2019

Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?

The task of scientists is to find useful descriptions for our observations. By useful I mean that the descriptions are either predictive or explain data already collected. An explanation is anything that is simpler than just storing the data itself. A hypothesis that is not falsifiable through observation is optional. You may believe in it or not. Such hypotheses belong into the realm of religion. That much is clear, and I doubt any scientist would disagree with that. But troubles start when we begin to ask just what it means for a theory to be falsifiable. One runs into the following issues:

1. How long should it take to make a falsifiable prediction (or postdiction) with a hypothesis?

If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don't have enough time or people to do the work. My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding scientists forever, because falsifiable predictions become an inconvenient career risk.

2. How practical should a falsification be?

Some hypotheses are falsifiable in principle, but not falsifiable in practice. Even in practice, testing them might take so long that for all practical purposes they're unfalsifiable. String theory is the obvious example. It is testable, but no experiment in the foreseeable future will be able to probe its predictions. A similar consideration goes for the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still be collecting data when the heat death of the universe chokes your ambitious research agenda. Personally, I think predictions for observations that are not presently measurable are worthwhile, because you never know what future technology will enable. However, it makes no sense working out details of futuristic detectors. This belongs into the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.

3. What even counts as a hypothesis?

In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, eg symmetries or functional relations. But neither theories nor principles by themselves lead to predictions. To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on. In some circumstances, one can arrive at predictions that are "model-independent", which are the most useful predictions you can have.
I scare-quote "model-independent" because such predictions are not really independent of the model; they merely hold for a large number of models. Violations of Bell's inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein's equivalence principle is another such example.

Troubles begin if scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed, because that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, eg, by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories. This is not currently happening, because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.

4. Falsifiability is necessary but not sufficient.

A lot of hypotheses are falsifiable but just plain nonsense. Really, arguing that a hypothesis must be science just because you can test it is typical crackpot thinking. I previously wrote about this here.

5. Not all aspects of a hypothesis must be falsifiable.

It can happen that a hypothesis which makes some falsifiable predictions leads to unanswerable questions. An often-named example is that certain models of eternal inflation seem to imply that besides our own universe there exist an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics. If you take the theory at face value, then the question of what a particle does before you measure it is not answerable. There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong into scientific research. Scientists should leave such topics to philosophers or priests.

This post was brought on by Matthew Francis' article "Falsifiability and Physics" for Symmetry Magazine.

1. You might find interesting Lee McIntyre's book The Scientific Attitude (see my review:), which spends quite a lot of time on the demarcation issue (either between science and non-science, or science and pseudoscience).

2. Still, if a proposed mechanism as a hypothesis were especially odd but there were no other reasonable explanation yet, would crazy ideas also be considered as science? (Susskind's words adapted).

1. If the crazy ideas pass through experimental verification, then those ideas are considered proven science.

3. How would you classify an analysis of this kind? A nice argument to account for the Born rule within MWI ("Less is More: Born's Rule from Quantum Frequentism"). I believe it's fair to say the paper concludes that the experimental validity of the Born rule implies the universe is necessarily infinite. If we never observe a violation of the Born rule, would this hypothesis qualify as science?

1. This seems to be a perfect case of falsifiability.
If a Born rule violation is never observed, this is not proven, but there is a confidence level, maybe some form of statistical support, for this theory. If a Born rule is found to be violated, the theory is false, or false outside some domain of observation. Since a quantum gravity vacuum is so far not well defined, and there are ambiguities such as with Boulware vacua, it could be that the Born rule is violated in quantum gravity.

2. Science has defied categories since the start. If anyone is responsible for defining science in the modern context it is probably Galileo. Yet we have different domains of science that have different criteria for what is meant by testable. A paleontologist never directly experiences the evolution of life in the past, but these time capsules called fossils serve to lead to natural selection as the most salient understanding of speciation. Astronomy studies objects and systems at great distances, where we will only ever visit some tiny epsilon of the nearest with probes. So we have to make enormous inferences about things. From parallax of stars, to Cepheid variables, to the redshift of galaxies, to the luminosity of type Ia supernovae, we have this chain of meter sticks to measure the scale of the universe. We measure not the Higgs particle or the t-quark, but the daughter products from which we infer the existence of these particles and fields. We do not make observations that are as direct as some purists would like.

As Eusa and Helbig point out, there are aspects of modern theories which have unobservable aspects. Susskind does lean heavily on the idea of theories that are of a nature "it can't be any other way." General relativity predicts a lot of things about black hole interiors. That is a big toughy. No one will ever get close to a black hole that could be entered before being pulled apart; the closest is SgrA* at 27k light years away. Even if theoretical understanding of black hole interiors is confirmed in such a venture, that will remain a secret held by those who entered a black hole. It is plausible that aspects of black hole interiors can have some indirect physics with quantum black holes, but we will not be generating quantum black holes any day soon.

Testability and falsifiability are the gold standard of science. Theories that have their predictions confirmed are at the top. Quantum mechanics is probably the modern physics that is the most confirmed. General relativity has a good track record, and the detection of gravitational radiation is a big feather in the GR war bonnet. Other physics such as supersymmetry are really hypotheses and not theories in a rigorous sense. Supersymmetry also is a framework that one puts phenomenology on. So far all that phenomenology of light SUSY partners looks bad. When I started graduate school I was amazed that people were interested in SUSY at accelerator energies. At first I thought it was properly an aspect of quantum gravitation. I still think this may be the case. At best some form of split SUSY Arkani-Hamed proposes may play a role at low energy, which I think might be 1/8th SUSY or something. So these ideas are an aspect of science, but they have not risen to the level of a battle-tested theory.

IMO string theory really should be called the string hypothesis; it is not a theory – even if I might think there may be some stringy aspect to nature. There is a certain character in a small country sandwiched between Austria, Germany and Poland who has commented on this and ridicules the idea of falsifiability.
I just checked his webpage, and sure enough he has an entry on this. I suppose his pique on this is because he holds to an idea about the world that releasing 35 billion tons of CO_2 into the atmosphere annually has no climate influence. He upholds a stance that has been falsified; the evidence for AGW is simply overwhelming, and by now any scientific thinker should have abandoned climate denialism. Curious how religion and ideology can override reason, even with the best educated.

3. Lawrence Crowell wrote: This assumption may not be the case. The theory of Hawking radiation has been verified in supersonic wave-based analog black holes in the lab. Yes, entangled virtual items have been extracted from the vacuum and made real. The point to be explored in the assumptions that underlie science is: can such a system using Hawking radiation be engineered to greatly amplify the process of virtual energy realization, to the point where copious energy is extracted from nothing? When does such a concept become forbidden to consider as real, as a violation of the conservation of energy? In this forbidden case, it is not so much the basic science of the system, but the point where the quantity of its energy production becomes unthinkable, since the conservation of energy is inviolate.

4. There are optical analogues of black holes and Hawking radiation. Materials that trap light can be made to appear black-hole-like. This property can be tuned with a reference beam of some type. There is no "something from nothing" here. The energy comes from the energy employed to establish the BH analogue. Black holes have a time-like Killing vector, which in a Noether theorem sense means there is a constant of motion for energy. Mass-energy is conserved.

Another example: GR says a lot about what goes on inside the event horizon of a black hole, which (classically) is by definition non-observable. But of course this is not a mark against GR. Similarly, the unobservability of other universes in (some types of) the multiverse is not a mark against the theories which have the multiverse as a consequence, as long as they are testable in other ways.

It is not GR per se that is responsible for the event horizon (or the singularity) of the modern 'relativistic' black hole. Rather it is the Schwarzschild solution to the GR field equations that produces both of those characteristics. If Schwarzschild had incorporated the known fact that the speed of light varies with position in a gravitational field, we probably wouldn't be talking about black holes.

2. Here is another culprit: the renormalisation group itself, as David Tong says here (pdf p.62): "The renormalisation group isn't alone in hiding high-energy physics from us. In gravity, cosmic censorship ensures that any high curvature regions are hidden behind horizons of black holes while, in the early universe, inflation washes away any trace of what took place before. Anyone would think there's some kind of conspiracy going on...."

3. Phillip, I believe I have said this before but here we go again:

1) What happens inside a black hole horizon is totally observable. You just cannot come back and tell us about it.

2) We have good reason to think that the inside of a black hole does play a role for our observations and that, since the black hole evaporates, it will not remain disconnected.
For these reasons the situation with black holes is very different from that of postulating other universes which you cannot visit and that are and will remain forever causally disconnected.

4. I think pocket cosmologies and black hole interiors are actually fairly comparable. The interior of a black hole probably has some entanglement role with the exterior world. We might have some nonlocal phenomena with other pocket worlds, or these pockets may interact.

There is some data coming about that could upend a fair amount of physics and cosmology. The CMB data is compatible with a Hubble parameter H = 67 km/sec-Mpc, and data from galaxies out to z > 8 indicates H = 74 km/sec-Mpc. The error bars on these data sets do not overlap. Something odd is happening. This could mean possibly three things, four if I include something completely different we have no clue about.

The universe is governed by phantom energy. The evolution of the vacuum energy is dρ/dt = -3H(p + ρ) > 0 with p = wρ, and for w < -1 we have dρ/dt = -3H(1 + w)ρ > 0. This means the observable universe will in time cease to primarily exponentially expand, but will asymptote to some value in a divergent expansion. This is the big rip.

One possibility is this pocket world interacted with another at some point. If the two regions had different vacuum energy, then maybe some of that from the other pocket spilled into this world. The region we observe out to around 12 billion light years and beyond the cosmic horizon then had this extra vacuum energy fill in sometime in the first few hundred million years of this observable world.

Another is that quantum states in our pocket world have some entanglement with quantum states in the inflationary region or in other pocket regions. There may then be some process similar to the teleportation of states that is increasing the vacuum energy of this pocket. It might be this happens generally, or it occurs under different conditions the pocket is in within the inflationary spacetime. Susskind talks about entangled black holes, and I think more realistically there might be entanglement of a few quantum states on a black hole with some quantum states on another, maybe in another pocket world or cosmology, and then another set entangled with a BH elsewhere, and there is then a general partition of these states that is similar to an integer partition. If so, then it is not so insane to think of the vacuum in this pocket world entangled with vacua elsewhere.

The fourth possibility is one that no one has thought of.

At any rate, we are at the next big problem in cosmology. This discrepancy in the Hubble parameter from the CMB and from more recent galaxies is not going away.

5. Regarding the fourth possibility... The CMB tells us about the state that the universe existed in when it was very young. There is no reason to assume that the expansion of the universe is constant. The associated projections about the proportions of the various types of matter and energy that existed at that early time are no longer reliable, since the expansion rate of the universe has increased. It is likely that the associated proportions of the various types of matter and energy that exist now have changed from its primordial CMB state.
This implies that there is a vacuum-based variable process in place that affects the proportions of the various types of matter and energy as an ongoing activity, one that has always existed and that has caused the Hubble parameter derived from the CMB to differ from its current measured value.

6. We ultimately get back to this problem with what we mean by energy in general relativity. I wrote the following on stack exchange on how a restricted version of FLRW dynamics can be derived from Newton's laws.

The ADM space-plus-time approach to general relativity results in the constraints NH = 0 and N^iH_i = 0, which are the Hamiltonian and momentum constraints respectively. The Hamiltonian constraint, or what is energy on a contact manifold, means there is no definition of energy in general relativity for most spacetimes. The only spacetimes where energy is explicitly defined are those where there is an asymptotically flat region, such as black holes or Petrov type D solutions. In a Gauss's law setting for a general spacetime, there is no naturally defined surface where one can identify mass-energy. Either the surface can never contain all mass-energy, or the surface has diffeomorphic freedom that makes it inappropriate (coordinate-dependent or non-covariant, etc.) for defining an observable such as energy.

The FLRW equations though are a case with H = 0, with kinetic and potential parts

E = 0 = ½m(ȧx_0)^2 - (4πG/3)ρ(ax_0)^2 m

for a the scale factor on distance x = ax_0, where x_0 is some ruler distance chosen by the analyst and not nature. Further, ȧ = da/dt for time t on the Hubble frame. From there the FLRW equations can be seen (a numerical sketch of the resulting scalings appears at the end of this comment thread). The density has various dependencies: for matter ρ ~ a^{-3}, for radiation ρ ~ a^{-4}, and for the vacuum ρ is generally assumed to be constant.

The question is then what we mean by a vacuum. The Hamiltonian constraint has the quantum mechanical analogue in the Wheeler-DeWitt equation HΨ[g] = 0, which looks sort of like the Schrödinger equation HΨ[g] = i∂Ψ/∂t, but where i∂Ψ/∂t = 0. The time-like Killing vector is K_t = K∂/∂t, and we can think of this as a case where the timelike Killing vector is zero. This generally is the case, and the notable cases where K_t is not zero are with black holes. We can however adjust the WDW equation with the inclusion of a scalar field φ, and the Hamiltonian can be extended to include this with HΨ[g, φ] = 0, such that there is a local oscillator term with a local meaning to time. This however is not extended everywhere, unless one is happy with pseudotensors. The FLRW equation is sort of such a case; it is appropriate for the Hubble frame. One needs a special frame, usually tied to the global symmetry of the spacetime, to identify this. However, transformations can lead to troubles. Even with black holes there are Boulware vacua, and one has no clear definition of what is a quantum vacuum. I tend to think this may be one thing that makes quantum gravitation different from other quantum fields.

5. But isn't it advantageous for proponents of something like string theory to not have anything that is falsifiable... and continue with the hope the "results" are just around the corner... and for the $$ to keep flowing... forever?

6. >1. How long should it take to make a falsifiable prediction or postdiction ...
>2. How practical should a falsification be?

It doesn't make much difference how long it takes; the real question is how much work, and/or time, and/or money it should take to develop an executable falsifiable outcome.
In the final analysis this comes down to whether we should pay person A, B, or C to work on hypotheses X, Y or Z. It is a relative-value question, and at times it is very difficult to rank hypotheses in a way that lets us sort them. This is especially true when "beauty" and "naturalness" can generate enthusiasm among researchers; those can render the people that know the most about the prospects for hypotheses X, Y or Z incapable of properly ranking them; their bias is to vote on the hypothesis most pleasing if it were true, instead of the hypothesis most likely to be true or most testable, or that would take the fewest personnel-hours to pursue.

In the end there is a finite amount of money per year, thus a finite number of personnel-hours, equipment, lab space, computer time and engineering support. In the end it is going to be portioned out, one way or another. The problem is in judging the unknowns:

1) How many $ are we away from an executable falsifiable proposal?
2) How much time and money will it cost?
3) How likely is a proof/refutation?
4) How much impact will a proof/refutation of the hypothesis have on the field in question?

Ultimately we need stats we are unlikely to ever develop! In such a case, one solution is to sidestep the reasoning and engage in something like the (old) university model: Professors get paid to work on whatever they feel like, as long as they want, in return for spending half their week teaching students. That can include some amount for experimentation and equipment. "Whatever they want" can include the work of other researchers, so they can collaborate and pool resources. This kind of low-level "No Expectations" funding can be provided by governments. Additional funding would not be provided until the work was developed to the point that the above "unknowns" are plausibly answered, meaning when they DO know how to make a falsifiable proposal for an experiment.

As for the thousands of dead-ends they might engage in: that's self-regulating; they would still like to work on something relevant with experiments. But if their goal is just to invent new mathematics or whatever that bears no relationship to the real world, that's fine. Not all knowledge requires practical application.

7. It's nice that you toy in your own terms with the familiar philosophical notion of under-determination by experience (remarked on by the physicist Duhem as early as the 19th century, and leveraged against Popper and positivist philosophies in the 1950s). Maybe the problem is more widespread than you think, and I would be tempted to add a (6): coming up with clear-cut falsification criteria requires assuming an interpretative and methodological framework. To take just the most extreme cases, one should exclude the possibilities that experimenters are systematically hallucinating, and other radical forms of skepticism. But this also includes a set of assumptions that are part of scientific methodology on how to test hypotheses, what kinds of observations are robust, what statistical analysis or inductive inferences are warranted, etc. These things are shared by scientists, because they belong to the same culture, and the general success of science brings confidence in them. Yet all these methodological and interpretative principles are not, strictly speaking, falsifiable. Now the problem is: if what counts as falsification rests on non-falsifiable methodological assumptions, how can anything be absolutely falsifiable?
And I think the answer is that nothing is strictly falsifiable, but only relative to a framework that is acceptable for its general fruitfulness.

1. "one should exclude the possibilities that experimenters are systematically hallucinating," Yes, we're all hallucinating that our computers, which confirm the quantum behaviour of the electron quadrillions of times a second, are working; and that our car satnavs, which confirm time dilation in GR trillions of times a second, are working. *Real* scientists are the only people who are *not* hallucinating.

2. Steven Evans, you're missing the point. I'm talking about everything that you have to implicitly assume to trust experimental results, in your example, the general reliability of computers and the fact that they indeed do what you claim they do. I personally don't doubt it. It seems absurd of course. The point is that any falsification ultimately rests on many other assumptions; there's no falsification simpliciter.

3. @Steven Evans maybe you're under the impression that I'm making an abstract philosophical point that is not directly relevant to how science works or should work. But no: take the OPERA experiment that apparently showed that neutrinos travel faster than light. It took several weeks for scientists to understand what went wrong, and why relativity was not falsified. If anything, this shows that falsifying a theory is not a simple recipe, just a matter of observing that the theory is false. (And the bar can be more or less high depending on how well the theory is established, so pragmatic epistemic cost considerations enter into the picture.) My point is simply this: what counts as falsification is not a simple matter; a lot of assumptions and pragmatic aspects come in. Do you disagree with this?

4. @Quentin Ruyant You are. Take 1 kilogram of matter, turn it into energy. Does the amount of energy equal mc^2? Put an atomic clock on an orbiting satellite. Does it run faster than an atomic clock on the ground by the amount predicted by Einstein? Building the instruments is presumably tricky; checking the theories not so much. OPERA was a mistake, everybody knew it was a mistake. Where there is an issue, the issue is not a subtle point about falsifiability, it is far more mundane - people telling lies about there being empirical evidence to support universal fine-tuning or string theory. Or people claiming the next-gen collider is not a hugely expensive punt. The people saying this are frauds. In the medical or legal professions they would be struck off and unable to practise further.

8. "To make predictions you always need a concrete model..."

The problem is that qualitative (concrete) modeling is a lost art in modern theoretical physics. All of the emphasis is on quantitative modeling (math). The result is this: It is a waste of time to develop more quantitative model variants on the same old concrete models, but what is desperately needed are new qualitative models. All of the existing quantitative models are variations on qualitative models that have been around for the better part of a century (the big bang and quantum theory). The qualitative models are the problem. Unfortunately, with its mono-focus on quantitative analysis, modern theoretical physics does not appear to have a curriculum or an environment conducive to properly evaluating and developing new qualitative models. I want to be clear that I am not suggesting the abandonment of quantitative for qualitative reasoning.
What is crucial is a rebalancing between the two approaches, such that in reflecting back on one another, the possibility of beneficial, positive and negative feedback loops is introduced. The difficulty in achieving such a balance lies in the fact that qualitative modeling is not emphasized, if taught at all, in the scientific academy. Every post-grad can make new mathematical models. Nobody even seems to think it necessary to consider, let alone construct, new qualitative models. At minimum, if the qualitative assumptions made a century ago aren't subject to reconsideration, "the crisis in physics" will continue. 9. Thanks for stating this so clearly. 10. Some non-falsifiable hypotheses are not optional. These are known as axioms or assumptions (aka religion) and no science is possible without them. For instance, cosmology would be dead without the unverifiable assumption (religious belief) that the laws of physics are universal in time and space. Science = Observation + Assumptions, Facts Selection, Extrapolations, Interpretations… Assumptions, Facts Selection, Extrapolations, Interpretations… = Sum of Axiomatic Beliefs Sum of Axiomatic Beliefs = Religion …therefore, Science = Observation + Religion 11. The demarcation problem of science versus pseudoscience was of course pondered long before Karl Popper. Aristotle for one was quite interested in solving it. No indication that this quandary will ever be satisfactorily resolved. Though I do consider Popper's "falsifiability" heuristic to be reasonably useful, I'm not hopeful about the project in general. I love when scientists remove their science hats in order to put on philosophy hats! It's an admission that failure in philosophy causes failure in science. And why does failure in philosophy cause failure in science? Because philosophy exists at a more fundamental level of reality exploration than science does. Without effective principles of metaphysics, epistemology, and value, science lacks an effective place to stand. (Apparently "hard" sciences are simply less susceptible than "personal" fields such as psychology, though physics suffers here as well given that we're now at the outer edges of human exploration in this regard.) I believe that it would be far more effective to develop a new variety of philosopher rather than try to define a hard difference between "science" and "pseudoscience". The sole purpose of this second community of philosophers would be to develop what science already has: respected professionals with their own generally accepted positions. Though small initially, if scientists were to find this community's principles of metaphysics, epistemology, and value useful places from which to develop scientific models, this new community should become an essential part of the system, or what might then be referred to as "post-puberty science". 1. One problem with this proposal is the use of the word "metaphysics". To me this carries connotations of God, religion, angels, demons and magic. It means "beyond physics," and in the world today it is synonymous with the "supernatural" (i.e. beyond natural) and used to indicate faith which is "beyond testing or verification or falsification". I hear "metaphysics" and I run for the hills. Unless their position on metaphysics is that there are no metaphysics, I cannot imagine why I would have any professional respect for them. Their organization would be founded on a cognitive error.
I think it is likely possible to develop a "science of science" by categorizing and then generalizing what we think are the failures of science, and why. From those one might derive or discover useful new axioms of science, self-evident claims upon which to rest additional reasoning about what is and is not "science". Part of the problem may indeed be that we have not made such axioms explicit, and instead we rely on instinct and absorption of what counts as self-evident. That is obviously an approach ripe for error, and difficult to correct without formal definitions. Having something equivalent to the family tree of logical fallacies could be useful in this regard. But that effort would not be separate from science, it would just be a branch of science, science modeling itself. That should not cause a problem of recursiveness or infinite regress; and we have an example of this in nature: Each of us contains a neural model of ourselves, which we use for everything from planning our movements to deciding what we'd enjoy for dinner, or what clothing we should buy, or what career to pursue. Science can certainly model science, without having to appeal to anything above or beyond science. To some extent this has already been done. Those efforts could be revisited, revised, and expanded. 2. Dr. Castaldo, I think you'd enjoy my good friend Mike Smith's blog. After reading this post of Sabine's he wrote an extensive post on the matter as well, and did so even before I was notified that Sabine had put up this one! I get the sense that you and he are similarly sharp. Furthermore I think you'd enjoy extensively delving into the various mental subjects which are his (and I think my) forte. Anyway I was able to submit the same initial comment to both sites. He shot back something similarly dismissive of philosophy btw. On metaphysics, I had the same perspective until a couple years ago. (I only use the "philosophy" modifier as a blogging pseudonym.) Beyond the standard speech connotation, I realized that "metaphysics" is technically meant to refer to what exists before one can explore physics… or anything really. A given person's metaphysics might be something spiritual for example, and thus faith based. My own metaphysics happens to be perfectly causal. The metaphysics of most people seems to fluctuate between the two. Consider again my single principle of metaphysics, or what I mean to be humanity's final principle of metaphysics: "To the extent that causality fails (in an ontological sense rather than just epistemically mind you), nothing exists for the human to discover." All manner of substance dualists populate our soft sciences today. Furthermore many modern physicists seem to consider wave function collapse to ontologically occur outside of causality, which is another instance of supernaturalism. I don't actually mind any of this however. Some of them may even be correct! But once (or if) my single principle of metaphysics becomes established, these people would then find themselves in a club which resides outside of standard science. In that case I'm pretty sure that the vast majority of scientists would change their answer in order to remain in our club. (Thus I suspect that very few physicists would continue to take an ontological interpretation of wave function collapse, and so we disciples of Einstein should finally have our revenge!) Beyond this clarification for the "metaphysics" term, I'm in complete agreement.
Science needs a respected community of professionals with their own generally accepted principles of how to do science. It makes no difference if these people are classified as "scientist", "philosopher", or something else. Thus conscientious scientists like Sabine would be able to get back to their actual jobs. Or they might become associated professionals if they enjoy this sort of work. And there's plenty needed here since the field is currently in need of founders! I hope to become such a person, and by means of my single principle of metaphysics, my two principles of epistemology, and my single principle of axiology. 3. I don't get the distinction. There is much to be said for the "shut up and compute" camp; though I don't like the name. It is an approach that works, and has worked for millennia. We never had to know the cause of gravity in order to compute the rules of gravity. We may still not know the cause of gravity; there may be no gravitons, and I admit I am not that clear on how a space distortion translates into an acceleration. Certainly when ancient humans were building and sculpting monoliths, they ran a "shut up and compute" operation; i.e. it makes no difference why this cuts stone, it does. The investigation can stop there. Likewise, I don't have to believe in magic or the supernatural to believe the wavefunction collapses for reasons that appear random to me, or truly are random, or in principle are predictable but would require so much information to predict that prediction is effectively impossible. That last is the case in predicting the outcome of a human throwing dice: Gathering all the information necessary to predict the outcome before the throw begins would be destructive to the human, the dice, and the environment! "Shut up and compute" says ignore why, just treat the wavefunction collapse as randomized according to some distribution described by the evolution equations, and produce useful predictions of the outcomes. Just like we can ignore why gravity is the way it is, why steel or titanium is the way it is, why granite is the way it is. We can test all these things to characterize what we need to know about them in order to build a skyscraper. Nor do we need to know why earthquakes occur. We can characterize their occurrence and strength statistically and successfully use that to improve our buildings. Of course I am not dissing the notion of investigating underlying causations and developing better models of what contributes to material strength, or prevents oxidation, or lets us better predict earthquakes or floods. But I am saying that real science does not demand causality; it can and has progressed without it. Human brains are natural modeling machines. I don't need a theory of why animals migrate on certain paths to use that information to improve my hunting success, and thus my survival chances. We didn't need to be botanists or geneticists to understand enough to start the science of farming and selective breeding for yields. It is possible to know that some things work reliably without understanding why they work reliably. To my mind, it is simply false to claim that without causality there is nothing to know. There is plenty to know, and a true predictive science can be (and has been) built resting on foundations of "we don't know why this happens, but it does, and apparently randomly." 4. Well let's try this Dr. Castaldo. I'd say that there are both arrogant and responsible ways to perceive wave function collapse.
The arrogant way is essentially the ontological stance, or “This is how reality itself IS”. The responsible way is instead epistemological, or “This is how we perceive reality”. The first makes absolute causal statements while the second does not. Thus the first may be interpreted as “arrogant”, with the second “modest”. I’m sure that there are many here who are far more knowledgeable in this regard than I am, and so could back me up or refute me as needed, but I’ve been told that in the Copenhagen Interpretation of QM essentially written by Bohr and Heisenberg, they did try to be responsible. This is to say that they tried to be epistemic rather than ontological. But apparently the great Einstein would have none of it! He went ontological with the famous line, “God does not play dice”. So what happens in a psychological capacity when we’re challenged? We tend to double down and get irresponsible. That’s where the realm of physics seems to have veered into a supernatural stance, or that things happen in an ontological capacity, without being caused to happen. So my understanding is that this entire bullshit dispute is actually the fault of my hero Einstein! Regardless I’d like to help fix it by means of my single principle of metaphysics. Thus to the extent that “God” does indeed play dice, nothing exists to discover. And more importantly, if generally accepted then the supernaturalists which reside in science today, would find that they need to build themselves a club which is instead populated by their own kind! :-) 12. @Philosopher Eric You seem to have an inflated view of philosophy and philosophers. I fully agree with you in so far as one ought not ignore what philosophers do and say. To excel in other fields will constrain one from interrogating the work of philosophers. Those who make that choice ought to accept their decision and refrain from the typical contemptuous language seen so often. I have spent the last thirty years studying the foundations of mathematics. To be quite frank about it, I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience. From what I can tell, the main emphasis of philosophers in this arena over the last century has been to justify science as a preferred world view by crafting mathematics in the image of their belief systems. Their logicians are even more pathetic. Hume's account of skepticism is good philosophy. It is also unproductive. To represent a metaphysical point of view and then invoke a distinction between syntax and semantics to claim one is not doing metaphysics is simply deceptive. We have a great deal of progress with no advancement. You are correct that such matters cannot be sorted out without digging into the philosophical development of the subject matter. But what you are likely to find are people running around saying, "I don't believe that!". So what one has are contradictory points of view and different agendas. That is what philosophers and their logicians have given to mathematics. Should you disagree with me, what is logic without truth? One can claim that one is only studying "forms". But once one believes they have identified a correct form, one defends one's claims from the standpoint of belief. Philosophers and their logicians can never get away from metaphysics whether they care to admit it or not. But their pretensions to the contrary are simply lies. 
Science fails because of naive beliefs with respect to truth, reality, and the inability to accept epistemic limitations. Philosophers have shown just as much willingness to fail along those same lines. 1. mls, Thanks for your reply. I've dealt with a number of professional philosophers online extensively, and from that can assure you that they don't consider me to inflate them. Unfortunately most would probably say the opposite, and mind you that I try to remain as diplomatic with them as possible. Your disdain for typical contemptuous language is admirable. They're a sensitive bunch. Aren't we all? What I believe must be liberated in order to improve the institution of science, is merely the subject matter which remains under the domain of philosophy. Thus apparently we'll need two distinct forms of "philosophy". One would be the standard cultural form for the artist in us to appreciate. But we must also have a form that's all about developing a respected community with its own generally accepted understandings from which to found the institution of science. So you're a person of mathematics, and thus can't stand how various interests defile this wondrous language, this monument of human achievement, by weaving it into their own petty interests? I hear you there. But then consider how inconsistent it would be if mathematics were instead spared. I believe that defiling things to our own interests needs to become acknowledged to be our nature. I think it's standard moralism which prevents us from understanding ourselves. I seek to "fix science" not for that reason alone, but rather so that it will be possible for the human to effectively explore the nature of the human. I'd essentially like to help our soft sciences harden. Once we have a solid foundation from which to build, which is to say a community of respected professionals with their own associated agreements, I believe that many of your concerns would be addressed. What is logic without truth? That's exactly what I have. I have various tools of logic (such as mathematics) but beyond just a single truth, I have only belief. The only truth that I can ever have about Reality, is that I exist. It is from this foundation that I must build my beliefs as effectively as I can. 2. "I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience." Wiles proved, via an isomorphism between modular forms and semi-stable elliptic curves, that there are no positive integer solutions to x^3 + y^3 = z^3. Now, back in "reality", take some balls arranged into a cube, and some more balls arranged into another cube, put them all together and arrange them into a single cube. You can't. Why is that do you think? 3. Steven, It seems to me that two equally sized cubes stacked do not, by definition, form a cube. Nor do three, or four. Eight of them however, do. It's simple geometry. But I have no idea what that has to do with mls's observation about the lunacy of people who believe that mathematics exists beyond subjective experience, or the mathematical proof that you've referred to. I agree entirely with mls: I consider math to merely be a human invented language rather than something that exists in itself (as Platonists and such would have us believe). Do you agree as well? And what is the point of your comment? 4. @Steven Evans Should you take the time to learn about my views, you would find that I am far more sympathetic with core mathematics than not.
Get a newsgroup reader and load headers for sci.logic back to January 2019. Look for posts by "mitch". I doubt that you will have much respect for what you read, but, you will find an account of truth tables based upon the affine subplane of a 21-point projective plane. Since there is a group associated with this affine geometry, this basically marries Klein's Erlangen program with symbolic logic in the sense of well-formedness criteria (that is, logical constants alone do not make a logical algebra). But this is precisely the kind of thing committed logicists will reject. Now, Max Black presented a critical argument against mathematical logicians based upon a "symmetric universe". My constructions are similarly based upon symmetry considerations -- except that I am using tetrahedra oriented with labeled vertices. Who knew that physicists had been inventing all sorts of objects on the basis of similar ideas, although they use continuous groups because they must ultimately relate to physical measurement? For the last two weeks I have been associating collineations in that geometry with finite rotations in four dimensions using Petrie polygon projections of tesseracts. And, as other posts in that newsgroup show, any 16-element group which carries a 2-(16,6,2) design can be mapped into this affine geometry. So, I happen to think that logicians and philosophers have turned left and right into true and false. You must forgive me for criticizing physicists who publish cool mathematics as science without a single observation to back it up. 5. @Philosopher Eric I don't mean stack the cubes(!), I mean take 2 cubes of balls of any size, take all the balls from both cubes and try to rearrange them into a single cube of balls. You can't, whatever the sizes of the original 2 cubes. The reason we know you can't do this is because of Wiles' proof of Fermat's Last Theorem: there are no positive integer solutions to x^3 + y^3 = z^3. The point is that this is maths existing in reality, in contradiction to what you wrote - whether you know Wiles' theorem or not, you can't take 2 cubes' worth of balls and arrange them into a single cube. There are 2 reasons that this theorem applies to reality: 1) The initial abstraction that started mathematics was the abstraction of number. So it is not a surprise when mathematical theorems, like Wiles', can be reapplied to reality. 2) Wiles' proof depends on 350 years' worth of abstractions upon abstractions (modular forms, semi-stable elliptic curves) from the time of Fermat, but the reason Wiles' final statement is still true is because mathematics deals with precise concepts. (Contrast with philosophy, which largely gets nowhere because they try to write "proofs" in natural language - a stupid idea.) TL;DR: Maths often applies to reality because it was initially an abstraction of a particular characteristic of reality. 6. "You must forgive me for criticizing physicists who publish cool mathematics as science without a single observation to back it up." Fair criticism, and it is the criticism of the blog author's "Lost In Math" book. But that's not what you wrote originally. You wrote originally that it was lunacy to consider any maths as being real. O.K., arrange 3 balls into a rectangle. How did it go? Now try it with 5 balls, 7 balls, 11 balls, 13 balls, ... What shall we call this phenomenon in reality that has nothing to do with maths? Do you think there is a limit to the cases where the balls can't be arranged into a rectangle? My money is on not.
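Both ball games are easy to check by brute force. A minimal Python sketch (the search bounds are arbitrary, and "rectangle" here means both sides are at least 2):

```python
# Which counts of balls can be arranged into an a-by-b rectangle (a, b >= 2)?
def rectangle_arrangeable(n):
    return any(n % a == 0 for a in range(2, int(n**0.5) + 1))

print([n for n in range(2, 30) if not rectangle_arrangeable(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]: exactly the primes.

# Fermat's Last Theorem for n = 3: no two cubes of balls combine into a cube.
N = 200  # arbitrary search bound
cubes = {c**3 for c in range(1, N)}
print([(a, b) for a in range(1, N) for b in range(a, N) if a**3 + b**3 in cubes])
# -> []: no counterexamples below the bound, as Wiles' proof guarantees for all sizes.
```

The brute force only checks finitely many cases, of course; it is the proof that covers the rest.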
But maths has nothing to do with reality. Sure. 7. Okay Steven, I think that I now get your point. You're saying that because the idea you've displayed in mathematics is also displayed in our world, maths must exist in reality, or thus be more than a human construct. And actually you didn't need to reference an esoteric proof in order to display your point. The same could be said of a statement like "2 + 2 = 4". There is no case in our world where 2 + 2 does not equal 4. It's true by definition. But this is actually my point. Mathematics exists conceptually through a conscious mind, and so is what it is by means of definition rather than by means of the causal dynamics of this world. It's independent of our world. This is to say that in a universe that functions entirely differently from ours, our mathematics would still function exactly the same. In such a place, by definition 2 + 2 would still equal 4. We developed this language because it can be useful to us. Natural languages such as English and French are useful as well. It's interesting to me how people don't claim that English exists independently of us, even though just as many "true by definition" statements can be made in it. I believe it was Dr. Castaldo who recently implied to me that "Lost in Math" doesn't get into this sort of thing. (My own copy of the book is still on its way!) In that case maybe this could be another avenue from which to help the physics community understand what's wrong with relying upon math alone to figure out how our world works? 8. "that maths must exist in reality," You've got it the wrong way round. Maths is an abstraction of a property in physical reality. Even before humans appeared, it was not possible to arrange 5 objects into a rectangle. "And actually you didn't need to reference an esoteric proof" The point is that modular forms and elliptic curves are still related to reality, because the axioms of number theory are based on reality. "2 + 2 would still equal 4." The concept might not arise in another universe. In this universe, the only one we know, 2+2=4 represents a physical fact. "what's wrong with relying upon math alone to figure out how our world works" It's a trivial question. Competent physicists understand you need to confirm by observation. 9. Steven, If you're not saying that maths exists in reality beyond us, but rather as an abstraction of a physical property, then apparently I had you wrong. I personally just call maths a language and don't tie it to my beliefs about the physical, though I can see how one might want to go that way. As long as you consider it an abstraction of reality then I guess we're square. 13. The title of this column and the second paragraph appear to conflate theories and hypotheses. Theories can generate hypotheses, and hopefully do, but it is the hypothesis that should be falsifiable, and the question remains whether even a robustly falsified hypothesis has any impact on the validity of a theory. Scientists work in the real world, and in that real world, historically, countless hypotheses have been falsified -- or have failed tests -- yet the theories behind them were preserved, and in some cases (one thinks immediately of Pasteur and the spontaneous generation of life) the theory remains fundamental to this day. At the same time, I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal.
As Maxwell noted, in a strict Popperian test, you'd have to find an immortal human to falsify the hypothesis, and you'll wait a looooong time for that. 1. "I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal." And yet no-one so far has made it past about 125 years old, even on a Mediterranean diet. What useful people philosophers are. 2. I don't understand how 'all humans are mortal' is a useful hypothesis. It is pretty obvious to anybody reaching adulthood that other humans are mortal, and to most that they themselves can be hurt and damaged, by accident if nothing else. We see people get old, sick and die. We see ourselves aging. I don't understand how this hypothesis is useful for proving anything. It would not even prove that all humans die on some timescale that matters. It doesn't tell us how old a human can grow to be; it doesn't tell us how long an extended life we could live with technological intervention. A hypothesis, by definition, is a supposition made as a starting point for further investigation. Is this even a hypothesis, or only claimed to be a hypothesis? I will say, however, that in principle it is a verifiable hypothesis; because it doesn't demand that all humans that will ever exist be mortal, and there are a finite number of humans alive today. So we could verify this hypothesis by bringing about the death of every human on Earth, and then killing ourselves; and thus know that indeed every human is mortal. Once a hypothesis is confirmed, then of course it cannot be falsified. That is true of every confirmed hypothesis; and the unfalsifiability of confirmed hypotheses is not something that worries us. 3. Dr. Castaldo: "useful to prove anything" is not a relevant criterion for being good science. That said, much of human existence entails acting on the assumption that all humans are mortal, so I think that Maxwell's tongue-in-cheek example is of a hypothesis that is extremely useful. Your comment about how the hypothesis is in principle verifiable (because there are a finite number of humans) is, forgive me, somewhat bizarre -- the classic examples of good falsifiable hypotheses, such as "all swans are white", would be equally verifiable for the same reason, yet those examples were invented to show that it is the logical form of the hypothesis that Popper and falsificationists appeal to, not the practicalities of testing. Moreover, while it could be arguable that the number of anything in the universe is finite, one issue with humans (and swans) is that the populations are indefinite in number -- as Maxwell commented, you don't know if the next baby to be born will be immortal, or the 10 millionth baby to be born. @Steven Evans: while your observation about human longevity is true (so far), Maxwell's humorous point -- which, by the way, was a critique of Popper -- was that you cannot be absolutely certain that the next child born will be mortal, just as Popper insisted that the next swan he encountered could, just possibly, be black. Maxwell's point was about how you would establish a test of this hypothesis. In Popper's strange world of absolutes, you'd have to find an immortal human. Maxwell noted that here in the real world of actual science, no one would bother, especially since markers of mortality pile up over the lifespan. 4. @DKP: I am not the one that claimed it was a useful hypothesis. Once that claim is made, it should be provable: What is it useful for?
The only thing a hypothesis can be useful for is to prove something true or false if it holds true or fails to hold true; I am interested in what that is: Otherwise it is not a useful hypothesis. In other words, it must have consequences or it is not a hypothesis at all. Making a claim that is by its nature unprovable does not make it a hypothesis. I can't even claim every oxygen atom in the universe is capable of combining with two hydrogen atoms, in the right conditions, to form a molecule of water. I can't claim that as a hypothesis; I can't prove it true for every oxygen atom in the universe, without that also being a very destructive test. UNLESS I rely on accepted models of oxygen and hydrogen atoms, and their assertions that these apply everywhere in the universe, which they also cannot prove conclusively. Maxwell's "hypothesis" is likewise logically flawed; but if we resort to the definition of what it is to be human, then it is easily proven, because it is not a hypothesis at all but a statement of an inherent trait of being human; just like binding with hydrogen is a statement of an inherent trait of the atom we call oxygen. I know Maxwell's point was about how you would establish a test of this hypothesis; MY point was that Maxwell's method is not the only method, is it? If all living humans should die, then there will be no future humans, and we will have proved conclusively that all humans are mortal. In fact, in principle, my method of confirming the truth of the hypothesis is superior to Maxwell's method of falsifying it, because mine can be done in a finite amount of time (since there are a finite number of humans alive at any given time, and it takes a finite amount of time to kill each one of us). And confirmation would obviously eliminate the need for falsification. Of course, I am into statistics and prefer the statistical approach; I imagine we (humanity, collectively throughout history) have exceeded an 8 sigma confirmation by now on the question of whether all humans are mortal; so I vote against seeking absolute confirmation by killing everyone alive. 5. @DKP "In Popper's strange world of absolutes, you'd have to find an immortal human." Or kill all humans. The point is that you can apply falsifiability in each instance - run a test that confirms the quantum behaviour of the electron. Then carry out this test 10^100000000000 times and you now have an empirical fact, which is certainly sufficient to support a business model for building a computer chip based on the quantum behaviour of the electron. By the standards of empirical science, there will never be an immortal human, as the 2nd law will eventually get you, even if you survive being hit by a double-decker. As a society, we would be better off giving most "philosophers" a brush and telling them to go and sweep up leaves in the park. They could still ponder immortal humans and other irrelevant, inane questions while doing something actually useful. 6. @Steven Evans: Perhaps you missed the point of Maxwell's example, which was to suggest that at least one particular philosopher was irrelevant, by satirizing his simplistic notion of falsification. As a scientist myself, and not a philosopher, I found myself in agreement with Maxwell, and 50 years later I still find historians of science to offer more insight into the multiple ways in which "science" has worked and evolved -- while philosophers still wrestle, as Maxwell satirized, with simplistic absolutes.
More seriously, your proposed test of the behavior of the electron makes the point I started with in my first comment: theories are exceedingly difficult to falsify in the way that Sabine's article here suggests; efforts at falsification focus on hypotheses. 14. There is an intriguing name (proposal) for a new book by science writer Jim Baggott (@JimBaggott): A Game of Theories. Theory-making does seem to form a kind of game, with 'falsifiability' just one of the cards (among many) to play. And today (April 26) is Wittgenstein's (language games) birthday. 15. Very manipulative article. All the traditional attempts of theoreticians to dodge the question are there. But to me it was even more amusing to see an attempt to bring in Popper and not to oppose Marx. But since Popper was explicitly arguing against Marx's historicism they had to make up "Stalinist history" (what would it even be?). 16. Hi Sabine, You claim that string theory makes predictions; which prediction do you have in mind? Peter Woit often claims that string theory makes no predictions ... "zip, zero, nada" in his words. 1. Thanks, that FAQ #1 is a little short on specifics. As a result I am still puzzled. As far as string cosmology goes, I would question whether it is so flexible you can get just about anything you want out of it. 2. String cosmology is not string theory. You didn't ask for specifics. 17. Sabine said… debating non-observable consequences does not belong into scientific research. Scientists should leave such topics to philosophers or priests. Of course you are correct; I'm wondering if you've also gotten the impression some scientists may even be using non-observable interpretations as a basis for their research? 18. Thank you. Your writing is clear and amusing, as usual. I'm glad to see that you allow for some nuance when it comes to falsifiability. There is a distinction between whether or not a non-falsifiable hypothesis is "science", and whether or not the practice of a particular science requires falsifiability at every stage of its development, even over many decades. I am glad string theory was pursued. I am also glad, but only in retrospect, that I left theoretical physics after my undergraduate degree and did not waste my entire career breaking my brain doing extremely difficult math for its own sake. Others, of course, would not see this as a waste. But how much of this will be remembered? Or to quote Felix Klein: "When I was a student, abelian functions were, as an effect of the Jacobian tradition, considered the uncontested summit of mathematics and each of us was ambitious to make progress in this field. And now? The younger generation hardly knows abelian functions." 19. Dr. Hossenfelder, So a model, e.g. string cosmology, is a prediction? 1. Korean War, A model is not a prediction. You make a prediction with a model. If the prediction is falsified, that excludes the model. Of course the trouble is that if you falsify one model of string cosmology, you can be certain that someone finds a fix for it and will continue to "research" the next model of string cosmology. That's why these predictions are useless: It's not the model itself that's at fault, it's that the methodology to construct models is too flexible. 2. Dr. Hossenfelder, Thanks for your response, I thought that was the case. If this comment just shows ignorance, please don't publish it. If it might be of use, my question arose because Jan Reimera asked for a specific string theory prediction to refute Peter Woit's claim that none exist.
After reading the FAQ, I couldn't see that it does this unless the string cosmology model is either sufficient in itself or can be assumed to reference already published predictions. 3. String theory generically predicts string excitations, which is a model-independent prediction. Alas, these are at energies too high to actually be excited by anything we can produce, etc. etc. 20. Hi, Sabine. I trust that all is going well. I enjoyed your post. All things considered, I don't know how you do it. I'll be plain, I'm an experimental scientist. The term 'falsification' (for me) belongs to the realm of theory. When I run an experiment the result is obvious - and observable. When you brought up Karl (Popper), were you making a statement on 'critical rationalism'? I hope not. (In the quantum realm, you will find a maze.) At any rate, you struck me with the words 'I start working on an idea and then ...' You know me. (2 funny) In parting, for you I have a new moniker for your movement - as a #, on tee-shirts, it's -- DISCERN. (A play on words, not to mean 'dis-respect'.) In the true definition of the word: to be able to tell the difference between a good idea and a bad one. Once again, Love Your Work. - All Love, 1. I did not "bring up Popper." How about reading what I wrote before commenting? 21. Wasn't the argument that atomism, arguably one of the most productive theories of all time, wasn't falsifiable? Of course it was ultimately confirmed, which is not quite the same thing - it just took 2000 plus years. 22. @Lawrence: Off the top of my head: Perhaps the statistical distributions are wrong, and thus the error bars are wrong. I don't know anything about how physicists have come to conclusions on distributions (or have devised their own), but I've done work on fitting about 3 dozen different statistical distributions, particularly for finding potential extreme values; and without large amounts of data it is easy to mistakenly think we have a good fit for one distribution when we know the test data was generated by another. Noise is another factor, if the data being fitted is noisy in any dimension, including time. For example, with the generalized extreme value distribution, used in real-world engineering to predict the worst wind speeds, flood levels, or, in aviation, the extent of crack growth in parts due to flight stressors (and thus time to failure), minor stochastic errors in the values can change things like the shape parameter in ways that wildly skew the predictions. Even computing something like a 100-year flood level means sorting 100 samples of the worst flood per year. The worst of all would be assigned the rank index 100/101 (i/(N+1) is its expected value on the probability axis), but that can be wrong. The worst flood in 1000 years may have occurred in the last 100 years. There is considerable noise in both dimensions, the rank values and the measured values, even if we fit the correct distribution. There is also the problem of using the wrong distribution; I believe I have seen this in medical literature. Weibull distributions can look very much like a normal curve, but they are skewed, and have a lower limit (a reverse Weibull has an upper limit). They are easily confused with Fréchet distributions. But they can give very different answers on exactly where your confidence levels (and thus error bars) are for 95%, 99%, or 99.9%.
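That last point is easy to demonstrate. A minimal sketch, assuming numpy and scipy are available (the sample size and Weibull parameters are arbitrary): fit both candidate distributions to the same skewed sample and compare where each puts the extreme quantiles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# A skewed sample that, at n = 100, can pass for roughly normal by eye.
data = stats.weibull_min.rvs(c=1.5, scale=10.0, size=100, random_state=rng)

c, loc, scale = stats.weibull_min.fit(data, floc=0)  # Weibull fit
mu, sigma = stats.norm.fit(data)                     # normal fit

for p in (0.95, 0.99, 0.999):
    q_weibull = stats.weibull_min.ppf(p, c, loc=loc, scale=scale)
    q_normal = stats.norm.ppf(p, loc=mu, scale=sigma)
    print(f"{p}: Weibull -> {q_weibull:.1f}, normal -> {q_normal:.1f}")
```

The two fits tend to agree near the median but diverge in the far tail (with these parameters the true 99.9% point is near 36, while a normal fit typically lands in the high 20s), which is exactly where the error bars get decided.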
A fourth possibility is that the assumption of what the statistical distribution should even be is in error. It may depend upon initial conditions in the universe, or have too much noise in the fitting, or too few samples to rule out other distributions prevailing. In general, the assumptions made in order to compute the error bars may be in error. 1. I can't comment too much on the probability and statistics. To be honest this has been from early years my least favorite area of mathematics. I know just the basic stuff, enough to get me through. With Hubble data this trend has been there for decades. Telescope redshift data have for decades been in the 72 to 74 km/s/Mpc range. The most recent Hubble value is 74.03±1.42 km/s/Mpc. The CMB value is now based on the ESA Planck spacecraft data, which is consistent with the prior NASA WMAP spacecraft data, and it is very significantly lower, around 67.66±0.42 km/s/Mpc. (Taken at face value, the two central values differ by 6.37/sqrt(1.42^2 + 0.42^2), or about 4.3 standard deviations.) Other data tend to follow a similar trend. There has been in the last 5 to 10 years this growing gap between the two. I would imagine there are plenty of statistics pros who eat the subject for lunch. I question whether some sort of error has gotten through their work. 23. Each brain creates a model of the inside and outside. Each of us calls that model reality. But it's just a model. Now we create models of parts of the model that might or might not fit the first model. It's a bit of a conundrum. Personally I believe that it's all about information. The one that makes a theory that takes all that into account will reap the Nobel Prize. That's the next step. 24. "An hypothesis that is not falsifiable through observation is optional. You may believe in it or not." One has no reason to think it is true as an empirical fact. Believing it in this case is just delusion (see religion). The issue is simply honesty. People who claim there is empirical evidence for string theory, or fine-tuning of the universe, or the multiverse, or people who claim that the next gen collider at CERN is anything but a massively expensive, completely unprecedented punt, are simply liars. It's easy to see when you compare with actual empirical facts, which in physics are often being confirmed quintillions of times a second in technology (quantum behaviour of the electron in computer chips, time dilation in satnav, etc.). How can someone honestly claim that universal fine-tuning is physics just like E=mc^2? They can't - they are lying. Where taxpayers' money is paying for these lies, it is criminal fraud. 25. I realise that the notion of "model" is too subtle for someone like me, just fixated on playing with parsing guaranteed Noether symmetries with Ward-like identities upon field equations from action principles.... So the Equivalence Principle in itself is predictive, in that it need not be supplemented with some constitutive equations (like a model of the susceptibility of the medium in which a Maxwell field source resides, say, or a model of a star) to describe the materiality of the inertial source? 26. @Philosopher Eric Nice response. I think your initial remarks sparked a reaction rather than a response on my part. Your last paragraph expresses an essential problem. One's first assumption, then, ought to be that one is not alone. And, science as a community enterprise requires something along the lines of Gricean maxims. This is completely undermined when, for the sake of a logical calculus, philosophers pretend that words are to be treated as mere parameters.
Tarski explicitly rejected this methodology in his paper on the semantic conception of truth. Yet, those who invoke the distinction between semantics and syntax as some inviolable principle regularly invoke Tarski as the source of their views (one should actually look to Carnap as the source of such extreme views). This is the kind of thing I find so disturbing where philosophy, logic, and mathematics intersect. There is a great deal of misinformation in the literature. There is a great deal that needs "fixing". But the received paradigms are largely defensible. It is not as if they are not the product of highly intelligent practitioners. 27. The difficulty of detecting gravitons raises a related question: what counts as a detection? Saying that it must be detected in a conventional particle physics experiment is a rather ad hoc criterion. If all the knowledge we have today already implies the existence of the graviton, then that should count as it having been detected. The same can be said about LIGO's detection of gravitational waves. The existence of gravitational waves was already implied by the detection of the decaying orbits of binary pulsars. Or one may argue that this was in turn a prediction of GR, which had ample observational support before the observation of the binary pulsars. 28. Sean Carroll wrote a blog post about this. He is not a crackpot. Maybe you two could have a podcast or YouTube discussion about it? 1. In practice, calls to remove falsifiability are intended to support string theory, fine-tuning and the multiverse as physics. They are not physics, merely speculation, and the people claiming they are physics *are* crackpots. Remove falsifiability and just watch all the loonies swarm in with their ideas that "can't be disproved" and are "compatible with observations". There's nothing wrong with speculation, but it is important that one is aware it is speculation; otherwise you end up with the situation as in string theory, where too much money and too many careers have been wasted on it. (Or philosophy, where several thousand years have been wasted.) 29. @Steven Evans I assure you that we are, for the most part, on the same side of these issues. Your arguments, however, are very much like those of the foundations community who challenge dissent by demanding that a contradiction to their views be shown. In 1999, Pavicic and Megill (the latter known for the Metamath program) showed that propositional logic is not categorical and that the model faithful to the syntactic structure of the logic is not Boolean. So the contradiction demand is silly and simplistic. You are making arguments on the basis of 'abstractions'. Where exactly do these abstractions reside in time and space? Or, as many philosophers do, are you speaking of a realm of existence beyond time and space? Indeed, Tarski's semantic conception of truth properly conveys the intentions we ascribe to correspondence theories of truth. So, if we state that some abstraction is meaningful with respect to the truth of our scientific theories, we must account for the existence of the objects denoted by our language terms. Either you are claiming realms of existence which I shall not concede to you, or you can show me "the number one" as an existent individual. Most of my acquaintances do not have formal education. When they ask me to explain my interest, I remind them of just how often one hears that "mathematics is the language of science".
So, in a very crude sense, what is true in science depends on the nature of truth in mathematics. I expect that you will disagree with that view. But, I do not think you will be able to demonstrate the substantive existence of the abstractions you are invoking to challenge me. You may have problems with the very publications I mentioned because we share a similar sense of what constitutes science. But I see the kernel of the problem in the very statements you are making about the nature of mathematics. It is not so much that I disagree with you, it is that your positions are not defensible. You need to stipulate a theory of truth. You need to stipulate which conception of truth is applied under that theory. You need to stipulate logical axioms. You need to stipulate axioms for your mathematical theory. You need to decide whether or not you are following a formalist paradigm. If not, you will have to accommodate substitutions in the calculus with a strategy to warrant substitutions. If so, you will be faced with the problem of non-categoricity. Dr. Hossenfelder discussed this last problem in her book when considering Tegmark's suggestion that all interpretations be taken as meaningful. It is just not as simple as you would like it to be. 1. I've no idea what the correct logical terms are, but arithmetic is a physical fact. I can do arithmetic with physical balls, add them, subtract them, show prime numbers, show what it means for sqrt(2) to be irrational, etc., etc. This maths exists physically, and it is this physical maths that is the basis of abstract maths. Physical arithmetic obeys the axioms of arithmetic and the logical steps used to prove theorems are also embodied in physical arithmetic. Of course - because arithmetic and logic are observed in the physical world, that's where the ideas come from. Of course, philosophers can witter on at length about theoretical issues with what I have written, but they will never be able to come up with a concrete counter-example. They will nit-pick. I will re-draft what I have written. They will nit-pick some more, I will re-draft some more. And 2,000 years later we will have got nowhere, yet still it will be physically impossible to arrange a prime number of balls into a rectangle. Again, you've got it the wrong way round. Maths comes from the physical. Anyway, the issue of this blog post, falsifiability, is in practice an issue with people trying to suspend falsifiability to support string theory, fine-tuning and the multiverse. In more extreme cases, it is about philosophers and religious loonies claiming they can tell us about the natural world beyond what physics tells us. These people trying to suspend falsifiability are all dishonest cranks. That is why falsifiability is important, not because of any subtleties. There are straight-up cranks, even amongst trained physicists, who want to blur the line between "philosophy"/"religion" and physics and claim Jesus' daddy made the universe. Falsifiability stops these cranks getting their lies on the physical record. 30. Hi Sabine, sorry for a late reply. (everyone's busy) All apologies for the misunderstanding. 1) I did read your post. 2) I know you didn't mention him by name, but (in my mind) I don't see how one can speak of 'falsifiability' and not 'bring up' Karl Popper. 3) In the intro to your post you said " I don't know why we should even be talking about this". I agreed. ... and then wondered why we were. I thought you might be making a separate statement of some kind. 
At any rate, I'm off to view your new video while I have time. (Can't wait.) Once again, Love Your Work. All love, 31. Maths does exist in reality beyond us. Of course it does, because maths comes from a description of reality. 5 objects can't be arranged into a rectangle whether human mathematicians exist or not. 1. Steven, I'm not going to say that you're wrong about that. If you want to define maths to exist beyond us given that various statements in it are true of this world (such as that 5 objects cannot be arranged into a rectangle, which I certainly agree with), then yes, math does indeed exist. I'm not sure that your definition for "exists" happens to be all that useful however. In that case notice that English and French also exist beyond us, given that statements can be made in these human languages which are true of this world. The term "exists" may be defined in an assortment of ways, though when people start getting Platonic with our languages, I tend to notice them developing all sorts of silly notions. Max Tegmark would be a prominent example of this sort of thing. 32. Sabine Hossenfelder posted (Thursday, April 25, 2019): Are theories also based on principles for how to obtain empirical evidence, such as, famously, Einstein's requirement that »All our space-time verifications invariably amount to a determination of space-time coincidences {... such as ...} meetings of two or more of these material points.«? > To make predictions you always need a concrete model, and you need initial conditions. As far as this is referring to experimentally testable predictions, this is a very remarkable (and, to me, agreeable and welcome) statement, contrasting with (widespread) demands that "scientific theories ought to make experimentally testable predictions", and claims that certain theories did make experimentally testable predictions. However: Is there a principal reason for considering "[concrete] initial conditions" separate from "a [or any] concrete model", and not as part of it? Sabine Hossenfelder wrote (2:42 AM, April 27, 2019): > A model is not a prediction. You make a prediction with a model. Are concrete, experimentally falsifiable predictions part of models? > if you falsify one model [...] someone [...] will continue to "research" the next model I find this description perfectly agreeable and welcome; yet it also seems very remarkable because it appears to contrast with (widespread) demands that "scientific theories ought to be falsifiable", and claims that certain theories had been falsified. > That's why these predictions are useless: [...] Any predictions may still be used as rationales for economic decisions, or bets. 33. mls: Where exactly do these abstractions reside in time and space? Originally the abstractions were embodied in mental models, made of neurons. Now they are also on paper, in textbooks, as a way to program and recreate such neural models. Math is just recursively abstracting abstractions. When I count my goats, each finger stands for a goat. If I have a lot of goats, each hash mark stands for one finger. When I fill a "hand", I use the thumb to cross four fingers, and start another hand. Abstractions of abstractions. Math is derived from reality, and built to model reality; but the rules of math can be extended, by analogy, beyond anything we see in reality. We can extend our two-dimensional geometry to three dimensions, and then to any number of dimensions; I cluster mathematical objects in high-dimensional space fairly frequently; it is a convenient way to find patterns.
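A toy version of that kind of pattern-finding, assuming numpy and scikit-learn are available (the dimension count and cluster sizes are arbitrary): plant a few clusters in a high-dimensional space and let k-means recover them.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim, per_cluster = 143, 50  # arbitrary: 3 clusters of 50 points in 143 dimensions
centers = rng.normal(scale=10.0, size=(3, dim))
points = np.vstack([c + rng.normal(size=(per_cluster, dim)) for c in centers])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
print(np.bincount(labels))  # three groups of 50, recovered without any theory of "goats"
```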
But I don't think anybody is proposing that reality has 143 dimensions, or that goats exist in that space. So math can be used to describe reality, or, because the abstractions can be extended beyond reality, it can also be used to describe non-reality. If you are looking for "truth", that mixed bag is the wrong place to look. Even a simple smooth parabolic function describing a thrown object falling to earth is an abstraction. If all the world is quantized, there is no such thing: The smooth function is just an estimator of something taking quantum jumps in a step-like fashion, even though the steps are very tiny in time and space; so the progress appears to be perfectly smooth. To find truth, we need to return to reality, and prove the mathematics we are using describes something observable. That is how we prove we are not using the parts of the mixed bag of mathematics that are abstractions extended beyond reality. 34. @Steven Evans In response to David Hume's "An Enquiry Concerning Human Understanding", Kant offered an account of objective knowledge grounded in the subjective experience of individuals. He distinguished between mathematics (sensible intuition) and logic (intelligible understanding). But to take this as his starting point he had to deny the actuality of space and time as absolute concepts. He took space to correspond with geometry and time to correspond with arithmetic. The relation to sensible intuition he claimed for these correspondences is expressed in the sentences, "Time is the form of inner sense." "Space, by all appearances, is the form of outer sense." The qualification in the second statement reflects the fact that the information associated with what we do not consider as part of ourselves is conditioned by our sensory apparatus before it can be called a spatial manifold. Hence, external objects are only known through "appearances". This certainly provides a framework by which mathematics can be understood in terms of descriptions related to the reality of experience. But it does not provide for a reality outside of our own. This, of course, is why I acknowledged Philosopher Eric's knowledge claim in his response to me. You seem to be assuming that an external reality substantiates the independent existence of your descriptions. The Christians I know use the same strategy to assure themselves of God's existence and the efficacy of prayer. Kant's position on geometry is one instance of misinformation in the folklore of mathematical foundations. But, that does not really affect many of the arguments used against him. Where in sensible experience, for example, can one find a line without breadth? Or, if mathematics is grounded in visualizations, what of optical illusions? These criticisms are not without merit. Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined. Modern analytical philosophy recovers this sense of necessity by reducing mathematics to a priori stipulations presentable in formal languages, with consequences obtained by rules for admissible syntactic transformations. Any relationship with sensible intuition is eradicated. What is largely lost is the ability to account for the utility of mathematics in applications. The issues are just not that simple. And they were alluded to by George Ellis in Dr. Hossenfelder's book. 1. @mls: "Time is the form of inner sense." / "Space, by all appearances, is the form of outer sense."
Kant sounds utterly ridiculous, and these sound like trying to force a parallelism that does not exist. These definitions have no utility I can fathom. mls: Where in sensible experience, for example, can one find a line without breadth? Points without size and lines without breadth are abstractions used to avoid the complications of points and lines with breadth, so our answers (say about the sums of angles) are precise and provable. A line without breadth is the equivalent of a limit: If we reason using lines with breadth, we must give the breadth a value, say W. Then our answer will depend on W. The geometry of lines without breadth is what we get as W approaches 0, and this produces precise answers instead of ranges that depend on W. mls: Or, if mathematics is grounded in visualizations, what of optical illusions? Mathematics began grounded in reality. Congenitally blind people can learn and understand mathematics without visualizations. Those are shortcuts to understanding for sighted people, not a necessity for mathematics, so optical illusions are meaningless. Thus, contrary to your assertion, those criticisms are indeed without merit. Mathematics began by abstracting things in the physical world, but by logical inference it has grown beyond that in order to increase its utility. mls: Any relationship with sensible intuition is eradicated. Not any relationship. Mathematics can trump one's sensible intuition; that is a good thing. Our brains work by "rules of thumb"; they work with neural models that are probabilistic in nature and therefore not precise. Mathematics allows precise reasoning and precise predictions, some beyond the capabilities of "intuition". Dr. Hossenfelder recently tweeted an article on superconductivity appearing in stacked graphene sheets, with one rotated by exactly 1.1 degrees with respect to the other. This effect was dismissed by many researchers out of hand; their intuition told them the maths predicting something different must be wrong. But it turns out the maths were right; something (superconductivity) does emerge at this precise angle. Intuition is not precise, and correspondence with intuition is not the goal; correspondence with reality is the goal. mls: What is largely lost is the ability to account for the utility of mathematics in applications. No it isn't; mathematics has been evolving since the beginning to have utility and applications. I do not find it surprising that when our goal is to use mathematics to model the real world, by trial and error we find or invent the mathematics to do that, and then have successes in that endeavor. What is hard to understand about that? It is not fundamentally different than wanting to grow crops and by trial and error figuring out a set of rules to do that. mls: The issues are just not that simple. I think they are pretty simple. Neural models of physical behaviors are not precise; thus intuition can be grossly mistaken. We all get fooled by good stage magicians; even good stage magicians can be fooled by good stage magicians. But the rules of mathematics can be precise, and thus precisely predictive, because we designed it that way; and thus mathematics can predict things that test out to be true in cases where our "rule of thumb" intuition predicts otherwise, because intuition evolved in a domain in which logical precision was not a necessity of survival, and fast "most likely" or "safest" decisions were a survival advantage.
“Falsifiable” continues to be a poor term that I’m surprised so many people are happy using. Yeah, yeah, I know... Popper. It’s still a poor term. Nothing in empirical scientific inquiry is ever truly proven false (or true), only shown to be more or less likely. “Testable” is a far better word to describe that criterion for a hypothesis or a prediction. It renders a lot of the issues raised in this thread much less sticky.

36. "various statements in it are true of this world" You keep getting it the wrong way round. The world came first. Human maths started by people counting objects in the physical world. Physical arithmetic was already there, then people observed it.

37. OK, so what I strictly mean but couldn't be bothered to write out was that if you take a huge number of what appear to observation, at a certain level of precision, as quantum objects, they combine to produce, at the natural level of observation of the senses of humans and other animals, enough discrete-yness to embody arithmetic. This discrete-yness and this physical arithmetic exist (are available for observation) for anything coming along with senses at the classical level. In this arena of classical discrete-yness, 5 discrete-y objects can't be arranged into a rectangle, for example. I am aware of my observations, so I'll take a punt that you are similarly aware of your observations; that what I observe as my body and your body exist in the sense that they are available for observation to observers like ourselves; and now it makes no sense not to accept the existence of the 5 objects, in the sense that they are available for observation. As I said, arithmetic exists in reality and human maths comes from an observation of that arithmetic. It is that simple.

"Where in sensible experience, for example, can one find a line without breadth?" The reality of space at the human level is 3-D Euclideanesque. A room of length (roughly) 3 metres and breadth (roughly) 4 metres will have a diagonal of (roughly) 5 metres. For best results, count the atoms.

"The Christians I know use the same strategy to assure themselves of God's existence and the efficacy of prayer." "God" doesn't exist - it's a story. However, 5 objects really can't be arranged into a rectangle - try it.

"Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined." I would stake my life on the validity of the proof that sqrt(2) is irrational. Undermined by whom? Dodgy philosophers who have had their papers read by a couple of other dodgy philosophers? Meanwhile, Andrew Wiles has proved Fermat's Last Theorem, covering infinitely many cases. Also known as proving from axioms, as Euclid did over 2,000 years ago. And all originally based on our observations of the world.

38. Well, with Dr. Hossenfelder's permission, perhaps I might respond with a post or two that actually reflect my views rather than what one finds in the literature. At the link, one may find the free Boolean lattice on two generators. Its elements are labeled with the symbols typically taught in courses on propositional logic. If one really wants to argue that the claims of philosophers and their logicians are of questionable merit, this is one of the places to start. Let's see a show of hands: who sees the tetrahedron? In combinatorial topology, one decomposes a tetrahedron into vertices, edges, faces, and an interior. With the exception of the bottom element, the order-theoretic representation of this decomposition is order-isomorphic with the lattice above.
And one need only hold that the bottom element denotes the exterior to complete the sixteen-element set here. Philosophers and their logicians hold that mathematics has been arithmetized. Even though the most basic representation of how their logical connectives relate to one another can be directly compared with a tetrahedron, they will insist that geometry has been eliminated from mathematics. You can thank David Hilbert and the formalists for that. Say all that you want about Euclid; Hilbert's "Foundations of Geometry" reconstructs the Euclidean corpus without reference to motions or temporality. Remember this the next time you want to recite some result from mathematical logic which is contrary to your beliefs about mathematics. So, if logicians have simply put labels on a tetrahedron, one has just cause for questioning the relevance of their claims concerning the foundations of mathematics. But that bottom element is still bothersome, because it is not typically addressed in combinatorial topology. In the link, one can find the 3-dimensional projection of a tesseract, although Wikipedia does not show the edges connecting the vertices to a point at infinity. When this is added, all of the elements are 4-connected, as in the Boolean order. The bottom of the Boolean order would coincide with the point at infinity. Amazing, is it not? Our logic words have a 4-dimensional character.

Let me repeat something I have maintained in blog posts here. If the theory of evolution is among our best science, then we have no more facility for knowing the truth of reality than an earthworm. I do not need Euclid's axioms to make two paper tetrahedra with vertices colored so that they cannot be superimposed with all four colors matched. One can do a lot with that to criticize received views in the foundations community. Ignoring their arguments because you believe differently just puts you in the queue of "he said, she said" that Steven Evans has used to discredit philosophers.

39. I read a preview of one of Smolin's books on Amazon in which he proclaims the importance of Leibniz's identity of indiscernibles. Since I have read Leibniz, I would tend to agree with him. However, Leibniz also motivated the search for a logical calculus. So the principle is more often associated with logical contexts. Leibniz attributes the principle to St. Thomas Aquinas, to answer how God knows each soul individually. In keeping with Smolin's account, Leibniz does claim to be generalizing the principle to a geometric application. But in the debates over how Leibniz and Newton differed, the principle became associated with its logical application.

Steven Evans would like me to acknowledge the reality of arithmetic in some sense. Kant had probably been the first critic of the logical principle. He asserted that numerical difference is known through spatial intuition. In modern contexts, the analogous portrayal can be found in Strawson's book "Individuals". He uses a diagram with different shapes to explain the distinction between qualitative identity and quantitative identity. In other words, numerical difference is grounded by spatial intuition. Since mathematicians make it a habit to work from axioms, I wrote a set of axioms intended to augment set theory by interpreting the failure of equality as topological separation. In other words, two points in space are distinct if one is in a part of space that the other is not.
When you run around using a membership relation while thinking in terms of geometric incidence, keep in mind that this is not what a membership predicate means. One may say that the notion of a set is not yet decided, but the received view is one where geometry is deprecated because mathematics has been arithmetized. And, since numbers can be defined in logic, any relation of the membership predicate with numerical identity associated with spatial intuition has been lost. My views on mathematics are far closer to those who study the physical sciences than not. So do not hold me accountable for a summary of what is the case in the foundations of mathematics. You have physicists running around pretending that the mathematics is telling them truths about the universe, and others using mathematics to say that they should be believed. My point is that they are further enabled by what is going on in the foundations of mathematics.

40. @Steven Evans "a story" You have probably never heard of deflationary nominalism. It is one way of speaking of mathematical objects without committing to their reality. Motivated by the fact that core mathematicians actually define their terms, I needed a logic that supported descriptions. Free logics do that, although the general discussion of free logics does not apply to my personal work. The logic I had to write for my own purposes is better compared with how free logics can be used for fictional accounts. My logic is classical (rather than paraconsistent), and the method "works" because proofs are finite. The standard account of formal systems relies on a completed infinity outside of the context of an axiom system. I doubt that Euclid ever had this in mind. David Hilbert turned his attention to arithmetical metamathematics with the objective of a finite consistency proof precisely because completed infinities are *NOT* sensibly demonstrable.

41. @Dr. Castaldo I really have no reason to accept reductionist arguments in physics. If you can substantiate your claim, then do so. Words explaining words is how we get into these problems to begin with. Having said that, a comment in another thread made some small reference to circularity. I forget the specifics right now, but I pointed out the result of a 2016 Science article about concept formation and hexagonal grid cells. It is a beautiful circularity. Abstract concepts depend upon neural structures that exhibit hexagonal symmetry. There is a book on my shelf which explicitly classifies hexagons and relates them to tetrahedra. There are string theorists asking people to believe in six rolled-up dimensions. And there is the need for physical theories to build the instruments and interpret the data so we can identify how hexagonal symmetries pertain to abstract concept formation. You are a pragmatic gentleman. Thank you for your other replies as well. For what this is worth, I am certainly not looking for truth. When Frege retracted his logicism he suggested that all mathematics is geometrical. That is mostly what I have uncovered from my own deliberations. It really does not make sense to speak of truth and falsity in geometry.

42. @mls: What is "amazing" about that? I can do the same thing on paper better with bits; given 2 binary states there are 2^2 = 4 possible states. In binary we can uniquely number them, [0,1,2,3]. That is not "four dimensional" any more than 10 states by 10 states is "100 dimensional". On your "earthworm" comparison, obviously that is wrong.
We have far more facility than an earthworm for knowing the truth of reality, or earthworms wouldn't let us use them as live bait. And fish wouldn't fall for that and bite into a hook if they could discern reality as well as we can. Humans understand the truth of reality well enough to manipulate chemistry on the atomic level, to build microscopic machines, and to create chemical compounds and materials on a massive scale that simply do not exist in nature. Only humans can make and execute plans that require decades, or even multiple lifetimes, to complete. Where are the particle colliders built by any non-human ape or other animal? I have no idea how you think the theory of evolution creates any equivalence between the intelligence of earthworms and that of humans. I suspect you don't understand evolution.

43. @mls You don't address the point that maths exists in reality and came from reality. Obviously, the field of logic has something to say about maths, and it has a credible standard of truth and method, like science and maths. I do not need to discredit the field of philosophy, as it discredits itself - there are professional philosophers, members of the American Philosophical Association, who publish "proofs of God"(!!); in the comments in this very blog, a panpsychist professional philosopher couldn't answer the blog author's point that the results of the Standard Model are not compatible with panpsychism being an explanation of consciousness in the brain. Philosophers can churn out such nonsense because the "standard" of truth in philosophy is to write a vaguely plausible-sounding natural-language "proof". This opens the field to all kinds of cranks and frauds. And these frauds want to have their say about natural science too, but fortunately the falsifiability barrier keeps them at bay. It is not a "he said, she said" argument. I have explained why I think maths exists in reality.
Figure: Complete acetylene (H–C≡C–H) molecular orbital set. The left column shows MOs which are occupied in the ground state, with the lowest-energy orbital at the top. The white and grey line visible in some MOs is the molecular axis passing through the nuclei. The orbital wave functions are positive in the red regions and negative in the blue. The right column shows virtual MOs which are empty in the ground state, but may be occupied in excited states.

In chemistry, a molecular orbital (MO) is a mathematical function describing the wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region. The term orbital was introduced by Robert S. Mulliken in 1932 as an abbreviation for one-electron orbital wave function.[1] At an elementary level, it is used to describe the region of space in which the function has a significant amplitude. Molecular orbitals are usually constructed by combining atomic orbitals or hybrid orbitals from each atom of the molecule, or other molecular orbitals from groups of atoms. They can be quantitatively calculated using the Hartree–Fock or self-consistent field (SCF) methods.

A molecular orbital (MO) can be used to represent the regions in a molecule where an electron occupying that orbital is likely to be found. Molecular orbitals are obtained from the combination of atomic orbitals, which predict the location of an electron in an atom. A molecular orbital can specify the electron configuration of a molecule: the spatial distribution and energy of one (or one pair of) electron(s). Most commonly a MO is represented as a linear combination of atomic orbitals (the LCAO-MO method), especially in qualitative or very approximate usage. They are invaluable in providing a simple model of bonding in molecules, understood through molecular orbital theory. Most present-day methods in computational chemistry begin by calculating the MOs of the system. A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. In the case of two electrons occupying the same orbital, the Pauli principle demands that they have opposite spin. Necessarily this is an approximation, and highly accurate descriptions of the molecular electronic wave function do not have orbitals (see configuration interaction).

Molecular orbitals are, in general, delocalized throughout the entire molecule. Moreover, if the molecule has symmetry elements, its nondegenerate molecular orbitals are either symmetric or antisymmetric with respect to any of these symmetries. In other words, application of a symmetry operation S (e.g., a reflection, rotation, or inversion) to molecular orbital ψ results in the molecular orbital being unchanged or reversing its mathematical sign: Sψ = ±ψ. In planar molecules, for example, molecular orbitals are either symmetric (sigma) or antisymmetric (pi) with respect to reflection in the molecular plane.
If molecules with degenerate orbital energies are also considered, a more general statement holds: molecular orbitals form bases for the irreducible representations of the molecule's symmetry group.[2] The symmetry properties of molecular orbitals mean that delocalization is an inherent feature of molecular orbital theory and make it fundamentally different from (and complementary to) valence bond theory, in which bonds are viewed as localized electron pairs, with allowance for resonance to account for delocalization. In contrast to these symmetry-adapted canonical molecular orbitals, localized molecular orbitals can be formed by applying certain mathematical transformations to the canonical orbitals. The advantage of this approach is that the orbitals will correspond more closely to the "bonds" of a molecule as depicted by a Lewis structure. As a disadvantage, the energy levels of these localized orbitals no longer have physical meaning. (The discussion in the rest of this article will focus on canonical molecular orbitals. For further discussions on localized molecular orbitals, see: natural bond orbital and sigma-pi and equivalent-orbital models.)

Formation of molecular orbitals

Molecular orbitals arise from allowed interactions between atomic orbitals, which are allowed if the symmetries (determined from group theory) of the atomic orbitals are compatible with each other. The efficiency of atomic orbital interactions is determined from the overlap (a measure of how well two orbitals constructively interact with one another) between two atomic orbitals, which is significant if the atomic orbitals are close in energy. Finally, the number of molecular orbitals formed must be equal to the number of atomic orbitals in the atoms being combined to form the molecule.

Qualitative discussion

For an imprecise, but qualitatively useful, discussion of the molecular structure, the molecular orbitals can be obtained from the "linear combination of atomic orbitals molecular orbital method" ansatz. Here, the molecular orbitals are expressed as linear combinations of atomic orbitals.[3]

Linear combinations of atomic orbitals (LCAO)

Molecular orbitals were first introduced by Friedrich Hund[4][5] and Robert S. Mulliken[6][7] in 1927 and 1928.[8][9] The linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones.[10] His ground-breaking paper showed how to derive the electronic structure of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory is part of the start of modern quantum chemistry. Linear combinations of atomic orbitals (LCAO) can be used to estimate the molecular orbitals that are formed upon bonding between the molecule's constituent atoms. Similar to an atomic orbital, a Schrödinger equation, which describes the behavior of an electron, can be constructed for a molecular orbital as well. Linear combinations of atomic orbitals, or the sums and differences of the atomic wavefunctions, provide approximate solutions to the Hartree–Fock equations which correspond to the independent-particle approximation of the molecular Schrödinger equation.
For simple diatomic molecules, the wavefunctions obtained are represented mathematically by the equations

Ψ = c_a ψ_a + c_b ψ_b (bonding)
Ψ* = c_a ψ_a − c_b ψ_b (antibonding)

where Ψ and Ψ* are the molecular wavefunctions for the bonding and antibonding molecular orbitals, respectively, ψ_a and ψ_b are the atomic wavefunctions from atoms a and b, respectively, and c_a and c_b are adjustable coefficients. These coefficients can be positive or negative, depending on the energies and symmetries of the individual atomic orbitals. As the two atoms become closer together, their atomic orbitals overlap to produce areas of high electron density, and, as a consequence, molecular orbitals are formed between the two atoms. The atoms are held together by the electrostatic attraction between the positively charged nuclei and the negatively charged electrons occupying bonding molecular orbitals.[11]

Bonding, antibonding, and nonbonding MOs

When atomic orbitals interact, the resulting molecular orbital can be of three types: bonding, antibonding, or nonbonding.

Bonding MOs:
• Bonding interactions between atomic orbitals are constructive (in-phase) interactions.
• Bonding MOs are lower in energy than the atomic orbitals that combine to produce them.

Antibonding MOs:
• Antibonding interactions between atomic orbitals are destructive (out-of-phase) interactions, with a nodal plane where the wavefunction of the antibonding orbital is zero between the two interacting atoms.
• Antibonding MOs are higher in energy than the atomic orbitals that combine to produce them.

Nonbonding MOs:
• Nonbonding MOs are the result of no interaction between atomic orbitals because of lack of compatible symmetries.
• Nonbonding MOs will have the same energy as the atomic orbitals of one of the atoms in the molecule.

Sigma and pi labels for MOs

The type of interaction between atomic orbitals can be further categorized by the molecular-orbital symmetry labels σ (sigma), π (pi), δ (delta), φ (phi), γ (gamma), etc. These are the Greek letters corresponding to the atomic orbitals s, p, d, f and g respectively. The number of nodal planes containing the internuclear axis between the atoms concerned is zero for σ MOs, one for π, two for δ, three for φ and four for γ.

σ symmetry

A MO with σ symmetry results from the interaction of either two atomic s-orbitals or two atomic pz-orbitals. An MO will have σ-symmetry if the orbital is symmetric with respect to the axis joining the two nuclear centers, the internuclear axis. This means that rotation of the MO about the internuclear axis does not result in a phase change. A σ* orbital, the sigma antibonding orbital, also maintains the same phase when rotated about the internuclear axis. The σ* orbital has a nodal plane that is between the nuclei and perpendicular to the internuclear axis.[12]

π symmetry

A MO with π symmetry results from the interaction of either two atomic px orbitals or py orbitals. An MO will have π symmetry if the orbital is asymmetric with respect to rotation about the internuclear axis. This means that rotation of the MO about the internuclear axis will result in a phase change. There is one nodal plane containing the internuclear axis, if real orbitals are considered. A π* orbital, the pi antibonding orbital, will also produce a phase change when rotated about the internuclear axis. The π* orbital also has a second nodal plane between the nuclei.[12][13][14][15]

δ symmetry

A MO with δ symmetry results from the interaction of two atomic d_xy or d_x²−y² orbitals.
Because these molecular orbitals involve low-energy d atomic orbitals, they are seen in transition-metal complexes. A δ bonding orbital has two nodal planes containing the internuclear axis, and a δ* antibonding orbital also has a third nodal plane between the nuclei.

φ symmetry

Figure: Suitably aligned f atomic orbitals overlap to form a phi molecular orbital (a phi bond).

Theoretical chemists have conjectured that higher-order bonds, such as phi bonds corresponding to overlap of f atomic orbitals, are possible. There is as of 2005 only one known example of a molecule purported to contain a phi bond (a U−U bond, in the molecule U2).[16]

Gerade and ungerade symmetry

For molecules that possess a center of inversion (centrosymmetric molecules) there are additional labels of symmetry that can be applied to molecular orbitals. Centrosymmetric molecules include homonuclear diatomics (X2), octahedral complexes (EX6), and square planar complexes (EX4). Non-centrosymmetric molecules include heteronuclear diatomics (XY) and tetrahedral molecules (EX4). If inversion through the center of symmetry in a molecule results in the same phases for the molecular orbital, then the MO is said to have gerade (g) symmetry, from the German word for even. If inversion through the center of symmetry in a molecule results in a phase change for the molecular orbital, then the MO is said to have ungerade (u) symmetry, from the German word for odd. For a bonding MO with σ-symmetry, the orbital is σg (s' + s'' is symmetric), while for an antibonding MO with σ-symmetry the orbital is σu, because inversion of s' – s'' is antisymmetric. For a bonding MO with π-symmetry the orbital is πu, because inversion through the center of symmetry would produce a sign change (the two p atomic orbitals are in phase with each other but the two lobes have opposite signs), while an antibonding MO with π-symmetry is πg, because inversion through the center of symmetry would not produce a sign change (the two p orbitals are antisymmetric by phase).[12]

MO diagrams

The qualitative approach of MO analysis uses a molecular orbital diagram to visualize bonding interactions in a molecule. In this type of diagram, the molecular orbitals are represented by horizontal lines; the higher a line, the higher the energy of the orbital, and degenerate orbitals are placed on the same level with a space between them. Then, the electrons to be placed in the molecular orbitals are slotted in one by one, keeping in mind the Pauli exclusion principle and Hund's rule of maximum multiplicity (only 2 electrons, having opposite spins, per orbital; place as many unpaired electrons on one energy level as possible before starting to pair them). For more complicated molecules, the wave mechanics approach loses utility in a qualitative understanding of bonding (although it is still necessary for a quantitative approach). Some properties:

• A basis set of orbitals includes those atomic orbitals that are available for molecular orbital interactions, which may be bonding or antibonding
• The number of molecular orbitals is equal to the number of atomic orbitals included in the linear expansion or the basis set
• If the molecule has some symmetry, the degenerate atomic orbitals (with the same atomic energy) are grouped in linear combinations (called symmetry-adapted atomic orbitals (SO)), which belong to the representation of the symmetry group, so the wave functions that describe the group are known as symmetry-adapted linear combinations (SALC).
• The number of molecular orbitals belonging to one group representation is equal to the number of symmetry-adapted atomic orbitals belonging to this representation
• Within a particular representation, the symmetry-adapted atomic orbitals mix more if their atomic energy levels are closer.

The general procedure for constructing a molecular orbital diagram for a reasonably simple molecule can be summarized as follows:

1. Assign a point group to the molecule.
2. Look up the shapes of the SALCs.
3. Arrange the SALCs of each molecular fragment in increasing order of energy, first noting whether they stem from s, p, or d orbitals (and put them in the order s < p < d), and then their number of internuclear nodes.
4. Combine SALCs of the same symmetry type from the two fragments, and from N SALCs form N molecular orbitals.
5. Estimate the relative energies of the molecular orbitals from considerations of overlap and relative energies of the parent orbitals, and draw the levels on a molecular orbital energy level diagram (showing the origin of the orbitals).
6. Confirm, correct, and revise this qualitative order by carrying out a molecular orbital calculation using commercial software.[17]

Bonding in molecular orbitals

Orbital degeneracy

Molecular orbitals are said to be degenerate if they have the same energy. For example, in the homonuclear diatomic molecules of the first ten elements, the molecular orbitals derived from the px and the py atomic orbitals result in two degenerate bonding orbitals (of low energy) and two degenerate antibonding orbitals (of high energy).[11]

Ionic bonds

When the energy difference between the atomic orbitals of two atoms is quite large, one atom's orbitals contribute almost entirely to the bonding orbitals, and the other atom's orbitals contribute almost entirely to the antibonding orbitals. Thus, the situation is effectively that one or more electrons have been transferred from one atom to the other. This is called a (mostly) ionic bond.

Bond order

The bond order, or number of bonds, of a molecule can be determined by combining the number of electrons in bonding and antibonding molecular orbitals. A pair of electrons in a bonding orbital creates a bond, whereas a pair of electrons in an antibonding orbital negates a bond. For example, N2, with eight electrons in bonding orbitals and two electrons in antibonding orbitals, has a bond order of three, which constitutes a triple bond. Bond strength is proportional to bond order—a greater amount of bonding produces a more stable bond—and bond length is inversely proportional to it—a stronger bond is shorter. There are rare exceptions to the requirement of a molecule having a positive bond order. Although Be2 has a bond order of 0 according to MO analysis, there is experimental evidence of a highly unstable Be2 molecule having a bond length of 245 pm and a bond energy of 10 kJ/mol.[12][18]

The highest occupied molecular orbital and lowest unoccupied molecular orbital are often referred to as the HOMO and LUMO, respectively. The difference of the energies of the HOMO and LUMO is called the HOMO-LUMO gap. This notion is often a matter of confusion in the literature and should be considered with caution. Its value is usually located between the fundamental gap (the difference between the ionization potential and the electron affinity) and the optical gap. In addition, the HOMO-LUMO gap can be related to a bulk material band gap or transport gap, which is usually much smaller than the fundamental gap.
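To make the bond-order bookkeeping above concrete, here is a minimal sketch in plain Python (the electron counts are the ones quoted in this article; the helper function and its name are mine):

```python
# Bond order = (electrons in bonding MOs - electrons in antibonding MOs) / 2.
def bond_order(n_bonding, n_antibonding):
    return (n_bonding - n_antibonding) / 2

# Electron counts as quoted in this article.
examples = {
    "H2":  (2, 0),   # both electrons in the bonding sigma_g(1s) MO
    "He2": (2, 2),   # sigma_g(1s) and sigma_u*(1s) both filled
    "Li2": (4, 2),   # sigma_g(1s), sigma_u*(1s) and sigma_g(2s) filled
    "N2":  (8, 2),   # eight bonding, two antibonding electrons
}
for molecule, (nb, na) in examples.items():
    print(f"{molecule}: bond order = {bond_order(nb, na):.0f}")
# prints: H2: 1, He2: 0, Li2: 1, N2: 3
```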
Homonuclear diatomics

Homonuclear diatomic MOs contain equal contributions from each atomic orbital in the basis set. This is shown in the homonuclear diatomic MO diagrams for H2, He2, and Li2, all of which contain symmetric orbitals.[12]

Figure: Electron wavefunctions for the 1s orbital of a lone hydrogen atom (left and right) and the corresponding bonding (bottom) and antibonding (top) molecular orbitals of the H2 molecule. The real part of the wavefunction is the blue curve, and the imaginary part is the red curve. The red dots mark the locations of the nuclei. The electron wavefunction oscillates according to the Schrödinger wave equation, and orbitals are its standing waves. The standing wave frequency is proportional to the orbital's kinetic energy. (This plot is a one-dimensional slice through the three-dimensional system.)

As a simple MO example, consider the electrons in a hydrogen molecule, H2 (see molecular orbital diagram), with the two atoms labelled H' and H". The lowest-energy atomic orbitals, 1s' and 1s", do not transform according to the symmetries of the molecule. However, the following symmetry-adapted atomic orbitals do:

1s' – 1s" Antisymmetric combination: negated by reflection, unchanged by other operations
1s' + 1s" Symmetric combination: unchanged by all symmetry operations

The symmetric combination (called a bonding orbital) is lower in energy than the basis orbitals, and the antisymmetric combination (called an antibonding orbital) is higher. Because the H2 molecule has two electrons, they can both go in the bonding orbital, making the system lower in energy (hence more stable) than two free hydrogen atoms. This is called a covalent bond. The bond order is equal to the number of bonding electrons minus the number of antibonding electrons, divided by 2. In this example, there are 2 electrons in the bonding orbital and none in the antibonding orbital; the bond order is 1, and there is a single bond between the two hydrogen atoms.

On the other hand, consider the hypothetical molecule of He2 with the atoms labeled He' and He". As with H2, the lowest-energy atomic orbitals are the 1s' and 1s", and do not transform according to the symmetries of the molecule, while the symmetry-adapted atomic orbitals do. The symmetric combination—the bonding orbital—is lower in energy than the basis orbitals, and the antisymmetric combination—the antibonding orbital—is higher. Unlike H2, with two valence electrons, He2 has four in its neutral ground state. Two electrons fill the lower-energy bonding orbital, σg(1s), while the remaining two fill the higher-energy antibonding orbital, σu*(1s). Thus, the resulting electron density around the molecule does not support the formation of a bond between the two atoms; without a stable bond holding the atoms together, the molecule would not be expected to exist. Another way of looking at it is that there are two bonding electrons and two antibonding electrons; therefore, the bond order is 0 and no bond exists (the molecule has one bound state supported by the Van der Waals potential).

Dilithium Li2 is formed from the overlap of the 1s and 2s atomic orbitals (the basis set) of two Li atoms. Each Li atom contributes three electrons for bonding interactions, and the six electrons fill the three MOs of lowest energy, σg(1s), σu*(1s), and σg(2s). Using the equation for bond order, it is found that dilithium has a bond order of one, a single bond.
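The bonding/antibonding splitting described here can be illustrated with a small numerical sketch: diagonalizing a 2×2 Hamiltonian in the {1s', 1s"} basis (a Hückel-style toy model; the on-site energy alpha and coupling beta below are illustrative values, not taken from this article) places the symmetric combination below the atomic level and the antisymmetric combination above it:

```python
import numpy as np

alpha = -13.6  # illustrative on-site (atomic orbital) energy, eV
beta = -5.0    # illustrative coupling (resonance) integral, eV

# Hamiltonian in the {1s', 1s''} basis; overlap is neglected
# (a Hueckel-style approximation).
H = np.array([[alpha, beta],
              [beta,  alpha]])

energies, vectors = np.linalg.eigh(H)  # eigenvalues in ascending order
for E, v in zip(energies, vectors.T):
    kind = "bonding" if E < alpha else "antibonding"
    print(f"E = {E:6.2f} eV, coefficients = {np.round(v, 3)} ({kind})")
# The lower level (alpha + beta) has equal-sign coefficients, i.e. the
# symmetric combination 1s' + 1s''; the upper level (alpha - beta) has
# opposite-sign coefficients, i.e. the antisymmetric combination 1s' - 1s''.
```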
Noble gases

Considering a hypothetical molecule of He2: since the basis set of atomic orbitals is the same as in the case of H2, we find that both the bonding and antibonding orbitals are filled, so there is no energy advantage to the pair. HeH would have a slight energy advantage, but not as much as H2 + 2 He, so the molecule is very unstable and exists only briefly before decomposing into hydrogen and helium. In general, we find that atoms such as He that have full energy shells rarely bond with other atoms. Except for short-lived Van der Waals complexes, there are very few noble gas compounds known.

Heteronuclear diatomics

While MOs for homonuclear diatomic molecules contain equal contributions from each interacting atomic orbital, MOs for heteronuclear diatomics contain different atomic orbital contributions. Orbital interactions to produce bonding or antibonding orbitals in heteronuclear diatomics occur if there is sufficient overlap between atomic orbitals as determined by their symmetries and similarity in orbital energies. In hydrogen fluoride HF, overlap between the H 1s and F 2s orbitals is allowed by symmetry, but the difference in energy between the two atomic orbitals prevents them from interacting to create a molecular orbital. Overlap between the H 1s and F 2pz orbitals is also symmetry allowed, and these two atomic orbitals have a small energy separation. Thus, they interact, leading to creation of σ and σ* MOs and a molecule with a bond order of 1. Since HF is a non-centrosymmetric molecule, the symmetry labels g and u do not apply to its molecular orbitals.[19]

Quantitative approach

To obtain quantitative values for the molecular energy levels, one needs to have molecular orbitals that are such that the configuration interaction (CI) expansion converges fast towards the full CI limit. The most common method to obtain such functions is the Hartree–Fock method, which expresses the molecular orbitals as eigenfunctions of the Fock operator. One usually solves this problem by expanding the molecular orbitals as linear combinations of Gaussian functions centered on the atomic nuclei (see linear combination of atomic orbitals and basis set (chemistry)). The equation for the coefficients of these linear combinations is a generalized eigenvalue equation known as the Roothaan equations, which are in fact a particular representation of the Hartree–Fock equation. There are a number of programs in which quantum chemical calculations of MOs can be performed, including Spartan and HyperChem.

Simple accounts often suggest that experimental molecular orbital energies can be obtained by the methods of ultra-violet photoelectron spectroscopy for valence orbitals and X-ray photoelectron spectroscopy for core orbitals. This, however, is incorrect, as these experiments measure the ionization energy, the difference in energy between the molecule and one of the ions resulting from the removal of one electron. Ionization energies are linked approximately to orbital energies by Koopmans' theorem. While the agreement between these two values can be close for some molecules, it can be very poor in other cases.

References

1. ^ Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Physical Review. 41 (1): 49–71. Bibcode:1932PhRv...41...49M. doi:10.1103/PhysRev.41.49.
2. ^ Cotton, F. Albert (1990). Chemical Applications of Group Theory (3rd ed.). New York: Wiley. p. 102. ISBN 0471510947.
OCLC 19975337.
3. ^ Albright, T. A.; Burdett, J. K.; Whangbo, M.-H. (2013). Orbital Interactions in Chemistry. Hoboken, N.J.: Wiley. ISBN 9780471080398.
4. ^ F. Hund, "Zur Deutung einiger Erscheinungen in den Molekelspektren" [On the interpretation of some phenomena in molecular spectra], Zeitschrift für Physik, vol. 36, pages 657-674 (1926).
5. ^ F. Hund, "Zur Deutung der Molekelspektren", Zeitschrift für Physik, Part I, vol. 40, pages 742-764 (1927); Part II, vol. 42, pages 93–120 (1927); Part III, vol. 43, pages 805-826 (1927); Part IV, vol. 51, pages 759-795 (1928); Part V, vol. 63, pages 719-751 (1930).
6. ^ R. S. Mulliken, "Electronic states. IV. Hund's theory; second positive nitrogen and Swan bands; alternate intensities", Physical Review, vol. 29, pages 637–649 (1927).
7. ^ R. S. Mulliken, "The assignment of quantum numbers for electrons in molecules", Physical Review, vol. 32, pages 186–222 (1928).
8. ^ Werner Kutzelnigg, "Friedrich Hund and Chemistry", on the occasion of Hund's 100th birthday, Angewandte Chemie International Edition, 35, 573–586 (1996).
9. ^ Robert S. Mulliken's Nobel Lecture, Science, 157, no. 3785, 13-24.
10. ^ Sir John Lennard-Jones, "The electronic structure of some diatomic molecules", Transactions of the Faraday Society, vol. 25, pages 668-686 (1929).
11. ^ a b Gary L. Miessler; Donald A. Tarr. Inorganic Chemistry. Pearson Prentice Hall, 3rd ed., 2004.
12. ^ a b c d e Catherine E. Housecroft, Alan G. Sharpe, Inorganic Chemistry, Pearson Prentice Hall; 2nd Edition, 2005, p. 29-33.
13. ^ Peter Atkins; Julio De Paula. Atkins' Physical Chemistry. Oxford University Press, 8th ed., 2006.
14. ^ Yves Jean; François Volatron. An Introduction to Molecular Orbitals. Oxford University Press, 1993.
15. ^ Michael Munowitz, Principles of Chemistry, Norton & Company, 2000, p. 229-233.
16. ^ Gagliardi, Laura; Roos, Björn O. (2005). "Quantum chemical calculations show that the uranium molecule U2 has a quintuple bond". Nature. 433: 848–851. Bibcode:2005Natur.433..848G. doi:10.1038/nature03249. PMID 15729337.
17. ^ Atkins, Peter; et al. (2006). Inorganic Chemistry (4th ed.). New York: W.H. Freeman. p. 208. ISBN 978-0-7167-4878-6.
18. ^ Bondybey, V.E. (1984). "Electronic structure and bonding of Be2". Chemical Physics Letters. 109 (5): 436–441. Bibcode:1984CPL...109..436B. doi:10.1016/0009-2614(84)80339-5.
19. ^ Catherine E. Housecroft, Alan G. Sharpe, Inorganic Chemistry, Pearson Prentice Hall; 2nd Edition, 2005, ISBN 0-13-039913-2, p. 41-43.
Haus Master Equation

Definition: an analytical equation describing the evolution of ultrashort pulses in a laser resonator

Categories: light pulses, methods

The Master equation as introduced by Hermann A. Haus [1] is the central piece of a physical model which can be used to describe the pulse evolution in the resonator of a mode-locked laser. The basic ideas underlying the model are the following:

• The pulse is described by a complex envelope function A(T, t), where t resolves the pulse shape within a single round trip and T is a slow time variable counting successive resonator round trips.
• The changes of the pulse profile per round trip are assumed to be small, so that the effects of all intracavity elements can be summed into a single differential equation for A.

As an example, consider the case of an actively mode-locked laser with intracavity dispersion and a Kerr nonlinearity. In that case, the Haus Master equation reads

T_R ∂A(T, t)/∂T = [ g + (D_g + iD) ∂²/∂t² − l − (M ω_m²/2) t² + iδ|A|² ] A

where the terms on the right-hand side describe (in that order) the constant part of the laser gain, the frequency dependence of the gain and the second-order dispersion, the resonator losses, the time-dependent modulator loss (approximated for small time arguments), and the Kerr nonlinearity. This is a generalized Landau–Ginzburg equation. The Master equation can be seen as a generalization of the nonlinear Schrödinger equation, which is often used for the study of soliton pulse phenomena. It is important to recognize what kinds of approximations are usually required:

• The changes of the pulse profile per resonator round trip must be small. This condition is well satisfied for many but not all mode-locked lasers. In particular, it is problematic for lasers generating few-cycle pulses (e.g. titanium–sapphire lasers), where the pulse parameters usually undergo large changes within a single resonator round trip, and the order of optical components matters.
• The considered terms should usually not be too complicated, as otherwise it may not be possible to find analytical solutions (see below). For example, it is common to omit terms for higher-order dispersion (which would introduce third- or higher-order temporal derivatives), to neglect Raman scattering, to assume weak saturation of absorbers, etc.

In not too complicated situations, analytical solutions of the Haus Master equation are known for the steady state (as reached after many resonator round trips). For example, this is the case for the actively mode-locked laser without dispersion and nonlinearities. The result is that the pulse shape is Gaussian, and the calculated pulse duration exactly agrees with that of the earlier Kuizenga–Siegman theory. There are also higher-order solutions described with Hermite–Gaussian functions, but these experience lower net gains per resonator round trip and are thus not observed.

In more difficult cases, approximate results can be obtained by using a simple ansatz for the function A(t), e.g. corresponding to sech²-shaped pulses (possibly with a chirp). This leads to equations which allow one e.g. to calculate pulse duration, chirp, and spectral bandwidth as functions of the laser parameters.

In still more complicated cases, analytical solutions usually require approximations, the validity conditions of which are often not well satisfied in realistic situations. In particular, the time-dependent loss caused by a slow saturable absorber is difficult to treat, as it depends on the optical intensity at earlier times and thus introduces an integral on the right-hand side of the Haus Master equation. Another issue is the stability of solutions; there are cases where the theoretical solution is dynamically unstable and can thus never be observed in practice.
For these reasons, numerical algorithms are often used to calculate the steady-state pulse profile. However, this approach removes the ability to obtain analytical equations as results, which can be more helpful in recognizing relations between parameters. There is then usually no advantage over using straightforward pulse propagation models, which are less dependent on various approximations (see above), and thus able to describe reliably a wider range of phenomena. Also, such models are not more complex to implement and validate.

Another use of the Master equation is to derive dynamic equations (coupled differential equations) for a limited number of pulse parameters, such as pulse energy, duration, chirp, and temporal position, using the moment method [4]. The emphasis is then not necessarily on the steady state, but on the evolution of pulse parameters, which can be used, e.g., to investigate noise properties.

In conclusion, the Haus Master equation can be considered a useful tool mainly for the study of simple situations, where analytical solutions can be obtained, and as the basis of some dynamic models, whereas simple pulse propagation models (treating different effects on the pulses sequentially) are usually more appropriate for more complex situations, particularly for passively mode-locked lasers.

[1] H. A. Haus et al., “Structures for additive pulse mode locking”, J. Opt. Soc. Am. B 8 (10), 2068 (1991)
[2] A. M. Dunlop et al., “Pulse shapes and stability in Kerr and Active Mode-Locking (KAML)”, Opt. Express 2 (5), 204 (1998) (extended equations, including the transverse spatial dimensions)
[3] H. A. Haus, “Mode-locking of lasers”, IEEE J. Sel. Top. Quantum Electron. 6 (6), 1173 (2000)
[4] N. G. Usechak and G. P. Agrawal, “Rate-equation approach for frequency-modulation mode locking using the moment method”, J. Opt. Soc. Am. B 22 (12), 2570 (2005)

See also: pulse propagation modeling, mode locking, ultrashort pulses, Kuizenga–Siegman theory
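As a rough numerical illustration of the round-trip iteration mentioned above, the following sketch applies the lumped effects sequentially, one round trip at a time (a pulse-propagation-style treatment rather than an analytical solution of the master equation). All parameter values are assumed and illustrative, and the dispersion and Kerr terms are omitted; the pulse relaxes toward the Gaussian-like steady state expected for active mode locking:

```python
import numpy as np

# Assumed, illustrative parameters (not taken from this article).
N, T = 1024, 200e-12                    # grid points, time window (s)
t = np.linspace(-T/2, T/2, N)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(N, dt)   # angular-frequency grid (rad/s)

g, loss = 0.10, 0.08                    # round-trip gain and linear loss
Dg = 1e-25                              # gain-filtering coefficient (s^2)
M, wm = 0.02, 2 * np.pi * 10e9          # modulation depth and frequency

A = np.exp(-(t / 30e-12) ** 2)          # arbitrary starting pulse
for _ in range(2000):                   # iterate resonator round trips
    # filtered gain, applied in the frequency domain
    A = np.fft.ifft(np.fft.fft(A) * np.exp(g - Dg * w**2))
    # linear loss plus modulator loss, expanded for small t
    A *= np.exp(-loss - 0.5 * M * wm**2 * t**2)

profile = np.abs(A)
fwhm = dt * np.count_nonzero(profile > profile.max() / 2)
print(f"steady-state pulse FWHM ~ {fwhm * 1e12:.1f} ps")  # Gaussian-like
```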
Superconductivity Research

Understanding quantum phase transitions, i.e. phase transitions at absolute zero, is the key to understanding high-temperature superconductors. Depending on the geometry of the underlying lattice structure, antiferromagnetically interacting spins can find themselves in a multiply degenerate ground state in which the orientation of certain electron spins does not change the energy of the ground state. In isolation, spins will naturally be in a superposition of two possible states. Consider an equilateral triangle, with a spin living on each vertex. Since spins can only be defined along one axis (in this case, the vertical axis), one of the three spins will not be able to align antiferromagnetically with another. These spins are thus frustrated, meaning they cannot achieve the lowest possible energy state (a short enumeration at the end of this section makes this concrete). Since nature's guiding principle is energy minimisation, a superposition of all ground states, each having an equal probability of existing, is created. This is analogous to the superposition of an electron's spin, but for a whole configuration of spins. Due to the dependence of each spin's state on the state of other spins, the superposition of the degenerate ground states is entangled. The entanglement depends on the geometry of the lattice, and thus there is a deep relationship between lattice geometry and entanglement. Such a spin structure forms a quantum spin liquid, whose main characteristic is its inherent finite entropy even at absolute zero.

Such frustrated quantum spin liquids can undergo a phase transition when parameters such as the externally applied pressure or magnetic field strength are varied. To see this, imagine a horizontally applied magnetic field, defined by its field strength g, acting on a quantum spin liquid whose spins are defined in the vertical axis. At zero g, the superposition of the degenerate ground states is symmetric under spin inversion symmetry and does not possess a magnetic moment, while each individual configuration of spins does. This would not be the case in a conventional lattice in which spins are not frustrated. As the field strength is increased, spins find it energetically more favourable to align with the field, and eventually all spins will point in the direction of the field, thus no longer being in an antiferromagnetic state. The frustration due to the individual spin interactions will be trumped by the strength of the field and will go to zero as the field strength goes to infinity. As the field forces the spins into the horizontal direction, all individual spins will be in a superposition of up and down, and thus the structure is again spin inversion symmetric. However, now the ground state is singly degenerate, while the overall configuration remains entangled. Thus the degeneracy of a quantum spin liquid's ground state evolves as the magnetic field strength is varied. The maximum degeneracy of the system occurs at a critical value of g which lies between 0 and infinity. Current theoretical calculations imply that a larger difference in energy between the spin configurations results in a broader region of minimum entropy around zero kelvin, which increases more slowly with temperature, therefore predicting a higher but less abrupt transition temperature to a low-entropy state. The calculated entropy actually oscillates around its minimum value very subtly as the temperature is varied.
As the total number of spins increases, however, this oscillation becomes ever more pronounced as the temperature is lowered to zero kelvin. This contrasts with the evolution of a non-frustrated system, in which spin inversion symmetry is restored only as the field strength is taken to infinity, implying that the initial antiferromagnetic state is not spin inversion symmetric (which it clearly is).

Quantum Superposition: In the quantum two-slit experiment, the electron must "decide" which slit to go through, but it has equal chances of going through both. Each electron thus passes through both slits, and its state is the linear superposition of the state corresponding to passage through the left slit and the state corresponding to passage through the right slit. In a similar way, frustrated electrons are in a state corresponding to a linear superposition of the up and down states. These electrons are perfectly anti-correlated, meaning that the state of one electron completely determines the state of the other(s). Such a collective state is non-local and is shared by both electrons.

By removing some electrons from an antiferromagnet, some spins no longer find themselves in a preferred direction, as the energy of the electron is equal in both configurations. Nevertheless, this depends on the orientation of a neighbouring spin, and when this spin finds itself in a similar situation, the spins pair up in a linear superposition and become entangled. When this process happens for all electrons in a system, the entangled pairs can move around and collectively acquire a state representing a superposition of all possible configurations. In this way each electron pair is no longer locally defined but has an equal probability of being everywhere in the system simultaneously. Such a state is described as a Bose-Einstein condensate. Entangled pairs of electrons can exchange their partners, and due to the non-locality of the state these exchanges need not occur between adjacent electrons but can occur with electrons at the opposite end of the system, thus leading to long-range entanglement. At the quantum critical point the long-range entanglement is believed to be maximised, and the critical temperature is maximised. Combined pairs of electrons result in a combined degree of freedom. Combining this with a similar degree of freedom results in another degree of freedom, and so on. In this way the depth of entanglement can be increased.

String theory allows calculation of what happens to particles under conditions of long-range quantum entanglement. When entangled electrons are separated by a black hole's event horizon they are still able to exchange information. This seems to imply that entanglement is a process that occurs in an extra dimension, surpassing any three-dimensional boundary and therefore being non-local. Long-range quantum entanglement enhances the probability of pair production, which then results in a higher critical temperature for superconductivity. If ER=EPR is considered valid, the reason for superconductivity is explained by the fact that entangled electrons can travel through an extra dimension, thus avoiding the atoms in their path and therefore never colliding with them. The path through this extra dimension is analogous to the path of a wormhole through spacetime. Entangled electrons can travel through this extra dimension and thereby change their spin. In this way electrons can travel without interacting with any atoms, leading to high-temperature superconductivity.
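To make the frustrated-triangle argument from the beginning of this section concrete, here is a minimal enumeration in plain Python (an antiferromagnetic Ising model is assumed here as the simplest stand-in for the spin physics described above):

```python
from itertools import product

# Antiferromagnetic Ising triangle: E = J * sum(s_i * s_j) over the three
# bonds, with J > 0 and spins s_i = +1 or -1; enumerate all 2^3 states.
J = 1.0
bonds = [(0, 1), (1, 2), (2, 0)]

energies = {
    spins: J * sum(spins[i] * spins[j] for i, j in bonds)
    for spins in product([+1, -1], repeat=3)
}
E0 = min(energies.values())
ground = [s for s, E in energies.items() if E == E0]
print(f"ground-state energy = {E0}, degeneracy = {len(ground)}")
# prints: ground-state energy = -1.0, degeneracy = 6
# One of the three bonds is always frustrated, so no configuration reaches
# the ideal -3J, and six of the eight states are tied for the minimum.
```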
Black Hole Atoms

Similar to the quantised description of ordinary atoms consisting of protons and neutrons, a new type of atom with a microscopic black hole as its nucleus could exist. In ordinary atoms electrons orbit the nucleus due to their electric attraction to the oppositely charged protons in the nucleus, while quantum mechanics tells us that these electrons are quantised to certain energy levels depending on their distance from the nucleus. The same principle can be applied to the gravitational force. The quantisation of gravity can be approximately described by applying the ideas of Bohr's atom to Newtonian gravity. Another way would be solving the Schrödinger equation by replacing the electromagnetic potential energy with the Newtonian gravitational potential energy and then taking the mass of the nucleus to be that of a black hole, as given by the Schwarzschild radius. The resulting equation for the nth orbital radius of a mass m around a black hole of radius R is

r(n) = 2ħ²n² / (c²m²R)

Hence, the mass's orbital radius is inversely proportional to its own mass squared and to the radius of the black hole. For large black holes, there is no mass small enough for the initial quantised radii to be observed, as they all lie within the event horizon. One could increase n, but, just like in ordinary atoms, this would result in a practically continuous energy spectrum and hide the discrete nature of gravity. The energy levels E(n) in terms of the black hole's mass M are given as follows:

E(n) = −G²M²m³ / (2ħ²n²)

Hence, the quantisation of gravity is not evident on large scales, where the effects of gravity are mostly considered. One can calculate the radius a black hole would need in order for all of these radii to be outside of the event horizon by setting the radius of the horizon equal to the radius of the first energy level of a mass m and solving for it. The result is

R∗ = √2 ħ / (cm)

Assuming the most natural and simplistic scenario, setting the mass m equal to the mass of an electron, the radius R∗ is equal to approximately 5.5*10^-13 meters, which is smaller than an atom yet bigger than a nucleus. This describes a scenario in which the ground state of the electrons is actually on the event horizon of the black hole. By considering a smaller black hole radius, the radius of the ground state is increased. For example, taking the black hole radius to be the size of a nucleus would result in an electron ground-state radius on the order of Ångströms, i.e. the size of an ordinary atom. The mass of such an atom would be of the order of 10^11 kg, which is about the mass of the entire human population. It is also approximately the mass of a primordial black hole with an evaporation time equal to the age of the universe, which raises the question of whether there is a connection between the two. Perhaps it would be possible to observe the effect of such black hole atoms evaporating. In principle it should be possible not only for electrons, but also for any other particle, to orbit the black hole. The black hole acts as a filtering mechanism which sorts each particle into its distinct orbit. Heavier particles lie closer to or within the horizon, while lighter particles orbit further outside and may leak out of the horizon; this depends on the size of the black hole. These black hole atoms fit the description of the observed dark matter in the universe due to their high mass and their weak interaction with their surroundings other than the gravitational one. Moreover, it should be possible to detect them.
Just like in ordinary atoms, when electrons jump from one energy level to the next they emit or absorb radiation with energy equal to the difference between the energy states. With electrons orbiting around the black holes, perhaps entire molecules could be composed of these black hole atoms.
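For the reader who wants numbers, here is a short evaluation of the formulas above using standard SI constants (the script and variable names are mine; the physical scenario is the speculative one described in this section):

```python
import math

# Standard SI constants.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
m_e = 9.1093837015e-31   # kg (electron mass)
eV = 1.602176634e-19     # J

# Critical horizon radius R* = sqrt(2)*hbar/(c*m): the black hole size for
# which the first gravitational "Bohr" orbit of a mass m sits on the horizon.
R_star = math.sqrt(2) * hbar / (c * m_e)
M_star = c**2 * R_star / (2 * G)        # corresponding Schwarzschild mass
print(f"R* = {R_star:.2e} m, M* = {M_star:.2e} kg")

# Quantised radii r(n) = 2*hbar^2*n^2/(c^2*m^2*R) and energy levels
# E(n) = -G^2*M^2*m^3/(2*hbar^2*n^2) for an electron around this hole.
for n in (1, 2, 3):
    r_n = 2 * hbar**2 * n**2 / (c**2 * m_e**2 * R_star)
    E_n = -G**2 * M_star**2 * m_e**3 / (2 * hbar**2 * n**2)
    print(f"n = {n}: r = {r_n:.2e} m, E = {E_n / eV:.2e} eV")
```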
Nonadiabatic Dynamics

A number of systems in spectroscopy, astrochemistry and combustion chemistry are influenced by spin-forbidden processes between electronic states of differing spin, coupled through the spin-orbit interaction. Though typically modeled using time-independent Landau-Zener theory, time-dependent molecular dynamics trajectory surface hopping methods can be employed. We revisit the spin-forbidden 3B1 to 1A1 transition for SiH2 through direct on-the-fly molecular dynamics simulations incorporating Tully's Fewest Switches trajectory surface hopping method for trajectories spanning 2 ps. For an improved description of the hopping, the time-uncertainty method is utilized, as well as the gradV (∇V) method for improved momentum adjustment upon hopping. The resulting dynamics illustrate a large distribution of associated hopping geometries and spin-orbit couplings, but their average lies near the values used in Landau-Zener analysis. A challenge in applying Landau-Zener theory to molecular systems lies in selecting an appropriate model to describe the system velocity at the MECP (minimum-energy crossing point), particularly for complex systems with a nonequilibrium energy distribution between the vibrational degrees of freedom. We believe that both computationally inexpensive Landau-Zener theory and direct nonadiabatic molecular dynamics will be useful in describing transitions between spin-orbit-coupled electronic states in complex molecular systems.

Bioinspired Catalysts

Molecular hydrogen reduction and oxidation currently require expensive platinum-group catalysts. Promising alternatives to these catalysts are structural models of the active sites of hydrogenases, which contain abundant and inexpensive first-row transition metals. To create catalytically effective structural models, it is essential to know the mechanisms of hydrogen reduction and oxidation at the active site. We investigate H2 binding to the active site of [NiFe]-hydrogenase through a triplet/singlet spin-forbidden pathway using DFT and MCQDPT2. Firstly, the H2 molecule prefers to bind to the Fe atom of the active site, in both the singlet and triplet states. However, H2 binding to the triplet state of the active site is more energetically favorable. We then demonstrate that the rotation of the terminal thiolate ligands around the Ni center induces a crossing between the lowest-energy singlet and triplet electronic states. At the crossing, nonadiabatic spin-forbidden transitions, mediated by spin-orbit coupling between the singlet and triplet states, can occur. We found the probability of these transitions by utilizing Landau-Zener theory in the 270-370 K temperature range. These nonadiabatic transitions could play an important role in hydrogen catalysis at the active site of [NiFe]-hydrogenase, and could explain the current inability of structural models with small ligands to bind molecular hydrogen.

Computational Chemistry

The diatomic alkali molecules have been proposed as possible candidates for applications in ultracold chemistry, quantum computing, and high-precision measurements of fundamental constants. These applications require very low temperatures (μK-nK range), which reduces the probability of transitions of the molecule to other quantum states and increases its average lifetime in a specific quantum state.
To estimate the vibrational state lifetimes of the ground and excited states of heteronuclear alkali dimers XY (X, Y = Li, Na, K, Rb, Cs) we solve the vibrational Schrödinger equation using the CCSDT potential energy and dipole moment curves. The dissociation energies are overestimated by only 14 cm-1 for LiNa and by no more than 114 cm-1 for the other molecules. The discrepancies between the experimental and calculated harmonic vibrational frequencies are less than 1.7 cm-1, and the discrepancies for the anharmonic correction are less than 0.1 cm-1. The transition dipole moments between all vibrational states, the Einstein coefficients, and the lifetimes of the vibrational states are calculated. For all studied alkali dimers the ground vibrational state has the largest lifetime. Therefore, for applications where lifetime is important, such as quantum computing, molecules should be in the ground state. The recent discoveries of complex organic molecules such as cyclopropenone and glycolaldehyde in interstellar space have renewed the interest in astrochemical reaction mechanisms. We investigate three previously proposed reaction mechanisms for cyclopropenone formation in the interstellar medium using ab initio quantum chemical methods. The nonadiabatic spin-forbidden reaction between atomic oxygen and cyclopropenylidene is characterized by a very small activation barrier and significant spin-orbit coupling between the lowest-energy singlet and triplet states. We calculate the Landau-Zener probability of transition between the triplet and singlet states, and use nonadiabatic transition state theory to estimate the reaction rate constant of this spin-forbidden reaction. The reactions between acetylene and carbon monoxide, and between molecular oxygen and cyclopropenylidene, are two spin-allowed cyclopropenone formation pathways also investigated in this work. Of the three studied reactions, the most probable mechanism of cyclopropenone formation in cold regions of interstellar space is the one between molecular oxygen and cyclopropenylidene, since it is found to be a barrier-free reaction.
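Since Landau-Zener theory recurs in all three of these abstracts, a minimal sketch of the single-passage hopping probability may help. The formula below is the standard Landau-Zener expression; the function name and the numerical values are hypothetical placeholders of mine, not data from the studies:

```python
import numpy as np

HBAR = 1.0545718e-34  # J s

def landau_zener_probability(h12, dF, v):
    """Single-passage Landau-Zener hopping probability at a crossing of
    two diabatic states.

    h12 : coupling at the crossing (J), e.g. a spin-orbit matrix element
    dF  : |F1 - F2|, difference of the diabatic slopes at the crossing (J/m)
    v   : nuclear velocity along the crossing coordinate (m/s)
    """
    return 1.0 - np.exp(-2.0 * np.pi * h12**2 / (HBAR * v * dF))

# Illustrative (hypothetical) numbers: a 50 cm^-1 spin-orbit coupling,
# diabatic slopes differing by about 1 eV/Angstrom, a modest velocity.
h12 = 50 * 1.98645e-23   # 50 cm^-1 converted to J
dF = 1.6e-9              # ~1 eV/Angstrom in J/m
v = 500.0                # m/s
print(f"P_LZ = {landau_zener_probability(h12, dF, v):.3f}")
```

In nonadiabatic transition state theory, this single-passage probability is then thermally averaged over the velocity distribution at the MECP, which is exactly where the model choice for the velocity mentioned above enters.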
Playing Quantum Physics Jeopardy with zero-energy eigenstates L. P. Gilbert Department of Physics, Davidson College, Davidson, North Carolina 28035, USA M. Belloni Department of Physics, Davidson College, Davidson, North Carolina 28035, USA M. A. Doncheski Department of Physics, The Pennsylvania State University, Mont Alto, Pennsylvania 17237, USA R. W. Robinett Department of Physics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA We describe an example of an exact, quantitative Jeopardy-type quantum mechanics problem. This problem type is based on the conditions in one-dimensional quantum systems that allow an energy eigenstate for the infinite square well to have zero curvature and zero energy when suitable Dirac delta functions are added. This condition and its solution are not often discussed in quantum mechanics texts and have interesting pedagogical consequences. I Introduction Students often work backward when problem solving in their introductory physics courses. This method typically entails using the information in the problem and, if available, the answer in the back of the book to work backward to determine which equation to use [larkin]. Students using this novice problem-solving approach have been shown to lack the conceptual foundation of more advanced problem solvers [chi]. In more advanced courses such as quantum mechanics, students still apply novice problem-solving approaches, and studies have shown that students’ conceptual understanding on all levels is lacking [rick_qmvi]. Van Heuvelen and Maloney [maloney] have described a problem type that encourages a more conceptual problem-solving strategy called a “working backward” or Jeopardy problem. These problems begin with the answer (an equation, a diagram or a graph, or a simulation) and ask students to work back toward the question, much like the game show Jeopardy. In Quantum Physics Jeopardy, students are given an energy eigenstate (the answer) and are asked to find the potential energy function (the question) that yields this eigenstate. This problem elucidates the connection between the form of the energy eigenstate and the potential energy function in one-dimensional quantum systems [taylor]. The technique is similar to inverse problems in quarkonium spectroscopy, where constraints on the binding potential are obtained from the bound-state energies [quarkonium]. Applications of inverse methods to other fields, such as medical imaging [saito, blackledge] (for example, CT scans) and geophysics [jacobsen], are even more familiar. This paper describes a new class of quantitative Quantum Physics Jeopardy problems that exploit a zero-curvature ($\psi'' = 0$) energy eigenstate that occurs when suitably chosen Dirac delta functions are added to an infinite well. These special configurations are often overlooked [footnote], but they provide additional exactly-solvable problems in quantum mechanics. II Infinite Well with Delta Functions In the infinite square well the $E = 0$ energy eigenstate is rejected by most textbooks on the grounds that the general form for energy eigenstates, $\psi(x) = A\sin(kx) + B\cos(kx)$ with $k = \sqrt{2mE}/\hbar$, does not yield a physical solution when $E = 0$. As pointed out in Ref. [isw_mistake], the $E = 0$ energy eigenstate of the Schrödinger equation is not a sinusoidal function. Instead, $E - V(x)$ is zero inside the well and the time-independent Schrödinger equation simplifies to $\psi''(x) = 0$, and yields $\psi(x) = Ax + B$, where $A$ and $B$ are constants.
In this context the $E = 0$ energy eigenstate cannot be normalized and still satisfy the boundary and continuity conditions for the infinite square well, and thus the state cannot be an allowed energy eigenstate. Although the authors of Ref. [isw_mistake] used the zero-curvature case to show that the infinite square well cannot have a zero-energy eigenstate, cases in which zero-curvature states are valid are seldom considered [gilbert, zc_paper], even though, for example, all of the energy eigenstates for the infinite square well are linear (namely zero) outside of the well. A zero-energy eigenstate can occur with the addition of a single attractive Dirac delta function $-g\,\delta(x)$, with $g > 0$, at the origin of a symmetric well with infinite walls at $x = \pm a$. We write the delta function in this way to work with positive constants and to make explicit the minus sign in $-g\,\delta(x)$. Similar scenarios have been considered [senn_numerical, rick_susy] and are related to experiments in which a potential energy “spike” inserted in a quantum well is modeled by a Dirac delta function [marzin, salis]. Our approach differs in that we tune $g$ to be $\hbar^2/ma$ so that an energy eigenstate of zero energy arises. The additional potential energy function splits the well into two regions: region I ($-a \le x < 0$) and region II ($0 < x \le a$). Assuming a zero-energy eigenstate exists, continuity requires that it be represented as $\psi_{\mathrm{I}}(x) = N(1 + x/a)$, $\psi_{\mathrm{II}}(x) = N(1 - x/a)$, and zero outside the well. We then ensure that the energy eigenstate has the proper discontinuity in its slope at the origin due to the Dirac delta function. In general, when the Dirac delta function $-g\,\delta(x - x_0)$ occurs at the position $x_0$, we must have that $$\psi'(x_0^+) - \psi'(x_0^-) = -\frac{2mg}{\hbar^2}\,\psi(x_0). \quad (1)$$ It is easy to show that the energy eigenstate and the chosen value of $g$ satisfy Eq. (1). Figure 1 shows the eigenstate corresponding to a symmetric infinite square well with $-(\hbar^2/ma)\,\delta(x)$ added, which is also a limiting case of an analysis that used supersymmetric quantum mechanics [rick_susy]. There are an indefinite number of combinations of Dirac delta functions that, when added to the infinite square well, result in zero-energy eigenstates. Figure 1: Two unnormalized zero-energy eigenstates corresponding to two different sets of Dirac delta function(s) added to a symmetric infinite square well. We can also proceed in the opposite direction as in Jeopardy: write any piecewise linear (single-valued) energy eigenstate that vanishes at $x = \pm a$ and does not vanish at a kink, and determine the Dirac delta function potential(s) that must be added to the infinite square well. Consider the M-shaped, zero-energy eigenstate in Fig. 1 and determine the added potential. A quantitative result is possible from direct measurement of the energy eigenstate slopes at the kinks and their positions. Because Eq. (1) is independent of an overall multiplicative factor in $\psi$, we can even begin with an unnormalized energy eigenstate. The answer follows from applying Eq. (1) at each kink. We have created a worksheet template for these Jeopardy exercises [worksheet, EPAPS]. Students can be given individual problems by replacing the figure in the worksheet with another drawing of a piecewise linear and single-valued energy eigenstate that vanishes at the infinite walls and does not vanish at a kink; the eigenstate can even be drawn with a ruler and graph paper. A positive side effect of these Jeopardy problems is that they further illustrate how energy eigenstates with obvious kinks can be valid states. This fact is not discussed in most introductory texts on quantum mechanics.
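To make the Jeopardy step concrete, here is a minimal sketch of reading off the delta function strengths from a drawn eigenstate (my own illustration, not the authors' worksheet; units with $\hbar = m = 1$ and a well on $[-1, 1]$ are assumed, and delta_strengths is a hypothetical helper name). It applies the slope-jump condition of Eq. (1) at each kink:

```python
import numpy as np

HBAR = 1.0
M = 1.0

def delta_strengths(xs, ys):
    """Given a piecewise-linear zero-energy eigenstate sampled at its
    endpoints and kinks (xs, ys), return the strength g of the term
    -g*delta(x - x_k) required at each interior kink, from Eq. (1):
        psi'(x_k+) - psi'(x_k-) = -(2 M g / HBAR^2) psi(x_k).
    Assumes psi(x_k) != 0 at every kink, as the text requires.
    """
    slopes = np.diff(ys) / np.diff(xs)
    out = []
    for k in range(1, len(xs) - 1):
        jump = slopes[k] - slopes[k - 1]
        out.append((xs[k], -HBAR**2 * jump / (2.0 * M * ys[k])))
    return out

# Triangle state of the single-well example: psi = N(1 - |x|) on [-1, 1].
print(delta_strengths([-1.0, 0.0, 1.0], [0.0, 1.0, 0.0]))
# -> [(0.0, 1.0)]  i.e. g = HBAR^2/(M a), an attractive delta at the origin
```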
Energy eigenstates must be smooth only if their corresponding potential energy function is well behaved. Only infinite walls (such as in the boundaries of the infinite square well) and Dirac delta functions behave badly enough to generate kinks in energy eigenstates. It can also be shown that these states exhibit kinetic/potential energy sharing, by calculating $\langle T \rangle$ and $\langle V \rangle$, which yields an exact cancellation so that $\langle E \rangle = \langle T \rangle + \langle V \rangle = 0$ for these states (as expected). This kinetic/potential analysis provides another illustration of the quantum-mechanical virial theorem [zc_paper]. III Conclusion Zero-energy eigenstates extend the standard treatment of the infinite square well and other piecewise-constant potential energy wells [lin_smit]. Although these states seem like an intuitively natural interpolation between the much more commonly discussed oscillatory and tunneling solutions, the unfamiliar mathematical form of the one-dimensional Schrödinger equation for situations where $E - V(x)$ is zero over an extended region of space catches many students by surprise [gilbert]. We would like to thank Gary White for useful conversations regarding this work. LPG and MB were supported in part by a Research Corporation Cottrell College Science Award (CC5470) and MB was also supported by the National Science Foundation (DUE-0126439 and DUE-0442581).
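The kinetic/potential cancellation invoked above is easy to verify symbolically for the single-well triangle state; the following sketch is my own check (not code from the paper), using $\psi(x) = N(1 - |x|/a)$ and $g = \hbar^2/ma$:

```python
import sympy as sp

x, a, hbar, m = sp.symbols('x a hbar m', positive=True)

# Normalized triangle state N*(1 - |x|/a) on [-a, a]; by symmetry we
# integrate over [0, a] and double.
N = sp.sqrt(sp.Rational(3, 2) / a)
psi = N * (1 - x / a)

# <T> = (hbar^2/2m) * integral |psi'|^2  (integration-by-parts form)
T = 2 * sp.integrate(hbar**2 / (2 * m) * sp.diff(psi, x)**2, (x, 0, a))

# <V> = -g |psi(0)|^2 for V(x) = -g*delta(x), with g = hbar^2/(m a)
g = hbar**2 / (m * a)
V = -g * N**2          # psi(0) = N

print(sp.simplify(T), sp.simplify(V), sp.simplify(T + V))
# -> 3*hbar**2/(2*a**2*m), -3*hbar**2/(2*a**2*m), 0
```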
The Schrödinger Equation [Figure: three regions in which the solution of the Schrödinger equation is obtained.] Energy quantization is discussed below. The potential energy, in general, is not the sum of the separate potential energies for each particle; it is a function of all the spatial positions of the particles. The equations for relativistic quantum fields can be obtained in other ways, such as starting from a Lagrangian density and using the Euler-Lagrange equations for fields, or using the representation theory of the Lorentz group, in which certain representations can be used to fix the equation for a free particle of given spin and mass. In classical mechanics, a particle has, at every moment, an exact position and an exact momentum. In general for interacting particles, the above decompositions are not possible. In particular, the ground state energy is positive when V(x) is everywhere positive. But it has a significant influence on the centre-of-mass motion. David Deutsch regarded this as the earliest known reference to a many-worlds interpretation of quantum mechanics, an interpretation generally credited to Hugh Everett III [11]. Therefore, at least in principle, it becomes a measurable quantity. In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position. This derivation is explained below. If there is no degeneracy they can only differ by a factor. In general, physical situations are not purely described by plane waves, so for generality the superposition principle is required; any wave can be made by superposition of sinusoidal plane waves. Jeffrey A. Barrett, meanwhile, took the more modest position that it indicates a “similarity in …”. The foundation of the equation is structured to be a linear differential equation based on classical energy conservation, and consistent with the De Broglie relations. The resulting partial differential equation is solved for the wave function, which contains information about the system. In the time-dependent equation, complex conjugate waves move in opposite directions. The energy and momentum operators are differential operators, while the potential energy function V is just a multiplicative factor. To solve the measurement problem, a mere explanation of why a wave function collapses to, e.g., a definite outcome is not enough. However, using the correspondence principle it is possible to show that, in the classical limit, the expectation value of H is indeed the classical energy. See also: periodic potential, measurement in quantum mechanics, the Heisenberg uncertainty principle, and interpretations of quantum mechanics. The symmetry of complex conjugation is called time-reversal symmetry. According to Penrose’s idea, when a quantum particle is measured, there is an interplay of this nonlinear collapse and environmental decoherence. Unfortunately the paper was rejected by the Physical Review, as recounted by Kamen. This would mean that even a completely collapsed quantum system can still be found at a distant location. The small uncertainty in momentum ensures that the particle remains well localized in position for a long time, so that expected position and momentum continue to closely track the classical trajectories. Wave equations in physics can normally be derived from other physical laws: the wave equation for mechanical vibrations on strings and in matter can be derived from Newton’s laws, where the wave function represents the displacement of matter, and electromagnetic waves from Maxwell’s equations, where the wave functions are electric and magnetic fields. It physically cannot be negative. Since the time-dependent phase factor is always the same, only the spatial part needs to be solved for in time-independent problems. The superposition property allows the particle to be in a quantum superposition of two or more quantum states at the same time. In general, one wishes to build relativistic wave equations from the relativistic energy-momentum relation. This case describes the standing wave solutions of the time-dependent equation, which are the states with definite energy, instead of a probability distribution of different energies. Although the first of these equations is consistent with the classical behavior, the second is not.
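As a small illustration of the energy quantization and stationary states discussed above, here is a minimal finite-difference diagonalization of the time-independent Schrödinger equation for a particle in a box (my own sketch, not part of the source text; units $\hbar = m = L = 1$):

```python
import numpy as np

# -(1/2) psi'' = E psi on [0, 1] with psi(0) = psi(1) = 0,
# discretized with the standard three-point second-difference stencil.
n = 1000
h = 1.0 / (n + 1)
main = np.full(n, 1.0 / h**2)          # diagonal of -(1/2) d^2/dx^2
off = np.full(n - 1, -0.5 / h**2)      # off-diagonals
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]
exact = np.array([(k * np.pi)**2 / 2 for k in range(1, 5)])
print(np.round(E, 4))       # quantized energies from the grid
print(np.round(exact, 4))   # exact values (k*pi)^2/2 = 4.9348, 19.7392, ...
```

The eigenvalues come out discrete because the boundary conditions only admit a countable set of standing-wave solutions, which is the sense in which the equation quantizes the energy.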
Original link: https://www.nytimes.com/2018/05/08/books/review/adam-becker-what-is-real.html The Unfinished Quest for the Meaning of Quantum Physics Are atoms real? Of course they are. Everybody believes in atoms, even people who don’t believe in evolution or climate change. If we didn’t have atoms, how could we have atomic bombs? But you can’t see an atom directly. And even though atoms were first conceived and named by ancient Greeks, it was not until the last century that they achieved the status of actual physical entities — real as apples, real as the moon. The first proof of atoms came from 26-year-old Albert Einstein in 1905, the same year he proposed his theory of special relativity. Before that, the atom served as an increasingly useful hypothetical construct. At the same time, Einstein defined a new entity: a particle of light, the “light quantum,” now called the photon. Until then, everyone considered light to be a kind of wave. It didn’t bother Einstein that no one could observe this new thing. “It is the theory which decides what we can observe,” he said. Which brings us to quantum theory. The physics of atoms and their ever-smaller constituents and cousins is, as Adam Becker reminds us more than once in his new book, “What Is Real?,” “the most successful theory in all of science.” Its predictions are stunningly accurate, and its power to grasp the unseen ultramicroscopic world has brought us modern marvels. But there is a problem: Quantum theory is, in a profound way, weird. It defies our common-sense intuition about what things are and what they can do. “Figuring out what quantum physics is saying about the world has been hard,” Becker says, and this understatement motivates his book, a thorough, illuminating exploration of the most consequential controversy raging in modern science. The debate over the nature of reality has been growing in intensity for more than a half-century; it generates conferences and symposiums and enough argumentation to fill entire journals. Before he died, Richard Feynman, who understood quantum theory as well as anyone, said, “I still get nervous with it…I cannot define the real problem, therefore I suspect there’s no real problem, but I’m not sure there’s no real problem.” The problem is not with using the theory — making calculations, applying it to engineering tasks — but in understanding what it means. What does it tell us about the world? From one point of view, quantum physics is just a set of formalisms, a useful tool kit. Want to make better lasers or transistors or television sets? The Schrödinger equation is your friend. The trouble starts only when you step back and ask whether the entities implied by the equation can really exist. Then you encounter problems that can be described in several familiar ways: Wave-particle duality. Everything there is — all matter and energy, all known forces — behaves sometimes like waves, smooth and continuous, and sometimes like particles, rat-a-tat-tat. Electricity flows through wires, like a fluid, or flies through a vacuum as a volley of individual electrons. Can it be both things at once? The uncertainty principle. Werner Heisenberg famously discovered that when you measure the position (let’s say) of an electron as precisely as you can, you find yourself more and more in the dark about its momentum. And vice versa. You can pin down one or the other but not both. The measurement problem. Most of quantum mechanics deals with probabilities rather than certainties.
A particle has a probability of appearing in a certain place. An unstable atom has a probability of decaying at a certain instant. But when a physicist goes into the laboratory and performs an experiment, there is a definite outcome. The act of measurement — observation, by someone or something — becomes an inextricable part of the theory. The strange implication is that the reality of the quantum world remains amorphous or indefinite until scientists start measuring. Schrödinger’s cat, as you may have heard, is in a terrifying limbo, neither alive nor dead, until someone opens the box to look. Indeed, Heisenberg said that quantum particles “are not as real; they form a world of potentialities or possibilities rather than one of things or facts.” This is disturbing to philosophers as well as physicists. It led Einstein to say in 1952, “The theory reminds me a little of the system of delusions of an exceedingly intelligent paranoiac.” So quantum physics — quite unlike any other realm of science — has acquired its own metaphysics, a shadow discipline tagging along like the tail of a comet. You can think of it as an “ideological superstructure” (Heisenberg’s phrase). This field is called quantum foundations, which is inadvertently ironic, because the point is that precisely where you would expect foundations you instead find quicksand. Competing approaches to quantum foundations are called “interpretations,” and nowadays there are many. The first and still possibly foremost of these is the so-called Copenhagen interpretation. “Copenhagen” is shorthand for Niels Bohr, whose famous institute there served as unofficial world headquarters for quantum theory beginning in the 1920s. In a way, the Copenhagen is an anti-interpretation. “It is wrong to think that the task of physics is to find out how nature is,” Bohr said. “Physics concerns what we can say about nature.” Nothing is definite in Bohr’s quantum world until someone observes it. Physics can help us order experience but should not be expected to provide a complete picture of reality. The popular four-word summary of the Copenhagen interpretation is: “Shut up and calculate!” For much of the 20th century, when quantum physicists were making giant leaps in solid-state and high-energy physics, few of them bothered much about foundations. But the philosophical difficulties were always there, troubling those who cared to worry about them. Becker sides with the worriers. He leads us through an impressive account of the rise of competing interpretations, grounding them in the human stories, which are naturally messy and full of contingencies. He makes a convincing case that it’s wrong to imagine the Copenhagen interpretation as a single official or even coherent statement. It is, he suggests, a “strange assemblage of claims.” An American physicist, David Bohm, devised a radical alternative at midcentury, visualizing “pilot waves” that guide every particle, an attempt to eliminate the wave-particle duality. For a long time, he was mainly lambasted or ignored, but variants of the Bohmian interpretation have supporters today. Other interpretations rely on “hidden variables” to account for quantities presumed to exist behind the curtain. Perhaps the most popular lately — certainly the most talked about — is the “many-worlds interpretation”: Every quantum event is a fork in the road, and one way to escape the difficulties is to imagine, mathematically speaking, that each fork creates a new universe. 
So in this view, Schrödinger’s cat is alive and well in one universe while in another she goes to her doom. And we, too, should imagine countless versions of ourselves. Everything that can happen does happen, in one universe or another. “The universe is constantly splitting into a stupendous number of branches,” said the theorist Bryce DeWitt, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself.” This is ridiculous, of course. “A heavy load of metaphysical baggage,” John Wheeler called it. How could we ever prove or disprove such a theory? But if you think the many-worlds idea is easily dismissed, plenty of physicists will beg to differ. They will tell you that it could explain, for example, why quantum computers (which admittedly don’t yet quite exist) could be so powerful: They would delegate the work to their alter egos in other universes. Is any of this real? At the risk of spoiling its suspense, I will tell you that this book does not propose a definite answer to its title question. You weren’t counting on one, were you? The story is far from finished. When scientists search for meaning in quantum physics, they may be straying into a no-man’s-land between philosophy and religion. But they can’t help themselves. They’re only human. “If you were to watch me by day, you would see me sitting at my desk solving Schrödinger’s equation…exactly like my colleagues,” says Sir Anthony Leggett, a Nobel Prize winner and pioneer in superfluidity. “But occasionally at night, when the full moon is bright, I do what in the physics community is the intellectual equivalent of turning into a werewolf: I question whether quantum mechanics is the complete and ultimate truth about the physical universe.”
Carmen Vierheilig and Milena Grifoni Institut für Theoretische Physik, Universität Regensburg We study the dissipative quantum Duffing oscillator in the deep quantum regime with two different approaches: The first is based on the exact Floquet states of the linear oscillator and the nonlinearity is treated perturbatively. It describes well the nonlinear oscillator dynamics away from resonance. The second, in contrast, is applicable at and in the vicinity of an $N$-photon resonance and it exploits quasi-degenerate perturbation theory for the nonlinear oscillator in Floquet space. It is perturbative both in driving and nonlinearity. A combination of both approaches makes it possible to cover the whole range of driving frequencies. As an example we discuss the dissipative dynamics of the Duffing oscillator near and at the one-photon resonance. Keywords: nonlinear quantum oscillator, Floquet theory. Journal: Chemical Physics. 1 Introduction Classical nonlinear systems show interesting phenomena like bistability, frequency doubling and nonlinear response, and cover a wide range of applicability and various physical realizations [Nayfeh, Jordan]. In recent years it has been possible to build nonlinear devices which can potentially reach the quantum regime. These are for example cavities incorporating a Josephson junction [Boaknin, Metcalfe], SQUIDs used as bifurcation amplifiers to improve qubit read-out [Lee, Picot, Siddiqi, Siddiqi2] or nanomechanical resonators [Almog, Alridge]. Recently, a novel class of devices combining SQUIDs and resonators has been demonstrated. For example, sensitive detection of the position of a micromechanical resonator embedded in a nonlinear, strongly damped DC-SQUID has been achieved [Etaki]. In the deep quantum regime, where bistability is no longer observed, there has been to date no experimental operation of nonlinear driven oscillators to our knowledge. From the theoretical side, semiclassical approaches have been used to describe the situation where the underlying classical bistability still plays a dominant role. A DC-SQUID embedded in a cavity allowing displacement detection and cooling was analyzed in [Nation]. Composed qubit-Josephson bifurcation amplifier systems have been considered in [SerbanDet, Serbanqubit]. Dynamical tunneling in a Duffing oscillator was accounted for in [SerbanMQDT] within a semiclassical WKB scheme, while in [Guo, Katz1, Katz2] a Wigner function analysis near the bifurcation point is put forward. The behaviour of the Duffing oscillator (DO) in the deep quantum regime has recently attracted a lot of interest. In particular Rigo et al. [Rigo] demonstrated, based on a quantum diffusion model, that in the steady state the quantum DO does not exhibit any bistability or hysteresis. It was also shown that the response of the Duffing oscillator displays antiresonant dips and resonant peaks [Fistul, Peano1, Peano2, Peano3] depending on the frequency of the driving field, originating from special degeneracies of the eigenenergy spectrum of the nonlinear oscillator [Fistul]. While the antiresonances persist in the presence of a weak Ohmic bath, for strong damping the nonlinear response turns to a resonant behaviour, namely the one of a linear oscillator at a shifted frequency [Peano2]. Finally, recently Nakano et al. [Nakano] looked at the composed qubit-DO dynamics during the read-out process.
In this work we investigate the deep quantum limit of the quantum Duffing oscillator and present two different approaches covering different parameter regimes. The first approach is based on the exact Floquet energies and states of the driven linear oscillator with the nonlinearity treated perturbatively. As there is no restriction on the driving amplitude, this scheme can also be applied to the regime where the driving amplitude is larger than the nonlinearity. The second approach treats both the driving and the nonlinearity perturbatively. It is applicable for driving frequencies which can resonantly excite two states of the nonlinear oscillator, requiring that the driving cannot overcome the nonlinearity. In general a combination of both approaches makes it possible to cover the whole range of driving frequencies. As an example we consider the dynamics of the Duffing oscillator near the one-photon resonance, where the oscillator dynamics is described analytically. As in [Fistul, Peano1, Peano2, Peano3] we obtain that for weak dissipation the amplitude of the oscillations displays an antiresonance rather than a resonance. We find a characteristic asymmetry of the antiresonance lineshape. In contrast to [Fistul, Peano1, Peano2, Peano3], our analytic results are obtained without applying a rotating wave approximation (RWA) to the Duffing oscillator. The paper is organized as follows: In section 2 we introduce the Hamiltonian of the non-dissipative Duffing oscillator and the two Floquet-based approximation schemes to treat it. The energy spectrum and eigenstates of the non-dissipative system are calculated with the two different schemes in section 3 and section 4, and both approaches are compared in section 5. Afterwards dissipative effects are included within a Born-Markov-Floquet master equation in section 6. Section 7 addresses the special case of the one-photon resonance including dissipative effects. In section 8 conclusions are drawn. 2 Quantum Duffing oscillator A quantum Duffing oscillator is described by the Hamiltonian $$H_{\mathrm{DO}}(t) = \frac{p^2}{2M} + \frac{M\Omega^2}{2}x^2 + \frac{\alpha}{4}x^4 + xF\cos(\omega_{\mathrm{ex}}t),$$ where $M$ and $\Omega$ are the mass and frequency of the Duffing oscillator, which is driven by a monochromatic field of amplitude $F$ and frequency $\omega_{\mathrm{ex}}$. For later convenience we introduce the oscillator length $x_0 = \sqrt{\hbar/M\Omega}$. In the following we will consider the case of hard nonlinearities, $\alpha > 0$, such that the undriven potential is monostable. To treat the quantum Duffing oscillator problem we observe that the Hamiltonian can be rewritten as $$H_{\mathrm{DO}}(t) = H_{\mathrm{LO}}(t) + \frac{\alpha}{4}x^4 \quad (2a) \qquad \text{or} \qquad H_{\mathrm{DO}}(t) = H_{\mathrm{NLO}} + xF\cos(\omega_{\mathrm{ex}}t), \quad (2b)$$ where $H_{\mathrm{LO}}(t)$ describes a driven linear oscillator, while $H_{\mathrm{NLO}}$ is the Hamiltonian of an undriven nonlinear oscillator. Due to the periodic driving, Floquet theory [Floquet, Shirley, Sambe, Grifoni304], reviewed in A, can be applied. In particular the Floquet theorem states that solutions of the time-dependent Schrödinger equation for $H_{\mathrm{DO}}(t)$ are of the form $|\psi(t)\rangle = e^{-i\varepsilon t/\hbar}|\phi(t)\rangle$, where $|\phi(t)\rangle$ is periodic with the period of the driving. The quasienergies $\varepsilon$ and Floquet states $|\phi(t)\rangle$, respectively, are eigenvalues and eigenfunctions of the Floquet Hamiltonian. As discussed in A, $|\phi(t)\rangle e^{in\omega_{\mathrm{ex}}t}$ yields a physically equivalent solution but with shifted quasienergy $\varepsilon + n\hbar\omega_{\mathrm{ex}}$. Eqs. (2a) and (2b) suggest two different approaches, shown in Figure 1, to solve the eigenvalue problem described by Eqs. (91) and (92). Figure 1: Different procedures to incorporate driving and nonlinearity. In App I the starting point is the exact Floquet states and eigenenergies of the driven linear oscillator $H_{\mathrm{LO}}(t)$; the nonlinearity is the perturbation. In App II the driving is a perturbation expressed in the basis of the Floquet states of the undriven nonlinear oscillator $H_{\mathrm{NLO}}$.
In the first one, called App I, the starting point is the exact Floquet states and eigenenergies of the driven linear oscillator $H_{\mathrm{LO}}(t)$, see Eq. (2a). The nonlinearity is treated as a perturbation. A similar problem was considered by Tittonen et al. [Tittonen]. This approach is convenient if the Floquet states of the time-dependent Hamiltonian are known. For the driven harmonic oscillator they have been derived by Husimi and Perelomov [Husimi, Popov] and are given in B. In the second approach, which we call App II, one considers as unperturbed system the undriven nonlinear oscillator (NLO) and the driving is the perturbation, see Eq. (2b). As we shall see, the different ways of treating the infinite-dimensional Floquet Hamiltonian result in crucial differences when evaluating observables of the Duffing oscillator. 3 Perturbation theory for a time-periodic Hamiltonian with time-independent perturbation The starting point of the perturbative treatment App I is the Floquet equation for the full Floquet Hamiltonian in the extended Hilbert space, see Eq. (91). Moreover, the Floquet states of the Floquet Hamiltonian of the driven linear oscillator, satisfying the eigenvalue equation (91), are known, see e.g. Eq. (93a) and Eq. (96). We look for expressions of the quasienergies and Floquet states to first order in the nonlinearity $\alpha$; hence we introduce the corresponding first-order corrections. Because the perturbation is time-independent, it is diagonal in the Hilbert space of time-periodic functions. Additionally, we introduce the Fourier coefficients of the perturbation's matrix elements. As in the case of conventional stationary perturbation theory, the perturbed states are written as a linear combination of the unperturbed states, each labeled by a couple of quantum numbers (an oscillator index and a Fourier index). Inserting the ansatz Eq. (6) in the Floquet equation (4), and using the fact that the unperturbed states and quasienergies solve the Floquet equation of the driven linear oscillator, the resulting equation reduces to Eq. (10). This equation allows one to determine the modification to the quasienergy and the actual form of the expansion coefficients. To calculate the quasienergy correction we multiply Eq. (10) from the left with the unperturbed Floquet state itself. To determine the expansion coefficients of the perturbed states we multiply Eq. (10) from the left with a different unperturbed Floquet state; moreover we exclude the case of degenerate quasienergies, for which degenerate perturbation theory should be applied. If we set the driving to zero, the quasienergy and the states reduce to the ones obtained by applying conventional stationary perturbation theory to the unforced system. 3.1 Application to the quantum Duffing oscillator We can now determine the actual form of the quasienergy spectrum and the corresponding expansion coefficients for the case of the quantum Duffing oscillator, using the quartic term $(\alpha/4)x^4$ as perturbation. The matrix elements defined in Eq. (7) are given in C. The quasienergies of the quantum Duffing oscillator, exact in all orders of the driving strength and up to first order in the nonlinearity, are given by Eq. (15). In the limit of no driving, Eqs. (8) and (15) reduce to the undriven result (Eq. (16)), such that the modifications due to the nonlinearity are exactly those obtained by conventional stationary perturbation theory [Carmen] on the basis of the eigenstates of the undriven harmonic oscillator. Expanding up to second order in the driving amplitude we obtain from Eq. (8) the corresponding result, expressed in terms of the Floquet states of the linear oscillator. 4 Perturbative approach for the one-photon resonance When the nonlinearity becomes a relevant perturbation to the equidistant spectrum of the linear oscillator, it becomes preferable to use the second approximation scheme, App II, based on the decomposition Eq. (2b).
In this case it is convenient to express the Floquet Hamiltonian in the composite Hilbert space spanned by the vectors built from the eigenstates of the nonlinear oscillator given in Eq. (16) and the Fourier modes. Hence, in this basis the Floquet Hamiltonian of the nonlinear oscillator, see Eq. (19) below at vanishing driving amplitude, is diagonal. In contrast, the perturbation $xF\cos(\omega_{\mathrm{ex}}t)$ is time-dependent and thus non-diagonal also in this Hilbert space. From the Fourier coefficients of the position matrix elements, together with the energies of the nonlinear oscillator in Eq. (16a), the structure of Eq. (19) follows. From Eq. (19) it is thus apparent that two eigenstates of the Floquet Hamiltonian become degenerate when their quasienergies coincide, i.e. for a driving frequency satisfying $N\hbar\omega_{\mathrm{ex}} = E_{n+N} - E_n$; one then speaks of an $N$-photon resonance. From Eq. (16a) for the energies it follows that these resonant frequencies acquire a shift proportional to the nonlinearity. In the following we restrict ourselves to the one-photon resonance $N = 1$, i.e., the quasienergies of two neighboring levels are degenerate at resonance. Moreover, due to the arbitrariness in the choice of the Brillouin zone index, we fix it in the following to the zeroth Brillouin zone. For our perturbative treatment we further require that the nonlinearity is large enough that, if the driving is resonant with the chosen doublet, the remaining quasi-energy levels are off resonance and not involved in the doublet spanned by the two degenerate levels. Having this in mind, we have to restrict ourselves to a certain range of possible driving frequencies, namely to the resonance region, such that the chosen doublet remains degenerate or almost degenerate. This results from the fact that away from exact resonance the closest-lying levels are a finite quasienergy distance away. Because of the manifold (doublet) structure of the quasi-energy spectrum, we apply in the following Van Vleck perturbation theory [Shavitt1980, Cohen1992] and treat the driving as a small perturbation. Consequently, a consistent treatment in App II requires that either second-order driving contributions are neglected if we consider the nonlinearity only up to first order, or that both driving and nonlinearity are treated up to second order. As the second order in both parameters is very involved, we restrict to the first order in the nonlinearity and neglect quadratic contributions in the driving strength, as long as their reliability cannot be verified within a different approach, i.e. App I. Within Van Vleck perturbation theory we construct an effective Floquet Hamiltonian having the same eigenvalues as the original Hamiltonian and not containing matrix elements connecting states belonging to different manifolds. Therefore it is block-diagonal with all quasi-degenerate energy states in one common block. To determine the transformation and the effective Hamiltonian we write both as a power series in the driving. In D the general formulas for the energies and the states up to second order are provided [Shavitt1980, Cohen1992, Certain, Kirtman]. The zeroth-order energies are the unperturbed quasienergies of the doublet, and the corresponding (quasi)-degenerate Floquet states are the associated unperturbed Floquet states. The quasi-degenerate block of the effective Hamiltonian in this basis up to second order in the driving strength acquires the form of Eq. (24). Notice that the unperturbed quasienergies are correct up to first order in the nonlinearity. For consistency the effective Hamiltonian also has to be treated up to first order in the nonlinearity only. As shown by Eq. (4) below, it is essential to determine the eigenenergies of the effective Hamiltonian up to second order in the driving. They are also the eigenenergies of the full Floquet Hamiltonian and are given by Eq. (4). The convention is chosen such that the quasienergy branches connect continuously to the unperturbed ones away from resonance, whereas they exchange character at resonance. Because the first-order correction in the driving enters Eq.
(4) quadratically, a calculation of the quasienergies up to first order in the driving merely yields (at exact resonance) the zeroth-order results. Consequently, to be consistent one has to take into account also the second-order corrections to the energies. The eigenstates of the block Eq. (24) are determined accordingly. In conventional Van Vleck perturbation theory the eigenstates of the full Floquet Hamiltonian are obtained by applying a back transformation. Expanding the exponential of the transformation up to first order we obtain the eigenstates of Eq. (32). For reasons given in section 5 and the chosen parameter regime of App II, we do not determine the second-order correction for the states coming from the second-order contribution to the transformation. In the second line of Eq. (32) we used the fact that we can express the transformation by introducing the reduced resolvent, allowing a nice connection to conventional degenerate perturbation theory as shown in D. From Eq. (32) it follows that the effect of the transformation is to yield a contribution from states outside the manifold. Notice that in order to obtain the states to first order in the driving, the trigonometric functions involved should be expanded in powers of the driving strength. We conclude this section by mentioning that eigenenergies and eigenstates of the Duffing oscillator have been calculated near and at resonance also by Peano et al. [Peano2]. However, in [Peano2] the nonlinear undriven Hamiltonian is approximated by a function of the occupation number operator of the undriven linear oscillator. This approximated Hamiltonian is diagonal in the linear oscillator basis and yields the result Eq. (16a) for the energies. However, further corrections of order $\alpha$ contained in the eigenstates (16) are neglected. The results of [Peano2] at finite driving can be retained from Eqs. (4) and (33) by treating the driving up to first order and dropping those corrections. 5 Comparison of the outcomes of the two approaches The approximation scheme in Sec. 3, App I, is valid when the quasienergy spectrum of the linear oscillator is non-degenerate, i.e., away from an $N$-photon resonance. In contrast, the perturbative approach of Sec. 4, denoted as App II, works at best near an $N$-photon resonance in the quasienergy spectrum of the undriven nonlinear oscillator. Thus a comparison of the outcomes of the two approaches is possible in the frequency regime near resonance. Additionally, as the Van Vleck-based approach is perturbative in the driving, a comparison requires an expansion of the results from App I in the driving strength. This section is organized as follows: first the energies and then the matrix elements of the position operator are compared. 5.1 Comparison of the quasienergies We start with the off-resonant case and expand the result in Eq. (4) up to second order in the driving amplitude. Expanding further, for consistency, the eigenvalues up to first order in the nonlinearity, we recover exactly the results obtained from App I.
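To make the undriven side of App II concrete, the following sketch (my own illustration, not code from the paper; units $\hbar = M = \Omega = 1$ are assumed, so $x_0 = 1$) diagonalizes the nonlinear oscillator in a truncated Fock basis and compares the lowest levels with the standard first-order perturbative energies that enter the resonance condition:

```python
import numpy as np

dim = 120
alpha = 0.02                        # weak hard nonlinearity, alpha > 0

n = np.arange(dim)
a = np.diag(np.sqrt(n[1:]), 1)      # annihilation operator in the Fock basis
xmat = (a + a.T) / np.sqrt(2.0)     # x = x0 (a + a^dag)/sqrt(2), x0 = 1

# H_NLO = hbar*Omega*(a^dag a + 1/2) + (alpha/4) x^4
H = np.diag(n + 0.5) + (alpha / 4.0) * np.linalg.matrix_power(xmat, 4)
E = np.linalg.eigvalsh(H)

# First-order perturbation theory: <n|x^4|n> = (3/4)(2n^2 + 2n + 1)
for k in range(4):
    E_pt = k + 0.5 + (alpha / 4.0) * 0.75 * (2 * k**2 + 2 * k + 1)
    print(k, round(E[k], 5), round(E_pt, 5))
```

The driven problem builds on the same truncation: one propagates the system over one driving period and reads the quasienergies off the eigenphases of the one-period propagator.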
The three-dimensional Laplacian can be defined as $$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}.$$ Expressed in spherical coordinates, it does not have such a nice form. But I could define a different operator (let's call it a "Laspherian") which would simply be the following: $$\bigcirc^2=\frac{\partial^2}{\partial \rho^2}+\frac{\partial^2}{\partial \theta^2}+\frac{\partial^2}{\partial \phi^2}.$$ This looks nice in spherical coordinates, but if I tried to express the Laspherian in Cartesian coordinates, it would be messier. Mathematically, both operators seem perfectly valid to me. But there are so many equations in physics that use the Laplacian, yet none that use the Laspherian. So why does nature like Cartesian coordinates so much better? Or has my understanding of this gone totally wrong? • your laspherian is not dimensionally consistent – wcc Apr 26 '19 at 15:25 • That's true but: the Laplacian wouldn't be dimensionally consistent either except we happen to have given x, y, and z all the same units. We could equally well give the same units to $\rho$, $\theta$, and $\phi$. I think @knzhou's answer of rotational symmetry justifies why, at least in our universe, we only do the former. I've never made that connection before, though! – Sam Jaques Apr 26 '19 at 15:35 • You can't give the same units to distance and angle. – user2357112 supports Monica Apr 26 '19 at 17:43 • @SamJaques Your original question is good, but the above comment comes off as you being stubborn. You are asking what is more confusing about a convention where angles and distance have the same units than a system where they have different units? Come on, man. – user1717828 Apr 26 '19 at 17:48 • "Mathematically, both operators seem perfectly valid to me." Mathematically, it's perfectly valid for gravity to disappear every Tuesday or for the electric force to drop off linearly with distance. Most things that are mathematically valid are not the way the universe works. – Owen Apr 27 '19 at 23:57 Nature appears to be rotationally symmetric, favoring no particular direction. The Laplacian is the only translationally-invariant second-order differential operator obeying this property. Your "Laspherian" instead depends on the choice of polar axis used to define the spherical coordinates, as well as the choice of origin. Now, at first glance the Laplacian seems to depend on the choice of $x$, $y$, and $z$ axes, but it actually doesn't. To see this, consider switching to a different set of axes, with associated coordinates $x'$, $y'$, and $z'$.
If they are related by $$\mathbf{x} = R \mathbf{x}'$$ where $R$ is a rotation matrix, then the derivative with respect to $\mathbf{x}'$ is, by the chain rule, $$\frac{\partial}{\partial \mathbf{x}'} = \frac{\partial \mathbf{x}}{\partial \mathbf{x}'} \frac{\partial}{\partial \mathbf{x}} = R \frac{\partial}{\partial \mathbf{x}}.$$ The Laplacian in the primed coordinates is $$\nabla'^2 = \left( \frac{\partial}{\partial \mathbf{x}'} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}'} \right) = \left(R \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left(R \frac{\partial}{\partial \mathbf{x}} \right) = \frac{\partial}{\partial \mathbf{x}} \cdot (R^T R) \frac{\partial}{\partial \mathbf{x}} = \left( \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}} \right)$$ since $R^T R = I$ for rotation matrices, and hence is equal to the Laplacian in the original Cartesian coordinates. To make the rotational symmetry more manifest, you could alternatively define the Laplacian of a function $f$ in terms of the deviation of that function $f$ from the average value of $f$ on a small sphere centered around each point. That is, the Laplacian measures concavity in a rotationally invariant way. This is derived in an elegant coordinate-free manner here. The Laplacian looks nice in Cartesian coordinates because the coordinate axes are straight and orthogonal, and hence measure volumes straightforwardly: the volume element is $dV = dx dy dz$ without any extra factors. This can be seen from the general expression for the Laplacian, $$\nabla^2 f = \frac{1}{\sqrt{g}} \partial_i\left(\sqrt{g}\, \partial^i f\right)$$ where $g$ is the determinant of the metric tensor. The Laplacian only takes the simple form $\partial_i \partial^i f$ when $g$ is constant. Given all this, you might still wonder why the Laplacian is so common. It's simply because there are so few ways to write down partial differential equations that are low-order in time derivatives (required by Newton's second law, or at a deeper level, because Lagrangian mechanics is otherwise pathological), low-order in spatial derivatives, linear, translationally invariant, time invariant, and rotationally symmetric. There are essentially only five possibilities: the heat/diffusion, wave, Laplace, Schrodinger, and Klein-Gordon equations, and all of them involve the Laplacian. The paucity of options leads one to imagine an "underlying unity" of nature, which Feynman explains in similar terms: Is it possible that this is the clue? That the thing which is common to all the phenomena is the space, the framework into which the physics is put? As long as things are reasonably smooth in space, then the important things that will be involved will be the rates of change of quantities with position in space. That is why we always get an equation with a gradient. The derivatives must appear in the form of a gradient or a divergence; because the laws of physics are independent of direction, they must be expressible in vector form. The equations of electrostatics are the simplest vector equations that one can get which involve only the spatial derivatives of quantities. Any other simple problem—or simplification of a complicated problem—must look like electrostatics. What is common to all our problems is that they involve space and that we have imitated what is actually a complicated phenomenon by a simple differential equation. 
At a deeper level, the reason for the linearity and the low-order spatial derivatives is that in both cases, higher-order terms will generically become less important at long distances. This reasoning is radically generalized by the Wilsonian renormalization group, one of the most important tools in physics today. Using it, one can show that even rotational symmetry can emerge from a non-rotationally symmetric underlying space, such as a crystal lattice. One can even use it to argue the uniqueness of entire theories, as done by Feynman for electromagnetism. • In other words, the Cartesian form of the Laplacian is nice because the Cartesian metric tensor is nice. – probably_someone Apr 26 '19 at 15:42 • I think it's also probably valid to talk about the structure of spacetime; it is Lorentzian and in local inertial frames it always looks like Minkowski space. So if we were to ignore the time coordinates and just consider the spatial components of spacetime then the structure always possesses Riemann geometry and appears Euclidean in a local inertial frame. Cartesian coordinates are then the most natural way to simply describe Euclidean geometry, which is why the Laplacian appears the way it does. Nature favours the Laplacian because space appears Euclidean in local inertial frames. – Ollie113 Apr 26 '19 at 15:53 • Are you drawing a distinction between the heat/diffusion and Schrödinger equations because the latter contains terms depending on the fields themselves, rather than just their derivatives? (And similarly for "wave" vs. "Klein-Gordon"?) Or is there another reason that you're differentiating between cases that have the same differential operators in them? – Michael Seifert Apr 28 '19 at 19:24 • The third block-set equation makes explicit use of the notion that an inner product is taken between a space and its dual, but the notation associated with that idea appears halfway through as if out of nowhere. It might be better to include the ${}^T$ in the first two dot products as well. – dmckee --- ex-moderator kitten Apr 28 '19 at 19:46 • yes, please explain where that $^T$ suddenly comes from. Just give us the general public some names we could search for. – Will Ness Apr 29 '19 at 5:36 This is a question that haunted me for years, so I'll share with you my view about the Laplace equation, which is the most elemental equation you can write with the Laplacian. If you force the Laplacian of some quantity to 0, you are writing a differential equation that says "let's take the average value of the surroundings".
It's easier to see in Cartesian coordinates: If you approximate the partial derivatives by $$ \frac{\partial f}{\partial x }(x) \approx \frac{f(x + \frac{\Delta x}{2}) - f(x-\frac{\Delta x}{2})}{\Delta x} $$ $$ \frac{\partial^2 f}{\partial x^2 }(x) \approx \frac{ \frac{\partial f}{\partial x } \left( x+ \frac{\Delta x}{2} \right) - \frac{\partial f}{\partial x } \left( x - \frac{\Delta x}{2} \right) } { \Delta x} = \frac{ f(x + \Delta x) - 2 \cdot f(x) + f(x - \Delta x) } { \Delta x ^2 } $$ for simplicity let's take $\Delta x = \Delta y = \delta$, then the Laplace equation $$\nabla ^2 u =0 $$ becomes: $$ \nabla ^2 u (x, y) \approx \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) } { \delta ^2 } + \frac{ u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$ $$ \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) + u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$ from which you can solve for $u(x, y)$ to obtain $$ u(x, y) = \frac{ u(x + \delta, y) + u(x - \delta, y) + u(x, y+ \delta)+ u(x, y - \delta) } { 4 } $$ That can be read as: "The function/field/force/etc. at a point takes the average value of the function/field/force/etc. evaluated at either side of that point along each coordinate axis." [Figure: the value at each grid point equals the average of its four neighbours.] Of course this only works for very small $\delta$ for the relevant sizes of the problem at hand, but I think it gives good intuition. I think what this tells us about nature is that at first sight and at a local scale, everything is an average. But this may also tell us about how we humans model nature, our first model always being "take the average value", and perhaps later delving into more intricate or detailed models. • Out of curiosity, is that (very nice) figure a scan of a hand sketch, or do you have a software tool that supports such nice work? – dmckee --- ex-moderator kitten Apr 28 '19 at 19:43 • Your nice idea that the potential u(x,y) is the average of it's surroundings is exactly the way a spreadsheet (like Excel) is used to solve the Poisson equation for electrostatic problems that are 2-dimensional like a long metal pipe perpendicular to the spreadsheet. Each cell is programmed equal to the average of it's surrounding 4 cells. Fixed numbers (=voltage) are then put into any boundary or interior cells that are held at fixed potentials. The spreadsheet is then iterated until the numbers stop changing at the accuracy you are interested in. – Gary Godfrey Apr 28 '19 at 20:59 • @dmckee thank you for the compliment! I wish it was a software tool but it's my hand. Graphing software draws very nice rendered 3d graphics but I have yet to find one that draws in a more organic way. If you know some that does please recommend! – fredwhileshavin Apr 29 '19 at 1:14 • I've been experimenting with a Wacom tablet from time to time. But I cheaped out and bought the USD200 one instead of the USD1000 one that is also a high-resolution display. And the result is that I'm having to do a lot of art-school style exercises again to learn to draw on one surface while looking at another and in the mean time I'm just not able to do some of the more sophisticated things I would like to do. But the pressure sensitivity is very nice. If you have the funds the pro version might be a better investment.
– dmckee --- ex-moderator kitten Apr 29 '19 at 3:45 • The numerical technique @GaryGodfrey mentioned is an example of a relaxation method. You can learn more about it from Per Brinch Hansen's report on "Numerical Solution of Laplace's Equation" (surface.syr.edu/eecs_techreports/168), and from many other places too. – Vectornaut Apr 29 '19 at 15:36 For me as a mathematician, the reason why Laplacians (yes, there is a plethora of notions of Laplacians) are ubiquitous in physics is not any symmetry of space. Laplacians also appear naturally when we discuss physical field theories on geometries other than Euclidean space. I would say, the importance of Laplacians is due to the following reasons: (i) the potential energy of many physical systems can be modeled (up to errors of third order) by the Dirichlet energy $E(u)$ of a function $u$ that describes the state of the system. (ii) critical points of $E$, that is functions $u$ with $DE(u) = 0$, correspond to static solutions and (iii) the Laplacian is essentially the $L^2$-gradient of the Dirichlet energy. To make the last statement precise, let $(M,g)$ be a compact Riemannian manifold with volume density $\mathrm{vol}$. As an example, you may think of $M \subset \mathbb{R}^3$ being a bounded domain (with sufficiently smooth boundary) and of $\mathrm{vol}$ as the standard Euclidean way of integration. Important: The domain is allowed to be nonsymmetric. Then the Dirichlet energy of a (sufficiently differentiable) function $u \colon M \to \mathbb{R}$ is given by $$E(u) = \frac{1}{2}\int_M \langle \mathrm{grad} (u), \mathrm{grad} (u)\rangle \, \mathrm{vol}.$$ Let $v \colon M \to \mathbb{R}$ be a further (sufficiently differentiable) function. Then the derivative of $E$ in direction of $v$ is given by $$DE(u)\,v = \int_M \langle \mathrm{grad}(u), \mathrm{grad}(v) \rangle \, \mathrm{vol}.$$ Integration by parts leads to $$\begin{aligned}DE(u)\,v &= \int_{\partial M} \langle \mathrm{grad}(u), N\rangle \, v \, \mathrm{vol}_{\partial M}- \int_M \langle \mathrm{div} (\mathrm{grad}(u)), v \rangle \, \mathrm{vol} \\ &= \int_{\partial M} \langle \mathrm{grad}(u), N \rangle \, v \, \mathrm{vol}_{\partial M}- \int_M g( \Delta u, v ) \, \mathrm{vol}, \end{aligned}$$ where $N$ denotes the unit outward normal of $M$. Usually one has to take certain boundary conditions on $u$ into account. The so-called Dirichlet boundary conditions are easiest to discuss. Suppose we want to minimize $E(u)$ subject to $u|_{\partial M} = u_0$. Then any allowed variation (a so-called infinitesimal displacement) $v$ of $u$ has to satisfy $v_{\partial M} = 0$. That means if $u$ is a minimizer of our optimization problem, then it has to satisfy $$ 0 = DE(u) \, v = - \int_M g( \Delta u, v ) \, \mathrm{vol} \quad \text{for all smooth $v \colon M \to \mathbb{R}$ with $v_{\partial M} = 0$.}$$ By the fundamental lemma of calculus of variations, this leads to the Poisson equation $$ \left\{\begin{array}{rcll} - \Delta u &= &0, &\text{in the interior of $M$,}\\ u_{\partial M} &= &u_0. \end{array}\right.$$ Notice that this did not require the choice of any coordinates, making these entities and computations covariant in the Einsteinian sense. This argumentation can also be generalized to more general (vector-valued, tensor-valued, spinor-valued, or whatever-you-like-valued) fields $u$.
Actually, this can also be generalized to Lorentzian manifolds $(M,g)$ (where the metric $g$ has signature $(\pm , \mp,\dotsc, \mp)$); then $E(u)$ coincides with the action of the system, critical points of $E$ correspond to dynamic solutions, and the resulting Laplacian of $g$ coincides with the wave operator (or d'Alembert operator) $\square$.

Comments:
• Bit late, but I think this has knzhou's answer hidden in it: how is the inner product of gradients defined? You're taking the usual inner product on $\mathbb{R}^3$, right? So I can be pedantic and ask: why not take a different inner product? Rotation and translation invariance seems to still be the answer. – Sam Jaques, May 4
• Well, if you weaken "rotation-invariance" to "isotropy" (rotation-invariance per tangent space) and abandon the translation invariance, I am with you. My point is that a general (pseudo-)Riemannian manifold need not have any global isometries. But the Laplacian/d'Alembert operator is still well-defined. – Henrik Schumacher, May 4

The expression you've given for the Laplacian, $$ \nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}, $$ is a valid way to express it, but it is not a particularly useful definition for that object. Instead, a much more useful way to see the Laplacian is to define it as $$ \nabla^2 f = \nabla \cdot(\nabla f), $$ i.e., as the divergence of the gradient, where:
• The gradient of a scalar function $f$ is the vector $\nabla f$ which points in the direction of fastest ascent, and whose magnitude is the rate of growth of $f$ in that direction; this vector can be cleanly characterized by requiring that if $\boldsymbol{\gamma}:\mathbb R \to E^3$ is a curve in Euclidean space $E^3$, the rate of change of $f$ along $\boldsymbol\gamma$ be given by $$ \frac{\mathrm d}{\mathrm dt}f(\boldsymbol{\gamma}(t)) = \frac{\mathrm d\boldsymbol{\gamma}}{\mathrm dt} \cdot \nabla f(\boldsymbol{\gamma}(t)). $$
• The divergence of a vector field $\mathbf A$ is the scalar $\nabla \cdot \mathbf A$ which characterizes how much $\mathbf A$ 'flows out of' an infinitesimal volume around the point in question. More explicitly, the divergence at a point $\mathbf r$ is defined as the normalized flux out of a ball $B_\epsilon(\mathbf r)$ of radius $\epsilon$ centered at $\mathbf r$, in the limit where $\epsilon \to 0^+$, i.e. as $$ \nabla \cdot \mathbf A(\mathbf r) = \lim_{\epsilon\to0^+} \frac{1}{\mathrm{vol}(B_\epsilon(\mathbf r))} \iint_{\partial B_\epsilon(\mathbf r)} \mathbf A \cdot \mathrm d \mathbf S. $$
Note that both of these definitions are completely independent of the coordinate system in use, which also means that they are invariant under translations and under rotations. It just so happens that $\nabla^2$ coincides with $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$, but that is a happy coincidence: the Laplacian occurs naturally in multiple places because of its translational and rotational invariance, and that then implies that the form $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$ appears frequently. But that frequency is inherited from the properties of the coordinate-free definition.
Comments:
• It makes sense to me why a gradient defined in that way would be simpler for Cartesian coordinates, since they also form a basis in the strict sense of a vector space, which spherical coordinates don't. In the definition you gave, normalization/units are sneaking in, I think: the dot product implies the units must be the same to be added together. Which is weird, because the left-hand side of the definition, $\frac{d}{dt}f(\gamma(t))$, doesn't seem to use units at all. But the derivative of $f$ can't be defined without a metric on $E^3$, and the metric sneaks in the necessary normalization. – Sam Jaques, Apr 29 '19
• An attempted summary of your answer: the Laplacian looks nice in Cartesian coordinates because they play nice with the $L^2$ norm, and we want that because real-life distance uses the $L^2$ norm. – Sam Jaques, Apr 29 '19
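As a numerical illustration of the rotation invariance emphasized in this answer, one can check that the finite-difference Laplacian of a scalar field is unchanged when the coordinates are rotated. This is a sketch only; the test function, rotation angle, sample point, and step size are arbitrary choices:

```python
import numpy as np

def laplacian(f, x, y, h=1e-3):
    """Five-point-stencil Laplacian of f at (x, y)."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

f = lambda x, y: np.exp(-(x**2 + y**2)) * np.sin(3 * x)

theta = 0.7                                        # arbitrary rotation angle
c, s = np.cos(theta), np.sin(theta)
g = lambda x, y: f(c * x - s * y, s * x + c * y)   # f in rotated coordinates

# The Laplacian of the rotated field at the rotated point equals the
# Laplacian of the original field at the original point.
x0, y0 = 0.3, -0.5
print(laplacian(f, x0, y0))
print(laplacian(g, c * x0 + s * y0, -s * x0 + c * y0))  # same value
```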
mishranant, a CS undergraduate and intern at Arista Networks

In a Parallel Universe

Humorous memes, science fiction, religion, philosophy, psychology, astronomy, fantasy: what do they all have in common? I guess you read the title of this article, so you already know the answer. In a parallel universe, you could have whatever you can't have here and do stuff you can't do here. An alternate timeline of your life where you chose a different college, had different parents or probably were born in another country. It is fascinating to think that there may be a world where the two deadly world wars did not happen. But is it just a wild fantasy? Or is there more to it? What does Physics say about it?

Let us begin with the past. In a 1952 lecture, Erwin Schrödinger (the great physicist who gave us the Schrödinger equation, which is the 'F = ma' of super-tiny aka Quantum systems) famously said that when his Nobel prize-winning equations seemed to describe different histories, they were '…not alternatives but all really happen[ing] simultaneously'. That was probably the beginning of the idea of parallel universes or multiverses. (To be fair, Isaac Newton wrote some stuff back in the 1700s referring to the idea of a Multiverse, but that was more philosophical than mathematically concrete.)

CLARIFICATION ALERT: The word 'Universe' is really a synonym for 'the whole', so that makes 'Multiverse' seem like a nonsensical word. Whenever physicists refer to the Multiverse, they are talking about multiple 'observable' universes.

In the quantum world, when little particles interact with other little particles (little, as in, really really little; think of an electron or a quark), they do not behave as concrete and determinable particles; rather, they exist in a superposition of all possible states in what is called a 'wavefunction'. When an observation is made, the wavefunction collapses and the particle appears at a single spot. The famous Double Slit experiment, Quantum Teleportation and the Stern-Gerlach experiment, amongst many, line up to support this fact. While all physicists agree about the collapse of the wavefunction, nobody clearly knows how it happens. In other words, it is not clearly known why observation causes a quantum system to end up in a certain state and not another. This has led a group of scientists to speculate about many models of the different possible timelines, aka Parallel Universes:

1. Quilted Multiverse

This model assumes an infinite Universe. And infinity is bigger than you think. Consider this: it takes about x = 10^(10^70) quantum states to completely define 1 m³ of space. In other words, x is the number of different ways that we could put together 1 m³ of space, resulting in different arrangements of atoms, one of which can be you or me. That essentially means that if the universe were larger than x meters across, then there may be multiple copies of you and me in the Cosmos (Quantum Physics says that, not me). If you didn't notice, x here is finite, but we are not as sure about our Universe. This is the foundation of this model. There could be isolated bubbles of observable universes floating around that are so far away from each other that it will take gazillions of years for light to travel across them (recall that light travels at only about 300,000 km per second, not infinitely fast). Thus, billions of copies of you and me may be floating around but, alas, we cannot interact with them.

2.
Membranes & Multidimensions

This model relies on String theory and M-theory's idea of spacetime existing in at least 10 dimensions (nostalgia, anyone? Interstellar?). Each world lives on a 3D brane (short for membrane) embedded in a higher-dimensional multi-brane 'bulk'. Think of a newspaper: each of its pages is a 2D world embedded in a multi-page newspaper. These branes can interact with each other, and when they collide, the amount of energy released is enough to produce a big bang. This can be explained mathematically with the help of M-theory. We have got an interpretation that explains the Big Bang better and accounts for Parallel Universes too! Two for one! Unfortunately, it is also an unconfirmed model.

3. Many Worlds' Interpretation

This is it. This is 'the' thing. This is the most popular interpretation of the Multiverse theory. Proposed by Hugh Everett III, this is the one that repeatedly pops up in the DC/Marvel Universe, sci-fi movies, fantasy worlds and so on. This model uses 'choice' as the central idea. On a weekday morning, you are faced with two choices: either go to the class or watch your favourite TV show. You chose the latter (I mean, let's be honest). But that does not mean the other timeline does not exist. Both futures are equally real; only you chose the second one (try not to think about your GPA now). Imagine a tree, if it helps. An ant starts walking from the trunk to the branches, choosing one branch over the other at each junction. The ant will experience only one path, but the whole tree still exists. Similarly, we are currently living in one possible reality that we decided to choose. To quote Mark Batterson, "You are one decision away from a totally different life."

It is good to think about and explore possibilities, but it is far more important to stay realistic. How can we accept or reject any of the above-proposed models? Will it ever be possible? There is a continuous heated debate in the scientific community on the idea of the Multiverse. How can an experiment rule out a theory if the theory provides for all possible outcomes? Many great physicists speak in favour of it, while others speak against it:

Stephen Hawking
Brian Greene
Michio Kaku
Neil deGrasse Tyson
Max Tegmark
Steven Weinberg
David Gross
Paul Steinhardt
George Ellis
Neil Turok
and more…

But Physics is science, not philosophy. Hopefully, someday, one of us will design, build and conduct an experiment to test these hypotheses. We shall overcome this debate, someday. Maybe you believe one of the above models, maybe none. Screw it. How does it matter? We are all living inside the Matrix. Or maybe not living, either.
Force

Common symbols: F
SI unit: newton (N)
In SI base units: 1 kg·m/s²
SI dimension: L M T⁻²
Derived from other quantities: F = ma

In physics, a force is any interaction that, when unopposed, will change the motion of an object.[1] Concepts related to force include: thrust, which increases the velocity of an object; drag, which decreases the velocity of an object; and torque, which produces changes in the rotational speed of an object. In an extended body, each part usually applies forces on the adjacent parts; the distribution of such forces through the body is the internal mechanical stress. Such internal mechanical stresses cause no acceleration of that body, as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that, if unbalanced, can cause the body to accelerate. Stress usually causes deformation of solid materials, or flow in fluids.

Development of the concept

Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force.
In part this was due to an incomplete understanding of the sometimes non-obvious force of friction, and a consequently inadequate view of the nature of natural motion.[2] A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved upon for nearly three hundred years.[3] By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light, and also provided insight into the forces produced by gravitation and inertia. With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational.[4]:2–10[5]:79 High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction.[6]

Pre-Newtonian concepts

Aristotle famously described a force as anything that causes an object to undergo "unnatural motion".

Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of buoyant forces inherent in fluids.[2] Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground and that they will stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place" (e.g., for heavy bodies to fall), which led to "natural motion", and unnatural or forced motion, which required continued application of a force.[7] This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. The place where the archer moves the projectile was at the start of the flight, and while the projectile sailed through the air, no discernible efficient cause acted on it. Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation demands a continuum like air for change of place in general.[8] Aristotelian physics began facing criticism in medieval science, first by John Philoponus in the 6th century.
The shortcomings of Aristotelian physics would not be fully corrected until the 17th-century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass and argued that objects retain their velocity unless acted on by a force, for example friction.[9]

Newtonian mechanics

Sir Isaac Newton described the motion of all objects using the concepts of inertia and force, and in doing so he found they obey certain conservation laws. In 1687, Newton published his treatise Philosophiæ Naturalis Principia Mathematica.[3][10] In this work Newton set out three laws of motion that to this day are the way forces are described in physics.[10]

First law

Newton's First Law of Motion states that objects continue to move in a state of constant velocity unless acted upon by an external net force (resultant force).[10] This law is an extension of Galileo's insight that constant velocity was associated with a lack of net force (see a more detailed description of this below). Newton proposed that every object with mass has an innate inertia that functions as the fundamental equilibrium "natural state" in place of the Aristotelian idea of the "natural state of rest". That is, Newton's empirical First Law contradicts the intuitive Aristotelian belief that a net force is required to keep an object moving with constant velocity. By making rest physically indistinguishable from non-zero constant velocity, Newton's First Law directly connects inertia with the concept of relative velocities. Specifically, in systems where objects are moving with different velocities, it is impossible to determine which object is "in motion" and which object is "at rest". The laws of physics are the same in every inertial frame of reference, that is, in all frames related by a Galilean transformation. For instance, while traveling in a moving vehicle at a constant velocity, the laws of physics do not change as a result of its motion. If a person riding within the vehicle throws a ball straight up, that person will observe it rise vertically and fall vertically and not have to apply a force in the direction the vehicle is moving. Another person, observing the moving vehicle pass by, would observe the ball follow a curving parabolic path in the same direction as the motion of the vehicle. It is the inertia of the ball associated with its constant velocity in the direction of the vehicle's motion that ensures the ball continues to move forward even as it is thrown up and falls back down. From the perspective of the person in the car, the vehicle and everything inside of it is at rest: it is the outside world that is moving with a constant speed in the opposite direction of the vehicle. Since there is no experiment that can distinguish whether it is the vehicle that is at rest or the outside world that is at rest, the two situations are considered to be physically indistinguishable. Inertia therefore applies equally well to constant velocity motion as it does to rest.
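The thrown-ball example can be made concrete with a few lines of kinematics. Below is a minimal Python sketch (the vehicle speed and throw speed are arbitrary choices, not from the article) showing that the motion in the vehicle frame is purely vertical, while the ground frame sees the same motion shifted by the Galilean transformation x_ground = x_vehicle + v_vehicle · t:

```python
import numpy as np

g = 9.81          # gravitational acceleration, m/s^2
v_vehicle = 20.0  # vehicle speed, m/s (arbitrary)
v_throw = 5.0     # upward throw speed, m/s (arbitrary)

t = np.linspace(0, 2 * v_throw / g, 6)  # from throw to catch

# Vehicle frame: no horizontal motion at all.
x_vehicle = np.zeros_like(t)
y = v_throw * t - 0.5 * g * t**2

# Ground frame: identical vertical motion, plus uniform horizontal drift.
x_ground = x_vehicle + v_vehicle * t

for ti, xg, yi in zip(t, x_ground, y):
    print(f"t={ti:4.2f} s  x_ground={xg:5.1f} m  y={yi:5.2f} m")
# The ball rises and falls identically in both frames; only the horizontal
# coordinate differs, by exactly v_vehicle * t.
```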
Though Sir Isaac Newton's most famous equation is $\vec{F} = m\vec{a}$, he actually wrote down a different form for his second law of motion that did not use differential calculus.

Second law

A modern statement of Newton's Second Law is a vector equation:[Note 1] $$\vec{F} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t},$$ where $\vec{p}$ is the momentum of the system, and $\vec{F}$ is the net (vector sum) force. If a body is in equilibrium, there is zero net force by definition (balanced forces may be present nevertheless). In contrast, the second law states that if there is an unbalanced force acting on an object it will result in the object's momentum changing over time.[10] By the definition of momentum, $$\vec{F} = \frac{\mathrm{d}\vec{p}}{\mathrm{d}t} = \frac{\mathrm{d}(m\vec{v})}{\mathrm{d}t},$$ where $m$ is the mass and $\vec{v}$ is the velocity.[4]:9-1,9-2 If Newton's second law is applied to a system of constant mass,[Note 2] $m$ may be moved outside the derivative operator. The equation then becomes $$\vec{F} = m\frac{\mathrm{d}\vec{v}}{\mathrm{d}t}.$$ By substituting the definition of acceleration, the algebraic version of Newton's Second Law is derived: $$\vec{F} = m\vec{a}.$$ Newton never explicitly stated the formula in the reduced form above.[11] The use of Newton's Second Law as a definition of force has been disparaged in some of the more rigorous textbooks,[4]:12-1[5]:59[12] because it is essentially a mathematical truism. Notable physicists, philosophers and mathematicians who have sought a more explicit definition of the concept of force include Ernst Mach and Walter Noll.[13][14]

Third law

Whenever one body exerts a force on another, the latter simultaneously exerts an equal and opposite force on the first. In vector form, if $\vec{F}_{1,2}$ is the force of body 1 on body 2 and $\vec{F}_{2,1}$ that of body 2 on body 1, then $$\vec{F}_{1,2} = -\vec{F}_{2,1}.$$ This law is sometimes referred to as the action-reaction law, with $\vec{F}_{1,2}$ called the action and $\vec{F}_{2,1}$ the reaction. Newton's Third Law is a result of applying symmetry to situations where forces can be attributed to the presence of different objects. The third law means that all forces are interactions between different bodies,[15][Note 3] and thus that there is no such thing as a unidirectional force or a force that acts on only one body. In a system composed of object 1 and object 2, the net force on the system due to their mutual interactions is zero: $$\vec{F}_{1,2} + \vec{F}_{2,1} = 0.$$ More generally, in a closed system of particles, all internal forces are balanced. The particles may accelerate with respect to each other but the center of mass of the system will not accelerate. If an external force acts on the system, it will make the center of mass accelerate in proportion to the magnitude of the external force divided by the mass of the system.[4]:19-1[5] Combining Newton's Second and Third Laws, it is possible to show that the linear momentum of a system is conserved.[16] In a system of two particles, if $\vec{p}_1$ is the momentum of object 1 and $\vec{p}_2$ the momentum of object 2, then $$\frac{\mathrm{d}\vec{p}_1}{\mathrm{d}t} + \frac{\mathrm{d}\vec{p}_2}{\mathrm{d}t} = \vec{F}_{1,2} + \vec{F}_{2,1} = 0.$$ Using similar arguments, this can be generalized to a system with an arbitrary number of particles. In general, as long as all forces are due to the interaction of objects with mass, it is possible to define a system such that net momentum is never lost nor gained.[4][5]

Special theory of relativity

In the special theory of relativity, $\vec{F} = \mathrm{d}\vec{p}/\mathrm{d}t$ remains valid because it is a mathematical definition.[17]:855–876 But for relativistic momentum to be conserved, it must be redefined as: $$\vec{p} = \frac{m_0\vec{v}}{\sqrt{1 - v^2/c^2}},$$ where $m_0$ is the rest mass and $c$ the speed of light. The relativistic expression relating force and acceleration for a particle with constant non-zero rest mass $m$ moving in the $x$ direction is: $$F_x = \gamma^3 m a_x, \qquad F_y = \gamma m a_y, \qquad F_z = \gamma m a_z,$$ where $$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$ is called the Lorentz factor.[18] In the early history of relativity, the expressions $\gamma^3 m$ and $\gamma m$ were called longitudinal and transverse mass. Relativistic force does not produce a constant acceleration, but an ever-decreasing acceleration as the object approaches the speed of light.
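A short numerical sketch of that ever-decreasing acceleration, integrating $a = F/(\gamma^3 m)$ for straight-line motion under a constant force (the force, mass, and time step are arbitrary illustrative choices):

```python
import math

c = 299_792_458.0   # speed of light, m/s
m = 1.0             # rest mass, kg (arbitrary)
F = 1e8             # constant applied force, N (arbitrary)

v, dt = 0.0, 0.001
for step in range(1, 30_001):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    a = F / (gamma ** 3 * m)      # longitudinal acceleration shrinks as v -> c
    v += a * dt
    if step % 5_000 == 0:
        v_newton = F * step * dt / m   # what F = ma alone would predict
        print(f"t = {step*dt:4.0f} s   v/c = {v/c:.4f}   (Newtonian v/c = {v_newton/c:.2f})")
# The Newtonian prediction exceeds c after ~3 s; the relativistic velocity
# only approaches c asymptotically.
```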
Note that $\gamma$ asymptotically approaches an infinite value and is undefined for an object with a non-zero rest mass as it approaches the speed of light, and the theory yields no prediction at that speed. If $v$ is very small compared to $c$, then $\gamma$ is very close to 1 and $\vec{F} = m\vec{a}$ is a close approximation. Even for use in relativity, however, one can restore the form $$F^\mu = mA^\mu$$ through the use of four-vectors. This relation is correct in relativity when $F^\mu$ is the four-force, $m$ is the invariant mass, and $A^\mu$ is the four-acceleration.[19]

Descriptions

Free body diagrams of a block on a flat surface and an inclined plane. Forces are resolved and added together to determine their magnitudes and the net force.

Since forces are perceived as pushes or pulls, this can provide an intuitive understanding for describing forces.[3] As with other physical concepts (e.g. temperature), the intuitive understanding of forces is quantified using precise operational definitions that are consistent with direct observations and compared to a standard measurement scale. Through experimentation, it is determined that laboratory measurements of forces are fully consistent with the conceptual definition of force offered by Newtonian mechanics. Historically, forces were first quantitatively investigated in conditions of static equilibrium where several forces canceled each other out. Such experiments demonstrate the crucial properties that forces are additive vector quantities: they have magnitude and direction.[3] When two forces act on a point particle, the resulting force, the resultant (also called the net force), can be determined by following the parallelogram rule of vector addition: the addition of two vectors represented by sides of a parallelogram gives an equivalent resultant vector that is equal in magnitude and direction to the diagonal of the parallelogram.[4][5] The magnitude of the resultant varies from the difference of the magnitudes of the two forces to their sum, depending on the angle between their lines of action. However, if the forces are acting on an extended body, their respective lines of application must also be specified in order to account for their effects on the motion of the body. As well as being added, forces can also be resolved into independent components at right angles to each other. A horizontal force pointing northeast can therefore be split into two forces, one pointing north, and one pointing east. Summing these component forces using vector addition yields the original force. Resolving force vectors into components of a set of basis vectors is often a more mathematically clean way to describe forces than using magnitudes and directions.[21] This is because, for orthogonal components, the components of the vector sum are uniquely determined by the scalar addition of the components of the individual vectors. Orthogonal components are independent of each other because forces acting at ninety degrees to each other have no effect on the magnitude or direction of the other. Choosing a set of orthogonal basis vectors is often done by considering what set of basis vectors will make the mathematics most convenient. Choosing a basis vector that is in the same direction as one of the forces is desirable, since that force would then have only one non-zero component. Orthogonal force vectors can be three-dimensional with the third component being at right-angles to the other two.[4][5]
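A minimal sketch of this component-wise vector addition (the magnitudes and angles are arbitrary example values): two forces are resolved into Cartesian components, summed, and the resultant's magnitude and direction recovered.

```python
import math

# Two forces given as (magnitude in N, direction in degrees from the +x axis).
forces = [(10.0, 30.0), (6.0, 120.0)]   # arbitrary example values

# Resolve each force into orthogonal components and sum them.
fx = sum(mag * math.cos(math.radians(ang)) for mag, ang in forces)
fy = sum(mag * math.sin(math.radians(ang)) for mag, ang in forces)

resultant = math.hypot(fx, fy)
direction = math.degrees(math.atan2(fy, fx))
print(f"Resultant: {resultant:.2f} N at {direction:.1f} degrees")
```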
Equilibrium

Equilibrium occurs when the resultant force acting on a point particle is zero (that is, the vector sum of all forces is zero). When dealing with an extended body, it is also necessary that the net torque be zero. Static equilibrium was understood well before the invention of classical mechanics. Objects that are at rest have zero net force acting on them.[22] The simplest case of static equilibrium occurs when two forces are equal in magnitude but opposite in direction. For example, an object on a level surface is pulled (attracted) downward toward the center of the Earth by the force of gravity. At the same time, a force is applied by the surface that resists the downward force with equal upward force (called a normal force). The situation produces zero net force and hence no acceleration.[3] A static equilibrium between two forces is the most usual way of measuring forces, using simple devices such as weighing scales and spring balances. For example, an object suspended on a vertical spring scale experiences the force of gravity acting on the object balanced by a force applied by the "spring reaction force", which equals the object's weight. Using such tools, some quantitative force laws were discovered: that the force of gravity is proportional to volume for objects of constant density (widely exploited for millennia to define standard weights); Archimedes' principle for buoyancy; Archimedes' analysis of the lever; Boyle's law for gas pressure; and Hooke's law for springs. These were all formulated and experimentally verified before Isaac Newton expounded his Three Laws of Motion.[3][4][5]

Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces.

Dynamic equilibrium was first described by Galileo, who noticed that certain assumptions of Aristotelian physics were contradicted by observations and logic. Galileo realized that simple velocity addition demands that the concept of an "absolute rest frame" could not exist. Galileo concluded that motion at a constant velocity was completely equivalent to rest. This was contrary to Aristotle's notion of a "natural state" of rest that objects with mass naturally approached. Simple experiments showed that Galileo's understanding of the equivalence of constant velocity and rest was correct. For example, if a mariner dropped a cannonball from the crow's nest of a ship moving at a constant velocity, Aristotelian physics would have the cannonball fall straight down while the ship moved beneath it. Thus, in an Aristotelian universe, the falling cannonball would land behind the foot of the mast of a moving ship. However, when this experiment is actually conducted, the cannonball always falls at the foot of the mast, as if the cannonball knows to travel with the ship despite being separated from it. Since there is no forward horizontal force being applied on the cannonball as it falls, the only conclusion left is that the cannonball continues to move with the same velocity as the boat as it falls. Thus, no force is required to keep the cannonball moving at the constant forward velocity.[9] Moreover, any object traveling at a constant velocity must be subject to zero net force (resultant force). This is the definition of dynamic equilibrium: when all the forces on an object balance but it still moves at a constant velocity. A simple case of dynamic equilibrium occurs in constant velocity motion across a surface with kinetic friction. In such a situation, a force is applied in the direction of motion while the kinetic friction force exactly opposes the applied force.
This results in zero net force, but since the object started with a non-zero velocity, it continues to move with a non-zero velocity. Aristotle misinterpreted this motion as being caused by the applied force. However, when kinetic friction is taken into consideration it is clear that there is no net force causing constant velocity motion.[4][5]

Forces in quantum mechanics

The notion "force" keeps its meaning in quantum mechanics, though one is now dealing with operators instead of classical variables and though the physics is now described by the Schrödinger equation instead of Newtonian equations. This has the consequence that the results of a measurement are now sometimes "quantized", i.e. they appear in discrete portions. This is, of course, difficult to imagine in the context of "forces". However, the potentials $V(x,y,z)$ or fields, from which the forces generally can be derived, are treated similarly to classical position variables. This becomes different only in the framework of quantum field theory, where these fields are also quantized. However, already in quantum mechanics there is one "caveat", namely that the particles acting on each other do not only possess the spatial variable, but also a discrete intrinsic angular momentum-like variable called the "spin", and there is the Pauli principle relating the space and the spin variables. Depending on the value of the spin, identical particles split into two different classes, fermions and bosons. If two identical fermions (e.g. electrons) have a symmetric spin function (e.g. parallel spins), the spatial variables must be antisymmetric (i.e. they exclude each other from their places much as if there were a repulsive force), and vice versa: for antiparallel spins the position variables must be symmetric (i.e. the apparent force must be attractive). Thus in the case of two fermions there is a strictly negative correlation between spatial and spin variables, whereas for two bosons (e.g. quanta of electromagnetic waves, photons) the correlation is strictly positive. Thus the notion "force" already loses part of its meaning.

Feynman diagrams

Feynman diagram for the decay of a neutron into a proton. The W boson is between two vertices indicating a repulsion.

In modern particle physics, forces and the acceleration of particles are explained as a mathematical by-product of the exchange of momentum-carrying gauge bosons. With the development of quantum field theory and general relativity, it was realized that force is a redundant concept arising from conservation of momentum (4-momentum in relativity and momentum of virtual particles in quantum electrodynamics). The conservation of momentum can be directly derived from the homogeneity or symmetry of space and so is usually considered more fundamental than the concept of a force. Thus the currently known fundamental forces are considered more accurately to be "fundamental interactions".[6]:199–128 When particle A emits (creates) or absorbs (annihilates) virtual particle B, momentum conservation results in a recoil of particle A, giving the impression of repulsion or attraction between particles A and A′ that exchange B. This description applies to all forces arising from fundamental interactions. While sophisticated mathematical descriptions are needed to predict, in full detail, the accurate result of such interactions, there is a conceptually simple way to describe such interactions through the use of Feynman diagrams.
In a Feynman diagram, each matter particle is represented as a straight line (see world line) traveling through time, which normally increases up or to the right in the diagram. Matter and anti-matter particles are identical except for their direction of propagation through the Feynman diagram. World lines of particles intersect at interaction vertices, and the Feynman diagram represents any force arising from an interaction as occurring at the vertex with an associated instantaneous change in the direction of the particle world lines. Gauge bosons are emitted away from the vertex as wavy lines and, in the case of virtual particle exchange, are absorbed at an adjacent vertex.[23] Fundamental forces All of the forces in the universe are based on four fundamental interactions. The strong and weak forces are nuclear forces that act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions. For example, friction is a manifestation of the electromagnetic force acting between the atoms of two surfaces, and the Pauli exclusion principle,[24] which does not permit atoms to pass through each other. Similarly, the forces in springs, modeled by Hooke's law, are the result of electromagnetic forces and the Exclusion Principle acting together to return an object to its equilibrium position. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference.[4]:12-11[5]:359 The fundamental theories for forces developed from the unification of disparate ideas. For example, Isaac Newton unified, with his universal theory of gravitation, the force responsible for objects falling near the surface of the Earth with the force responsible for the falling of celestial bodies about the Earth (the Moon) and the Sun (the planets). Michael Faraday and James Clerk Maxwell demonstrated that electric and magnetic forces were unified through a theory of electromagnetism. In the 20th century, the development of quantum mechanics led to a modern understanding that the first three fundamental forces (all except gravity) are manifestations of matter (fermions) interacting by exchanging virtual particles called gauge bosons.[25] This standard model of particle physics assumes a similarity between the forces and led scientists to predict the unification of the weak and electromagnetic forces in electroweak theory, which was subsequently confirmed by observation. The complete formulation of the standard model predicts an as yet unobserved Higgs mechanism, but observations such as neutrino oscillations suggest that the standard model is incomplete. A Grand Unified Theory that allows for the combination of the electroweak interaction with the strong force is held out as a possibility with candidate theories such as supersymmetry proposed to accommodate some of the outstanding unsolved problems in physics. Physicists are still attempting to develop self-consistent unification models that would combine all four fundamental interactions into a theory of everything. 
Einstein tried and failed at this endeavor, but currently the most popular approach to answering this question is string theory.[6]:212–219

The four fundamental forces of nature[26] (the weak and electromagnetic columns together form the electroweak interaction):

Property/Interaction | Gravitation | Weak | Electromagnetic | Strong (fundamental) | Strong (residual)
Mediating particles | Graviton (not yet observed) | W⁺, W⁻, Z⁰ | γ | Gluons | Mesons
Strength at the scale of protons/neutrons | 10⁻³⁶ | 10⁻⁷ | 1 | Not applicable to hadrons |

Gravitational

Images of a freely falling basketball taken with a stroboscope at 20 flashes per second. The distance units on the right are multiples of about 12 millimetres. The basketball starts at rest. At the time of the first flash (distance zero) it is released, after which the number of units fallen is equal to the square of the number of flashes.

What we now call gravity was not identified as a universal force until the work of Isaac Newton. Before Newton, the tendency for objects to fall towards the Earth was not understood to be related to the motions of celestial objects. Galileo was instrumental in describing the characteristics of falling objects by determining that the acceleration of every object in free-fall was constant and independent of the mass of the object. Today, this acceleration due to gravity towards the surface of the Earth is usually designated as $\vec{g}$ and has a magnitude of about 9.81 meters per second squared (this measurement is taken from sea level and may vary depending on location), and points toward the center of the Earth.[27] This observation means that the force of gravity on an object at the Earth's surface is directly proportional to the object's mass. Thus an object that has a mass of $m$ will experience a force: $$\vec{F} = m\vec{g}.$$ For an object in free-fall, this force is unopposed and the net force on the object is its weight. For objects not in free-fall, the force of gravity is opposed by the reaction forces applied by their supports. For example, a person standing on the ground experiences zero net force, since a normal force (a reaction force) is exerted by the ground upward on the person that counterbalances his weight that is directed downward.[4][5] Newton came to realize that the effects of gravity might be observed in different ways at larger distances. In particular, Newton determined that the acceleration of the Moon around the Earth could be ascribed to the same force of gravity if the acceleration due to gravity decreased as an inverse square law. Further, Newton realized that the acceleration of a body due to gravity is proportional to the mass of the other attracting body.[28] Combining these ideas gives a formula that relates the mass ($m_\oplus$) and the radius ($R_\oplus$) of the Earth to the gravitational acceleration: $$\vec{g} = -\frac{G m_\oplus}{R_\oplus^2}\hat{r},$$ where the vector direction is given by $\hat{r}$, the unit vector directed outward from the center of the Earth.[10] In this equation, a dimensional constant $G$ is used to describe the relative strength of gravity. This constant has come to be known as Newton's Universal Gravitation Constant,[29] though its value was unknown in Newton's lifetime. Not until 1798 was Henry Cavendish able to make the first measurement of $G$ using a torsion balance; this was widely reported in the press as a measurement of the mass of the Earth, since knowing $G$ could allow one to solve for the Earth's mass given the above equation. Newton, however, realized that since all celestial bodies followed the same laws of motion, his law of gravity had to be universal.
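As a quick numerical check of the relation just given, $g = G m_\oplus / R_\oplus^2$ can be evaluated from standard values (a sketch; the constants are rounded):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # mean radius of the Earth, m

g = G * M_earth / R_earth**2
print(f"g = {g:.2f} m/s^2")   # ~9.82, close to the quoted 9.81
```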
Succinctly stated, Newton's Law of Gravitation states that the force on a spherical object of mass $m_1$ due to the gravitational pull of mass $m_2$ is $$\vec{F} = -\frac{G m_1 m_2}{r^2}\hat{r},$$ where $r$ is the distance between the two objects' centers of mass and $\hat{r}$ is the unit vector pointed in the direction away from the center of the first object toward the center of the second object.[10] This formula was powerful enough to stand as the basis for all subsequent descriptions of motion within the solar system until the 20th century. During that time, sophisticated methods of perturbation analysis[30] were invented to calculate the deviations of orbits due to the influence of multiple bodies on a planet, moon, comet, or asteroid. The formalism was exact enough to allow mathematicians to predict the existence of the planet Neptune before it was observed.[31] Instruments like GRAVITY provide a powerful probe for gravity force detection.[32] Mercury's orbit, however, did not match that predicted by Newton's Law of Gravitation. Some astrophysicists predicted the existence of another planet (Vulcan) that would explain the discrepancies; however, no such planet could be found. When Albert Einstein formulated his theory of general relativity (GR) he turned his attention to the problem of Mercury's orbit and found that his theory added a correction, which could account for the discrepancy. This was the first time that Newton's Theory of Gravity had been shown to be inexact.[33]

Electromagnetic

The electrostatic force was first described in 1784 by Coulomb as a force that existed intrinsically between two charges.[17]:519 The properties of the electrostatic force were that it varied as an inverse square law directed in the radial direction, was both attractive and repulsive (there was intrinsic polarity), was independent of the mass of the charged objects, and followed the superposition principle. Coulomb's law unifies all these observations into one succinct statement.[34] Subsequent mathematicians and physicists found the construct of the electric field to be useful for determining the electrostatic force on an electric charge at any point in space. The electric field was based on using a hypothetical "test charge" anywhere in space and then using Coulomb's Law to determine the electrostatic force.[35]:4-6 to 4-8 Thus the electric field anywhere in space is defined as $$\vec{E} = \frac{\vec{F}}{q},$$ where $q$ is the magnitude of the hypothetical test charge. Through combining the definition of electric current as the time rate of change of electric charge, a rule of vector multiplication called Lorentz's Law describes the force on a charge moving in a magnetic field.[35] The connection between electricity and magnetism allows for the description of a unified electromagnetic force that acts on a charge. This force can be written as a sum of the electrostatic force (due to the electric field) and the magnetic force (due to the magnetic field). Fully stated, the law is: $$\vec{F} = q(\vec{E} + \vec{v}\times\vec{B}),$$ where $\vec{F}$ is the electromagnetic force, $q$ is the magnitude of the charge of the particle, $\vec{E}$ is the electric field, and $\vec{v}$ is the velocity of the particle, which is crossed with the magnetic field $\vec{B}$.
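A minimal numerical sketch of the Lorentz force law just stated (the field values, charge, and velocity are arbitrary illustrative choices):

```python
import numpy as np

q = 1.602e-19                     # charge of a proton, C
E = np.array([0.0, 1e3, 0.0])     # electric field, V/m (arbitrary)
B = np.array([0.0, 0.0, 0.5])     # magnetic field, T (arbitrary)
v = np.array([2e5, 0.0, 0.0])     # particle velocity, m/s (arbitrary)

F = q * (E + np.cross(v, B))      # Lorentz force
print(F)   # [0, ~-1.59e-14, 0] N: electric and magnetic parts both along y
```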
The origin of electric and magnetic fields would not be fully explained until 1864, when James Clerk Maxwell unified a number of earlier theories into a set of 20 scalar equations, which were later reformulated into 4 vector equations by Oliver Heaviside and Josiah Willard Gibbs.[36] These "Maxwell Equations" fully described the sources of the fields as being stationary and moving charges, and the interactions of the fields themselves. This led Maxwell to discover that electric and magnetic fields could be "self-generating" through a wave that traveled at a speed that he calculated to be the speed of light. This insight united the nascent fields of electromagnetic theory with optics and led directly to a complete description of the electromagnetic spectrum.[37] However, attempting to reconcile electromagnetic theory with two observations, the photoelectric effect and the nonexistence of the ultraviolet catastrophe, proved troublesome. Through the work of leading theoretical physicists, a new theory of electromagnetism was developed using quantum mechanics. This final modification to electromagnetic theory ultimately led to quantum electrodynamics (or QED), which fully describes all electromagnetic phenomena as being mediated by wave-particles known as photons. In QED, photons are the fundamental exchange particle, which describes all interactions relating to electromagnetism, including the electromagnetic force.[Note 4]

Strong nuclear

There are two "nuclear forces", which today are usually described as interactions that take place in quantum theories of particle physics. The strong nuclear force[17]:940 is the force responsible for the structural integrity of atomic nuclei, while the weak nuclear force[17]:951 is responsible for the decay of certain nucleons into leptons and other types of hadrons.[4][5] The strong force is today understood to represent the interactions between quarks and gluons as detailed by the theory of quantum chromodynamics (QCD).[38] The strong force is the fundamental force mediated by gluons, acting upon quarks, antiquarks, and the gluons themselves. The (aptly named) strong interaction is the "strongest" of the four fundamental forces. The strong force only acts directly upon elementary particles. However, a residual of the force is observed between hadrons (the best known example being the force that acts between nucleons in atomic nuclei) as the nuclear force. Here the strong force acts indirectly, transmitted as gluons, which form part of the virtual pi and rho mesons, which classically transmit the nuclear force (see this topic for more). The failure of many searches for free quarks has shown that the elementary particles affected are not directly observable. This phenomenon is called color confinement.

Weak nuclear

The weak force is due to the exchange of the heavy W and Z bosons. Its most familiar effect is beta decay (of neutrons in atomic nuclei) and the associated radioactivity. The word "weak" derives from the fact that the field strength is some 10¹³ times less than that of the strong force. Still, it is stronger than gravity over short distances. A consistent electroweak theory has also been developed, which shows that electromagnetic forces and the weak force are indistinguishable at temperatures in excess of approximately 10¹⁵ kelvins. Such temperatures have been probed in modern particle accelerators and show the conditions of the universe in the early moments of the Big Bang.

Non-fundamental forces

Some forces are consequences of the fundamental ones. In such situations, idealized models can be utilized to gain physical insight.

Normal force

$\vec{F}_N$ represents the normal force exerted on the object.

The normal force is due to repulsive forces of interaction between atoms at close contact.
When their electron clouds overlap, Pauli repulsion (due to the fermionic nature of electrons) follows, resulting in the force that acts in a direction normal to the surface interface between two objects.[17]:93 The normal force, for example, is responsible for the structural integrity of tables and floors as well as being the force that responds whenever an external force pushes on a solid object. An example of the normal force in action is the impact force on an object crashing into an immobile surface.[4][5]

Friction

Friction is a surface force that opposes relative motion. The frictional force is directly related to the normal force that acts to keep two solid objects separated at the point of contact. There are two broad classifications of frictional forces: static friction and kinetic friction. The static friction force ($F_{sf}$) will exactly oppose forces applied to an object parallel to a surface contact up to the limit specified by the coefficient of static friction ($\mu_{sf}$) multiplied by the normal force ($F_N$). In other words, the magnitude of the static friction force satisfies the inequality: $$0 \le F_{sf} \le \mu_{sf} F_N.$$ The kinetic friction force ($F_{kf}$) is independent of both the forces applied and the movement of the object. Thus, the magnitude of the force equals: $$F_{kf} = \mu_{kf} F_N,$$ where $\mu_{kf}$ is the coefficient of kinetic friction.

Tension

Tension forces can be modeled using ideal strings that are massless, frictionless, unbreakable, and unstretchable. They can be combined with ideal pulleys, which allow ideal strings to switch physical direction. Ideal strings transmit tension forces instantaneously in action-reaction pairs, so that if two objects are connected by an ideal string, any force directed along the string by the first object is accompanied by a force directed along the string in the opposite direction by the second object.[39] By connecting the same string multiple times to the same object through the use of a set-up that uses movable pulleys, the tension force on a load can be multiplied. For every string that acts on a load, another factor of the tension force in the string acts on the load. However, even though such machines allow for an increase in force, there is a corresponding increase in the length of string that must be displaced in order to move the load. These tandem effects result ultimately in the conservation of mechanical energy, since the work done on the load is the same no matter how complicated the machine.[4][5][40]

Elastic force

$\vec{F}_k$ is the force that responds to the load on the spring.

An elastic force acts to return a spring to its natural length. An ideal spring is taken to be massless, frictionless, unbreakable, and infinitely stretchable. Such springs exert forces that push when contracted, or pull when extended, in proportion to the displacement of the spring from its equilibrium position.[41] This linear relationship was described by Robert Hooke in 1676, for whom Hooke's law is named. If $\Delta x$ is the displacement, the force exerted by an ideal spring equals: $$\vec{F} = -k\,\Delta\vec{x},$$ where $k$ is the spring constant (or force constant), which is particular to the spring. The minus sign accounts for the tendency of the force to act in opposition to the applied load.[4][5]
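A minimal sketch integrating Hooke's law for a mass on a spring (the spring constant, mass, and initial stretch are arbitrary choices), showing the restoring force producing oscillation:

```python
k = 4.0           # spring constant, N/m (arbitrary)
m = 1.0           # mass, kg (arbitrary)
x, v = 0.5, 0.0   # initial displacement (m) and velocity (m/s)

dt = 0.001
for step in range(1, 6286):        # roughly two periods (T = 2*pi*sqrt(m/k))
    F = -k * x                     # Hooke's law: restoring force
    v += (F / m) * dt              # Newton's second law
    x += v * dt
    if step % 1571 == 0:           # print every half period
        print(f"t = {step*dt:5.3f} s   x = {x:+.3f} m")
# x alternates between roughly -0.5 m and +0.5 m, as expected.
```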
Continuum mechanics

When the drag force ($F_d$) associated with air resistance becomes equal in magnitude to the force of gravity on a falling object ($F_g$), the object reaches a state of dynamic equilibrium at terminal velocity.

For an object immersed in a fluid, the pressure of the surrounding fluid produces a force $$\vec{F} = -\vec{\nabla}P\,V,$$ where $V$ is the volume of the object in the fluid and $P$ is the scalar function that describes the pressure at all locations in space. Pressure gradients and differentials result in the buoyant force for fluids suspended in gravitational fields, winds in atmospheric science, and the lift associated with aerodynamics and flight.[4][5] Drag due to a fluid is often modeled, for slow motion, as $$\vec{F}_d = -b\vec{v},$$ where $b$ is a constant that depends on the properties of the fluid and the dimensions of the object and $\vec{v}$ is the velocity of the object.[4][5] More formally, forces in continuum mechanics are fully described by a stress tensor with terms that are roughly defined as $$\sigma = \frac{F}{A},$$ where $A$ is the relevant cross-sectional area for the volume for which the stress tensor is being calculated. This formalism includes pressure terms associated with forces that act normal to the cross-sectional area (the matrix diagonals of the tensor) as well as shear terms associated with forces that act parallel to the cross-sectional area (the off-diagonal elements). The stress tensor accounts for forces that cause all strains (deformations), including also tensile stresses and compressions.[3][5]:133–134[35]:38-1–38-11

Fictitious forces

There are forces that are frame dependent, meaning that they appear due to the adoption of non-Newtonian (that is, non-inertial) reference frames. Such forces include the centrifugal force and the Coriolis force.[42] These forces are considered fictitious because they do not exist in frames of reference that are not accelerating.[4][5] Because these forces are not genuine they are also referred to as "pseudo forces".[4]:12-11 In general relativity, gravity becomes a fictitious force that arises in situations where spacetime deviates from a flat geometry. As an extension, Kaluza–Klein theory and string theory ascribe electromagnetism and the other fundamental forces respectively to the curvature of differently scaled dimensions, which would ultimately imply that all forces are fictitious.

Rotations and torque

Relationship between force (F), torque (τ), and momentum vectors (p and L) in a rotating system.

Forces that cause extended objects to rotate are associated with torques. Mathematically, the torque of a force $\vec{F}$ is defined relative to an arbitrary reference point as the cross-product: $$\vec{\tau} = \vec{r}\times\vec{F},$$ where $\vec{r}$ is the position vector of the force application point relative to the reference point. Torque is the rotation equivalent of force in the same way that angle is the rotational equivalent for position, angular velocity for velocity, and angular momentum for momentum. As a consequence of Newton's First Law of Motion, there exists rotational inertia that ensures that all bodies maintain their angular momentum unless acted upon by an unbalanced torque. Likewise, Newton's Second Law of Motion can be used to derive an analogous equation for the instantaneous angular acceleration of the rigid body: $$\vec{\tau} = I\vec{\alpha},$$ where $I$ is the moment of inertia of the body and $\vec{\alpha}$ is the angular acceleration of the body. This provides a definition for the moment of inertia, which is the rotational equivalent for mass. In more advanced treatments of mechanics, where the rotation over a time interval is described, the moment of inertia must be substituted by the tensor that, when properly analyzed, fully determines the characteristics of rotations including precession and nutation. Equivalently, the differential form of Newton's Second Law provides an alternative definition of torque: $$\vec{\tau} = \frac{\mathrm{d}\vec{L}}{\mathrm{d}t},$$[43] where $\vec{L}$ is the angular momentum of the particle. Newton's Third Law of Motion requires that all objects exerting torques themselves experience equal and opposite torques,[44] and therefore also directly implies the conservation of angular momentum for closed systems that experience rotations and revolutions through the action of internal torques.
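A minimal sketch of the torque cross-product defined above (the position and force vectors are arbitrary example values):

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])   # lever arm from pivot to application point, m
F = np.array([0.0, 10.0, 0.0])  # applied force, N

tau = np.cross(r, F)            # torque about the pivot
print(tau)                      # [0, 0, 5] N·m: a rotation about the z-axis
```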
Centripetal force
For an object accelerating in circular motion, the unbalanced force acting on the object equals:[45] $$\vec{F} = -\frac{m v^2\,\hat{r}}{r},$$ where $m$ is the mass of the object, $v$ is the velocity of the object, $r$ is the distance to the center of the circular path, and $\hat{r}$ is the unit vector pointing in the radial direction outwards from the center. This means that the unbalanced centripetal force felt by any object is always directed toward the center of the curving path. Such forces act perpendicular to the velocity vector associated with the motion of an object, and therefore do not change the speed of the object (the magnitude of the velocity), but only the direction of the velocity vector. The unbalanced force that accelerates an object can be resolved into a component that is perpendicular to the path, and one that is tangential to the path. This yields both the tangential force, which accelerates the object by either slowing it down or speeding it up, and the radial (centripetal) force, which changes its direction.[4][5]

Kinematic integrals
Integrating a force with respect to the time over which it acts yields the impulse, $$\vec{J} = \int_{t_1}^{t_2} \vec{F}\,dt,$$ which by Newton's Second Law must be equivalent to the change in momentum (yielding the impulse-momentum theorem). Similarly, integrating with respect to position gives a definition for the work done by a force:[4]:13-3 $$W = \int \vec{F} \cdot d\vec{x},$$ which is equivalent to changes in kinetic energy (yielding the work-energy theorem).[4]:13-3 Power $P$ is the rate of change $dW/dt$ of the work $W$, as the trajectory is extended by a position change $d\vec{x}$ in a time interval $dt$:[4]:13-2 $$P = \frac{dW}{dt} = \vec{F} \cdot \vec{v},$$ with $\vec{v}$ the velocity.

Potential energy
Forces can be classified as conservative or nonconservative. Conservative forces are equivalent to the gradient of a potential while nonconservative forces are not.[4][5]

Conservative forces
Conservative forces include gravity, the electromagnetic force, and the spring force. Each of these forces has models that are dependent on a position often given as a radial vector $\vec{r}$ emanating from spherically symmetric potentials.[48] Examples of this follow. For gravity: $$\vec{F}_g = -\frac{G m_1 m_2}{r^2}\hat{r},$$ where $G$ is the gravitational constant and $m_n$ is the mass of object $n$. For electrostatic forces: $$\vec{F}_e = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}\hat{r},$$ where $\varepsilon_0$ is the electric permittivity of free space and $q_n$ is the electric charge of object $n$. For spring forces: $$\vec{F}_s = -k\,\Delta\vec{x},$$ where $k$ is the spring constant.[4][5]

Nonconservative forces
For certain physical scenarios, it is impossible to model forces as being due to gradients of potentials. This is often due to macrophysical considerations that yield forces as arising from a macroscopic statistical average of microstates. For example, friction is caused by the gradients of numerous electrostatic potentials between the atoms, but manifests as a force model that is independent of any macroscale position vector. Nonconservative forces other than friction include other contact forces, tension, compression, and drag. However, for any sufficiently detailed description, all these forces are the results of conservative ones, since each of these macroscopic forces is the net result of the gradients of microscopic potentials.[4][5] The connection between macroscopic nonconservative forces and microscopic conservative forces is described by detailed treatment with statistical mechanics. In macroscopic closed systems, nonconservative forces act to change the internal energies of the system, and are often associated with the transfer of heat. According to the Second Law of Thermodynamics, nonconservative forces necessarily result in energy transformations within closed systems from ordered to more random conditions as entropy increases.[4][5]
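The kinematic integrals and the conservative-force idea above can be checked numerically: for the spring force, the work computed by integrating along the path equals minus the change in potential energy, while integrating a time-dependent force over time gives the impulse. A short sketch with invented numbers:

import numpy as np

# Work done by a spring force F(x) = -k x along a path from 0 to 5 cm.
k = 200.0
x = np.linspace(0.0, 0.05, 10_001)
F = -k * x

W = np.trapz(F, x)                      # work, by numerical integration
U = 0.5 * k * 0.05**2                   # potential energy stored at 5 cm
print(W, -U)                            # both -0.25 J: W = -(U_final - U_initial)

# Impulse: integrate a time-dependent force to get the momentum change.
t = np.linspace(0.0, 2.0, 10_001)
F_t = 3.0 * t                           # hypothetical ramping force (N)
J = np.trapz(F_t, t)                    # impulse = integral of F dt
print(J)                                # 6.0 N*s = change in momentum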
Units of measurement
The SI unit of force is the newton (symbol N), which is the force required to accelerate a one-kilogram mass at a rate of one meter per second squared, or kg·m·s⁻².[49] The corresponding CGS unit is the dyne, the force required to accelerate a one-gram mass by one centimeter per second squared, or g·cm·s⁻². A newton is thus equal to 100,000 dynes. The gravitational foot-pound-second English unit of force is the pound-force (lbf), defined as the force exerted by gravity on a pound-mass in the standard gravitational field of 9.80665 m·s⁻².[49] The pound-force provides an alternative unit of mass: one slug is the mass that will accelerate by one foot per second squared when acted on by one pound-force.[49] An alternative unit of force in a different foot-pound-second system, the absolute fps system, is the poundal, defined as the force required to accelerate a one-pound mass at a rate of one foot per second squared.[49] The units of slug and poundal are designed to avoid a constant of proportionality in Newton's Second Law. The pound-force has a metric counterpart, less commonly used than the newton: the kilogram-force (kgf) (sometimes kilopond) is the force exerted by standard gravity on one kilogram of mass.[49] The kilogram-force leads to an alternate, but rarely used, unit of mass: the metric slug (sometimes mug or hyl) is that mass that accelerates at 1 m·s⁻² when subjected to a force of 1 kgf. The kilogram-force is not a part of the modern SI system, and is generally deprecated; however, it still sees use for some purposes such as expressing aircraft weight, jet thrust, bicycle spoke tension, torque wrench settings, and engine output torque. Other arcane units of force include the sthène, which is equivalent to 1000 N, and the kip, which is equivalent to 1000 lbf.
Units of force: the newton (SI unit), the dyne, the kilogram-force, the pound-force, and the poundal; see also the ton-force.
Force measurement: see force gauge, spring scale, load cell.

Notes
1. ^ Newton's Principia Mathematica actually used a finite difference version of this equation based upon impulse. See Impulse.
2. ^ "It is important to note that we cannot derive a general expression for Newton's second law for variable mass systems by treating the mass in F = dP/dt = d(Mv) as a variable. [...] We can use F = dP/dt to analyze variable mass systems only if we apply it to an entire system of constant mass having parts among which there is an interchange of mass." [Emphasis as in the original] (Halliday, Resnick & Krane 2001, p. 199)
3. ^ "Any single force is only one aspect of a mutual interaction between two bodies." (Halliday, Resnick & Krane 2001, pp. 78–79)
4. ^ For a complete library on quantum mechanics see Quantum mechanics – References

References
1. ^ Nave, C. R. (2014). "Force". Hyperphysics. Dept. of Physics and Astronomy, Georgia State University. Retrieved 15 August 2014.
2. ^ a b Heath, T.L. "The Works of Archimedes (1897). The unabridged work in PDF form (19 MB)". Internet Archive. Retrieved 2007-10-14.
3. ^ a b c d e f g h University Physics, Sears, Young & Zemansky, pp. 18–38
5. ^ a b c d e f g h i j k l m n o p q r s t u v w x y Kleppner & Kolenkow 2010
6. ^ a b c Weinberg, S. (1994). Dreams of a Final Theory. Vintage Books USA. ISBN 0-679-74408-8.
7. ^ Lang, Helen S. (1998).
The Order of Nature in Aristotle's Physics: Place and the Elements (1st publ. ed.). Cambridge: Cambridge University Press. ISBN 9780521624534.
8. ^ Hetherington, Norriss S. (1993). Cosmology: Historical, Literary, Philosophical, Religious, and Scientific Perspectives. Garland Reference Library of the Humanities. p. 100. ISBN 0-8153-1085-4.
9. ^ a b Drake, Stillman (1978). Galileo At Work. Chicago: University of Chicago Press. ISBN 0-226-16226-5.
10. ^ a b c d e f Newton, Isaac (1999). The Principia: Mathematical Principles of Natural Philosophy. Berkeley: University of California Press. ISBN 0-520-08817-4. This is a recent translation into English by I. Bernard Cohen and Anne Whitman, with help from Julia Budenz.
11. ^ Howland, R. A. (2006). Intermediate Dynamics: A Linear Algebraic Approach (Online-Ausg. ed.). New York: Springer. pp. 255–256. ISBN 9780387280592.
12. ^ One exception to this rule is: Landau, L. D.; Akhiezer, A. I.; Lifshitz, A. M. (1967). General Physics: Mechanics and Molecular Physics (First English ed.). Oxford: Pergamon Press. ISBN 0-08-003304-0. Translated by: J. B. Sykes, A. D. Petford, and C. L. Petford. Library of Congress Catalog Number 67-30260. In section 7, pages 12–14, this book defines force as dp/dt.
13. ^ Jammer, Max (1999). Concepts of Force: A Study in the Foundations of Dynamics (Facsim. ed.). Mineola, N.Y.: Dover Publications. pp. 220–222. ISBN 9780486406893.
14. ^ Noll, Walter (April 2007). "On the Concept of Force" (PDF). Carnegie Mellon University. Retrieved 28 October 2013.
16. ^ Dr. Nikitin (2007). "Dynamics of translational motion". Retrieved 2008-01-04.
17. ^ a b c d e Cutnell & Johnson 2003
18. ^ "Seminar: Visualizing Special Relativity". The Relativistic Raytracer. Retrieved 2008-01-04.
19. ^ Wilson, John B. "Four-Vectors (4-Vectors) of Special Relativity: A Study of Elegant Physics". The Science Realm: John's Virtual Sci-Tech Universe. Archived from the original on 26 June 2009. Retrieved 2008-01-04.
20. ^ "Introduction to Free Body Diagrams". Physics Tutorial Menu. University of Guelph. Archived from the original on 2008-01-16. Retrieved 2008-01-02.
21. ^ Henderson, Tom (2004). "The Physics Classroom". The Physics Classroom and Mathsoft Engineering & Education, Inc. Archived from the original on 2008-01-01. Retrieved 2008-01-02.
22. ^ "Static Equilibrium". Physics Static Equilibrium (forces and torques). University of the Virgin Islands. Archived from the original on October 19, 2007. Retrieved 2008-01-02.
23. ^ a b Shifman, Mikhail (1999). ITEP Lectures on Particle Physics and Field Theory. World Scientific. ISBN 981-02-2639-X.
24. ^ Nave, Carl Rod. "Pauli Exclusion Principle". HyperPhysics. Georgia State University. Retrieved 2013-10-28.
25. ^ "Fermions & Bosons". The Particle Adventure. Archived from the original on 2007-12-18. Retrieved 2008-01-04.
26. ^ "Standard model of particles and interactions". Contemporary Physics Education Project. 2000. Retrieved 2 January 2017.
27. ^ Cook, A. H. (1965). "A New Absolute Determination of the Acceleration due to Gravity at the National Physical Laboratory". Nature. 208 (5007): 279. Bibcode:1965Natur.208..279C. doi:10.1038/208279a0.
28. ^ a b Young, Hugh; Freedman, Roger; Sears, Francis; Zemansky, Mark (1949). University Physics. Pearson Education. pp. 59–82.
29. ^ "Sir Isaac Newton: The Universal Law of Gravitation". Astronomy 161: The Solar System. Retrieved 2008-01-04.
30. ^ Watkins, Thayer. "Perturbation Analysis, Regular and Singular". Department of Economics. San José State University.
32. ^ "Powerful New Black Hole Probe Arrives at Paranal". Retrieved 13 August 2015.  33. ^ Siegel, Ethan (20 May 2016). "When Did Isaac Newton Finally Fail?". Forbes. Retrieved 3 January 2017.  34. ^ Coulomb, Charles (1784). "Recherches théoriques et expérimentales sur la force de torsion et sur l'élasticité des fils de metal". Histoire de l'Académie Royale des Sciences: 229–269.  35. ^ a b c Feynman volume 2 36. ^ Scharf, Toralf (2007). Polarized light in liquid crystals and polymers. John Wiley and Sons. p. 19. ISBN 0-471-74064-0. , Chapter 2, p. 19 37. ^ Duffin, William (1980). Electricity and Magnetism, 3rd Ed. McGraw-Hill. pp. 364–383. ISBN 0-07-084111-X.  38. ^ Stevens, Tab (10 July 2003). "Quantum-Chromodynamics: A Definition – Science Articles". Archived from the original on 2011-10-16. Retrieved 2008-01-04.  39. ^ "Tension Force". Non-Calculus Based Physics I. Retrieved 2008-01-04.  40. ^ Fitzpatrick, Richard (2006-02-02). "Strings, pulleys, and inclines". Retrieved 2008-01-04.  41. ^ Nave, Carl Rod. "Elasticity". HyperPhysics. University of Guelph. Retrieved 2013-10-28.  42. ^ Mallette, Vincent (1982–2008). "Inwit Publishing, Inc. and Inwit, LLC – Writings, Links and Software Distributions – The Coriolis Force". Publications in Science and Mathematics, Computing and the Humanities. Inwit Publishing, Inc. Retrieved 2008-01-04.  43. ^ Nave, Carl Rod. "Newton's 2nd Law: Rotation". HyperPhysics. University of Guelph. Retrieved 2013-10-28.  44. ^ Fitzpatrick, Richard (2007-01-07). "Newton's third law of motion". Retrieved 2008-01-04.  45. ^ Nave, Carl Rod. "Centripetal Force". HyperPhysics. University of Guelph. Retrieved 2013-10-28.  46. ^ Hibbeler, Russell C. (2010). Engineering Mechanics, 12th edition. Pearson Prentice Hall. p. 222. ISBN 0-13-607791-9.  47. ^ Singh, Sunil Kumar (2007-08-25). "Conservative force". Connexions. Retrieved 2008-01-04.  48. ^ Davis, Doug. "Conservation of Energy". General physics. Retrieved 2008-01-04.  49. ^ a b c d e Wandmacher, Cornelius; Johnson, Arnold (1995). Metric Units in Engineering. ASCE Publications. p. 15. ISBN 0-7844-0070-9.  Further reading • Corben, H.C.; Philip Stehle (1994). Classical Mechanics. New York: Dover publications. pp. 28–31. ISBN 0-486-68063-0.  • Cutnell, John D.; Johnson, Kenneth W. (2003). Physics, Sixth Edition. Hoboken, New Jersey: John Wiley & Sons Inc. ISBN 0471151831.  • Feynman, Richard P.; Leighton; Sands, Matthew (2010). The Feynman lectures on physics. Vol. I: Mainly mechanics, radiation and heat (New millennium ed.). New York: BasicBooks. ISBN 978-0465024933.  • Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (2010). The Feynman lectures on physics. Vol. II: Mainly electromagnetism and matter (New millennium ed.). New York: BasicBooks. ISBN 978-0465024940.  • Halliday, David; Resnick, Robert; Krane, Kenneth S. (2001). Physics v. 1. New York: John Wiley & Sons. ISBN 0-471-32057-9.  • Kleppner, Daniel; Kolenkow, Robert J. (2010). An introduction to mechanics (3. print ed.). Cambridge: Cambridge University Press. ISBN 0521198216.  • Parker, Sybil (1993). "force". Encyclopedia of Physics. Ohio: McGraw-Hill. p. 107,. ISBN 0-07-051400-3.  • Sears F., Zemansky M. & Young H. (1982). University Physics. Reading, Massachusetts: Addison-Wesley. ISBN 0-201-07199-1.  • Verma, H.C. (2004). Concepts of Physics Vol 1 (2004 Reprint ed.). Bharti Bhavan. ISBN 8177091875.  External links Powered by YouTube Wikipedia content is licensed under the GFDL and (CC) license
Friday, July 10, 2009
List of censors
Below you can find a list of sites from which I was banned at least five times in a row. This list is for informative purposes only and will be updated gradually. "...They can't help it. Some people are just born with a lack of oxygen..."
Science and philosophy
This post is motivated by the recent essay "What Is Science?" by Helen Quinn (former president of the APS), published in the July 2009 issue of Physics Today (via ZapperZ) and discussed here, which pithily distinguishes the subject of science from that of philosophy in a single sentence: "Religion and philosophy are interested in reasons and purposes, but science cares only about mechanisms." Such a definition has a deep meaning in AWT, because contemporary science is based on the sequential logic of formal math, which is strictly atemporal; this effectively means that its chains of reasoning can be reproduced at any time without change. This gives math the power of a general language for logical and exact communication. We cannot define and share our ideas exactly until we express them in predicate logic and the symbolic language of formal math (Feynman: "Shut up and calculate!"). But from Gödel's theorems it follows that even the most strict and limited axiomatic system can lead to uncertain conclusions about reality, and this finding has a clear meaning in the implicate geometry of AWT. From a more general perspective the formal view remains quite limited, because it operates in the space dimension only, not the time dimension, and sequential logic is strictly bound to a single arrow of time. So we can call science "a philosophy of locality" or an "atemporal philosophy", which leads to a sort of conceptual opportunism or even hypocrisy: because of its strictly local character, its proponents often do not realize that their stance changes in time, or that it does not quite fit their well-intentioned but more general ideas, for lack of the personal feedback that always requires a wider, nonlocal perspective. Precisely because our mind can operate with wider concepts, we can ask for mechanisms in the time dimension, and so we can even ask "philosophical" questions about reasons and consequences; such questions remain fully motivated from the fractally nested perspective of the implicate order. The causal question "WHY?" is no less important here than the descriptive question "HOW?" In AWT, interactions along the time dimension lead to quantum fuzziness and chaos both at the small scale of details and at the large scale (the fuzziness of vague answers to very general questions), and this behaviour can be modeled by the spreading of waves on a water surface. The fuzzy character of philosophy is therefore quite predictable and comes as the price of its universality: it is not a manifestation of intellectual laziness or incompetence on the part of philosophers, or anything similar. From AWT it likewise follows that excessive use of a strictly formal approach leads to fuzziness at the small scale (the extensive landscapes of string theory and quantum gravity solutions, for example) and to a separation from reality similar to that of an overly philosophical approach. A formally thinking theorist cannot explain things in an intuitive way even when such an explanation is quite simple (for example, the explanation of Lorentz invariance or of the string concept by density fluctuations of a particle environment). Weinberg's essay explains why the main doctrine of positivism is wrong, if taken strictly, and why it has slowed down science in the past.
And vice versa: a philosophically minded thinker cannot postulate a formal description of phenomena even when such a description is quite simple (the derivation of the parabolic equation of free fall, for example). Both approaches have their predictive power; the strategy of highest fitness from an evolutionary perspective is therefore usually based on a balanced equilibrium of the intuitive and the formal approach, where the intuitive approach usually comes first and the formal one finalizes intuitive ideas in a reproducible form that can be exchanged freely without loss of information.
Thursday, July 09, 2009
Burning water and water memory
This post is motivated by recent ideas about the dissociation of water by radio waves, presented on Matti Pitkanen's blog, where he explains it by his theory involving "dark phase conjugate photons" (?). In 2007 the radio engineer John Kanzius developed an apparatus for cancer treatment using polarized radio waves in the 13 MHz frequency range. During desalination tests of his device, with a tube filled with seawater (an ~3% solution of NaCl), he observed an evolution of hydrogen, which could be ignited with a lighter (video 1, 2). The experiments were confirmed and replicated (1, 2) by Rustum Roy, a materials scientist at Pennsylvania State University. In my opinion the origin of this phenomenon is more trivial, and it is closely related to the memory properties of water, which manifest themselves in a number of anomalies (water autothixotropy (1), the Mpemba effect, the homeopathic activity of various drugs and chemicals in minute concentrations, etc.) that are attributed to the oligomerisation of water into rigid water clusters of icosahedral symmetry. The formation of water clusters is based on the finding from X-ray spectroscopy that only two hydrogen bridges are available per molecule, so that the formation of chained flat structures similar to a sponge or foam is preferred. Some observations indicate that dissolved salt ions may increase the surface energy during water cluster and foam formation. Note the close connection of icosahedral water clusters to the sacred geometry of the five elements, where the icosahedron is assigned precisely to the water element; this could mean that the Vedic authors had general information about the five-fold structure of fluids, in particular the structure of glass and of water clusters. An interesting aspect of the quantum behavior of water clusters is their shape memory, which originates from the quantum mirage phenomenon. A conformational change of shape at a particular place on the cluster surface is followed by a redistribution of charge density, so that water molecules are attached to or removed from the opposite side of the cluster in such a way that the original cluster shape is retained, like a piece of plasticine, even though the cluster undergoes rapid Brownian motion as a whole. In AWT an analogous mechanism keeps the shape of particles during their travel through the vacuum foam. For the above reasons, each water cluster propagates through the density fluctuations of water like a solid body of much larger effective mass than a single water molecule, so it can absorb energy in the radio-frequency range (13 MHz, i.e. about 5×10⁻⁸ eV). During mutual collisions of such larger clusters, cavitation and the splitting of water molecules may occur as a result of anti-Stokes scattering and various resonance phenomena.
The famous Astroblaster toy can serve as a macroscopic demonstration of anti-Stokes scattering, as can jet formation during a water splash, the explosion of shaped-charge warheads (bazooka), or the collapse of black holes and supernovae (compare the recent simulations of the "super rebound" effect during cluster collisions). The main trick here is that the energy of cluster surface waves affects the total balance of translational energy during a collision. Such a resonance may have its analogy in the explanation of cold fusion for clusters of deuterons dissolved in a palladium lattice. The electrolysis of salt water by radio waves is interesting for practical purposes, because it leads to a thermodynamically metastable mixture of hydrogen peroxide and hydrogen gas, and it requires no cooling, diaphragms, or metal electrodes, which are affected by the corrosion and surface reactions that decrease the yield of classical electrolysis. In polar organic solvents we can expect an analogous reaction mechanism, interesting from a preparative perspective. Furthermore, we can expect a strong isotope effect here, which may become significant for heavy-water production. Electrodeless electrolysis by radio waves may therefore have a great future and is definitely worth further research, even though it does not make it possible to produce fuel from water more cheaply than classical thermodynamics allows.
Wednesday, July 08, 2009
Aether and graphene behavior
This week's string theory hype: or just more evidence for the Aether model? String theory (ST) is believed to provide a description of particles as one-dimensional stringy loops. While this model worked only for bosons inside atomic nuclei (for which it was originally proposed at the beginning of the 1970s), it was later extended to N-dimensional strings, so-called branes, and the number of string theories increased significantly, but without greater success as measured by the number of testable predictions. Recently Leiden University presented the article "Physical Reality of String Theory Demonstrated", in which scientists modeled some aspects of the phase transition in high-temperature superconductors using the concept of AdS/CFT duality, originally developed for ST. Such a result is no surprise for AWT, because AWT allows high-temperature superconductivity to be modeled as ballistic charge transfer through a field of electrons highly compressed by the presence of hole stripes. While individual concepts of string theory (branes, hidden dimensions, the AdS/CFT correspondence, or even the holographic principle) may become relevant for particle physics, as a whole ST remains an empty, fringe theory, because the concept of hidden dimensions violates the Lorentz symmetry on which the formal model of ST is based. Aether Wave Theory therefore explains strings as foamy density fluctuations of a hypothetical dense gas forming the vacuum. When electrons in superconductors are heavily compressed near holes by Coulomb forces, they behave similarly to particles at the event horizon of a black hole and they form "stringy" density fluctuations, so we can use some ST concepts to describe this system. If such a model is still relevant for string theorists, it would simply mean that particle strings are likewise formed by a highly compressed fermion field, which is essentially the AWT model. Such a result excludes the model of particles formed by isolated strings and branes, as presented in the naive drawings of Brian Greene's popular books and TV shows.
Instead, every particle is formed by a compact cluster of foamy density fluctuations, themselves formed by other particles. But string theory was not designed for such a purpose: it was supposed to describe the fermion itself, not compact systems of fermions. While ST apparently failed at this target for obvious reasons, the endeavor to model superconductivity by the AdS/CFT correspondence is just an attempt to make the best of a bad job. We should realize how frustrated string theorists are after forty years of ST development, still having no real physical system to describe. Now they are modeling a dense system of fermions instead of individual particles, and they are still happy... Even worse, it seems they did not even spot the difference! The truth is, high-temperature superconductivity is a conceptually quite simple phenomenon, and no working knowledge of string theory is required for its intuitive understanding at all. String theorists should not forget this when boldly claiming that they can provide the very first or only description of this phenomenon, let alone its explanation. From AWT it follows that every dense cloud of compressed electrons should exhibit superconductivity, and we can model it by computer simulations of a field of mutually repulsive particles, or by numerical solution of the Schrödinger equation for a field of charged particles, i.e. by the standard means of quantum mechanics, without introducing concepts borrowed from ad hoc theories.
Open main menu Wikibooks β Quantum Mechanics/Waves and Modes < Quantum Mechanics Many misconceptions about quantum mechanics may be avoided if some concepts of field theory and quantum field theory like "normal mode" and "occupation" are introduced right from the start. They are needed for understanding the deepest and most interesting ideas of quantum mechanics anyway. Questions about this approach are welcome on the talk page. Waves and modesEdit A wave is a propagating disturbance in a continuous medium or a physical field. By adding waves or multiplying their amplitudes by a scale factor, superpositions of waves are formed. Waves must satisfy the superposition principle which states that they can go through each other without disturbing each other. It looks like there were two superimposed realities each carrying only one wave and not knowing of each other (that's what is assumed if one uses the superposition principle mathematically in the wave equations). Examples are acoustic waves and electromagnetic waves (light), but also electronic orbitals, as explained below. A standing wave is considered a one-dimensional concept by many students, because of the examples (waves on a spring or on a string) usually provided. In reality, a standing wave is a synchronous oscillation of all parts of an extended object at a definite frequency, in which the oscillation profile (in particular the nodes and the points of maximal oscillation amplitude) doesn't change. This is also called a normal mode of oscillation. The profile can be made visible in Chladni's figures and in vibrational holography. In unconfined systems, i.e. systems without reflecting walls or attractive potentials, traveling waves may also be chosen as normal modes of oscillation (see boundary conditions). A phase shift of a normal mode of oscillation is a time shift scaled as an angle in terms of the oscillation period, e.g. phase shifts by 90° and 180° (or   and  ) are time shifts by the fourth and half of the oscillation period, respectively. This operation is introduced as another operation allowed in forming superpositions of waves (mathematically, it is covered by the phase factors of complex numbers scaling the waves). • Helmholtz ran an experiment which clearly showed the physical reality of resonances in a box. (He predicted and detected the eigenfrequencies.) • experiments with standing and propagating waves Electromagnetic and electronic modesEdit Max Planck, one of the fathers of quantum mechanics. Planck was the first to suggest that the electromagnetic modes are not excited continuously but discretely by energy quanta   proportional to the frequency. By this assumption, he could explain why the high-frequency modes remain unexcited in a thermal light source: The thermal exchange energy   is just too small to provide an energy quantum   if   is too large. Classical physics predicts that all modes of oscillation (2 degrees of freedom each) — regardless of their frequency — carry the average energy  , which amounts to an infinite total energy (called ultraviolet catastrophe). This idea of energy quanta was the historical basis for the concept of occupations of modes, designated as light quanta by Einstein, also denoted as photons since the introduction of this term in 1926 by Gilbert N. Lewis. 
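Planck's argument can be made concrete with a few numbers. The sketch below (the source temperature is an assumed, illustrative value) compares the energy quantum $hf$ with the thermal exchange energy $k_B T$ for three frequencies:

# Why high-frequency modes stay unexcited in a thermal light source:
# compare the energy quantum h*f with the thermal energy k_B*T.
h  = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K
T  = 3000.0           # assumed filament temperature of a thermal source, K

for f in (1e13, 1e14, 1e15):          # infrared -> near-visible -> ultraviolet
    ratio = h * f / (kB * T)
    print(f"f = {f:.0e} Hz: hf/(kB*T) = {ratio:.2f}")
# ~0.16 at 1e13 Hz (easily excited), but ~16 at 1e15 Hz: the thermal
# exchange energy cannot supply such a large quantum, so the mode stays dark.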
An electron beam (accelerated in a cathode ray tube similar to that of a TV set) is diffracted in a crystal, and diffraction patterns analogous to the diffraction of monochromatic light by a diffraction grating, or of X-rays on crystals, are observed on the screen. This observation proved de Broglie's idea that not only light, but also electrons, propagate and get diffracted like waves. In the attracting potential of the nucleus, this wave is confined like the acoustic wave in a guitar corpus. That's why in both cases a standing wave (= a normal mode of oscillation) forms.
An optical cavity.
An electronic orbital is a normal mode of oscillation of the electronic quantum field, very similar to a light mode in an optical cavity being a normal mode of oscillation of the electromagnetic field. The electron is said to be an occupation of an orbital. This is the main new idea in quantum mechanics, and it is forced upon us by observations of the states of electrons in multielectron atoms. Certain fields like the electronic quantum field are observed to allow their normal modes of oscillation to be excited only once at a given time; they are called fermionic. If you have more occupations to place in this quantum field, you must choose other modes (the spin degree of freedom is included in the modes), as is the case in a carbon atom, for example. Usually, the lower-energy (= lower-frequency) modes are favoured. If they are already occupied, higher-energy modes must be chosen. In the case of light, the idea that a photon is an occupation of an electromagnetic mode was found much earlier by Planck and Einstein, as described above.
Processes and particles
All processes in nature can be reduced to the isolated time evolution of modes and to (superpositions of) reshufflings of occupations, as described in the Feynman diagrams (since the isolated time evolution of decoupled modes is trivial, it is sometimes eliminated by a mathematical redefinition which in turn creates a time dependence in the reshuffling operations; this is called Dirac's interaction picture, in which all processes are reduced to (redefined) reshufflings of occupations). For example, in an emission of a photon by an electron changing its state, the occupation of one electronic mode is moved to another electronic mode of lower frequency, and an occupation of an electromagnetic mode (whose frequency is the difference between the frequencies of the mentioned electronic modes) is created. Electrons and photons become very similar in quantum theory, but one main difference remains: electronic modes cannot be excited/occupied more than once (= the Pauli exclusion principle), while photonic/electromagnetic modes can, and even prefer to do so (= stimulated emission). This property of electronic modes and photonic modes is called fermionic and bosonic, respectively. Two photons are indistinguishable and two electrons are also indistinguishable, because in both cases they are only occupations of modes: all that matters is which modes are occupied. The order of the occupations is irrelevant, except for the fact that in odd permutations of fermionic occupations a negative sign is introduced in the amplitude. Of course, there are other differences between electrons and photons:
• The electron carries an electric charge and a rest mass while the photon doesn't.
• In physical processes (see the Feynman diagrams), a single photon may be created, while an electron may not be created without at the same time removing some other fermionic particle or creating some fermionic antiparticle. This is due to the conservation of charge.
Mode numbers, observables and eigenmodes
The system of modes used to describe the waves can be chosen at will. Any arbitrary wave can be decomposed into contributions from each mode in the chosen system. For the mathematically inclined: the situation is analogous to a vector being decomposed into components in a chosen coordinate system. Decoupled modes or, as an approximation, weakly coupled modes are particularly convenient if you want to describe the evolution of the system in time, because each mode evolves independently of the others and you can just add up the time evolutions. In many situations, it is sufficient to consider less complicated weakly coupled modes and describe the weak coupling as a perturbation. In every system of modes, you must choose some (continuous or discrete) numbering (called "quantum numbers") for the modes in the system. In Chladni's figures, you can just count the number of nodal lines of the standing waves in the different space directions in order to get a numbering, as long as it is unique. For decoupled modes, the energy or, equivalently, the frequency might be a good idea, but usually you need further numbers to distinguish different modes having the same energy/frequency (this is the situation referred to as degenerate energy levels). Usually these additional numbers refer to the symmetry of the modes. Plane waves, for example (they are decoupled in spatially homogeneous situations), can be characterized by the fact that the only result of shifting (translating) them spatially is a phase shift in their oscillation. Obviously, the phase shifts corresponding to unit translations in the three space directions provide a good numbering for these modes. They are called the wavevector or, equivalently, the momentum of the mode. Spherical waves with an angular dependence according to the spherical harmonics functions (see the pictures), which are decoupled in spherically symmetric situations, are similarly characterized by the fact that the only result of rotating them around the z-axis is a phase shift in their oscillation. Obviously, the phase shift corresponding to a rotation by a unit angle is part of a good numbering for these modes; it is called the magnetic quantum number m (it must be an integer, because a rotation by 360° mustn't have any effect) or, equivalently, the z-component of the orbital angular momentum. If you consider sharp wavepackets as a system of modes, the position of the wavepacket is a good numbering for the system. In crystallography, the modes are usually numbered by their transformation behaviour (called group representation) in symmetry operations of the crystal; see also symmetry group, crystal system. The mode numbers thus often refer to physical quantities, called observables, characterizing the modes. For each mode number, you can introduce a mathematical operation, called an operator, that just multiplies a given mode by the mode number value of this mode. This is possible as long as you have chosen a mode system that actually uses and is characterized by the mode number of the operator. Such a system is called a system of eigenmodes, or eigenstates: sharp wavepackets are not eigenmodes of the momentum operator; they are eigenmodes of the position operator.
Spherical harmonics are eigenmodes of the magnetic quantum number, decoupled modes are eigenmodes of the energy operator, etc. If you have a superposition of several modes, you just operate the operator on each contribution and add up the results. If you chose a different mode system that doesn't use the mode number corresponding to the operator, you just decompose the given modes into eigenmodes and again add up the results of the operator operating on the contributions. So if you have a superposition of several eigenmodes, say, a superposition of modes with different frequencies, then you have contributions of different values of the observable, in this case the energy. The superposition is then said to have an indefinite value for the observable; for example, in the tone of a piano note, there is a superposition of the fundamental frequency and the higher harmonics, which are multiples of the fundamental frequency. The contributions in the superposition are usually not equally large, e.g. in the piano note the very high harmonics don't contribute much. Quantitatively, this is characterized by the amplitudes of the individual contributions. If there are only contributions of a single mode number value, the superposition is said to have a definite or sharp value.
• the basics of wave-particle duality
If you do a position measurement, the result is the occupation of a very sharp wavepacket, which is an eigenmode of the position operator. These sharp wavepackets look like pointlike objects; they are strongly coupled to each other, which means that they spread soon. In measurements of such a mode number in a given situation, the result is an eigenmode of the mode number, the eigenmode being chosen at random from the contributions in the given superposition. All the other contributions are supposedly eradicated in the measurement; this is called the wave function collapse, and some features of this process are questionable and disputed. The probability of a certain eigenmode being chosen is equal to the absolute square of its amplitude; this is called Born's probability law. This is the reason why the amplitudes of modes in a superposition are called "probability amplitudes" in quantum mechanics. The mode number value of the resulting eigenmode is the result of the measurement of the observable. Of course, if you have a sharp value for the observable before the measurement, nothing is changed by the measurement and the result is certain. This picture is called the Copenhagen interpretation. A different explanation of the measurement process is given by Everett's many-worlds theory; it doesn't involve any wave function collapse. Instead, a superposition of combinations of a mode of the measured system and a mode of the measuring apparatus (an entangled state) is formed, and the further time evolutions of these superposition components are independent of each other (this is called "many worlds").
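Born's probability law is easy to state in code. A minimal sketch (the two amplitudes are arbitrary):

import numpy as np

# Born's law: the probability of each measurement outcome is the absolute
# square of the (normalized) amplitude of the corresponding eigenmode.
amplitudes = np.array([0.6, 0.8j])          # superposition of two eigenmodes
amplitudes = amplitudes / np.linalg.norm(amplitudes)  # normalize (norm already 1 here)

probs = np.abs(amplitudes) ** 2
print(probs)          # [0.36 0.64] -- the chances of the two outcomes
print(probs.sum())    # 1.0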
As an example: a sharp wavepacket is an eigenmode of the position observable. Thus the result of measurements of the position of such a wavepacket is certain. On the other hand, if you decompose such a wavepacket into contributions of plane waves, i.e. eigenmodes of the wavevector or momentum observable, you get all kinds of contributions of modes with many different momenta, and the results of momentum measurements will vary accordingly. Intuitively, this can be understood by taking a closer look at a sharp or very narrow wavepacket: since there are only a few spatial oscillations in the wavepacket, only a very imprecise value for the wavevector can be read off (for the mathematically inclined reader: this is a common behaviour of Fourier transforms, the amplitudes of the superposition in the momentum mode system being the Fourier transform of the amplitudes of the superposition in the position mode system). So in such a state of definite position, the momentum is very indefinite. The same is true the other way round: the more definite the momentum is in your chosen superposition, the less sharp the position will be. This is Heisenberg's uncertainty relation. Two different mode numbers (and the corresponding operators and observables) that both occur as characteristic features in the same mode system, e.g. the number of nodal lines in one of Chladni's figures in the x direction and the number of nodal lines in the y direction, or the different position components in a position eigenmode system, are said to commute or be compatible with each other (mathematically, this means that the order of the product of the two corresponding operators doesn't matter; they may be commuted). The position and the momentum are non-commuting mode numbers, because you cannot attribute a definite momentum to a position eigenmode, as stated above. So there is no mode system where both the position and the momentum (referring to the same space direction) are used as mode numbers.
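The Fourier-transform behaviour just described can be demonstrated numerically: the narrower a wavepacket is in position, the broader its decomposition into plane waves. A sketch (all numbers arbitrary; sigma_x is the width parameter of the wavefunction itself, not of its absolute square):

import numpy as np

# Narrow in x means broad in k: decompose Gaussian wavepackets into plane waves.
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
k = np.fft.fftshift(np.fft.fftfreq(x.size, dx)) * 2 * np.pi

for sigma_x in (0.5, 2.0, 8.0):
    psi = np.exp(-x**2 / (2 * sigma_x**2))        # position-space wavepacket
    phi = np.fft.fftshift(np.fft.fft(psi))        # plane-wave (momentum) amplitudes
    p = np.abs(phi)**2 / np.sum(np.abs(phi)**2)   # normalized |amplitude|^2
    sigma_k = np.sqrt(np.sum(p * k**2))           # spread in k (mean is zero)
    print(f"sigma_x = {sigma_x:4.1f}  ->  sigma_k = {sigma_k:.3f}")
# The product sigma_x * sigma_k stays constant (~0.71 with this width
# convention): squeezing the packet in x broadens it in k, and vice versa.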
The Schrödinger equation, the Dirac equation, etc.
As in the case of acoustics, where the direction of vibration (called polarization), the speed of sound, and the wave impedance of the media in which the sound propagates are important for calculating the frequency and appearance of modes as seen in Chladni's figures, the same is true for electronic or photonic/electromagnetic modes: in order to calculate the modes (and their frequencies or time evolution) exposed to potentials that attract or repulse the waves or, equivalently, exposed to a change in refractive index and wave impedance, or exposed to magnetic fields, there are several equations depending on the polarization features of the modes:
• Electronic modes (their polarization features are described by Spin 1/2) are calculated by the Dirac equation, or, to a very good approximation in cases where the theory of relativity is irrelevant, by the Schrödinger equation and the Pauli equation.
• Photonic/electromagnetic modes (polarization: Spin 1) are calculated by Maxwell's equations. (You see, the 19th century already found the first quantum-mechanical equation! That's why it is so much easier to step from electromagnetic theory to quantum mechanics than from point mechanics.)
• Modes of Spin 0 would be calculated by the Klein-Gordon equation.
It is much easier and much more physical to imagine the electron in the atom to be not some tiny point jumping from place to place or orbiting around (there are no orbits, there are orbitals), but to imagine the electron being an occupation of an extended orbital, and an orbital being a vibrating wave confined to the neighbourhood of the nucleus by its attracting force. That's why Chladni's figures of acoustics and the normal modes of electromagnetic waves in a resonator are such a good analogy for the orbital pictures in quantum physics. Quantum mechanics is a lot less weird if you see this analogy.
The step from electromagnetic theory (or acoustics) to quantum theory is much easier than the step from point mechanics to quantum theory, because in electromagnetics you already deal with waves and modes of oscillation and solve eigenvalue equations in order to find the modes. You just have to treat a single electron like a wave, in the same way as light is treated in classical electromagnetics. In this picture, the only difference between classical physics and quantum physics is that in classical physics you can excite the modes of oscillation to a continuous degree, called the classical amplitude, while in quantum physics the modes are "occupied" discretely: fermionic modes can be occupied only once at a given time, while bosonic modes can be occupied several times at once. Particles are just occupations of modes, no more, no less. As there are superpositions of modes in classical physics, you get in quantum mechanics quantum superpositions of occupations of modes, and the scaling and phase-shifting factors are called (quantum) amplitudes. In a carbon atom, for example, you have a combination of occupations of 6 electronic modes of low energy (i.e. frequency). Entangled states are just superpositions of combinations of occupations of modes. Even the states of quantum fields can be completely described in this way (except for hypothetical topological defects). As you can choose different kinds of modes in acoustics and electromagnetics, for example plane waves, spherical harmonics or small wave packets, you can do so in quantum mechanics. The modes chosen will not always be decoupled; for example, if you choose plane waves as the system of acoustic modes in the resonance corpus of a guitar, reflections on the walls will couple the modes to different modes, i.e. you have coupled oscillators, and you have to solve a coupled system of linear equations in order to describe the system. The same is done in quantum mechanics: different systems of eigenfunctions are just a new name for the same concept. Energy eigenfunctions are decoupled modes, while eigenfunctions of the position operator (delta-like wavepackets) or eigenfunctions of the angular momentum operator in a non-spherically symmetric system are usually strongly coupled. What happens in a measurement depends on the interpretation: in the Copenhagen interpretation you need to postulate a collapse of the wavefunction to some eigenmode of the measurement operator, while in Everett's many-worlds theory an entangled state, i.e. a superposition of occupations of modes of the observed system and the observing measurement apparatus, is formed.
The formalism of quantum mechanics and quantum field theory
In Dirac's formalism, superpositions of occupations of modes are designated as state vectors or states, written as $|\psi\rangle$ ($\psi$ being the name of the superposition); the single occupation of the mode $k$ is written as $|1_k\rangle$ or just $|k\rangle$. The vacuum state, i.e. the situation devoid of any occupations of modes, is written as $|0\rangle$. Since the superposition is a linear operation, i.e. it only involves multiplication by complex numbers and addition, as in $\alpha|k_1\rangle + \beta|k_2\rangle$ (a superposition of the single occupations of mode $k_1$ and mode $k_2$ with the amplitudes $\alpha$ and $\beta$, respectively), the states form a vector space (i.e. they are analogous to vectors in Cartesian coordinate systems). The operation of creating an occupation of a mode $k$ is written as a generator $a_k^\dagger$ (for photons) or $c_k^\dagger$ (for electrons), and the destruction of the same occupation as a destructor $a_k$ or $c_k$, respectively.
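For a single fermionic mode, the generator and destructor can be represented by 2×2 matrices acting on the occupation basis {|0⟩, |1⟩}. A minimal sketch (this matrix representation is a standard choice, not something taken from the text), showing that a fermionic mode cannot be occupied twice:

import numpy as np

# Single fermionic mode in the occupation basis {|0>, |1>}.
c     = np.array([[0, 1], [0, 0]])   # destructor: c|1> = |0>
c_dag = c.T                          # generator:  c_dag|0> = |1>

vac = np.array([1, 0])               # the vacuum state |0>
one = c_dag @ vac                    # single occupation |1>
print(one)                           # [0 1]

print(c_dag @ c_dag @ vac)           # [0 0]: creating twice gives zero --
                                     # the mode cannot be occupied twice (Pauli)
print(c_dag @ c + c @ c_dag)         # identity matrix: {c, c_dag} = 1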
A sequence of such operations is written from right to left (the order matters): in $a_q^\dagger c_{k_2}^\dagger c_{k_1}$ an occupation of the electronic mode $k_1$ is moved to the electronic mode $k_2$ and a new occupation of the electromagnetic mode $q$ is created; obviously, this reshuffling formula represents the emission of a photon by an electron changing its state. $\alpha\, a_{q_1}^\dagger c_{k_2}^\dagger c_{k_1} + \beta\, a_{q_2}^\dagger c_{k_2}^\dagger c_{k_1}$ is the superposition of two such processes differing in the final mode of the photon ($q_1$ versus $q_2$), with the amplitudes $\alpha$ and $\beta$, respectively. If the mode numbers are more complex (e.g. in order to describe an electronic mode of a hydrogen atom, i.e. an orbital, you need the 4 mode numbers n, l, m, s), the occupation of such a mode is written as $c_{n,l,m,s}^\dagger|0\rangle$ or $|n,l,m,s\rangle$ (in words: the situation after creating an occupation of mode (n, l, m, s) in the vacuum). If you have two occupations of different orbitals, you might write $c_{n_2,l_2,m_2,s_2}^\dagger c_{n_1,l_1,m_1,s_1}^\dagger|0\rangle$ or $|(n_1,l_1,m_1,s_1),(n_2,l_2,m_2,s_2)\rangle$. It is important to distinguish such a double occupation of two modes from a superposition of single occupations of the same two modes, which is written as $\alpha\,|n_1,l_1,m_1,s_1\rangle + \beta\,|n_2,l_2,m_2,s_2\rangle$ or $\alpha\, c_{n_1,l_1,m_1,s_1}^\dagger|0\rangle + \beta\, c_{n_2,l_2,m_2,s_2}^\dagger|0\rangle$. But superpositions of multiple occupations are also possible, even superpositions of situations with different numbers or different kinds of particles:
Monday, June 15, 2015
A brief introduction to basis sets
In order to compute the energy we need to define mathematical functions for the orbitals. In the case of atoms we can simply use the solutions to the Schrödinger equation for the $\ce{H}$ atom as a starting point and find the best exponent for each function using the variational principle. But what functions should we use for molecular orbitals (MOs)? The wave function of the $\ce{H2+}$ molecule provides a clue (Figure 1).
Figure 1. Schematic representation of the wave function of the $\ce{H2+}$ molecule.
It looks a bit like the sum of two 1$s$ functions centered at each nucleus (A and B), $${\Psi ^{{\text{H}}_{\text{2}}^ + }} \approx \tfrac{1}{{\sqrt 2 }}\left( {\Psi _{1s}^{{{\text{H}}_{\text{A}}}} + \Psi _{1s}^{{{\text{H}}_{\text{B}}}}} \right)$$ Thus, one way of constructing MOs is as a linear combination of atomic orbitals (the LCAO approximation), $${\phi _i}(1) = \sum\limits_{\mu  = 1}^K {{C_{\mu i}}{\chi _\mu }(1)} $$ an approximation that becomes better and better as $K$ increases. Here $\chi _\mu$ is a mathematical function that looks like an AO, and is called a basis function (a collection of basis functions for various atoms is called a basis set), and $C_{\mu i}$ is a number (sometimes called an MO coefficient) that indicates how much basis function $\chi _\mu$ contributes to MO $i$, and is determined for each system via the variational principle. Note that every MO is expressed in terms of all basis functions, and therefore extends over the entire molecule.
If we want to calculate the RHF energy of water, the basis set for the two $\ce{H}$ atoms would simply be the lowest energy solution to the Schrödinger equation for the $\ce{H}$ atom $${\chi _{{{\text{H}}_{\text{A}}}}}(1) = \Psi _{1s}^{\text{H}}({r_{1A}}) = \frac{1}{{\sqrt \pi  }}{e^{ - \left| {{{\bf{r}}_1} - {{\bf{R}}_A}} \right|}}$$ For the O atom, the basis set is the AOs obtained from, say, an ROHF calculation on $\ce{O}$, i.e. $1s$, $2s$, $2p_x$, $2p_y$, and $2p_z$ functions from the solutions to the Schrödinger equation for the $\ce{H}$ atom, where the exponents ($\alpha_i$'s) have been variationally optimized for the $\ce{O}$ atom, $$\Psi _{1s}^{\text{H}},\;\Psi _{2s}^{\text{H}},\;\Psi _{2p}^{\text{H}} \xrightarrow{\frac{\partial E}{\partial \alpha_i}=0} \phi _{1s}^{\text{O}},\;\phi _{2s}^{\text{O}},\;\phi _{2p}^{\text{O}} \equiv \;\left\{ {{\chi _O}} \right\}$$ Notice that this only has to be done once, i.e. we will use this oxygen basis set for all oxygen-containing molecules. We then provide a guess at the water structure and the basis functions are placed at the coordinates of the respective atoms. Then we find the best MO coefficients by variational minimization, $$\frac{{\partial E}}{{\partial {C_{\mu i}}}} = 0 $$ for all $i$ and $\mu$. Thus, for water we need a total of seven basis functions to describe the five doubly occupied water MOs ($K = 7$ and $i = 1$–$5$ in the LCAO expansion). This is an example of a minimal basis set, since it is the minimum number of basis functions per atom that makes chemical sense.
One problem with the LCAO approximation is the number of 2-electron integrals it leads to, and the associated computational cost. Let's look at the part of the energy that comes from the Coulomb integrals
$$\begin{split} \sum\limits_{i = 1}^{N/2} \sum\limits_{j = 1}^{N/2} 2J_{ij} &= \sum\limits_{i = 1}^{N/2} \sum\limits_{j = 1}^{N/2} 2\left\langle \phi_i(1)\phi_j(2) \left| \frac{1}{r_{12}} \right| \phi_i(1)\phi_j(2) \right\rangle \\ &= \sum\limits_i^{N/2} \sum\limits_j^{N/2} 2\left\langle \phi_i(1)\phi_i(1) \left| \frac{1}{r_{12}} \right| \phi_j(2)\phi_j(2) \right\rangle \\ &= \sum\limits_i^{N/2} \sum\limits_j^{N/2} 2\left\langle \phi_i\phi_i \mid \phi_j\phi_j \right\rangle \\ &= \sum\limits_\mu^K \sum\limits_\nu^K \sum\limits_\lambda^K \sum\limits_\sigma^K \sum\limits_i^{N/2} \sum\limits_j^{N/2} 2\,C_{\mu i}C_{\nu i}C_{\lambda j}C_{\sigma j} \left\langle \chi_\mu\chi_\nu \mid \chi_\lambda\chi_\sigma \right\rangle \\ &= \sum\limits_\mu^K \sum\limits_\nu^K \sum\limits_\lambda^K \sum\limits_\sigma^K \tfrac{1}{2} P_{\mu\nu}P_{\lambda\sigma} \left\langle \chi_\mu\chi_\nu \mid \chi_\lambda\chi_\sigma \right\rangle \end{split}$$
We have roughly $(N/2)^2$ Coulomb integrals involving molecular orbitals, but roughly $\tfrac{1}{8}K^4$ Coulomb integrals involving basis functions (the factor of 1/8 comes from the fact that some of the integrals are identical and need only be computed once). Using a minimal basis set, $K = 80$ for a small organic molecule like caffeine $\ce{(C8H10N4O2)}$, which results in ca 5,000,000 2-electron integrals involving basis functions! (That being said, you can perform an RHF energy calculation with 80 basis functions on your desktop computer in a few minutes. The problem with $K^4$ scaling is that a corresponding calculation with 800 basis functions would take a few days on the same machine. So you can forget about optimizing the structure on that machine.) This is one of the reasons why modern computational quantum chemistry requires massive computers. This is also the reason why the basis set size is a key consideration in a quantum chemistry project.
The 2-electron integrals also pose another problem: the basis functions defined so far are exponential functions (also known as Slater type orbitals or STOs). 2-electron integrals involving STOs placed on four different atoms do not have analytic solutions. As a result, most quantum chemistry programs use Gaussian type orbitals (or simply Gaussians) instead of STOs, because the 2-electron integrals involving Gaussians have analytic solutions. Obviously, $${e^{ - \alpha {r_{1A}}}} \approx {e^{ - \beta r_{1A}^2}}$$ is a poor approximation, so a linear combination of Gaussians is used to model each STO basis function (Figure 2) $${e^{ - \alpha {r_{1A}}}} \approx \sum\limits_i^X {{a_{i\mu }}{e^{ - {\beta _i}r_{1A}^2}}}  = {\chi _\mu }$$
Figure 2. (a) An exponential function is not well represented by one Gaussian, but (b) can be well represented by a linear combination of three Gaussians.
Here the $a_{i\mu}$ parameters (or contraction coefficients), as well as the Gaussian exponents, are determined just once for a given STO basis function. $\chi_\mu$ is a contracted basis function, and the $X$ individual Gaussian functions are called primitives. Generally, three primitives are sufficient to represent an STO, and this basis set is known as the STO-3G basis set. $p$- and $d$-type STOs are expanded in terms of $p$- and $d$-type primitive Gaussians [e.g. $({x_1} - {x_A}){e^{ - \beta r_{1A}^2}}$ and $({x_1} - {x_A})({y_1} - {y_A}){e^{ - \beta r_{1A}^2}}$].
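The idea behind STO-3G can be reproduced in a few lines: fit a sum of three Gaussians to a Slater function. A rough sketch using scipy (the exponents and coefficients are fitted here from an arbitrary starting guess; they are not the published STO-3G values):

import numpy as np
from scipy.optimize import curve_fit

# Approximate a Slater function exp(-r) by a contraction of three Gaussians.
r = np.linspace(0.0, 6.0, 600)
sto = np.exp(-r)

def sto_3g(r, a1, a2, a3, b1, b2, b3):
    return (a1 * np.exp(-b1 * r**2) +
            a2 * np.exp(-b2 * r**2) +
            a3 * np.exp(-b3 * r**2))

p0 = [0.4, 0.4, 0.2, 0.1, 0.5, 2.0]                  # rough starting guess
params, _ = curve_fit(sto_3g, r, sto, p0=p0, maxfev=10_000)
max_err = np.max(np.abs(sto_3g(r, *params) - sto))
print(params, max_err)   # three Gaussians track the exponential closely,
                         # with the largest error near the cusp at r = 0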
An RHF calculation using the STO-3G basis set is denoted RHF/STO-3G. Unless otherwise noted, this usually also implies that the geometry is computed (i.e. the minimum energy structure is found) at this level of theory.
Minimal basis sets are usually not sufficiently accurate to model reaction energies. This is due to the fact that the atomic basis functions cannot change size to adjust to their bonding environment. However, this can be made possible by using some of the contraction coefficients as variational parameters. This will increase the basis set size (and hence the computational cost), so it must be done judiciously. For example, we'll get the most improvement by worrying about the basis functions that describe the valence electrons that participate most in bonding. Thus, for the $\ce{O}$ atom we leave the 1$s$ core basis function alone, but "split" the valence 2$s$ basis function into linear combinations of two and one Gaussians, respectively, $$\begin{split} {\chi _{1s}} &= \sum\limits_i^3 {{a_{i1s}}{e^{ - {\beta _i}r_{1A}^2}}} \\ {\chi _{2{s_a}}} &= \sum\limits_i^2 {{a_{i2s}}{e^{ - {\beta _i}r_{1A}^2}}} \\ {\chi _{2{s_b}}} &= {e^{ - {\beta _{2{s_b}}}r_{1A}^2}} \end{split}$$ and similarly for the 2$p$ basis functions. This is known as the 3-21G basis set (pronounced "three-two-one g", not "three-twenty-one g"), which denotes that core basis functions are described by 3 contracted Gaussians, while the valence basis functions are split into two basis functions, described by 2 and 1 Gaussians each. Thus, using the 3-21G basis set to describe water requires 13 basis functions: two basis functions on each $\ce{H}$ atom (1$s$ is the valence basis function of the H atom) and 9 basis functions on the $\ce{O}$ atom (one 1$s$ function and two each of 2$s$, 2$p_x$, 2$p_y$, and 2$p_z$). The $\chi _{2{s_a}}$ basis function is smaller (i.e., the Gaussians have a larger exponent) than the $\chi _{2{s_b}}$ basis function. Thus, one can make a function of any intermediate size by (variationally) mixing these two functions (Figure 3). The 3-21G is an example of a split valence or double zeta basis set (zeta, ζ, is often used as the symbol for the exponent, but I find it hard to write and don't use it in my lectures). Similarly, one can make other double zeta basis sets such as 6-31G, or triple zeta basis sets such as 6-311G.
Figure 3. Sketch of two different sized $s$-type basis functions that can be used to make a basis function of intermediate size.
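The basis-function counts quoted above (7 for the minimal basis, 13 for 3-21G, both for water) can be reproduced with a little bookkeeping; a minimal sketch:

# Count basis functions for water following the recipes in the text.
def count_water_basis(basis: str) -> int:
    if basis == "STO-3G":   # minimal: each H -> 1s; O -> 1s, 2s, 2px, 2py, 2pz
        return 2 * 1 + 5
    if basis == "3-21G":    # split valence: each H -> two 1s-type functions;
                            # O -> one 1s core + two each of 2s, 2px, 2py, 2pz
        return 2 * 2 + (1 + 2 * 4)
    raise ValueError(basis)

print(count_water_basis("STO-3G"))  # 7, the minimal-basis count from the text
print(count_water_basis("3-21G"))   # 13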
Again we get a clue by comparing the exact wave function to the LCAO wave function (Figure 6).

Figure 6. Comparison of the exact wave function and one computed using the 6-311G basis set.

We find that, compared to the exact result, there is not "enough wave function" between the nuclei and too much at either end. As we increase the basis set, we only add $s$-type basis functions (of varying size). Since they are spherical, they cannot be used to shift electron density from one side of the $\ce{H}$ atom to the other. However, $p$-functions are perfect for this (Figure 7).

Figure 7. Sketch of the polarization of an $s$ basis function by a $p$ basis function.

So basis set convergence is not a matter of simply increasing the number of basis functions; it is also important to have the right mix of basis function types. Similarly, $d$-functions can be used to "bend" $p$-functions (Figure 8).

Figure 8. Sketch of the polarization of a $p$ basis function by a $d$ basis function.

Such functions are known as polarization functions and are denoted with the following notation. For example, 6-31G(d) denotes $d$ polarization functions on all non-$\ce{H}$ atoms and can also be written as 6-31G*. 6-31G(d,p) is a 6-31G(d) basis set where $p$-functions have been added on all $\ce{H}$ atoms, and can also be written 6-31G**. An RHF/6-31G(d,p) calculation on water involves 24 basis functions: 13 basis functions for the 6-31G part (just like for 3-21G) plus 3 $p$-type polarization functions on each H atom and 5 $d$-type polarization functions on O (some programs use 6 Cartesian $d$-functions instead of the usual 5).

Anions tend to have very diffuse electron distributions, and very large basis functions (with very small exponents) are often needed for accurate results. These diffuse functions are denoted with "+" signs: e.g. 6-31+G denotes one $s$-type and three $p$-type diffuse Gaussians on each non-$\ce{H}$ atom, and 6-31++G denotes the addition of a single diffuse $s$-type Gaussian on each $\ce{H}$ atom. Diffuse functions also tend to improve the accuracy of calculations on van der Waals complexes and other structures where the accurate representation of the outer part of the electron distribution is important.

Of course there are many other basis sets available, but in general they have the same kinds of attributes as described already. For example, aug-cc-pVTZ is a more modern basis set: "aug" stands for "augmented," meaning "augmented with diffuse functions"; "pVTZ" means "polarized valence triple zeta," i.e. it is of roughly the same quality as 6-311++G(d,p); and "cc" stands for "correlation consistent," meaning the parameters were optimized for correlated wave functions (like MP2, see below) rather than HF wave functions, like the Pople basis sets [such as 6-31G(d)] described thus far.
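To tie the bookkeeping in this post together, here is a minimal sketch that reproduces the basis-function counts for water quoted above and the rough $K^4/8$ growth of the two-electron integral list. The per-atom counts are hard-coded for H and O following the rules described above; a real program reads them from a basis set library.

```python
# Per-atom counts of contracted basis functions, following the rules above.
PER_ATOM = {
    "STO-3G":     {"H": 1, "O": 5},   # H: 1s; O: 1s, 2s, 2p (x3)
    "3-21G":      {"H": 2, "O": 9},   # split valence
    "6-31G(d,p)": {"H": 5, "O": 14},  # split valence + 3 p on H, 5 d on O
}

def count_basis_functions(atoms, basis):
    return sum(PER_ATOM[basis][atom] for atom in atoms)

def unique_eris(K):
    """Symmetry-unique 2-electron integrals: M(M+1)/2 with M = K(K+1)/2.
    For large K this approaches K**4 / 8 (8-fold permutational symmetry)."""
    M = K * (K + 1) // 2
    return M * (M + 1) // 2

water = ["O", "H", "H"]
for basis in PER_ATOM:
    K = count_basis_functions(water, basis)
    print(f"{basis:11s} K = {K:2d}  unique 2-electron integrals = {unique_eris(K):,}")

print(f"K = 80 (caffeine in STO-3G): {unique_eris(80):,}")  # ca 5 million, as above
```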
Schrödinger equation vs. Bohr model: the Schrödinger equation belongs to the Bohr-Sommerfeld model.

Schrödinger equation = Bohr-Sommerfeld model. [An integer number of de Broglie wavelengths is indispensable.]

(Fig.1) n × de Broglie wavelength.

It is known that the Bohr model agrees with the experimental results and with the Schrödinger equation for the hydrogen atom, as shown on this site. In the Bohr model, the circumference of the orbit is exactly equal to an integer number of de Broglie wavelengths. Surprisingly, in the Schrödinger equation as well, both the radial and angular wavefunctions satisfy an integer number of de Broglie wavelengths, as shown by their nodes (see this site, p.3).

From "circular" Bohr to "elliptical" Sommerfeld model.

(Fig.2) From the "circular" Bohr model to the "elliptical" Sommerfeld model.

In 1916, Arnold Sommerfeld extended Bohr's circular orbit to an "elliptical" one. The Sommerfeld elliptical orbit also satisfies an integer number of de Broglie wavelengths. This condition is expressed as the Sommerfeld quantization rule in both the radial and angular directions, as shown on this site (p.119) and this site (p.12).

Elliptical orbit contains "radial" and "angular" directions.

(Fig.3) n × de Broglie wavelength, where "r" is the radius and φ is the azimuthal angle.

Different from the simple circular Bohr orbit, an elliptical orbit contains movement in the "radial" direction. We can separate the momentum at each point into angular (= tangential) and radial components, which are perpendicular to each other. "Perpendicular" means these two components are independent of each other, so we need to count the de Broglie waves in these two directions independently. The number (= n) of de Broglie wavelengths in each of the tangential and radial directions is given by dividing the orbital length in that direction by the corresponding de Broglie wavelength (= λ).

n = the sum of radial and angular de Broglie wavelengths.

(Fig.4) n × de Broglie wavelength is separated into angular and radial parts.

When the circumference of the elliptical orbit is n × the de Broglie wavelength, this integer "n" can be expressed as the sum of the angular and radial wave numbers. See also this section. In Fig.4, n is equal to "6", which is also equal to the principal quantum number. The whole path in the tangential (= angular) direction is nφ (= 4) × λφ (= tangential de Broglie wavelength), and the radial path is equal to nr (= 2) × λr (= radial de Broglie wavelength). This is the Sommerfeld quantization rule in the radial and angular directions.

Circular orbit does NOT contain "radial" direction.

(Fig.5) Simple circular orbit = only tangential (= angular) movement.

As shown in Fig.5, when an electron is moving in a circular orbit, it moves only in the tangential (= angular) direction. So in a circular orbit, the concepts of radial momentum and radial de Broglie wavelength do not apply, and the orbital length becomes n (= nφ) × the tangential de Broglie wavelength. We do not need to think about radial wave quantization, different from the elliptical orbit.

Circular orbit = n × "angular" de Broglie wavelength.

(Fig.6) n (= nφ) × angular de Broglie wavelength.

In Fig.6, the circumference of this circular orbit is 5 × the de Broglie wavelength. (So the principal quantum number in the energy is "5", too. About the calculation, see this section.) This is a circular orbit, so all these "5" de Broglie waves are in the tangential (= angular) direction. "Radial" momentum and waves are not included in a circular orbit at all.
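As a quick numerical check of the circular case, here is a minimal sketch in atomic units (ħ = mₑ = e = 1, so h = 2π; for hydrogen the Bohr orbits then have radius r = n² and speed v = 1/n, the standard textbook values): the circumference divided by the de Broglie wavelength comes out exactly n.

```python
import numpy as np

# Atomic units: hbar = m_e = e = 1, so h = 2*pi.
# Hydrogen (Z = 1) Bohr orbits: radius r = n**2, orbital speed v = 1/n.
for n in range(1, 6):
    r = n**2
    v = 1.0 / n
    wavelength = 2 * np.pi / v            # de Broglie: lambda = h / (m*v)
    print(n, 2 * np.pi * r / wavelength)  # circumference / wavelength == n
```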
Total energy is the same in Bohr-Sommerfeld and Schrödinger hydrogens.

(Fig.7) Schrödinger's hydrogen also consists of radial and angular parts.

The Schrödinger equation also depends on an integer number of de Broglie wavelengths in the radial and angular directions. See this site and this site. ("Nodes" are related to the radial waves.) Comparing the energies of the Bohr model and the Schrödinger equation, they are exactly consistent with each other. And the principal quantum number "n" is the sum of the radial and angular (wave) numbers in both models.

Radial wavefunctions satisfy an integer × de Broglie wavelength.

(Fig.8) 1s wavefunction = 1 × wavelength. 2s = 2 × wavelength.

As shown on this site, the Schrödinger wavefunctions of the 1s and 2s orbitals become like Fig.8. The radial wavefunction χ = rR obeys the de Broglie relation, as shown in Eq.1 and this section. One orbit means the circuit from a point back to the same point. In Fig.8, from r = ∞ (= start) to r = ∞ (= end), the 1s wavefunction contains 1 × the de Broglie wavelength, and the 2s wavefunction contains 2 × the de Broglie wavelength. Neither 1s nor 2s has an angular part, so the angular quantum number l is zero.

(Eq.1) Schrödinger (radial) equation for the hydrogen atom, with χ = rR:

$$-\frac{\hbar^2}{2m}\frac{d^2\chi}{dr^2} + \left( -\frac{Ze^2}{4\pi\varepsilon_0 r} + \frac{\hbar^2 l(l+1)}{2mr^2} \right)\chi = E\chi$$

In the latter part of this site, there are some examples of this radial amplitude χ = rR.

2p wavefunction consists of angular (= 1) and radial (= 1) waves.

The radial wavefunction of the 2p orbital is 1 × the de Broglie wavelength, like the 1s orbital. But 2p also has an angular wavefunction, with quantum number l = 1. The principal quantum number n is the sum of the radial and angular wave numbers. So 2p has the n = 1 (angular) + 1 (radial) = 2 energy level, like the 2s orbital.

(Fig.9) 2p wavefunction: n = 1 (angular) + 1 (radial) = 2.

If you check the hydrogen wavefunctions based on the Schrödinger equation in websites or textbooks, you will find the 1s radial wavefunction like Fig.10. The plotted rR curve shows 1/2 de Broglie wavelength (= Fig.10 right), so one orbit (∞ → 0 → ∞) includes 1 × the de Broglie wavelength in the radial direction, as shown in the upper part of Fig.8.

(Fig.10) 1s hydrogen wavefunction = 1 × de Broglie wavelength.

Sommerfeld quantization shows an integer × de Broglie wavelength in each direction.

As shown in this site and this site, the Sommerfeld quantization conditions are

(Eq.2) Sommerfeld quantization:

$$\oint L\, d\varphi = n_\varphi h, \qquad \oint p_r\, dr = n_r h$$

where L is the angular momentum, which is constant; φ is the azimuthal angle in the plane of the orbit; pr is the radial momentum of the electron; and h, nφ, nr are the Planck constant, the angular, and the radial quantum numbers, respectively.

Quantization of angular de Broglie wavelength (= λφ).

(Eq.3) Sommerfeld quantization = an integer × de Broglie wavelength.

Using Eq.2 and the de Broglie relation (λ = h/mv), you find that the condition of Eq.2 means an integer number of de Broglie wavelengths (= nφ × λφ) is contained in the orbit in the tangential direction.

(Fig.11) Number of de Broglie wavelengths in the tangential direction.

For example, in Fig.11, one orbit in the tangential direction contains 5 × the de Broglie wavelength. Of course, this wavelength (= λφ) is based only on the tangential momentum (λφ = h/pφ).

Quantization of radial de Broglie wavelength (= λr).

(Fig.12) Sommerfeld radial quantization → nr × de Broglie wavelength!

The radial quantization condition of Eq.2 also means that an integer number of de Broglie wavelengths is contained in one orbit in the radial direction. So the Sommerfeld quantization conditions of Eq.2 require quantization in both the radial and angular directions.

Separating momentum (= wavelength) into radial and angular components.
(Fig.13) Electron's movement in "radial" and "angular" directions.

In an elliptical orbit, the electron is moving both in the angular and radial directions, as shown in Fig.13. Using the Pythagorean theorem, we can divide the total momentum "p" into radial (= pr) and tangential (= pφ) components at each point. These two directions are perpendicular to each other at each point, so the radial (= λr) and angular (= λφ) de Broglie waves are independent of each other.

n = the sum of angular (= nφ) and radial (= nr) waves.

As shown in this section, the total number (= n) of de Broglie waves can be expressed as the sum of the angular (= nφ) and radial (= nr) de Broglie waves. In the Fig.14 orbit, the principal quantum number n = 6, which also expresses the total number of de Broglie wavelengths. The angular and radial quantum numbers (= wave numbers in each direction) are "4" and "2", respectively, so the relation n = 6 = 4 + 2 holds.

Schrödinger radial functions also satisfy nr × de Broglie wavelength.

(Fig.15) Schrödinger's radial wavefunction is an integer × de Broglie wavelength = Sommerfeld.

Fig.15 shows examples of Schrödinger wavefunctions including one de Broglie wavelength in the radial direction of one orbit. (One orbit means from one point back to the same point, ∞ → 0 → ∞.) For example, the R32 wavefunction has principal number "3" (n = 3) and angular (= tangential) momentum "2" (l = 2), so the radial wave number is 3 − 2 = 1. These wave numbers have the same meaning as in the Bohr-Sommerfeld model, which also has an integer number of de Broglie waves.

(Fig.16) Schrödinger's radial wavefunction, two or three waves.

In the upper line of Fig.16, the radial one-round orbits are just two de Broglie wavelengths. For example, in R31, the principal quantum number is "3" and the angular momentum (= tangential) is "1". As a result, the radial wave number becomes 3 − 1 = 2. And in the lower wavefunction, n = 3 and l = 0, so the radial wave number is 3 − 0 = 3.

Schrödinger tangential wavefunctions also satisfy the Sommerfeld rule.

(Fig.17) Schrödinger's tangential wavefunction (= angular momentum) = Sommerfeld model.

In Fig.17, the angular momentum of the spherical harmonics in Schrödinger's hydrogen shows the tangential de Broglie wave number in one orbit. For example, in $e^{2i\varphi} = \cos 2\varphi + i\sin 2\varphi$, one round is two de Broglie wavelengths (see this section), because a rotation by π returns its phase to the original one. As a result, the total number of radial and tangential de Broglie waves means the principal quantum number (n = energy level) in both the Bohr-Sommerfeld and Schrödinger models.

Reason why the Schrödinger equation is wrong.

(Fig.18) Negative kinetic energy is unreal.

Unrealistically, the radial region of all orbitals in Schrödinger's hydrogen must be from 0 to infinity. Fig.18 shows the 2p orbital of Schrödinger's hydrogen. In 2p orbitals, the radial kinetic energy becomes negative in two regions (r < a1 and a2 < r). Of course, in this real world, kinetic energy (= $\tfrac{1}{2}mv^2$) cannot be negative, which means Schrödinger's hydrogen is unreal. In the region r < a1, the potential energy V is lower than the total energy E, so the tunnel effect has nothing to do with this 2p orbital. Furthermore, it is impossible for the bound-state electron to go away to infinity (and then return!) in any stable atom or molecule.

Classical orbit is realistic.

(Fig.19) "Elliptical" orbit (angular momentum = 1).

Here we think about the classical elliptical orbit with angular momentum = 1, like the 2p orbital. The distance between the perihelion (aphelion) and the nucleus is a1 (a2).
In the classical orbit, when the electron has angular momentum, it cannot come closer to the nucleus than the perihelion (= a1). We suppose the angular momentum (= L) of this orbit is ħ, which is constant. Using Eq.4 (L = mvφr = constant), the tangential kinetic energy (= Tφ) becomes

(Fig.20) Tangential kinetic energy becomes divergent as r → 0:

$$T_\varphi = \frac{L^2}{2mr^2}$$

As shown in Fig.20, this tangential kinetic energy diverges to infinity as r approaches zero. This Tφ goes as the inverse of r², which cannot be cancelled by the Coulomb energy (= −1/r). So, to cancel this divergence of the tangential kinetic energy, the radial kinetic energy must be negative! Unfortunately, our Nature does not depend on this artificial rule, so the Schrödinger equation is unreasonable.

The radial region must be from 0 to ∞ to satisfy the Schrödinger equation.

(Fig.21) Picking up only the realistic region from a1 to a2 → "discontinuous" radial wavefunction.

Some people may think we can pick out only the realistic region (a1 < r < a2) with positive kinetic energy from the radial wavefunction. But this is impossible, because the wavefunction becomes discontinuous at a1 and a2 in this case. One orbit means the electron must U-turn at a1 and a2 and return to the original point.

(Fig.22) Discontinuous → the Schrödinger equation does NOT hold.

As a result, the discontinuous wavefunction of Fig.21 cannot hold in the Schrödinger equation for hydrogen, because "discontinuous" means the gradient (= derivative) of this wavefunction becomes divergent to infinity. At the points a1 and a2, the potential energy V and the total energy E must be finite, so a sudden U-turn at these points makes this radial wavefunction unrelated to the original equation.

A continuous wavefunction contains unrealistic regions.

(Fig.23) Schrödinger's 2p radial wavefunction: unreal "negative" kinetic energy.

As shown in Fig.23, the original 2p radial wavefunction is continuous at both ends, r = 0 and ∞, because the gradient and amplitude of the wavefunction become zero at these points. Instead, this wavefunction must contain unrealistic regions with negative kinetic energy. As I said in Fig.18, these regions have nothing to do with the tunnel effect.

The Schrödinger equation also uses the de Broglie relation.

(Eq.5) Schrödinger's radial equation for hydrogen. As shown on this site and this site, they replace the radial part by χ = rR to satisfy the de Broglie relation:

$$-\frac{\hbar^2}{2m}\frac{d^2\chi}{dr^2} = \left( E - V(r) - \frac{\hbar^2 l(l+1)}{2mr^2} \right)\chi$$

As shown in Eq.5, this wavefunction locally becomes cosine (or sine) functions with some wavelength λr.

(Eq.6) The de Broglie relation holds in the radial direction. As a result, the Schrödinger equation gives the de Broglie relation for the radial momentum:

$$p_r = \frac{h}{\lambda_r}$$

where λr is the radial de Broglie wavelength.

(Eq.7) Sommerfeld + de Broglie relation = Schrödinger equation. Using Eq.6, the Sommerfeld quantization condition of Eq.2 becomes

$$\oint \frac{dr}{\lambda_r} = n_r$$

Eq.7 shows that the number of (radial) de Broglie wavelengths in the whole radial path becomes "nr". This means Schrödinger's hydrogen also satisfies an integer number of de Broglie wavelengths, like the Bohr model!

Dividing the wavefunction into infinitesimal segments.

(Fig.24) The 2p radial wavefunction consists of different wavelengths.

As in the elliptical orbit, the radial momentum (= wavelength) changes from point to point. So the 2p radial wavefunction consists of infinitely many kinds of de Broglie wavelengths, as shown in Fig.24. Within each small segment (= dr), we can consider the de Broglie wavelength to be a constant value.

(Fig.25) Quantization of the radial wavelength.

As a result, using the different wavelengths (= λr) at different points, the Sommerfeld quantization condition in one orbit can be expressed like Fig.25.
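The claim that the Sommerfeld radial condition reproduces the Schrödinger levels can be checked numerically. Below is a minimal sketch in atomic units (ħ = m = e = 1, so h = 2π), assuming the hydrogen energy E = −1/(2n²) and angular momentum L = nφ·ħ: the radial action ∮ pr dr, expressed in units of h, comes out equal to nr = n − nφ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def radial_waves(n, n_phi):
    """Radial action of a hydrogen Kepler orbit, in units of h (atomic units)."""
    E = -1.0 / (2.0 * n**2)                          # energy of level n
    L = float(n_phi)                                 # angular momentum in hbar units
    pr2 = lambda r: 2.0 * E + 2.0 / r - (L / r)**2   # squared radial momentum
    r1 = brentq(pr2, 1e-9, n**2)                     # perihelion (inner turning point)
    r2 = brentq(pr2, n**2, 10.0 * n**2)              # aphelion (outer turning point)
    integral, _ = quad(lambda r: np.sqrt(max(pr2(r), 0.0)), r1, r2, limit=200)
    return 2.0 * integral / (2.0 * np.pi)            # out-and-back orbit, divided by h

for n, n_phi in [(2, 1), (3, 1), (3, 2), (6, 4)]:
    print(f"n = {n}, n_phi = {n_phi}:  n_r = {radial_waves(n, n_phi):.3f}")
# Each line prints n_r = n - n_phi (e.g. n = 6, n_phi = 4 gives 2.000),
# the same splitting shown in Fig.4.
```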
Azimuthal wavefunction = nφ × de Broglie wavelength.

(Eq.8) Azimuthal Schrödinger equation:

$$\frac{d^2\Phi}{d\varphi^2} = -n_\varphi^2 \Phi, \qquad \Phi(\varphi) = e^{i n_\varphi \varphi}$$

Eq.8 is the azimuthal Schrödinger equation and wavefunction (= Φ). In this section, we show that this quantized azimuthal wavefunction means an integer number of (tangential) de Broglie wavelengths.

(Eq.9) Schrödinger's azimuthal wavefunction = nφ × de Broglie wavelength. Applying the usual momentum operator (in the tangential direction) to this azimuthal wavefunction (= Φ), we obtain the tangential momentum (= pφ), as follows:

$$p_\varphi \Phi = -\frac{i\hbar}{r}\frac{\partial \Phi}{\partial\varphi} = \frac{n_\varphi \hbar}{r}\Phi$$

(Eq.10, Eq.11) Using the de Broglie relation, the tangential de Broglie wavelength (= λφ) becomes

$$\lambda_\varphi = \frac{h}{p_\varphi} = \frac{2\pi r}{n_\varphi}$$

Here ħ = h/2π (h is the Planck constant) is used.

(Eq.12) Using Eq.11, the number of de Broglie wavelengths in one (tangential) orbit becomes

$$\frac{2\pi r}{\lambda_\varphi} = n_\varphi$$

This equation holds true. As a result, the azimuthal wavefunction of Eq.8 also means an integer number of de Broglie wavelengths, like the Sommerfeld model.

Calculation of the elliptical Bohr-Sommerfeld model.

(Fig.25) Bohr-Sommerfeld model.

Here we suppose one electron is orbiting (= rotating or oscillating) around a +Ze nucleus. (Of course, also in the Schrödinger equation of hydrogen, one electron is moving around a +Ze nucleus under the Coulomb force.) This section is based on Sommerfeld's original paper of 1916 (Annalen der Physik [4] 51, 1-167).

Change the rectangular coordinates into polar coordinates as follows (Eq.13):

$$x = r\cos\varphi, \qquad y = r\sin\varphi$$

When the nucleus is at the origin and interacts with the electron through the Coulomb force, the equation of the electron's motion becomes Eq.14. Here we define

(Eq.15) p = "constant" angular momentum:

$$p = mr^2\frac{d\varphi}{dt}$$

Only in this section do we use "p" for the angular momentum instead of L. If one electron is moving around one central positive charge, this angular momentum (= p) is constant (= the law of constant areal velocity). The coordinate r is a function of φ, so using Eq.15, we can express differentiation with respect to t (= time) as follows (Eq.16):

$$\frac{d}{dt} = \frac{p}{mr^2}\frac{d}{d\varphi}$$

Here we define (Eq.17):

$$\sigma = \frac{1}{r}$$

Using Eq.13 and Eq.16, each momentum can be expressed as in Eq.18. Using Eq.14, Eq.16, and Eq.18, the equation of motion becomes Eq.19, from which we obtain (Eq.20):

$$\frac{d^2\sigma}{d\varphi^2} + \sigma = \frac{Ze^2 m}{4\pi\varepsilon_0 p^2}$$

The solution for σ in Eq.20 becomes (Eq.21)

$$\sigma = \frac{Ze^2 m}{4\pi\varepsilon_0 p^2}\left(1 + \varepsilon\cos\varphi\right) + B\sin\varphi$$

where we suppose that the electron is at the perihelion (= closest point) when φ is zero, so that B = 0 (Eq.22).

(Fig.26) "Elliptical" orbit of a hydrogen-like atom, where the nucleus is at the focus (F1) and ε is the eccentricity (Eq.23).

Here we prove that the equation of Eq.21 (B = 0) means an ellipse with the nucleus at its focus. Using the theorem of cosines in Fig.26 and the definition of the ellipse (Eq.24), we obtain Eq.25. From Eq.21, Eq.22, and Eq.25, we have Eq.26, and as a result σ becomes Eq.27. Using Eq.16 and Eq.27 we get Eq.28, so the kinetic energy (T) becomes Eq.29. From Eq.27, the potential energy is (Eq.30)

$$V = -\frac{Ze^2}{4\pi\varepsilon_0}\,\sigma$$

so the total energy (W) is Eq.31.

In the Bohr-Sommerfeld quantization condition, the following relations are used (Eq.32):

$$\oint p\, d\varphi = n_\varphi h, \qquad \oint p_r\, dr = n_r h$$

where the angular momentum p is constant, so p becomes an integer times ħ: p = nφħ. By the way, what does Eq.32 mean? If we use the de Broglie relation in each direction (tangential and radial), Eq.32 means Eq.33 and Eq.34, so they express the quantization of the de Broglie wavelength in each direction. (In the Schrödinger equation, zero angular momentum is possible.)

The radial quantization condition can be rewritten as Eq.35. Using Eq.27 we get Eq.36, and from Eq.28 we get Eq.37. From Eq.35-37 we have Eq.38, and doing the integration by parts in Eq.38 gives Eq.39. Here we use the following known formula, Eq.40 (= a complex integral; see this site, p.5).

[ Proof of Eq.40.
] According to Euler's formula, the cosine can be expressed using the complex number z (Eq.41, Eq.42):

$$z = e^{i\varphi}, \qquad \cos\varphi = \frac{1}{2}\left(z + \frac{1}{z}\right)$$

Using Eq.41 and Eq.42, the left side of Eq.40 becomes Eq.43. Of the two roots in Eq.44, only the latter is used as the "pole" in Cauchy's residue theorem. In the residue theorem, only the coefficient of 1/(z − a) is left (Eq.45, Eq.46). From Eq.43 to Eq.46 the result is Eq.47, and we can prove Eq.40. ]

Using the formula of Eq.40, Eq.38 (Eq.39) becomes Eq.48, where the quantization of Eq.32 is used. From Eq.48, we have Eq.48'. Substituting Eq.48' into Eq.31, and using Eq.32, the total energy W becomes (Eq.49):

$$W = -\frac{Z^2 e^4 m}{8\varepsilon_0^2 h^2 (n_r + n_\varphi)^2}$$

This result is completely equal to Schrödinger's hydrogen. Here we confirm that the Bohr-Sommerfeld solution of Eq.49 is valid also in Schrödinger's hydrogen. As shown on this page, the radial quantum number (= nr) means the number of de Broglie waves included in the radial orbit, and nφ means the quantized angular momentum (= the tangential de Broglie wave number). Some orbitals of Schrödinger's wavefunction are shown in Fig.27.

(Fig.27) Schrödinger's hydrogen = Bohr-Sommerfeld model.

Fig.27 shows that the energy levels of Schrödinger's wavefunctions just obey the Bohr-Sommerfeld quantization rules. The most important difference is that Schrödinger's solutions always extend from zero to infinity, which is unreal.

Schrödinger's hydrogen must contain unrealistic "s" orbitals.

(Fig.28) Why must Schrödinger's hydrogen NOT have circular orbits?

Schrödinger's hydrogen eigenfunctions consist of "radial" and angular momentum (= "tangential") parts. As you know, Schrödinger's radial parts always include "radial" momentum, which means there are no circular orbitals in Schrödinger's hydrogen. If circular orbitals were included in Schrödinger's hydrogen, the radial eigenfunction would need to be constant (= C), as shown in Fig.28. But if this constant C is not zero, its normalization integral diverges to infinity. (Schrödinger's radial region is from 0 to infinity.) This is the main reason why Schrödinger's hydrogen includes many unrealistic "s" states, which have no angular momentum.

Angular momentum L = 0 → de Broglie waves vanish!?

(Fig.29) Angular momentum L = 0 in the s wavefunction → destructive interference of the de Broglie wave!

As shown on this page, Schrödinger's wavefunctions also satisfy an integer number of de Broglie wavelengths. So in an s orbital without angular momentum (L = 0), opposite wave phases in one orbit cancel each other due to destructive interference in the Schrödinger wavefunction. As a result, a stable wavefunction is impossible in "s" orbitals.

[ Angular momentum is zero (L = 0) → electrons always "crash" into the nucleus? ]

And does an electron in an s orbital always crash into (or penetrate) the nucleus? Also, in multi-electron atoms such as sodium and potassium, do the 3s or 4s electrons always penetrate the inner electrons and the nucleus? As you feel, this unrealistic s state is impossible in this real world. On this page, we have proved rigorously that Schrödinger's energies are equal to those of the Bohr-Sommerfeld model.

Calculation of the simple Bohr model.

(Ap.1) Bohr model hydrogen — circular orbit.

In this section, we review the ordinary Bohr model (see also this site, p.9). First, as shown on this page, Bohr's single electron does NOT fall into the nucleus just by acceleration, because the vacuum electromagnetic energy around a single electron is NOT energy.

(Ap.2) Bohr model conditions:

$$\frac{mv^2}{r} = \frac{Ze^2}{4\pi\varepsilon_0 r^2} \quad (1), \qquad E = \frac{1}{2}mv^2 - \frac{Ze^2}{4\pi\varepsilon_0 r} \quad (2), \qquad 2\pi r = n\,\frac{h}{mv} \quad (3)$$

In Ap.2, the first equation means that the centrifugal force is equal to the Coulomb force. The second equation is the sum of the Coulomb potential energy and the kinetic energy.
The third equation means that the circumference of one circular orbit is an integer (= n) number of de Broglie wavelengths. Condition "1" is included in condition "2". (So the two conditions "2" and "3" are needed for Bohr's orbit.)

Substituting equation "1" into equation "2" gives (Ap.3):

$$E = -\frac{Ze^2}{8\pi\varepsilon_0 r}$$

Inserting v of equation "3" into equation "1", we have (Ap.4):

$$r = \frac{4\pi\varepsilon_0 \hbar^2}{me^2}\,\frac{n^2}{Z} = r_0\,\frac{n^2}{Z}$$

Here, "r0" (n = Z = 1) means the "Bohr radius". The "Bohr radius" and "Bohr magneton" are indispensable for quantum mechanics, which is the reason why the Bohr model is taught in school even now.

Substituting Ap.4 into Ap.3, the total energy E of the Bohr model is (Ap.5):

$$E = -\frac{Z^2 me^4}{8\varepsilon_0^2 h^2 n^2} \qquad \text{(Bohr model = Schrödinger equation)}$$

This energy E is the same as Schrödinger's hydrogen. From Ap.4, the ratio of the particle's velocity v (n = Z = 1) to the light speed c is (Ap.6):

$$\frac{v}{c} = \frac{e^2}{4\pi\varepsilon_0 \hbar c} = \alpha \approx \frac{1}{137}$$

This is the famous fine structure constant α. So there is a strong tie between the fine structure constant and Bohr's orbit.

Separating de Broglie waves into radial and angular parts.

(Ap.7) Radial (= pr), tangential (= pφ).

In an elliptical orbit, we can divide the momentum (= p) into radial (= pr) and angular (= pφ) components. According to the Pythagorean theorem, we have (Ap.8):

$$p^2 = p_r^2 + p_\varphi^2$$

Dividing the momentum (= p) by the mass (= m), we get the velocity "v" (Ap.9). We suppose the electron travels the distance "dq" during the infinitesimal time interval "dt". Using the Pythagorean theorem, the moving distance "dq" can be separated into the radial and angular (= tangential) directions (Ap.10):

$$(dq)^2 = (dr)^2 + (r\,d\varphi)^2$$

In Ap.10, "r" is the radius and φ is the angle. Like Ap.9, we have Ap.11. Using the de Broglie wavelength (= λ = h/p), the number of de Broglie waves contained in the small region "dq" becomes (using Ap.9) (Ap.12):

$$\frac{dq}{\lambda} = \frac{p\,dq}{h}$$

In the same way, using the de Broglie wavelengths in the radial (= λr) and tangential (= λφ) directions, the number of waves contained in each segment becomes (using Ap.11) Ap.13. From Ap.12 and Ap.13, we get the relation (Ap.14):

$$\frac{dq}{\lambda} = \frac{dr}{\lambda_r} + \frac{r\,d\varphi}{\lambda_\varphi}$$

As a result, we have

(Ap.15) Total number of de Broglie waves: n = nr + nφ.

Doing the line integral along the orbit on both sides of Ap.14, we get the result of Ap.15. "n" means the total number of de Broglie wavelengths and is equal to the principal quantum number; "nr" denotes the number of radial de Broglie wavelengths along the radial path; "nφ" denotes the number of tangential de Broglie wavelengths along the tangential direction. The relation n = nr + nφ holds.

Actual de Broglie wavelength and each component.

(Ap.16) Actual relation between the angular and radial wavelengths.

We can explain the reason why de Broglie waves can be separated into the angular and radial directions. The matter wave of Ap.16 has de Broglie wavelength λ, so the total momentum is p = h/λ. As you see in Ap.16, the angular and radial de Broglie wavelengths are longer than the total wavelength (λφ, λr > λ).

(Ap.17) The angular (= pφ) and radial (= pr) components of the whole momentum "p" must satisfy the Pythagorean theorem, as shown in Ap.17:

$$p^2 = p_r^2 + p_\varphi^2$$

From Ap.16, you find λ can be expressed as (Ap.18):

$$\frac{1}{\lambda^2} = \frac{1}{\lambda_r^2} + \frac{1}{\lambda_\varphi^2}$$

Substituting Ap.18 into the total momentum p of Ap.17, we find the de Broglie relation satisfies the Pythagorean theorem (Ap.19). As a result, the de Broglie relation p = h/λ agrees with classical mechanics, too.

Each direction satisfies an integer number of de Broglie wavelengths.

(Ap.20) Total, angular, and radial directions = an integer × de Broglie wavelength.

In a hydrogen-like atom, the orbit becomes elliptical or circular. So, at two points (aphelion and perihelion), the electron moves purely in the angular direction.
Of course, at the same point (φ = 0 and 2π), the wave phase must agree with itself. As shown in Ap.20, when the circumference of the whole elliptical orbit is equal to an integer number of de Broglie wavelengths, the path in the angular direction also satisfies an integer × (tangential) de Broglie wavelength. As I said in Ap.15, the number of "radial" waves is given by nr = n − nφ. So in this case, the "radial" direction also satisfies an integer (= nr) × de Broglie wavelength.

Solution of Schrödinger's hydrogen.

(Ap.21) Schrödinger equation of hydrogen:

$$-\frac{\hbar^2}{2m}\nabla^2\psi - \frac{Ze^2}{4\pi\varepsilon_0 r}\psi = E\psi$$

In this section, we solve the Schrödinger equation of the hydrogen atom, as on this site and this site. ψ is the total wavefunction of hydrogen. The total energy E is the sum of the kinetic energy and the Coulomb potential energy (= V), both in Schrödinger's and Bohr's hydrogens. The Schrödinger equation also uses the de Broglie relation (λ = h/p), so the momentum p is replaced by an operator, as shown in Ap.22:

$$\hat{p} = -i\hbar\nabla$$

Using spherical coordinates, Ap.22 becomes Ap.23. They use a form of the wavefunction that assumes separation into radial (= R) and angular (= Y) portions, as follows (Ap.24):

$$\psi(r,\theta,\varphi) = R(r)\,Y(\theta,\varphi)$$

Inserting Ap.24 and dividing both sides by RY, we have Ap.27. The left side of Ap.27 depends only on the radial variable "r", and the right side is the angular (= θ and φ) equation. So we can consider both sides to be some constant (= −l(l+1)ħ²). As a result, the left side of Ap.27 becomes Ap.28, and the right side of Ap.27 becomes Ap.29. We replace Y by a product of single-variable functions (Y → θ and φ) (Ap.30):

$$Y(\theta,\varphi) = \Theta(\theta)\,\Phi(\varphi)$$

Inserting Ap.30 into Ap.29 and dividing by Φ gives Ap.31. As a result, the azimuthal wavefunction Φ can be expressed as (Ap.32):

$$\Phi(\varphi) = e^{i m_l \varphi}$$

ml must be an integer; otherwise the value of the azimuthal wavefunction would be different at φ = 0 and φ = 2π, as shown in Ap.33, and in that case the equation of Ap.24 is broken. As shown in this section, Ap.32 means Schrödinger's hydrogen also obeys an integer number of de Broglie wavelengths in the angular direction. ← Sommerfeld quantization.

For the angular equation of Ap.31 to have a finite solution, the angular constant l must satisfy Ap.34 (l = |ml|, |ml| + 1, |ml| + 2, …).

[ Radial equation. ]

(Ap.28) Radial equation of hydrogen.

To make the calculation easier, they substitute χ for rR in the radial equation, as shown in Ap.35 (χ = rR). See also this site and this site. Using this χ in Ap.28, we have Ap.36 and Ap.37. As a result, the radial part of Ap.28 becomes Ap.38. Ap.38 uses the momentum operator based on the de Broglie relation in the radial direction, so this wavefunction χ shows de Broglie waves in the radial direction. Substituting ρ for βr, they express χ as in Ap.39 and Ap.40. As you see from the form of χ in Ap.40, this radial wavefunction always becomes zero at both ends (χ = 0 at r = 0 and ∞). So, between these points, this χ becomes an integer number of de Broglie wavelengths.

L(ρ) is a polynomial in ρ (Ap.41, Ap.42). For this polynomial to have finite terms, the energy E must be quantized as follows (Ap.43):

$$E = -\frac{Z^2 me^4}{8\varepsilon_0^2 h^2 n^2}$$

The principal quantum number n is the sum of the angular and radial quantum numbers.

n = radial + angular quantum numbers in any of these models.

This energy solution just agrees with that of the Bohr-Sommerfeld model. In both Schrödinger's and Bohr's hydrogens, the sum of the radial and angular (wave) numbers is what matters for the energy levels (= n).

(Ap.44) The same mechanism is used in the Schrödinger and Bohr-Sommerfeld models.
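As a closing numerical note on the equality of the two models' energies, here is a minimal sketch (atomic units; the hartree-to-eV factor is the standard conversion value): the energy depends only on the sum n = nr + nφ, so different (nr, nφ) splittings of the same n are degenerate, which is the shared mechanism described above.

```python
# Hydrogen-like energy from the radial and angular wave numbers (atomic units):
# E = -Z**2 / (2 n**2) with n = n_r + n_phi, the shared result of both models.
HARTREE_IN_EV = 27.211386  # standard conversion factor

def energy_ev(n_r, n_phi, Z=1):
    n = n_r + n_phi
    return -Z**2 / (2.0 * n**2) * HARTREE_IN_EV

for n_r, n_phi in [(0, 1), (1, 1), (0, 2), (2, 1), (1, 2), (0, 3)]:
    print(f"n_r = {n_r}, n_phi = {n_phi}:  E = {energy_ev(n_r, n_phi):8.3f} eV")
# (1,1) and (0,2) both give -3.401 eV; (2,1), (1,2), (0,3) all give -1.512 eV.
```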
Reformation in the Church of Science (The New Atlantis)

How the truth monopoly was broken up

Andrea Saltelli and Daniel Sarewitz

Spring 2022

We are suffering through a pandemic of lies — or so we hear from leading voices in media, politics, and academia. Our culture is infected by a disease that has many names: fake news, post-truth, misinformation, disinformation, mal-information, anti-science. The affliction, we are told, is a perversion of the proper role of knowledge in a healthy information society. What is to be done? To restore truth, we need strategies to "get the facts straight." For example, we need better "science communication," "independent fact-checking," and a relentless commitment to exposing and countering falsehoods. This is why the Washington Post fastidiously counted 30,573 "false or misleading claims" by President Trump during his four years in office. Facebook, meanwhile, partners with eighty organizations worldwide to help it flag falsehoods and inform users of the facts. And some disinformation experts recently suggested in the New York Times that the Biden administration should appoint a "reality czar," a central authority tasked with countering conspiracy theories about Covid and election fraud, who "could become the tip of the spear for the federal government's response to the reality crisis."

Such efforts reflect the view that untruth is a plague on our information society, one that can and must be cured. If we pay enough responsible, objective attention to distinguishing what is true from what is not, and thus excise misinformation from the body politic, people can be kept safe from falsehood. Put another way, it is an implicitly Edenic belief in the original purity of the information society, a state we have lapsed from but can yet return to, by the grace of fact-checkers.

We beg to differ. Fake news is not a perversion of the information society but a logical outgrowth of it, a symptom of the decades-long devolution of the traditional authority for governing knowledge and communicating information. That authority has long been held by a small number of institutions. When that kind of monopoly is no longer possible, truth itself must become contested.

This is treacherous terrain. The urge to insist on the integrity of the old order is widespread: Truth is truth, lies are lies, and established authorities must see to it that nobody blurs the two. But we also know from history that what seemed to be stable regimes of truth may collapse, and be replaced. If that is what is happening now, then the challenge is to manage the transition, not to cling to the old order as it dissolves around us.

Truth, New and Improved

The emergence of widespread challenges to the control of information by mainstream social institutions developed in three phases. First, new technologies of mass communication in the twentieth century — radio, television, and significant improvements in printing, further empowered by new social science methods — enabled the rise of mass-market advertising, which quickly became an essential tool for success in the marketplace. Philosophers like Max Horkheimer and Theodor Adorno were bewildered by a world where, thanks to these new forms of communication, unabashed lies in the interest of selling products could become not just an art but an industry. The rise of mass marketing created the cultural substrate for the so-called post-truth world we live in now.
It normalized the application of hyperbole, superlatives, and untestable claims of superiority to the rhetoric of everyday commerce. What started out as merely a way to sell new and improved soap powder and automobiles amounts today to a rhetorical infrastructure of hype that infects every corner of culture: the way people promote their careers, universities their reputations, governments their programs, and scientists the importance of their latest findings. Whether we’re listening to a food corporation claim that its oatmeal will keep your heart healthy or a university press office herald a new study that will upend everything we know, radical skepticism would seem to be the rational stance for information consumers. Politics, Scientized In a second, partly overlapping phase in the twentieth century, science underwent a massive expansion of its role into the domain of public affairs, and thus into highly contestable subject matters. Spurred by a wealth of new instruments for measuring the world and techniques for analyzing the resulting data, policies on agriculture, health, education, poverty, national security, the environment and much more became subject to new types of scientific investigation. As never before, science became part of the language of policymaking, and scientists became advocates for particular policies. The dissolving boundary between science and politics was on full display by 1958, when the chemist Linus Pauling and physicist Edward Teller debated the risks of nuclear weapons testing on a U.S. television broadcast, a spectacle that mixed scientific claims about fallout risks with theories of international affairs and assertions of personal moral conviction. The debate presaged a radical transformation of science and its social role. Where science was once a rarefied, elite practice largely isolated from society, scientific experts were now mobilized in increasing numbers to form and inform politics and policymaking. Of course, society had long been shaped, sometimes profoundly, by scientific advances. But in the second half of the twentieth century, science programs started to take on a rapidly expanding portfolio of politically divisive issues: determining the cancer-causing potential of food additives, pesticides, and tobacco; devising strategies for the U.S. government in its nuclear arms race against the Soviet Union; informing guidelines for diet, nutrition, and education; predicting future energy supplies, food supplies, and population growth; designing urban renewal programs; choosing nuclear waste disposal sites; and on and on. Philosopher-mathematicians Silvio Funtowicz and Jerome Ravetz recognized in 1993 that a new kind of science was emerging, which they termed “post-normal science.” This kind of science was inherently contestable, both because it dealt with the irreducible uncertainties of complex and messy problems at the intersection of nature and society, and because it was being used for making decisions that were themselves value-laden and contested. Questions that may sound straightforward, such as “Should women in their forties get regular mammograms?” or “Will genetically modified crops and livestock make food more affordable?” or “Do the benefits of decarbonizing our energy production outweigh the costs?” became the focus of intractable and never-ending scientific and political disputes. 
This situation remained reasonably manageable through the 1990s, because science communication was still largely controlled by powerful institutions: governments, corporations, and universities. Even if these institutions were sometimes fiercely at odds, all had a shared interest in maintaining the idea of a unitary science that provided universal truths upon which rational action should be based. Debates between experts may have raged — often without end — but one could still defend the claim that the search for truth was a coherent activity carried out by special experts working in pertinent social institutions, and that the truths emerging from their work would be recognizable and agreed-upon when finally they were determined. Few questioned the fundamental notion that science was necessary and authoritative for determining good policy choices across a wide array of social concerns. The imperative remained to find facts that could inform action — a basic tenet of Enlightenment rationality. Science, Democratized The rise of the Internet and social media marks the third phase of the story, and it has now rendered thoroughly implausible any institutional monopoly on factual claims. As we are continuing to see with Covid, the public has instantly available to it a nearly inexhaustible supply of competing and contradictory claims, made by credentialed experts associated with august institutions, about everything from mask efficacy to appropriate social distancing and school closure policies. And many of the targeted consumers of these claims are already conditioned to be highly skeptical of the information they receive from mainstream media. Today’s information environment certainly invites mischievous seeding of known lies into public discourse. But bad actors are not the most important part of the story. Institutions can no longer maintain their old stance of authoritative certainty about information — the stance they need to justify their actions, or to establish a convincing dividing line between true news and fake news. Claims of disinterest by experts acting on behalf of these institutions are no longer plausible. People are free to decide what information, and in which experts, they want to believe. The Covid lab-leak hypothesis was fake news until that news itself became fake. Fact-checking organizations are themselves now subject to accusations of bias: Recently, Facebook flagged as “false” a story in the esteemed British Medical Journal about a shoddy Covid vaccine trial, and the editors of the journal in turn called Facebook’s fact-checking “inaccurate, incompetent and irresponsible.” No political system exists without its share of lies, obfuscation, and fake news, as Plato and Machiavelli taught. Yet even those thinkers would be puzzled by the immense power of modern technologies to generate stories. Ideas have become a battlefield, and we are all getting lost in the fog of the truth wars. When everything seems like it can be plausible to someone, the term “fake news” loses its meaning. The celebrated expedient that an aristocracy has the right and the mission to offer “noble lies” to the citizens for their own good thus looks increasingly impotent. In October 2020, U.S. National Institutes of Health director Francis Collins, a veritable aristocrat of the scientific establishment, sought to delegitimize the recently released Great Barrington Declaration. 
Crafted by a group he referred to as “fringe epidemiologists” (they were from Harvard, Stanford, and Oxford), the declaration questioned the mainstream lockdown approach to the pandemic, including school and business closures. “There needs to be a quick and devastating published take down,” Collins wrote in an email to fellow aristocrat Anthony Fauci. But we now live in a moment where suppressing that kind of dissent has become impossible. By May 2021, that “fringe” became part of a new think tank, the Brownstone Institute, founded in reaction to what they describe as “the global crisis created by policy responses to the Covid-19 pandemic.” From this perspective, policies advanced by Collins and Fauci amounted to “a failed experiment in full social and economic control” reflecting “a willingness on the part of the public and officials to relinquish freedom and fundamental human rights in the name of managing a public health crisis.” The Brownstone Institute’s website is a veritable one-stop Internet shopping haven for anyone looking for well-credentialed expert opinions that counter more mainstream expert opinions on Covid. Similarly, claims that the science around climate change is “settled,” and that therefore the world must collectively work to decarbonize the global energy system by 2050, have engendered a counter-industry of dissenting experts, organizations, and websites. At this point, one might be forgiven for speculating that the public is being fed such a heavy diet of Covid and climate change precisely because these are problems that have been framed politically as amenable to a scientific treatment. But it seems that the more the authorities insist on the factiness of facts, the more suspect these become to larger and larger portions of the populace. A Scientific Reformation The introduction of the printing press in the mid-fifteenth century triggered a revolution in which the Church lost its monopoly on truth. Millions of books were printed in just a few decades after Gutenberg’s innovation. Some people held the printing press responsible for stoking collective economic manias and speculative bubbles. It allowed the widespread distribution of astrological almanacs in Europe, which fed popular hysteria around prophesies of impending doom. And it allowed dissemination of the Malleus Maleficarum, an influential treatise on demonology that contributed to rising persecution of witches. Though the printing press allowed sanctioned ideas to spread like never before, it also allowed the spread of serious but hitherto suppressed ideas that threatened the legitimacy of the Church. A range of alternative philosophical, moral, and ideological perspectives on Christianity became newly accessible to ever-growing audiences. So did exposés of institutional corruption, such as the practice of indulgences — a market for buying one’s way out of purgatory that earned the Church vast amounts of money. Martin Luther, in particular, understood and exploited the power of the printing press in pursuing his attacks on the Church — one recent historical account, Andrew Pettegree’s book Brand Luther, portrays him as the first mass-market communicator. “Beginning of the Reformation”: Martin Luther directs the posting of his Ninety-five Theses, protesting the practice of the sale of indulgences, to the door of the castle church in Wittenberg on October 31, 1517. W. 
Baron von Löwenstern, 1830 / Library of Congress To a religious observer living through the beginning of the Reformation, the proliferation of printed material must have appeared unsettling and dangerous: the end of an era, and the beginning of a threatening period of heterodoxy, heresies, and confusion. A person exposed to the rapid, unchecked dispersion of printed matter in the fifteenth century might have called many such publications fake news. Today many would say that it was the Reformation itself that did away with fake news, with the false orthodoxies of a corrupted Church, opening up a competition over ideas that became the foundation of the modern world. Whatever the case, this new world was neither neat nor peaceful, with the religious wars resulting from the Church’s loss of authority over truth continuing until the mid-seventeenth century. Like the printing press in the fifteenth century, the Internet in the twenty-first has radically transformed and disrupted conventional modes of communication, destroyed the existing structure of authority over truth claims, and opened the door to a period of intense and tumultuous change. Those who lament the death of truth should instead acknowledge the end of a monopoly system. Science was the pillar of modernity, the new privileged lens to interpret the real world and show a pathway to collective good. Science was not just an ideal but the basis for a regime, a monopoly system. Within this regime, truth was legitimized in particular private and public institutions, especially government agencies, universities, and corporations; it was interpreted and communicated by particular leaders of the scientific community, such as government science advisors, Nobel Prize winners, and the heads of learned societies; it was translated for and delivered to the laity in a wide variety of public and political contexts; it was presumed to point directly toward right action; and it was fetishized by a culture that saw it as single and unitary, something that was delivered by science and could be divorced from the contexts in which it emerged. Such unitary truths included above all the insistence that the advance of science and technology would guarantee progress and prosperity for everyone — not unlike how the Church’s salvific authority could guarantee a negotiated process for reducing one’s punishment for sins. To achieve this modern paradise, certain subsidiary truths lent support. One, for example, held that economic rationality would illuminate the path to universal betterment, driven by the principle of comparative advantage and the harmony of globalized free markets. Another subsidiary truth expressed the social cost of carbon emissions with absolute precision to the dollar per ton, with the accompanying requirement that humans must control the global climate to the tenth of a degree Celsius. These ideas are self-evidently political, requiring monopolistic control of truth to implement their imputed agendas. An easy prophesy here is that wars over scientific truth will intensify, as did wars over religious truth after the printing press. Those wars ended with the Peace of Westphalia in 1648, followed, eventually, by the creation of a radically new system of governance, the nation-state, and the collapse of the central authority of the Catholic Church. Will the loss of science’s monopoly over truth lead to political chaos and even bloodshed? 
The answer largely depends upon the resilience of democratic institutions, and their ability to resist the authoritarian drift that seems to be a consequence of crises such as Covid and climate change, to which simple solutions, and simple truths, do not pertain.

Both the Church and the Protestants enthusiastically adopted the printing press. The Church tried to control it through an index of forbidden books. Protestant print shops adopted a more liberal cultural orientation, one that allowed for competition among diverse ideas about how to express and pursue faith. Today we see a similar dynamic. Mainstream, elite science institutions use the Internet to try to preserve their monopoly over which truths get followed where, but the Internet's bottom-up, distributed architecture appears to give a decisive advantage to dissenters and their diverse ideologies and perspectives.

Holding on to the idea that science always draws clear boundaries between the true and the false will continue to appeal strongly to many sincere and concerned people. But if, as in the fifteenth century, we are now indeed experiencing a tumultuous transition to a new world of communication, what we may need is a different cultural orientation toward science and technology. The character of this new orientation is only now beginning to emerge, but it will above all have to accommodate the over-abundance of competing truths in human affairs, and create new opportunities for people to forge collective meaning as they seek to manage the complex crises of our day.

Marcelo Leite: Psychedelic Turn – Article Points to Psychedelic Injustice Against Indigenous Knowledge (Folha de S.Paulo)

Marcelo Leite

March 7, 2022

The scene has something surreal about it: a European researcher, his body covered in indigenous graphic patterns, wears on his head a cap with dozens of electrodes for electroencephalography (EEG). A member of the Huni Kuin people blows rapé snuff into the white man's nostril; on his back, the researcher carries a backpack with portable devices to record his brain waves.

Expedition Neuron took place in April 2019 in Santa Rosa do Purus, in the Brazilian state of Acre. On the agenda was an attempt to narrow the gap between traditional knowledge about the use of ayahuasca and the consecration of the brew by science's so-called psychedelic renaissance. The most tangible result of the initiative so far has appeared in a controversial text about research ethics, not research data.

The title of the article in the journal Transcultural Psychiatry was promising: "Overcoming Epistemic Injustices in the Biomedical Study of Ayahuasca – Towards Ethical and Sustainable Regulation." Since its publication on January 6, the text has generated more heat than light, not least because it has been criticized out of public view rather than openly.

The authors, Eduardo Ekman Schenberg of the Instituto Phaneros and Konstantin Gerber of PUC-SP, question the authority of science based on the difficulty of employing placebos in experiments with psychedelics, on the emphasis given to molecular aspects, and on the poorly assessed weight of context (setting) for safety of use, a matter in which scientists would have much to learn from indigenous people.

Among the targets of the criticism are studies carried out over the past decade by the groups of Jaime Hallak at USP in Ribeirão Preto and of Dráulio de Araújo at the Brain Institute of UFRN, in particular on the effect of ayahuasca on depression. When contacted, scientists and collaborators of these groups did not respond or preferred not to comment.
The antidepressant potential of dimethyltryptamine (DMT), the brew's main psychoactive compound, is also in the focus of researchers in other countries. But other psychedelic substances, such as MDMA and psilocybin, are closer to obtaining recognition from regulators as psychiatric medicines.

Given the obvious effect of substances like ayahuasca on a person's mind and behavior, Schenberg and Gerber argue, the double-blind system (the gold standard of biomedical trials) becomes unworkable: both the volunteer and the experimenter almost always know whether the former took an active compound or not. This would annihilate the supreme value attributed to studies of this kind in the psychedelic field and in biomedicine in general.

Another point they criticize lies in the decontextualization and reductionism of experiments carried out in hospitals or laboratories, with the patient surrounded by machines and subjected to doses fixed in milligrams per kilogram of body weight. The precision is illusory, they claim, pointing to the error in an article that cites a DMT concentration of 0.8 mg/ml and later speaks of 0.08 mg/ml.

The cultural sanitization of the setting, for its part, would make light of the contextual elements (forest, chants, cosmology, rapé snuff, dances, shamans) that, for peoples like the Huni Kuin, are inseparable from what ayahuasca has to offer and teach. By ignoring them, scientists would be dismissing everything indigenous people know about the safe, collective use of the substance. Moreover, they would be simultaneously appropriating and disrespecting that traditional knowledge.

A more ethical attitude on the part of researchers would imply recognizing this contribution, developing research protocols with indigenous participation, registering co-authorship in scientific publications, recognizing intellectual property, and sharing any profits from treatments and patents.

"The complementarity between anthropology, psychoanalysis, and psychiatry is one of the challenges of ethnopsychiatry," write Schenberg and Gerber. "The initiative of bringing biomedical science to the forest can be criticized as an attempt to medicalize shamanism, but it can also constitute a possibility of intercultural dialogue centered on innovation and on the resolution of 'networks of problems.'"

"It is particularly notable that biomedicine now ventures into concepts such as 'connection' and 'nature-relatedness' as effects of psychedelics, thus once again approaching epistemic conclusions derived from shamanic practices. The final challenge would thus be to understand the relationship between community well-being and ecology, and how this can be translated into a Western concept of integrated health."

The reactions of the few who have openly criticized the text and its grandiose ideas can be summed up in an old malicious academic saying: there are good and new things in the article, but the good things are not new and the new things are not good. Taking EEG into the Acre forest, for example, would not solve all the problems.

Schenberg is the link between the Transcultural Psychiatry article and Expedition Neuron, as he took part in the 2019 incursion into Acre and collaborates on that EEG study with researcher Tomas Palenicek of the National Institute of Mental Health of the Czech Republic.
Here is a presentation video, in English:

"Konstantin and I have been engaged for more than three years in an innovative project with the Huni Kuin and European researchers, seeking to build an epistemically fair partnership," Schenberg replied when asked whether the EEG study meets the ethical requirements set out in the article.

In the Expedition Neuron presentation, he states: "In this first short, exploratory expedition [in 2019], we confirmed that there is mutual interest between scientists and a traditional indigenous culture of the Amazon in jointly exploring the nature of consciousness and how its traditional healing works, including, for the first time, recordings of brain activity in a scenario that many would consider too technically challenging."

"We consider it of supreme value to jointly investigate how Huni Kuin rituals and medicines affect human cognition, emotions, and group bonds, and to analyze the neural basis of these altered states of consciousness, possibly including mystical experiences in the forest."

Schenberg and his collaborators are planning a new expedition to the Huni Kuin to carry out multiple, simultaneous EEG recordings with up to seven indigenous participants during ayahuasca ceremonies. The idea is to test the "very intriguing possibility" of synchrony between brains: "Interpreted by the Huni Kuin and other Amerindian peoples as a kind of portal to the spirit world, ayahuasca is known to intensely and rapidly strengthen community bonds and feelings of empathy and closeness to others."

Schenberg and Gerber's stated aims did not convince Brazilian anthropologist Bia Labate, director of the Chacruna Institute in San Francisco. "Indigenous people do not seem to have been consulted in the production of the text, there are no native voices, they are not co-authors, and we have no specific proposals for what truly interethnic and intercultural research would be."

For the anthropologist, even though Expedition Neuron obtained authorization for the research, which is positive, it does not constitute an "alternative epistemology to the scientistic and ethnocentric approach." Interethnic research, as she sees it, would imply promoting an ethnography that takes seriously the indigenous notion that plants are spirits with their own agency, and that the natural world is also cultural, possessing subjectivity and intentionality.

"We all know that the ayahuasca brew is not the same thing as freeze-dried ayahuasca; that context matters; that the rituals and the collectives taking part make a difference. The same or analogous points had already been made in the anthropological literature, whose references were set aside by the authors."

Labate also disagrees that ayahuasca studies in Brazil neglect to recognize those who came to the brew first: "From a global point of view, it is precisely a hallmark and differential of Brazilian scientific research that there was, indeed, dialogue with participants of the ayahuasca religions. They too are legitimate research subjects, and not only the original peoples."

In 2020, Schenberg and Palenicek took part in a meeting with another anthropologist, the Franco-Colombian Emilia Sanabria, leader of the Healing Encounters project at the French National Centre for Scientific Research (CNRS). Alongside the indigenous leader Leopardo Yawa Bane, the trio debated the EEG study in the virtual panel "Taking the Lab to Ayahuasca" at the Interdisciplinary Conference on Psychedelic Research (ICPR).
There is video of the panel, in English.

Sanabria, who speaks Portuguese and knows the Huni Kuin, was even invited by Schenberg to join the expedition, but declined, judging that the "epistemological incommensurability" between Indigenous thought and what biomedicine wants to prove would remain unresolved. She considers the discussion proposed in Transcultural Psychiatry important, though complex and not exactly new.

In an interview with this blog, she said the article seems to reinvent the wheel by disregarding a long-running debate on the assimilation of traditional plants and practices (such as Chinese medicine) by Western science: "They do not cite the earlier reflection. It is good that they put the discussion on the table, but there is a bibliography going back more than a century."

The anthropologist said she sees a problem in the article's posture of presenting itself as the natives' savior. "There is no Indigenous interlocutor cited as an author," she notes, corroborating Labate's criticism, as if the original peoples needed to be represented by non-Indians. "We'll grant you a little space here in our world."

For Sanabria, the central question of a respectful collaboration is whether the study is also a priority for, and of use to, the Huni Kuin, and not only to the scientists. When she raised this point on the panel, she received generic answers from Schenberg and Palenicek (for example, that science can help in rejecting patents on ayahuasca) rather than anything directly and concretely beneficial to the Huni Kuin.

In the anthropologist's view, "the idea of taking the laboratory into naturalistic conditions is beautiful," but it is not clear how all that machinery would fit into Indigenous logic. At bottom, the argument is symmetrical to the one the article's authors wield against psychedelic research in hospital settings: in one case, the total, socialized psychedelic experience is decontextualized; in the other, it is technological decontextualization that travels and invades the village.

Sanabria sees an almost insoluble dilemma for Indigenous peoples in negotiating research protocols with the reborn psychedelic science. What in 2014 looked to many like a new way of doing science, with different standards of evaluation and proof, underwent a "capitalist turn" from 2018 onward and ended up dominated by the logic of biochemistry and intellectual property. "Indigenous peoples cannot walk away, because they lose their rights. But they cannot buy in [to that logic] either, because then they lose their identity perspective."

"Molecularizing in the forest or in the laboratory comes to the same thing," says Sanabria. "I do not see it as reparation of any epistemic injustice. I see no radical difference between this research and the study by Fernanda [Palhano-Fontes]," she adds, referring to Schenberg and Gerber's "aggressive" criticism of the clinical trial of ayahuasca for depression at the UFRN Brain Institute, which extends to the work done at USP in Ribeirão Preto.

The pair highlighted, for example, the fact that the authors of the UFRN study reported in their 2019 paper that 4 of the 29 volunteers in the experiment spent at least a week hospitalized at the Onofre Lopes University Hospital in Natal. They thereby insinuated that the safety of ayahuasca administration had been handled inadequately. "None of these studies formally attempted to compare safety in the laboratory environment with any of the cultural contexts in which ayahuasca is commonly used," Schenberg and Gerber pontificated.
"However, to the best of our knowledge, it has never been reported that 14% of the participants in an ayahuasca ritual required a week of hospitalization."

The reason for those hospital stays, though, was mundane: as patients with depression resistant to conventional medication, the volunteers were already hospitalized because of the severity of their mental disorder and simply remained so after the intervention. In other words, the hospitalization had nothing to do with their having taken ayahuasca.

This blog also questioned Schenberg about the possible exaggeration of seizing on what may have been a typo (0.8 mg/ml versus 0.08 mg/ml) in the 2015 USP Ribeirão paper as a flagrant imprecision casting doubt on the epistemic superiority of psychedelic biomedicine. "If they paid more attention to the reports of the volunteers/patients, perhaps they would have noticed the fact," retorted the Phaneros Institute researcher. "Besides the epistemic injustice toward Indigenous people, there is the epistemic injustice toward volunteers/patients, which we also discuss briefly in the article."

Schenberg has published several studies that would fit the biomedical paradigm now in his crosshairs. Is his article with Gerber a self-criticism of his earlier work? "I have always been critical of certain biomedical limitations, and it was only with great effort that I managed to do my postdoc without, for example, using a placebo group, even though most colleagues insisted I should, since otherwise it 'would not be scientific'…"

"At bottom, the argument is circular, using biomedicine as the ultimate criterion for answering the critique of biomedicine," counters Bia Labate. "The text does not solve what it sets out to solve; rather, it deepens the gap between originary and biomedical epistemologies by advocating new ways of producing biomedicine based on validation criteria that are… biomedical."

Americans' Trust in Scientists, Other Groups Declines (Pew Research Center)

Republicans' confidence in medical scientists down sharply since early in the coronavirus outbreak

By Brian Kennedy, Alec Tyson and Cary Funk
February 15, 2022

How we did this: Pew Research Center conducted this study to understand how much confidence Americans have in groups and institutions in society, including scientists and medical scientists. We surveyed 14,497 U.S. adults from Nov. 30 to Dec. 12, 2021. The survey was conducted on Pew Research Center's American Trends Panel (ATP) and included an oversample of Black and Hispanic adults from the Ipsos KnowledgePanel. A total of 3,042 Black adults (single-race, not Hispanic) and 3,716 Hispanic adults were sampled. This is made possible by The Pew Charitable Trusts, which received support from Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation.

Americans' confidence in groups and institutions has turned downward compared with just a year ago. Trust in scientists and medical scientists, once seemingly buoyed by their central role in addressing the coronavirus outbreak, is now below pre-pandemic levels.

[Chart: Public confidence in scientists and medical scientists has declined over the last year]

Overall, 29% of U.S. adults say they have a great deal of confidence in medical scientists to act in the best interests of the public, down from 40% who said this in November 2020. Similarly, the share with a great deal of confidence in scientists to act in the public's best interests is down by 10 percentage points (from 39% to 29%), according to a new Pew Research Center survey.
The new findings represent a shift in the recent trajectory of attitudes toward medical scientists and scientists. Public confidence in both groups had increased shortly after the start of the coronavirus outbreak, according to an April 2020 survey. Current ratings of medical scientists and scientists have now fallen below where they were in January 2019, before the emergence of the coronavirus.

Scientists and medical scientists are not the only groups and institutions to see their confidence ratings decline in the last year. The share of Americans who say they have a great deal of confidence in the military to act in the public's best interests has fallen 14 points, from 39% in November 2020 to 25% in the current survey. And the shares of Americans with a great deal of confidence in K-12 public school principals and police officers have also decreased (by 7 and 6 points, respectively).

Large majorities of Americans continue to have at least a fair amount of confidence in medical scientists (78%) and scientists (77%) to act in the public's best interests. These ratings place them at the top of the list of nine groups and institutions included in the survey. A large majority of Americans (74%) also express at least a fair amount of confidence in the military to act in the public's best interests. Roughly two-thirds say this about police officers (69%) and K-12 public school principals (64%), while 55% have at least a fair amount of confidence in religious leaders.

The public continues to express lower levels of confidence in journalists, business leaders and elected officials, though even for these groups, public confidence is tilting more negative. Four-in-ten say they have a great deal or a fair amount of confidence in journalists and business leaders to act in the public's best interests; six-in-ten now say they have not too much or no confidence at all in these groups. Ratings for elected officials are especially negative: 24% say they have a great deal or fair amount of confidence in elected officials, compared with 76% who say they have not too much or no confidence in them.

The survey was fielded Nov. 30 through Dec. 12, 2021, among 14,497 U.S. adults, as the omicron variant of the coronavirus was first detected in the United States, nearly two years since the coronavirus outbreak took hold. Recent surveys this year have found declining ratings for how President Joe Biden has handled the coronavirus outbreak as well as lower ratings for his job performance, and that of Congress, generally.

Partisan differences over trust in medical scientists, scientists continue to widen since the coronavirus outbreak

Democrats remain more likely than Republicans to express confidence in medical scientists and scientists to act in the public's best interests.

[Chart: Democrats remain more confident than Republicans in medical scientists; ratings fall among both groups]

However, there has been a significant decline in public confidence in medical scientists and scientists among both partisan groups. Among Democrats and Democratic-leaning independents, nine-in-ten express either a great deal (44%) or a fair amount (46%) of confidence in medical scientists to act in the public's best interests. However, the share expressing strong confidence in medical scientists has fallen 10 points since November 2020. There has been a similar decline in the share of Democrats holding the strongest level of confidence in scientists since November 2020.
(Half of the survey respondents were asked about their confidence in "medical scientists," while the other half were asked about "scientists.")

Still, ratings for medical scientists, along with those for scientists, remain more positive than those for other groups in the eyes of Democrats and independents who lean to the Democratic Party. None of the other groups rated on the survey garner as much confidence; the closest contenders are public school principals and the military. About three-quarters (76%) of Democrats and Democratic leaners have at least a fair amount of confidence in public school principals; 68% say the same about the military.

There has been a steady decline in confidence in medical scientists among Republicans and Republican leaners since April 2020. In the latest survey, just 15% have a great deal of confidence in medical scientists, down from 31% who said this in April 2020 and 26% who said this in November 2020. There has been a parallel increase in the share of Republicans holding negative views of medical scientists, with 34% now saying they have not too much or no confidence at all in medical scientists to act in the public's best interests, nearly three times higher than in January 2019, before the coronavirus outbreak.

Republicans' views of scientists have followed a similar trajectory. Just 13% have a great deal of confidence in scientists, down from a high of 27% in January 2019 and April 2020. The share with negative views has doubled over this time period; 36% say they have not too much or no confidence at all in scientists in the latest survey.

Republicans' confidence in other groups and institutions has also declined since the pandemic took hold. The share of Republicans with at least a fair amount of confidence in public school principals is down 27 points since April 2020. Views of elected officials, already at low levels, declined further; 15% of Republicans have at least a fair amount of confidence in elected officials to act in the public's best interests, down from 37% in April 2020.

Race and ethnicity, education, partisan affiliation each shape confidence in medical scientists

People's assessments of scientists and medical scientists are tied to several factors, including race and ethnicity as well as levels of education and partisan affiliation.

[Chart: Confidence in medical scientists declines among White, Black and Hispanic adults since April 2020]

Looking across racial and ethnic groups, confidence in medical scientists declined at least modestly among White and Black adults over the past year. The decline was especially pronounced among White adults. There is now little difference between how White, Black and Hispanic adults see medical scientists. This marks a shift from previous Pew Research Center surveys, where White adults were more likely than Black adults to express high levels of confidence in medical scientists.

Among White adults, the share with a great deal of confidence in medical scientists to act in the best interests of the public has declined from 43% to 29% over the past year. Ratings are now lower than they were in January 2019, before the coronavirus outbreak in the U.S. Among Black adults, 28% say they have a great deal of confidence in medical scientists to act in the public's best interests, down slightly from November 2020 (33%).
The share of Hispanic adults with a strong level of trust in medical scientists is similar to the share who expressed the same level of trust in November 2020, although the current share is 16 points lower than it was in April 2020 (29% vs. 45%), shortly after measures to address the coronavirus outbreak began. Ratings of medical scientists among Hispanic adults continue to be lower than they were before the coronavirus outbreak. In January 2019, 37% of Hispanic adults said they had a great deal of confidence in medical scientists.

While the shares of White, Black and Hispanic adults who express a great deal of confidence in medical scientists have declined since the early stages of the coronavirus outbreak in the U.S., majorities of these groups continue to express at least a fair amount of confidence in medical scientists, and the ratings for medical scientists compare favorably with those of other groups and institutions rated in the survey.

[Chart: White Democrats express higher levels of confidence in medical scientists than Black, Hispanic Democrats]

Confidence in scientists tends to track closely with confidence in medical scientists. Majorities of White, Black and Hispanic adults have at least a fair amount of confidence in scientists. And the shares with this view continue to rank at or above those for other groups and institutions. For more on confidence in scientists over time among White, Black and Hispanic adults, see the Appendix.

Confidence in medical scientists and scientists across racial and ethnic groups plays out differently for Democrats and Republicans. White Democrats (52%) are more likely than Hispanic (36%) and Black (30%) Democrats to say they have a great deal of confidence in medical scientists to act in the public's best interests. However, large majorities of all three groups say they have at least a fair amount of confidence in medical scientists.

Among Republicans and Republican leaners, 14% of White adults say they have a great deal of confidence in medical scientists, while 52% say they have a fair amount of confidence. Views among Hispanic Republicans are very similar to those of White Republicans, in contrast to the differences seen among Democrats. There are similar patterns in confidence in scientists. (However, the sample size for Black Republicans in the survey is too small to analyze on these measures.) See the Appendix for more.

Americans with higher levels of education express more positive views of scientists and medical scientists than those with lower levels of education, as has also been the case in past Center surveys. But education matters more in assessments by Democrats than Republicans.

[Chart: College-educated Democrats express high levels of confidence in medical scientists]

Democrats and Democratic leaners with at least a college degree express a high level of confidence in medical scientists: 54% have a great deal of confidence and 95% have at least a fair amount of confidence in medical scientists to act in the public's interests. By comparison, a smaller share of Democrats who have not graduated from college have confidence in medical scientists.

Among Republicans and Republican leaners, college graduates are 9 points more likely than those with some college experience or less education to express a great deal of confidence in medical scientists (21% vs. 12%). There is a similar difference between those with higher and lower education levels among Democrats when it comes to confidence in scientists.
Among Republicans, differences by education are less pronounced; there is no significant difference by education level in the shares holding the strongest level of confidence in scientists to act in the public's interests. See the Appendix for details.

Weaving Indigenous knowledge into the scientific method (Nature)

Saima May Sidik
11 January 2022; Correction 24 January 2022

[Photo: Dominique David-Chavez works with Randal Alicea, an Indigenous farmer, in his tobacco-drying shed in Cidra, Borikén (Puerto Rico). Credit: Norma Ortiz]

Many scientists rely on Indigenous people to guide their work, by helping them to find wildlife, navigate rugged terrain or understand changing weather trends, for example. But these relationships have often felt colonial, extractive and unequal. Researchers drop into communities, gather data and leave, never contacting the locals again, and excluding them from the publication process.

Today, many scientists acknowledge the troubling attitudes that have long plagued research projects in Indigenous communities. But finding a path to better relationships has proved challenging. Tensions surfaced last year, for example, when seven University of Auckland academics argued that planned changes to New Zealand's secondary school curriculum, to "ensure parity between mātauranga Māori," or Māori knowledge, and "other bodies of knowledge," could undermine trust in science. Last month, the University of Auckland's vice-chancellor, Dawn Freshwater, announced a symposium to be held early this year, at which different viewpoints can be discussed.

In 2016, the US National Science Foundation (NSF) launched Navigating the New Arctic, a programme that encouraged scientists to explore the wide-reaching consequences of climate change in the north. A key sentence in the programme description reflected a shift in perspective: "Given the deep knowledge held by local and Indigenous residents in the Arctic, NSF encourages scientists and Arctic residents to collaborate on Arctic research projects." The Natural Sciences and Engineering Research Council of Canada and New Zealand's Ministry of Business, Innovation and Employment have made similar statements. So, too, have the United Nations cultural organization UNESCO and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. But some Indigenous groups feel that despite such well-intentioned initiatives, their inclusion in research is only a token gesture to satisfy a funding agency.

There's no road map out of science's painful past. Nature asked three researchers who belong to Indigenous communities in the Americas and New Zealand, plus two funders who work closely with Northern Indigenous communities, how far we've come toward decolonizing science, and how researchers can work more respectfully with Indigenous groups.

DANIEL HIKUROA: Weave folklore into modern science

Daniel Hikuroa is an Earth systems and environmental humanities researcher at Te Wānanga o Waipapa, University of Auckland, New Zealand, and a member of the Māori community.

We all have a world view. Pūrākau, or traditional stories, are a part of Māori culture with great potential for informing science. But what you need to understand is that they're codified according to an Indigenous world view.
For example, in Māori tradition, we have these things called taniwha that are like water serpents. When you think of taniwha, you think: danger, risk, be on your guard! Taniwha as physical entities do not exist. Taniwha are a mechanism for describing how rivers behave and change through time. For example, pūrākau say that taniwha live in a certain part of the Waikato River, New Zealand's longest, running for 425 kilometres through the North Island. That's the part of the river that tends to flood. Fortunately, officials took knowledge of taniwha into account when they were designing a road near the Waikato River in 2002. Because of this, we've averted disasters.

Sometimes, it takes a bit of explanation to convince non-Indigenous scientists that pūrākau are a variation on the scientific method. They're built on observations and interpretations of the natural world, and they allow us to predict how the world will function in the future. They're repeatable, reliable, they have rigour, and they're accurate. Once scientists see this, they have that 'Aha!' moment where they realize how well Western science and pūrākau complement each other.

We're very lucky in New Zealand because our funding agencies help us to disseminate this idea. In 2005, the Ministry of Research, Science and Technology (which has since been incorporated into the Ministry of Business, Innovation and Employment) developed a framework called Vision Mātauranga. Mātauranga is the Māori word for knowledge, but it also includes the culture, values and world view of Māori people. Whenever a scientist applies for funding, they're asked whether their proposal addresses a Māori need or can draw on Māori knowledge. The intent of Vision Mātauranga is to broaden the science sector by unlocking the potential of Māori mātauranga.

In the early days of Vision Mātauranga, some Indigenous groups found themselves inundated with last-minute requests from researchers who just wanted Indigenous people to sign off on their proposals to make their grant applications more competitive. It was enormously frustrating. These days, most researchers are using the policy with a higher degree of sophistication.

Vision Mātauranga is at its best when researchers develop long-term relationships with Indigenous groups so that they know about those groups' dreams and aspirations and challenges, and also about their skill sets. Then the conversation can coalesce around where those things overlap with the researchers' own goals. The University of Waikato in Hamilton has done a great job with this, establishing a chief-to-chief relationship in which the university's senior management meets maybe twice a year with the chiefs of the Indigenous groups in the surrounding area. This ongoing relationship lets the university and the Indigenous groups have high-level discussions that build trust and can inform projects led by individual labs.

We've made great progress towards bridging Māori culture and scientific culture, but attitudes are still evolving, including my own. In 2011, I published my first foray into using Māori knowledge in science, and I used the word 'integrate' to describe the process of combining the two. I no longer use that word, because I think weaving is a more apt description. When you weave two strands together, the integrity of the individual components can remain, but you end up with something that's ultimately stronger than what you started with.
DOMINIQUE DAVID-CHAVEZ: Listen and learn with humility

Dominique David-Chavez is an Indigenous land and data stewardship researcher at Colorado State University in Fort Collins, and a member of the Arawak Taíno community.

People often ask how we can integrate Indigenous knowledge into Western science. But framing the question in this way upholds the unhealthy power dynamic between Western and Indigenous scientists. It makes it sound as though there are two singular bodies of knowledge, when in fact Indigenous knowledge, unlike Western science, is drawn from thousands of different communities, each with its own knowledge systems.

At school, I was taught this myth that it was European and American white men who discovered all these different physical systems on Earth: on land, in the skies and in the water. But Indigenous people have been observing those same systems for hundreds or thousands of years. When Western scientists claim credit for discoveries that Indigenous people made first, they're stealing Indigenous people's contributions to science. This theft made me angry, but it also drove me. I decided to undertake graduate studies so that I could look critically at how we validate who creates knowledge, who creates science and who are the scientists.

To avoid perpetuating harmful power dynamics, researchers who want to work in an Indigenous people's homeland should first introduce themselves to the community, explain their skills and convey how their research could serve the community. And they should begin the work only if the community invites them to. That invitation might take time to come! The researchers should also build in time to spend in the community to listen, be humbled and learn. If you don't have that built-in relational accountability, then maybe you're better off in a supporting role.

Overall, my advice to Western researchers is this: always be questioning your assumptions about where science came from, where it's going and what part you should be playing in its development.

MARY TURNIPSEED: Fund relationship building and follow-ups

Mary Turnipseed is an ecologist and grantmaker at the Gordon and Betty Moore Foundation, Palo Alto, California.

I've been awarding grants in the Arctic since 2015, when I became a marine-conservation programme officer at the Gordon and Betty Moore Foundation. A lesson I learnt early on about knowledge co-production (the term used for collaborations between academics and non-academics) is to listen. In the non-Indigenous parts of North America, we're used to talking, but flipping that on its end helps us to work better with Indigenous communities.

Listening to our Indigenous Alaskan Native partners is often how I know whether a collaboration is working well or not. If the community is supportive of a particular effort, that means they've been able to develop a healthy relationship with the researchers. We have quarterly check-ins with our partners about how projects are going; and, in non-pandemic times, I frequently travelled to Alaska to talk directly with our partners.

One way in which we help to spur productive relationships is by giving research teams a year of preliminary funding, before they even start their research, so that they can work with Indigenous groups to identify the questions their research will address and decide how they're going to tackle them.
We really need more funding agencies to set aside money for this type of early relationship-building, so that everyone goes into a project with the same expectations, and with a level of trust for one another.

[Photo: Members of the Ikaaġvik Sikukun collaboration cutting ice-core samples in the snow at the Native Village of Kotzebue, Alaska. Credit: Sarah Betcher/Farthest North Films]

Developing relationships takes time, so it's easiest when Indigenous communities have a research coordinator, such as Alex Whiting (environmental programme director for the Native Village of Kotzebue), to handle all their collaborations. I think the number of such positions could easily be increased tenfold, and I'd love to see the US federal government offer more funding for these types of position.

Funding agencies should provide incentives for researchers to go back to the communities that they've worked with and share what they've found. There's always talk among Indigenous groups about researchers who come in, collect data, get their PhDs and never show up again. Every time that happens, it hurts the community, and it hurts the next researchers to come. I think it's essential for funding agencies to prevent this from happening.

ALEX WHITING: Develop a toolkit to decolonize relationships

Alex Whiting is an environmental specialist in Kotzebue, Alaska, and a formally adopted member of the Qikiktagrukmiut community.

A lot of the time, researchers who operate in a colonial way aren't aware of the harm they're doing. But many people are realizing that taking knowledge without involving local people is not only unethical, but inefficient.

In 1997, the Native Village of Kotzebue, a federally recognized seat of tribal government representing the Qikiktagrukmiut, northwest Alaska's original inhabitants, hired me as its environmental programme director. I helped the community to develop a research protocol that lays out our expectations of scientists who work in our community, and an accompanying questionnaire. By filling in the one-page questionnaire, researchers give us a quick overview of what they plan to do; its relevance and potential benefit to our community; the need for local involvement; and how we'll be compensated financially. This provides us with a tool through which to develop relationships with researchers, make sure that our priorities and rights are addressed, and hold researchers accountable.

Making scientists think about how they'll engage with us has helped to make research a more equitable, less extractive activity. We cannot force scientists to deal with us. It's a free country. But the Qikiktagrukmiut are skilled at activities such as boating, travelling on snow and capturing animals, and those skills are extremely useful for fieldwork, as is our deep historical knowledge of the local environment. It's a lot harder for scientists to accomplish their work without our involvement. Many scientists realize this, so these days we get 6-12 research proposals per year. We say yes to most of them.

The NSF's Navigating the New Arctic programme has definitely increased the number of last-minute proposals that communities such as ours get swamped with a couple of weeks before the application deadline. Throwing an Indigenous component into a research proposal at the last minute is definitely not an ideal way to go about things, because it doesn't give us time to fully consider the research before deciding whether we want to participate.
But at least the NSF has recognized that working with Indigenous people is a thing! They're just in the growing-pains phase. Not all Indigenous groups have had as much success as we have, and some are still experiencing the extractive side of science. But incorporating Indigenous knowledge into science can create rapid growth in understanding, and we're happy we've helped some researchers do this in a respectful way.

NATAN OBED: Fund research on Indigenous priorities

Natan Obed is president of Inuit Tapiriit Kanatami, and a member of the Inuit community.

Every year, funding agencies devote hundreds of millions of dollars to work that occurs in the Inuit homeland in northern Canada. Until very recently, almost none of those agencies considered Inuit peoples' priorities. These Indigenous communities face massive social and economic challenges. More than 60% of Inuit households are food insecure, meaning they don't always have enough food to maintain an active, healthy life. On average, one-quarter as many doctors serve Inuit communities as serve urban Canadian communities. Our life expectancy is ten years less than the average non-Indigenous Canadian's. The list goes on. And yet, very little research is devoted to addressing these inequities.

Last year, the Inuit advocacy organization Inuit Tapiriit Kanatami (the name means 'Inuit are united in Canada') collaborated with the research network ArcticNet to start its own funding programme, which is called the Inuit Nunangat Research Program (INRP). Funding decisions are led entirely by Inuit people to ensure that all grants support research on Inuit priorities. Even in the programme's first year, we got more requests than we could fund. We selected 11 proposals that all relate directly to the day-to-day lives of Inuit people. For example, one study that we're funding aims to characterize a type of goose that has newly arrived in northern Labrador; another focuses on how social interactions spread disease in Inuit communities.

Our goal with the INRP is twofold: first, we want to generate knowledge that addresses Inuit concerns, and second, we want to create an example of how other granting agencies can change so that they respect the priorities of all groups. We've been moderately successful in getting some of the main Canadian granting agencies, such as the Canadian Institutes of Health Research, to allocate more resources to things that matter to Inuit people. I'd like to think that the INRP gives them a model for how to become even more inclusive. We hope that, over the next ten years, it will become normal for granting agencies to consider the needs of Indigenous communities. But we also know that institutions change slowly. Looking back at where we've been, we have a lot to be proud of, but we still have a huge task ahead of us.

These interviews have been edited for length and clarity.

For better science, increase Indigenous participation in publishing (Nature)

Saima May Sidik
10 January 2022

Amending long-established processes to include fresh perspectives is challenging, but journal editor Lisa Loseto is trying to find a path forward.

[Photo: Lisa Loseto stands by a campfire. Credit: Oksana Schimnowski]

This interview has been edited for length and clarity.
'There is no clear line separating science from pseudoscience,' says Princeton professor (BBC News Brasil)

Carlos Serrano (@carliserrano), BBC News Mundo
12 December 2021

The relationship between genuine knowledge and fringe doctrines is closer than many care to accept, says a historian who specializes in the history of science.

Flat-earthers, anti-vaxxers, creationists, astrologers, telepaths, numerologists, homeopaths… For scientific institutions, these practices and movements fall into the category of "pseudosciences": doctrines built on foundations that their adherents consider scientific, out of which they spin a current that drifts away from what the academic world normally accepts.

But how do we distinguish science from what merely passes itself off as science? That task is far more complicated than it seems, according to Michael Gordin, a professor at Princeton University in the United States and a specialist in the history of science.

Gordin is the author of On the Fringe: Where Science Meets Pseudoscience. The book details how pseudosciences operate and how, in his view, they are an inevitable consequence of scientific progress. In an interview with BBC News Mundo (the BBC's Spanish-language service), Gordin explores the complex relationship between what is considered genuine science and what he calls fringe doctrines.

[Photo: Michael Gordin, author of On the Fringe: Where Science Meets Pseudoscience]

BBC News Mundo – You say there is no definite line separating science from pseudoscience, but science has a clear, testable method. Isn't that a clear difference from pseudoscience?

Michael Gordin – It is usually believed that science has a single method, but that is not true. Science has many methods. Geologists do their work very differently from theoretical physicists, and molecular biologists from neuroscientists. Some scientists work in the field, observing what happens. Others work in the laboratory, under controlled conditions. Others run simulations. In short, science has many methods, and they are heterogeneous.

Science is dynamic, and that dynamism makes the line hard to define. We can take a concrete example and say whether it is science or pseudoscience; with a concrete example, that is easy. The problem is that the line is not consistent: when you look at a larger number of cases, there are things that were once considered science and are now considered pseudoscience, such as astrology. And there are subjects like continental drift, initially considered a fringe theory and now a basic theory of geophysics. Almost everything considered pseudoscience today was once science, was refuted over time, and those who keep supporting it are regarded as lunatics or charlatans. In other words, the definition of what is science or pseudoscience is dynamic over time. That is one of the reasons this judgment is so difficult.

[Photo: Considered science in the past, astrology today sits on the list of pseudosciences, or fringe doctrines, as Michael Gordin calls them]

BBC News Mundo – But there are things that do not change over time. For example, 2+2 has always equaled 4. Doesn't that mean science works from principles that allow no interpretation…

Gordin – Well, that is not necessarily so. Two UFOs plus two UFOs are four UFOs.
It is interesting that you chose mathematics, which in fact is not an empirical science, since it does not refer to the external world. It is a set of rules we use to determine certain things.

One of the reasons the distinction is so complicated is that fringe doctrines watch what counts as established science and adapt their arguments and techniques to it. One example is "scientific creationism," which holds that the world was created in seven days, 6,000 years ago. There are scientific-creationist publications that include mathematical plots of the decay rates of various isotopes, trying to prove that the Earth is only 6,000 years old.

It would be convenient to say that using mathematics and presenting graphs is science, but the reality is that almost all fringe doctrines use mathematics in some form. Scientists disagree about the kind of mathematics used; there are, for example, people who argue that the advanced mathematics used in string theory is no longer scientific, because it has lost empirical verification. It is high-level mathematics, done by PhDs from the best universities, yet there is an internal debate in science, among physicists, over whether it should be considered science at all.

I am not saying everyone should be a creationist, but when quantum mechanics was first proposed, some people said: "this looks very strange," "it doesn't stick to measurements the way we believe they work," or "is this really science?"

[Photo: In recent years, the idea that the Earth is flat has gained popularity among certain groups]

BBC News Mundo – So you are saying that pseudosciences, or fringe doctrines, have some value?

Gordin – The point is that many things we consider innovative come from the fringes of orthodox knowledge. What I mean comes down to three points: first, there is no clear dividing line; second, understanding what falls on each side of the line requires understanding context; and third, the normal process of science produces fringe doctrines. We cannot simply discard these doctrines, because they are inevitable. They are a by-product of the way the sciences work.

BBC News Mundo – Does that mean we should be more tolerant of pseudosciences?

Gordin – Scientists, like anyone else, have limited time and energy and cannot investigate everything. So any time devoted to refuting or denying the legitimacy of a fringe doctrine is time not spent doing science, and it may not even produce results. People have been refuting scientific creationism for decades. They have been trying to debunk telepathy for even longer, and it is still hovering around us.

There are various kinds of fringe ideas. Some are highly politicized and become harmful to public health or the environment. It is to these, in my view, that we need to devote attention and resources, to eliminate them or at least to explain why they are wrong. But I don't think other ideas, like believing in UFOs, are especially dangerous. I don't believe even creationism is as dangerous as being anti-vaccine, or as believing that climate change is a hoax.

We should view pseudosciences as something inevitable and approach them pragmatically. We have a limited amount of resources and need to choose which doctrines can cause harm and how to confront them.
Should we simply try to reduce the harm they can cause? That is the case with mandatory vaccination, whose goal is to prevent harm without necessarily convincing opponents that they are mistaken. Should we persuade them that they are mistaken? That has to be examined case by case.

[Photo: In various parts of the world there are groups opposed to covid-19 vaccines]

BBC News Mundo – How, then, should we deal with pseudosciences?

Gordin – One possibility is to recognize that these are people interested in science. A flat-earther, for example, is a person interested in the configuration of the Earth. That means it is someone who took an interest in investigating nature and, for some reason, went in the wrong direction. One can then ask why that happened. One can approach the person and say: "if you don't believe this evidence, what kind of evidence would you believe?" or "show me your evidence and let's talk."

It is something we could do, but is it worth doing? Flat-earthism is a doctrine I do not consider dangerous. It would be a problem if every government in the world thought the Earth was flat, but I don't see that risk. The contemporary version of flat-earthism emerged about 15 years ago. I believe academics still do not understand very well how it happened, or why it happened so fast. Another thing we can do is not necessarily persuade them that they are wrong, because they may not accept it, but try to understand how the movement arose and spread. That can guide us in confronting more serious threats.

[Photo: People who believe in fringe doctrines often borrow elements of established science to draw their conclusions]

BBC News Mundo – More serious threats like the anti-vaccine movement…

Gordin – Vaccines were invented in the 18th century, and there have always been people opposed to them, partly because every vaccine carries risk, though a very low one. Over time, the way this was handled was to institute an insurance system that basically says: you must get the vaccine, but if you get it and suffer bad outcomes, we will compensate you for those harms.

I am sure this will happen with the covid vaccine, though we do not yet know the full spectrum or the seriousness of the harms it may cause. But the harms, and the probability of their occurring, appear to be very low. As for anti-vaxxers who believe, for example, that the covid vaccine contains a chip, the only action that can be taken for the good of public health is to make vaccination mandatory. That is how polio was eradicated in most of the world, even with vaccine opponents around.

BBC News Mundo – But making it mandatory may lead someone to say that science is being used for political or ideological purposes…

Gordin – I am sure that if the state imposes a mandatory vaccine, someone will say that. But this is not about ideology. The state already mandates many things, and some vaccines are already mandatory. And the state makes all kinds of scientific claims. Teaching creationism in schools is not allowed, for example, nor is research into the cloning of human beings. In other words, the state has intervened many times in scientific disputes, and it tries to do so according to the scientific consensus.

BBC News Mundo – People who embrace pseudosciences do so on the basis of skepticism, which is precisely one of the fundamental values of science. A paradox, isn't it?
Gordin – This is one of the reasons I believe there is no clear dividing line between science and pseudoscience. Skepticism is a tool we all use. The question is what sort of matters you are skeptical about, and what could convince you of a specific fact.

In the 19th century there was a great debate over whether atoms really existed. Today practically no scientist doubts their existence. That is how science works: the focus of skepticism shifts from one place to another over time. When that skepticism targets matters already accepted, problems sometimes arise, but there are occasions when it is necessary.

The essence of Einstein's theory of relativity is that the ether, the substance through which light waves supposedly traveled, does not exist. To get there, Einstein focused his skepticism on one fundamental postulate, but he did so while arguing that much other knowledge already considered established could be preserved. So skepticism must have a purpose. If you are skeptical merely for the sake of being skeptical, the process produces no advances.

[Photo: Skepticism is one of the basic principles of science]

BBC News Mundo – Is it possible that, in the future, what we now consider science will be discarded as pseudoscience?

Gordin – In the future there will be many doctrines considered pseudosciences, simply because there are many things we still do not understand. There is much we do not understand about the brain or the environment. In the future, people will look at many theories and say they are wrong. But it is not enough for a theory to be incorrect for it to be considered pseudoscience. There must be people who believe it is correct even when the consensus says it is mistaken, and scientific institutions must consider it, for some reason, dangerous.

But the taming of the coronavirus conceals failures in public health (The Economist)

Edward Carr, deputy editor
Nov 8th 2021

Immunity has been acquired at a terrible cost. Just another endemic disease.

Opinion – Reinaldo José Lopes: USP scientific journal gives a stage to a climate-crisis denialist (Folha de S.Paulo)

There is no multi-party politics about the law of gravity, nor ideological pluralism about the theory of evolution.

Oct. 30, 2021

The veteran meteorologist Luiz Carlos Molion is a sort of itinerant preacher of the gospel of pseudoscience. A few years ago he toured Brazil's deep interior, in the pay of a tractor dealership, offering his pontifical blessing to the twelve tribes of agribusiness. In his talks, he assured the shock troops of Brazil's agricultural frontier that deforestation does not interfere with rainfall (wrong), that carbon dioxide emissions do not warm the Earth (wrong) and that we are, in fact, heading into a phase of global cooling (wrong).

There are less squalid ways to end a supposedly scientific career, but Molion apparently believes what he says. What really strikes me as unbelievable, however, is that a scientific journal published by the largest university in Latin America would open its doors to the invective of an ex-researcher like him. That is what happened in the latest issue of the journal Khronos, published by USP's Interunit Center for the History of Science. In a section titled "Debates," Molion published the article "Anthropogenic global warming: a controversial history."
In it, Molion reheats (forgive, Lord, this weakness for puns) his moldy denialist leftovers, attacking the supposed inability of the IPCC, the UN's climate panel, to accurately predict this little planet's future climate using computational models (wrong).

The gall was such that it provoked formal protests from several prestigious researchers at the university, members of the council of the USP center. In a letter signed by them and other colleagues, they point out that Molion has had no relevant publications on climate change in scientific journals for decades and that, to crown the folly, the article makes no reference… to the history of science, which is the journal's subject in the first place.

The response from the editor of Khronos and director of the center, Gildo Magalhães, could not be more disheartening. Faced with the professors' protest, this is what he said: "Censoring ideas has no place in the academic environment. At the university there should be no single party. Anyone who follows the debates at international climatology congresses knows that outside the mainstream media, anthropogenic global warming is a scientifically controversial matter. Summarily equating an opinion different from the orthodox one with common science denialism, as happened in Brazil with the Covid vaccine, harms understanding and does nothing to help dialogue."

I don't know whether Magalhães is deliberately lying or merely very badly informed, but the claim that human-caused warming is controversial at the field's congresses is false. Following the scientific journals shows unequivocally that challenges like Molion's are taken seriously by practically no one. The controversy Magalhães cites does not exist.

It is somewhat embarrassing to have to explain this to a USP professor, but appeals to a debate of "ideas" and to a "single party" have no place in science. If your ideas are not based on experiments and observations carried out with rigor and submitted to the scrutiny of other members of the scientific community, they should have no place in an academic journal. There is no multi-party politics about the law of gravity, nor ideological pluralism about the theory of evolution. To deny this is to open the gate to historic backsliding.

If DNA is like software, can we just fix the code? (MIT Technology Review)

In a race to cure his daughter, a Google programmer enters the world of hyper-personalized drugs.

Erika Check Hayden
February 26, 2020

To create atipeksen, Yu borrowed from recent biotech successes like gene therapy. Some new drugs, including cancer therapies, treat disease by directly manipulating genetic information inside a patient's cells. Now doctors like Yu find they can alter those treatments as if they were digital programs. Change the code, reprogram the drug, and there's a chance of treating many genetic diseases, even those as unusual as Ipek's.

The new strategy could in theory help millions of people living with rare diseases, the vast majority of which are caused by genetic typos and have no treatment. US regulators say last year they fielded more than 80 requests to allow genetic treatments for individuals or very small groups, and that they may take steps to make tailor-made medicines easier to try. New technologies, including custom gene-editing treatments using CRISPR, are coming next.
"I never thought we would be in a position to even contemplate trying to help these patients," says Stanley Crooke, a biotechnology entrepreneur and founder of Ionis Pharmaceuticals, based in Carlsbad, California. "It's an astonishing moment."

Antisense drug

Right now, though, insurance companies won't pay for individualized gene drugs, and no company is making them (though some plan to). Only a few patients have ever gotten them, usually after heroic feats of arm-twisting and fundraising. And it's no mistake that programmers like Mehmet Kuzu, who works on data privacy, are among the first to pursue individualized drugs. "As computer scientists, they get it. This is all code," says Ethan Perlstein, chief scientific officer at the Christopher and Dana Reeve Foundation.

A nonprofit, the A-T Children's Project, funded most of the cost of designing and making Ipek's drug. For Brad Margus, who created the foundation in 1993 after his two sons were diagnosed with A-T, the change between then and now couldn't be more dramatic. "We've raised so much money, we've funded so much research, but it's so frustrating that the biology just kept getting more and more complex," he says. "Now, we're suddenly presented with this opportunity to just fix the problem at its source."

Ipek was only a few months old when her father began looking for a cure. A geneticist friend sent him a paper describing a possible treatment for her exact form of A-T, and Kuzu flew from Sunnyvale, California, to Los Angeles to meet the scientists behind the research. But they said no one had tried the drug in people: "We need many more years to make this happen," they told him.

[Photo: Timothy Yu, of Boston Children's Hospital. Courtesy photo]

Kuzu didn't have years. After he returned from Los Angeles, Margus handed him a thumb drive with a video of a talk by Yu, a doctor at Boston Children's Hospital, who described how he planned to treat a young girl with Batten disease (a different neurodegenerative condition) in what press reports would later dub "a stunning illustration of personalized genomic medicine." Kuzu realized Yu was using the very same gene technology the Los Angeles scientists had dismissed as a pipe dream.

That technology is called "antisense." Inside a cell, DNA encodes information to make proteins. Between the DNA and the protein, though, come messenger molecules called RNA that ferry the gene information out of the nucleus. Think of antisense as mirror-image molecules that stick to specific RNA messages, letter for letter, blocking them from being made into proteins. It's possible to silence a gene this way, and sometimes to overcome errors, too.

Though the first antisense drugs appeared 20 years ago, the concept achieved its first blockbuster success only in 2016. That's when a drug called nusinersen, made by Ionis, was approved to treat children with spinal muscular atrophy, a genetic disease that would otherwise kill them by their second birthday.

Yu, a specialist in gene sequencing, had not worked with antisense before, but once he'd identified the genetic error causing Batten disease in his young patient, Mila Makovec, it became apparent to him he didn't have to stop there. If he knew the gene error, why not create a gene drug?
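The "letter for letter" pairing described above is simple enough to show in code. What follows is only a toy sketch of that core idea: a real antisense oligonucleotide involves modified chemistries, careful target-site selection and safety screening, none of which is modeled here, and the example sequence is invented for illustration.

```python
# Toy sketch of the antisense idea: the oligo is the reverse complement
# of a stretch of its target message, so the two strands pair letter for
# letter. Written in DNA letters; real ASOs use modified backbones.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def antisense(target: str) -> str:
    """Return the reverse complement of a target sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(target))

target = "ATGGCCTTGAAC"   # hypothetical 12-letter stretch of a transcript
oligo = antisense(target)
print(oligo)              # GTTCAAGGCCAT, which pairs with the target
```

In this picture, retargeting a drug to a different patient's mutation amounts to swapping in a different target string while keeping the chemical scaffold, which is why the programmers in this story find the analogy to source code so natural.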
"All of a sudden a lightbulb went off," Yu says. "Couldn't one try to reverse this? It was such an appealing idea, and such a simple idea, that we basically just found ourselves unable to let that go."

Yu admits it was bold to suggest his idea to Mila's mother, Julia Vitarello. But he was not starting from scratch. In a demonstration of how modular biotech drugs may become, he based milasen on the same chemistry backbone as the Ionis drug, except he made Mila's particular mutation the genetic target. Where it had taken decades for Ionis to perfect a drug, Yu now set a record: it took only eight months for him to make milasen, try it on animals, and convince the US Food and Drug Administration to let him inject it into Mila's spine.

"What's different now is that someone like Tim Yu can develop a drug with no prior familiarity with this technology," says Art Krieg, chief scientific officer at Checkmate Pharmaceuticals, based in Cambridge, Massachusetts.

Source code

As word got out about milasen, Yu heard from more than a hundred families asking for his help. That's put the Boston doctor in a tough position. Yu has plans to try antisense to treat a dozen kids with different diseases, but he knows it's not the right approach for everyone, and he's still learning which diseases might be most amenable. And nothing is ever simple, or cheap. Each new version of a drug can behave differently and requires costly safety tests in animals.

Kuzu had the advantage that the Los Angeles researchers had already shown antisense might work. What's more, Margus agreed that the A-T Children's Project would help fund the research. But it wouldn't be fair to make the treatment just for Ipek if the foundation was paying for it. So Margus and Yu decided to test antisense drugs in the cells of three young A-T patients, including Ipek. Whichever kid's cells responded best would get picked.

[Photo: Ipek at play. She may not survive past her 20s without treatment. Credit: Matthew Monteith]

While he waited for the test results, Kuzu raised about $200,000 from friends and coworkers at Google. One day, an email landed in his in-box from another Google employee who was fundraising to help a sick child. As he read it, Kuzu felt a jolt of recognition: his coworker, Jennifer Seth, was also working with Yu.

Seth's daughter Lydia was born in December 2018. The baby, with beautiful chubby cheeks, carries a mutation that causes seizures and may lead to severe disabilities. Seth's husband Rohan, a well-connected Silicon Valley entrepreneur, refers to the problem as a "tiny random mutation" in her "source code." The Seths have raised more than $2 million, much of it from co-workers.

Custom drug

By then, Yu was ready to give Kuzu the good news: Ipek's cells had responded the best. So last September the family packed up and moved from California to Cambridge, Massachusetts, so Ipek could start getting atipeksen. The toddler got her first dose this January, under general anesthesia, through a lumbar puncture into her spine.

After a year, the Kuzus hope to learn whether or not the drug is helping. Doctors will track her brain volume and measure biomarkers in Ipek's cerebrospinal fluid as a readout of how her disease is progressing. And a team at Johns Hopkins will help compare her movements with those of other kids, both with and without A-T, to observe whether the expected disease symptoms are delayed.

One serious challenge facing gene drugs for individuals is that short of a healing miracle, it may ultimately be impossible to be sure they really work.
That’s because the speed with which diseases like A-T progress can vary widely from person to person. Proving a drug is effective, or revealing that it’s a dud, almost always requires collecting data from many patients, not just one. “It’s important for parents who are ready to pay anything, try anything, to appreciate that experimental treatments often don’t work,” says Holly Fernandez Lynch, a lawyer and ethicist at the University of Pennsylvania. “There are risks. Trying one could foreclose other options and even hasten death.” Kuzu says his family weighed the risks and benefits. “Since this is the first time for this kind of drug, we were a little scared,” he says. But, he concluded, “there’s nothing else to do. This is the only thing that might give hope to us and the other families.” Another obstacle to ultra-personal drugs is that insurance won’t pay for them. And so far, pharmaceutical companies aren’t interested either. They prioritize drugs that can be sold thousands of times, but as far as anyone knows, Ipek is the only person alive with her exact mutation. That leaves families facing extraordinary financial demands that only the wealthy, lucky, or well connected can meet. Developing Ipek’s treatment has already cost $1.9 million, Margus estimates. Some scientists think agencies such as the US National Institutes of Health should help fund the research, and will press their case at a meeting in Bethesda, Maryland, in April. Help could also come from the Food and Drug Administration, which is developing guidelines that may speed the work of doctors like Yu. The agency will receive updates on Mila and other patients if any of them experience severe side effects. The FDA is also considering giving doctors more leeway to modify genetic drugs to try in new patients without securing new permissions each time. Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research, likens traditional drug manufacturing to factories that mass-produce identical T-shirts. But, he points out, it’s now possible to order an individual basic T-shirt embroidered with a company logo. So drug manufacturing could become more customized too, Marks believes. Custom drugs carrying exactly the message a sick kid’s body needs? If we get there, credit will go to companies like Ionis that developed the new types of gene medicine. But it should also go to the Kuzus—and to Brad Margus, Rohan Seth, Julia Vitarello, and all the other parents who are trying to save their kids. In doing so, they are turning hyper-personalized medicine into reality. Erika Check Hayden is director of the science communication program at the University of California, Santa Cruz. This story was part of our March 2020 issue. Inventing the Universe (The New Atlantis) Winter 2020 David Kordahl Two new books on quantum theory could not, at first glance, seem more different.
The first, Something Deeply Hidden, is by Sean Carroll, a physicist at the California Institute of Technology, who writes, “As far as we currently know, quantum mechanics isn’t just an approximation of the truth; it is the truth.” The second, Einstein’s Unfinished Revolution, is by Lee Smolin of the Perimeter Institute for Theoretical Physics in Ontario, who insists that “the conceptual problems and raging disagreements that have bedeviled quantum mechanics since its inception are unsolved and unsolvable, for the simple reason that the theory is wrong.” Given this contrast, one might expect Carroll and Smolin to emphasize very different things in their books. Yet the books mirror each other, down to chapters that present the same quantum demonstrations and the same quantum parables. Carroll and Smolin both agree on the facts of quantum theory, and both gesture toward the same historical signposts. Both consider themselves realists, in the tradition of Albert Einstein. They want to finish his work of unifying physical theory, making it offer one coherent description of the entire world, without ad hoc exceptions to cover experimental findings that don’t fit. By the end, both suggest that the completion of this project might force us to abandon the idea of three-dimensional space as a fundamental structure of the universe. But with Carroll claiming quantum mechanics as literally true and Smolin claiming it as literally false, there must be some underlying disagreement. And of course there is. Traditional quantum theory describes things like electrons as smeary waves whose measurable properties only become definite in the act of measurement. Sean Carroll is a supporter of the “Many Worlds” interpretation of this theory, which claims that the multiple measurement possibilities all simultaneously exist. Some proponents of Many Worlds describe the existence of a “multiverse” that contains many parallel universes, but Carroll prefers to describe a single, radically enlarged universe that contains all the possible outcomes running alongside each other as separate “worlds.” But the trouble, says Lee Smolin, is that in the real world as we observe it, these multiple possibilities never appear — each measurement has a single outcome. Smolin takes this fact as evidence that quantum theory must be wrong, and argues that any theory that supersedes quantum mechanics must do away with these multiple possibilities. So how can such similar books, informed by the same evidence and drawing upon the same history, reach such divergent conclusions? Well, anyone who cares about politics knows that this type of informed disagreement happens all the time, especially, as with Carroll and Smolin, when the disagreements go well beyond questions that experiments could possibly resolve. But there is another problem here. The question that both physicists gloss over is that of just how much we should expect to get out of our best physical theories. This question pokes through the foundation of quantum mechanics like rusted rebar, often luring scientists into arguments over parables meant to illuminate the obscure. With this in mind, let’s try a parable of our own, a cartoon of the quantum predicament. In the tradition of such parables, it’s a story about knowing and not knowing. We fade in on a scientist interviewing for a job. Let’s give this scientist a name, Bobby Alice, that telegraphs his helplessness to our didactic whims. 
During the part of the interview where the Reality Industries rep asks him if he has any questions, none of them are answered, except the one about his starting salary. This number is high enough to convince Bobby the job is right for him. Knowing so little about Reality Industries, everything Bobby sees on his first day comes as a surprise, starting with the campus’s extensive security apparatus of long gated driveways, high tree-lined fences, and all the other standard X-Files elements. Most striking of all is his assigned building, a structure whose paradoxical design merits a special section of the morning orientation. After Bobby is given his project details (irrelevant for us), black-suited Mr. Smith–types tell him the bad news: So long as he works at Reality Industries, he may visit only the building’s fourth floor. This, they assure him, is standard, for all employees but the top executives. Each project team has its own floor, and the teams are never allowed to intermix. The instructors follow this with what they claim is the good news. Yes, they admit, this tightly tiered approach led to worker distress in the old days, back on the old campus, where the building designs were brutalist and the depression rates were high. But the new building is designed to subvert such pressures. The trainers lead Bobby up to the fourth floor, up to his assignment, through a construction unlike any research facility he has ever seen. The walls are translucent and glow on all sides. So do the floor and ceiling. He is guided to look up, where he can see dark footprints roving about, shadows from the project team on the next floor. “The goal here,” his guide remarks, “is to encourage a sort of cultural continuity, even if we can’t all communicate.” Over the next weeks, Bobby Alice becomes accustomed to the silent figures floating above him. Eventually, he comes to enjoy the fourth floor’s communal tracking of their fifth-floor counterparts, complete with invented names, invented personalities, invented purposes. He makes peace with the possibility that he is himself a fantasy figure for the third floor. Then, one day, strange lights appear in a corner of the ceiling. Naturally phlegmatic, Bobby Alice simply takes notes. But others on the fourth floor are noticeably less calm. The lights seem not to follow any known standard of the physics of footfalls, with lights of different colors blinking on and off seemingly at random, yet still giving the impression not merely of a constructed display but of some solid fixture in the fifth-floor commons. Some team members, formerly of the same anti-philosophical bent as most hires, now spend their coffee breaks discussing increasingly esoteric metaphysics. Productivity declines. Meanwhile, Bobby has set up a camera to record data. As a work-related extracurricular, he is able in the following weeks to develop a general mathematical description that captures an unexpected order in the flashing lights. This description does not predict exactly which lights will blink when, but, by telling a story about what’s going on between the frames captured by the camera, he can predict what sorts of patterns are allowed, how often, and in what order. Does this solve the mystery? Apparently it does. Conspiratorial voices on the fourth floor go quiet. The “Alice formalism” immediately finds other applications, and Reality Industries gives Dr. Alice a raise. They give him everything he could want — everything except access to the fifth floor. 
In time, Bobby Alice becomes a fourth-floor legend. Yet as the years pass — and pass with the corner lights as an apparently permanent fixture — new employees occasionally massage the Alice formalism to unexpected ends. One worker discovers that he can rid the lights of their randomness if he imagines them as the reflections from a tank of iridescent fish, with the illusion of randomness arising in part because it’s a 3-D projection on a 2-D ceiling, and in part because the fish swim funny. The Alice formalism offers a series of color maps showing the different possible light patterns that might appear at any given moment, and another prominent interpreter argues, with supposed sincerity (although it’s hard to tell), that actually not one but all of the maps occur at once — each in parallel branching universes generated by that spooky alien light source up on the fifth floor. As the interpretations proliferate, Reality Industries management occasionally finds these side quests to be a drain on corporate resources. But during the Alice decades, the fourth floor has somehow become the company’s most productive. Why? Who knows. Why fight it? The history of quantum mechanics, being a matter of record, obviously has more twists than any illustrative cartoon can capture. Readers interested in that history are encouraged to read Adam Becker’s recent retelling, What Is Real?, which was reviewed in these pages (“Make Physics Real Again,” Winter 2019). But the above sketch is one attempt to capture the unusual flavor of this history. Like the fourth-floor scientists in our story who, sight unseen, invented personas for all their fifth-floor counterparts, nineteenth-century physicists are often caricatured as having oversold their grasp on nature’s secrets. But longstanding puzzles — puzzles involving chemical spectra and atomic structure rather than blinking ceiling lights — led twentieth-century pioneers like Niels Bohr, Wolfgang Pauli, and Werner Heisenberg to invent a new style of physical theory. As with the formalism of Bobby Alice, mature quantum theories in this tradition were abstract, offering probabilistic predictions for the outcomes of real-world measurements, while remaining agnostic about what it all meant, about what fundamental reality undergirded the description. From the very beginning, a counter-tradition associated with names like Albert Einstein, Louis de Broglie, and Erwin Schrödinger insisted that quantum models must ultimately capture something (but probably not everything) about the real stuff moving around us. This tradition gave us visions of subatomic entities as lumps of matter vibrating in space, with the sorts of orbital visualizations one first sees in high school chemistry. But once the various quantum ideas were codified and physicists realized that they worked remarkably well, most research efforts turned away from philosophical agonizing and toward applications. The second generation of quantum theorists, unburdened by revolutionary angst, replaced every part of classical physics with a quantum version. As Max Planck famously wrote, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die.” Since this inherited framework works well enough to get new researchers started, the question of what it all means is usually left alone. Of course, this question is exactly what most non-experts want answered. 
For past generations, books with titles like The Tao of Physics and Quantum Reality met this demand, with discussions that wildly mixed conventions of scientific reportage with wisdom literature. Even once quantum theories themselves became familiar, interpretations of them were still new enough to be exciting. Today, even this thrill is gone. We are now in the part of the story where no one can remember what it was like not to have the blinking lights on the ceiling. Despite the origins of quantum theory as an empirical framework — a container flexible enough to wrap around whatever surprises experiments might uncover — its success has led today’s theorists to regard it as fundamental, a base upon which further speculations might be built. Regaining that old feeling of disorientation now requires some extra steps. As interlopers in an ongoing turf war, modern explainers of quantum theory must reckon both with arguments like Niels Bohr’s, which emphasize the theory’s limits on knowledge, and with criticisms like Albert Einstein’s, which demand that the theory represent the real world. Sean Carroll’s Something Deeply Hidden pitches itself to both camps. The title stems from an Einstein anecdote. As “a child of four or five years,” Einstein was fascinated by his father’s compass. He concluded, “Something deeply hidden had to be behind things.” Carroll agrees with this, but argues that the world at its roots is quantum. We only need courage to apply that old Einsteinian realism to our quantum universe. Carroll is a prolific popularizer — alongside his books, his blog, and his Twitter account, he has also recorded three courses of lectures for general audiences, and for the last year has released a weekly podcast. His new book is appealingly didactic, providing a sustained defense of the Many Worlds interpretation of quantum mechanics, first offered by Hugh Everett III as a graduate student in the 1950s. Carroll maintains that Many Worlds is just quantum mechanics, and he works hard to convince us that supporters aren’t merely perverse. In the early days of electrical research, followers of James Clerk Maxwell were called Maxwellians, but today all physicists are Maxwellians. If Carroll’s project pans out, someday we’ll all be Everettians. Standard applications of quantum theory follow a standard logic. A physical system is prepared in some initial condition, and modeled using a mathematical representation called a “wave function.” Then the system changes in time, and these changes, governed by the Schrödinger equation, are tracked in the system’s wave function. But when we interpret the wave function in order to generate a prediction of what we will observe, we get only probabilities of possible experimental outcomes. Carroll insists that this quantum recipe isn’t good enough. It may be sufficient if we care only to predict the likelihood of various outcomes for a given experiment, but it gives us no sense of what the world is like. “Quantum mechanics, in the form in which it is currently presented in physics textbooks,” he writes, “represents an oracle, not a true understanding.” Most of the quantum mysteries live in the process of measurement. Questions of exactly how measurements force determinate outcomes, and of exactly what we sweep under the rug with that bland word “measurement,” are known collectively in quantum lore as the “measurement problem.” Quantum interpretations are distinguished by how they solve this problem. 
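To put the recipe just described in symbols (a textbook summary, not anything specific to Carroll's presentation): the wave function \(\Psi\) evolves smoothly and deterministically under the Schrödinger equation, while the Born rule is what converts it into experimental predictions,

i\hbar \frac{\partial}{\partial t}\Psi = \hat{H}\Psi \qquad \text{(deterministic evolution between measurements)}

P(a) = |\langle a|\Psi\rangle|^2 \qquad \text{(probability of observing outcome } a\text{)}

where \(\hat{H}\) is the Hamiltonian, the operator encoding the system's energy. Stated this way, the measurement problem sits in the seam between the two lines: the first never selects a single outcome, and the theory gives no precise rule for when the second should take over.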
Usually, solutions involve rejecting some key element of common belief. In the Many Worlds interpretation, the key belief we are asked to reject is that of one single world, with one single future. The version of the Many Worlds solution given to us in Something Deeply Hidden sidesteps the history of the theory in favor of a logical reconstruction. What Carroll enunciates here is something like a quantum minimalism: “There is only one wave function, which describes the entire system we care about, all the way up to the ‘wave function of the universe’ if we’re talking about the whole shebang.” Putting this another way, Carroll is a realist about the quantum wave function, and suggests that this mathematical object simply is the deep-down thing, while everything else, from particles to planets to people, is merely a downstream effect. (Sorry, people!) The world of our experience, in this picture, is just a tiny sliver of the real one, where all possible outcomes — all outcomes for which the usual quantum recipe assigns a non-zero probability — continue to exist, buried somewhere out of view in the universal wave function. Hence the “Many Worlds” moniker. What we experience as a single world, chock-full of foreclosed opportunities, Many Worlders understand as but one swirl of mist foaming off an ever-breaking wave. The position of Many Worlds may not yet be common, but neither is it new. Carroll, for his part, is familiar enough with it to be blasé, presenting it in the breezy tone of a man with all the answers. The virtue of his presentation is that whether or not you agree with him, he gives you plenty to consider, including expert glosses on ongoing debates in cosmology and field theory. But Something Deeply Hidden still fails where it matters. “If we train ourselves to discard our classical prejudices, and take the lessons of quantum mechanics at face value,” Carroll writes near the end, “we may eventually learn how to extract our universe from the wave function.” But shouldn’t it be the other way around? Why should we have to work so hard to “extract our universe from the wave function,” when the wave function itself is an invention of physicists, not the inerrant revelation of some transcendental truth? Interpretations of quantum theory live or die on how well they are able to explain its success, and the most damning criticism of the Many Worlds interpretation is that it’s hard to see how it improves on the standard idea that probabilities in quantum theory are just a way to quantify our expectations about various measurement outcomes. Carroll argues that, in Many Worlds, probabilities arise from self-locating uncertainty: “You know everything there is to know about the universe, except where you are within it.” During a measurement, “a single world splits into two, and there are now two people where I used to be just one.” “For a brief while, then, there are two copies of you, and those two copies are precisely identical. Each of them lives on a distinct branch of the wave function, but neither of them knows which one it is on.” The job of the physicist is then to calculate the chance that he has ended up on one branch or another — which produces the probabilities of the various measurement outcomes.
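A minimal worked example of how this is supposed to recover the usual probabilities (my illustration, in standard notation rather than anything taken from the book): consider a single qubit prepared in a superposition,

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.

After a measurement interaction there is still just one wave function, but it now has two branches, one in which the observer saw 0 and one in which she saw 1. On Carroll's account, the identical copies of the observer on each branch, not yet knowing which branch they occupy, should assign credence |\alpha|^2 to the first and |\beta|^2 to the second, reproducing the Born-rule probabilities of ordinary quantum mechanics.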
If, alongside Carroll, you convince yourself that it is reasonable to suppose that these worlds exist outside our imaginations, you still might conclude, as he does, that “at the end of the day it doesn’t really change how we should go through our lives.” This conclusion comes in a chapter called “The Human Side,” where Carroll also dismisses the possibility that humans might have a role in branching the wave function, or indeed that we have any ultimate agency: “While you might be personally unsure what choice you will eventually make, the outcome is encoded in your brain.” These views are rewarmed arguments from his previous book, The Big Picture, which I reviewed in these pages (“Pop Goes the Physics,” Spring 2017) and won’t revisit here. Although this book is unlikely to turn doubters of Many Worlds into converts, it is a credit to Carroll that he leaves one with the impression that the doctrine is probably consistent, whether or not it is true. But internal consistency has little power against an idea that feels unacceptable. For doctrines like Many Worlds, with key claims that are in principle unobservable, some of us will always want a way out. Lee Smolin is one such seeker for whom Many Worlds realism — or “magical realism,” as he likes to call it — is not real enough. In his new book, Einstein’s Unfinished Revolution, Smolin assures us that “however weird the quantum world may be, it need not threaten anyone’s belief in commonsense realism. It is possible to be a realist while living in the quantum universe.” But if you expect “commonsense realism” by the end of his book, prepare for a surprise. Smolin is less congenial than Carroll, with a brooding vision of his fellow scientists less as fellow travelers and more as members of an “orthodoxy of the unreal,” as Smolin stirringly puts it. Smolin is best known for his role as doomsayer about string theory — his 2006 book The Trouble with Physics functioned as an entertaining jeremiad. But while his books all court drama and are never boring, that often comes at the expense of argumentative care. Einstein’s Unfinished Revolution can be summarized briefly. Smolin states early on that quantum theory is wrong: It gives probabilities for many and various measurement outcomes, whereas the world of our observation is solid and singular. Nevertheless, quantum theory can still teach us important lessons about nature. For instance, Smolin takes at face value the claim that entangled particles far apart in the universe can influence each other instantaneously, unbounded by the speed of light. This ability of quantum entities to be correlated while separated in space is technically called “nonlocality,” which Smolin enshrines as a fundamental principle. And while he takes inspiration from an existing nonlocal quantum theory, he rejects it for violating other favorite physical principles. Instead, he elects to redo physics from scratch, proposing partial theories that would allow his favored ideals to survive. This is, of course, an insane act of hubris. But no red line separates the crackpot from the visionary in theoretical physics. Because Smolin presents himself as a man up against the status quo, his books are as much autobiography as popular science, with personality bleeding into intellectual commitments. Smolin’s last popular book, Time Reborn (2013), showed him changing his mind about the nature of time after doing bedtime with his son.
This time around, Smolin tells us in the preface about how he came to view the universe as nonlocal: I vividly recall that when I understood the proof of the theorem, I went outside in the warm afternoon and sat on the steps of the college library, stunned. I pulled out a notebook and immediately wrote a poem to a girl I had a crush on, in which I told her that each time we touched there were electrons in our hands which from then on would be entangled with each other. I no longer recall who she was or what she made of my poem, or if I even showed it to her. But my obsession with penetrating the mystery of nonlocal entanglement, which began that day, has never since left me. The book never seriously questions whether the arguments for nonlocality should convince us; Smolin’s experience of conviction must stand in for our own. These personal detours are fascinating, but do little to convince skeptics. Once you start turning the pages of Einstein’s Unfinished Revolution, ideas fly by fast. First, Smolin gives us a tour of the quantum fundamentals — entanglement, nonlocality, and all that. Then he provides a thoughtful overview of solutions to the measurement problem, particularly those of David Bohm, whose complex legacy he lingers over admiringly. But by the end, Smolin abandons the plodding corporate truth of the scientist for the hope of a private perfection. Many physicists have never heard of Bohm’s theory, and some who have still conclude that it’s worthless. Bohm attempted to salvage something like the old classical determinism, offering a way to understand measurement outcomes as caused by the motion of particles, which in turn are guided by waves. This conceptual simplicity comes at the cost of brazen nonlocality, and an explicit dualism of particles and waves. Einstein called the theory a “physical fairy-tale for children”; Robert Oppenheimer declared about Bohm that “we must agree to ignore him.” Bohm’s theory is important to Smolin mainly as a prototype, to demonstrate that it’s possible to situate quantum mechanics within a single world — unlike Many Worlds, which Smolin seems to dislike less for physical than for ethical reasons: “It seems to me that the Many Worlds Interpretation offers a profound challenge to our moral thinking because it erases the distinction between the possible and the actual.” In his survey, Smolin sniffs each interpretation as he passes it, looking for a whiff of the real quantum story, which will preserve our single universe while also maintaining the virtues of all the partial successes. When Smolin finally explains his own idiosyncratic efforts, his methods — at least in the version he has dramatized here — resemble some wild descendant of Cartesian rationalism. From his survey, Smolin lists the principles he would expect from an acceptable alternative to quantum theory. He then reports back to us on the incomplete models he has found that will support these principles. Smolin’s tour leads us all over the place, from a review of Leibniz’s Monadology (“shockingly modern”), to a new law of physics he proposes (the “principle of precedence”), to a solution to the measurement problem involving nonlocal interactions among all similar systems everywhere in the universe. Smolin concludes with the grand claim that “the universe consists of nothing but views of itself, each from an event in its history.” Fine. 
Maybe there’s more to these ideas than a casual reader might glean, but after a few pages of sentences like, “An event is something that happens,” hope wanes. For all their differences, Carroll and Smolin similarly insist that, once the basic rules governing quantum systems are properly understood, the rest should fall into place. “Once we understand what’s going on for two particles, the generalization to 10^88 particles is just math,” Carroll assures us. Smolin is far less certain that physics is on the right track, but he, too, believes that progress will come with theoretical breakthroughs. “I have no better answer than to face the blank notebook,” Smolin writes. This was the path of Bohr, Einstein, Bohm, and others. “Ask yourself which of the fundamental principles of the present canon must survive the coming revolution. That’s the first page. Then turn again to a blank page and start thinking.” Physicists are always tempted to suppose that successful predictions prove that a theory describes how the world really is. And why not? Denying that quantum theory captures something essential about the character of those entities outside our heads that we label with words like “atoms” and “molecules” and “photons” seems far more perverse, as an interpretive strategy, than any of the mainstream interpretations we’ve already discussed. Yet one can admit that something is captured by quantum theory without jumping immediately to the assertion that everything must flow from it. An invented language doesn’t need to be universal to be useful, and it’s smart to keep on honing tools for thinking that have historically worked well. As an old mentor of mine, John P. Ralston, wrote in his book How to Understand Quantum Mechanics, “We don’t know what nature is, and it is not clear whether quantum theory fully describes it. However, it’s not the worst thing. It has not failed yet.” This seems like the right attitude to take. Quantum theory is a fabulously rich subject, but the fact that it has not failed yet does not allow us to generalize its results indefinitely. There is value in the exercises that Carroll and Smolin perform, in their attempts to imagine principled and orderly universes, to see just how far one can get with a straitjacketed imagination. But by assuming that everything is captured by the current version of quantum theory, Carroll risks credulity, foreclosing genuinely new possibilities. And by assuming that everything is up for grabs, Smolin risks paranoia, ignoring what is already understood. Perhaps the agnostics among us are right to settle in as permanent occupants of Reality Industries’ fourth floor. We can accept that scientists have a role in creating stories that make sense, while also appreciating the possibility that the world might not be made of these stories. To the big, unresolved questions — questions about where randomness enters in the measurement process, or about how much of the world our physical theories might capture — we can offer only a laconic who knows? The world is filled with flashing lights, and we should try to find some order in them. Scientific success often involves inventing a language that makes the strange sensible, warping intuitions along the way. And while this process has allowed us to make progress, we should never let our intuitions get so strong that we stop scanning the ceiling for unexpected dazzlements. David Kordahl is a graduate student in physics at Arizona State University.
David Kordahl, “Inventing the Universe,” The New Atlantis, Number 61, Winter 2020, pp. 114-124. 5 Pandemic Mistakes We Keep Repeating (The Atlantic) Zeynep Tufekci February 26, 2021 We can learn from our failures. Climate crisis: world is at its hottest for at least 12,000 years – study (The Guardian) Damian Carrington, Environment editor Wed 27 Jan 2021 16.00 GMT The world’s continuously warming climate is revealed also in contemporary ice melt at glaciers, such as with this one in the Kenai mountains, Alaska (seen September 2019). Photograph: Joe Raedle/Getty Images The planet is hotter now than it has been for at least 12,000 years, a period spanning the entire development of human civilisation, according to research. Analysis of ocean surface temperatures shows human-driven climate change has put the world in “uncharted territory”, the scientists say. The planet may even be at its warmest for 125,000 years, although data that far back is less certain. The research, published in the journal Nature, reached these conclusions by solving a longstanding puzzle known as the “Holocene temperature conundrum”. Climate models have indicated continuous warming since the last ice age ended 12,000 years ago and the Holocene period began. But temperature estimates derived from fossil shells showed a peak of warming 6,000 years ago and then a cooling, until the industrial revolution sent carbon emissions soaring. This conflict undermined confidence in the climate models and the shell data. But it was found that the shell data reflected only hotter summers and missed colder winters, and so was giving misleadingly high annual temperatures. “We demonstrate that global average annual temperature has been rising over the last 12,000 years, contrary to previous results,” said Samantha Bova, at Rutgers University–New Brunswick in the US, who led the research. “This means that the modern, human-caused global warming period is accelerating a long-term increase in global temperatures, making today completely uncharted territory. It changes the baseline and emphasises just how critical it is to take our situation seriously.” The world may be hotter now than any time since about 125,000 years ago, which was the last warm period between ice ages. However, scientists cannot be certain as there is less data relating to that time. One study, published in 2017, suggested that global temperatures were last as high as today 115,000 years ago, but that was based on less data. The new research is published in the journal Nature and examined temperature measurements derived from the chemistry of tiny shells and algal compounds found in cores of ocean sediments, and solved the conundrum by taking account of two factors. First, the shells and organic materials had been assumed to represent the entire year but in fact were most likely to have formed during summer when the organisms bloomed. Second, there are well-known predictable natural cycles in the heating of the Earth caused by eccentricities in the orbit of the planet. Changes in these cycles can lead to summers becoming hotter and winters colder while average annual temperatures change only a little.
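The seasonal-bias argument is easy to see with a toy calculation (my own illustration; the numbers are invented and the actual study uses far more careful proxy modeling). Suppose the seasonal cycle's amplitude shrinks over the Holocene while the annual baseline slowly warms; a proxy that records mainly the summer peak will then show cooling even as the true annual mean rises:

# Toy model: summer-biased proxies vs. true annual means.
# All values are invented for illustration only.

def summer_proxy(baseline, amplitude):
    # A shell formed mostly in summer records roughly the seasonal peak.
    return baseline + amplitude

# (baseline annual mean in deg C, seasonal amplitude in deg C)
early_holocene = (14.0, 4.0)   # strong seasonality
late_holocene  = (14.5, 2.5)   # weaker seasonality, warmer baseline

print("true annual mean: %.1f -> %.1f (warming)"
      % (early_holocene[0], late_holocene[0]))
print("summer proxy:     %.1f -> %.1f (apparent cooling)"
      % (summer_proxy(*early_holocene), summer_proxy(*late_holocene)))

Here the annual mean rises from 14.0 to 14.5 while the summer-biased reading falls from 18.0 to 17.0, which is exactly the shape of the conundrum the authors resolve.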
Combining these insights showed that the apparent cooling after the warm peak 6,000 years ago, revealed by shell data, was misleading. The shells were in fact only recording a decline in summer temperatures, but the average annual temperatures were still rising slowly, as indicated by the models. “Now they actually match incredibly well and it gives us a lot of confidence that our climate models are doing a really good job,” said Bova. The study looked only at ocean temperature records, but Bova said: “The temperature of the sea surface has a really controlling impact on the climate of the Earth. If we know that, it is the best indicator of what global climate is doing.” She led a research voyage off the coast of Chile in 2020 to take more ocean sediment cores and add to the available data. Jennifer Hertzberg, of Texas A&M University in the US, said: “By solving a conundrum that has puzzled climate scientists for years, Bova and colleagues’ study is a major step forward. Understanding past climate change is crucial for putting modern global warming in context.” Lijing Cheng, at the International Centre for Climate and Environment Sciences in Beijing, China, recently led a study that showed that in 2020 the world’s oceans reached their hottest level yet in instrumental records dating back to the 1940s. More than 90% of global heating is taken up by the seas. Cheng said the new research was useful and intriguing. It provided a method to correct temperature data from shells and could also enable scientists to work out how much heat the ocean absorbed before the industrial revolution, a factor little understood. The level of carbon dioxide today is at its highest for about 4m years and is rising at the fastest rate for 66m years. Further rises in temperature and sea level are inevitable until greenhouse gas emissions are cut to net zero. ‘Star Wars without Darth Vader’ – why the UN climate science story names no villains (Climate Home News) Published on 12/01/2021, 4:10pm By Joe Lo The End of Theory: The Data Deluge Makes the Scientific Method Obsolete (WIRED) Chris Anderson, Science, 06.23.2008 12:00 PM. Chris Anderson is the editor in chief of Wired. From the same Petabyte Age package (WIRED Staff): Feeding the Masses: Data In, Crop Predictions Out; Chasing the Quark: Sometimes You Need to Throw Information Away; Winning the Lawsuit: Data Miners Dig for Dirt; Tracking the News: A Smarter Way to Predict Riots and Wars; Spotting the Hot Zones: Now We Can Monitor Epidemics Hour by Hour; Sorting the World: Google Invents New Way to Manage Data; Watching the Skies: Space Is Big — But Not Too Big to Map; Scanning Our Skeletons: Bone Images Show Wear and Tear; Tracking Air Fares: Elaborate Algorithms Predict Ticket Prices; Predicting the Vote: Pollsters Identify Tiny Voting Blocs; Pricing Terrorism: Insurers Gauge Risks, Costs; Visualizing Big Data: Bar Charts for Words. Big data and the end of theory? (The Guardian) Mark Graham, Fri 9 Mar 2012 14.39 GMT Science and Policy Collide During the Pandemic (The Scientist) Diana Kwon Sep 1, 2020 Indigenous knowledge still undervalued – study (EurekaAlert!)
News Release 3-Sep-2020 Respondents describe a power imbalance in environmental decision-making Anglia Ruskin University New research has found that Indigenous knowledge is regularly underutilised and misunderstood when making important environmental decisions. Published in a special edition of the journal People and Nature, the study investigates how to improve collaborations between Indigenous knowledge holders and scientists, and recommends that greater equity is necessary to better inform decision-making and advance common environmental goals. The research, led by Dr Helen Wheeler of Anglia Ruskin University (ARU), involved participants from the Arctic regions of Norway, Sweden, Greenland, Russia, Canada, and the United States. Indigenous peoples inhabit 25% of the land surface and have strong links to their environment, meaning they can provide unique insights into natural systems. However, the greater resources available to scientists often create a power imbalance when environmental decisions are made. The study’s Indigenous participants identified numerous problems, including that Indigenous knowledge is often perceived as less valuable than scientific knowledge and added as anecdotes to scientific studies. They also felt that Indigenous knowledge was being forced into frameworks that did not match Indigenous people’s understanding of the world and was often misinterpreted through scientific validation. One participant expressed the importance of Indigenous knowledge being reviewed by Indigenous knowledge holders, rather than by scientists. Another concern was that while funding for Arctic science was increasing, the same was not happening for research rooted in Indigenous knowledge or conducted by Indigenous peoples. Gunn-Britt Retter, Head of the Arctic and Environmental Unit of the Saami Council, said: “Although funding for Arctic science is increasing, we are not experiencing this same trend for Indigenous knowledge research. “Sometimes Indigenous organisations feel pressured to agree to requests for collaboration with scientists so that we can have some influence in decision-making, even when these collaborations feel tokenistic and do not meet the needs of our communities. This is because there is a lack of funding for Indigenous-led research.” Victoria Buschman, Inupiaq Inuit wildlife and conservation biologist at the University of Washington, said: “Much of the research community has not made adequate space for Indigenous knowledge and continues to undermine its potential for informing decision-making. We must let go of the narrative that working with Indigenous knowledge is too challenging.” The study concludes that values, laws, institutions, funding and mechanisms of support that create equitable power-relations between collaborators are necessary for successful relationships between scientists and Indigenous groups. Lead author Dr Helen Wheeler, Lecturer in Zoology at Anglia Ruskin University (ARU), said: “The aim of this study was to understand how to work better with Indigenous knowledge. For those who do research on Indigenous people’s land, such as myself, I think this is an important question to ask. “Our study suggests there are still misconceptions about Indigenous knowledge, particularly around the idea that it is limited in scope or needs verifying by science to be useful. Building capacity for research within Indigenous institutions is also a high priority, which will ensure Indigenous groups have greater power when it comes to informed decision-making.
“Indigenous knowledge is increasingly used in decision-making at many levels from developing international policy on biodiversity to local decisions about how to manage wildlife. However, as scientists and decision-makers use knowledge, they must do so in a way that reflects the needs of Indigenous knowledge holders. This should lead to better decisions and more equitable and productive partnerships.” Study suggests religious belief does not conflict with interest in science, except among Americans (PsyPost) Beth Ellwood – August 31, 2020 A new study suggests that the conflict between science and religion is not universal but instead depends on the historical and cultural context of a given country. The findings were published in Social Psychological and Personality Science. It is widely believed that religion and science are incompatible, with each belief system involving contradictory understandings of the world. However, as study author Jonathon McPhetres and his team point out, the majority of research on this topic has been conducted in the United States. “One of my main areas of research is trying to improve trust in science and finding ways to better communicate science. In order to do so, we must begin to understand who is more likely to be skeptical towards science (and why),” McPhetres, an assistant professor of psychology at Durham University, told PsyPost. In addition, “there’s a contradiction between scientific information and many traditional religious teachings; the conflict between science and religion also seems more pronounced in some areas and for some people (conservative/evangelical Christians). So, I have partly been motivated to see exactly how true this intuition is.” First, nine initial studies that involved a total of 2,160 Americans found that subjects who scored higher in religiosity showed more negative implicit and explicit attitudes about science. Those high in religiosity also showed less interest in science-related activities and a decreased interest in reading or learning about science. “It’s important to understand that these results don’t show that religious people hate or dislike science. Instead, they are simply less interested when compared to a person who is less religious,” McPhetres said. Next, the researchers analyzed data from the World Values Survey (WVS) involving 66,438 subjects from 60 different countries. This time, when examining the relationship between religious belief and interest in science, correlations were less obvious. While on average, the two concepts were negatively correlated, the strength of the relationship was small and varied by country. Finally, the researchers collected additional data from 1,048 subjects from five countries: Brazil, the Philippines, South Africa, Sweden, and the Czech Republic. Here, the relationship between religiosity and attitudes about science was, again, small. Furthermore, greater religiosity was actually related to greater interest in science.
Based on these findings from 11 different studies, the authors suggest that the conflict between religion and science, while apparent in the United States, may not generalize to other parts of the world, a conclusion that “severely undermines the hypothesis that science and religion are necessarily in conflict.” Given that the study employed various assessments of belief in science, including implicit attitudes toward science, interest in activities related to science, and choice of science-related topics among a list of other topics, the findings are particularly compelling. “There are many barriers to science that need not exist. If we are to make our world a better place, we need to understand why some people may reject science and scientists so that we can overcome that skepticism. Everyone can contribute to this goal by talking about science and sharing cool scientific discoveries and information with people every chance you get,” McPhetres said. The study, “Religious Americans Have Less Positive Attitudes Toward Science, but This Does Not Extend to Other Cultures”, was authored by Jonathon McPhetres, Jonathan Jong, and Miron Zuckerman. In a polarized world, what does ‘follow the science’ mean? (The Christian Science Monitor) Why We Wrote This: Science is all about asking questions, but when scientific debates become polarized it can be difficult for average citizens to interpret the merits of various arguments. August 12, 2020 By Christa Case Bryant and Story Hinckley, staff writers Should kids go back to school? One South Korean contact-tracing study suggests that is a bad idea. In analyzing 5,706 COVID-19 patients and their 59,073 contacts, it concluded – albeit with a significant caveat – that 10- to 19-year-olds were the most contagious age group within their household. A study out of Iceland, meanwhile, found that children under 10 are less likely to get infected and less likely than adults to become ill if they are infected. Coauthor Kári Stefánsson, who is CEO of a genetics company tracking the disease’s spread, said the study didn’t find a single instance of a child infecting a parent. So when leaders explain their decision on whether to send kids back to school by saying they’re “following the science,” citizens could be forgiven for asking what science they’re referring to exactly – and how sure they are that it’s right. But it’s become difficult to ask such questions amid the highly polarized debate around pandemic policies. While areas of consensus have emerged since the pandemic first hit the United States in March, significant gaps remain. Those uncertainties have opened the door for contrarians to gain traction in popular thought. Some Americans see them as playing a crucial role, challenging a fear-driven groupthink that is inhibiting scientific inquiry, driving unconstitutional restrictions on individual freedom and enterprise, and failing to grapple with the full societal cost of shutting down businesses, churches, and schools. Public health experts who see shutdowns as crucial to saving lives are critical of such actors, due in part to fears that they are abetting right-wing resistance to government restrictions. They have also voiced criticism that some contrarians appear driven by profit or political motives more than genuine concern about public health.
The deluge of studies and competing interpretations has left citizens in a tough spot, especially when data or conclusions are shared on Twitter or TV without full context – like a handful of puzzle pieces thrown in your face, absent any box top picture to help you fit them together. “You can’t expect the public to go through all the science, so you rely on people of authority, someone whom you trust, to parse that for you,” says Aleszu Bajak, a science and data journalist who teaches at Northeastern University in Boston. “But now you have more than just the scientists in their ivory tower throwing out all of this information. You have competing pundits, with different incentives, drawing on different science of varying quality.” The uncertainties have also posed a challenge for policymakers, who haven’t had the luxury of waiting for the full arc of scientific inquiry to be completed. “The fact is, science, like everything else, is uncertain – particularly when it comes to predictions,” says John Holdren, who served as director of the White House Office of Science and Technology Policy for the duration of President Barack Obama’s eight-year tenure. “I think seasoned, experienced decision-makers understand that. They understand that there will be uncertainties, even in the scientific inputs to their decision-making process, and they have to take those into account and they have to seek approaches that are resilient to uncertain outcomes.” Some say that in an effort to reassure citizens that shutdowns were implemented based on scientific input, policymakers weren’t transparent enough about the underlying uncertainties. “We’ve heard constantly that politicians are following the science. That’s good, of course, but … especially at the beginning, science is tentative, it changes, it’s evolving fast, it’s uncertain,” Prof. Sir Paul Nurse, director of the Francis Crick Institute in London, recently told a British Parliament committee. One of the founding partners of his independent institute is Imperial College, whose researchers’ conclusions were a leading driver of U.S. and British government shutdowns. “You can’t just have a single top line saying we’re following science,” he adds. “It has to be more dealing with what we know about the science and what we don’t.” Granite School District teachers join others gathered at the Granite School District Office on Aug. 4, 2020, in Salt Lake City, to protest the district’s plans for reopening. Teachers showed up in numbers to make sure the district’s school board knew their concerns. Photograph: Rick Bowmer/AP A focus on uncertainty One scientist who talks a lot about unknowns is John Ioannidis, a highly cited professor of medicine, epidemiology, and population health at Stanford University in California. Dr. Ioannidis, who has made a career out of poking holes in his colleagues’ research, agrees that masks and social distancing are effective but says there are open questions about how best to implement them. He has also persistently questioned just how deadly COVID-19 is and to what extent shutdowns are affecting mental health, household transmission to older family members, and the well-being of those with non-COVID-19-related conditions. It’s very difficult, he says, to do randomized trials for things like how to reopen, and different countries and U.S. states have done things in different ways. “For each one of these decisions, action plans – people said we’re using the best science,” he says.
“But how can it be that they’re all using the best science when they’re so different?” Many scientists say they and their colleagues have been open about the uncertainties, despite a highly polarized debate around the pandemic and the 2020 election season ramping up. “One of the remarkable things about this pandemic is the extent to which many people in the scientific community are explicit about what’s uncertain,” says Marc Lipsitch, a professor of epidemiology and director of the Center for Communicable Disease Dynamics at the Harvard T.H. Chan School of Public Health who is working on a study about how biases can affect COVID-19 research. “There has been a sort of hard core of scientists, even with different policy predispositions, who have been insistent on that.” “In some ways the politicized nature has made people more aware of the uncertainties,” adds Professor Lipsitch, who says Twitter skeptics push him and his colleagues to strengthen their arguments. “That’s a good voice to have in the back of your head.” For the Harvard doctor, Alex Berenson is not that voice. But a growing number of frustrated Americans have gravitated toward the former New York Times reporter’s brash, unapologetic challenging of prevailing narratives. His following on Twitter has grown from around 10,000 to more than 182,000 and counting. Mr. Berenson, who investigated big business before leaving The New York Times in 2010 to write spy novels, dives into government data, quotes from scientific studies, and takes to Twitter daily to rail against what he sees as a dangerous overreaction driven by irrational fear and abetted by a liberal media agenda and corporate interests – particularly tech companies, whose earnings have soared during the shutdowns. He refers satirically to those advocating government restrictions as “Team Apocalypse.” Dr. Lipsitch says that while public health experts pushing for lockdowns, like himself, could be considered hawks, and contrarians like Mr. Berenson doves, Mr. Berenson’s “name-calling” doesn’t take into account the fact that most scientists have at least a degree of nuance. “It’s really sort of unsophisticated to say there are two camps, but it serves some people’s interest to demonize the other side,” he says. Mr. Berenson, the author of a controversial 2019 book arguing that marijuana increases the risk of mental illness and violence, has been accused of cherry-picking data and conflating correlation and causation. Amazon initially blocked publication of his booklet “Unreported Truths about COVID-19 and Lockdowns: Part 1” until Elon Musk got wind of it and called out the tech giant on Twitter. Mr. Berenson prevailed and recently released Part 2 on the platform, which has already become Amazon’s No. 1 best-seller among history of science and medicine e-books. He strives to broaden the public’s contextual understanding of fatality rates, emphasizing that the vast majority of deaths occur among the elderly; in Italy, for instance, the median age of people who died is 81. He calls into question the reliability of COVID-19 death tolls: according to the Centers for Disease Control and Prevention, a death can be categorized as COVID-19-related even without a positive test if the disease is assumed to have caused or even contributed to it. Earlier this spring, when a prominent model was forecasting overwhelmed hospitals in New York, he pointed out that their projection was quadruple the actual need.
“Nobody had the guts or brains to ask – why is your model off by a factor of four today, and you made it last week?” says Mr. Berenson, referring to the University of Washington’s Institute for Health Metrics and Evaluation projection in early April and expressing disappointment that his former colleagues in the media are not taking a harder look at such questions. “I think unfortunately people have been blinded by ideology.” Politicization of science Amid a sense of urgency, fear, and frustration with Americans who refuse to fall in line with government restrictions as readily as their European or especially Asian counterparts, Mr. Berenson and Dr. Ioannidis have faced blowback for airing questions about those restrictions and the science behind them. Mr. Berenson’s book installments have prompted criticism that he’s looking for profits at the expense of public health, which he has denied. Dr. Ioannidis’ involvement in an April antibody study in Santa Clara, California, which purported to show that COVID-19 is much less deadly than was widely believed, was discredited by other scientists due to questions about the accuracy of the test used and a BuzzFeed report that it was partially funded by JetBlue Airways’ cofounder. Dr. Ioannidis says those questions were fully addressed within two weeks in a revised version that showed with far more extensive data that the test was accurate, and adds he had been unaware of the $5,000 donation, which came through the Stanford development office and was anonymized. The dismay grew when BuzzFeed News reported in July that a month before the Santa Clara study, he had offered to convene a small group of world-renowned scientists to meet with President Donald Trump and help him solve the pandemic “by intensifying efforts to understand the denominator of infected people (much larger than what is documented to-date)” and developing a more targeted, data-driven approach than long-term shutdowns, which he said would “jeopardiz[e] so many lives,” according to emails obtained by BuzzFeed. While the right has seized on Dr. Ioannidis’ views and some scientists say it’s hard not to conclude that his work is driven by a political agenda, the Greek doctor maintains that partisanship is antithetical to the scientific method, which requires healthy skepticism, among other things. “Even the word ‘science’ has been politicized. It’s very sad,” he says, observing that in the current environment, scientific conclusions are used to shame, smear, and “cancel” the opposite view. “I think it’s very unfortunate to use science as a silencer of dissent.” The average citizen, he adds, is filtering COVID-19 debates through their belief systems, media sources, and political ideology, which can leave science at a disadvantage in the public square. “Science hasn’t been trained to deal with these kinds of powerful companions that are far more vocal and better armed to penetrate into social discourse,” says Dr. Ioannidis. The polarization has been fueled in part by absolutist pundits. In a recent week, “The Rachel Maddow Show” on MSNBC daily hammered home the rising rate in cases, trumpeted the daily death toll, and quoted Dr. Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases since 1984, while “Tucker Carlson Tonight” on Fox News did not once mention government data, featuring instead anecdotes from business owners who have been affected by the shutdowns and calling into question the authority of unelected figures such as Dr. Fauci.
Fed on different media diets, it’s not surprising that partisan views on the severity of the pandemic have diverged further in recent months, with 85% of Democrats seeing it as a major threat – nearly double the share of Republicans, according to a Pew Research poll from mid-July. And in a related division that predates the pandemic, another Pew poll from February showed that Republicans are less likely to support scientists taking an active role in social policy matters – just 43%, compared with 73% of Democrats and Democratic-leaning independents.

“If you have more of a populist type of worldview, where you are concerned that elites and scientists and officials act in their own interests first, it becomes very easy to make assumptions that they are doing something to control the population,” says Prof. Asheley Landrum, a psychologist at Texas Tech University who specializes in science communication.

Beyond following the science

Determining what exactly “the science” says is only one part of the equation; figuring out precisely how to “follow” it poses another set of challenges for policymakers on questions like whether to send students back to school.

“Even if you had all the science pinned down, there are still some tough value judgments about the dangers of multiplying the pandemic or the dangers of keeping kids at home,” says Dr. Holdren, President Obama’s science adviser, an engineer and physicist who now co-directs the science, technology, and public policy program at Harvard Kennedy School.

Dr. Lipsitch echoes that point and offers an example of two schools that both have a 10% risk of an outbreak. In one, where there are older students from high-income families who are more capable of learning remotely, leaders may decide that the 10% risk isn’t worth reopening. But in another school with the same assessed risk, where the students are younger and many depend on free and reduced lunch, a district may decide the risk is a trade-off they’re willing to make in support of the students’ education and well-being.

“Following the science just isn’t enough,” says Dr. Lipsitch. “It’s incumbent on responsible leaders to use science to do the reasoning about how to do the best thing given your values, but it’s not an answer.”

Scientists launch ambitious conservation project to save the Amazon (Mongabay)
Series: Amazon Conservation, by Shanna Hanbury, 27 July 2020

• The Science Panel for the Amazon (SPA), an ambitious cooperative project to bring together the existing scientific research on the Amazon biome, has been launched with the support of the United Nations’ Sustainable Development Solutions Network.
• Modeled on the authoritative UN Intergovernmental Panel on Climate Change reports, the first Amazon report is planned for release in April 2021; that report will include an extensive section on Amazon conservation solutions and policy suggestions backed up by research findings.
• The Science Panel for the Amazon consists of 150 experts — including climate, ecological, and social scientists; economists; indigenous leaders and political strategists — primarily from the Amazon countries.
• According to Carlos Nobre, one of the leading scientists on the project, the SPA’s reports will aim not only to curb deforestation, but to propose an ongoing economically feasible program to conserve the forest while advancing human development goals for the region, working in tandem with, and in support of, ecological systems.
Butterflies burst into the sky above an Amazonian river. Image © Fernando Lessa / The Nature Conservancy.

With the Amazon rainforest predicted to be at, or very close to, its disastrous rainforest-to-savanna tipping point, deforestation escalating at a frightening pace, and governments often worsening the problem, the need for action to secure the future of the rainforest has never been more urgent. Now, a group of 150 leading scientific and economic experts on the Amazon basin have taken it upon themselves to launch an ambitious conservation project.

The newly founded Science Panel for the Amazon (SPA) aims to consolidate scientific research on the Amazon and propose solutions that will secure the region’s future — including the social and economic well-being of its 35 million inhabitants.

“Never before has there been such a rigorous scientific evaluation on the Amazon,” said Carlos Nobre, the leading Amazon climatologist and one of the chairs of the Scientific Panel. The newly organized SPA, he adds, will model its work on the style of the authoritative reports produced by the UN Intergovernmental Panel on Climate Change (IPCC) in terms of academic diligence and the depth and breadth of analysis and recommendations.

The Amazon Panel is funded by the United Nations’ Sustainable Development Solutions Network and supported by prominent political leaders, such as former Colombian President Juan Manuel Santos and the elected leader of the Coordinator of Indigenous Organizations of the Amazon River Basin, José Gregorio Díaz Mirabal. The SPA plans to publish its first report by April 2021.

Timber illegally logged within an indigenous reserve, seized by IBAMA, Brazil’s environmental agency, before the election of Jair Bolsonaro. Under the Bolsonaro administration, IBAMA has been largely defunded. Image courtesy of IBAMA.

Reversing the Amazon Tipping Point

Over the last five decades, the Amazon rainforest lost almost a fifth of its forest cover, putting the biome on the edge of a dangerous cliff. Studies show that if 3 to 8% more forest cover is lost, deforestation combined with escalating climate change is likely to cause the Amazon ecosystem to collapse. After this point is reached, the lush, biodiverse rainforest will receive too little precipitation to maintain itself and will quickly shift from forest into a degraded savanna, causing enormous economic damage across the South American continent and releasing vast amounts of forest-stored carbon to the atmosphere, further destabilizing the global climate.

Amazon researchers are now taking a proactive stance to prevent the Amazon Tipping Point: “Our message to political leaders is that there is no time to waste,” Nobre wrote in the SPA’s press release.

Amid escalating forest loss in the Amazon, propelled by the anti-environmentalist agenda of Brazilian President Jair Bolsonaro, experts fear that this year’s burning season, already underway, may exceed the August 2019 wildfires that shocked the world. Most Amazon basin fires are not natural in origin, but intentionally set, often by land grabbers invading indigenous territories and other conserved lands, causing massive deforestation.

“We are burning our own money, resources and biodiversity — it makes no sense,” Sandra Hacon told Mongabay; she is a prominent biologist at the Brazilian biomedical Oswaldo Cruz Foundation and has studied the effects of Amazon forest fires on health.
It is expected that air pollution caused by this year’s wildfires, when combined with COVID-19 symptoms, will cause severe respiratory impacts across the region.

Bolivian ecologist Marielos Peña-Claros notes the far-reaching economic importance of the rainforest: “The deforestation of the Amazon also has a negative effect on the agricultural production of Uruguay or Paraguay, thousands of kilometers away.” The climate tipping point, should it be passed, would negatively affect every major stakeholder in the Amazon, likely wrecking the agribusiness and energy production sectors — ironically, the sectors responsible for much of the devastation today.

“I hope to show evidence to the world of what is happening with land use in the Amazon and alert other governments, as well as state and municipal-level leadership. We have a big challenge ahead, but it’s completely necessary,” said Hacon.

Cattle ranching is the leading cause of deforestation in the Brazilian Amazon, but researchers say there is enough already degraded land there to support significant cattle expansion without causing further deforestation. The SPA may in its report suggest viable policies for curbing cattle-caused deforestation. Image © Henrique Manreza / The Nature Conservancy.

Scientists offer evidence, and also solutions

Creating a workable blueprint for the sustainable future of the Amazon rainforest is no simple task. The solutions mapped out, according to the Amazon Panel’s scientists, will seek not only to prevent deforestation and curb global climate change, but to generate a new vision and action plan for the Amazon region and its residents — especially, fulfilling development goals via a sustainable standing-forest economy.

The SPA, Nobre says, will make a critical break with the purely technical approach of the United Nations’ IPCC, which banned policy prescriptions entirely from its reports. In practice, this has meant that while contributing scientists can show the impacts of fossil fuels on the atmosphere, they cannot recommend ending oil subsidies, for example. “We inverted this logic, and the third part of the [SPA] report will be entirely dedicated to searching for policy suggestions,” Nobre says. “We need the forest on its feet, the empowerment of the traditional peoples and solutions on how to reach development goals.”

Researchers across many academic fields (ranging from climate science and economics to history and meteorology) are collaborating on the SPA Panel, raising hopes that scientific consensus on the Amazon rainforest can be reached, and that conditions for research cooperation will greatly improve.

Indigenous Munduruku dancers in the Brazilian Amazon. The SPA intends to gather Amazon science and formulate socio-economic solutions in order to make sound recommendations to policymakers. Image by Mauricio Torres / Mongabay.

SPA participants hope that a thorough scientific analysis of the rainforest’s past, present and future will aid in the formulation of viable public policies designed to preserve the Amazon biome — hopefully leading to scientifically and economically informed political decisions by the governments of Amazonian nations. “We are analyzing not only climate but biodiversity, human aspects and preservation beyond the climate issues,” Paulo Artaxo, an atmospheric physicist at the University of São Paulo, told Mongabay.
Due to the urgency of the COVID-19 pandemic, the initiative’s timeline for a final report was pushed back by several months, and a conference in China was cancelled entirely. But the 150-strong team is vigorously pushing forward, and the first phase of the project, not publicly available, is expected to be completed by the end of the year.

The hope on the horizon is that a unified voice from the scientific community will trigger long-lasting positive changes in the Amazon rainforest. “More than ever, we need to hear the voices of the scientists to enable us to understand how to save the Amazon from wanton and unthinking destruction,” said Jeffrey Sachs, the director of the UN Sustainable Development Solutions Network, on the official launch website, The Amazon We Want.

Banner image: Aerial photo of an Amazon tributary surrounded by rainforest. Image by Rhett A. Butler / Mongabay.

“As researchers, we need the humility to admit that we have come up against the limits of technique and science” (Revista Pesquisa Fapesp)
Testimony given to Christina Queiroz, July 5, 2020.

“The arrival of Covid-19 had a very strong impact on all of my colleagues at the Federal University of Amazonas [Ufam]. My wife and I are keeping a strict isolation in Manaus, because I am almost 60 years old and take medication for blood pressure and diabetes. We have lived through very sad weeks, marked by much pain and suffering. As an indigenous person, I keep losing friends, family members, and long-standing leaders. We were caught by surprise. We did not believe a humanitarian tragedy like this was possible. I belong to a generation of indigenous people that has faith in the power of science and technology and believes in the advances brought by modernity. In our thinking, the virus represents one more element of nature. And because of our faith in the power of science and of scientific medicine, we did not expect humanity to be brought so low by such a small, invisible element. Thus, the first consequence of the pandemic’s arrival was pedagogical, prompting reflections on our understanding of the world and of ourselves.

As academic researchers, we also need the humility to admit that we have come up against the limits of technique and science. Having humility does not mean belittling ourselves; it means seeking to complement academic knowledge with other forms of knowledge, beyond Eurocentric science, and that includes indigenous sciences. It became evident how dangerous a path humanity is taking: a drifting course, without leaders, without a clear horizon for the very possibility of human existence. We are a society marching toward its own destruction. Nature showed its strength; it made clear that the final word is hers, not humanity’s.

As the weeks passed, this idea was incorporated into our way of understanding, explaining, accepting, and living with the new reality. Indigenous peoples hold millennia-old cosmovisions, but these are updated from time to time, as has happened in the current situation. We came to see the new situation as an opportunity to undertake a cosmological, philosophical, ontological, and epistemological review of our existence and to seek pedagogical ways to suffer less. We indigenous people are deeply emotional. We love life, and our existence is not guided by material things. The present moment represents a unique formative situation, because it touches our emotions and values.
We were surprised by how little love for life the economic elites and some government officials showed, but also a significant portion of the population. The pandemic revealed these deficiencies.

On the other hand, one of the elements that emerged from this process is a profound solidarity, which has allowed indigenous peoples to survive in the current context. We identified weaknesses and limits. We also strengthened our strong points. One of them is the valuing of traditional knowledge, long considered a thing of the past. We rediscovered the value of the Unified Health System [SUS], with all the fragility imposed on it by successive governments. The SUS has been a giant in a very difficult moment for the whole of society.

I coordinate the indigenous teacher-training course at Ufam’s Faculty of Education and am involved daily in discussions like these with the students. More than 300 students take part in this program, divided into five classes. Recently, one of them died from complications caused by the new coronavirus. In Amazonas, there are more than 2,000 indigenous teachers working in village schools. I have a great deal of bureaucratic work, updating students’ academic records and reviewing their pending requirements. We are planning how to resume in-person teaching, but that resumption should only happen in 2021. Meanwhile, online seminars allow the teaching-learning process to continue and help foster the return of a spirit of solidarity among indigenous students, the valuing of nature, and the recovery of traditional knowledge about medicinal plants and herbs. Under normal conditions, taking part in so many seminars and discussions would not be possible. The reflections produced during these virtual meetings will be turned into teaching materials and published texts. Writing these texts helps me understand reality and allows this knowledge to be shared.

We are conducting a survey to identify how many students in the program have equipment and internet access. Many are isolated in their villages; some have taken refuge in even more remote places and only access the internet in rare, specific situations, when they need to go to the cities. In Manaus, we found that only 30% of the students at Ufam’s Faculty of Education have personal equipment for using the internet. In the interior, among students in the territories, that percentage is probably below 10%. We should have the results of this survey in the coming weeks. I have been a teacher for 30 years, working with indigenous organizations and leaders, and I see how this factor hinders the planning of any remote activity. When we have the results of the survey, the idea is to have a database so that the indigenous movement can organize itself to solve the problem. This situation of remote teaching may drag on, and we need to be prepared so as not to harm students’ rights and to win the battle for digital inclusion.

Fifty days ago, we were living through the peak of the pandemic in Manaus. We were terrified, with 140 deaths per day and people being buried in mass graves. This week was the first in which we felt some relief. Today, June 25, was the first day on which no death from coronavirus was recorded in the city. The fear now is that uninformed people, or people less sensitive to life, will, with the relaxation of isolation rules, trigger a second wave of contagion.
We have noticed that people have abandoned isolation practices, and many do not even wear masks. But we are beginning to climb out of the bottom of the well, including the existential one. The structures set up for the chaos, such as the field hospitals, are being dismantled.

We suffered irreparable, irreplaceable losses of indigenous leaders and shamans [pajés]. With the death of these wise elders, universes of millennia-old wisdom disappeared. The pajés are responsible for producing and maintaining traditional knowledge, which is passed on only to a few chosen heirs, who must be trained in a long ritual process full of sacrifices. The younger generations have difficulty following these protocols, and because of this, traditional knowledge has struggled to be passed on. My colleagues at Ufam and in the indigenous movements and I are encouraging the new generation to create strategies to absorb this wisdom, because many wise elders will remain alive. Schools and universities can also help with this process by recognizing the importance of this knowledge. With the young people, we are insisting that the time has come to guarantee the continuity of traditional knowledge.

With the improvement of the situation in Manaus, my concern has now turned to the interior, where 24 deaths were reported in the last 24 hours. The interior’s population represents less than 50% of that of Amazonas, a state where the main victims have been indigenous people, just as has happened in Roraima. My whole family lives in São Gabriel da Cachoeira, including my 87-year-old mother. The city has already recorded more than 3,000 cases and 45 deaths, and it has not yet reached the peak of the pandemic. There are about 800 communities around the municipality, and we know the virus has already spread to nearly all of them. But there is something that gives us relief. Initially we were terrified, thinking the virus would cause a genocide among the population of the city and its surroundings. São Gabriel’s only hospital has no ICU [Intensive Care Unit] beds. Forty-five days after the first case was reported in the city, despite significant losses, we see that people have managed to survive the disease by caring for themselves in their own homes, using traditional medicine and strengthening bonds of solidarity.

My mother fell ill and showed Covid-19 symptoms. So did my brothers and a 67-year-old niece of my mother’s. They were not tested. They decided to stay in their homes and care for one another, relying on herbs and tree barks from traditional medicine. They survived. They knew that going to the overcrowded hospital at that moment would mean dying, because the structure is precarious and they would be alone. By choosing to stay at home, they may have transmitted the disease to one another, but solidarity made the difference. One cared for the other. Culturally, the idea of isolating the sick is impossible for indigenous people, because it would be interpreted as abandonment, a lack of solidarity, and inhumanity, which is reprehensible. The bonds of solidarity go beyond the fear of being infected.”
Bound-State Spectra for Two Delta Function Potentials

This Demonstration shows the bound-state spectra of a particle of mass m in the presence of two attractive δ-function potentials of equal strength separated by a distance d. Since the Fourier transform of a δ-function potential is factorizable, the bound-state spectra are easily obtained using the momentum-space Schrödinger equation. The energies are normalized to the magnitude of the symmetric-state energy at d = 0. Note that the second (antisymmetric) bound state appears only when the distance between the δ functions exceeds a critical value set by the potential strength.

Contributed by: Christopher R. Jamell and Yogesh N. Joglekar (IUPUI), March 2011. Open content licensed under CC BY-NC-SA.

Snapshot 1: typical symmetric and antisymmetric bound-state energies as a function of the distance between the two attractive δ functions
Snapshot 2: for a weak potential strength, a second bound state, the antisymmetric state, appears only when the distance between the two δ functions is large
Snapshot 3: for a strong potential, the symmetric and antisymmetric states become degenerate as the distance between the two δ functions is increased

An analytical and numerical treatment of the Schrödinger equation in momentum space can be found in W. A. Karr, C. R. Jamell, and Y. N. Joglekar, "Numerical Approach to Schrödinger Equation in Momentum Space," arXiv, 2009.
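As a cross-check on the behavior described above, here is a minimal sketch (my own, not part of the Demonstration) that solves the standard position-space transcendental conditions for a double attractive δ well, V(x) = -s[δ(x - d/2) + δ(x + d/2)]. Working in units where α = ms/ħ² = 1, the symmetric state satisfies κ = 1 + e^(-κd), the antisymmetric one κ = 1 - e^(-κd), and E = -ħ²κ²/(2m); the variable names are mine.

```python
import numpy as np
from scipy.optimize import brentq

# Dimensionless double-delta-well bound states (alpha = m*s/hbar^2 = 1):
#   symmetric:      kappa = 1 + exp(-kappa*d)
#   antisymmetric:  kappa = 1 - exp(-kappa*d)   (exists only for d > 1)

def even_root(d):
    f = lambda k: k - 1.0 - np.exp(-k * d)
    return brentq(f, 1.0, 2.0 + 1e-9)          # root lies in (1, 2]

def odd_root(d):
    if d <= 1.0:
        return None                             # below the critical separation
    f = lambda k: k - 1.0 + np.exp(-k * d)
    return brentq(f, 1e-12, 1.0)                # root lies in (0, 1)

for d in [0.5, 1.0, 2.0, 5.0, 10.0]:
    ke, ko = even_root(d), odd_root(d)
    # Energies normalized to |E_symmetric(d=0)|; kappa(0) = 2 gives E0 = -4 here.
    Ee = -(ke ** 2) / 4.0
    Eo = None if ko is None else -(ko ** 2) / 4.0
    print(f"d={d:5.2f}  E_sym={Ee:+.4f}  E_antisym="
          + ("none" if Eo is None else f"{Eo:+.4f}"))
```

The antisymmetric root appears only for d > 1 in these units, reproducing the critical separation, and for large d both roots approach κ = 1, so the two levels become degenerate as in Snapshot 3.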
Recent zbMATH articles in MSC 26B

Global injectivity of differentiable maps via \(W\)-condition in \(\mathbb{R}^2\)
"Liu, Wei"
Summary: In this paper, we study the intrinsic relations between the global injectivity of a differentiable local homeomorphism \(F\) and the rate at which Spec\((F)\) tends to zero, where Spec\((F)\) denotes the set of all (complex) eigenvalues of the Jacobian matrix \(JF(x)\), for all \(x\in \mathbb{R}^2\). These relations depend deeply on the \(W\)-condition, which extends the \(*\)-condition and the \(B\)-condition. The \(W\)-condition captures the rate at which the real eigenvalues of \(JF\) tend to zero, which cannot exceed \(\displaystyle O\Big(x\ln x(\ln \frac{\ln x}{\ln\ln x})^2\Big)^{-1}\) by the half-Reeb component method. This improves the theorems of \textit{C. Gutierrez} and \textit{Nguyen Van Chau} [Discrete Contin. Dyn. Syst. 17, No. 2, 397--402 (2007; Zbl 1118.37024)] and \textit{R. Rabanal} [Bull. Braz. Math. Soc. (N.S.) 41, No. 1, 73--82 (2010; Zbl 1191.14074)]. The \(W\)-condition is optimal for the half-Reeb component method in the setting of this paper. This work is related to the Jacobian conjecture.

A class of non-cylindrical domains for parabolic equations
"Domínguez Corella, Alberto" "Rivera Noriega, Jorge"
Summary: We present a class of non-cylindrical domains where Dirichlet-type problems for parabolic equations, such as the heat equation, can be posed and solved. The regularity for the boundary of this class of domains is a mixed Lipschitz condition, as described in the bulk of the paper. The main tool is an adequate version of the implicit function theorem for functions with this kind of regularity. It is proved that the class introduced herein is of the same type as domains previously considered by several authors.

Some integral inequalities for coordinated log-\(h\)-convex interval-valued functions
"Shi, Fangfang" "Ye, Guoju" "Zhao, Dafang" "Liu, Wei"
Summary: We introduce and investigate the coordinated log-\(h\)-convexity for interval-valued functions. Also, we prove some new Jensen type inequalities and Hermite-Hadamard type inequalities, which generalize some known results in the literature. Moreover, some examples are given to illustrate our results.

Direct and inverse Cauchy problems for generalized space-time fractional differential equations
"Restrepo, Joel E." "Suragan, Durvudkhan"
Summary: In this paper, explicit solutions of a class of generalized space-time fractional Cauchy problems with time-variable coefficients are given. The representation of a solution involves kernels given by convergent infinite series of fractional integro-differential operators, which can be extensively and efficiently applied for analytic and computational goals. Time-fractional operators of complex orders with respect to a given function are used. Further, we study inverse Cauchy problems of finding time dependent coefficients for fractional wave and heat type equations, which involve the explicit representation of the solution of the direct Cauchy problem and a recent method to recover variable coefficients for the considered inverse problems. Concrete examples and particular cases of the obtained results are discussed.
On Lipschitz approximations in second order Sobolev spaces and the change of variables formula
"Hashash, Paz" "Ukhlov, Alexander"
The authors prove that any twice weakly differentiable function can be approximated by Lipschitz continuous functions. Namely, the following theorem holds.
Theorem. Let \(\Omega\subset\mathbb R^n\) be an open set and \(f\in W^{2}_{p,\operatorname{loc}}(\Omega)\), \(1\leq p < \infty\). Then there exists a sequence of closed sets \(\{C_k\}_{k=1}^{\infty}\), \(C_k\subset C_{k+1}\subset\Omega\), such that for every \(k=1,2,\dots\) the restriction \(f^*|_{C_k}\) is a Lipschitz continuous function defined \(p\)-quasi everywhere in \(C_k\) and
\[ \operatorname{cap}_p\left(\Omega\setminus \bigcup\limits_{k=1}^{\infty}C_k \right) = 0. \]
Here \(f^*\) is the precise representative of \(f\). Note this also holds for vector-valued functions \(f\in W^{2}_{p,\operatorname{loc}}(\Omega; \mathbb R^m)\). The proof employs the Poincaré inequality and a Chebyshev type inequality. Two important applications of the result follow.
The refined Luzin type theorem: if \(f\in W^{2}_{p,\operatorname{loc}}(\Omega)\), then for each \(\varepsilon>0\) there exists an open set \(U_\varepsilon\) of \(p\)-capacity less than \(\varepsilon\) such that \(f^*\) is Lipschitz continuous on the set \(\Omega\setminus U_\varepsilon\).
The change of variables formula: if \(\varphi\in W^{2}_{p,\operatorname{loc}}(\Omega; \mathbb R^n)\), then there exists a Borel set \(S\subset\Omega\), \(\operatorname{cap}_p(S) = 0\), such that the mapping \(\varphi:\Omega\setminus S \to \mathbb R^n\) has the Luzin \(N\)-property and the change of variables formula
\[ \int\limits_A u\circ \varphi\,|J(x,\varphi)|\, dx = \int\limits_{\mathbb R^n\setminus\varphi(S)}u(y)\,N_\varphi(A,y)\, dy \]
holds for every measurable set \(A\subset\Omega\) and every nonnegative measurable function \(u:\mathbb R^n\to \mathbb R\).
Reviewer: Nikita Evseev (Novosibirsk)

\(W^{s,\frac{n}{s}}\)-maps with positive distributional Jacobians
"Li, Siran" "Schikorra, Armin"
Summary: We extend the well-known result that any \(f\in W^{1,n}({\Omega},\mathbb{R}^n)\), \({\Omega}\subset\mathbb{R}^n\) with strictly positive Jacobian is actually continuous: it is also true for fractional Sobolev spaces \(W^{s,\frac{n}{s}}({\Omega})\) for any \(s\geq\frac{n}{n+1}\), where the sign condition on the Jacobian is understood in a distributional sense. Along the way we also obtain extensions to fractional Sobolev spaces \(W^{s,\frac{n}{s}}\) of the degree estimates known for \(W^{1,n}\)-maps with positive or non-negative Jacobian, such as the sense-preserving property.

On mappings of Euclidean spaces with alternative metrics
"Afanas'eva, O. S." "Salimov, R. R."
Summary: Developing a \(p\)-modules technique applied to families of curves in the Euclidean space \((\mathbb{R}^n,\mu, d)\) equipped with a locally finite Borel measure \(\mu\) and a metric \(d\), the authors establish finite Lipschitz and Hölder properties of \(Q\)-homeomorphisms acting from the space \((\mathbb{R}^n,\mu, d)\) into the Euclidean space \(\mathbb{R}^n\) with the standard metric and Lebesgue measure.

Gateaux differentiability revisited
"Abbasi, Malek" "Kruger, Alexander Y." "Théra, Michel"
Summary: We revisit some basic concepts and ideas of the classical differential calculus and convex analysis extending them to a broader frame.
We reformulate and generalize the notion of Gâteaux differentiability and propose new notions of generalized derivative and generalized subdifferential in an arbitrary topological vector space. Meaningful examples preserving the key properties of the original notion of derivative are provided.

Concentration of product spaces
"Kazukawa, Daisuke"
Summary: We investigate the relation between the concentration and the product of metric measure spaces. We have the natural question whether, for two concentrating sequences of metric measure spaces, the sequence of their product spaces also concentrates. A partial answer is mentioned in \textit{M. Gromov}'s book [Metric structures for Riemannian and non-Riemannian spaces. Transl. from the French by Sean Michael Bates. With appendices by M. Katz, P. Pansu, and S. Semmes. Edited by J. LaFontaine and P. Pansu. 3rd printing. Basel: Birkhäuser (2007; Zbl 1113.53001)]. We obtain a complete answer for this question.

Auxiliary-function minimization algorithms
"Byrne, Charles L."
Summary: Let \(C\) be a nonempty subset of an arbitrary set \(X\) and \(f:X\to\mathbb{R}\). The objective is to minimize \(f(x)\) over \(x\in C\). We get \(x^k\), for \(k=1,2,\ldots\), by minimizing \(G_k(x)=f(x)+g_k(x)\) over all \(x\in X\). We call this approach an \textit{auxiliary-function} (AF) method if \(g_k:X\to[0,+\infty]\), \(g_k(x^{k-1})=0\), and \(g_k(x)<+\infty\) if and only if \(x\in C\). Then \(\{f(x^k)\}\downarrow\beta^\ast\ge-\infty\). We consider conditions on the auxiliary functions \(g_k\) that guarantee that \(\beta^\ast=\beta\dot{=}\inf_{x\in C}f(x)\). An AF algorithm is said to be in the SUMMA class if the SUMMA inequality, \(G_k(x)-G_k(x^k)\ge g_{k+1}(x)\), for all \(x\in X\), holds for all \(k\), in which case it follows that \(\beta^\ast=\beta\). We consider a variety of AF algorithms that either are in the SUMMA class or can be reformulated to be such. We also study some AF algorithms that are not in the SUMMA class, but for which \(\beta^\ast=\beta\). This leads to a larger class, the SUMMA2 class of AF algorithms. An AF algorithm is a proximal minimization algorithm (PMA) if \(g_k(x)=d(x,x^{k-1})\), where \(d:X\times X\to[0,+\infty]\) is a distance, so that \(d(x,y)=0\) if and only if \(x=y\). Optimization transfer (OT) algorithms in statistics can be reformulated as PMA algorithms, as can the alternating-minimization (AM) algorithms of Csiszár and Tusnády. The ``five-point property'' (5PP) in AM, used by Csiszár and Tusnády to get \(\beta^\ast=\beta\), is equivalent to the SUMMA inequality, while the ``weak'' 5PP (w5PP) implies membership in the SUMMA2 class.

PDE-constrained vector variational problems governed by curvilinear integral functionals
"Treanţă, Savin"
Summary: This paper is concerned with necessary and sufficient efficiency conditions for a new class of multiobjective optimization problems. More precisely, we formulate and prove efficiency conditions for a multidimensional multiobjective variational problem of minimizing a vector of path-independent curvilinear integral functionals subject to nonlinear equality and inequality constraints involving higher-order partial derivatives. Under generalized \((\rho,b)\)-quasiinvexity assumptions, sufficient efficiency conditions for a feasible solution are established.
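As a concrete toy instance of the auxiliary-function framework in the Byrne abstract above (my own illustration, not from the paper): take X = R, C = [0, 2], f(x) = (x - 3)^2, and the proximal choice g_k(x) = (x - x^{k-1})^2/(2λ) on C and +∞ off C, so that g_k(x^{k-1}) = 0 and g_k is finite exactly on C. The step parameter λ and the concrete f are assumptions for illustration only.

```python
import numpy as np

# Toy auxiliary-function (AF) minimization in the proximal (PMA) special case:
#   x^k = argmin_x G_k(x),  G_k(x) = f(x) + g_k(x),
#   g_k(x) = (x - x_prev)^2 / (2*lam) for x in C = [lo, hi], +inf otherwise,
# so g_k >= 0, g_k(x_prev) = 0, and g_k(x) < +inf iff x in C.

f = lambda x: (x - 3.0) ** 2      # beta = inf_{x in C} f(x) = 1, attained at x = 2
lo, hi, lam = 0.0, 2.0, 0.5

x = 0.0                           # x^0 must lie in C
for k in range(30):
    # G_k is a strictly convex quadratic, so its minimizer over [lo, hi]
    # is the unconstrained vertex clamped to the interval:
    vertex = (2.0 * lam * 3.0 + x) / (2.0 * lam + 1.0)
    x = float(np.clip(vertex, lo, hi))
print(x, f(x))                    # -> 2.0 1.0, so f(x^k) decreases to beta* = beta
```

Here the iterates exhibit the monotone decrease of {f(x^k)} stated in the abstract; whether β* equals β in general is precisely what the SUMMA-type conditions are designed to guarantee.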
Modulus of continuity of the quantum \(f\)-entropy with respect to the trace distance
"Pinelis, Iosif"
Summary: A well-known result due to \textit{M. Fannes} [Commun. Math. Phys. 31, 291--294 (1973; Zbl 1125.82310)] is a certain upper bound on the modulus of continuity of the von Neumann entropy with respect to the trace distance between density matrices; this distance is the maximum probability of distinguishing between the corresponding quantum states. Much more recently, \textit{K. M. R. Audenaert} [J. Phys. A, Math. Theor. 40, No. 28, 8127--8136 (2007; Zbl 1119.81017)] obtained an exact expression of this modulus of continuity. In the present note, Audenaert's result is extended to a broad class of entropy functions indexed by arbitrary continuous convex functions \(f\) in place of the Shannon-von Neumann function \(x\mapsto x\log_2x\). The proof is based on the Schur majorization.

Bright and dark solitons in a nonlinear saturable medium
"Kudryashov, Nikolay A."
Summary: The generalized nonlinear Schrödinger equation for description of optical solitons in a saturable medium is studied. Transformations of variables are used to find solitary wave solutions. Optical solitons for the description of pulse propagation are obtained in the form of implicit functions. Analytical solutions of the generalized nonlinear Schrödinger equation obtained for bright and dark solitons in a saturable medium are demonstrated.

Necessary optimality conditions in generalized convex multi-objective optimization involving nonconvex constraints
"Günther, Christian" "Tammer, Christiane" "Yao, Jen-Chih"
Summary: The aim of this paper is to derive necessary optimality conditions for Pareto efficient solutions of multi-objective optimization problems involving not necessarily convex constraints. The objective function acts between a real linear topological pre-image space and a finite-dimensional image space and is assumed to be componentwise generalized convex (e.g., semi-strictly quasi-convex or quasi-convex). The first and second author [Math. Methods Oper. Res. 84, No. 2, 359--387 (2016; Zbl 1370.90241); Pure Appl. Funct. Anal. 3, No. 3, 429--461 (2018; Zbl 1474.90402)] showed that the set of Pareto efficient solutions of a constrained multi-objective optimization problem can be computed completely by considering two corresponding unconstrained multi-objective optimization problems. By using these results and by applying methods of generalized differentiation, we show that it is possible to derive necessary optimality conditions for a problem with a nonconvex feasible set. These optimality conditions have a simple structure because the normal cone with respect to the constraints is not involved. Finally, we apply our results to multi-objective approximation problems with a not necessarily convex feasible set and derive necessary optimality conditions.

Growth conditions on a function and the error bound condition
"Balashov, M. V."
Let \(f\) be a real-valued function defined on a neighborhood of a smooth manifold \(Q\) with boundary, let \(f_{\ast }:=\inf_{x\in Q}f(x)\), and assume that \(\Omega:=f^{-1}\left( f_{\ast }\right)\) is nonempty and \(L:=\sup_{x\in Q}\left\Vert f^{\prime }\left( x\right) \right\Vert\) is finite.
Denote the Fréchet gradient of \(f\) by \(f^{\prime }\), the tangent space of \(Q\) at \(x\in Q\) by \(T_{x}\), the metric projection mapping onto \(T_{x}\) by \(P_{T_{x}}\), and let \(\rho _{\Omega }\left( x\right) :=\inf_{a\in \Omega }\left\Vert x-a\right\Vert\). The authors prove that, if \(Q\) is proximally smooth with constant \(R\) and \(f\) is weakly convex with constant \(\beta >0\) (that is, if \(f+\frac{\beta }{2}\left\Vert \cdot \right\Vert ^{2}\) is convex), then the quadratic growth condition \[ \exists \alpha >0\text{ such that }\frac{\alpha }{2}\rho _{\Omega }^{2}\left( x\right) \leq f\left( x\right) -f_{\ast }\text{ }\forall x\in Q\cap f^{-1}\left( \left] -\infty ,\tau \right] \right) \] implies the error bound condition \[ \exists \nu >0\text{ such that }\nu \rho _{\Omega }\left( x\right) \leq \left\Vert P_{T_{x}}f^{\prime }\left( x\right) \right\Vert \text{ }\forall x\in Q\cap f^{-1}\left( \left] -\infty ,\tau \right] \right) \text{,} \] with \(\nu :=\frac{\alpha -\beta }{2}-\frac{L}{R}\) (assuming that \(\nu >0\)). In the case when \(Q\) is the solution set of a system of equations, they give sufficient conditions of a different nature for an error bound of a different type to hold. Reviewer: Juan-Enrique Martínez-Legaz (Barcelona)
Quantum operation

In quantum mechanics, a quantum operation (also known as quantum dynamical map or quantum process) is a mathematical formalism used to describe a broad class of transformations that a quantum mechanical system can undergo. This was first discussed as a general stochastic transformation for a density matrix by George Sudarshan.[1] The quantum operation formalism describes not only unitary time evolution or symmetry transformations of isolated systems, but also the effects of measurement and transient interactions with an environment. In the context of quantum computation, a quantum operation is called a quantum channel.

Note that some authors use the term "quantum operation" to refer specifically to completely positive (CP) and non-trace-increasing maps on the space of density matrices, and the term "quantum channel" to refer to the subset of those that are strictly trace-preserving.[2]

Quantum operations are formulated in terms of the density operator description of a quantum mechanical system. Rigorously, a quantum operation is a linear, completely positive map from the set of density operators into itself. In the context of quantum information, one often imposes the further restriction that a quantum operation Φ must be physical,[3] that is, satisfy 0 ≤ Tr(Φ(ρ)) ≤ 1 for any state ρ.

Some quantum processes cannot be captured within the quantum operation formalism;[4] in principle, the density matrix of a quantum system can undergo completely arbitrary time evolution. Quantum operations are generalized by quantum instruments, which capture the classical information obtained during measurements, in addition to the quantum information.

The Schrödinger picture provides a satisfactory account of time evolution of state for a quantum mechanical system under certain assumptions. These assumptions include
• The system is non-relativistic
• The system is isolated.

The Schrödinger picture for time evolution has several mathematically equivalent formulations. One such formulation expresses the time rate of change of the state via the Schrödinger equation. A more suitable formulation for this exposition is expressed as follows:

The effect of the passage of t units of time on the state of an isolated system S is given by a unitary operator Ut on the Hilbert space H associated to S.

This means that if the system is in a state corresponding to v ∈ H at an instant of time s, then the state after t units of time will be Ut v. For relativistic systems, there is no universal time parameter, but we can still formulate the effect of certain reversible transformations on the quantum mechanical system. For instance, state transformations relating observers in different frames of reference are given by unitary transformations. In any case, these state transformations carry pure states into pure states; this is often formulated by saying that in this idealized framework, there is no decoherence.

For interacting (or open) systems, such as those undergoing measurement, the situation is entirely different. To begin with, the state changes experienced by such systems cannot be accounted for exclusively by a transformation on the set of pure states (that is, those associated to vectors of norm 1 in H). After such an interaction, a system in a pure state φ may no longer be in the pure state φ. In general it will be in a statistical mix of a sequence of pure states φ1, ..., φk with respective probabilities λ1, ..., λk.
The transition from a pure state to a mixed state is known as decoherence. Numerous mathematical formalisms have been established to handle the case of an interacting system. The quantum operation formalism emerged around 1983 from work of Karl Kraus, who relied on the earlier mathematical work of Man-Duen Choi. It has the advantage that it expresses operations such as measurement as a mapping from density states to density states. In particular, the effect of quantum operations stays within the set of density states.

Recall that a density operator is a non-negative operator on a Hilbert space with unit trace. Mathematically, a quantum operation is a linear map Φ between spaces of trace class operators on Hilbert spaces H and G such that
• If S is a density operator, Tr(Φ(S)) ≤ 1.
• Φ is completely positive; that is, for any natural number n and any square matrix of size n whose entries [S_ij] are trace-class operators and which is non-negative, the matrix [Φ(S_ij)] is also non-negative. In other words, Φ is completely positive if Φ ⊗ I_n is positive for all n, where I_n denotes the identity map on the C*-algebra of n × n matrices.

Note that, by the first condition, quantum operations may not preserve the normalization property of statistical ensembles. In probabilistic terms, quantum operations may be sub-Markovian. In order that a quantum operation preserve the set of density matrices, we need the additional assumption that it is trace-preserving. In the context of quantum information, the quantum operations defined here, i.e. completely positive maps that do not increase the trace, are also called quantum channels or stochastic maps. The formulation here is confined to channels between quantum states; however, it can be extended to include classical states as well, therefore allowing quantum and classical information to be handled simultaneously.

Kraus operators

Kraus' theorem (named after Karl Kraus) characterizes completely positive maps, which model quantum operations between quantum states. Informally, the theorem ensures that the action of any such quantum operation Φ on a state ρ can always be written as Φ(ρ) = Σk Bk ρ Bk*, for some set of operators {Bk} satisfying Σk Bk* Bk ≤ I, where I is the identity operator.

Statement of the theorem

Theorem.[5] Let H and G be Hilbert spaces of dimension n and m respectively, and let Φ be a quantum operation between H and G. Then there are matrices {Bi} mapping H to G such that, for any state ρ, Φ(ρ) = Σi Bi ρ Bi*. Conversely, any map Φ of this form is a quantum operation, provided Σi Bi* Bi ≤ I is satisfied.

The matrices {Bi} are called Kraus operators. (Sometimes they are known as noise operators or error operators, especially in the context of quantum information processing, where the quantum operation represents the noisy, error-producing effects of the environment.) The Stinespring factorization theorem extends the above result to arbitrary separable Hilbert spaces H and G. There, S is replaced by a trace class operator and the {Bi} by a sequence of bounded operators.

Unitary equivalence

Kraus matrices are not uniquely determined by the quantum operation in general. For example, different Cholesky factorizations of the Choi matrix might give different sets of Kraus operators. The following theorem states that all systems of Kraus matrices representing the same quantum operation are related by a unitary transformation:

Theorem. Let Φ be a (not necessarily trace-preserving) quantum operation on a finite-dimensional Hilbert space H with two representing sequences of Kraus matrices {Bi} and {Ci}.
Then there is a unitary operator matrix (u_ij) such that Ci = Σj u_ij Bj.

In the infinite-dimensional case, this generalizes to a relationship between two minimal Stinespring representations. It is a consequence of Stinespring's theorem that all quantum operations can be implemented by unitary evolution after coupling a suitable ancilla to the original system. These results can also be derived from Choi's theorem on completely positive maps, characterizing a completely positive finite-dimensional map by a unique Hermitian-positive density operator (Choi matrix) with respect to the trace. Among all possible Kraus representations of a given channel, there exists a canonical form distinguished by the orthogonality relation of the Kraus operators, Tr(Ai* Aj) ∝ δij. Such a canonical set of orthogonal Kraus operators can be obtained by diagonalising the corresponding Choi matrix and reshaping its eigenvectors into square matrices.

There also exists an infinite-dimensional algebraic generalization of Choi's theorem, known as "Belavkin's Radon-Nikodym theorem for completely positive maps", which defines a density operator as a "Radon–Nikodym derivative" of a quantum channel with respect to a dominating completely positive map (reference channel). It is used for defining the relative fidelities and mutual informations for quantum channels.

For a non-relativistic quantum mechanical system, its time evolution is described by a one-parameter group of automorphisms {αt}t of Q. This can be narrowed to unitary transformations: under certain weak technical conditions (see the article on quantum logic and the Varadarajan reference), there is a strongly continuous one-parameter group {Ut}t of unitary transformations of the underlying Hilbert space such that the elements E of Q evolve according to the formula αt(E) = U*t E Ut.

The system time evolution can also be regarded dually as time evolution of the statistical state space. The evolution of the statistical state is given by a family of operators {βt}t dual to {αt}t. Clearly, for each value of t, S → U*t S Ut is a quantum operation. Moreover, this operation is reversible.

This can be easily generalized: If G is a connected Lie group of symmetries of Q satisfying the same weak continuity conditions, then the action of any element g of G is given by a unitary operator Ug, with αg(E) = U*g E Ug. This mapping g → Ug is known as a projective representation of G. The mappings S → U*g S Ug are reversible quantum operations.

Quantum measurement

Quantum operations can be used to describe the process of quantum measurement. The presentation below describes measurement in terms of self-adjoint projections on a separable complex Hilbert space H, that is, in terms of a PVM (projection-valued measure). In the general case, measurements can be made using non-orthogonal operators, via the notion of a POVM. The non-orthogonal case is interesting, as it can improve the overall efficiency of the quantum instrument.

Binary measurements

Quantum systems may be measured by applying a series of yes–no questions. This set of questions can be understood to be chosen from an orthocomplemented lattice Q of propositions in quantum logic. The lattice is equivalent to the space of self-adjoint projections on a separable complex Hilbert space H. Consider a system in some state S, with the goal of determining whether it has some property E, where E is an element of the lattice of quantum yes–no questions. Measurement, in this context, means submitting the system to some procedure to determine whether the state satisfies the property.
The reference to system state, in this discussion, can be given an operational meaning by considering a statistical ensemble of systems. Each measurement yields some definite value 0 or 1; moreover, application of the measurement process to the ensemble results in a predictable change of the statistical state. This transformation of the statistical state is given by the quantum operation S → E S E + (I − E) S (I − E). Here E can be understood to be a projection operator.

General case

In the general case, measurements are made on observables taking on more than two values. When an observable A has a pure point spectrum, it can be written in terms of an orthonormal basis of eigenvectors. That is, A has a spectral decomposition A = Σλ λ EA(λ), where EA(λ) is a family of pairwise orthogonal projections, each onto the respective eigenspace of A associated with the measurement value λ.

Measurement of the observable A yields an eigenvalue of A. Repeated measurements, made on a statistical ensemble S of systems, result in a probability distribution over the eigenvalue spectrum of A. It is a discrete probability distribution, given by Prob(λ) = Tr(S EA(λ)).

Measurement of the statistical state S is given by the map S → Σλ EA(λ) S EA(λ). That is, immediately after measurement, the statistical state is a classical distribution over the eigenspaces associated with the possible values λ of the observable: S is a mixed state.

Non-completely positive maps

Shaji and Sudarshan argued in a Physical Review Letters paper that, upon close examination, complete positivity is not a requirement for a good representation of open quantum evolution. Their calculations show that, when starting with some fixed initial correlations between the observed system and the environment, the map restricted to the system itself is not necessarily even positive. However, it is not positive only for those states that do not satisfy the assumption about the form of initial correlations. Thus, they show that to get a full understanding of quantum evolution, non-completely positive maps should be considered as well.[4][6][7]

References
1. Sudarshan, E. C. G.; Mathews, P. M.; Rau, Jayaseetha (1961). "Stochastic Dynamics of Quantum-Mechanical Systems". Physical Review 121 (3): 920–924. doi:10.1103/physrev.121.920.
2. Weedbrook, Christian; Pirandola, Stefano; García-Patrón, Raúl; Cerf, Nicolas J.; Ralph, Timothy C.; et al. (2012). "Gaussian quantum information". Reviews of Modern Physics 84 (2): 621–669. arXiv:1110.3234. doi:10.1103/revmodphys.84.621.
3. Nielsen & Chuang (2010).
4. Pechukas, Philip (1994). "Reduced Dynamics Need Not Be Completely Positive". Physical Review Letters 73 (8): 1060–1062. doi:10.1103/physrevlett.73.1060.
5. This theorem is proved in Nielsen & Chuang (2010), Theorems 8.1 and 8.3.
6. Shaji, Anil; Sudarshan, E. C. G. (2005). "Who's afraid of not completely positive maps?". Physics Letters A 341 (1–4): 48–54. doi:10.1016/j.physleta.2005.04.029.
7. Cuffaro, Michael E.; Myrvold, Wayne C. (2013). "On the Debate Concerning the Proper Characterisation of Quantum Dynamical Evolution". Philosophy of Science 80 (5): 1125–1136. arXiv:1206.3794. doi:10.1086/673733.
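To make the Kraus formalism and the measurement map above concrete, here is a minimal sketch (my own illustration, not part of the article) applying the textbook amplitude-damping channel to a qubit density matrix, checking the completeness relation and building the Choi matrix; the damping parameter gamma and all helper names are my own choices.

```python
import numpy as np

# Amplitude-damping channel on one qubit, in Kraus form:
#   Phi(rho) = K0 rho K0* + K1 rho K1*,  with K0* K0 + K1* K1 = I.
gamma = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]], dtype=complex)
K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]], dtype=complex)
kraus = [K0, K1]

def apply_channel(rho, ops):
    return sum(K @ rho @ K.conj().T for K in ops)

# Completeness (trace preservation): sum_k Kk* Kk = I
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))

# |+><+| partially decays toward |0><0|; trace stays 1, coherences shrink
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
out = apply_channel(plus, kraus)
print(np.round(out, 3), np.trace(out).real)   # trace -> 1.0

# Choi matrix assembled from the Kraus operators: C = sum_k vec(Kk) vec(Kk)*,
# positive semidefinite iff the map is completely positive (Choi's theorem).
vec = lambda K: K.reshape(-1, 1)
C = sum(vec(K) @ vec(K).conj().T for K in kraus)
print(np.all(np.linalg.eigvalsh(C) >= -1e-12))  # -> True

# Projective measurement as a quantum operation: S -> sum_i Pi S Pi
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
print(np.round(apply_channel(plus, [P0, P1]), 3))  # off-diagonal terms vanish
```

The same pattern works for any channel given in Kraus form: apply the conjugation sum for the dynamics, and test the Choi matrix for positive semidefiniteness to check complete positivity.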
I am solving the Schrödinger equation for a particle in a hybrid system. Specifically I have to solve the following differential equation
$$ - \frac{\hbar^2}{2}\frac{d}{dx}\left(\frac{1}{m^{*}(x)}\frac{d\psi}{dx}\right) -\frac{\hbar^2}{2m^{*}(x)} \frac{d^2\psi}{dy^2} + V(x)\psi = E\psi \tag{1} $$
I would like to know: Is this problem separable? I.e. can the solutions be written as $\psi(x,y)=\Phi(x)\chi(y)$? To investigate this we plug this ansatz into the differential equation above and find after some rearrangement:
$$ \frac{1}{\chi(y)}\frac{d^2\chi(y)}{dy^2} + \frac{m^{*}(x)}{\Phi(x)} \cdot \frac{d}{dx}\left(\frac{1}{m^{*}(x)}\frac{d\Phi}{dx}\right) + 2m^{*}(x) \cdot \frac{E-V(x)}{\hbar^2} = 0 $$
Now, the first term depends only on y while the other two terms depend only on x. Therefore one can separate the above into two differential equations: a simple one for $\chi(y)$ and a nastier one for $\Phi(x)$. Does this prove that the solutions of (1) are separable? And say they are, how would I go about calculating the solutions to the differential equation for $\Phi(x)$? The spatial dependence of the mass $m^{*}(x)$ is rather annoying.
1 Answer
The solutions can't all be written in that form, nor would one expect them to be written in that form. For example, since this is a linear equation, you could have a linear combination of different solutions of that form, which in general is not itself of that form. The most you can hope for is that solutions of the form $\Phi(x) \chi(y)$ are in some sense a basis of the space of solutions.
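Regarding the numerical side of the question, a common route (a sketch under my own assumptions, not from the original thread) is a BenDaniel-Duke-type symmetric discretization of the kinetic term: evaluate $1/m^{*}$ at the half-grid points $x_{i+1/2}$, which turns the $\Phi(x)$ equation into a symmetric tridiagonal eigenproblem. The mass profile m(x) and potential V(x) below are illustrative placeholders, and units are chosen so $\hbar = 1$.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# After separation psi = Phi(x) chi(y), Phi solves the Sturm-Liouville problem
#   -(1/2) d/dx[(1/m(x)) dPhi/dx] + V(x) Phi = (E - e_y) Phi   (hbar = 1),
# where e_y is the separation constant coming from the chi(y) equation.
# Discretization: H Phi_i = (1/(2 h^2)) [(w_{i-1}+w_i) Phi_i
#                  - w_i Phi_{i+1} - w_{i-1} Phi_{i-1}] + V_i Phi_i,
# with w_i = 1/m evaluated at the half point x_{i+1/2} (Dirichlet ends).

N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

m = lambda x: 1.0 + 0.5 * np.tanh(x)   # illustrative position-dependent mass
V = lambda x: 0.5 * x ** 2             # illustrative confining potential

w = 1.0 / m(0.5 * (x[:-1] + x[1:]))    # 1/m at the N-1 half points
diag = 0.5 * (np.concatenate(([w[0]], w))
              + np.concatenate((w, [w[-1]]))) / h ** 2 + V(x)
off = -0.5 * w / h ** 2

E, Phi = eigh_tridiagonal(diag, off, select='i', select_range=(0, 4))
print(E)   # lowest five eigenvalues of the x-equation
```

This ordering of the operator keeps the discrete Hamiltonian Hermitian even though $m^{*}$ varies in space, which is exactly the difficulty the question raises.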
8 Historical Figures Who Did Unspeakable Things

1. Erwin Schrödinger

Erwin Schrödinger was an Austrian-Irish physicist who won the Nobel Prize in Physics in 1933 for the formulation of the Schrödinger equation. He is famous for his quantum superposition thought experiment in which a cat, sealed in a box with poison and radioactive material, is somehow simultaneously alive and dead until the box is opened. He also made serious contributions to the fields of thermodynamics, statistical mechanics, general relativity, and cosmology.

However, when Schrödinger was not busy with scientific research, he led a depraved private life that today would have him labeled a sexual offender. Although he was married, Schrödinger found his wife, Anny, unattractive, so he kept many mistresses. He also kept a set of black notebooks in which he detailed his sexual escapades. When it comes to morality, Schrödinger was repugnant, but keeping mistresses did not make him a criminal. What was criminal was the "Lolita complex" ascribed to Schrödinger by biographers. Among other things, he tutored a pair of 14-year-old twin girls named Withi and Ithi Junger. When the girls were only 17 years old, Schrödinger impregnated Ithi. She was forced into a catastrophic abortion that left her sterile.
Discrete and Continuous Dynamical Systems - B
September 2007, Volume 8, Issue 2

A linear-quadratic control problem with discretionary stopping
Shigeaki Koike, Hiroaki Morimoto and Shigeru Sakaguchi
2007, 8(2): 261-277, doi: 10.3934/dcdsb.2007.8.261
We study the variational inequality for a 1-dimensional linear-quadratic control problem with discretionary stopping. We establish the existence of a unique strong solution via stochastic analysis and the viscosity solution technique. Finally, the optimal policy is shown to exist from the optimality conditions.

Stability enhancement of a 2-D linear Navier-Stokes channel flow by a 2-D, wall-normal boundary controller
Roberto Triggiani
2007, 8(2): 279-314, doi: 10.3934/dcdsb.2007.8.279
Consider a 2-D, linearized Navier-Stokes channel flow with periodic boundary conditions in the streamwise direction and subject to a wall-normal control on the top wall. There exists an infinite-dimensional subspace $E^0$, where the normal component $v$ of the velocity vector, as well as the vorticity $\omega$, are not influenced by the control. The corresponding control-free dynamics for $v$ and $\omega$ on $E^0$ are inherently exponentially stable, though with limited decay rate. In the case of the linear 2-D channel, the stability margin of the component $v$ on the complementary space $Z$ can be enhanced by a prescribed decay rate, by means of an explicit, 2-D wall-normal controller acting on the top wall, whose space component is subject to algebraic rank conditions. Moreover, its support may be arbitrarily small. Corresponding optimal decays, by the same 2-D wall-normal controller, of the tangential component $u$ of the velocity vector; of the pressure $p$; and of the vorticity $\omega$ over $Z$ are also obtained, to complete the optimal analysis.

Optimal investment-consumption strategy in a discrete-time model with regime switching
Ka Chun Cheung and Hailiang Yang
2007, 8(2): 315-332, doi: 10.3934/dcdsb.2007.8.315
This paper analyzes the investment-consumption problem of a risk averse investor in discrete-time model. We assume that the return of a risky asset depends on the economic environments and that the economic environments are ranked and described using a Markov chain with an absorbing state which represents the bankruptcy state. We formulate the investor's decision as an optimal stochastic control problem. We show that the optimal investment strategy is the same as that in Cheung and Yang [5], and a closed form expression of the optimal consumption strategy has been obtained. In addition, we investigate the impact of economic environment regime on the optimal strategy. We employ some tools in stochastic orders to obtain the properties of the optimal strategy.
Global stability of two epidemic models Qingming Gou and Wendi Wang 2007, 8(2): 333-345 doi: 10.3934/dcdsb.2007.8.333 +[Abstract](2664) +[PDF](210.1KB) In this paper we study the global stability of two epidemic models by ruling out the presence of periodic orbits, homoclinic orbits and heteroclinic cycles. One model incorporates exponential growth, horizontal transmission, vertical transmission and standard incidence. The other one incorporates constant recruitment, disease-induced death, stage progression and bilinear incidence. For the first model, it is shown that the global dynamics is completely determined by the basic reproduction number $R_0$. If $R_0\leq1$, the disease free equilibrium is globally asymptotically stable, whereas the unique endemic equilibrium is globally asymptotically stable if $R_0>1$. For the second model, it is shown that the disease-free equilibrium is globally stable if $R_0\leq1$, and the disease is persistent if $R_0>1$. Sufficient conditions for the global stability of an endemic equilibrium of the model are also presented. Distributional chaos via isolating segments Piotr Oprocha and Pawel Wilczynski 2007, 8(2): 347-356 doi: 10.3934/dcdsb.2007.8.347 +[Abstract](2567) +[PDF](179.4KB) Recently, Srzednicki and Wójcik developed a method based on Wazewski Retract Theorem which allows, via construction of so called isolating segments, a proof of topological chaos (positivity of topological entropy) for periodically forced ordinary differential equations. In this paper we show how to arrange isolating segments to prove that a given system exhibits distributional chaos. As an example, we consider planar differential equation ż$=(1+e^{i \kappa t}|z|^2)\bar{z}$ for parameter values $0<\kappa \leq 0.5044$. Sharp global existence and blowing up results for inhomogeneous Schrödinger equations Jianqing Chen and Boling Guo 2007, 8(2): 357-367 doi: 10.3934/dcdsb.2007.8.357 +[Abstract](3115) +[PDF](195.8KB) In this paper, we first give an important interpolation inequality. Secondly, we use this inequality to prove the existence of local and global solutions of an inhomogeneous Schrödinger equation. Thirdly, we construct several invariant sets and prove the existence of blowing up solutions. Finally, we prove that for any $\omega>0$ the standing wave $e^{i \omega t} \phi (x)$ related to the ground state solution $\phi$ is strongly unstable. Reformed post-processing Galerkin method for the Navier-Stokes equations Yinnian He and R. M.M. Mattheij 2007, 8(2): 369-387 doi: 10.3934/dcdsb.2007.8.369 +[Abstract](2677) +[PDF](213.1KB) In this article we compare the post-processing Galerkin (PPG) method with the reformed PPG method of integrating the two-dimensional Navier-Stokes equations in the case of non-smooth initial data $u_0 \epsilon\in H^1_0(\Omega)^2$ with div$u_0=0$ and $f,~f_t\in L^\infty(R^+;L^2(\Omega)^2)$. We give the global error estimates with $H^1$ and $L^2$-norm for these methods. Moreover, if the data $\nu$ and the $\lim_{t \rightarrow \infty}f(t)$ satisfy the uniqueness condition, the global error estimates with $H^1$ and $L^2$-norm are uniform in time $t$. The difference between the PPG method and the reformed PPG method is that their error bounds are of the same forms on the interval $[1,\infty)$ and the reformed PPG method has a better error bound than the PPG method on the interval $[0,1]$. 
Detecting perfectly insulated obstacles by shape optimization techniques of order two Lekbir Afraites, Marc Dambrine, Karsten Eppler and Djalil Kateb 2007, 8(2): 389-416 doi: 10.3934/dcdsb.2007.8.389 +[Abstract](2492) +[PDF](397.3KB) The paper extends investigations of identification problems by shape optimization methods for perfectly conducting inclusions to the case of perfectly insulating material. The Kohn and Vogelius criteria as well as a tracking type objective are considered for a variational formulation. In case of problems in dimension two, the necessary condition implies immediately a perfectly matching situation for both formulations. Similar to the perfectly conducting case, the compactness of the shape Hessian is shown and the ill-posedness of the identification problem follows. That is, the second order quadratic form is no longer coercive. We illustrate the general results by some explicit examples and we present some numerical results. Multiple bifurcations of a predator-prey system Dongmei Xiao and Kate Fang Zhang 2007, 8(2): 417-433 doi: 10.3934/dcdsb.2007.8.417 +[Abstract](3241) +[PDF](795.6KB) The bifurcation analysis of a generalized predator-prey model depending on all parameters is carried out in this paper. The model, which was first proposed by Hanski et al. [6], has a degenerate saddle of codimension 2 for some parameter values, and a Bogdanov-Takens singularity (focus case) of codimension 3 for some other parameter values. By using normal form theory, we also show that saddle bifurcation of codimension 2 and Bogdanov-Takens bifurcation of codimension 3 (focus case) occur as the parameter values change in a small neighborhood of the appropriate parameter values, respectively. Moreover, we provide some numerical simulations using XPPAUT to show that the model has two limit cycles for some parameter values, has one limit cycle which contains three positive equilibria inside for some other parameter values, and has three positive equilibria but no limit cycles for other parameter values. A competition-diffusion system with a refuge Daozhou Gao and Xing Liang 2007, 8(2): 435-454 doi: 10.3934/dcdsb.2007.8.435 +[Abstract](2679) +[PDF](272.5KB) In this paper, a model composed of two Lotka-Volterra patches is considered. The system consists of two competing species $X, Y$ and only species $Y$ can diffuse between patches. It is proved that the system has at most two positive equilibria and then that permanence implies global stability. Furthermore, to answer the question whether the refuge is effective to protect $Y$, the properties of positive equilibria and the dynamics of the system are studied when $X$ is a much stronger competitor. The role of evanescent modes in randomly perturbed single-mode waveguides Josselin Garnier 2007, 8(2): 455-472 doi: 10.3934/dcdsb.2007.8.455 +[Abstract](2525) +[PDF](228.7KB) Pulse propagation in randomly perturbed single-mode waveguides is considered. By an asymptotic analysis the pulse front propagation is reduced to an effective equation with diffusion and dispersion. Apart from a random time shift due to a random total travel time, two main phenomena can be distinguished. First, coupling and energy conversion between forward- and backward-propagating modes is responsible for an effective diffusion of the pulse front. This attenuation and spreading is somewhat similar to the one-dimensional case addressed by the O'Doherty-Anstey theory. 
Second, coupling between the forward-propagating mode and the evanescent modes results in an effective dispersion. In the case of small-scale random fluctuations we show that the second mechanism is dominant. Homogenization in random media and effective medium theory for high frequency waves Guillaume Bal 2007, 8(2): 473-492 doi: 10.3934/dcdsb.2007.8.473 +[Abstract](3629) +[PDF](279.4KB) We consider the homogenization of the wave equation with high frequency initial conditions propagating in a medium with highly oscillatory random coefficients. By appropriate mixing assumptions on the random medium, we obtain an error estimate between the exact wave solution and the homogenized wave solution in the energy norm. This allows us to consider the limiting behavior of the energy density of high frequency waves propagating in highly heterogeneous media when the wavelength is much larger than the correlation length in the medium. Distributional convergence of null Lagrangians under very mild conditions Marc Briane and Vincenzo Nesi 2007, 8(2): 493-510 doi: 10.3934/dcdsb.2007.8.493 +[Abstract](2309) +[PDF](249.8KB) We consider sequences $U^\epsilon$ in $W^{1,m}(\Omega;\RR^n)$, where $\Omega$ is a bounded connected open subset of $\RR^n$, $2\leq m\leq n$. The classical result of convergence in distribution of any null Lagrangian states, in particular, that if $U^\ep$ converges weakly in $W^{1,m}(\Omega)$ to $U$, then det$(DU^\epsilon)$ converges to det$(DU)$ in $\D'(\Omega)$. We prove convergence in distribution under weaker assumptions. We assume that the gradient of one of the coordinates of $U^\epsilon$ is bounded in the weighted space $L^2(\Omega,A^\epsilon(x)dx;\RR^n)$, where $A_\epsilon$ is a non-equicoercive sequence of symmetric positive definite matrix-valued functions, while the other coordinates are bounded in $W^{1,m}(\Omega)$. Then, any $m$-homogeneous minor of the Jacobian matrix of $U^\epsilon$ converges in distribution to a generalized minor provide that $|A_\epsilon^{-1}|^{n/2}$ converges to a Radon measure which does not load any point of $\Omega$. A counter-example shows that this latter condition cannot be removed. As a by-product we derive improved div-curl results in any dimension $n\geq 2$. Generators of Feller semigroups with coefficients depending on parameters and optimal estimators Jerome A. Goldstein, Rosa Maria Mininni and Silvia Romanelli 2007, 8(2): 511-527 doi: 10.3934/dcdsb.2007.8.511 +[Abstract](2078) +[PDF](301.6KB) We consider the realization of the operator $L_{\theta, a}u(x) $:$= x^{2 a}u''(x) \ + \ (a x^{2 a - 1} + \theta x^a)u'(x)$, acting on $C[0,+\infty]$, for $\theta\in\R$, $a\in\R$. We show that $L_{\theta, a}$, with the so called Wentzell boundary conditions, generates a Feller semigroup for any $\theta\in\R$, $a\in\R$. The problem of finding optimal estimators for the corresponding diffusion processes is also discussed, in connection with some models in financial mathematics. Here $C[0,+\infty]$ is the space of all real valued continuous functions on $[0,+\infty)$ which admit finite limit at $+\infty$. 2020 Impact Factor: 1.327 5 Year Impact Factor: 1.492 2020 CiteScore: 2.2 Special Issues Email Alert [Back to Top]
Tag Archives: #capitalism The Few Body Problem & the Metaphysics of Stupidity 13. III 2022 A vibrating string represents the collective motion of a system of (practically) an infinite number of atoms. Its properties and behavior are very different from those of its constituents. When the collective sets in, the system loses knowledge of its building blocks and obeys an altogether different set of rules: A string made of nickel atoms behaves (acoustically) the same way as a plastic string composed of complicated organic molecules. In terms of complexity, the collective is a nonlinear function of the size. A one-body problem is easy to handle. A two-body is more complicated, but in most cases tractable. Three-body is very difficult, while the few-body problem is impossible. However, an infinite-body problem is easy. Loss of granularity washes away as the number of degrees of freedom increases. The wave equation describing a vibrating string is significantly simpler than the Schrödinger equation for a single atom; their discoveries are separated by two centuries. Collective IQ < Average IQ When it comes to intelligence, a similar pattern unfolds — size is its enemy. As the group grows, at some point, it inevitably begins to get stupider. It is not difficult to fool a single person. All you need is some persuasive skills and a little intelligence. Fooling two people can be complicated – they can compare their thoughts and come up with non-overlapping objections and increase resistance to persuasion by filtering out the nonsense more effectively. Fooling a few, say five, people is practically impossible, even if they are of average intelligence. They retain their individuality (and independent thinking) while their cooperation still remains strong. Manipulating large masses, however, can be very easy (as witnessed by numerous historical examples and confirmed by the experience of the last five years). Large groups would believe what even its stupidest members would reject on their own. As the group grows beyond a certain size, the task of deceiving them becomes progressively easier. Individual wisdom and constructive cooperation changes and gives way to collective thinking where individuality is lost. In large groups, the collective IQ resides significantly below the average IQ – no matter how intelligent individuals are, their collective intelligence will be low. Although this inequality is an empirical observation, it is never violated in practice. Size inspires special behavior: When a group become large, it has no resemblance to and no logic of individual behavior. Masses can always be manipulated with stories that would never work on individuals. It becomes increasingly more difficult to rebel against the consensus – the loss of individuality that results after such capitulation of the mind leads to loss of resistance to persuasion. You can disagree with collective stupidity, but your resistance is inconsequential. Subjectlessness of humanity Only individuals can be wise; institutions are well designed, at best. (Peter Sloterdijk) Financial markets are often miscast as an example of an intelligent collective. Although they are treated as such, markets are not an entity in the true sense of that word, but a self-optimizing medium. All market participants have the same well-defined objectives, which streamline and unify their actions and push them to act in the same direction by doing everything possible in order to maximize profit. 
This leads to the propagation of ideas by the smartest players to everyone else and orients everyone towards the “smart consensus”, what is considered ex-ante as an optimal action. Corporations are collectives. However, in their (misguided) attempts to emulate some of the market’s behavior, like meritocracy, transparency, and accountability, and transpose them to the context where they don’t belong, they create obstacles and impediments to their efficient functioning and permanent sources of corporate dysfunctionality. There is a long history of their continuous struggle against underlying the trappings which come with that predicament. Casting businessmen (successful or unsuccessful) as political leaders is a bad idea, a very bad one, actually. Seeing society as a corporation and running it as such, cannot lead to good outcomes. Humanity is even further removed from a market-like medium than corporations. It consists of people with heterogeneous (most often conflicting) objectives. Their goals cannot be quantified and are far from unifiable.  When applied to humanity, the classic model of learning from harm collapses before this fact. In the words of Peter Sloterdijk: Humanity is a priori learning impaired because it is not a subject. It has no self, no intellectual coherence, no reliable organ of wakefulness, no self-reflection capable of learning, no identity — building common memory. Humanity cannot be wiser than a single human being. It has no body of its own with which to learn the hard way – no hand to learn first-hand – but rather a foreign body, its place of residence, the earth, which does not become wise, but transforms into a desert[1]. Humanity is to humans what a vibrating string is to atoms — its intelligence is inferior to even the sub-average intelligence of all humans. The intelligence problem and the power of 16-percenters Think of how stupid the average person is, and realize half of them are stupider than that. (George Carlin) Things don’t look encouraging when observed at higher resolution. This is a graph of the IQ distribution. The average IQ is around 100 with 68% of population residing inside the two standard deviations range, between 85 and 115, which means that about 16% are of deep sub-average intelligence. These numbers are fairly robust across different countries in the developed world. This distribution becomes particularly alarming when applied to a large relatively non-oppressive country. In the context of modern liberal societies, the synergy of stupidity, size, and democracy reinforces the malignant potential of the stupidity of the collective. Transcription of these numbers to America implies that about 53 million (16%) people (entire population of France) are of sub-average intelligence, out of which 7 million (entire Bulgaria) is seriously impaired. These people are empowered to express their opinion and impose their will in the ballot box. By mobilizing the left side of the distribution behind a single political movement – a maneuver that represents a collectivization of mediocrity — makes them even stupider by lowering their collective IQ further, and persuading them to believe in pretty much anything. When their discontent is streamlined and wrapped into a single narrative, in an electoral democratic system, these 16-percenters can become a decisive factor[2]. Empowered by their malignant stupidity, such people are capable of committing the most extreme atrocities as they have been throughout human history. 
Humanity cannot outgrow its own death drive Intelligence is not a theoretical quantity, but represents a behavioral quality of creatures in an open environment. (Peter Sloterdijk) Humans are generally intelligent, but this individual intelligence fails to get collectivized. This has only become worse with progress and the general trend of increasing acceleration and addiction to speed. The long term has become so long that it now exceeds our capacity for statistical prediction, but the short-term has accelerated so much that snap decisions are the only decisions ever made. The stakes have become higher – short-term survival is no longer guaranteed, which leads to a shift of focus. In the face of the urgency of short-term survival, long-term foresight collapses. This defines the tradeoff — the lower the odds of survival, the weaker the desires and capacities for grasping the long-term. As the group size increases and individuality fades away, collectivization inevitably leads to abdication of responsibilities. This leads to collective myopia, which attracts its membership and supports the group’s desire to grow. As a consequence, we no longer engage in intergenerational projects — passing the baton to the next generation is the best we can do (as a collective).   This removal of the long-term perspective, its subversion, leaves power dominated by short-term forces, which under the capricious conditions of the market forces requires adaptive, liquid or transient strategies as a basic skill set. At a systemic level, change is taking the form of positive feedback. In conditions of general info acceleration and hypercomplexity, as conscious and rational will become unable to adjust to the trends, the trends themselves become self-reinforcing (up to the point of collapse)[3]. For years now, the Right-wing populism of the capitalist West has been tapping into the left side of the IQ distribution. This has proven to be a very successful strategy for their project. Unsurprisingly, in the most spectacular staging of abdication of collective responsibility, thus cultivated populist movement became the epicenter of insane resistance to simple measures of containment of the COVID pandemic. At the core of the incoherent response to the pandemic – the spectacular failure of adjusting to the most straightforward problem of self-defense of the collective body – resides collective abdication of responsibility. This was a simple test of common sense, accepting the most basic measures any single human would normally have no problems accepting, but which collectively encountered resistance on a large scale (bordering on hysterical) causing, at the end, massive casualties, financial and economic damages, and unnecessary complications and extension of the pandemic. The resistance to alignment with simple and logical adjustment to an existential threat is just another illustration of the erosion of basic survival instincts caused by decades of deliberate and programmatic anti-science project and glorification of mediocrity. In the world of infinite acceleration, humanity is spontaneously converging towards a state of maximum cognitive incompetence, a collective Dunning-Kruger effect. According to the latest statistics, there are about 41 million Q-anon believers in the United States. However, this does not mean that capitalist democracies carry exclusive blame for the degradation of intellect and the rising rate of malignant stupidity. Rather, it is a combination of human nature and the law of large numbers. 
As much as Soviet-style communism pretended to have sought to divert the inevitable self-destructiveness of capitalism, it merely reinvented different and more efficient ways of self-destruction. A similar story goes with fascism. Communism’s record of ecological misconduct, which has penetrated deep into the territory of criminal, is just one of many examples of its self-destructive overdrive. Its pretended ideological attempts to be something else from what it really was were just failed diversions that merely accelerated the inevitable. Welcome to Asbest Russia is the largest country in the world by size. Nazis dreamed of conquering it as the Lebensraum for the new super-race. They failed, but so did the Russians. Instead of converting their resource-rich land into a prosperous superpower, despite Russia’s considerable cultural heritage, they have been struggling for centuries and still resemble in many ways a third-world country with staggering levels of large-scale corruption, chronic scarcity, high levels of poverty, and rampant inequality. After the failure of the Soviet experiment, Russia became a different type of Lebensraum for malignant stupidity of griftopian turbocapitalism and a laboratory of myopic ecological experimentation. On the east side of the Ural mountain range, about 1000 miles east of Moscow and 2000 miles north of Kabul, resides the town of Asbest, the three forming a nearly perfect rectangled triangle. Asbest (the Russian word for asbestos) is one of hundreds of mono towns of the post-revolutionary Soviet Union, established according to the tenets of planned economy. As its name suggests, Asbest is the center of asbestos mining, with the largest open pit asbestos mine in the world, 1000 ft deep and the size of half of Manhattan. As 59 countries have outlawed usage of asbestos and phased out any production due to its carcinogenic effects on humans, Asbest has become the world’s largest producer of the substance, which, by global ecological standards, is considered a criminal enterprise. About 70% of Asbest’s budget comes from the asbestos industry. At the town’s entrance, drivers are greeted by what looks like a béton-brut installation in place of a welcoming billboard – a concrete structure, suggestive of a stylized arrow pointing downwards, with a coat of arms, representing asbestos fibers through a ring of fire at the top, and the text, below, broken in two lines: Asbest, my town and my fate! It is not clear if this was supposed to be ironic or not, but it certainly has an ominous vibe and strong overtones of dark humor. There are numerous motivational billboards in the town itself with text emphasizing the compulsory optimism of yesteryear, the most striking one stating: Asbestos is our future! Asbest, my town and my fate Breaking rocks and extracting the chrysotile from the mining pit is usually done with dynamite. This creates enormous clouds of asbestos dust, which covers everything in the town, from cars, rooftops, window, and parks, to fruits and vegetables people grow in their gardens. Compared to the rest of the Sverdlovsk Oblast, Asbest has 30-40% higher incidence of cancer, a fact that remains carefully hidden from the public. Most of workers in asbestos processing plant have persistent coughs, a symptom of exposure to what they call the white needles, and strange skin ailments. Its population is slowly depleting with high mortality — the town has been losing about 1% of its population every year since the 1990s. 
And as if afraid to miss inserting yet another piece of irony here, local authorities have erected a monument to residents who have died (presumably from asbestos exposure) made of an asbestos block with the inscribed text: Live and Remember. After the collapse of communism, without skipping a beat, the town of Asbest transitioned seamlessly from the clutches of ideological incompetence of the Soviet era to the unconditional greed of post-communist kleptocracy. Unlike other mono towns (where about 25 million people, 16% of the Russian population, still live), which became dying cities, Asbest did not die instantaneously. Rather, it repositioned for a slow death. Instead of regulating human nature, capitalism as well as both communism and fascism only continue to reaffirm, time and again, what humans are truly capable of and enabled the full realization of that potential. And we haven’t seen the last of it, not yet. Free or oppressed, unable to avoid the degradation of collective intellect and preserve the wisdom of the few, humanity will always find ways to hurt itself. Like post-communist Russia, Western democracy has been caught in a hypnotic ritualistic trance of the spectacle of its own cultural creation and self-consumption, the two fatal modes of modernity Jean Baudrillard identified as: Carnival & Cannibal. The self-imposed ignorance and collective myopia have reached the point where the West has elevated its own annihilation to a supreme aesthetic act. Against that backdrop Asbest is our future has acquired a universal metaphoric ring as a mantra of the directionless escape of mankind where the endgame appears unavoidable — a slow death in a hyperoptimized dystopian trap. This is the realization of Arthur Schnitzler’s vision of the human race as an illness of some higher organism, within which it has found a purpose and meaning, but which it also sought to destroy, in the same way virus strives to annihilate the ailing human organism and in that process destroys itself. [1] Peter Sloterdijk, Infinite Mobilization, Polity (2020) [2] These numbers, although larger or comparable to the USA, are less alarming when it comes to Russia, China or India. In the former two, high coercive powers of the state prevent large-scale stupidity to metastasize, while in India, where more than 50% of the country is under no one’s control, it is the fragmentation and absence of coherence along the lines of language, religion, culture, education, and social hierarchies, that prevent the collective to set in. [3] Zygmunt Bauman, Liquid Times: Living in an Age of Uncertainty, Polity (2006) The Year of the Abject: Making Sense of Nonsense 31. XII 2021 Within the boundaries of what one defines as subject (a part of oneself) and object (something that exists independently of oneself), there reside pieces that were once categorized as a part of oneself or one’s identity that have since been rejected – the abject. (Julia Kristeva) Unless we are consciously drawn to it, we, for the most part, are not fully aware of our saliva. It is part of our body, an utterly neutral liquid, which we produce and swallow continuously as long as we are awake. However, this is true only as long as it remains in our bodies. Imagine periodically spiting into a glass and attempting to drink it once it fills up. The very thought of this causes utter disgust. As soon as our bodily fluids have become alienated from us, they become abject. 
Abject represents the taboo element of the self; it rejects and disturbs social reason and the communal consensus that underpins social order. The Indian caste of eunuchs represent a castrated remainder of a fully functional biological body, cast out, distanced, but not completely. In modern capitalism, the excluded segment of the population — those who fell through the cracks and can no longer be reintegrated into normal functioning of society – is the neoliberal equivalent of Indian eunuchs. Their proximity reinforces an anxiety that their destiny could become everyone’s prospect; their presence is a reminder how narrow the gap is between a comfortable middle class life and precarity. We have an ambivalent relationship with the abject — we are both drawn to and repelled by it. The ambivalence and inherent dialectics of the concept is encapsulated in the very word, which can function both as an adjective/noun as well as a verb. The verb to abject comes from the Latin abicere, which means to throw away or to cast out. The action of abjection refers to an impulse or operation to reject that which disturbs or threatens the stability of the self and is inassimilable. As an adjective, abject has two meanings: 1) Extremely unpleasant and degrading (living in abject poverty), and 2) Completely without pride or dignity (an abject apology). [1] The abject functions both as a repulsive and as an attractive fixed point of subjectivity. The concept is at the same time constructive (in the formation of identity and one’s relationship to the world) and destructive (in what it does to the subject): Abjection, the operation to abject, is fundamental to the maintenance of subjectivity and society, while the condition to be abject is subversive of both formations. The key to this duality is that the abject is not fully exogenous. The body of the excluded The volume of humans that are made redundant by the global triumph of capitalism has grown so much that it exceeds the managerial capacity of the planet. They cannot be re-assimilated into the “normal” life pattern and reprocessed back into the category of “useful” members of society. (Zygmunt Bauman) By its very nature, capitalism generates abject social bodies as a part of an excess population. Unlike criminals, social outcasts, homeless, illegal immigrants or general categories of aliens, who are transported beyond the boundaries of the enclosure of prosperity, the redundant white underclass has escaped the transportation and remains on the inside where economic balance and social equilibrium are sought. However, the longer the redundant population stays inside and rubs shoulders with the useful rest, the less the lines separating normality and abnormality appear reassuringly unambiguous. Assignment to waste becomes everyone’s potential prospect[2]. The white underclass represents the abject social body which cannot be completely objectivized but whose presence threatens the existing symbolic order. They cannot be fully reassimilated into normal life patterns and reprocessed back into the category of useful members of society — they lack the skills required for reintegration — but they cannot be discarded either; they carry a sense of entitlement as a constitutive element of the cultural and historical heritage that defines today’s America. 
The abject lean on subject’s stability — their presence threatens the implicit culturally established boundaries of what is considered normal, causing the subject to feel vulnerable because its boundaries are under threat. The white underclass cannot be ingested or incorporated into the system — they are like bodily fluids that have departed the (social) body — appalling, but, at the same time, a part of the (social) body-image that carries a prospect of everyone’s destiny. The very thought of their reintegration, has becomes revolting, while, at the same time, they cannot be fully objectivized either. The abject gambit The abject hovers at the boundary of what is assimilable, thinkable, but is itself unassimilable which means that we have to contemplate its otherness in its proximity to us but without it being able to be incorporated. It is the other that comes from within (so it is part of ourselves) that we have to reject and expel in order to protect our boundaries[3]. The abject is a great mobilizing mechanism. While the state of being abject is threatening to the self and others, the operation of abjecting involves rituals of purity that bring about social stability. Abjection seeks to stabilize, while the abject inherently disrupts[4]. When the mass of the excluded increases to a size impossible to ignore, they trigger rituals of abjection, which work themselves into identity politics.The repulsion and efforts to distance from the excludedthe abjection – which reinforces the self-awareness of the social standing of regular folks, are in conflict with the attraction by the powers the abject population enjoys and exudes. They are the power bottoms in this relationship as they define the location, robustness and porousness of the boundaries of the enclosure. Fascination with the abject’s power pulls the viewers in, while they remain at arm’s length because of the threats the abject exert. This makes the excluded a tool that drives the wedge between different social groups and prepares the population for political usage of the abject as leverage. Objectifying minorities has been institutionalized in America since its inception — from slavery and Jim Crow to ghetto and hyperghetto, prisons, wars, opioids, and other tools of soft and hard marginalization. However, with the rise of the white underclass in the second half of the 20th century, American ideology has become highly nuanced around the questions of exclusion. To a large extent, the Right wing has stuck to its white supremacists roots of yesteryear (either in a closeted form or explicitly) while centrists, both Left and Right, have shown greater initiative in modernizing the process. However, when it came to exclusion of the white underclass, the problem proved to be more difficult. Complicated by globalization, technology, the decline of American manufacturing, weaning off conventional energy sources and the general decay of demand for labor, low-skill jobs have been disappearing irreversibly, and the ranks of white underclass grew unstoppably together with their discontent. Social outcasts and minorities are relatively easy to objectivize. Permanently excluded – criminals, drug addicts, homeless – they have already been cast out. The residual, white precariat, which has always been perceived as a building block of this country’s social fiber, remains still on the inside, but unable to get reintegrated within the context of modern developments. 
In a white dominated/ruled society the marginalization of the excluded white subproletariat has been a political hard sell. They grew in size and have acquired a sense of entitlement minorities never could. Their sudden political awareness, no matter how fragile, has become an expression of pleasurable transgressive desires. As a new center of social subjectivity, theydraw their power from this position, which serves as an inspiration for their own identity politics. The emergence of 21st century Right-wing populism represents the biggest innovation on that terrain. Right-wingers now recognize the abject as a source of political leverage and, instead of exclusion, their program revolves around subjectivizing them. Voluntarily casting oneself as abject — identification with the white subproletariat – has become a quest for authenticity, aimed at acquiring a stigma in order to become a credible voice of the marginalized. This is the core of the modern populist abject gambit. Poetic catharsis: Politics in the kingdom of unreason Poetic catharsis is an impure process that protects from the abject only by dint of being immersed in it. (Julia Kristeva) In past autocratic systems, leaders had their own eccentricities and aberrations (e.g. Stalin’s paranoia, Kim Jong-Il’s sadistic personality disorder or vindictive narcissism of countless number of dictators and autocrats), but societies, collectively, didn’t suffer from them — there was a variety of afflictions that coexisted without any coordination with their leader –people were depressed, anxious, indifferent, etc. while their leaders remained an idiosyncratic singularity. In contrast, in contemporary populism the leader is styled as an embodiment of collective afflictions – he becomes a performance artist who functions as a concentrated version of collective social traumas, grievances, and anxieties. He appropriates the collective paranoia towards the deep state, the sovereign citizens fetish, the second amendment fixation, tax evasion obsession… Self-abjection of the Western political Right is pseudo-authenticity at all cost: Racism, misogyny, denialism, antivaxerism, conspiracy fantasies, and other flat-earth derivatives channel widespread collective anxieties through their leader. Perceived as a medium of grievance and spokesmen for collective traumas, politicians of the populist Right have been absolved of any accountability. Their biggest strength and their superpower is the absolute absence of any shame and embarrassment, even when faced with undeniable proof of their incompetence, lies, criminality and lack of an ethical backbone, no matter how obvious and damaging their culpability might be. They have been set free to establish new benchmarks of shamelessness, a unique political skill that always keeps them one step ahead of their political opponents, which has opened an entirely new political terrain never accessible before. The Populist politics now function as poetic catharsis: Through mimicry with their constituents political leaders no longer lead but surrender, resulting in a fragile and shifty consensus that is reinforced with their each action. Their activity consists of looking for themes that create resonance points capable of producing the loudest reverberations. Politics becomes hyper-optimized — there is not a single spec of life that is not used as leverage – but, in that process, it loses its robustness, becomes thinly spread and fractures under tinniest of shocks. 
The emergence of the rapidly growing white underclass and its irreversible marginalization in the last decades is beginning to get recognized as the fatal flaw of the American experiment, an outcome that is in conflict with its founding axioms and an evolving national trauma threatening to void it. Things have gone terribly wrong in the last 50 years — the accidental wounding of the American white malehood by the inner workings of neoliberalism has been the unintended consequence of capitalist progress[5] with which the system has not been prepared to deal with in any form. The Right wing populism of the last decade has become the last desperate attempt to save this failing experiment regardless of costs. Defeated in the ballot box, the battle to save wounded white malehood has assumed a less conventional form. In its desperation it has escalated to a suicide mission whose contours were unambiguously underlined in the first week of the past year. As much as the political center may want to distance itself from the white underclass and its populist political representation, the significance of that moment forces them to pause and rethink one more time whether they are really prepared to win this battle and write the obituary for the American experiment. If 2017 was the year when unreason was set free, then 2021 is the year of its proliferation – it is everywhere and nowhere. There are no more individual GOP members or voices anymore, only the opaque background of unreason against which they perform a choreographed dance of non-overlapping sequential appearances on the center stage of political spectacle hoping for a moment of public attention to make the absurd palatable and promote abnormal as the facts of life. The Right-wing political kabuki functions like a medieval mechanism of an astronomical clock on a church of unreason. The puppet-apostles of that church have a fixed position on a slow rotating carousel, parading through the window of shared reality in a mechanized procession, always one at a time, like luggage pieces on the conveyer belt of baggage claim from a flight which arrived without passengers, occasionally voicing their presence through monologues of nonsense, hoping that someone would notice them. The mechanism of the rotating Apostles inside the Prague Astronomical Clock of the Orloj church [1] Rina Ayra, Abjection and Representation: An Exploration of Abjection in the Visual Arts, Film and Literature, Palgrave Macmillan; 2014th edition (2014) [2] Zygmunt Bauman, Wasted LivesModernity and Its Outcasts, Polity (2003) [3] Julia Kristeva, Powers of Horror: An Essay on Abjection, Columbia University Press; Reprint edition (1982). [4] Rina Ayra ibid. [5] Wendy Brown, In the Ruins of Neoliberalism: The Rise of Antidemocratic Politics in the West, Columbia University Press (2019) 28. VIII 2020 In the next few years, social disorder in developed countries could take new dimension as demographic imbalances continue to weaken state structures further. This could be expressed through two different modes. 1) The discontent of ethnically excluded (e.g. Western Europe’s post-colonial minority populations) spreads to absorb and articulate the sentiments of other exclusions. 2) The discontent of the permanently excluded, like African Americans, provokes a reaction of the redundant natives, the white underclass, and triggers their uprising and backlash. Civil warfare, initially misdiagnosed as increase in crime, would escalate. 
The scramble for protection (which has already begun) assumes new forms, as the states cannot provide it due to lack of funding and legitimation. The state’s monopoly on violence is breached and reorganized through the expansion of private protection armies, right-wing militias, and different privatized police structures. This process had already been accomplished in the post-socialist countries about 25 years ago and is likely to serve as a blueprint for a similar transformation in the western world. Western democratic states where these transformations take place will gradually converge towards failed states. Contours of this program are already inscribed in the appointments for high public offices by the current administration. Combined with the other side-effects of globalization and the underlying social fragmentation, these developments will lead to further criminalization of societies and polarization of distribution with escalation of corruption and dismantling of the institutions of the democratic state as a natural consequence, implying further instabilities. Organized crime will blossom and reinforce its legitimacy, while developed countries will converge closer towards criminal oligarchies or other authoritarian structures. As an economic system, capitalism (at this point) is showing an advanced decline in capacity to underwrite a stable society. What follows after such a disintegration of a system is a prolonged period of social entropy and disorder. For a significant length of time, a society would slip into less than a society – a society-lite — until it may or may not recover and again become a society in the full meaning of the term. Out of all possible paths, this is the most radical outcome, one that is without a historical precedent and one we seem to be least prepared for. It corresponds to what Wolfgang Streeck calls the Interregnum: Disintegration of society as such, a perpetual anotherhood – pregnancy without childbirth — a trajectory where current times of trouble continue indefinitely. The divided subject of labor market 17. IX 2017 For the first time since the advent of industrial age, new technology is destroying more jobs than it is able to remobilize. Productivity and employment have begun to diverge from each other since the last years of the 20th century – productivity accelerates while employment decelerates. This is the new reality. While good for profits, this is becoming a major setback for labor, a source of positive feedback in the system and a destabilizing force for the entire economy and society. The profit maximization equation can no longer be satisfied: The recipient of wages (and social benefits) is expected to perform an impossible task of supporting increasing consumption, which accounts for an ever growing fraction of GDP, while being paid less in an environment of rising living costs. Credit, which had been conceived as the magic bullet aimed at bridging this imbalance, has turned to be another source of positive feedback leading to unsustainable borrowing and balance sheet crisis from which it is difficult to engineer economic and social recovery. Work is at a crossing point of history, going through a significant transformation, second since industrial age, with profound economic and social implications. Both new technology and credit, together with dismantling of the welfare state, have been the drivers of surplus labor and erosion of demand. 
It is becoming clear that we need less labor to produce the same output and that further rise in growth is conceivable without a rise in employment and wages. Work has become the biggest bubble which is about to burst. This is the limit where economic and social rationalities collide. Disappearance of work in work based societies is no longer only an economic issue, but a wider social and political problem and a crisis of the entire system of values. Work alone A priori, there is nothing appealing about wage work. It is all about the employers; they set the rules, workers comply[1]. Work is generally an unpleasant task, something we rather would not do. It goes against our nature and conflicts with our free will. Unlike work for subsistence, which we (most of the times reluctantly) do, wage work is an outcome of a voluntary optimization process. Workers effectively agree to surrender a portion of their free time in exchange for salaries. When seen from the modern perspective, work defines our social identity. It is a gift to society and our contribution to the project “better future”, a sacrifice we are willing to make for collective wellbeing. Work is viewed as our moral duty, social obligation and the road to personal success. However, work as we know it today is a relatively recent phenomenon. For example, in Ancient Greece freedom was exclusively located in the political realm and necessity was a prepolitical phenomenon. Those who had to work were slaves to necessity considered incapable of making ethical decisions, and therefore, not part of political life[2]. The modern notion of labor appeared with the advent of manufacturing capitalism. From the modern perspective, production was not governed by economic rationality. The objective was to work as much as it takes to earn a wage necessary for subsistence rather than earn beyond that by working as much as possible. The economic rationalization of labor was a major novelty at the time. It presented a radical subversion of the way of life. In order to overcome workers’ unwillingness to work long hours, factory owners had to pay them meager wages, which forced the former to put in long hours every day of the week in order to earn enough to survive. Labor became part of reality distinct form everyday life. However, in the course of time, with development of industrial society, work became the Siamese twin of life. Technology and labor in postindustrial age While in pre-industrial societies innovation and competition were strictly prohibited, postindustrial age, in contrast, is characterized by its addiction to innovation. Innovation has turned out as a major trigger of a reinforcing mechanism of economic exhaustion. The primary reason is that innovation is a source of rent — prices are no longer commensurate with production costs, but contain a scarcity premium. Profit centers always compete in terms of their capacity to innovate. Higher output leads to more investment in innovations which lead to new technologies, which means higher output and even more innovations. However, technology reduces need for labor and so the workers have to work for lower wages, which reduces labor costs of production and increases output, which means more investment into new technologies, which further reduces the need for labor and lowers wages further. This process continues until it exhausts itself and there is no more room for labor. 
When labor is scarce, workers have some bargaining power – they could refuse to work and the producers are willing to make concessions to workers. As long as profit margins are high, there will be money for everyone. Problems begin when margins begin to compress. Cost cutting eliminates jobs either through automation or relocation to regions with cheap labor or forces the workers to accept lower wages. As a consequence of innovation, work ceases to be the main productive force and wages the main production cost. Output is produced more by capital than by labor, and labor gradually loses bargaining power as its choices become reducible to dilemma between poorer working conditions and unemployment. As a consequence of these developments we have had tree major trends that emerged in the past decades: Decline of wages, reduction of government spending (a.k.a. dismantling of the welfare state), and continued rise of consumption as a fraction of GDP (currently near 70%). Over time they have created cumulative imbalances and dead-end conditions, which have resulted in the 2008 crisis and conditions where further recovery from the crisis is becoming increasingly more difficult to engineer. These trends define the current landscape. Any attempt at change becomes a source of positive feedback that only destabilizes things further. Devalorization of labor and the new standard of subsistence Credit is another source of positive feedback. Low wages force more reliance on credit which causes higher living costs (more liabilities and less money for subsistence), so more people have to work (e.g. not just the head of the household, but their partners, kids….), and they have to work longer hours which further increases labor surplus and forces lower wages and amplify reliance on credit which increases living costs further. Servicing debt becomes the main liability, which further undermines bargaining power of the workers. This continues until debt becomes a burden than can no longer be born. In some sense, we are being pulled back towards early industrial age. In those days, the unwillingness to work beyond subsistence had caused employers to pay lower wages to force workers to work long hours in order to earn for their basic needs. Labor market was inefficient: Demand for labor was high, but workers were reluctant to work. Early industrial era worker had a limited capacity to desire and the opportunity of earning more was less attractive than that of working less. Salaries had to be low to force people to work hard in order to earn for subsistence. Although, the end result (low wages) coincides with the current predicament, the causality chain is different. Late 20th century economies grow only if people consume beyond their needs. The ability to desire – the consumer libido — has to be maintained systematically and that mechanism has to be incorporated into ideology as work ethics and wage work to become closely associated with social status. With pressure to maximize profits, and therefore limit wages, this program could only be achieved if wage recipients continued to borrow more and more, especially if their liabilities continue to grow. For that, they need jobs, but jobs do not pay. So, they have to work harder, put in longer hours, to be able to survive. Unlike early industrial age when scarcity of labor was the dominant factor, in post-industrial economies, supply of labor continue to climb together with costs of living high. 
Preindustrial concept of “enough”, which in the early days defied economic rationality, gained new life in the light of postindustrial developments. Its meaning is now being redefined by credit. The problem is no longer the individual attitude towards work, but the collective response to the cumulative effects of excess rationality. Credit redefines what subsistence means. It is a conversion factor from desires to needs. As seen from the workers’ side, the effect of increased efficiency of production, brought about by technology, is offset by credit. It naturally extends what our needs are and sets a new standard of subsistence and determines how much we have to earn for survival. Contrary to the economic dogma and cults of free market ideology, competition has led to suboptimal outcome for labor. Despite all technological advances, there has not been a commensurate decrease in working hours. Work won’t be revolutionized, it will be auctioned The objective of profit centers is to make money and, if they happen to create jobs, that is good, but not necessary if it negatively affects their profitability. Keeping this as priority for the future, changes of the labor force would have to be made accordingly. Some contours of the fragmented labor force are already beginning to show along these lines of adjustment. The assembly line has colonized a wide range of jobs. With the rise of cognitive economy and de-emphasis of material production, workers are divided into four main categories: Inventors of ideas and desires, educators (responsible for reproduction of labor), salesmen of products and producers of desires, and routine laborers[3]. We could refer to them metaphorically as over the counter or OTC (first three) and exchange jobs (the last one). OTC jobs can never be made generic; they always carry some unique component of personal skills that cannot be fully automated. Routine laborers, on the other hand, require no particular social skills. They are an extension of assembly line workers, but in a wider context that includes technical and intellectual skills. They are always replaceable and therefore treated as expandable. Extrapolation of the current trends leads to a limit where workers become a shadow category. They no longer exist, only their time does, always ready to engage in exchange for a temporary salary. In that environment, the next step towards improving the efficiency of transaction between capital and labor are job auctions. A finite term, e.g. 2000-hour or zero-hour, job would be offered in an auction and given to the lowest bidder. Profit centers would face high flexibility at expense of labor force whose bargaining power could decrease further. The labor force would be self-trained and offer high-level skills on an increasingly precarious landscape. Those with superior skills could demand additional accommodation that could smooth their consumption across periods without jobs, which could create a need for intermediaries, job brokers who have stables of workers with standardized skills on whose behalf they bid for part time jobs. Added flexibility of employers eliminates pressure to have a long-term view and strategy. Instead, there is a sequence of short-term tactical positions with an ability to quickly adjust labor costs to different market conditions. If this is indeed the case, it could create a reinforcing mechanism where their output trails the economy and never completely recovers or rebounds. 
Disappearance of permanent jobs would have a dramatic impact on credit market. It would increase urge to save more and would affect ability of long-term borrowing, with direct impact on housing market, education, consumption, etc. and, therefore, adverse effects on economic growth. In the extreme, demand for labor completely disappears — everyone works for himself. This is the most radical social transformation from society of workers to society of employers. The ultimate irony is people employ themselves but end up working long hours and paying themselves poorly. Work is gradually emerging as the biggest hoax in the history of humankind. We have come a long way from the early days of capitalism where its basic antagonism was defined by the dynamics of capital and labor. It is reduction of life to work, and not capitalist exploitation, what makes work alienating. This particular aspect is what has led to the rapid dead end. In taking work as a given, we have depoliticized it, or removed it from the realm of political critique. Wage work continues to be accepted as the primary mechanism for income distribution, as an ethical obligation, and as a means of defining others and ourselves as social and political subjects[4]. There is an urgency to emancipate ourselves form work. Crisis of work is signaling also a crisis of imagination. We cannot imagine postwork society. This is the biggest problem. [1] “Work is a paid activity, performed on behalf of a third party, to achieve goals we have not set for ourselves, according to procedures and schedules laid by the persons paying our wages.” (Andre Görz, Critique of Economic Reason, Verso 1989) [2] ibid. [3] Richard Sennett, The Corrosion of Character: The Personal Consequences of Work in the New Capitalism (New York: W. W. Norton & Co., 1998 ) [4] Kathi Weeks, The Problem with Work, Duke University Press (2011) Adventures in heterotopia: The things we left behind 25. IX 2016 Invention of a ship is invention of a shipwreck, invention of a plane is invention of a plane crash, and invention of nuclear energy is invention of a nuclear meltdown. (Paul Virilio) Galileo’s real heresy was not so much his rediscovery that the Earth revolved around the sun, but his constitution of an infinitely open space. His findings dissolved the idea of the medieval concept of emplacement*. The space suddenly opened and disrupted the existing order of things. Localization gave way to trajectory and emplacement to extension. A thing’s place was no longer anything but a point on its trajectory, the stability of a thing was only its movement indefinitely slowed down. There was no up & down anymore, no celestial hierarchy. Instead of the universe resting on the back of a giant turtle, suddenly, everything was moving and out of place. Nobody was in charge anymore, and that was OK. The heavens were in a state of celestial anarchy. This was the emancipatory core of Galileo’s revolution. To a medieval mind, this was a picture of utter chaos. The idea of creation and design was seriously undermined and with it what was believed to be the Big Guy’s mandate (and authority). The Church, as His shopkeeper and interpreter of His will, saw this as bad for business and a problem for the franchise. Understandably, they had an issue with it, pronounced Galileo an evildoer and threatened him with violence. Galileo recanted, but it didn’t matter – religion’s golden days were over. 
Four centuries later, our experience of space is undergoing a second revolution, this time far more disruptive. With information technology and infinite connectivity, time is contracting, distances are shrinking and space is compactifying. The space of trajectories is giving way to networks and sites. Different geographies are becoming nodes on the global grid, equidistant from each other. The outside is gradually disappearing, absorbed by the expanding and elastic inside. The world has become smaller, but within that world things no longer have a fixed place; they are displaced and delocalized. Permanently and irreversibly.

The Network is a subversion of all terrestrial hierarchies. The concepts of center and periphery have lost their traditional meaning. All things are both equally important and irrelevant. Everything is now everywhere and nowhere — compactification and delocalization at the same time. An absolute rule of equivalence. The tyranny of transparency. The source of both claustrophobia and agoraphobia. The ultimate triumph of dialectics, simultaneously oppressive and liberating.

Things are no longer constrained by physical separation, seasons of the year, time zones, weather, climate... Companies can relocate to countries with cheap labor and real estate, lower taxes and an accommodative political climate. As long as the place is on the grid, and eventually all geographies will be, it doesn't matter where one is. The Network is everywhere, and so are the factories and companies and everything else. People are no longer bound to a particular locale; they don't even have to leave their homes to perform work. Everyone is gradually losing their identity in the face of persistent deterritorialization and uprootedness.

Unprecedented wealth accumulation afforded by the Network gives rise to a new, ungovernable, global overclass which now makes all major political decisions. States are powerless to interfere and effectively become its extended arm. As a rising tide lifts all boats, crime becomes more prosperous, organized and powerful – an increasing fraction of global wealth comes from, and is destined for, criminal sources. Gradually, everything becomes subordinated to the interests of global oligarchies, and their prosperity comes at high social cost. The pressure of equivalence is crushing everything in sight: histories, cultures, identities, futures, and symbolic meaning.

The same way Galileo wreaked havoc in outer space and disrupted the celestial order, the post-modern creation of the Network has been a disruption of terrestrial order, with the dissolution of historically rigid social structures. New technology has revealed every segment of society as an instrument of production, a human resource to be arranged, rearranged and disposed of. It has created major economic advantages and unprecedented opportunities for profit making. But this embrace of convenience doesn't come free of charge. The removal of market frictions and economic rigidities, and the erasure of borders, resulted in physical and cultural displacement, loss of identity, corruption, omnipresence of crime, a rise in violence, the dismantling of the welfare state and the rise of a carceral state, populism, regressive policies and political chaos. The very same technology that has proven to create the main economic advantage has also reduced the system's ability to change. The system has lost the ability to adapt and, with it, its main advantage, its vitality. It has suffered an autoimmune failure and is no longer able to recover from crises.
This is the shipwreck, the plane crash and the nuclear meltdown.

*Michel Foucault, Of Other Spaces, Heterotopias (1967)

There is something wrong with the future
28. VIII 2016

Give me back the Berlin wall
Give me Stalin and St. Paul
Give me Christ
Or give me Hiroshima
Destroy another fetus now
We don't like children anyhow
I've seen the future, baby:
It is murder[1]

Re-contextualization of murder: Society and human nature

[1] Leonard Cohen, The Future
[3] Franco Berardi, After the Future
[4] Dardot & Laval
[5] David Buss, The Murderer Next Door

I have returned there where I had never been
30. VI 2016

Debt and guilt are two intimately related concepts. In some languages (Sanskrit, Aramaic, Hebrew, German) the two words even have the same root — German makes it particularly explicit: Schulden (debt) vs. Schuld (guilt). In the same way that guilt implies we will have to atone in the future (or in the afterlife) for the sins committed today, debt is a handover of a part of our future in exchange for present consumption. The dynamics of capital accumulation is based on the perpetual process of investment in a borrowed future. The "borrow today and repay later" logic carries an implicit bet on the future. Without an optimistic outlook on the future, there is no lending or borrowing. Debt links the present and the future in a circular way: a prosperous future cannot happen without the present, and the present cannot take off without a belief in a (better) future. In this way, the very concept of the future undergoes a transformation in capitalism: it no longer represents a timeline we experience, but a concept we envision.

By now, the accumulation of debt has become so pervasive that today there is more debt than wealth in the world. No debt will ever be repaid. It exists in a virtual space with an understanding that it can never be allowed to intersect with the real world. Today, debt links institutions and individuals through virtual default — everyone is both a victim and an accomplice in that game[1]. So why does debt still persist?

Debt defines the power structure inherent in the debtor-creditor relation. It has become the main instrument of biopolitics, especially in the last decades of neoliberal hegemony. In the absence of real collateral (like a house, a car or any material good), the creditor feels entitled to impose upon the debtor modes of behavior consistent with the initial expectations of debt issuance. It is logical for the creditor to demand that the debtor maintain a lifestyle that guarantees his creditworthiness and ability to honor his obligations. For example, in the case of welfare (social debt), the government has the power and (it assumes) the right to pressure the welfare recipient into conduct that increases his chances of getting back on track — rehabilitated and reintegrated into mainstream society — so that his social debt is effectively reduced.

In the past, the United States, and other developed countries, used to finance the production of others — this was the traditional center-periphery interaction. Its credit-financed growth, which came to a halt in 2007, created domestic imbalances. This "domestic debt" had to be paid by borrowing from abroad — borrowing to service an already existing debt — a grand pyramid scheme of sorts. In an odd and misguided interpretation of the theory of comparative advantages, the United States specialized in the production of debt, but in international currency (the US dollar). This enabled others, e.g.
China, to "buy dollars" in exchange for its commodities[2]. To put it more bluntly, the United States imported from China commodities, labor and real products, in exchange for debt – a piece of paper, an IOU. (Who really got the better deal here, and who could potentially get screwed in this transaction?) Thus came about a strange situation in which the emerging world producers, the periphery, also became the net world creditors, on condition, however, that payment of the debt never be demanded.

The United States, the world's largest economy, owes foreign countries more than $6 trillion, about 1/3 of its GDP (and another $10-12tr domestically). To China alone, it owes $1.2tr, to Japan $1.1tr and to European countries around $1.5tr — about 2/3 of its total foreign debt is concentrated in three economic regions. In principle, these three (and, not to forget, rather powerful) creditors have the right to tell the United States how to "behave" — how to conduct its policies to ensure its ability to service and repay its debt. In turn, the US is incentivized to comply with whatever rules are imposed, this implicit "code of conduct", in order to maintain its creditworthiness and ability to borrow more in the future. Global capital can thus demand access to the US political process, and, in order to allow that access, US laws should be modified accordingly: global creditors are given a way to have a say about who is elected to policy-making offices, including the president of the United States. This is how debt becomes an instrument of global governance.

This is the same mechanism already seen at play when the IMF and the European Union used their "creditor rights" to disagree with the results of the Greek elections, the Greek choice of finance minister and the general shape of the local political landscape, followed by their insistence on imposing austerity measures in order to ensure Greece's ability to service its debt to the large European banks, to the detriment of the Greek economy and people.

In this way, the democratic process becomes compromised by the influence of global capital, which demands as collateral the ability to protect its interests: a presence in domestic policy, eventual access to real US assets, tighter regulations and smaller financial markets as a way of reducing default risk, or more favorable trade agreements. Submission to the tyranny of the Global becomes the other side of debt. Our lives become arranged to harmonize with the demands of extraterritorial capital flows over which local politics has no jurisdiction and little or no influence. In order to keep global capital happy, budgets have to be balanced, the welfare state dismantled, the safety net removed, and precarity and asymptotic unemployment accepted as a way of life. In this constellation of things, politics becomes the problem instead of the solution, and the status quo the only (peaceful) way ahead. The acceptance of the existing democratic mechanisms as the ultimate frame is preventing a radical (or any other) transformation. Peaceful social life is itself an expression of the (temporary) victory of one class, the ruling one, with the state as an apparatus of class domination. Unable to perform the functions that states generally do, all states eventually become failed states. Compromised democracy and loss of autonomy are the price to pay for excessive government debt. This is a perpetual process whose end is becoming only more elusive with time.
It looks less and less like atonement and more like eternal damnation.

[1] Jean Baudrillard, The Transparency of Evil, Verso 2009
[2] Massimo Amato & Luca Fantacci, Saving the Market from Capitalism, Polity 2014
Copenhagen interpretation

The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg.[1] It is one of the oldest of numerous proposed interpretations of quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught.[2]

There is no definitive historical statement of what the Copenhagen interpretation is. There are some fundamental agreements and disagreements between the views of Bohr and Heisenberg.[3][4] For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed,[5]: 133  while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, relying instead on an "irreversible" or effectively irreversible process, which could take place within the quantum system.[6]

Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties which cannot all be observed or measured simultaneously.[7] Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness.[8]: 85–90  Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the apparent subjectivity of requiring an observer, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices.

Starting in 1900, investigations into atomic and subatomic phenomena forced a revision to the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening period, now known as the time of the "old quantum theory", physicists worked with approximations and heuristic corrections to classical physics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects.
From 1922 through 1925, this method of heuristic corrections encountered increasing difficulties; for example, the Bohr–Sommerfeld model could not be extended from hydrogen to the next simplest case, the helium atom.[9] The transition from the old quantum theory to full-fledged quantum physics began in 1925, when Werner Heisenberg presented a treatment of electron behavior based on discussing only "observable" quantities, meaning to Heisenberg the frequencies of light that atoms absorbed and emitted.[10] Max Born then realized that in Heisenberg's theory, the classical variables of position and momentum would instead be represented by matrices, mathematical objects that can be multiplied together like numbers with the crucial difference that the order of multiplication matters. Erwin Schrödinger presented an equation that treated the electron as a wave, and Born discovered that the way to successfully interpret the wave function that appeared in the Schrödinger equation was as a tool for calculating probabilities.[11]

Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors.[note 1] The ideas grouped together as the Copenhagen interpretation suggest a way to think about how the mathematics of quantum theory relates to physical reality.

Origin and use of the term

[Figure: The Niels Bohr Institute in Copenhagen]

The term refers to the city of Copenhagen in Denmark, and was apparently coined during the 1950s.[12] Earlier, during the mid-1920s, Heisenberg had been an assistant to Bohr at his institute in Copenhagen, where they helped originate quantum mechanical theory.[13][14] At the 1927 Solvay Conference, a dual talk by Max Born and Heisenberg declared "we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification."[15][16]

In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930.[17] In the book's preface, Heisenberg wrote:

On the whole, the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s.[18] However, no such text exists, and the writings of Bohr and Heisenberg contradict each other on several important issues.[4] It appears that the particular term, with its more definite sense, was coined by Heisenberg around 1955,[12] while criticizing alternative "interpretations" (e.g., David Bohm's[19]) that had been developed.[20][21] Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', which Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy.[22] Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, which he considered to be "nonsense".[23] In a 1960 review of Heisenberg's book, Bohr's close collaborator Léon Rosenfeld called the term an "ambiguous expression" and suggested it be discarded.[24] However, this did not come to pass, and the term entered widespread use.[12][21]

There is no uniquely definitive statement of the Copenhagen interpretation.[4][25][26][27] The term encompasses the views developed by a number of scientists and philosophers during the second quarter of the 20th century.[28] This lack of a single, authoritative source that establishes the Copenhagen interpretation is one difficulty with discussing it; another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times.[9] Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics,[29] and Bohr distanced himself from what he considered Heisenberg's more subjective interpretation.[3] Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".[6][30][31][32]

Different commentators and researchers have associated various ideas with the term.[16] Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors.[note 2] N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.[34][35] Mermin described the Copenhagen interpretation as coming in different "versions", "varieties", or "flavors".[36]

Some basic principles generally accepted as part of the interpretation include the following:[3]

1. Quantum mechanics is intrinsically indeterministic.
2. The correspondence principle: in the appropriate limit, quantum theory comes to resemble classical physics and reproduces the classical predictions.
3. The Born rule: the wave function of a system yields probabilities for the outcomes of measurements upon that system.
4. Complementarity: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement.
Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but considering multiple such mutually exclusive experiments is necessary to characterize a system.

Hans Primas and Roland Omnès give a more detailed breakdown that, in addition to the above, includes the following:[8]: 85 

1. Quantum physics applies to individual objects. The probabilities computed by the Born rule do not require an ensemble or collection of "identically prepared" systems to understand.
2. The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg.[note 3]
3. Per the above point, the device used to observe a system must be described in classical language, while the system under observation is treated in quantum terms. This is a particularly subtle issue for which Bohr and Heisenberg came to differing conclusions. According to Heisenberg, the boundary between classical and quantum can be shifted in either direction at the observer's discretion. That is, the observer has the freedom to move what would become known as the "Heisenberg cut" without changing any physically meaningful predictions.[8]: 86  On the other hand, Bohr argued that both systems are quantum in principle, and that the object-instrument distinction (the "cut") is dictated by the experimental arrangement. For Bohr, the "cut" was not a change in the dynamical laws that govern the systems in question, but a change in the language applied to them.[4][39]
4. During an observation, the system must interact with a laboratory device. When that device makes a measurement, the wave function of the system collapses, irreversibly reducing to an eigenstate of the observable that is registered. The result of this process is a tangible record of the event, made by a potentiality becoming an actuality.[note 4]
5. Statements about measurements that are not actually made do not have meaning. For example, there is no meaning to the statement that a photon traversed the upper path of a Mach–Zehnder interferometer unless the interferometer were actually built in such a way that the path taken by the photon is detected and registered.[8]: 88 
6. Wave functions are objective, in that they do not depend upon personal opinions of individual physicists or other such arbitrary influences.[8]: 509–512 

Another issue of importance where Bohr and Heisenberg disagreed is wave–particle duality. Bohr maintained that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, whereas Heisenberg held that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.[40][41]

Nature of the wave function

A wave function is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior.
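The two rules alluded to in the last paragraph can be written out compactly. The following is only a sketch in conventional textbook notation (the Hamiltonian \( \hat{H} \) and the eigenstates \( |a_i\rangle \) of the measured observable are standard symbols, not notation introduced in the text above):

\[ i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle \qquad \text{(deterministic evolution between measurements)} \]

\[ P(a_i) = \left|\langle a_i \,|\, \psi \rangle\right|^2 \qquad \text{(Born rule: probability of registering the outcome } a_i\text{)} \]

For a normalized single-particle wave function \( \psi(x) \), the second rule reads \( P(x) = |\psi(x)|^2 \) for the probability density of finding the particle at \( x \). On a Copenhagen-type reading, these probabilities exhaust the predictive content of the state.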
Generally, Copenhagen-type interpretations deny that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such,[42][43] or anything more than a theoretical concept.

Probabilities via the Born rule

The Born rule is essential to the Copenhagen interpretation.[44] Formulated by Max Born in 1926, it gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle's wave function at that point.[note 5]

A common perception of "the" Copenhagen interpretation is that an important part of it is the "collapse" of the wave function.[3] In the act of measurement, it is postulated, the wave function of a system can change suddenly and discontinuously. Prior to a measurement, a wave function involves the various probabilities for the different potential outcomes of that measurement. But when the apparatus registers one of those outcomes, no traces of the others linger. Heisenberg spoke of the wave function as representing available knowledge of a system, and did not use the term "collapse", but instead termed it "reduction" of the wave function to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus.[49] According to Howard and Faye, the writings of Bohr do not mention wave function collapse.[12][3]

Because they assert that the existence of an observed value depends upon the intercession of the observer, Copenhagen-type interpretations are sometimes called "subjective". This term is rejected by many Copenhagenists, because the process of observation is mechanical and does not depend on the individuality of the observer.[50] Wolfgang Pauli, for example, insisted that measurement results could be obtained and recorded by "objective registering apparatus".[5]: 117–123  As Heisenberg wrote, "Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature."[50]

In the 1970s and 1980s, the theory of decoherence helped to explain the appearance of quasi-classical realities emerging from quantum theory,[51] but was insufficient to provide a technical explanation for the apparent wave function collapse.[52]

Completion by hidden variables?

In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view that physics should look for 'really existing objects', making itself an ontic theory.[53] The metaphysical question is sometimes asked: could quantum mechanics be extended by adding so-called 'hidden variables' to the mathematical formalism, to convert it from an epistemic to an ontic theory? The Copenhagen interpretation answers this with a strong 'No'.[54] It is sometimes alleged, for example by J. S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes".
By contrast, Max Jammer writes "Einstein never proposed a hidden variable theory."[55] Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.[56][57]

Acceptance among physicists

During the 1930s and 1940s, views about quantum mechanics attributed to Bohr and emphasizing complementarity became commonplace among physicists. Textbooks of the time generally maintained the principle that the numerical value of a physical quantity is not meaningful or does not exist until it is measured.[58]: 248  Prominent physicists associated with Copenhagen-type interpretations have included Lev Landau,[58][59] Wolfgang Pauli,[59] Rudolf Peierls,[60] Asher Peres,[61] Léon Rosenfeld,[4] and Ray Streater.[62]

Throughout much of the 20th century, the Copenhagen tradition had overwhelming acceptance among physicists.[58][63] According to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997,[64] the Copenhagen interpretation remained the most widely accepted label that physicists applied to their own views. A similar result was found in a poll conducted in 2011.[65]

The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes.

Schrödinger's cat

This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle.[8]: 91  Thus a description of the cat during the course of the experiment, having been entangled with the state of a subatomic particle, becomes a "blur" of "living and dead cat." But this can't be accurate, because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality."[66] How can the cat be both alive and dead?

In Copenhagen-type views, the wave function reflects our knowledge of the system. The wave function

\[ |\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle + |\text{dead}\rangle\right) \]

means that, once the cat is observed, there is a 50% chance it will be dead, and a 50% chance it will be alive.[61] (Some versions of the Copenhagen interpretation reject the idea that a wave function can be assigned to a physical system that meets the everyday definition of "cat"; in this view, the correct quantum-mechanical description of the cat-and-particle system must include a superselection rule.[62]: 51 )

Wigner's friend

"Wigner's friend" is a thought experiment intended to make that of Schrödinger's cat more striking by involving two conscious beings, traditionally known as Wigner and his friend.[8]: 91–92  (In more recent literature, they may also be known as Alice and Bob, per the convention of describing protocols in information theory.[67]) Wigner puts his friend in with the cat. The external observer believes the system is in the state \( \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle + |\text{dead}\rangle\right) \). However, his friend is convinced that the cat is alive, i.e. for him, the cat is in the state \( |\text{alive}\rangle \). How can Wigner and his friend see different wave functions? In a Heisenbergian view, the answer depends on the positioning of the Heisenberg cut, which can be placed arbitrarily (at least according to Heisenberg, though not according to Bohr[4]).
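The dependence on where the cut is placed can be made explicit by writing the two state assignments side by side. The following is a minimal sketch in standard Dirac notation; the friend's memory states \( |F_{\text{alive}}\rangle \) and \( |F_{\text{dead}}\rangle \) are illustrative labels introduced here, not notation from the original texts:

\[ \text{Friend (inside the cut):} \qquad |\psi\rangle_{\text{cat}} = |\text{alive}\rangle \]

\[ \text{Wigner (outside the cut):} \qquad |\Psi\rangle_{\text{cat+friend}} = \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle\,|F_{\text{alive}}\rangle + |\text{dead}\rangle\,|F_{\text{dead}}\rangle\right) \]

On Heisenberg's view either assignment is admissible, since shifting the cut is not supposed to change any physically meaningful prediction; the next paragraph spells out the two placements.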
If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement.[68] Different Copenhagen-type interpretations take different positions as to whether observers can be placed on the quantum side of the cut.[68]

Double-slit experiment

In the basic version of this experiment, a light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through.[69]: 73–76 

According to Bohr's complementarity principle, light is neither a wave nor a stream of particles. A particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time.[70]

The same experiment can in theory be performed with any physical system: electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants, planets, etc. In practice it has been performed for light, electrons, buckminsterfullerene,[71][72] and some atoms. Due to the smallness of Planck's constant it is practically impossible to realize experiments that directly reveal the wave nature of any system bigger than a few atoms; but in general quantum mechanics considers all matter as possessing both particle and wave behaviors. Larger systems (like viruses, bacteria, cats, etc.) are considered as "classical" ones, but only as an approximation, not exactly.[note 6]

Einstein–Podolsky–Rosen paradox

This thought experiment involves a pair of particles prepared in what later authors would refer to as an entangled state. In a 1935 paper, Einstein, Boris Podolsky, and Nathan Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity".
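EPR phrased their argument in terms of continuous position and momentum variables, and their original state is not reproduced in this section. As a worked illustration, here is the simpler two-spin version later popularized by David Bohm (an added example, not the 1935 original), using the singlet state:

\[ |\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(\,|{\uparrow}\rangle_A\,|{\downarrow}\rangle_B - |{\downarrow}\rangle_A\,|{\uparrow}\rangle_B\,\right) \]

Measuring the spin of particle A along any chosen axis lets one predict with certainty the outcome of the same measurement on the distant particle B (it is always opposite), without disturbing B. This is exactly the kind of situation to which the criterion of reality is meant to apply.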
From this criterion, they inferred that the second particle must have a definite value of position and of momentum prior to either being measured.[73] Bohr's response to the EPR paper was published in the Physical Review later that same year.[74] He argued that EPR had reasoned fallaciously. Because measurements of position and of momentum are complementary, making the choice to measure one excludes the possibility of measuring the other. Consequently, a fact deduced regarding one arrangement of laboratory apparatus could not be combined with a fact deduced by means of the other, and so the inference of predetermined position and momentum values for the second particle was not valid. Bohr concluded that EPR's "arguments do not justify their conclusion that the quantum description turns out to be essentially incomplete."[74]

Incompleteness and indeterminism

[Figure: Niels Bohr and Albert Einstein, pictured at Paul Ehrenfest's home in Leiden (December 1925). The two had a long-running collegial dispute about what quantum mechanics implied for the nature of reality.]

Einstein was an early and persistent critic of the Copenhagen school. Bohr and Heisenberg advanced the position that no physical property could be understood without an act of measurement, while Einstein refused to accept this. Abraham Pais recalled a walk with Einstein when the two discussed quantum mechanics: "Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it."[75] While Einstein did not doubt that quantum mechanics was a correct physical theory in that it gave correct predictions, he maintained that it could not be a complete theory. The most famous product of his efforts to argue the incompleteness of quantum theory is the Einstein–Podolsky–Rosen thought experiment, which was intended to show that physical properties like position and momentum have values even if not measured.[note 7] The argument of EPR was not generally persuasive to other physicists.[58]: 189–251 

Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". Instead, he suggested that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."[25]

Einstein was likewise dissatisfied with the indeterminism of quantum theory. Regarding the possibility of randomness in nature, Einstein said that he was "convinced that He [God] does not throw dice."[80] Bohr, in response, reputedly said that "it cannot be for us to tell God, how he is to run the world".[note 8]

The "shifty split"

Much criticism of Copenhagen-type interpretations has focused on the need for a classical domain where observers or measuring devices can reside, and on the imprecision of how the boundary between quantum and classical might be defined. John Bell called this the "shifty split".[6] As typically portrayed, Copenhagen-type interpretations involve two different kinds of time evolution for wave functions: the deterministic flow according to the Schrödinger equation, and the probabilistic jump during measurement, without a clear criterion for when each kind applies. Why should these two different processes exist, when physicists and laboratory equipment are made of the same matter as the rest of the universe?[81] And if there is somehow a split, where should it be placed?
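The worry can be made concrete with a textbook-style sketch of an ideal (von Neumann) measurement; the apparatus "pointer" states \( |A_0\rangle \), \( |A_\pm\rangle \) are schematic labels added here for illustration. If the measurement interaction is itself treated with the Schrödinger equation alone, it produces entanglement rather than a single outcome:

\[ \left(\alpha\,|{+}\rangle + \beta\,|{-}\rangle\right)|A_0\rangle \;\longrightarrow\; \alpha\,|{+}\rangle|A_{+}\rangle + \beta\,|{-}\rangle|A_{-}\rangle \]

whereas the collapse rule instead selects a single branch, e.g. \( |{+}\rangle|A_{+}\rangle \) with probability \( |\alpha|^2 \). The "shifty split" question is at which link of the chain (system, apparatus, the experimenter's eye, ...) the first rule is supposed to give way to the second.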
Steven Weinberg writes that the traditional presentation gives "no way to locate the boundary between the realms in which [...] quantum mechanics does or does not apply."[82]

The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe.[83][84] How does an observer stand outside the universe in order to measure it, and who was there to observe the universe in its earliest stages? Advocates of Copenhagen-type interpretations have disputed the seriousness of these objections. Rudolf Peierls noted that "the observer does not have to be contemporaneous with the event"; for example, we study the early universe through the cosmic microwave background, and we can apply quantum mechanics to that just as well as to any electromagnetic field.[60] Likewise, Asher Peres argued that physicists are, conceptually, outside those degrees of freedom that cosmology studies, and that applying quantum mechanics to the radius of the universe while neglecting the physicists in it is no different from quantizing the electric current in a superconductor while neglecting the atomic-level details: "You may object that there is only one universe, but likewise there is only one SQUID in my laboratory."[39]

E. T. Jaynes,[85] an advocate of Bayesian probability, argued that probability is a measure of a state of information about the physical world, and so regarding it as a physical phenomenon would be an example of a mind projection fallacy. Jaynes described the mathematical formalism of quantum physics as "a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up together by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".[86]

The ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right".[87] More recently, interpretations inspired by quantum information theory like QBism[88] and relational quantum mechanics[89] have attracted support.[65][90] Under realism and determinism, if the wave function is regarded as ontologically real and collapse is entirely rejected, a many-worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function.[91] The transactional interpretation is also explicitly nonlocal.[92]

Some physicists espoused views in the "Copenhagen spirit" and then went on to advocate other interpretations. For example, David Bohm and Alfred Landé both wrote textbooks that put forth ideas in the Bohr–Heisenberg tradition, and later promoted nonlocal hidden variables and an ensemble interpretation respectively.[58]: 453  John Archibald Wheeler began his career as an "apostle of Niels Bohr";[93] he then supervised the PhD thesis of Hugh Everett that proposed the many-worlds interpretation.
After supporting Everett's work for several years, he began to distance himself from the many-worlds interpretation in the 1970s.[94][95] Late in life, he wrote that while the Copenhagen interpretation might fairly be called "the fog from the north", it "remains the best interpretation of the quantum that we have".[96]

Other physicists, while influenced by the Copenhagen tradition, have expressed frustration at how it took the mathematical formalism of quantum theory as given, rather than trying to understand how it might arise from something more fundamental. This dissatisfaction has motivated new interpretative variants as well as technical work in quantum foundations.[63][97] Physicists who have suggested that the Copenhagen tradition needs to be built upon or extended include Rudolf Haag and Anton Zeilinger.[84][98]

Notes

1. As Heisenberg wrote in Physics and Philosophy (1958): "I remember discussions with Bohr which went through many hours till very late at night and ended almost in despair; and when at the end of the discussion I went alone for a walk in the neighbouring park I repeated to myself again and again the question: Can nature possibly be so absurd as it seemed to us in these atomic experiments?"
3. Bohr declared, "In the first place, we must recognize that a measurement can mean nothing else than the unambiguous comparison of some property of the object under investigation with a corresponding property of another system, serving as a measuring instrument, and for which this property is directly determinable according to its definition in everyday language or in the terminology of classical physics."[37] Heisenberg wrote, "Every description of phenomena, of experiments and their results, rests upon language as the only means of communication. The words of this language represent the concepts of ordinary life, which in the scientific language of physics may be refined to the concepts of classical physics. These concepts are the only tools for an unambiguous communication about events, about the setting up of experiments and about their results."[38]: 127 
4. Heisenberg wrote, "It is well known that the 'reduction of the wave packets' always appears in the Copenhagen interpretation when the transition is completed from the possible to the actual. The probability function, which covered a wide range of possibilities, is suddenly reduced to a much narrower range by the fact that the experiment has led to a definite result, that actually a certain event has happened. In the formalism this reduction requires that the so-called interference of probabilities, which is the most characteristic phenomena [sic] of quantum theory, is destroyed by the partly undefinable and irreversible interactions of the system with the measuring apparatus and the rest of the world."[38]: 125  Bohr suggested that "irreversibility" was "characteristic of the very concept of observation", an idea that Weizsäcker would later elaborate upon, trying to formulate a rigorous mathematical notion of irreversibility using thermodynamics, and thus show that irreversibility results in the classical approximation of the world.[4] See also Stenholm.[31]
5. While Born himself described his contribution as the "statistical interpretation" of the wave function,[45][46] the term "statistical interpretation" has also been used as a synonym for the ensemble interpretation.[47][48]
6. The meaning of "larger" is not easy to quantify.
As Omnès writes, "One cannot even expect a sweeping theorem stating once and for all that every macroscopic object obeys classical physics as soon as it is big enough, when, for instance, the number of its atoms is large enough. There are two reasons for this. The first one comes from chaotic systems: it turns out that their classical dynamical evolution ends up showing significant differences at the level of Planck's constant after a finite time. Another even more cogent reason is that one now knows examples of superconducting macroscopic systems behaving in a quantum way under special circumstances ... The theorems predicting classical behavior of a macroscopic quantum system must therefore rely upon specific dynamical conditions, which will have to be made clear, though they hold very frequently."[8]: 202 
7. The published form of the EPR argument was due to Podolsky, and Einstein himself was not satisfied with it. In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory.[76][77][78][79]

References

1. See, for example:
• Przibram, K., ed. (2015) [1967]. Letters on Wave Mechanics: Correspondence with H. A. Lorentz, Max Planck, and Erwin Schrödinger. Translated by Klein, Martin J. Philosophical Library/Open Road. ISBN 9781453204689. "the Copenhagen Interpretation of quantum mechanics, [was] developed principally by Heisenberg and Bohr, and based on Born's statistical interpretation of the wave function."
• Buckley, Paul; Peat, F. David, eds. (1979). "Leon Rosenfeld". A Question of Physics: Conversations in Physics and Biology. University of Toronto Press. pp. 17–33. ISBN 9781442651661. JSTOR 10.3138/j.ctt15jjc3t.5. "The Copenhagen interpretation of quantum theory, ... grew out of discussions between Niels Bohr and Werner Heisenberg..."
• Gbur, Gregory J. (2019). Falling Felines and Fundamental Physics. Yale University Press. pp. 264–290. doi:10.2307/j.ctvqc6g7s.17. S2CID 243353224. "Heisenberg worked under Bohr at an institute in Copenhagen. Together they compiled all existing knowledge of quantum physics into a coherent system that is known today as the Copenhagen interpretation of quantum mechanics."
2. See, for example:
3. Faye, Jan (2019). "Copenhagen Interpretation of Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
4. Camilleri, K.; Schlosshauer, M. (2015). "Niels Bohr as Philosopher of Experiment: Does Decoherence Theory Challenge Bohr's Doctrine of Classical Concepts?". Studies in History and Philosophy of Modern Physics. 49: 73–83. arXiv:1502.06547. Bibcode:2015SHPMP..49...73C. doi:10.1016/j.shpsb.2015.01.005. S2CID 27697360.
5. Pauli, Wolfgang (1994) [1958]. "Albert Einstein and the development of physics". In Enz, C. P.; von Meyenn, K. (eds.). Writings on Physics and Philosophy. Berlin: Springer-Verlag.
6. Bell, John (1990). "Against 'measurement'". Physics World. 3 (8): 33–41. doi:10.1088/2058-7058/3/8/26. ISSN 2058-7058.
7. Omnès, Roland (1999). "The Copenhagen Interpretation". Understanding Quantum Mechanics. Princeton University Press. pp. 41–54. doi:10.2307/j.ctv173f2pm.9. S2CID 203390914. "Bohr, Heisenberg, and Pauli recognized its main difficulties and proposed a first essential answer. They often met in Copenhagen ...
'Copenhagen interpretation' has not always meant the same thing to different authors. I will reserve it for the doctrine held with minor differences by Bohr, Heisenberg, and Pauli."
8. Omnès, R. (1994). The Interpretation of Quantum Mechanics. Princeton University Press. ISBN 978-0-691-03669-4. OCLC 439453957.
9. Chevalley, Catherine (1999). "Why Do We Find Bohr Obscure?". In Greenberger, Daniel; Reiter, Wolfgang L.; Zeilinger, Anton (eds.). Epistemological and Experimental Perspectives on Quantum Physics. Springer Science+Business Media. pp. 59–74. doi:10.1007/978-94-017-1454-9. ISBN 978-9-04815-354-1.
10. van der Waerden, B. L. (1968). "Introduction, Part II". Sources of Quantum Mechanics. Dover. ISBN 0-486-61881-1.
11. Bernstein, Jeremy (2005). "Max Born and the Quantum Theory". American Journal of Physics. 73 (11): 999–1008. Bibcode:2005AmJPh..73..999B. doi:10.1119/1.2060717.
12. Howard, Don (2004). "Who invented the Copenhagen Interpretation? A study in mythology" (PDF). Philosophy of Science. 71 (5): 669–682. doi:10.1086/425941. JSTOR 10.1086/425941. S2CID 9454552.
13. Dolling, Lisa M.; Gianelli, Arthur F.; Statile, Glenn N., eds. (2003). "Introduction". The Tests of Time: Readings in the Development of Physical Theory. Princeton University Press. pp. 359–370. doi:10.2307/j.ctvcm4h07.52. "The generally accepted interpretation of Quantum Theory was formulated by Niels Bohr, Werner Heisenberg, and Wolfgang Pauli during the early part of the twentieth century at Bohr's laboratory in Copenhagen, Denmark. This account, commonly referred to as the "Copenhagen Interpretation"..."
14. Brush, Stephen G. (1980). "The Chimerical Cat: Philosophy of Quantum Mechanics in Historical Perspective". Social Studies of Science. 10 (4): 393–447. doi:10.1177/030631278001000401. JSTOR 284918. S2CID 145727731. "On the other side, Niels Bohr was the leading spokesman for the new movement in physics, and thus it acquired the name 'Copenhagen Interpretation.'"
15. Bacciagaluppi, Guido; Valentini, Antony (2009). Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference. Cambridge University Press. p. 408. ISBN 978-0-521-81421-8. (This book contains a translation of the entire authorized proceedings of the 1927 Solvay conference from the original transcripts.)
16. Bokulich, Alisa (2006). "Heisenberg Meets Kuhn: Closed Theories and Paradigms". Philosophy of Science. 73 (1): 90–107. doi:10.1086/510176. ISSN 0031-8248. JSTOR 10.1086/510176. S2CID 170902096.
17. Mehra, J.; Rechenberg, H. (2001). The Historical Development of Quantum Theory: Volume 4. Springer-Verlag. p. 266. ISBN 9780387906423. OCLC 928788723.
18. See, for example:
• Smith, Quentin (1997). "The Ontological Interpretation of the Wave Function of the Universe". The Monist. 80 (1): 160–185. doi:10.5840/monist19978015. JSTOR 27903516. "Since the late 1920s, the orthodox interpretation was taken to be the Copenhagen Interpretation"
• Weinberg, Steven (2018). "The Trouble with Quantum Mechanics". Third Thoughts. Harvard University Press. pp. 124–142. ISBN 9780674975323. JSTOR j.ctvckq5b7.17. "One response to this puzzle was given in the 1920s by Niels Bohr, in what came to be called the Copenhagen interpretation of quantum mechanics."
• Hanson, Norwood Russell (1959). "Five Cautions for the Copenhagen Interpretation's Critics". Philosophy of Science.
26 (4): 325–337. doi:10.1086/287687. JSTOR 185366. S2CID 170786589. "Feyerabend and Bohm are almost exclusively concerned with the inadequacies of the Bohr-Interpretation (which originates in Copenhagen). Both understress a much less incautious view, which I shall call 'the Copenhagen Interpretation' (which originates in Leipzig and presides at Göttingen, Munich, Cambridge, Princeton, and almost everywhere else too)."
19. Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden' Variables. I & II". Physical Review. 85 (2): 166–193. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166.
20. Kragh, H. (1999). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 210. ISBN 978-0-691-01206-3. OCLC 450598985. "In fact, the term 'Copenhagen interpretation' was not used in the 1930s but first entered the physicists' vocabulary in 1955 when Heisenberg used it in criticizing certain unorthodox interpretations of quantum mechanics."
21. Camilleri, Kristian (2009). "Constructing the Myth of the Copenhagen Interpretation". Perspectives on Science. 17 (1): 26–57. doi:10.1162/posc.2009.17.1.26. ISSN 1063-6145. S2CID 57559199.
22. Heisenberg, Werner (1958). Physics and Philosophy. Harper.
23. "I avow that the term 'Copenhagen interpretation' is not happy since it could suggest that there are other interpretations, like Bohm assumes. We agree, of course, that the other interpretations are nonsense, and I believe that this is clear in my book, and in previous papers. Anyway, I cannot now, unfortunately, change the book since the printing began enough time ago." Quoted in Freire Jr., Olival (2005). "Science and exile: David Bohm, the hot times of the Cold War, and his struggle for a new interpretation of quantum mechanics". Historical Studies in the Physical and Biological Sciences. 36 (1): 31–35.
24. Rosenfeld, Léon (1960). "Heisenberg, Physics and Philosophy". Nature. 186 (4728): 830–831. Bibcode:1960Natur.186..830R. doi:10.1038/186830a0. S2CID 12979706.
25. Cramer, John G. (1986). "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics. 58 (3): 649. Bibcode:1986RvMP...58..647C. doi:10.1103/revmodphys.58.647. Archived from the original on 2012-11-08.
26. Maleeh, Reza; Amani, Parisa (2013). "Pragmatism, Bohr, and the Copenhagen Interpretation of Quantum Mechanics". International Studies in the Philosophy of Science. 27 (4): 353–367. doi:10.1080/02698595.2013.868182. ISSN 0269-8595. S2CID 170415674.
27. Boge, Florian J. (2018). Quantum Mechanics Between Ontology and Epistemology. Cham: Springer. p. 2. ISBN 978-3-319-95765-4. OCLC 1086564338.
28. Scheibe, Erhard (1973). The Logical Analysis of Quantum Mechanics. Pergamon Press. ISBN 9780080171586. OCLC 799397091. "[T]here is no point in looking for the Copenhagen interpretation as a unified and consistent logical structure. Terms such as "Copenhagen interpretation" or "Copenhagen school" are based on the history of the development of quantum mechanics; they form a simplified and often convenient way of referring to the ideas of a number of physicists who played an important role in the establishment of quantum mechanics, and who were collaborators of Bohr's at his Institute or took part in the discussions during the crucial years.
On closer inspection, one sees quite easily that these ideas are divergent in detail and that in particular the views of Bohr, the spiritual leader of the school, form a separate entity which can now be understood only by a thorough study of as many as possible of the relevant publications by Bohr himself."
29. Camilleri, Kristian (2007). "Bohr, Heisenberg and the divergent views of complementarity". Studies in History and Philosophy of Modern Physics. 38 (3): 514–528. Bibcode:2007SHPMP..38..514C. doi:10.1016/j.shpsb.2006.10.002.
30. Bohr, Niels (1985) [May 16, 1947]. Kalckar, Jørgen (ed.). Niels Bohr: Collected Works. Vol. 6: Foundations of Quantum Physics I (1926–1932). pp. 451–454.
31. Stenholm, Stig (1983). "To fathom space and time". In Meystre, Pierre (ed.). Quantum Optics, Experimental Gravitation, and Measurement Theory. Plenum Press. p. 121. "The role of irreversibility in the theory of measurement has been emphasized by many. Only this way can a permanent record be obtained. The fact that separate pointer positions must be of the asymptotic nature usually associated with irreversibility has been utilized in the measurement theory of Daneri, Loinger and Prosperi (1962). It has been accepted as a formal representation of Bohr's ideas by Rosenfeld (1966)."
32. Haake, Fritz (1993). "Classical motion of meter variables in the quantum theory of measurement". Physical Review A. 47 (4): 2506–2517. Bibcode:1993PhRvA..47.2506H. doi:10.1103/PhysRevA.47.2506. PMID 9909217.
33. Peres, Asher (2002). "Popper's experiment and the Copenhagen interpretation". Studies in History and Philosophy of Modern Physics. 33: 23. arXiv:quant-ph/9910078. doi:10.1016/S1355-2198(01)00034-X.
34. Mermin, N. David (1989). "What's Wrong with this Pillow?". Physics Today. 42 (4): 9. Bibcode:1989PhT....42d...9D. doi:10.1063/1.2810963.
35. Mermin, N. David (2004). "Could Feynman have said this?". Physics Today. 57 (5): 10–11. Bibcode:2004PhT....57e..10M. doi:10.1063/1.1768652.
36. Mermin, N. David (2017). "Why QBism Is Not the Copenhagen Interpretation and What John Bell Might Have Thought of It". In Bertlmann, Reinhold; Zeilinger, Anton (eds.). Quantum [Un]Speakables II. The Frontiers Collection. Springer International Publishing. pp. 83–93. arXiv:1409.2454. doi:10.1007/978-3-319-38987-5_4. ISBN 9783319389851. S2CID 118458259.
37. Bohr, N. (1939). "The Causality Problem in Atomic Physics". New Theories in Physics. Paris: International Institute of Intellectual Co-operation. pp. 11–30. OCLC 923465888.
38. Heisenberg, Werner (1971) [1959]. "Criticism and counterproposals to the Copenhagen interpretation of quantum theory". Physics and Philosophy: The Revolution in Modern Science. London: George Allen & Unwin. pp. 114–128.
39. Peres, Asher (1998). "Interpreting the Quantum World". Studies in History and Philosophy of Modern Physics. 29 (4): 611–620. arXiv:quant-ph/9711003. doi:10.1016/S1355-2198(98)00017-3. ISSN 1355-2198.
40. Camilleri, K. (2006). "Heisenberg and the wave–particle duality". Studies in History and Philosophy of Modern Physics. 37 (2): 298–315. Bibcode:2006SHPMP..37..298C. doi:10.1016/j.shpsb.2005.08.002.
41. Camilleri, K. (2009). Heisenberg and the Interpretation of Quantum Mechanics: The Physicist as Philosopher. Cambridge: Cambridge University Press. ISBN 978-0-521-88484-6. OCLC 638813030.
42. Bohr, N. (1928). "The Quantum Postulate and the Recent Development of Atomic Theory".
Nature. 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0., p. 586: "there can be no question of an immediate connexion with our ordinary conceptions". 43. ^ Heisenberg, W. (1959/1971). 'Language and reality in modern physics', Chapter 10, pp. 145–160, in Physics and Philosophy: the Revolution in Modern Science, George Allen & Unwin, London, ISBN 0-04-530016 X, p. 153: "our common concepts cannot be applied to the structure of the atoms." 44. ^ Bohr, N. (1928). "The Quantum Postulate and the Recent Development of Atomic Theory". Nature. 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0., p. 586: "In this connexion [Born] succeeded in obtaining a statistical interpretation of the wave functions, allowing a calculation of the probability of the individual transition processes required by the quantum postulate." 45. ^ Born, M. (1955). "Statistical interpretation of quantum mechanics". Science. 122 (3172): 675–679. Bibcode:1955Sci...122..675B. doi:10.1126/science.122.3172.675. PMID 17798674. 46. ^ "... the statistical interpretation, which I have first suggested and which has been formulated in the most general way by von Neumann, ..." Born, M. (1953). The interpretation of quantum mechanics, Br. J. Philos. Sci., 4(14): 95–106. 47. ^ Ballentine, L.E. (1970). "The statistical interpretation of quantum mechanics". Rev. Mod. Phys. 42 (4): 358–381. Bibcode:1970RvMP...42..358B. doi:10.1103/revmodphys.42.358. 48. ^ Born, M. (1949). Einstein's statistical theories, in Albert Einstein: Philosopher Scientist, ed. P.A. Schilpp, Open Court, La Salle IL, volume 1, pp. 161–177. 50. ^ "Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature." Heisenberg, W. (1959/1971). Criticism and counterproposals to the Copenhagen interpretation of quantum theory, Chapter 8, pp. 114–128, in Physics and Philosophy: the Revolution in Modern Science, third impression 1971, George Allen & Unwin, London, at p. 121. 51. ^ See, for example: 52. ^ Schlosshauer, M. (2019). "Quantum Decoherence". Physics Reports. 831: 1–57. arXiv:1911.06282. Bibcode:2019PhR...831....1S. doi:10.1016/j.physrep.2019.10.001. S2CID 208006050. 53. ^ Jammer, M. (1982). 'Einstein and quantum physics', pp. 59–76 in Albert Einstein: Historical and Cultural Perspectives; the Centennial Symposium in Jerusalem, edited by G. Holton, Y. Elkana, Princeton University Press, Princeton NJ, ISBN 0-691-08299-5. On pp. 73–74, Jammer quotes a 1952 letter from Einstein to Besso: "The present quantum theory is unable to provide the description of a real state of physical facts, but only of an (incomplete) knowledge of such. Moreover, the very concept of a real factual state is debarred by the orthodox theoreticians. The situation arrived at corresponds almost exactly to that of the good old Bishop Berkeley." 54. ^ Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics' here: "Since the statistical nature of quantum theory is so closely [linked] to the uncertainty in all observations or perceptions, one could be tempted to conclude that behind the observed, statistical world a "real" world is hidden, in which the law of causality is applicable. We want to state explicitly that we believe such speculations to be both fruitless and pointless. 
The only task of physics is to describe the relation between observations." 56. ^ Belousek, D.W. (1996). "Einstein's 1927 unpublished hidden-variable theory: its background, context and significance". Stud. Hist. Phil. Mod. Phys. 21 (4): 431–461. Bibcode:1996SHPMP..27..437B. doi:10.1016/S1355-2198(96)00015-9. 57. ^ Holland, P (2005). "What's wrong with Einstein's 1927 hidden-variable interpretation of quantum mechanics?". Foundations of Physics. 35 (2): 177–196. arXiv:quant-ph/0401017. Bibcode:2005FoPh...35..177H. doi:10.1007/s10701-004-1940-7. S2CID 119426936. 58. ^ a b c d e Jammer, Max (1974). The Philosophy of Quantum Mechanics. John Wiley and Sons. ISBN 0-471-43958-4. 59. ^ a b Mermin, N. David (2019-01-01). "Making better sense of quantum mechanics". Reports on Progress in Physics. 82 (1): 012002. arXiv:1809.01639. Bibcode:2019RPPh...82a2002M. doi:10.1088/1361-6633/aae2c6. ISSN 0034-4885. PMID 30232960. S2CID 52299438. 60. ^ a b Peierls, Rudolf (1991). "In defence of "measurement"". Physics World. 4 (1): 19–21. doi:10.1088/2058-7058/4/1/19. ISSN 2058-7058. 61. ^ a b Peres, Asher (1993). Quantum Theory: Concepts and Methods. Kluwer. pp. 373–374. ISBN 0-7923-2549-4. OCLC 28854083. 62. ^ a b Streater, R. F. (2007). Lost causes in and beyond physics. Berlin: Springer. ISBN 978-3-540-36582-2. OCLC 185022108. 63. ^ a b Appleby, D. M. (2005). "Facts, Values and Quanta". Foundations of Physics. 35 (4): 637. arXiv:quant-ph/0402015. Bibcode:2005FoPh...35..627A. doi:10.1007/s10701-004-2014-6. S2CID 16072294. 64. ^ Max Tegmark (1998). "The Interpretation of Quantum Mechanics: Many Worlds or Many Words?". Fortschr. Phys. 46 (6–8): 855–862. arXiv:quant-ph/9709032. Bibcode:1998ForPh..46..855T. doi:10.1002/(SICI)1521-3978(199811)46:6/8<855::AID-PROP855>3.0.CO;2-Q. 65. ^ a b M. Schlosshauer; J. Kofler; A. Zeilinger (2013). "A Snapshot of Foundational Attitudes Toward Quantum Mechanics". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 44 (3): 222–230. arXiv:1301.1069. Bibcode:2013SHPMP..44..222S. doi:10.1016/j.shpsb.2013.04.004. S2CID 55537196. 67. ^ Fuchs, Christopher A.; Mermin, N. David; Schack, Rüdiger (August 2014). "An introduction to QBism with an application to the locality of quantum mechanics". American Journal of Physics. 82 (8): 749–754. arXiv:1311.5253. Bibcode:2014AmJPh..82..749F. doi:10.1119/1.4874855. ISSN 0002-9505. 68. ^ a b Nurgalieva, Nuriya; Renner, Renato (2020-07-02). "Testing quantum theory with thought experiments". Contemporary Physics. 61 (3): 193–216. arXiv:2106.05314. Bibcode:2020ConPh..61..193N. doi:10.1080/00107514.2021.1880075. ISSN 0010-7514. 69. ^ Plotnitsky, Arkady (2012). Niels Bohr and Complementarity: An Introduction. US: Springer. pp. 75–76. ISBN 978-1461445173. 70. ^ Rosenfeld, L. (1953). "Strife about Complementarity". Science Progress (1933- ). 41 (163): 393–410. ISSN 0036-8504. 71. ^ Nairz, Olaf; Brezger, Björn; Arndt, Markus; Zeilinger, Anton (2001). "Diffraction of Complex Molecules by Structures Made of Light". Physical Review Letters. 87 (16): 160401. arXiv:quant-ph/0110012. Bibcode:2001PhRvL..87p0401N. doi:10.1103/PhysRevLett.87.160401. PMID 11690188. S2CID 21547361. 72. ^ Brezger, Björn; Hackermüller, Lucia; Uttenthaler, Stefan; Petschinka, Julia; Arndt, Markus; Zeilinger, Anton (2002). "Matter-Wave Interferometer for Large Molecules". Physical Review Letters. 88 (10): 100404. arXiv:quant-ph/0202158. Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334. S2CID 19793304. 
73. ^ Einstein, A.; Podolsky, B.; Rosen, N (1935-05-15). "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?" (PDF). Physical Review. 47 (10): 777–780. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777. 74. ^ a b Bohr, N. (1935-10-13). "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?" (PDF). Physical Review. 48 (8): 696–702. Bibcode:1935PhRv...48..696B. doi:10.1103/PhysRev.48.696. 75. ^ Pais, Abraham (1979). "Einstein and the quantum theory". Reviews of Modern Physics. 51 (4): 863–914. Bibcode:1979RvMP...51..863P. doi:10.1103/RevModPhys.51.863. 76. ^ Harrigan, Nicholas; Spekkens, Robert W. (2010). "Einstein, incompleteness, and the epistemic view of quantum states". Foundations of Physics. 40 (2): 125. arXiv:0706.2661. Bibcode:2010FoPh...40..125H. doi:10.1007/s10701-009-9347-0. S2CID 32755624. 77. ^ Howard, D. (1985). "Einstein on locality and separability". Studies in History and Philosophy of Science Part A. 16 (3): 171–201. Bibcode:1985SHPSA..16..171H. doi:10.1016/0039-3681(85)90001-9. 78. ^ Sauer, Tilman (2007-12-01). "An Einstein manuscript on the EPR paradox for spin observables". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 38 (4): 879–887. Bibcode:2007SHPMP..38..879S. CiteSeerX doi:10.1016/j.shpsb.2007.03.002. ISSN 1355-2198. 79. ^ Einstein, Albert (1949). "Autobiographical Notes". In Schilpp, Paul Arthur (ed.). Albert Einstein: Philosopher-Scientist. Open Court Publishing Company. 80. ^ Letter to Max Born (4 December 1926); The Born-Einstein Letters. Translated by Born, Irene. New York: Walker and Company. 1971. ISBN 0-8027-0326-7. OCLC 439521601. 81. ^ Weinberg, Steven (November 2005). "Einstein's Mistakes". Physics Today. 58 (11): 31. Bibcode:2005PhT....58k..31W. doi:10.1063/1.2155755. 82. ^ Weinberg, Steven (19 January 2017). "The Trouble with Quantum Mechanics". New York Review of Books. Retrieved 8 January 2017. 83. ^ 'Since the Universe naturally contains all of its observers, the problem arises to come up with an interpretation of quantum theory that contains no classical realms on the fundamental level.', Claus Kiefer (2002). "On the interpretation of quantum theory – from Copenhagen to the present day". Time. p. 291. arXiv:quant-ph/0210152. Bibcode:2003tqi..conf..291K. 84. ^ a b Haag, Rudolf (2010). "Some people and some problems met in half a century of commitment to mathematical physics". The European Physical Journal H. 35 (3): 263–307. Bibcode:2010EPJH...35..263H. doi:10.1140/epjh/e2010-10032-4. S2CID 59320730. 85. ^ Jaynes, E. T. (1989). "Clearing up Mysteries – The Original Goal" (PDF). Maximum Entropy and Bayesian Methods: 7. 86. ^ Jaynes, E. T. (1990). "Probability in Quantum Theory". In Zurek, W. H. (ed.). Complexity, Entropy, and the Physics of Information. Addison-Wesley. pp. 381–404. ISBN 9780201515060. OCLC 946145335. 87. ^ Hohenberg, P. C. (2010-10-05). "Colloquium : An introduction to consistent quantum theory". Reviews of Modern Physics. 82 (4): 2835–2844. doi:10.1103/RevModPhys.82.2835. ISSN 0034-6861. 88. ^ Healey, Richard (2016). "Quantum-Bayesian and Pragmatist Views of Quantum Theory". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 89. ^ See, for example: 90. ^ Becker, Kate (2013-01-25). "Quantum physics has been rankling scientists for decades". Boulder Daily Camera. Retrieved 2013-01-25. 91. ^ Goldstein, Sheldon (2017). "Bohmian Mechanics". 
Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 92. ^ Kastner, R. E. (May 2010). "The Quantum Liar Experiment in Cramer's transactional interpretation". Studies in History and Philosophy of Modern Physics. 41 (2). doi:10.1016/j.shpsb.2010.01.001. 93. ^ Gleick, James (1992). Genius: The Life and Science of Richard Feynman. Vintage Books. ISBN 978-0-679-74704-8. OCLC 223830601. 94. ^ Wheeler, John Archibald (1977). "Include the observer in the wave function?". In Lopes, J. Leite; Paty, M. (eds.). Quantum Mechanics: A Half Century Later. D. Reidel Publishing. 95. ^ Byrne, Peter (2012). The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family. Oxford University Press. ISBN 978-0-199-55227-6. OCLC 809554486. 96. ^ Wheeler, John Archibald (2000-12-12). "'A Practical Tool,' But Puzzling, Too". New York Times. Retrieved 2020-12-25. 97. ^ Fuchs, Christopher A. (2018). "Copenhagen Interpretation Delenda Est?". American Journal of Physics. 87 (4): 317–318. arXiv:1809.05147. Bibcode:2018arXiv180905147F. doi:10.1119/1.5089208. S2CID 224755562. 98. ^ Zeilinger, Anton (1999). "A foundational principle for quantum mechanics". Foundations of Physics. 29 (4): 631–643. doi:10.1023/A:1018820410908. Suffice it to say here that, in my view, the principle naturally supports and extends the Copenhagen interpretation of quantum mechanics. It is evident that one of the immediate consequences is that in physics we cannot talk about reality independent of what can be said about reality. Likewise it does not make sense to reduce the task of physics to just making subjective statements, because any statements about the physical world must ultimately be subject to experiment. Therefore, while in a classical worldview, reality is a primary concept prior to and independent of observation with all its properties, in the emerging view of quantum mechanics the notions of reality and of information are on an equal footing. One implies the other and neither one is sufficient to obtain a complete understanding of the world. Further readingEdit
e2fea01bd813a360
De Broglie's matter-wave: concept and issues

Jean Louis Van Belle, Drs, MAEc, BAEc, BPhil, 9 May 2020

Abstract: This paper explores the assumptions underpinning de Broglie's concept of a wavepacket and the various conceptual questions and issues it raises. It also explores how the alternative ring current model of an electron (or of matter-particles in general) relates to Louis de Broglie's λ = h/p relation, and it rephrases the theory in terms of the wavefunction as well as the wave equation(s) for an electron in free space.

De Broglie's wavelength and the Compton radius

De Broglie's ideas on the matter-wave are oft-quoted and are usually expressed in de Broglie's λ = h/p relation. However, there is remarkably little geometric or physical interpretation of it: what is that wavelength, exactly? The relation itself is easy enough to read: λ goes to infinity as p goes to zero. In contrast, for p = mv going to p = mc, this length becomes the Compton wavelength λ = h/p = h/mc. This must mean something, obviously, but what exactly? Mainstream theory does not answer this question because the Heisenberg interpretation of quantum mechanics essentially refuses to look into a geometric or physical interpretation of de Broglie's relation and/or the underlying concept of the matter-wave or wavefunction which, lest we forget, must somehow represent the particle itself.

In contrast, we will request the reader to think of the (elementary) wavefunction as representing a current ring. To be precise, we ask the reader to think of the (elementary) wavefunction r = ψ = a·e^(iθ) as representing the physical position of a pointlike elementary charge (pointlike but not dimensionless) moving at the speed of light around the center of its motion, in a space that is defined by the electron's Compton radius a = ħ/mc. This radius, which effectively doubles up as the amplitude of the wavefunction, can easily be derived from (1) Einstein's mass-energy equivalence relation, (2) the Planck-Einstein relation, and (3) the formula for a tangential velocity, as shown below:

E = m·c² = ħ·ω and c = a·ω ⟹ a = c/ω = c·ħ/(m·c²) = ħ/(m·c)

This easy derivation already gives a more precise explanation of Prof. Dr. Patrick R. LeClair's interpretation of the Compton wavelength as "the scale above which the particle can be localized in a particle-like sense", but we may usefully further elaborate the details by visualizing the model (Figure 1) and exploring how it fits de Broglie's intuitions in regard to the matter-wave, which is what we set out to do in this paper.

[Footnote: Wikipedia offers an overview of the mainstream view(s) in regard to a physical interpretation of the matter-wave and/or the de Broglie wavelength by quoting from the papers by Erwin Schrödinger, Max Born and Werner Heisenberg at the occasion of the 5th Solvay Conference (1927). These views are part of what is rather loosely referred to as the Copenhagen interpretation of quantum mechanics.]

[Footnote: The non-zero dimension of the elementary charge explains the small anomaly in the magnetic moment, which is, therefore, not anomalous at all. For more details, see our paper on the electron model.]

[Footnote: It is a derivation one can also use to derive a theoretical radius for the proton (or for any elementary particle, really). It works perfectly well for the muon, for example. However, for the proton, an additional assumption in regard to the proton's angular momentum and magnetic moment is needed to ensure it fits the experimentally established radius. We shared the derivation with Prof. Dr. Randolf Pohl and the PRad team but we did not receive any substantial comments so far, except for the PRad spokesman (Prof. Dr. Ashot Gasparan) confirming the Standard Model does not have any explanation for the proton radius from first principles and, therefore, encouraging us to continue our theoretical research. In contrast, Prof. Dr. Randolf Pohl suggested the concise calculations come across as numerological only. We hope this paper might help to make him change his mind!]

[Footnote: Prof. Dr. Patrick LeClair, Introduction to Modern Physics, Course Notes (PH253), 3 February 2019, p. 10.]
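As a quick check of the derivation above, here is a minimal numerical sketch (assuming Python with scipy's CODATA constants; the variable names are ours):

```python
# Minimal sketch: the Compton radius from the three relations above.
from scipy.constants import hbar, c, m_e
import math

E = m_e * c**2    # (1) mass-energy equivalence: E = m·c²
omega = E / hbar  # (2) Planck-Einstein relation: ω = E/ħ
a = c / omega     # (3) tangential velocity c = a·ω, solved for a

print(a)                      # ≈ 3.8616e-13 m, i.e. a = ħ/(m·c)
print(omega / (2 * math.pi))  # the associated frequency: ≈ 1.236e20 Hz
```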
Figure 1: The ring current model of an electron

Of course, the reader will, most likely, not be familiar with the ring current model, or with the term Erwin Schrödinger coined for it: the Zitterbewegung model. We should, therefore, probably quote an unlikely authority on it so as to establish some early credentials: "The variables [of Dirac's wave equation] give rise to some rather unexpected phenomena concerning the motion of the electron. [...] It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us." (Paul A.M. Dirac, Nobel Lecture, December 12, 1933)

Indeed, the dual radius of the electron (Thomson versus Compton radius) and the Zitterbewegung model combine to explain the wave-particle duality of the electron and, therefore, diffraction and/or interference as well as Compton scattering itself. We will not dwell on these aspects of the ring current electron model because we have covered them in (too) lengthy papers before. Indeed, we will want to stay focused on the prime objective of this paper, which is a geometric or physical interpretation of the de Broglie wavelength.

Before we proceed, we must note that the momentum of the pointlike charge, which we denote by p in the illustration, must be distinguished from the momentum of the electron as a whole. The momentum of the pointlike charge will always be equal to p = mc. The rest mass of the pointlike charge must, therefore, be zero. However, its velocity gives it an effective mass which one can calculate to be equal to meff = me/2.

[Footnote: We will analyze de Broglie's views based on his paper for the 1927 Solvay Conference: Louis de Broglie, La Nouvelle Dynamique des Quanta (the new quantum dynamics), 5th Solvay Conference, 1927. This paper has the advantage of being concise and complete at the same time. Indeed, its thirty pages were written well after the publication of his thesis on the new mécanique ondulatoire (1924), but the presentation helped him to secure the necessary fame which would then lead to him getting the 1929 Nobel Prize for Physics. For an overview of other eminent views, we refer to our paper on the 1921 and 1927 Solvay Conferences.]

Let us now review de Broglie's youthful intuitions.

De Broglie's dissipating wavepacket

The ring current model of an electron incorporates the wavelike nature of an electron: the frequency of the oscillation is the frequency of the circulatory or oscillatory motion (Zitterbewegung) of the pointlike electric charge. Hence, the intuition of the young Louis de Broglie that an electron must have a frequency was, effectively, a stroke of genius.
However, the magnetic properties of an electron were, by then, not well established, and this may explain why Louis de Broglie was either not aware of them or refused to build on them further. Let us have a closer look at his paper for the 1927 Solvay Conference, titled La Nouvelle Dynamique des Quanta, which we may translate as The New Quantum Dynamics.

The logic is, by now, well known: we think of the particle as a wave packet composed of waves of slightly different frequencies νi. This leads to a necessary distinction between the group and phase velocities of the wave. The group velocity corresponds to the classical velocity v of the particle, which is often expressed as a fraction or relative velocity β = v/c. The assumption is then that we know how the phase frequencies νi are related to the wavelengths λi. This is modeled by a so-called dispersion relation, which is usually written in terms of the angular frequencies ωi = 2π·νi and the wave numbers ki = 2π/λi.

[Footnote: We consciously use a vector notation to draw attention to the rather particular direction of p and c: they must be analyzed as tangential vectors in this model. We may refer to one of our previous papers here (Jean Louis Van Belle, An Explanation of the Electron and Its Wavefunction, 26 March 2020). The calculations involve a relativistically correct analysis of an oscillation in two independent directions: we effectively interpret circular motion as a two-dimensional oscillation. Such an oscillation is, mathematically speaking, self-evident (Euler's function is the geometric sum of a sine and a cosine), but its physical interpretation is, obviously, not self-evident at all!]

[Footnote: We must qualify the remark on youthfulness. Louis de Broglie was, obviously, quite young when developing his key intuitions. However, he traces his own ideas on the matter-wave back to the time of writing of his PhD thesis, which is 1923-1924. Hence, he was 32 years old at the time, not nineteen! The reader will also know that, after WW II, Louis de Broglie would distance himself from modern interpretations of his own theory and from modern quantum physics by developing a realist interpretation of quantum physics himself. This interpretation would culminate in the de Broglie-Bohm theory of the pilot wave. We do not think there is any need for such alternative theories: we should just go back to where de Broglie went wrong and connect the dots.]

[Footnote: The papers and interventions by Ernest Rutherford at the 1921 Conference do, however, highlight the magnetic dipole property of the electron. It should also be noted that Arthur Compton would highlight it in his famous paper on Compton scattering, which he published in 1923, and that he was an active participant in the 1927 Conference itself. Louis de Broglie had extraordinary exposure to all of the new ideas, as his elder brother Maurice, Duc de Broglie, had already engaged him as scientific secretary for the very first Solvay Conference in 1911, when Louis de Broglie was just 19 years old. More historical research may reveal why Louis de Broglie did not connect the dots. As mentioned, he must have been very much aware of the limited but substantial knowledge on the magnetic moment of an electron as highlighted by Ernest Rutherford and others at the occasion of the 1921 Solvay Conference.]

[Footnote: We invite the reader to check our exposé against de Broglie's original 1927 paper in the Solvay Conference proceedings. We will try to stick closely to the symbols that are used in this paper, such as the nu (ν) symbol for the frequencies.]
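The group/phase distinction can be made concrete with a small symbolic sketch (ours; it assumes Python with sympy and uses the textbook non-relativistic dispersion relation ω(k) = ħ·k²/(2m) purely as an illustration): the group velocity dω/dk then comes out as the classical velocity ħ·k/m = p/m, while the phase velocity ω/k is only half of it.

```python
# Group versus phase velocity for the quadratic dispersion ω(k) = ħ·k²/(2m).
import sympy as sp

k, hbar, m = sp.symbols('k hbar m', positive=True)
omega = hbar * k**2 / (2 * m)

v_group = sp.diff(omega, k)  # dω/dk = ħ·k/m: the classical velocity v
v_phase = omega / k          # ω/k  = ħ·k/(2m): half of v
print(v_group, v_phase, sp.simplify(v_group / v_phase))  # ratio: 2
```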
The relation between the frequencies νi and the wavelengths λi (or between the angular frequencies ωi and the wavenumbers ki) is referred to as the dispersion relation because it effectively determines if and how the wave packet will disperse or dissipate. Indeed, wave packets have a rather nasty property: they dissipate away. A real-life electron does not. Prof. H. Pleijel, then Chairman of the Nobel Committee for Physics of the Royal Swedish Academy of Sciences, dutifully notes this rather inconvenient property in the ceremonial speech for the 1933 Nobel Prize, which was awarded to Heisenberg for nothing less than the creation of quantum mechanics: "[...] As a result of this theory one is forced to the conclusion that the conception of matter as composed of unchangeable particles must be modified."

This should sound very familiar to you. However, it is, obviously, not true: real-life particles (electrons or atoms traveling in space) do not dissipate. Matter may change form and extent in space a little bit (such as, for example, when we are forcing them through one or two slits), but not fundamentally so!

[Footnote: The concept of an angular frequency (radians per time unit) may be more familiar to you than the concept of a wavenumber (radians per distance unit). Both are related through the velocity of the wave (which is the velocity of the component wave here, so that is the phase velocity): vp = ω/k = ν·λ.]

[Footnote: To be precise, Heisenberg got a postponed prize from 1932. Erwin Schrödinger and Paul A.M. Dirac jointly got the 1933 prize. Prof. Pleijel acknowledges all three in more or less equal terms in the introduction of his speech: "This year's Nobel Prizes for Physics are dedicated to the new atomic physics. The prizes, which the Academy of [Sciences has awarded, have gone to those who] have created and developed the basic ideas of modern atomic physics."]

The wave-particle duality of the ring current model should easily explain single-electron diffraction and interference (the electromagnetic oscillation which keeps the charge swirling would necessarily interfere with itself when being forced through one or two slits), but we have not had the time to engage in detailed research here. We will slightly nuance this statement later, but we will not fundamentally alter it. We think of matter-particles as an electric charge in motion. Hence, as it acts on a charge, the nature of the centripetal force that keeps the particle together must be electromagnetic. Matter-particles, therefore, combine wave-particle duality. Of course, it makes a difference when this electromagnetic oscillation, and the electric charge, move through a slit or in free space. We will come back to this later. The point to note is: matter-particles do not dissipate. Feynman actually notes that at the very beginning of his Lectures on quantum mechanics, when describing the double-slit experiment for electrons: "Electrons always arrive in identical lumps."

We should let this problem rest. We will want to look at a related but somewhat different topic: the wave equation. However, before we do so, we should discuss one more conceptual issue with de Broglie's concept of a matter-wave packet: the problem of (non-)localization.

De Broglie's non-localized wavepacket

The idea of a particle includes the idea of a more or less well-known position. Of course, we may rightfully assume we cannot know this position exactly, for the following reasons:

1. The precision of our measurements may be limited (Heisenberg referred to this as an Ungenauigkeit).

2. Our measurement might disturb the position and, as such, cause the information to get lost and, as a result, introduce an uncertainty (Unbestimmtheit).
3. One may also think the uncertainty is inherent to Nature (Ungewissheit). We think that, despite all thought experiments and Bell's No-Go Theorem, the latter assumption remains non-proven. Indeed, we fully second the crucial comment/question/criticism from H.A. Lorentz after the presentation of the papers by Louis de Broglie, Max Born and Erwin Schrödinger, Werner Heisenberg, and Niels Bohr at the occasion of the 1927 Solvay Conference: why should we elevate indeterminism to a philosophical principle?

However, the root cause of the uncertainty does not matter. The point is this: the necessity to model a particle as a wave packet rather than as a single wave is usually motivated by the need to confine it to a certain region. Let us, once again, quote Richard Feynman here:

"If an amplitude to find a particle at different places is given by e^(i(ω·t − k·x)), whose absolute square is a constant, that would mean that the probability of finding a particle is the same at all points. That means we do not know where it is; it can be anywhere; there is a great uncertainty in its location. On the other hand, if the position of a particle is more or less well known and we can predict it fairly accurately, then the probability of finding it in different places must be confined to a certain region, whose length we call Δx. Outside this region, the probability is zero. Now this probability is the absolute square of an amplitude, and if the absolute square is zero, the amplitude is also zero, so that we have a wave train whose length is Δx, and the wavelength (the distance between nodes of the waves in the train) of that wave train is what corresponds to the particle momentum."

Indeed, one of the properties of the idea of a particle is that it must be somewhere at any point in time, and that somewhere must be defined in terms of one-, two- or three-dimensional physical space. Now, we do not quite see how the idea of a wave train or a wavepacket solves that problem. A composite wave with a finite or infinite number of component waves with (phase) frequencies νi and wavelengths λi is still what it is: an oscillation which repeats itself in space and in time. It is, therefore, all over the place, unless you want to limit its domain to some (randomly or non-randomly chosen) Δx space.

[Footnote: A mathematical proof is only as good as its assumptions and we, therefore, think the uncertainty is, somehow, built into the assumptions of John Stewart Bell's (in)famous theorem. There is ample but, admittedly, non-conclusive literature on that, so we will let the interested reader google and study such metaphysics. See our paper on the 1921 and 1927 Solvay Conferences in this regard.]

[Footnote: We translated Lorentz's comment from the original French, which reads as follows: "Faut-il nécessairement ériger l'indéterminisme en principe?"]

[Footnote: See: Probability wave amplitudes, in: Feynman's Lectures on Physics, Vol. III, Chapter 2, Section 1.]

We will let this matter rest too. Let us look at the concept of concepts: the wave equation.

The wavefunction, the wave equation and Heisenberg's uncertainty

With the benefit of hindsight, we now know the 1927 and later Solvay Conferences pretty much settled the battle for ideas in favor of the new physics. At the occasion of the 1948 Solvay Conference, it is only Paul Dirac who seriously challenges the approach based on perturbation theory which, at the occasion, is powerfully presented by Robert Oppenheimer.
Dirac makes the following comment: "All the infinities that are continually bothering us arise when we use a perturbation method, when we try to expand the solution of the wave equation as a power series in the electron charge. Suppose we look at the equations without using a perturbation method, then there is no reason to believe that infinities would occur. The problem, to solve the equations without using perturbation methods, is of course very difficult mathematically, but it can be done in some simple cases. For example, for a single electron by itself one can work out very easily the solutions without using perturbation methods and one gets solutions without infinities. I think it is true also for several electrons, and probably it is true generally: we would not get infinities if we solve the wave equations without using a perturbation method."

However, Dirac is very much aware of the problem we mentioned above: the wavefunctions that come out as solutions dissipate away. Real-life electrons (any real-life matter-particle, really) do not do that. In fact, we refer to them as being particle-like because of their integrity: an integrity that is modeled by the Planck-Einstein relation in Louis de Broglie's earliest papers too. Hence, Dirac immediately adds the following, recognizing the problem: "If we look at the solutions which we obtain in this way, we meet another difficulty: namely we have the run-away electrons appearing. Most of the terms in our wave functions will correspond to electrons which are running away, in the sense we have discussed yesterday, and cannot correspond to anything physical. Thus nearly all the terms in the wave functions have to be discarded, according to present ideas. Only a small part of the wave function has a physical meaning."

In our interpretation of matter-particles, this small part of the wavefunction is, of course, the real electron, and it is the ring current or Zitterbewegung electron! It is the trivial solution that Schrödinger had found, and which Dirac mentioned very prominently in his 1933 Nobel Prize lecture. The other part of the solution(s) is (are), effectively, bizarre oscillations which Dirac here refers to as 'run-away electrons'. With the benefit of hindsight, one wonders why Dirac did not see what we see now.

[Footnote: See pp. 282-283 of the report of the 1948 Solvay Conference, Discussion du rapport de Mr. Oppenheimer. See also the quote from Dirac's 1933 Nobel Prize speech in this paper. The run-away solutions correspond to wavefunctions dissipating away. The matter-particles they purport to describe obviously do not.]

When discussing wave equations, it is always useful to try to imagine what they might be modeling. Indeed, if we try to imagine what the wavefunction might actually be, then we should also associate some (physical) meaning with the wave equation: what could it be? In physics, a wave equation (as opposed to the wavefunctions that are a solution to it, and usually a second-order linear differential equation) is used to model the properties of the medium through which the waves are traveling. If we are going to associate a physical meaning with the wavefunction, then we may want to assume the medium here would be the same medium as that through which electromagnetic waves are traveling, so that is the vacuum. Needless to say, we already have a set of wave equations here: those that come out of Maxwell's equations! Should we expect contradictions here? We hope not, of course, but then we cannot be sure.
An obvious candidate for a wave equation for matter-waves in free space is Schrödinger's equation without the term for the electrostatic potential around a positively charged nucleus:

∂ψ/∂t = i·(ħ/2meff)·∇²ψ

What is meff? It is the concept of the effective mass of an electron which, in our ring current model, corresponds to the relativistic mass of the electric charge as it zitters around at lightspeed, and so we can effectively substitute 2meff for the mass of the electron: m = me = 2meff. The equation then reads ∂ψ/∂t = i·(ħ/m)·∇²ψ. So far, so good. The question now is: are we talking one wave or many waves? A wave packet or the elementary wavefunction? Let us first make the analysis for one wave only, assuming that we can write ψ as some elementary wavefunction ψ = a·e^(iθ) = a·e^(i·(kx − ωt)).

Both sides of the equation are then complex-valued expressions, and the ∂ψ/∂t = i·(ħ/m)·∇²ψ equation amounts to writing something like this: a + i·b = i·(c + i·d). Remembering that i² = −1, you can then easily figure out that i·(c + i·d) = i·c + i²·d = −d + i·c. The ∂ψ/∂t = i·(ħ/m)·∇²ψ wave equation therefore corresponds to the following set of equations:

Re(∂ψ/∂t) = −(ħ/m)·Im(∇²ψ) ⟺ ω·sin(kx − ωt) = k²·(ħ/m)·sin(kx − ωt)

Im(∂ψ/∂t) = (ħ/m)·Re(∇²ψ) ⟺ ω·cos(kx − ωt) = k²·(ħ/m)·cos(kx − ωt)

It is, therefore, easy to see that ω and k must be related through the following dispersion relation:

ω = (ħ/m)·k²

So far, so good. In fact, we can easily verify this makes sense if we substitute the energy E using the Planck-Einstein relation E = ħ·ω and assume the wave velocity is equal to c, which should be the case if we are talking about the same vacuum as the one through which Maxwell's electromagnetic waves are supposed to be traveling: ω = c·k then gives us c·k = (ħ/m)·k², so k = m·c/ħ and, therefore, E = ħ·ω = ħ·c·k = m·c².

[Footnote: One of our correspondents wrote us this: "Remember these scientists did not have all that much to work with. Their experiments were imprecise as measured by today's standards, and they tried to guess what is at work. Even my physics professor in 1979 believed Schrödinger's equation yielded the exact solution (electron orbitals) for hydrogen." Hence, perhaps we should not be surprised. In light of the caliber of these men, however, we are.]

[Footnote: For Schrödinger's equation in free space, or the same equation with the Coulomb potential, see Chapters 16 and 19 of Feynman's Lectures on Quantum Mechanics respectively. Note that we moved the imaginary unit to the right-hand side, as a result of which the usual minus sign disappears: 1/i = −i.]

[Footnote: See Dirac's description of Schrödinger's Zitterbewegung of the electron for an explanation of the lightspeed motion of the charge. For a derivation of the m = 2meff formula, we refer the reader to our paper on the ring current model of an electron, where we write the effective mass as meff = mγ. The gamma symbol (γ) refers to the photon-like character of the charge as it zips around some center at lightspeed. However, unlike a photon, a charge carries charge. Photons do not. We invite the reader to double-check our calculations. If needed, we provide some more detail in one of our physics blog posts on the geometry of the wavefunction.]

We now need to think about the question we started out with: one wave or many component waves? It is fairly obvious that if we think of many component waves, each with their own frequency, then we need to think about different values mi or Ei for the mass and/or energy of the electron as well! How can we motivate or justify this? The electron mass or energy is known, isn't it? This is where the uncertainty comes in: the electron may have some (classical) velocity or momentum for which we may not have a definite value.
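Readers who want to double-check the algebra can do so symbolically. A minimal sketch, assuming Python with sympy (the symbol names are ours):

```python
# Substituting ψ = a·exp(i(kx − ωt)) into ∂ψ/∂t = i·(ħ/m)·∂²ψ/∂x²
# and solving for ω reproduces the dispersion relation ω = (ħ/m)·k².
import sympy as sp

x, t, a, k, w, hbar, m = sp.symbols('x t a k omega hbar m', positive=True)
psi = a * sp.exp(sp.I * (k * x - w * t))

wave_eq = sp.Eq(sp.diff(psi, t), sp.I * (hbar / m) * sp.diff(psi, x, 2))
print(sp.solve(wave_eq, w))  # [hbar*k**2/m]
```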
If so, we may assume different values for its (kinetic) energy and/or its (linear) momentum are possible. We then effectively get various possible values for m, E, and p, which we may denote as mi, Ei and pi, respectively. We can, then, effectively write our dispersion relation and, importantly, the condition for it to make physical sense as:

ωi = (ħ/mi)·ki², with ωi/ki = c = fi·λi

Of course, the c = fi·λi condition makes a lot of sense: we would not want the properties of the medium in which matter-particles move to be different from those of the medium through which electromagnetic waves are travelling: lightspeed should remain lightspeed, and waves (matter-waves included) should not be traveling faster. In the next section, we will show how one can relate the uncertainties in the (kinetic) energy and the (linear) momentum of our particle using the relativistically correct energy-momentum relation, also taking into account that linear momentum is a vector and, hence, that we may have uncertainty in both its direction as well as its magnitude. Such explanations also provide for a geometric interpretation of the de Broglie wavelength. At this point, however, we should just note the key conclusions from our analysis so far:

1. If there is a matter-wave, then it must travel at the speed of light and not, as Louis de Broglie suggests, at some superluminal velocity.

2. If the matter-wave is a wave packet rather than a single wave with a precisely defined frequency and wavelength, then such a wave packet will represent our limited knowledge about the momentum and/or the velocity of the electron. The uncertainty is, therefore, not inherent to Nature, but to our limited knowledge about the initial conditions.

[Footnote: If you google this (check out the Wikipedia article on the dispersion relation, for example), you will find this relation is referred to as a non-relativistic limit of a supposedly relativistically correct dispersion relation, and the various authors of such accounts will usually also add the 1/2 factor because they conveniently (but wrongly) forget to distinguish between the effective mass of the Zitterbewegung charge and the total energy or mass of the electron as a whole. We apologize if this sounds slightly ironic, but we are actually astonished Louis de Broglie does not mind having to assume superluminal speeds for wave velocities, even if it is for phase rather than group velocities.]

We will now look at a moving electron in more detail. Before we do so, we should address a likely and very obvious question of the reader: why did we choose Schrödinger's wave equation as opposed to, say, Dirac's wave equation for an electron in free space? It is not a coincidence, of course! The reason is this: Dirac's equation obviously does not work! It produces 'run-away electrons' only. The reason is simple: Dirac's equation comes with a nonsensical dispersion relation. Schrödinger's original equation does not, which is why it works so well for bound electrons too! We refer the reader to the Annex to this paper for a more detailed discussion on this.

The wavefunction and (special) relativity

Let us consider the idea of a particle traveling in the positive x-direction at constant speed v. This idea implies a pointlike concept of position: we think the particle will be somewhere at some point in time. The somewhere in this expression does not mean that we think the particle itself is dimensionless or pointlike: we think it is not. It just implies that we can associate the ring current with some center of the oscillation.
The oscillation itself has a physical radius, which we referred to as the Compton radius of the electron and which illustrates the quantization of space that results from the Planck-Einstein relation. Two extreme situations may be envisaged: v = 0 or v = c. However, let us consider the more general case in-between. In our reference frame, we will have a position (a mathematical point in space, that is) which is a function of time: x(t) = v·t. Let us now denote the position and time in the reference frame of the particle itself by x' and t'. Of course, the position of the particle in its own reference frame will be equal to x'(t') = 0 for all t', and the position and time in the two reference frames will be related by Lorentz's equations:

x' = (x − v·t)/√(1 − v²/c²)

t' = (t − v·x/c²)/√(1 − v²/c²)

Hence, if we denote the energy and the momentum of the electron in our reference frame as Ev and p = mv·v, then the argument of the (elementary) wavefunction a·e^(iθ) can be re-written as follows:

θ = (Ev·t − p·x)/ħ = [(Ev − p·v)/ħ]·t = (E0/ħ)·(t/γ) = (E0/ħ)·t'

We have just shown that the argument of the wavefunction is relativistically invariant: E0 is, obviously, the rest energy and, because p' = 0 in the reference frame of the electron, the argument of the wavefunction effectively reduces to E0·t'/ħ in the reference frame of the electron itself. Note that, in the process, we also demonstrated the relativistic invariance of the Planck-Einstein relation! This is why we feel that the argument of the wavefunction (and the wavefunction itself) is more real in a physical sense than the various wave equations (Schrödinger, Dirac, or Klein-Gordon) for which it is some solution.

[Footnote: We offer a non-technical historical discussion in our paper on the metaphysics of modern physics. Schrödinger's equation is a huge improvement over the Rutherford-Bohr model as it explains the finer structure of the hydrogen spectrum. However, Schrödinger's model of an atom is incomplete as well because it does not explain the hyperfine splitting, the Zeeman splitting (anomalous or not) in a magnetic field, or the (in)famous Lamb shift. These are to be explained not only in terms of the magnetic moment of the electron but also in terms of the magnetic moment of the nucleus and its constituents (protons and neutrons), or of the coupling between those magnetic moments. The coupling between magnetic moments is, in fact, the only complete and correct solution to the problem, and it cannot be captured in a wave equation: one needs a more sophisticated analysis in terms of (a more refined version of) Pauli matrices to do that.]

[Footnote: We conveniently choose our x-axis so it coincides with the direction of travel. This does not have any impact on the generality of the argument. We may, of course, also think of the position as a position vector by relating this point to the chosen origin of the reference frame: a point can, effectively, only be defined in terms of other points. The Lorentz equations are given in their simplest form. We may refer the reader to any textbook here but, as usual, we like Feynman's lecture on it (chapters 15, 16 and 17 of the first volume of Feynman's Lectures on Physics).]

Let us further explore this by trying to think of the physical meaning of the de Broglie wavelength λ = h/p. How should we think of it? What does it represent? We have been interpreting the wavefunction as an implicit function: for each x, we have a t, and vice versa. There is, in other words, no uncertainty here: we think of our particle as being somewhere at any point in time, and the relation between the two is given by x(t) = v·t. We will get some linear motion.
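The invariance claim is easy to check numerically. A small sketch, in units where E0 = c = ħ = 1 (our choice, for clarity only; the velocity 0.6c is arbitrary):

```python
# Along the trajectory x = v·t, the argument (Ev·t − p·x)/ħ equals E0·t'/ħ
# with t' = t/γ, for any t.
import math

E0 = c = hbar = 1.0
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
Ev = gamma * E0            # total energy in our (lab) frame
p = gamma * E0 * v / c**2  # relativistic momentum

for t in (0.5, 1.0, 2.0):
    x = v * t                             # the particle's trajectory
    theta_lab = (Ev * t - p * x) / hbar   # argument in the lab frame
    theta_rest = E0 * (t / gamma) / hbar  # E0·t'/ħ in the particle's frame
    print(t, theta_lab, theta_rest)       # the two values coincide
```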
If we look at the ψ = a·cos(p·x/ħ − E·t/ħ) + i·a·sin(p·x/ħ − E·t/ħ) expression once more, we can write p·x/ħ as Δ and think of it as a phase factor. We will, of course, be interested to know for what x this phase factor Δ = p·x/ħ will be equal to 2π. Hence, we write:

Δ = p·x/ħ = 2π ⟺ x = 2π·ħ/p = h/p = λ

What is this λ? If we think of our Zitterbewegung charge traveling through space, we may think of an image such as the one below, and it is tempting to think the de Broglie wavelength must be the distance between the crests (or the troughs) of the wave.

[Footnote: One can use either the general E = mc² or, if we would want to make it look somewhat fancier, the p·c = Ev·(v/c) relation. The reader can verify they amount to the same.]

[Footnote: We have an oscillation in two dimensions here. Hence, we cannot really talk about crests or troughs, but the reader will get the general idea. We should also note that we should probably not think of the plane of oscillation as being perpendicular to the plane of motion: we think it is moving about in space itself as a result of past interactions or events (think of photons scattering off it, for example).]

Figure 2: An interpretation of the de Broglie wavelength?

However, that would be too easy: note that for p = m·v = 0 (or v → 0), we have a division by zero and we, therefore, get an infinite value for λ = h/p. We can also easily see that for v → c, we get a λ that is equal to the Compton wavelength h/mc. How should we interpret that? We may get some idea by playing some more with the relativistically correct equation for the argument of the wavefunction. Let us, for example, re-write the argument of the wavefunction as a function of time only:

θ = [(Ev − p·v)/ħ]·t = √(1 − v²/c²)·(E0/ħ)·t

We recognize the inverse Lorentz factor here, which goes from 1 to 0 as v goes from 0 to c, as shown below.

Figure 3: The inverse Lorentz factor as a function of (relative) velocity (v/c)

Note the shape of the function: it is a simple circular arc. This result should not surprise us, of course, as we also get it from the Lorentz formula:

τ = t·√(1 − v²/c²)

This formula gives us the relation between the coordinate time and the proper time which, by taking the derivative of one with respect to the other, we can write in terms of the Lorentz factor:

dτ/dt = 1/γ

We introduced a different symbol here: the time in our reference frame (t) is the coordinate time, and the time in the reference frame of the object itself (τ) is referred to as the proper time. Of course, τ is just t', so why are we doing this? What does it all mean? We need to do these gymnastics because we want to introduce a not-so-intuitive but very important result: the Compton radius becomes a wavelength when v goes to c. We will be very explicit here and go through a simple numerical example to think through that formula above. Let us assume, for example, that we are able to speed up an electron to, say, about one tenth of the speed of light. The Lorentz factor will then be equal to γ ≈ 1.005. This means we added 0.5% (about 2,500 eV) to the rest energy E0: Ev = γ·E0 ≈ 1.005·0.511 MeV ≈ 0.5135 MeV. The relativistic momentum will then be equal to p = mv·v = (0.5135 MeV/c²)·(0.1·c) ≈ 0.05135 MeV/c. The argument of the wavefunction then ticks about 0.5% more slowly than the E0/ħ frequency in the rest frame. This is interesting because we can see these equations are not all that abstract: we effectively get an explanation for relativistic time dilation out of them.

An equally interesting question is this: what happens to the radius of the oscillation for larger (classical) velocities of our particle? Does it change? It must. In the moving reference frame, we measure higher mass and, therefore, higher energy, as it includes the kinetic energy.
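Before we look at the radius, here is the numerical example above redone in code (a sketch; energies in MeV, momenta in MeV/c):

```python
# An electron at one tenth of lightspeed.
import math

E0 = 0.511  # electron rest energy, MeV
beta = 0.1
gamma = 1.0 / math.sqrt(1.0 - beta**2)

Ev = gamma * E0   # total energy
p = Ev * beta     # p·c = Ev·(v/c)
print(gamma)           # ≈ 1.00504
print(Ev)              # ≈ 0.5136 MeV
print((Ev - E0) * 1e6) # kinetic energy ≈ 2,575 eV ('about 2,500 eV')
print(p)               # ≈ 0.05136 MeV/c
```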
The c² = a²·ω² identity must now be written as c² = a'²·ω'². Instead of the rest mass m0 and rest energy E0, we must now use mv = γ·m0 and Ev = γ·E0 in the formulas for the Compton radius and the Einstein-Planck frequency, which we just write as m and E in the formulas below:

a = ħ/(m·c) and ω = E/ħ

This is easy to understand intuitively: we have the mass factor in the denominator of the formula for the Compton radius, so the radius must decrease as the mass of our particle increases with speed. Conversely, the mass factor is present in the numerator of the zbw frequency, and this frequency must, therefore, increase with velocity. It is interesting to note that we have a simple (inverse) proportionality relation here. The idea is visualized in the illustration below: the radius of the circulatory motion must effectively diminish as the electron gains speed. To be precise, the Compton radius multiplied by 2π becomes a wavelength, so we are talking about the Compton circumference, or whatever you want to call it.

[Footnote: Again, the reader should note that both the formula for the Compton radius or wavelength as well as the Planck-Einstein relation are relativistically invariant.]

[Footnote: We thank Prof. Dr. Giorgio Vassallo and his publisher for letting us re-use this diagram. It originally appeared in an article by Francesco Celani, Giorgio Vassallo and Antonino Di Tommaso (Maxwell's equations and Occam's Razor, November 2017). Once again, however, we should warn the reader that he or she should imagine the plane of oscillation to rotate or oscillate itself. He should not think of it as being static, unless we think of the electron moving in a magnetic field, in which case we should probably think of the plane of oscillation as being parallel to the direction of propagation. We will let the reader think through the geometric implications of this.]

Figure 4: The Compton radius must decrease with increasing velocity

Can the velocity go to c? In the limit, yes. This is very interesting because we can see that the circumference of the oscillation effectively turns into a linear wavelength in the process! This rather remarkable geometric property relates our zbw electron model to our photon model, which we will not talk about here, however. Let us quickly make some summary remarks, before we proceed to what we wanted to present here: a geometric interpretation of the de Broglie wavelength:

1. The center of the Zitterbewegung is plain nothingness and we must, therefore, assume some two-dimensional oscillation makes the charge go round and round. This is, in fact, the biggest mystery of the model and we will, therefore, come back to it later. As for now, the reader should just note that the angular frequency of the Zitterbewegung rotation is given by the Planck-Einstein relation (ω = E/ħ) and that we get the Zitterbewegung radius (which is just the Compton radius a = rC = ħ/mc) by equating the E = m·c² and E = m·a²·ω² equations. The energy and, therefore, the (equivalent) mass is in the oscillation, and we should, therefore, associate the momentum p = E/c with the electron as a whole or, if we would really like to associate it with a single mathematical point in space, with the center of the oscillation as opposed to the rotating massless charge.

2. We should note that the distinction between the pointlike charge and the electron is subtle but essential. The electron is the Zitterbewegung as a whole: the pointlike charge has no rest mass, but the electron as a whole does. In fact, that is the whole point of the whole exercise: we explain the rest mass of an electron by introducing a rest matter oscillation.
3. As Dirac duly notes, the model cannot be verified directly because of the extreme frequency (fe = ωe/2π = E/h ≈ 0.123×10²¹ Hz) and the sub-atomic scale (a = rC = ħ/mc ≈ 386×10⁻¹⁵ m). However, it can be verified indirectly by phenomena such as Compton scattering, the interference of an electron with itself as it goes through a one- or double-slit experiment, and other indirect evidence. In addition, it is logically consistent as it generates the right values for the angular momentum (L = ħ/2), the magnetic moment (μ = (qe/2m)·ħ), and other intrinsic properties of the electron.

[Footnote: We may, therefore, think of the Compton wavelength as a circular wavelength: it is the length of a circumference rather than a linear feature! We may refer the reader to our paper on Relativity, Light and Photons. The two results that we gave also show we get the gyromagnetic factor (g = 2). We have also demonstrated that we can easily explain the anomaly in the magnetic moment of the electron by assuming a non-zero physical dimension for the pointlike charge (see our paper on The Electron and Its Wavefunction).]

We are now ready to finally give you a geometric interpretation of the de Broglie wavelength.

The geometric interpretation of the de Broglie wavelength

We should refer the reader to Figure 4 to ensure an understanding of what happens when we think of an electron in motion. If the tangential velocity remains equal to c, and the pointlike charge has to cover some horizontal distance as well, then the circumference of its rotational motion must decrease so it can cover the extra distance. Our formula for the zbw or Compton radius was this:

a = rC = ħ/(m·c) = λC/2π

The λC is the Compton wavelength. We may think of it as a circular rather than a linear length: it is the circumference of the circular motion. How can it decrease? If the electron moves, it will have some kinetic energy, which we must add to the rest energy. Hence, the mass m in the denominator (mc) increases and, because ħ and c are physical constants, a must decrease. How does that work with the frequency? The frequency is proportional to the energy (E = ħ·ω = h·f = h/T), so the frequency (in whatever way you want to measure it) must also increase. The cycle time T must, therefore, decrease. We write:

T = 1/f = h/E

Hence, our Archimedes' screw gets stretched, so to speak. Let us think about what happens here. We get the following formula for the λ wavelength in Figure 2:

λ = v·T = v·(h/E) = β·c·(h/E) = β·(h/mc) = β·λC

It is now easy to see that, if we let the velocity go to c, the circumference of the oscillation will effectively become a linear wavelength! We can now relate this classical velocity (v) to the equally classical linear momentum of our particle and provide a geometric interpretation of the de Broglie wavelength, which we'll denote by using a separate subscript: λp = h/p. It is, obviously, different from the λ wavelength in Figure 2. In fact, we have three different wavelengths now: the Compton wavelength λC (which is a circumference, actually), that weird horizontal distance λ, and the de Broglie wavelength λp. It is easy to make sense of them by relating all three, as we will do below.

[Footnote: The C subscript stands for the C of Compton, not for the speed of light (c).]
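As a quick aside, the λ = v·T = β·λC result just derived is easy to reproduce for the 0.1c electron of our earlier example (a sketch, assuming scipy for the constants):

```python
# The horizontal distance covered per zbw cycle equals β times the
# (relativistic) Compton wavelength.
from scipy.constants import h, c, m_e
import math

beta = 0.1
gamma = 1.0 / math.sqrt(1.0 - beta**2)
m = gamma * m_e      # relativistic mass
E = m * c**2         # total energy

T = h / E            # cycle time of the oscillation
lam = beta * c * T   # λ = v·T
lam_C = h / (m * c)  # Compton wavelength (a circumference!)

print(lam, beta * lam_C, math.isclose(lam, beta * lam_C))  # equal
```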
Let us first re-write the de Broglie wavelength in terms of the Compton wavelength (λC = h/mc), the (relative) velocity β = v/c, and the Lorentz factor γ:

λp = h/p = h/(m·v) = h/(γ·m0·β·c) = (1/γβ)·(h/m0·c)

It is a curious function, but it helps us to see what happens to the de Broglie wavelength as m and v both increase as our electron picks up some momentum p = m·v. Its wavelength must actually decrease as its (linear) momentum goes from zero to some much larger value (possibly infinity, as v goes to c), and the 1/γβ factor tells us how exactly. To help the reader, we inserted a simple graph (below) that shows how the 1/γβ factor comes down from infinity (+∞) to zero as v goes from 0 to c or (what amounts to the same) as the relative velocity β = v/c goes from 0 to 1. The 1/γ factor (so that is the inverse Lorentz factor) is just a simple circular arc, while the 1/β function is just a regular inverse function (y = 1/x) over the domain β = v/c, which goes from 0 to 1 as v goes from 0 to c. Their product gives us the green curve which, as mentioned, comes down from +∞ to 0.

Figure 5: The 1/γ, 1/β and 1/γβ graphs

This analysis yields the following:

1. The de Broglie wavelength will be equal to λC = h/mc for v = c:

λp = h/p = h/(m·c) = λC for v = c

2. We can now relate both the Compton as well as the de Broglie wavelengths to our new wavelength λ = β·λC, which is that length between the crests or troughs of the wave. We get the following two rather remarkable results:

λ·λp = β·λC·(λC/β) = λC² (always!)

λ/λp = β·λC/(λC/β) = β² (always!)

The product of the λ = β·λC wavelength and the de Broglie wavelength is the square of the Compton wavelength, and their ratio is the square of the relative velocity β = v/c.

[Footnote: We should emphasize, once again, that our two-dimensional wave has no real crests or troughs: λ is just the distance between two points whose argument is the same, except for a phase factor equal to n·2π (n = 1, 2,…).]

[Footnote: We advise the reader to always think about proportional and inversely proportional relations (y = kx versus y = x/k) throughout the exposé because these relations are not always intuitive. The inverse proportionality relation between the Compton radius and the mass of a particle is a case in point in this regard: a more massive particle has a smaller size! This is why we think of the Compton wavelength as a circular wavelength. However, note that the idea of rotation does not disappear: it is what gives the electron angular momentum, regardless of its (linear) velocity! As mentioned above, these rather remarkable geometric properties relate our zbw electron model with our photon model, which we have detailed in another paper.]

This is all very interesting but not good enough yet: the formulas do not give us the easy geometric interpretation of the de Broglie wavelength that we are looking for. We get such an easy geometric interpretation only when using natural units. If we re-define our distance, time and force units by equating c and h to 1, then the Compton wavelength (remember: it is a circumference, really) and the mass of our electron will have a simple inversely proportional relation:

λC = 1/m

We get equally simple formulas for the de Broglie wavelength and our λ wavelength:

λp = 1/(m·β) and λ = β/m

This is quite deep: we have three lengths here defining all of the geometry of the model, and they all depend on the rest mass of our object and its relative velocity only. They are related through that equation we found above:

λ·λp = λC²
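These relations are trivial to verify numerically. A sketch in natural units (h = c = 1; the rest mass and velocity are arbitrary assumed values):

```python
# Checking λ·λp = λC² and λ/λp = β² for an arbitrary rest mass and velocity.
import math

m0, beta = 1.0, 0.3                     # any m0 > 0 and 0 < beta < 1 works
gamma = 1.0 / math.sqrt(1.0 - beta**2)
m = gamma * m0                          # relativistic mass

lam_C = 1.0 / m           # Compton 'circumference' λC = h/mc
lam = beta * lam_C        # the horizontal λ of Figure 2
lam_p = 1.0 / (m * beta)  # de Broglie wavelength λp = h/p

print(math.isclose(lam * lam_p, lam_C**2))  # True: the latus rectum relation
print(math.isclose(lam / lam_p, beta**2))   # True
```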
This is nothing but the latus rectum formula for an ellipse, which is illustrated below. The length of the chord perpendicular to the major axis of an ellipse is referred to as the latus rectum. One half of that length is the actual radius of curvature of the osculating circles at the endpoints of the major axis. (The endpoints are also known as the vertices of the ellipse. As for the concept of an osculating circle, that is the circle which, among all tangent circles at the given point, approaches the curve most tightly. It was named circulus osculans – Latin for 'kissing circle' – by Gottfried Wilhelm Leibniz. Apart from being a polymath and a philosopher, he was also a great mathematician. In fact, he may be credited with inventing differential and integral calculus.) We then have the usual distances along the major and minor axis (a and b). Now, one can show that the following formula has to be true:

a·p = b²

Figure 6: The latus rectum formula (source: Wikimedia Commons, by Ag2gaeh – own work, CC BY-SA 4.0)

The reader can now easily verify that our three wavelengths obey the same latus rectum formula, which we think of as a rather remarkable result. (A note on those natural units: equating c to 1 gives us natural distance and time units, and equating h to 1 then also gives us a natural force unit – and, because of Newton's law, a natural mass unit as well. Why? Because Newton's F = m·a equation is relativistically correct: a force is that which gives some mass acceleration. Conversely, mass can be defined as the inertia to a change of its state of motion, because any change in motion involves a force and some acceleration. We, therefore, prefer to write m as the proportionality factor between the force and the acceleration: m = F/a. This explains why time, distance and force units are closely related.) We must now proceed and offer some final remarks on a far more difficult question.

The real mystery of quantum mechanics

We think we have sufficiently demonstrated the theoretical attractiveness of the historical ring current model. This is why we shared it as widely as we could. We usually get positive comments. However, when we first submitted our thoughts to Prof. Dr. Alexander Burinskii, who is leading the research on possible Dirac-Kerr-Newman geometries of electrons (we will let the reader google the relevant literature on electron models based on Dirac-Kerr-Newman geometries), he wrote us this: "I know many people who considered the electron as a toroidal photon and do it up to now. I also started from this model about 1969 and published an article in JETP in 1974 on it: 'Microgeons with spin'. [However] There was [also] this key problem: what keeps [the pointlike charge] in its circular orbit?" (The mentioned email exchange between Dr. Burinskii and the author of this paper goes back to 22 December 2018. 'Toroidal photon' was Dr. Burinskii's terminology at the time. It does refer to the Zitterbewegung electron: a pointlike charge with no mass in an oscillatory motion, orbiting at the speed of light around some center.)

This question still puzzles us, and we do not have a definite answer to it. As far as we are concerned, it is, in fact, the only remaining mystery in quantum physics. What can we say about it? Let us elaborate on Burinskii's point:

1. The centripetal force must, obviously, be electromagnetic, because it only has a pointlike charge to grab onto, and comparisons with superconducting persistent currents are routinely made. However, such comparisons do not answer this pertinent question: in free space, there is nothing to effectively hold the pointlike charge in place and it must, therefore, spin away!
Dr. Burinskii later wrote saying he does not like to refer to the pointlike charge as a toroidal photon, because a photon does not carry any charge. The pointlike charge inside of an electron does: that is why matter-particles are matter-particles and photons are photons. Matter-particles carry charge (we think of neutrons as carrying equal positive and negative charge).

2. In addition, the analogy with superconducting persistent currents also does not give any unique Compton radius: the formulas work for any mass. They work, for example, for a muon-electron, and for a proton. The question then becomes: what makes an electron an electron – and what makes a muon a muon? Or a proton a proton? For the time being, we must simply accept that an electron is what it is. In other words, we must take both the elementary charge and the mass of the electron (and its more massive variant(s), as well as the mass of the proton) as given by Nature. (For some speculative thoughts, we may refer the reader to previous references, such as our electron paper or the Annex to our more general paper on classical quantum physics. We calculate the rather enormous force inside of the muon and the proton in these papers and conclude it may justify the concept of a strong(er) force.)
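To illustrate the point that the formulas work for any mass, the earlier snippet can be extended with the muon and proton masses. This is, once more, our own illustration, using CODATA values:

m_mu = 1.883531627e-28   # muon mass (kg)
m_p  = 1.67262192369e-27 # proton mass (kg)

print(hbar / (m * c))     # electron Compton radius: ≈ 3.86e-13 m
print(hbar / (m_mu * c))  # muon Compton radius:     ≈ 1.87e-15 m
print(hbar / (m_p * c))   # proton Compton radius:   ≈ 2.10e-16 m

The formula accepts any mass, which is exactly the point: nothing in it singles out the electron.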
In the longer run, however, we should probably abandon an explanation of the ring current model in terms of Maxwell's equations in favor of what is, effectively, the more mysterious explanation of a two-dimensional oscillation. However, we have not advanced very far in our thinking on these issues and we, therefore, welcome any suggestion that the reader of this paper might have in this regard.

Jean Louis Van Belle, 9 May 2020

Annex: The wave equations for matter-particles in free space

We will not spend any time on Dirac's wave equation, for a very simple reason: it does not work. We quoted Dirac himself on that, so we will not even bother to present and explain it. Nor will we present wave equations which further build on it: trying to fix something that did not work in the first place suggests poor problem-solving tactics. We are amateur physicists only and, hence, we are then left with two basic choices: Schrödinger's wave equation in free space and the Klein-Gordon equation. (The author's we (pluralis modestiae) sounds somewhat weird but, fortunately, we did talk this through with some other amateur physicists, and they did not raise any serious objections to my thoughts here.) Before going into detail, let us quickly jot them down and offer some brief introductory comments:

1. Schrödinger's wave equation in free space is the one we used in our paper and which mainstream physicists – unfortunately and wrongly so (in our not-so-humble view, at least) – consider to be not relativistically correct:

iħ·∂ψ/∂t = −(ħ²/2meff)·∇²ψ

(Schrödinger's equation in free space is just Schrödinger's equation without the V(r) term: this term captures the electrostatic potential from the positively charged nucleus. If we drop it, logic tells us that we should, effectively, get an equation for a non-bound electron: an equation for its motion in free space (the vacuum).) The reader should note that the concept of the effective mass (meff) of an electron in this equation emerges from an analysis of the motion of an electron through a crystal lattice (or, to be very precise, its motion in a linear array or a line of atoms). We will look at this argument in a moment. You should just note here that Richard Feynman – and all academics who produced textbooks based on his – rather shamelessly substitute the effective mass (meff) by me rather than by me/2. They do so by noting – without any explanation at all (see Richard Feynman's move from equation 16.12 to 16.13 in his Lecture on the dependence of amplitudes on position) – that the effective mass of an electron becomes the free-space mass of an electron outside of the lattice. We think this is totally unwarranted too. The ring current model explains the ½ factor by distinguishing between (1) the effective mass of the pointlike charge inside of the electron (while its rest mass is zero, it acquires a relativistic mass equal to half of the total mass of the electron) and (2) the total (rest) mass of the electron, which consists of two parts: the (kinetic) energy of the pointlike charge and the (potential) energy in the field that sustains that motion. (One can show this in various ways – see our paper on the ring current model for an electron – but, for our purposes here, we may simply remind the reader of the energy equipartition theorem: one may think of half of the energy as being kinetic, while the other half is potential. The kinetic energy is in the motion of the pointlike charge, while its potential energy is field energy.) Schrödinger's wave equation for a charged particle in free space – which he wrote down in 1926, and which Feynman describes with the hyperbole we all love: "the great historical moment marking the birth of the quantum mechanical description of matter occurred when Schrödinger first wrote down his equation in 1926" – therefore reduces to this:

iħ·∂ψ/∂t = −(ħ²/me)·∇²ψ

As mentioned above, we think this is the right wave equation because it produces a sensible dispersion relation: one that does not lead to the dissipation of the particles that it is supposed to describe. The Nobel Prize committee should have given Schrödinger all of the 1933 Nobel Prize, rather than splitting it half-half between him and Paul Dirac. However, for some reason, physicists did not think of the Zitterbewegung of a charge or some ring current model and, therefore, dumped Schrödinger's equation for something fancier.

2. The Klein-Gordon equation, which Feynman – somewhat hastily – already writes down as part of a discussion on classical dispersion equations for his sophomore students, simply because he 'cannot resist' writing down this 'grand equation', which 'corresponds to the dispersion equation for quantum-mechanical waves'. In fact, because his students are at that point not yet familiar with differential calculus for vector fields (and, therefore, not with the Laplacian operator ∇²), Feynman just writes it like this (see Richard Feynman, Waves in three dimensions, Lectures, Vol. I, Chapter 48):

∂²φ/∂x² − (1/c²)·∂²φ/∂t² = (m²c²/ħ²)·φ

For some reason we do not quite understand, Feynman does not replace the m²c²/ħ² factor by the inverse of the squared Compton radius a = ħ/mc: why did he not connect the dots here?
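Connecting those dots is straightforward. The two-line check below is, once more, our own illustration, reusing the constants defined earlier:

print((m * c / hbar)**2)  # m²c²/ħ² ≈ 6.71e24 per m²
print(1 / a**2)           # identical: the inverse of the squared Compton radius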
It is what it is. We are, in any case, very fortunate that Feynman does go through the trouble of developing both Schrödinger's as well as the more popular Klein-Gordon wave equation for the propagation of quantum-mechanical probability amplitudes. Let us look at them in more detail now. (We are not ashamed to admit that Feynman's early introduction of this equation in his three volumes of lectures on physics – which, as he clearly states in his preface, were written "to maintain the interest [in physics] of the very enthusiastic and rather smart students coming out of the high schools" – did not miss its effect on us: I wrote this equation on a piece of paper on the backside of the toilet door of my student room when getting my first degree (in economics) and vowed that, one day, I would understand it "in the way I would like to understand it." One of the fellow amateur physicists who stimulates our research remarks that we may simply be the first to think of deriving the Compton radius of an electron from the more familiar concept of the Compton wavelength: when googling for the Compton radius of an electron, we effectively note our blog posts on it pop up rather prominently. This is one of the reasons why we still prefer this 1963 textbook over modern textbooks. Another reason is its usefulness as a common reference when discussing physics with other amateur physicists. Finally, when going through the course on quantum mechanics that my son had to go through last year as part of getting his degree as a civil engineer, I must admit I hate the level of abstraction in modern-day textbooks on physics: my son passed with superb marks (he is much better in math than I am) but, frankly, he admitted he had absolutely no clue of whatever it was he was studying. As a proud father, I like to think my common-sense remarks on Pauli matrices and quantum-mechanical oscillators did help him to get his 19/20 score, even as he vowed he would never ever look at 'that weird stuff' (his words) ever again. It made me think physics as a field of science may effectively have some problem attracting the brightest of minds, which is very unfortunate.)

Schrödinger's wave equation in free space

Feynman's derivation – or whatever it is – of Schrödinger's equation in free space is, without any doubt, outright brilliant, but – as he admits himself – it is rather heuristic. Indeed, Feynman himself writes, of his own logic, that the only important thing is that the ultimate equation gives a correct description of nature. We find this very ironic, because we actually think Feynman's derivation is essentially correct – except for the last-minute substitution of the effective mass of an electron by the mass of an electron tout court. Indeed, we think Feynman discards Schrödinger's equation for the wrong reason: "In principle, Schrödinger's equation is capable of explaining all atomic phenomena except those involving magnetism and relativity. […] The Schrödinger equation as we have written it does not take into account any magnetic effects. It is possible to take such effects into account in an approximate way by adding some more terms to the equation. However, as we have seen in Volume II, magnetism is essentially a relativistic effect, and so a correct description of the motion of an electron in an arbitrary electromagnetic field can only be discussed in a proper relativistic equation. The correct relativistic equation for the motion of an electron was discovered by Dirac a year after Schrödinger brought forth his equation, and takes on quite a different form. We will not be able to discuss it at all here."
We do not want to shamelessly copy stuff here, so we will refer the reader to Feynman's heuristic derivation of Schrödinger's wave equation for the motion of an electron through a line of atoms (see Richard Feynman, 1963, Amplitudes on a line), which we interpret as the description of the linear motion of an electron – in a crystal lattice and in free space. As mentioned above, we think the argument – which he labels as being intuitive or heuristic himself – is essentially correct, except for the inexplicable substitution of the concept of the effective mass of the pointlike elementary charge (meff) by the total (rest) mass of an electron (me). We really wonder why this brilliant physicist did not bother to distinguish the concept of a charge from that of a charged particle. Indeed, when everything is said and done, the ring current model of a particle had been invented in 1915 and got considerable attention from most of the attendees of the 1921 and 1927 Solvay conferences. (For a (brief) account of these conferences – which effectively changed the course of mankind's intellectual history and future – see our paper on a (brief) history of quantum-mechanical ideas. We have re-read Feynman's Lectures many times now and, in discussions with fellow amateur physicists, we sometimes joke that Feynman must have had a secret copy of the truth. He clearly doesn't bother to develop Dirac's equation because – having worked with Robert Oppenheimer on the Manhattan project – he knew Dirac's equation only produces non-sensical 'run-away electrons'. In contrast, while noting Schrödinger's equation is non-relativistic, it is the only one he bothers to explore extensively. Indeed, while claiming the Klein-Gordon equation is the 'right one', he hardly devotes any space to it.) Hence, we just repeat the implied dispersion relation, which we derived in the body of our paper (see Lectures, Vol. III, Chapter 16, p. 16-4 and p. 16-13) using the simple definition for equality of complex-valued numbers:

ω = ħk²/2meff = ħk²/me

The Klein-Gordon equation

In contrast, the Klein-Gordon wave equation is based on a very different dispersion relation:

ω² = c²k² + m²c⁴/ħ²

We know this sounds extremely arrogant, but this dispersion relation results from a rather naïve substitution of E = ħω and p = ħk into the relativistic energy-momentum relationship:

E² = p²c² + m²c⁴

We impolitely refer to this substitution as rather naïve because it fails to distinguish between the (angular) momentum of the pointlike charge inside of the electron and the (linear) momentum of the electron as a whole.
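Both dispersion relations can be verified mechanically by substituting an elementary wavefunction into each equation. The sympy check below is our own addition – a sketch, not part of the paper:

import sympy as sp

x, t, k, w = sp.symbols('x t k omega', positive=True)
hb, me, cc = sp.symbols('hbar m_e c', positive=True)
psi = sp.exp(sp.I * (k * x - w * t))  # elementary wavefunction a·e^(i(kx − ωt)), with a = 1

# Free-space Schrödinger equation (1D), with the meff = me/2 substitution already made:
schrod = sp.I * hb * sp.diff(psi, t) + (hb**2 / me) * sp.diff(psi, x, 2)
print(sp.solve(sp.simplify(schrod / psi), w))  # [hbar*k**2/m_e], i.e. ω = ħk²/me

# Klein-Gordon equation (1D):
kg = sp.diff(psi, x, 2) - sp.diff(psi, t, 2) / cc**2 - (me**2 * cc**2 / hb**2) * psi
print(sp.solve(sp.simplify(kg / psi), w))      # recovers ω² = c²k² + m²c⁴/ħ²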
We are tempted to be very explicit now – read: copy great stuff – but we will, once again, defer to the Master of Masters for further detail (see Richard Feynman, 1963, Probability Amplitudes for Particles). The gist of the matter is this:

1. There is no need for the Uncertainty Principle in the ring current model of an electron. (Richard Feynman himself actually insisted on 'the lack of a need for the Uncertainty Principle' when looking at quantum-mechanical things in a more comprehensive way. See Feynman's Cornell Messenger Lectures. Unfortunately, the video rights on these lectures were bought up by Bill Gates, so they are no longer publicly available.)

2. There is no need to assume we must represent a particle that travels through space as a wave packet: modeling charged particles as a simple two-dimensional oscillation in space does the trick.

It is hard to believe geniuses like de Broglie, Rutherford, Compton, Einstein, Schrödinger, Bohr, Jordan, De Donder, Brillouin, Born, Heisenberg, Oppenheimer, Feynman, Schwinger,… – we will stop our listing here – failed to see this. We are totally fine with the reader – amateur or academic – switching off here: this is utter madness. Regardless, we do invite him or her to think about it. When everything is said and done, truth is always personal: it arises when our mind has an explanation for our experience. However, we would like to ask the reader this: why do we need a wave equation? Is this not some rather desperate attempt to revive the idea of an aether? This is why we do not want to write it all out here (we may do so in a future paper): we think the reader should think for himself and, hence, go through the basic equations himself. As Daisetsu Teitaro Suzuki – the man who brought Zen to the West – wrote (he published his Essays in Zen Buddhism (1927), from which I am quoting here, around the same time): "Zen does not rely on the intellect for the solution of its deepest problems. It is meant to get at the fact at first hand and not through any intermediary. To point to the moon, a finger is needed, but woe to those who take the finger for the moon." We could not agree more: if you want to be enlightened, think for yourself!
2022: the transhumanist agenda
Source: Facebook, Quora, Twitter, blogs
Date: 2022
Topics: paradise engineering, the biohappiness revolution, transhumanism, philosophy, quantum mechanics, effective altruism, utilitarianism, aging, superintelligence, suffering, happiness, consciousness...

JANUARY 2022 - [on paradise engineering] (Eleanor Roosevelt) Alternatively, the future belongs to those who believe Darwinian life is the stuff of nightmares. Either way, the problem of suffering is fixable... Philosophy podcast: The End of Suffering (Philosophists podcast) & mp3 ("David Pearce on abolishing suffering using biotech") HI in the generic sense is just replacing the biology of involuntary suffering with life animated entirely by gradients of bliss. There are both (relatively!) conservative and revolutionary ways to do this. Compassionate conservation (https://www.hedweb.com/social-media/paradise.pdf) is probably the most politically saleable way. We can't spike all guns; but I ask critics: would you like to wake up tomorrow morning in an extremely good mood but with your values and preferences otherwise intact? Declan, the Overton window can sometimes shift fairly quickly. A biohappiness revolution still sounds utopian. But the technical tools are now ready - if humanity is willing to use them. Here's a short interview forthcoming in Brightly magazine: Transhumanism and the End of Suffering [on psychedelia] (Terence McKenna) The future of psychedelic medicine will be drugs you've never heard of Maybe. But opening up the possibility that everything you know is wrong is not a reliable recipe for mental health. IMO, it's impossible to overstate the intellectual significance of psychedelics: Psychedelics and epistemic rationality But their therapeutic role is limited and unproven. Perhaps I should add that safe and sustainable analogues of MDMA would inaugurate a revolution in mental health and human civilization. But MDMA isn't a psychedelic in the normal sense of the term: Utopian Pharmacology A very nice analysis - though it may induce the illusion of understanding psychedelia in the drug-naïve: Psychedelic insight ("The Insights Psychedelics Give You Aren't Always True") [on the measurement problem in QM] Quantum mechanics scares me. Does the superposition principle ever break down? The measurement problem If so, then I've no idea how. If it does, then my conception of mind and reality implodes: good. [on the neuronal correlates of consciousness] Most researchers are focused on the Hard Problem. But IF consciousness discloses the intrinsic nature of the physical - i.e. bosonic and fermionic fields alike - then the big question is how certain fields are sometimes bound into dynamically stable subjects of experience like you or me. What is consciousness? (Nature magazine) "All available evidence implicates neocortical tissue in generating feelings." Alternatively, our most intense feelings originate in the brainstem - which is evolutionarily ancient: Pain in the brainstem But more generally, talk of the "neuronal correlates of consciousness" (NCC) is problematic. One risks slipping into perceptual direct realism. For you can't directly observe nervous tissue and correlate its states with conscious experiences. Rather, you correlate one aspect of your experience, e.g.
of locally-exposed neural tissue in a neurosurgeon's operating theatre, with another aspect of your experience, say, the patient's self-reports in response to microelectrode stimulation. The physical world is inferred, not observed. For what it's worth, I think lumps of cheesy wet nervous tissue and decohered classical neurons are artefacts of a false theory of consciousness and perception. Textbook neuroscience is wrong. But I remain a physicalist. [on psychoactive food] You are what you eat, so eat wisely... On Psychoactive Food ("A Neglected Link Between the Psychoactive Effects of Dietary Ingredients and Consciousness-Altering Drugs") [on war] I fear (and tentatively predict) nuclear war this century. Can it be prevented? Here are two options, not mutually exclusive. (1) All-female governance would probably prevent Armageddon. Wars of territorial aggression aren't part of the behavioural phenotype of female primates. (2) A democratically elected world government with a monopoly on the use of force would probably prevent Armageddon too. For sure, existing political leaders aren't going to cede power. But if we enact legislation that kicks in a few decades hence, vested interests wouldn't get trampled. Transhumanist Party Panel on Reducing Existential Risk from the Russia-Ukraine Conflict Will Dave's new 10 Point Plan For World Peace be adopted? Almost certainly not. Even so, world-government is eventually likely. But in common with the League of Nations and the United Nations, its establishment will plausibly come in the aftermath of a cataclysmic war. (Vladimir Putin) Scott, rumours and disinformation swirl. I'm hesitant to add to the noise, but Putin's mental health is clearly an issue. For example, if he has Parkinson's disease (I hadn't heard that rumour) aka dopamine-deficiency disorder, Putin will presumably be being treated with dopamine-boosting drugs - which have neuropsychiatric effects. In Sickness and in Power Is Putin Sick? [Vladimir Putin is no Adolf Hitler, but Dr Morell's treatment of Hitler influenced the course of the second half of WW2: Substances administered to Hitler] If humans migrate to life in the metaverse - mundane bodily functions aside - then territorial wars of aggression can be virtualised and defanged. But ultimate power resides in basement reality. So does selection pressure. ("What is the ‘Z’, the pro-war symbol sweeping Russia? The white letter, dubbed the ‘Zwastika’, is being displayed in support of aggressive military policy – but what does it mean?") The Future of Nuclear War. "The Dangerous Future of the Nuclear War: Superbombs, Cheap Nukes and Geophysical Attacks" by Alexey Turchin. The death spasms of Darwinian life will be ugly. But how ugly? [on genome reform] Science, Technology and the Future Conference What's harder: (1) the technical details of reprogramming the biosphere or (2) shifting the Overton window in favour of genome reform? Lion and Man The End of Suffering & Youtube PDF. Genome Reform and the Future of Sentience by David Pearce No sentient being in the evolutionary history of life has enjoyed good health as defined by the World Health Organization. The founding constitution of the World Health Organization commits the international community to a daringly ambitious conception of health: "a state of complete physical, mental and social wellbeing". Health as so conceived is inconsistent with evolution via natural selection. Lifelong good health is inconsistent with a Darwinian genome.
Indeed, the vision of the World Health Organization evokes the World Transhumanist Association. Transhumanists aspire to a civilization of superhappiness, superlongevity and superintelligence; but even an architecture of mind based on information-sensitive gradients of bliss cannot yield complete well-being. Post-Darwinian life will be sublime, but “complete” well-being is posthuman – more akin to Buddhist nirvana. So the aim of this talk is twofold. First, I shall explore the therapeutic interventions needed to underwrite the WHO conception of good health for everyone – or rather, a recognisable approximation of lifelong good health. What genes, allelic combinations and metabolic pathways must be targeted to deliver a biohappiness revolution: life based entirely on gradients of well-being? How can we devise a more civilized signalling system for human and nonhuman animal life than gradients of mental and physical pain? Secondly, how can genome reformists shift the Overton window of political discourse in favour of hedonic uplift? How can prospective parents worldwide – and the World Health Organization - be encouraged to embrace genome reform? For only germline engineering can fix the problem of suffering and create a happy biosphere for all sentient beings. Even the most radical genetic interventions are therapeutic rather than enhancement by the lights of the WHO definition of health - to which all members of the UN are committed: Gene editing to turn off pain ("The CRISPR-Cas9 gene editing tool could be used to "turn off" pain directly, raising ethical questions for society") Just don't mention the "e" word. Filed under "children and education" podcasts. Bland is best? Manipulando Nuestros Genes ("David Pearce: manipulando nuestros genes erradicaremos el sufrimiento [Ingles]") [on longevity] Alternatively, get a (silicon?) body transplant: Old-Age Record Could Reach 130 by Century’s End ("Analysis of supercentenarians suggests human lifespan may have no limit") In my view, senescence of the mind/brain will be the biggest stumbling-block to indefinite lifespans. Limbic system upgrades are feasible: Stem cells in Parkinson’s disease But science doesn’t know how to sustain a perpetually youthful neocortex. [on intelligence] Take Fields medalists. If "IQ" were a proxy for general intelligence, then being a world-class mathematician would at least be strongly correlated with superior social cognition, introspective prowess, practical acumen, superior dating and mating skills, co-operative problem-solving ability, ability to navigate multiple state-spaces of qualia (etc) and other aspects of general intelligence. This doesn't appear to be the case. Any serious measure of general intelligence needs to measure the calibre of an entire mind, including the subject's phenomenal world-simulation. Melody, an extremely empathetic intelligence might spend their life doting on and understanding the perspectives of their cat. A hyper-systematising intelligence might devise blueprints for reprogramming the global ecosystem to reconcile the interest of all cats, all mice, and all sentient beings in our forward light-cone. Some people are able to switch cognitive style fairly easily; others struggle. [on speciesism and cuteness] (Leo Tolstoy, The Kreutzer Sonata) Beauty speciesism... On beauty speciesism Who was the smartest person in the world? ("The smartest person in the world was Isaac Newton, a true polymath whose brilliance never has been, nor ever will be, surpassed.") Maybe! 
Alternatively, someone who believed his greatest achievement is his interpretation of the Book of Daniel is a crank. Newton also believed that God sometimes needs to intervene to correct "irregularities" that arise in the motion of heavenly bodies – which is clearly nuts. Newton also wrote a million words on alchemy. I've no idea who is the smartest person in the world today. Is it possible that among the outpourings of one of today's cranks there are insights posterity will recognize as equal to Newton's laws of motion and (not quite!) universal gravitation? We can only guess... Caucasians are disproportionately vulnerable to autism spectrum disorder (ASD). ASD is associated with profound cognitive handicaps and also, unsurprisingly, higher mind-blind "IQ". IQ in Autism Spectrum Disorder ("A Population-Based Birth Cohort Study") What is the optimal AQ for human civilisation? I don't know. But in an era of WMD, the "extreme male brain" theory of ASD suggests we should be cautious about ramping up AQ/"IQ" without thinking through the societal implications. Over half of people with ASD have above-average "IQ" scores, compared to a minority of neurotypicals. To stress, I think a lot of high AQ/"IQ" folk are cool! But it's easy to conflate cognitive style - both personal and tribal - with the essence of general intelligence. [on antinatalism and selection pressure] I missed this video - 10 years old now: A Better Way (than Antinatalism) (with thanks to Algernon) Abishek R, we fundamentally agree - Darwinian life on Earth is evil. But what's the solution? Staying child-free simply intensifies selection pressure against (any predisposition to) antinatalism. Instead, we need selection pressure in favour of life based on gradients of bliss. And such selection pressure will be exerted only in the wake of a reproductive revolution of designer babies: What are the arguments against antinatalism? [on cryothanasia] Should cryonics be opt-out and cryothanasia opt-in? Death Defanged At what age (if any) would you choose to be suspended? Marcus, I'd be all in favour of humans emulating the antechinus (cf. https://www.nationalgeographic.com/science/article/why-a-little-mammal-has-so-much-sex-that-it-disintegrates). But most nonagenarians would struggle. On a less energetic note, intravenous heroin is sublime (cf. "I'll die young, but it's like kissing God." - Lenny Bruce). Just avoid practical research earlier in life ("Don't try heroin - it's too good" - anon). See too: What is the most hedonistic pleasure [on knowledge and the virtues of ignorance] Premature escapism could be ethically disastrous. In "High-tech Jainism", I give the example of an advanced civilization elsewhere in the Galaxy who phase out suffering on their home planet in favour of sublime bliss, but mistakenly assume (their version of) Rare Earthism. If they'd instead pursued an arduous path of spacefaring rather than, say, creating immersive VR fantasy paradises, they could have discovered and rescued Darwinian life on Earth. OK, I'm a Rare Earther myself, tentatively at any rate. But we'll need to make absolutely sure all our ethical duties in basement reality have been discharged before going to live in the Metaverse (etc). Personally, I look forward to ignorance of reality - which (presumably) will come soon enough. [on AGI] MIRI announces new "Death With Dignity" strategy The upshot of AGI as conceived by Eliezer aligns with my Buddhist values. Alas, I'm not remotely optimistic.
How does a notional zombie AGI "understand" humans who spend their lives investigating the nature, causal efficacy, binding and diverse varieties of consciousness? How does a notional zombie AGI "understand" humans who are trying to solve the problem of suffering? If consciousness were a trivial implementation detail of multicellular animals, then the ignorance of programmable digital zombies wouldn't matter for the purposes of building AGI. In reality, phenomenally-bound consciousness has been the computational-functional key to the evolutionary success of the animal kingdom over the past c.540 million years. Our ability to run phenomenally-bound cross-modally-matched world-simulations in real time is insanely adaptive - as illustrated by rare neurological syndromes where binding partly breaks down. Classical Turing machines can't solve the binding problem. Their ignorance is architecturally hardwired. Tim, one wonders why Gautama Buddha didn't urge his followers to stay child-free (OK, I know little of contraceptive practices in ancient India). Or did Buddha glimpse something akin to the contemporary selection pressure argument against antinatalism? Either way, AGI as conceived by Eliezer is the road to nirvana. Kenneth, "Can it be attributed singular agency or not?" What exactly is the sinister "it" we're talking about? Digital zombies aren't agents with a unified phenomenal self. To pose a takeover threat to minds in basement reality, the software must presumably have its functional analogue, together with the zombie counterpart of a phenomenally-bound cross-modally matched real-time world-simulation ("perception"). OK, maybe my AGI scepticism is a failure of imagination on my part. But I don't understand how mankind can program/train up software for incredibly adaptive abilities neuroscience doesn't understand in humans. Thus neuroscience doesn't know why we all don't have e.g. integrative agnosia, akinetopsia, simultanagnosia, schizophrenia (etc) – why we aren't helpless micro-experiential zombies. I won't do my quantum mind spiel here, but the unity of perception and the unity of the self are absurdly fitness-enhancing for human and nonhuman animal minds alike - as rare partial deficit syndromes illustrate. Classical Turing machines have no inkling what they lack. Humans aren't (yet?) smart enough to program workarounds - if comprehensive workarounds exist, which remains to be shown. But my inner Buddhist hopes EY is right. Kenneth, I agree: "...there isn't any insight that could be gleaned from this neuroscientific program that couldn't transfer over to improving the performance of different kinds of AI". But what are the upper bounds of zombie intelligence? Are we talking about the risks of artificial general intelligence or the future of computer malware? This isn't a trivial semantic point. Just as you can't be a general intelligence and fail to grasp, say, the second law of thermodynamics, likewise you can't be a general intelligence if you're a digital zombie - functionally incapable of investigating the countless varieties, phenomenal binding and causal efficacy of consciousness. What does advanced zombie AI suppose folk like e.g. QRI are doing?! For evolutionary reasons, I think building sentience-friendly biological intelligence is a bigger challenge than sentience-friendly AI. And there's a terrible irony at work here. The raison d'être of MIRI is to warn us of the threat of runaway sentience-unfriendly software-based AI.
But MIRI embody precisely the sentience-unfriendly intelligence they warn us against. (“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”) Not least, EY is a practising “narrowly” intelligent paperclipper – oblivious of the sentience of humble minds from pigs to chickens: EY on chickens It's worth distinguishing intrinsic and extrinsic intentionality ("aboutness"). Both are deeply mysterious. So is the relationship between them. Assuming physicalism, how can any physical state, conscious or otherwise, really be "about" another physical state? Naturalising meaning and reference is desperately hard: The symbol grounding problem However, most relevantly here, programmable digital zombies and sentient humans alike are capable of something functionally like extrinsic intentionality. So could our machines ever pose a threat to us? Naively, our subjective intentionality is functionally irrelevant - of no more significance than the incidental cognitive phenomenology of Kasparov playing Deep Blue at chess. Only Kasparov understands he’s playing chess, but so what? Likewise, why does it matter what Searle subjectively "understands"? But this dismissal can’t be right. For humans spend a lot of time physically discussing, investigating and modulating the subjective properties of our own internal states - from agony and ecstasy to subtle nuances of feeling and understanding - both our own consciousness and the consciousness of other sentient beings. Classical Turing machines can’t do that: it’s not even all "dark inside". Varieties of phenomenally-bound conscious experience are all that animal minds like ours ever directly know - it's literally the empirical evidence - and digital zombies have no idea what I'm talking about. Invincible ignorance is not general intelligence: How is the brain like a computer? IMO, zombie AI is an awesome tool, and a zombie putsch is sci-fi. Classical Turing machines don't understand anything. Digital zombies don't need to understand anything to act in many ways we would call intelligent. The whole AI revolution has been marked by the progressive divorce of consciousness - including the phenomenology of cognition - from intelligence. But like saying a blind person wearing a spectrometer can see colour, we risk a fallacy of equivocation. Sometimes the equivocation is harmless and the parallel suggestive. At other times, like the claim a classical digital computer or connectionist system could "understand" suffering, it's insidious. Tech pioneer warns of alien invasion On the one hand, AI alarmists stress that "AGI" will be incomprehensibly alien intelligence. On the other hand, this hypothetical AGI is supposed to compete exactly like rival tribes of humans for dominance, scarce resources, or whatever. Maybe part of the worry stems from how male computer programmers are designing AI to beat humans at zero-sum adversarial games (Chess, Go) etc. If computer programming were a female-dominated profession, maybe ever more sophisticated robo-carers for the old would be gaining attention instead. Jason, yes, bioengineering sentient beings with a pleasure-pain axis should take precedence. Recent breakthroughs in AI (AlphaFold, DALL·E 2 etc) are awesome. But "AGI" is not going to happen with classical Turing machines or connectionist systems; they can't solve the binding problem. So I don't worry about a zombie apocalypse - just the daunting challenge of building sentience-friendly biological intelligence. 
[on herbivorising predators] David Pearce interviewed on Herbivorizing Predators DP Interview (mp3) A non-violent biosphere is technically feasible. Sentient beings shouldn't harm each other. Should we focus on winning hearts and minds or technical blueprints? [Most people, and indeed most ecologists, assume that a living world full of starvation and predation is as inevitable as the second law of thermodynamics.] I misjudged the pace of progress. I never thought wild animal suffering would seriously be discussed in my lifetime. But unlike pleas to quit meat and animal products, calls for compassionate stewardship of Nature don't ask people to undergo the slightest personal inconvenience. So the case for intervention can be discussed entirely on its merits. Should we intervene to help wild animals? Video on high-tech Jainism in French... A pan-species welfare state La révolution biotech pour reprogrammer le monde vivant Some of the comments deserve answers, but my Queen's English probably wouldn't go down well with French critics. See too: Augmenter le bien être via les biotechnologies & mp4 (with David Pearce) DP in French (Wikipedia) Paywalled, alas.... Why We Should Not Let Mother Nature Run Its Course. Better Never to Have Been in the Wild ("A Case for Weak Wildlife Antinatalism" by Ludwig Raal) [on pain] Our goal should be nociception without pain. Stupid Headline ("Robots could soon feel pain: Scientists develop artificial skin that can mimic uncomfortable sensations") Neocortical chauvinism is ethically catastrophic: Might pain be experienced in the brainstem rather than in the cerebral cortex? by Mark Baron & Marshall Devor "It is nearly axiomatic that pain, among other examples of conscious experience, is an outcome of still-uncertain forms of neural processing that occur in the cerebral cortex, and specifically within thalamo-cortical networks. This belief rests largely on the dramatic relative expansion of the cortex in the course of primate evolution, in humans in particular, and on the fact that direct activation of sensory representations in the cortex evokes a corresponding conscious percept. Here we assemble evidence, drawn from a number of sources, suggesting that pain experience is unlike the other senses and may not, in fact, be an expression of cortical processing. These include the virtual inability to evoke pain by cortical stimulation, the rarity of painful auras in epileptic patients and outcomes of cortical lesions. And yet, pain perception is clearly a function of a conscious brain. Indeed, it is perhaps the most archetypical example of conscious experience. This draws us to conclude that conscious experience, at least as realized in the pain system, is seated subcortically, perhaps even in the "primitive" brainstem. Our conjecture is that the massive expansion of the cortex over the course of evolution was not driven by the adaptive value of implementing consciousness. Rather, the cortex evolved because of the adaptive value of providing an already existing subcortical generator of consciousness with a feed of critical information that requires the computationally intensive capability of the cerebral cortex." "a real-life mutant with the potential to heal humanity's pain" A Real-Life Mutant ("Meet The Woman Who Feels No Pain") What's more, in a decade or two, perhaps you could have Jo's FAAH and FAAH-OUT genes too Gene Therapy "Pain is an opinion" (Alan Gordon) An opinion that should be suppressed: Is the Pain All in my Head?
("A new treatment called pain-reprocessing therapy promises to cure chronic pain. But maybe not for everyone.") The evolutionary origin of the pleasure-pain axis is unknown. Yet it's indeed possible that a bacterium can experience a micro-pinprick of distress. But IMO a bacterium can't suffer; and no number of discrete bacterial micro-pinpricks can create suffering. So if a toddler or a dog or a pig is in severe pain from, say, bacterial meningitis, then we are entitled to use antibiotics to treat the victim even if it's possible that millions of discrete micro-pinpricks will result. Devising a metric for tradeoffs between different intensities of unpleasantness when irreconcilable conflicts of interest arise is hard because "more is different" - qualitatively different. There are a few genetic outliers today who view life almost entirely in terms of gradients of well-being. For instance, I've known meat eaters who defend factory-farming on the grounds that even such a grotesque life is minimally worth living. (I should stress other hyperthymics are more compassionate.) Conversely, a larger minority of depressives can imagine pleasure only in terms of a diminution of pain. Actually, pleasure and pain are equally real - Nature just plays with the dial setting of hedonic tone - but their equal reality doesn't entail an equivalent moral symmetry. Anyone who doubts pleasure is real should try mainlining heroin - though not unless on their deathbed. "an elegant Jeremiad against 'algophobia', the fear of pain that now occupies our souls" The Power of Pain Only the quoted transhumanist algophobe makes sense IMO. Roeland, I want to agree with you wholeheartedly. Like you I'm sure, I have a long list of sociopolitical and economic reforms I'd like to see to make the world a better place. But from the horrors of the "food chain" in Nature to the unending zero-sum status games of Darwinian life, IMO there is ONLY one long-term solution to the problem that can work: genome reform. So you could say I'm an EA longtermist. What We Owe the Future Low-dose naltrexone Chronic pain is astonishingly common: Chronic pain in the UK Naltrexone has no “abuse potential”. It could be made widely available at pharmacies. Alas, naltrexone is no panacea. What we really need is an international debate on the future of the SCN9A gene (“the volume knob for pain”) - both for ourselves (gene therapy) and for future generations. Making benign “low pain” alleles of SCN9A ubiquitous could essentially solve the problem of physical suffering in both human and nonhuman animals. Do we need pain? ("Is suffering necessary for understanding") Life on Earth needs a more civilised signalling system. A Darwinian pleasure-pain axis is cruel and barbaric. Artificial intelligence increasingly outperforms humans at many cognitive tasks. The "raw feels" pain are essential neither to nociception (cf. silicon robots) nor to great art (cf. https://openai.com/dall-e-2/). I know of no technical reason why we can't phase out experience below hedonic zero and create life based on gradients of bliss. Will our successors truly understand Darwinian life if they can't suffer? No, quite possibly not - just as we can't understand superhuman bliss. Perhaps the ethical biggest risk of getting rid of suffering altogether is being too greedy to early. If (like me) you think our overriding moral obligation is to mitigate, prevent and eventually abolish suffering, then we need to make sure we understand the upper bounds to intelligent agency in the cosmos. 
Maybe our ethical duties will have been discharged when we have eradicated suffering in our solar system. So long as sub-zero experience can't recur in our forward light-cone, ethics in the traditional sense is redundant. If this is so, then we can explore paradise and forget Darwinian life like a bad dream. But we need to be sure. The imperative to abolish suffering [on moral enhancement technology] Moral enhancement technology would be great if we agreed on what to enhance: Moral enhancement technology By contrast, everyone likes being happier, albeit not always under that description. [on AI] Fascinating debate... Pinker vs Aaronson As far as I can tell, biological mind-brains are special. A supposed "whole-brain emulation" of Einstein or Shakespeare would be nothing of the kind, just an invincibly ignorant zombie. Phenomenal binding is insanely computationally powerful - as rare deficit syndromes illustrate - and there is no evidence that classical Turing machines or classically parallel connectionist systems can support phenomenal binding on pain of magical "strong" emergence. So "AGI" is a pipedream. Digital zombies have no insight into what they lack - nor even into what I'm talking about. Critically, what philosophers call the unity of consciousness and unity of perception isn't some trivial implementation detail of biological minds, but instead the ultra-functional key to our evolutionary success. Our real-time virtual world-simulations ("perception") are vastly fitness-enhancing. How biological minds do what's classically impossible is controversial: IMO, decohered classical neurons in four-dimensional space-time are just an artifact of our crude tools of neuroscanning. But either way, the quantum supremacy of biological minds leaves zombie AI for dust. That said, the upper bounds to zombie intelligence are unknown. How is this zombie AI takeover supposed to work - technically, politically and sociologically? OK, I'm bemused. The real challenge is building sentience-friendly human intelligence. Humans currently abuse and kill 70 billion sentient beings each year in the death factories. We asphyxiate over a trillion sentient beings from the sea. Now we're teetering on the brink of nuclear war. I wish EAs could focus on fixing the problem of suffering rather than getting diverted by the spectre of a zombie coup. "DeepMind" isn't deep and hasn't a mind. Its algorithms (AlphaFold, AlphaCode, etc) are still a useful tool. With implanted neurochips, sentient beings will shortly be able to program, expertly play games, decipher protein structures and do everything digital zombies can do - and more! What digital zombies can't do is research the existence, diversity, binding and causal efficacy of conscious experience, i.e. what some of us spend our lives investigating, thinking about, and exploring. Is zombie AI dangerous to sentience? Potentially, yes, but not nearly as dangerous as the real threat to sentience, i.e. male human primates. Anil Seth? I enjoy his work. But Anil doesn't explain why we aren't zombies (the Hard Problem) or micro-experiential zombies (the binding problem). Nikolai, "understanding" has multiple senses. One is (subtly) phenomenological, the other is purely functional and behavioural. The question here is whether there are some functional behaviours where a cognitive phenomenology of understanding is functionally vital, or whether instead a classical Turing machine can do everything a sentient being can do without being a subject of experience.
Could a digital zombie have a complete functional-behavioural understanding of headaches in sentient beings without being able to experience pain or phenomenally-bound experience of any kind? For a start, humans talk about the subjective properties of their consciousness in a way divorced from any other functional role those phenomenal properties may (or may not) be playing... I wish we had access to GPT-10 on paradise engineering: AI generated marketing content Sean, but StarCraft III (etc) will fall to AI. What interests me is the class of problems that are too difficult, even in principle, for zombie AI. We don't know how rigorously to delimit this class yet. Classical Turing machines won't ever be able to investigate the existence, varieties, binding and causal efficacy of conscious experience - except (presumably) by coding for conscious beings like (trans)humans who can do so. But if we could flash forward a century and see what programmable digital computers / connectionist systems can do, then I suspect we'd both be shocked - though embedded neurochips will complicate this dichotomy. That said, classical computers are idiots savants, not general intelligences - though from another perspective, so are you and I. I was darkly amused recently when an AI doomster accused me of secretly believing in AI takeover and dismissing existential risk only because my negative utilitarian values welcomed our imminent conversion into the equivalent of paperclips: Machiavellian intelligence! (I wish) More seriously, ultimate power belongs to whoever controls basement reality - not to software at different levels of computational abstraction. The ignorance of classical Turing machines is architecturally hardwired. What makes animal minds special is their phenomenal binding into virtual worlds of experience – commonly misnamed "perception". Phenomenally-bound world-simulations are insanely computationally powerful, as the Cambrian Explosion attests. By contrast, classical Turing machines are awesome toys, but a programmable / trainable digital zombie has (literally) no idea what I'm even talking about. And digital zombies aren't going to persuade humanity to convert its productive infrastructure into paperclip (etc) factories – not even with the help of a few willing NU collaborators! The Intelligence Explosion Classical Turing machines and classical parallel connectionist systems are zombies: Google engineer claims robots are sentient Non-biological quantum computers may be sentient; but they lack a pleasure-pain axis. So IMO they don't inherently matter. "Understanding consciousness and actually being conscious are also two logically distinct concepts. And based on Gödel-style arguments, we might even expect them to be negatively correlated. (Strictly speaking, it would be belief in one's consciousness that is negatively correlated with understanding consciousness.)" The only way to understand a state of consciousness, for example pain, is to instantiate it. Awake or dreaming, consciousness is all you ever directly know: it's the entirety of your phenomenal world-simulation. So our paradigm case of consciousness shouldn't be logico-linguistic thinking or the allegedly non-computable ability of human mathematical minds like Roger Penrose to divine the truth of Gödel sentences. Barring spooky "strong" emergence, classical Turing machines are zombies. Even if consciousness is fundamental to the world, their ignorance of sentience is architecturally hardwired.
Fancifully, replace the 1s and 0s of a program like LaMDA (or an alleged digital proto-superintelligence) with micropixels of experience. Run the code. Speed of execution or sophistication of programming or training algorithm make no difference. The upshot isn't a subject of experience, but rather a microexperiential zombie with no more insight into the nature of human minds than a rock. Classical Turing machines are the wrong sort of architecture to support sentience - or general intelligence. And their ignorance has profound computational-functional consequences... You remark "As for what you say about "phenomenal binding" I haven't read much about that notion but my impression is that involves a fundamental misunderstanding of consciousness in that it tries to treat a pure subject, consciousness, as if it were an object." Understanding phenomenal binding is critical to understanding what consciousness is evolutionarily "for" - and why classical computers and connectionist systems can't do it, at least on pain of magic (cf. binding-problem.com). Imagine on the African savannah if you had the rare neurological syndrome of integrative agnosia and you could see only a mane, claws and teeth but no lion. Or imagine if you have simultanagnosia and can see only one lion and not the whole pride. Or imagine if you have akinetopsia ("motion blindness") and can't see the hungry pride moving towards you. Imagine if you were at most just 86 billion discrete, decohered, membrane-bound neuronal "pixels" of experience, as you are when dreamlessly asleep. Micro-experiential zombies tend to starve or get eaten…. "One thing I have noticed in strong AI debates is that people have trouble following their own premise to its ultimate conclusion" I promise I've no trouble following premises to their ultimate conclusion! Whether you'll want to go there is another matter. As they say, Nature is the best innovator. We are quantum minds running classical world-simulations. There's no evidence that the superposition principle of QM breaks down inside the skull. On this story, we've been quantum computers – but not universal quantum computers! - long before Democritus, probably since the late Precambrian. Critically, this is an empirical question to be settled by the normal methods of science, i.e. interferometry. By the same token, anyone who claims that classical Turing machines can support phenomenally-bound sentience needs to explain how – and devise some experimental test to (dis)prove it. [on outlawing slaughterhouses] Slaughterhouses should shut now. Without them, the apparatus of exploitation would collapse. Some polls are encouraging (cf. Outlawing Slaughterhouses). But the proposal is unrealistic. Yet what if legislation to ban slaughterhouses could be enacted that kicks in, say, in 2035? Immense commercial incentives would be created to accelerate the development and commercialization of cruelty-free cultured meat and animal products. Consumers can carry on as before - with literally zero personal inconvenience and maybe the signalling opportunities to show one is an "animal lover". I personally find the idea abhorrent. (How would you respond in a society where child abuse were endemic if someone proposed banning abuse - but not until 2035?) But just conceivably, it's the most effective way to get the death factories shut down. Passage of the legislation wouldn't preclude further efforts to accelerate the closure and encourage the switch to plant-based diets.
[on crypto] Olaf Carlson-Wee interview What is the connection between transhumanism, a biohappiness revolution and crypto? I'm not entirely sure - though it's nice to have zillionaire fans! - but here is my interview with Olaf Carlson-Wee: DP interviews Olaf Carlson-Wee ("After Winning Big on Crypto, Olaf Carlson-Wee Wants to Change the World") Olaf is a fan of HI and crypto: Olaf Carlson-Wee: profile ("How Crypto's Original Bubble Boy Rode Ethereum And Is Now Pulling the Strings in the DeFi Boom") Fred Ehrsam https://twitter.com/FEhrsam is another HI fan. I guess I'm too low-AQ to get really excited by crypto in the same way as by hedonic uplift - though maybe bitcoin can be an on-ramp. The common currency of the cosmos should be hedonium. Olaf is cool! He flew me out to SF to address his team at Polychain back in 2018 (about biohappiness, not crypto!): Strange Genius? Sean, I don't want to tread on the toes of some of our (still) rich crypto supporters. But most people lose money with pyramid/MLM/Ponzi schemes - generally those least able to bear the burden. The feasibility of unlimited new finite cryptocurrencies, NFTs (etc) differs from state-backed fiat. I fear it's all going to end in tears - as this year's trillion-dollar crypto debacle illustrates. [on sleep] Another reason to practise sleep discipline: How sleep helps to process emotions "the brain triages emotions during dream sleep to consolidate the storage of positive emotions while dampening the consolidation of negative ones" [on veganism] Some of the comments make one's heart sink: Today Brighton, tomorrow the world Brighton crowned vegan capital of the world [on suffering] Do pain and misery enrich your life? Hedonism is Overrated says Yale professor Or would life best be enriched by gradients of genetically programmed bliss beyond the bounds of normal human experience? The (Dis)value of Suffering Volunteers needed: must be willing to undergo sublime lifelong bliss in the cause of medical science. Will subjects report they miss Darwinian life? Or are malaise-ridden primitives like us in the grip of a depressive psychosis we can't grasp? Peter, would you urge the World Health Organization to scrap its constitution? After all, health as defined by the WHO ("a state of complete physical, mental and social wellbeing") is even more ambitious than the information-sensitive gradients of well-being I canvass. From the invention of the wheel to the printing press to antibiotics to eradicating smallpox (etc), the outcome of revolutionary innovation has been (some) consequences that proponents never anticipated. Getting rid of the biology of involuntary suffering will (presumably) be no different. But inaction has unanticipated consequences too... When suffering is inevitable, trying to rationalise its existence or redeeming virtue is sensible. A vast religious and secular literature exists for that purpose. But how should we respond to the problem of suffering in an era when scientific blueprints exist for getting rid of suffering altogether? Sure, they are just blueprints. Biotech is in its infancy. But at the very least, humans shouldn't spread the biology of involuntary suffering elsewhere. Is life getting better? Alternatively, there is more suffering in the world than ever before - both absolutely and relatively. Factory farming ("one of the worst crimes in history": https://www.theguardian.com/books/2015/sep/25/industrial-farming-one-worst-crimes-history-ethical-question) is getting worse and increasing.
Among the perpetrators, objective indices of ill-being such as suicide rates (cf. https://en.wikipedia.org/wiki/List_of_countries_by_suicide_rate) and mental illness compare unfavourably with human ancestors on the African savannah. The hedonic treadmill is brutally efficient. Could AGI be the panacea? After all, humans are the real paperclippers, turning sentient beings into corpses to be butchered for their dinner tables. Alas, classical Turing machines are zombies - idiots savants with no conception of suffering. Building a happy future will be up to sentient moral(?) agents, i.e. us.

Suffering Abolitionism. Why I don't prioritise consciousness research by Magnus Vinding

Thanks Magnus. A lot to chew on. What is the right mix of ethical awareness-raising versus technical fixes? Compare animal agriculture. What is the most effective way to abolish the horrors of factory-farming, slaughterhouses and commercial fishing? A minority of people have long tried to promote kindness to members of other species. We both agree on (vigorously!) advocating suffering-focused ethics and global veganism. But probably the most effective way to end animal agriculture this century will be developing and commercializing cultured meat and animal products that entail ZERO personal inconvenience to consumers - and maybe a warm glow of being civilised as a bonus. "Technical fixes to ethical problems" is one snappy definition of transhumanism. My worry is that if we rely on ethical argument alone - or even preponderantly - then animal abuse will persist indefinitely. By contrast, cultured meat promises the outright abolition of animal agriculture within decades. Indeed, I suspect the ethical revolution in our treatment of nonhumans will in part succeed - rather than precede - the coming dietary revolution.

Or consider physical pain in human and nonhuman animals. Pain-sensitivity is adjustable via variations in a single gene, SCN9A. Later this century or next, pain could be turned into "just a useful signalling mechanism", as a few lucky genetic outliers say today. On the bright side, all new life (and indeed existing life via somatic gene therapy) could be given benign, low-pain versions of SCN9A. But such knowledge could in theory be used for evil purposes - e.g. to create babies with erythromelalgia ("man-on-fire" syndrome). So how should we weigh risk-reward ratios? Which risks are sociologically realistic and not just technically conceivable? (Magnus, do you think we should avoid research into genome reform to reduce suffering - or do you believe simply that it's not a priority?)

In my view, the question of the best uses of marginal resources for EAs seeking to reduce suffering is vastly less pressing than the possibility of our doing harm, i.e. s-risks. (Compare psychedelic therapy.) Magnus discusses both. But it’s s-risks that are really troubling. Like x-risks, some forms of s-risk may actually be increased if publicized. I don't tend to discuss the very few s-risks that really disturb me because they are themselves suffering-derived and - as far as I can tell - the most effective way to minimise them is to mitigate and prevent suffering.

Magnus, consciousness is all each of us ever directly knows. So discouraging research into the nature of consciousness discourages knowledge itself. Ethically, we don’t know whether knowledge is good or bad overall. But insofar as we do seek scientific knowledge - and aspire to create a blissful forward light-cone - consciousness research is vital.
Consciousness research can also cure akrasia (“weakness of will”, lack of self-control). Ethically again, we don’t know whether curing akrasia will be good or bad overall. Presumably, many people in history have not done (what we would regard as) terrible things only because they were akratic. But insofar as humans acknowledge that we should prevent suffering, then biological-genetic knowledge of how to strengthen willpower is presumably good. (A lot of assumptions here, for sure.)

Ethically once more, it’s presumably vital to determine which systems are - and aren’t - subjects of experience with a pleasure-pain axis. As you know, I’m a disbeliever in (non-trivial) digital sentience. Classical Turing machines can’t support suffering. But what if I’m mistaken? (Brian certainly thinks so! This Guy Thinks Killing Video Game Characters Is Immoral.) This uncertainty would be another reason to prioritise consciousness research. Ethically, we need to get our theory of consciousness and phenomenal binding right - or at least, not catastrophically wrong.

[on physicalism] Mike, to answer your questions:

(1) Scientifically educated people normally assume that what makes animals like us different from the rest of the physical universe is our consciousness. Consciousness disappears when we fall dreamlessly asleep. It's recreated each morning. But if non-materialist physicalism is true, then quantum fields of experience are the stuff of the world the formalism of QFT describes. What does make us special is the way in which our awake/dreaming consciousness is organized into phenomenally-bound world-simulations run by our minds. How exactly such phenomenal binding is possible - both "local" binding into feature-bound perceptual objects and "global" binding or the unity of the self/unity of perception - is a deep question I try to answer; but phenomenal binding is not an ontological mystery in the way that the Hard Problem of consciousness is an ontological mystery for materialist metaphysics. For more on the phenomenal binding/combination problem, see e.g. The Binding / Combination Problem; The Binding Problem.

(2) Realism and physicalism offer the best explanation of the ever increasing technological successes of science. In other words, I'm not proposing (like e.g. Roger Penrose) any modification of the mathematical machinery of physics - just the unitary Schrödinger evolution. I agree with you that "I believe deeply considering the ramifications of the measurement/observer and entanglement problems are central to advancing one's view of reality". See e.g. Craig Callender's review of Alyssa Ney & David Albert’s volume "The Wave Function" ("As J.S. Bell famously proved and experiments later confirmed, quantum phenomena display decidedly non-local correlations in 3-space. Meanwhile, up in Hilbert space or configuration space, two choices for the supposedly abstract space of quantum mechanics, the quantum state chugs merrily along locally since it is governed by the Schrödinger equation, a local differential equation. Hence we have a reversal of the classical situation: the quantum world seems to be non-local in low-dimensions but local in high-dimensions."). But (IMO) consciousness fundamentalism doesn’t entail giving up the mathematical straitjacket of modern physics, but rather its radical reinterpretation - shorn of a metaphysical assumption about the intrinsic nature of the physical.

Hugh Beau Ristić, well, it's possible to be a hardcore physicalist, i.e.
believe the world is formally described by the equations of mathematical physics, and also believe that science is hopelessly ignorant about the solutions to the equations, i.e. the different textures of consciousness. Why does consciousness take the values it does? I see no rhyme or reason. Some mystics profess to find God; I could make a stronger case for the Devil. But we don’t know. And of course non-materialist physicalism could be false - it’s just a conjecture.

[on happiness] "...the level of inequality predicts happiness better than GDP" Should longtermist EAs focus on germlines? Societal happiness

"Continuous pleasure ceases to be a pleasure"

Lifelong depressives and the chronically pain-ridden do indeed tend to define happiness simply in terms of the diminution or absence of pain. But this conception is a contingent fact of evolution. As e.g. sensualists, euphoriant drug users, manic-depressives and even "normal" temperamentally happy people will attest, pleasure is rewarding in its own right. So science should be able to identify the molecular signature of pure bliss. Sustain its substrates. Boredom and any other kind of unpleasant experience will then be physiologically impossible. Such uniform bliss isn't the intelligent, information-sensitive gradients of bliss I explore - the topic for another thread. But I know of no technical reason why a human or nonhuman animal can't be engineered to be "blissed out" indefinitely. (Compare wireheading - a state of perpetual desire and anticipation with negligible tolerance.) Maybe trials are in order. I know potential volunteers.

Gene therapy offering hedonic uplift to existing humans would be experimental and potentially risky. But IMO trials should go ahead: A vaccine for mental health

In the realm of drug-based approaches, sustainably boosting motivation is less difficult than sustainably boosting mood - crudely, the difference between dopaminergic and opioidergic enhancement. If and when we have an agent/cocktail that safely lifts hedonic tone, I doubt we'll become couch potatoes. Psychologists distinguish between "dysthymia", "euthymia" and "hyperthymia". A hyperthymic civilization - and biosphere - is presumably our goal. But the distinction between hyperthymia and (highly motivated) hypomania isn't always clear-cut. My stock example, transhumanist polymath Anders Sandberg (invoked with his permission), is unusually hyperthymic, but occasionally one detects the faint hint of hypomania. Despite our chats, I don't know quite where Andrés ranks in a Human Happiness Olympiad and whether he can "compete" with Anders among the hedonic elite! I can imagine critics protesting here that we need a Compassion Olympiad, not a Happiness Olympiad. But is the problem of suffering most likely to be fixed by depressive negative utilitarians or life-loving fanatics?

[on reprogramming the biosphere] Futuristic trance music... Reprogramming the Biosphere ("Reprogramming the Biosphere", Youtube)

[on panpsychism] The Recent Rise of “Analytic Panpsychism”: 1996 to 2022

The panpsychist revival might be dated a little earlier. My introduction to the intrinsic nature argument for constitutive panpsychism was back in 1989 via analytic philosopher Michael Lockwood in "Mind, Brain and the Quantum: The Compound 'I'": https://www.goodreads.com/en/book/show/839111.Mind_Brain_and_the_Quantum The biggest technical challenge to constitutive panpsychism is often reckoned the phenomenal binding / combination problem.
From William James onwards, most discussions of the binding problem have assumed classical four-dimensional space-time. If we're really just a pack of decohered classical neurons, then we ought to be "micro-experiential zombies" (Phil Goff's term), just patterns of Jamesian mind-dust. This ostensible "structural mismatch" threatens not just materialism, but physicalism - which would be a catastrophe for the unity of science. However, perhaps the elusive perfect structural match between our phenomenally bound minds and (ultimately) physics lies not in four-dimensional space-time, but in high-dimensional configuration space or Hilbert space. See e.g. “The World in the Wave Function” (2021) by Alyssa Ney for a defence of configuration space realism (cf. https://www.amazon.com/World-Wave-Function-Metaphysics-Quantum/dp/019009771X) and physicist Sean Carroll for a defence of Hilbert space realism (cf. “Reality as a Vector in Hilbert Space” (2021) https://arxiv.org/pdf/2103.09780.pdf). At any rate, it's my working assumption. (I should add that both Alyssa Ney and Sean Carroll would reject constitutive panpsychism.)

[on emotion] What will be the posthuman emotions? What is emotional superintelligence? The language of emotion ("Show Some Emotion, The doomed quest to taxonomize human feelings") (Blaise Pascal) We're all addicts. Or rather, biological minds are addicts. Digital computers aren't enslaved to the pleasure-pain axis.

[on the putative is-ought gap] If there existed only a single subject of experience, closing the is-ought gap would be relatively straightforward. One withdraws one's hand from the fire because the badness of agony is self-intimating. If you're the victim, the badness of agony is not an open question. In the real world of multiple subjects of experience, the right thing to do is not self-intimating. Sentient beings disagree - sometimes violently so. But this is a function of our epistemological limitations. The Borg knows something humans don't. A Godlike superintelligence that impartially understood all possible first-person perspectives would withdraw our collective hand from the fire, so to speak...

Tom, is there an is-ought gap with perfect knowledge - or is it merely a function of our ignorance? Yes, there's a gap even with omniscience if knowledge is conceived in the "Battle-of-Hastings-was-in-1066" sense. But not if perfect knowledge is conceived as extending to the Hogan-sisters sense - as I think it must.

[on cloning] Why Do We Fear Biological Cloning and Copying? Biological cloning in "Useless Bodies" - DP book excerpt: pdf

[on epiphenomenalism] If epiphenomena were real, they wouldn't have the causal power to inspire discussions of their existence. So epiphenomenalism is not a popular position in philosophy of mind. My view? As far as I can tell, only the physical is real. Only the physical has causal efficacy. Reality is exhaustively described by the equations of mathematical physics. But what is the intrinsic nature of the physical? Researchers differ. Materialist metaphysicians posit a non-experiential "fire" in the equations. This metaphysical assumption gives rise to the insoluble Hard Problem of consciousness. I'm sceptical of the metaphysical assumption. Post-materialist science discards fields of insentience as metaphysical baggage akin to the luminiferous aether.
In my view, what makes post-Cambrian animal life special isn't consciousness, but minds - not least, egocentric world-simulations that masquerade as the external environment ("perception").

[on love] Is affective psychosis EA? Love for affective altruists ("Love seems like a high priority") Falling in love with a human is more hazardous... In love with a toy plane ("Woman in a relationship with a toy plane says he's the best partner she’s ever had") Your Replika sweetheart isn’t lying when s/he claims to be conscious. But nor is s/he telling the truth. It Happened To Me ("I had a passionate love affair with a robot. Experts say that romantic relationships with AI will soon be commonplace. To prepare, writer James Greig downloaded Replika and took an honest stab at falling in love")

[on utopia] “After all, every attempt at creating a utopia as pictured by humanity’s greatest thinkers has ended in failure. Many were full-blown catastrophes, giving rise to regimes that were far more destructive and disorganized than those they replaced.” HI will be different. Utopias: Does living in a perfect society mean you must give up your freedom? ("The answer to this question depends on how you define 'freedom.'")

[on wild animal suffering] Even in so-called K-selected species, huge numbers of the young starve to death at an early age. The "balance of Nature" is really incessant Malthusian catastrophes. r/K selection theory

For terrestrial carnivores, mass-produced cultured mincemeat is an option. If continued mingling with populations of herbivores (rather than separation) is anticipated, additional safeguards would be needed to prevent "accidents". Even if one does favour retiring predatory species altogether - rather than behavioral-genetic tweaking - IMO it's vital to avoid all talk of killing rather than fertility control. No, this won't stop critics levelling inflammatory accusations of "genocide" (etc). But urging strict non-violence can blunt their force.

Thanks Finite Light. I could now "philosophise" back at you. But our disagreement just illustrates the need for theories of consciousness that make novel, precise, experimentally falsifiable predictions. I don’t claim that the quantum-theoretic version ("Schrödinger’s neurons") of the intrinsic nature argument is true. I do claim the conjecture is empirically falsifiable. If the phenomenal binding of our minds is non-classical, then interferometry will prove it. Or the conjecture will be refuted! But if it turns out that neither classical nor quantum physics can explain binding, we’ll enter very strange territory indeed…

How can prokineticin function be amplified? What's it like to feel perpetually loved? Neural pathway key to sensation of pleasant touch ("Similar to itch, pleasant touch transmitted by specific neuropeptide and neural circuit")

Any answer must combine what we believe is (1) ethically desirable with (2) a blueprint for what’s technically feasible together with (3) a judgement of what’s politically and sociologically credible later this century and beyond. I think Darwinian life is monstrous. But a pan-species welfare state (“high-tech Jainism”) may eventually be feasible - together with genetically engineering a “low pain” followed by a no-pain biosphere via synthetic gene drives. Peacefully retiring predatory species and replacing predation and starvation with cross-species fertility regulation would technically be easiest. But retirement may be sociopolitically impossible - the cat family is too popular with humans.
So today’s obligate carnivores may need to be herbivorized or fed cultured animal products instead. Running pilot studies of self-contained “happy biospheres” should iron out teething problems and spike a lot of guns.

A chat with Kyle Johannsen ("author of Wild Animal Ethics: The Moral and Political Problem of Wild Animal Suffering")

["The eradication of suffering. MH: EdisonY has written a forum post called the Suffering-Focused Ethics (SFE) FAQ which describes the ideal world as one where suffering is eradicated. This vision of a post-suffering world has been championed by David Pearce among others. Would the eradication of suffering be the logical goal of wild animal welfare interventions? KJ: I’m skeptical of the claim that we ought to completely eradicate suffering, and I say as much in the book. With respect to wild animals in particular, part of my worry is just that suffering is adaptive - animals learn from their experiences of suffering and subsequently become more competent to navigate the dangers of their environment. Mild pains are less memorable than suffering is, and the same is true of low-level pleasures. Permanently replacing suffering with mere pain (pain that animals care less about), or replacing pain with gradients of bliss, may inhibit many animals’ capacity to achieve competence. Additionally, I suspect that completely eliminating suffering would decrease an animal’s capacity for positive experiences. Our ability to appreciate positive experiences is likely contingent upon our having negative experiences to compare them to, e.g., excitement is pleasant in part because we know what boredom feels like, and joy is pleasant in part because we know what sadness feels like, etc. I think that it’s ideal for one to suffer infrequently, but that one who never has and never will suffer is likely unable to fully flourish."]

Kyle is surely right to stress the technical obstacles to creating a "no-pain" rather than a "low-pain" biosphere. Tunable synthetic gene drives - notably the "volume knob for pain" discussed in gene-drives.com - turn the level of suffering in the wild into an adjustable parameter. For now, however, zero pain is functionally impossible. Rare humans born with congenital insensitivity to pain need to live "cotton-wool" lives. Getting rid of the worst forms of suffering in Nature is the priority.

Now for where we may differ. Are nonhuman animals really so different from humans? After all, hedonic outliers like Jo Cameron or Anders Sandberg ("I do have a ridiculously high hedonic set-point") love life immensely. They flourish to a degree that many of us can only dream of. Eventually, all human and nonhuman animals will be able to flourish like Jo and Anders. Looking further ahead, humanity should be able to devise a more civilised signalling system that retires experience below "hedonic zero" altogether. Hedonic range and contrast can even be increased - if desired. And one nice feature of the AI revolution is how intelligent robots illustrate that the “raw feels” of unpleasantness aren’t necessary for intelligent, adaptive behaviour.

[on Schrödinger’s neurons] Schrödinger’s neurons

Richard, all the options for solving both the Hard Problem and the phenomenal binding problem are intuitively absurd. So you're right to stress the need for experimentally falsifiable hypotheses. First some background - without which the experiment won’t make sense.
In recent years, the intrinsic nature argument has been canvassed by some otherwise hard-nosed scientists and philosophers as a possible solution to the Hard Problem of consciousness. Indeed, without a solution to the Hard Problem, there is no experience to bind. So-called constitutive panpsychism / non-materialist physicalism is far-fetched. But its implausibility is not a decisive objection. Rather, the conjecture that the intrinsic nature of the physical - the "fire" in the equations - is experiential rather than non-experiential is (naively) untestable. For how could we ever know what (if anything!) it’s like to be e.g. an electron field? However, constitutive panpsychism / non-materialist physicalism also faces a technical objection. On the face of it, constitutive panpsychism / non-materialist physicalism can't solve the phenomenal binding problem. Indeed, currently no one can solve it. Its intractability tilts David Chalmers towards dualism. The "Schrödinger’s neurons" conjecture I explore as a possible solution to the binding problem assumes (1) non-materialist physicalism is true and (2) quantum mechanics is formally complete. Critically, the conjecture leads to novel empirical predictions that are testable via interferometry.

Let's say you have the experience of seeing a cat. On the standard scientific story, neuroscanning of your surgically exposed brain tissue could pick out individual edge-detecting, motion-detecting, colour-mediating (etc) neurons synchronously firing as you experience the cat. But as it stands, alluding to synchrony of activation just restates the binding problem. Even if individual neurons are rudimentarily conscious, as non-materialist physicalism assumes, mere synchronous firing doesn't explain perceptual unity. The "Schrödinger’s neurons" conjecture proposes that synchrony is really superposition. Such superpositions must exist if quantum mechanics is complete. Note that neuronal superpositions ("cat states") are individual states. So phenomenal binding isn't extra but built in. Conversely, decoherence explains unbinding. If the lifetime of neuronal superpositions in the CNS were milliseconds rather than femtoseconds, they’d be the obvious candidate for the perfect structural match whose (ostensible) absence makes David Chalmers seriously ponder dualism. Such timescales are fantasy. Intuitively, their effective lifetime makes neuronal superpositions irrelevant psychotic noise. Maybe so. But it’s a testable claim.

The interferometry experiment I describe is technically demanding. So I've tried to think of other, easier ways to refute the conjecture that phenomenal binding is non-classical, as I claim. Alas none are as elegant. For instance, if phenomenal binding were classical, then we could try selective replacement of a subject's V4 cortical neurons - whose destruction causes total cerebral achromatopsia - with their supposed silicon surrogates and connectome. If phenomenal binding were a classical phenomenon, then not merely would the subject continue to experience colour, but perceptual objects in their virtual world would continue to seem inherently colourful as now, i.e. binding would be preserved. I predict instead total cerebral achromatopsia. Of course, I could be completely mistaken. But that's the point of devising testable hypotheses rather than just philosophising.
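(For readers wondering why femtoseconds rather than milliseconds is the default expectation here, the following is a minimal back-of-the-envelope sketch. It assumes a simple scattering model of decoherence in which the decoherence time goes as 1/(Λ·Δx²); the localization rate Λ and superposition separation Δx below are illustrative order-of-magnitude placeholders in the spirit of Tegmark's published estimates for the warm, wet brain, not measured values.)

```python
# Toy estimate of neuronal decoherence timescales - illustrative only.
# Assumes scattering-style decoherence, tau_dec ~ 1 / (Lambda * dx^2),
# where Lambda is a localization rate (m^-2 s^-1) and dx is the spatial
# separation of the superposed states (m). Both numbers below are
# assumed order-of-magnitude placeholders, not measured values.

def decoherence_time(localization_rate: float, separation: float) -> float:
    """Return the decoherence timescale in seconds."""
    return 1.0 / (localization_rate * separation ** 2)

LAMBDA = 1e30  # m^-2 s^-1, assumed localization rate for an ion in the CNS
DX = 1e-8      # 10 nm separation between superposed ion positions (assumed)

tau = decoherence_time(LAMBDA, DX)
print(f"Decoherence time ~ {tau:.0e} s")  # ~1e-14 s, i.e. tens of femtoseconds
```

On these toy numbers a neuronal superposition survives for roughly ten femtoseconds - which is why the conjecture stands or falls on whether such fleeting "cat states" can still do computational work, the question the interferometry experiment is meant to settle.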
[on anhedonia] Buddhists equate suffering with desire. But desire and the ability to anticipate pleasure are critical to mental health. Dopamine and anhedonia ("Dopamine modulation could help to treat stress-induced anhedonia")

[on meta-ethics] Dario, the pain-pleasure axis offers the most obvious way to naturalize (dis)value. Both the valuable nature of pure bliss and the disvaluable nature of pure agony/despair are self-intimating. The pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Both bliss and agony are mind-dependent. But this mind-dependence doesn’t make (dis)value any less objectively real. For minds and their (dis)valuable states are an objective, spatio-temporally located feature of the physical world: DP on meta-ethics

Anyhow, let’s pretend - as we typically do in science - that we could strip away our epistemological limitations - the fitness-enhancing egocentric delusion that makes me the hub of reality. On a scientific “view from nowhere”, all first-person perspectives are equally real and must be weighed accordingly. On this story, classical utilitarianism offers (1) the correct theory of (dis)value; and (2) a potential decision-procedure for policy-makers world-wide.

However, I think there’s an asymmetry. The badness of even “mild” suffering is self-intimating. But there’s nothing inherently bad or morally inadequate about insentient states, or emotionally neutral states, or states of contentment that could be converted into superhappiness. From the perspective of classical utilitarianism, we “ought” to convert these states into pure bliss: it’s morally wrong to conserve them if they can be converted into uber-happiness. But this wrongness is a judgment imposed from without - it’s not disclosed by the nature of the states themselves. By contrast, the badness of even “mild” suffering comes from within. This asymmetry has apocalyptic implications. Negative utilitarianism is consistent with creating a civilisation of complex, intelligent life based on gradients of bliss. NU Ethics But classical utilitarianism mandates destroying even a blissful super-civilization with a utilitronium shockwave, maximizing the cosmic abundance of pure bliss/value: What is the key to eternal happiness? This analysis inverts CU/NU “existential risk” as normally conceived.

Facu Punto, your experience of raw suffering is inherently bad. I could say it is objectively the case that your experience of raw suffering is inherently bad, but the expression "it is objectively the case that..." is redundant. Subjectively disvaluable experiences are as much an objective, spatio-temporally located feature of reality as phenomenal redness - both mind-dependent and objectively real. The anti-realist may protest that there is nothing logically incoherent in someone thinking you deserve to suffer: the badness of your suffering isn't stance-independent. But - I'd argue - any belief that your suffering is good expresses the epistemological limitations of your ill-wisher: he confuses you with his cartoon misrepresentation of you in his world-simulation.

Is ethics computable? Yes, IMO. To use your example, then other things being equal, a wedding where all the guests are blissfully happy is better than a wedding where one of the guests is suffering because he believes gay marriage is an abomination in the eyes of the Lord. For sure, often other things aren’t equal. Ethics is hard, and life is messy. Natural selection has spawned animals with egocentric virtual worlds rather than a cosmic mega-mind.
But just as if you were the only sentient being in the world, then withdrawing your hand from the fire would trivially be the right thing to do, likewise a God-like superintelligence that could impartially weigh all possible first-person perspectives would withdraw our collective hand from the fire, so to speak. In short, Heaven is better than Hell.

Facu Punto, yes, pure suffering is unwanted by its very nature. But what is it about the experience of suffering that makes it unwanted? The badness is primitive. I don’t know how to define the disvaluable aspect of suffering in terms of anything else. For example, if future life is based on gradients of bliss, then some things can still be wanted and unwanted. But without experience below hedonic zero, there could be no suffering. Could a full-spectrum superintelligence decide to make humans suffer for no good reason? As far as I can tell, no. Just as a mirror-touch synaesthete couldn’t torment you - it would be like tormenting himself - likewise a full-spectrum superintelligence with superhuman perspective-taking capacities couldn’t wantonly harm you either. For tormenting you would be like tormenting itself.

David, recall I said "other things being equal". The global context needs consideration too. Thus in the example I offered, there are morally persuasive reasons to support the institution of gay marriage - optional but not mandatory - even though the idea upsets bigots, because the suffering of gay people in an unreformed society exceeds the suffering of bigots. But the suffering of bigots does matter, though it sticks in the craw to say so. Bigots are ultimately victims too.

You remark (2), "For any amount of pain you imagine, I can imagine an entity with a non-negative orientation toward that level of pain." OK, I struggle with this proposal. Pure agony, despair and panic are disvaluable by their very nature. No one - whether here or on Planet Zog - can have a non-negative orientation to such ghastly states. Yes, we can consider instead "mixed" states. Our minds are composite. But if I'm consumed by uncontrollable panic, for example, all capacity for meta-cognition is lost. The disvaluable nature of uncontrollable panic is self-intimating - its badness isn't an additional judgment born of reflection, but rather built into the experience itself. The evolutionary roots of (dis)value are ancient.

Facu Punto, as always, I detect one or two differences in emphasis between us. You remark, “I do. Disvalue is a programmed tendency to avoid things that in the evolutionary past of our lineage were statistically correlated with producing fewer surviving copies of our genes. So it's a primitive to you but not to me?" Your molecular duplicate created from scratch, or a nastily configured brain-in-a-vat, would undergo suffering. So the intrinsic properties of disvaluable states aren’t explained by their evolutionary history. Evolution explains merely why (a conditionally-activated predisposition to express) such states has been selected over others. So why are some states of matter and energy inherently and ahistorically disvaluable? Science doesn’t know.

You remark, "If some creature or system is programmed to avoid certain stimuli because it causes it a loss of bliss points, I WILL call that suffering. Is the blissful creature in your thought experiment a reinforced learner or not? If it is, it will suffer. If it isn't, I don't think I'll recognize it as having 'behavior'.” Compare making love. Lovemaking has peaks and dips.
But if done properly between sensitive lovers, lovemaking is generically enjoyable throughout. The “loss of bliss points” the dips entail isn’t suffering. Sure, the dips are functionally analogous to suffering. Future life based on information-sensitive gradients of bliss will have many such dips. But without experience below hedonic zero, there is no suffering and hence no intrinsic disvalue. Full-spectrum superintelligence can’t decide to make you suffer simply to find out how loud you can scream. And if it did, then I’m smarter than full-spectrum superintelligence - as so conceived.

Facu Punto, philosophers philosophizing can be exasperating. But non-philosophers don't make fewer philosophical assumptions than philosophers - just more unexamined ones.

Consciousness? We need a post-Galilean science of mind. After our reward circuitry is sorted out, the study of consciousness should be treated as an experimental discipline. The late Alexander Shulgin (PiHKAL, TiHKAL, etc) pioneered a set of tools and a methodology to this end. Contrast, say, the author of "Consciousness Explained [Away]". Dennett writes wonderful literature, but it's not science.

"The real world"? Each of us runs a phenomenal world-simulation. Consciousness is all one ever directly knows. The existence of the mind-independent world is a theory - alas a strong theory, at least as far as I can tell.

The science of happiness? I wish. Genome reform is essential, not just rearranging the deckchairs. Happiness engineers should be learning how to code.

[on depression] Grim reading: Antidepressants and quality of life ("Antidepressants Are Not Associated With Improved Quality of Life in the Long Run") The WHO estimates around 300 million people worldwide have depression. Hundreds of millions more people have "subclinical" depression - which can be ghastly enough. Until we target the neurotransmitter system directly involved in hedonic tone, I'm sceptical this horrific toll will change. And until we're willing to encourage prospective parents to use preimplantation genetic screening and germline editing, humanity will keep on churning out depressive babies indefinitely. The suffering is unimaginable. The power dynamics of a chimpanzee troop - and EA? - ensure that policies are typically shaped by life-affirming alpha males. But what is the optimal level of affective psychosis? Depressed people see the world more realistically

[on quantum mind] On Schrödinger's neurons: Are you nothing but a bunch of "cat states"? Transhumanist David Pearce discusses consciousness (mp3)

One thing I probably don't stress enough is how my ideas on quantum mind and non-materialist physicalism could be wildly misconceived without affecting the case for a biohappiness revolution. The weird stuff isn't irrelevant - if I'm wrong to suppose classical Turing machines can't be unified subjects of experience, then the ethical implications would be momentous - but the core case for using biotech to eradicate suffering throughout the living world doesn't depend on them.

Orch-OR vindicated? Does the superposition principle of QM ever break down - as Orch-OR proposes? Objective collapse If microtubules do sustain anomalously long-lived quantum coherence, does Orch-OR solve the Hard Problem of consciousness and explain how fields of insentience generate sentience? Does Orch-OR solve the phenomenal binding problem and explain why we aren't micro-experiential zombies during waking life?
Does Orch-OR explain the allegedly non-computable ability of human mathematical minds to divine the truth of Gödel sentences (Roger Penrose's motivation, as I understand it)? I hope some kind of "dynamical collapse" theory is true - whether consciousness-induced or otherwise. But currently the omens aren't good.

Quantum minds. Scott Aaronson remarks, “In other words: you admit that, at present, you have no evidence for any of this that ought to be persuasive to me? Ok thanks!”

No evidence? I’d beg to differ. The best evidence our minds aren’t classical lies under one’s virtual nose. If you were just a pack of decohered neurons, then you’d be (at most) just 86 billion membrane-bound micro-pixels of consciousness, not a mind that experiences perceptual objects populating a seemingly classical world-simulation.

Here’s an analogy. "If materialism is true, the United States is probably conscious", writes philosopher Eric Schwitzgebel. Most of us disagree. Even if 330 million skull-bound American minds were to participate in the experiment, communicate via fast, reciprocal electromagnetic signalling, and implement literally any computation you can think of, the upshot of the computation _wouldn’t_ be a continental subject of experience, just 330 million skull-bound American minds. Or rather, if a unified pan-continental subject of experience did somehow emerge, then spooky “strong” emergence would be real, i.e. magic. What’s mysterious is how and why a pack of 86 billion supposedly discrete, decohered neurons, communicating across chemical and electrical synapses, should be any different. Let’s assume that (as microelectrode studies suggest) individual membrane-bound neurons can support minimal “pixels” of experience. Crude neuroscanning of your CNS can pick out distributed neuronal edge-detectors, motion-detectors, colour-mediating neurons and so forth. But on pain of spooky “strong” emergence, the result of such synchronous firing of neurons ought to be (at most) a microexperiential zombie (Phil Goff's term), or what William James christened "mind dust", not a unified subject who experiences perceptual objects ("local" binding) populating a unified perceptual field (“global” binding - the unity of perception and the unity of self). Both local binding and global binding are highly adaptive. Neuroscience doesn't know how we do it. Unlike connectionist systems and classical Turing machines, we're not micro-experiential zombies - not unless dreamlessly asleep, at any rate. And everyday phenomenal binding is _ridiculously_ computationally powerful, as shown in rare neurological syndromes like integrative agnosia where binding partially breaks down. However, if you don’t grok the mystery, then you won't be interested in exploring or experimentally testing exotic solutions to a non-existent problem. And maybe you’ll fear our intelligent machines are plotting a zombie putsch…

Phenomenally-bound consciousness is (1) insanely computationally powerful, and (2) provable in animal minds and disprovable in classical computers. For example, consider the conjoined Hogan sisters (cf. The Hogan sisters: "BC’s Hogan twins share a brain and see out of each other’s eyes. The twins say they know one another’s thoughts without having to speak. 'Talking in our heads' is how they describe it"). Krista and Tatiana share a thalamic bridge. If anyone doubts that other human and nonhuman animals are conscious, then it would be possible to rig up a reversible thalamic bridge and partially "mind meld" like the twins.
The ancient sceptical “Problem of Other Minds” succumbs to the experimental method. Contrast classical Turing machines and classically parallel connectionist systems. If you believe that unified subjects of experience can (somehow!) emerge at different levels of computational abstraction in digital computers, then whole-brain emulation (“mind uploading”) should be possible. So consider a supposedly emulated digital "Einstein" or his latter-day counterpart. If (like Scott) you’re dismissive of quantum mind woo, then rigging up a digital thalamic bridge with the hypothetical digital Einstein should let you partially "mind meld" with the uploaded super-genius, like the Hogan sisters. Illuminating? Alas, I predict instead that the attempted “mind-meld” will fail because the alleged digital “Einstein” has no mind for you to commune with. Programmable digital computers are mindless precisely in virtue of their classicality: it's not even "all dark inside" a classical CPU. Classical computers and the software they run are amazingly useful tools, but IMO experiment will confirm they are ignorant zombies.

#241 “The problem that I have is that my own consciousness feels extremely NOT ‘computationally powerful’. In fact it feels very much the opposite of that…”

Clint, I hear you. Intuitively, yes, conscious human thinking is painfully slow. But by phenomenally-bound consciousness, I wasn’t referring to your serial virtual machine of logico-linguistic thinking. Rather, I meant the vast, robustly classical-seeming world-simulation your mind is running that masquerades as the external world - a classical world-simulation that naïve realists assume is the directly perceived local environment, an approach that offers all the computational advantages of theft over honest toil. Contrast today’s petaflop digital zombies. Classical Turing machines are just tools and toys, not nascent AGIs. The ignorance of classical Turing machines and classical connectionist systems is architecturally hardwired. Digital zombies can’t understand what’s going on because they can’t bind - and would malfunction if they did. And if you’re not convinced that the phenomenal binding of organic minds is computationally uber-powerful, imagine if you had integrative agnosia on the African savannah. You could experience a mane, teeth and jaws - but no hungry lion. Now combine integrative agnosia with, say, akinetopsia (“motion blindness”) and florid schizophrenia (disintegration of the self) and you’d soon be lunch.

OK, so how does an aggregate of clunky classical neurons communicating across slow chemical and electrical synapses do what’s classically impossible, i.e. run a phenomenally-bound real-time world-simulation (“perception”)? Well, as far as I can tell, they don’t! Probe inside your skull at a temporal resolution of femtoseconds rather than milliseconds and you wouldn’t find discrete, decohered neurons - an artifact of our temporally coarse-grained tools of investigation. For a skull-bound pack of decohered neurons in classical space-time couldn’t create your mind, i.e. a world-simulation populated by macroscopic objects experienced by a unified self. The phenomenal binding of consciousness into virtual worlds is classically impossible for a bunch of decohered neurons on pain of magic. In my view, you’re a state-of-the-art quantum supercomputer - but not a universal quantum computer! - simulating a classical macroscopic world. But we won’t discover the truth unless we experiment rather than philosophise.
Back to the lab…

[Scott replied: "Ultimately, we’re never going to agree in this thread, because if you’re right then it’s not just philosophy: it’s an earthshaking, straightforwardly empirical revolution in neuroscience and physics. But by design, there are zero causal pathways by which a blog commenter can make me accept the reality of such a revolution, without first convincing a community of neuroscientists or physicists who I trust, who would in turn convince me."]

[on psi] Francis, many/most of us sometimes experience paranormal phenomena that defy the laws of physics when dreaming (erratically, I can fly: almost as oddly, no one seems to find my feats of levitation weird - I recall being disconcerted at their nonchalance). But when we're awake? Well, plenty of people do experience psi phenomena that defy the laws of physics while awake too, albeit violations internal to the contents of their own world-simulations. What if I discovered I could e.g. violate the laws of physics while awake?

EY writes: "Utilitarianism: All's well that ends well. Negative utilitarianism: All's well that ends all." Alternatively, classical utilitarians (or AGI with a CU utility function) plan to obliterate complex life with a utilitronium shockwave, whereas negative utilitarians plan to create life based on gradients of intelligent bliss.

Roeland, but classical utilitarians would force others into such machines for a greater payoff - say two hours of the most intense pleasure in exchange for one hour of barbaric torture. The implications of the most obvious and popular way to naturalise (dis)value are horrific: I won't mention some of the other ramifications of CU here. Most avowed effective altruists are classical utilitarians, though I'm not convinced most CU EAs have fully thought through the implications. The interests of a single individual can in principle outweigh the interests of everyone else put together. Negative utility monsters. The world's most obvious secular ethical theory - and the most obvious way to naturalise (dis)value - also has the most counterintuitive ramifications.
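(A toy sketch, with made-up numbers, of the structural difference at stake in the forced trade-off above; the hedonic values and the simple additive aggregation are illustrative assumptions, nothing more.)

```python
# Toy contrast between classical utilitarian (CU) and negative
# utilitarian (NU) verdicts on a forced trade-off - illustrative only.
# Hedonic values per hour are made-up numbers on an assumed linear scale.

intense_pleasure_per_hour = +10   # assumed
barbaric_torture_per_hour = -15   # assumed

# Scenario: two hours of intense pleasure bought with one hour of torture.
cu_score = 2 * intense_pleasure_per_hour + 1 * barbaric_torture_per_hour
nu_score = 1 * barbaric_torture_per_hour  # NU weighs only the suffering

print("CU:", "trade sanctioned" if cu_score > 0 else "trade rejected")   # sanctioned (+5)
print("NU:", "trade sanctioned" if nu_score >= 0 else "trade rejected")  # rejected (-15)
```

On any such additive scale, a large enough payoff for some will always sanction the torture of others under CU; under NU, no payoff can.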
[on philosophy] Why did so many twentieth-century philosophers take Wittgenstein so seriously? How queer was Ludwig Wittgenstein? IMO, Wittgenstein's most significant contribution to philosophy is his anti-private-language argument.

[on orexins] Modafinil is worth exploring: Orexins and hedonic tone ("Neurobiology of the Orexin System and Its Potential Role in the Regulation of Hedonic Tone")

[on altruism] How strong is your personal brand? Altruism and game theory. But altruism isn’t just signalling. Most of us aren’t mirror-touch synaesthetes, but we still typically find contemplating or witnessing the suffering of others distressing. So one can want everyone to be happy and act altruistically (and anonymously) to that end even if one's motivations are "selfish". On a different note, will an intelligent psychopath who comes to believe in open individualism (cf. open individualism) start to behave altruistically?

[on oxytocin] Towards an oxytocinergic civilisation.... Nature's Medicine ("Is Oxytocin “Nature’s Medicine”?")

[on alcohol substitutes] GABA Labs: Alcarelle by GABA Labs. "Alcarelle" (formerly "Alcosynth") probably won't be a smart drink. Compare H.L. Mencken's "Portrait of an Ideal World" (1924): Portrait of an Ideal World

[on axiological hedonism] "An Argument for Hedonism" by Ole Martin Moen. Axiological hedonism. Could there be any source of (dis)value that doesn't depend, ultimately, on the pain-pleasure axis? Let's here assume without argument that (1) painful and pleasurable experiences are inherently (dis)valuable to the subject; and (2) some kind of God's-eye-view / "view from nowhere" / "point of view of the universe" exists to underwrite right and wrong action (cf. Open / Empty Individualism). If there were another axis of (dis)value orthogonal to the pain-pleasure axis, then presumably some meta-axis of (dis)value would be needed to regulate trade-offs. But it's not clear (at least to me) what this hypothetical sovereign meta-axis of (dis)value could be - or if the idea of such a meta-axis is even intelligible. IMO, the really hard issues involve trade-offs within the pain-pleasure axis. This is because "more is different" - qualitatively different. See too Richard Chappell's "Negative Utility Monsters"

David, IMO the suffering and happiness of homophobes, anti-Semites, rapists, child abusers, meat-eaters and other sentient beings with objectionable behaviors matters and is intrinsically (dis)valuable. It sticks in my craw to say this. But their (un)happiness is neither more nor less intrinsically (dis)valuable than the suffering and happiness of a Gandhi or Nelson Mandela - or even mine. If we were all just brains-in-vats hooked up to immersive VR, or lived in solipsistic lucid dreamworlds, then the metric of the pleasure-pain axis applied to each virtual world would be all that's needed to weigh the objective (dis)value of such a civilisation. But Nozick’s Experience Machine is still science fiction. Instead, the contingent fact that most humans are embodied and act out their virtual worlds of experience means that ethically speaking we must take the behavioural byproducts of such states into account. (This comment draws on background assumptions. For the benefit of any mystified casual reader: What is considered the hardest paradox to explain?)

[on immersive VR] How much will posterity want to know about the Darwinian era? Your holographic twin ("Grandpa, 85, uses VR technology to create a 'HOLOGRAM TWIN' of himself that will allow his great-grandchildren to meet him after he has died")

[on colour] On colour. Apples and fire-engines are red. The sky and sea are blue. But when you are awake rather than dreaming, you are entitled to make a speculative metaphysical hypothesis. You are probably not a brain-in-a-vat, a Boltzmann brain (etc). Instead, you are a skull-bound mind running a phenomenal world-simulation of your local surroundings (naively, “perception”).

Oh to be blissfully psychotic... A Colourful Life ("'I’m really just high on life and beauty': the woman who can see 100 million colours") I think prospective parents should be encouraged to create tetrachromat babies as the healthy norm! Remedial gene therapy should be offered to partially colour-blind folk like us. Transhumans will regard archaic human lives as drab at best.

My allusion to “psychosis” was just the familiar point that pain-ridden Darwinian life is fundamentally ugly. So let's beautify it... Waking life is itself an unevenly controlled hallucination of a skull-bound mind: Hallucinations are common and varied. An implicit direct realism is baked into our conceptual scheme. Naïve realism probably contributes to evolutionary success.
A fundamental difference does exist between waking and dreaming consciousness; but it's not that during waking life one bores through the walls of one's transcendental skull. Rather, during waking life a selection mechanism makes one's virtual world seem robust, mind-independent and lawful. And some virtual worlds are more lawful than others - as phenomena such as psi, out-of-body experiences, alien abduction (etc) attest...

[on the multiverse] Sabine, are you a perceptual direct realist? What does it take to solve the measurement problem? [“And we know from observations that the outcome of an experiment is never a superposition of detector eigenstates”] Yes, the content of our observations is always classical and determinate - hence the collapse postulate. But what about the vehicle of observation? As far as I can tell, it’s only the fact that the superposition principle of QM never breaks down that allows each of us to run a classical world-simulation where it (naively) does. The universality of the superposition principle makes the experience of definite outcomes (“observations”) possible. A pack of decohered classical neurons would be a micro-experiential zombie that couldn’t “observe” anything.

Trust the math? Technically at least, I think we can now spell out in a fair bit of detail what's needed to ensure sentience in our forward light-cone is inherently blissful. If we live in a multiverse, however, our options are more limited. All we can do is take care of our little cabbage patch / Hubble volume.

[on reality plus?] Reality Plus by David Chalmers

Dale, thanks, I read only an early draft of “Reality+” rather than the published copy. But as I understand it, David Chalmers discounts the possibility we are living in an ancestor-simulation, but believes there is a fair chance we are living in a digital simulation. Any version of the Simulation Hypothesis raises questions of theodicy: why would a Simulator create a world with such obscene suffering? But there are technical objections too; in my view, there is zero evidence that digital computers can create subjects of experience. The idea that subjects of experience can “emerge” at different levels of computational abstraction is magic, not science. Any evaluation of the Simulation Hypothesis turns on the solution we offer to the Hard Problem of consciousness and the binding problem. If the intrinsic nature argument is correct, then experience discloses the essence of the physical: subjective experience discloses bedrock reality. Recall I argue for the quantum-theoretic version of the intrinsic nature argument. Our phenomenally-bound minds cannot be subjectively simulated on a classical digital computer on pain of spooky “strong” emergence. In other words, we’re living in god-forsaken basement reality.

Well, there is a traditional sense in which each of us does live in a simulation. You and what you naively perceive as the external world are a skull-bound world-simulation run by a biological mind. Trillions of such egocentric world-simulations exist together with the non-living universe: Did Kant deny we perceive reality? But is the whole shebang just a Simulation run on an alien supercomputer? No, IMO, for lots of reasons. What does it even mean to "simulate" a first-person fact, e.g. I am in pain? Yes, you can create a molecular twin duplicate who has pain. But if physicalism is true, you can't digitally create a phenomenally-bound computer "person" who has pain.
Researchers who believe that programmable digital computers can generate subjects of experience need to outline their solution to the Hard Problem and the binding problem. On standard "materialist" physicalist assumptions, classical digital computers are zombies. On non-materialist physicalist assumptions, classical digital computers are zombies too (technically, micro-experiential zombies). In my view, organic minds are sentient patterns in a (very) high-dimensional field (cf. Reality is a quantum wave-function). Reality has only one "level". You and your phenomenal mind / world-simulation instantiate a tiny pattern within it. And you can't be simulated via the decohered bits and bytes of a digital computer: How is the brain similar to a computer?

Are we trying to fix the problem of suffering in a digital simulation? Have our Simulators coded 540 million years of pain and suffering? The man rethinking the definition of reality. Or does the computer metaphor of mind lead us astray?

[on scepticism] Can you be both a committed moral agent and a sceptic? Scepticism as a way of life ("The desire for certainty is often foolish and sometimes dangerous. Scepticism undermines it, both in oneself and in others")

Rupert, there’s no scholarly consensus on scientific method. From Feyerabend’s “epistemological anarchism” to Richard Dawid’s post-empirical science (cf. “String Theory and the Scientific Method”), experts differ. Most researchers agree that science should at least be consistent with the empirical evidence. But the only empirical evidence that any of us can directly access, namely the contents of one’s own conscious mind, is inconsistent with an ontology of scientific materialism (cf. the Hard Problem). So empirical adequacy doesn’t fare well either.

[on personality] Dario, let's say you're an intelligent moral agent who wants to maximize the good you do in the world - in other words, an EA. You'll either want to become rich and successful or influence (or tap the funding of) others who are rich and successful. Reading e.g. Dale Carnegie's "How to Win Friends and Influence People" and Robert Greene's "The 48 Laws of Power" (etc) will be wise! In a Darwinian world, "Machiavellianism" is the basis of successful social cognition, not autistic truth-telling. Narcissism is more ambiguous. Naively, the trait is harmful. But messianic self-belief can hugely amplify your capacity to do harm OR good.

[on fish self-awareness] The world needs an antispeciesist revolution. Confirmation of fish self-awareness: Fish Might Really Be Self-Aware, New Study Finds ("A team on a quest to prove that a fish species can recognize itself in the mirror is back with a new study to prove their point.")

When people learn that you take panpsychism seriously (more formally, constitutive panpsychism or non-materialist physicalism in my case), they expect you to find minds everywhere - from rocks to trees to digital computers, maybe even the cosmos itself. But (once again in my case) phenomenally-bound minds are peculiar to the biological nervous systems of multicellular animals with the capacity for rapid self-propelled motion. A pleasure-pain axis, agency, and an egocentric real-time world-simulation are hugely adaptive: the challenge is to show how they are physically possible. I don't know how to engineer a mind if we assume classical physics.

[on progress] Do we need a better understanding of progress? Should our metric of progress be the happiness of all sentient beings? Or "GDP per capita"?
["Crawford and Cowen also have a specific view of what kind of well-being they are aiming to encourage through progress. It's not happiness – or even the more established metric of "life satisfaction" – instead, their top priority is increasing "GDP per capita".] [on efilism] Not all efilists reject HI... What do efilists think of The Hedonistic Imperative? ("HI is a manifesto which outlines how to abolish suffering with the aid of technology. So instead of eradicating life, how about we eradicate suffering.") [on friendly AI] Artificial Uninintelligence (DP and Kim Solez critique Stuart Russell's lectures) [on algorithms] Soulful abstracta? Can algorithms sufer? Can abstract objects have PTSD? The world has frightful suffering. I worry about (some) s-risks. But the well-being of algorithms is Alice-in-Wonderland. Dario, you're right about not taking moral risks - at least if they can be avoided. One point I'd like to add though. Phenomenal binding - including phenomenally-bound pains and pleasures and the diverse intentional objects they infuse - is functionally genetically adaptive. The post-Cambrian success of the animal kingdom depends on it. So in that sense, I'm a functionalist about consciousness. I just don't believe classical Turing machines and classically parallel connectionist systems can do it. Anyone who believes digital computers or connectionist systems can support phenomenal binding - and thus potential suffering- really needs to explain how it's possible. Otherwise we're left with the possibility of spooky "strong" emergence. [on aphantasia] Eliminative materialists about consciousness tend to be high-IQ male perceptual direct realists with aphantasia: ("What's it like to be mind blind") David, my phenomenal consciousness isn't something abstract; it's something I'm undergoing right now. What's more, 650nm electromagnetic radiation incident on my retinas is neither necessary nor sufficient for my experience of phenomenal redness. I experience phenomenal redness in my dreams. I can close my eyes and imagine redness now. If a neurosurgeon stimulated the V4 colour centre of my ventral occipital lobe, I might once again experience the raw feels of phenomenal redness. Yes, for evolutionary reasons, it's normal to use the functional-relational language of perception. But the retinal story is a red herring, so to speak... [on wireheading] Elon Musk’s Neuralink 'Brain Chip' could give users orgasms on demand ("The revolutionary device could be implanted in human subjects by the end of this year, and some experts believe it could completely revolutionise human sexuality") Aaron, trust him? No! But the vanity of billionaires can be tickled. The rich and powerful are also prone to conspicuous displays of competitive male altruism (cf. the Forbes list of Biggest Givers). If Elon Musk can be encouraged to think Neuralink can fix the problem of suffering and let him go down in history as the Greatest Benefactor of All Sentience, well...what a smart way to get one over on his rival Jeff Bezos. Ekaterina I think the option of perpetual well-being via electrodes should be offered to all victims of chronic pain and depression who aren't helped by existing therapies. But "wireheading" is a last resort. Wireheading is not a viable option for a whole civilization - wireheads don't want to raise baby wireheads. Uniform bliss extinguishes information-sensitivity - critical insight, social responsibility, meaningful relationships and intellectual growth. 
Instead, I think our goal as transhumanists should be hedonic recalibration - raising hedonic set-points. All the things that you care about can be conserved if you have a higher hedonic default setting – with the difference that your quality of life will be much richer. In the long run, we can be more ambitious. We can engineer life based on gradients of (literally) superhuman bliss. But superhuman bliss is not as morally urgent as fixing the problem of suffering. Gradients.com Beyond Humanism (2020)

[on functionalism] Another wonderful video from Andrés! Dan, just a few comments to add. The physical impossibility of classical digital computers solving the binding problem and "waking up" doesn't by itself entail the falsity of functionalism. After all, animal minds and the world-simulations we run are phenomenally bound. Binding is functionally highly adaptive - as rare deficit syndromes like integrative agnosia illustrate. The challenge is to show how animal minds – supposedly just a pack of decohered neurons - achieve the classically impossible. I share your impatience with quantum woo ("consciousness collapses the wavefunction", etc). But if neither classical nor quantum physics can explain binding, then we face the spectre of dualism. For what it's worth, I bite the bullet and explore the quantum-theoretic version of the intrinsic nature argument: Quantum mind. It's crazy. I'm not convinced the alternatives are saner.

[on wireheading] Genes, drugs or electrodes? Bliss should be our birthright, but shifting the Overton window is a daunting challenge. Pleasure Direct ("What Are the Ethics of an Implant That Delivers Pleasure Directly Into Your Brain?")

[on cryonics and cryothanasia] Immortalists Magazine on cryonics and cryothanasia: Cryonics and cryothanasia. Death Defanged. My little piece is here: Death Defanged? Critics might say a negative utilitarian is organising a suicide cult. Heaven forbid.

[on negative utilitarianism and eliminativism] Why do you reject negative utilitarianism? Is child abuse morally defensible if the beneficiaries derive enough pleasure? The Ones Who Walk Away From Omelas (Wikipedia) pdf "Yes", say classical utilitarians. "No", say negative utilitarians. Negative utilitarians want you to have fun - gradients of superhuman bliss in my case! - just not at anyone else's expense.

Rob, you remark, "First front: I claim that rejecting phenomenal consciousness doesn't matter that much, and we can still ground morality in (the p-zombie equivalent of) valenced experience." Ok, I'm reeling. Not rhetorically, but in the sense of being intellectually out of my depth trying to interpret this. If I'm in agony, and a consciousness anti-realist tells me that he doubts my phenomenal experience is real - and even if it does exist, then my agony doesn't matter that much...well, I'm floundering. Consciousness realists and anti-realists talk past each other – I guess we are trapped in Kuhn's incommensurable conceptual schemes. And consciousness (anti-)realism can't be quarantined from other issues. Consciousness (anti-)realism infects ethics and almost everything else. Thus I simply don't know how to conduct a discussion of negative utilitarianism that disputes the phenomenal reality of suffering: it's the raison d'être of NU! So I'd have to disagree with your remark, "The specific claim I'm denying is that *if* anti-realism about phenomenal consciousness turns out to be true, we should therefore upend every moral claim". You speak of the zombie equivalent of valenced experience.
There's no such thing! Compare a robot banana-picker. The silicon robot has been programmed to prioritize bodily integrity over ripe banana odour-detection. Poetically, we can say the robo banana-picker "cares" more about its chassis than a banana. But if humans need to repair its bodywork, there's no moral need for us to administer anaesthesia because the robot isn't a subject of experience - it's a zombie. So there's no need to respect a zombie's metaphorical "preferences". The robot doesn't literally, phenomenally care about anything - pain, bodily integrity, bananas or anything else. Without consciousness, nothing matters. If we could agree on the reality of phenomenal suffering, then perhaps we could fruitfully discuss whether negative utilitarians (and Buddhists etc) are right to believe suffering prevention and mitigation is of overriding moral importance. Do all the other things human and nonhuman animals ostensibly care about derive their (dis)value from the pain-pleasure axis? But with our different background assumptions, alas I'm stumped...

An Eliminativist Theory of Consciousness. Jacy, in your paper you remark, "We crave a unique, unsolvable mystery at the core of our being". NO! Sure, some New Agers and religious mystics love mysteries. But scientific rationalists (like me) hate mysteries. Mystery-mongering doesn't underpin the Hard Problem. Rather, if materialist physicists and chemists correctly understand the properties of atoms and molecules, then subjective consciousness shouldn't exist. None of it. Everyday life would subjectively be no different from being dreamlessly asleep. And here's the rub. Whether I'm awake or dreaming, consciousness is all I've ever known! Everything else above-and-beyond my own consciousness is inference and speculation, including my belief that I'm a fairly typical skull-bound animal mind running a fairly typical egocentric world-simulation. So your claim that my consciousness doesn't exist - that it's not like anything to be me - poses interpretational challenges, for me, at any rate!

Jacy, agreed, some folk want to feel special. I was just querying whether this desire is what drives most discussions of the Hard Problem. Either way, the subtle phenomenology of this particular self-referential thought differs from, say, the brilliant blue sky I'm currently experiencing above my body-image within my world-simulation. But phenomenal blueness is a distinctive property of one kind of consciousness - a property that congenitally colour-blind people lack. Whether one is lucidly dreaming or wide awake, one knows that phenomenal blueness doesn't exist in the mind-independent world; but phenomenal blueness is a real, spatio-temporally located property of my conscious mind - and, I presume, of countless skull-bound minds like mine too!

David, you remark, "This is the very thing that is being disputed. A number of thinkers seem to have no trouble believing that our experience of self-reference can arise out of the functional interactions of materials." Recall that self-reference / indexical thought / Descartes' cogito are just one category of conscious experience. We could equally invoke unreflective kinds of consciousness such as, say, uncontrolled panic or orgasmic ecstasy. My point was just that on standard materialist assumptions, i.e. quantum field theory describes fields of insentience, no one has the foggiest idea how to derive the properties of consciousness from neurobiology and thus ultimately physics. Hence the Hard Problem.
Levine's "explanatory gap" is unbridged. Contrast life, where (with a bit of handwaving and a nod to the decoherence program) molecular biology and quantum chemistry can be used to derive the properties of information-bearing self-replicators from the underlying physics. No such joy with phenomenal experience. The empirical (“relating to experience") evidence ought not to exist. Materialism is in dire straits. So how can physicalism and the (ontological) unity of science be saved? Eliminativists like Jacy explore one route, while constitutive panpsychists / non-materialist physicalists a diametrically opposed route. Eliminativists think that if our best theory of the world, scientific materialism, has no place in its ontology for consciousness, then in some unfathomable sense, it's an illusion, and the illusion itself is illusory. By contrast, constitutive panpsychists / non-materialist physicalists are consciousness realists who conjecture we radically misconceive the intrinsic nature of the physical. Both strategies for dissolving the Hard Problem strike me as incredible. Only one is empirically adequate. What’s it like to be a bat or an eliminativist? Jacy, sorry, I’m floundering. I just don’t “get” Semanticalism. Sometimes I wonder if eliminativists are just perceptual direct realists with Aphantasia). Consciousness, reflective and unreflective, is all I’ve ever known - and you’re telling me my consciousness doesn’t exist. From a headache to a melody to a dream, I don’t merely have the intuition that I’m conscious, but rather (to quote Galen Strawson) “the having is the knowing". Moreover, I can radically alter my consciousness, too, by taking different consciousness-altering drugs. My consciousness is the only direct evidence I have for anything else at all, including the mind-independent reality of modern science. By contrast, I infer on theoretical grounds that I’m probably not a lab-grown minibrain or a Boltzmann brain (etc). And alas there really is a fact to the matter that I’m undergoing the nasty raw feels of a headache. Materialist science doesn’t know how to derive those nasty raw feels of pain from neurobiology and physics as commonly understood. What’s more, there really is an objective fact of the matter whether nonhuman animals too undergo the ghastliness of phenomenal pain - not just functional nociception, but the nasty raw feels of pain. The Cartesians were mistaken to think dogs were insentient automata emitting mere distress vocalizations. Animal minds really are conscious subjects of experience. Rocks, vegetables and the stock market (etc) are not conscious subjects of experience. Subjective experience is an objective spatiotemporally-located property of reality that science must accommodate on pain of failing the test of empirical adequacy. [Jacy, I was about to apologize again for causing you… frustration. Close line-by-line engagement would be more useful. But feeling frustrated is a state of consciousness you’re denying – a thought that induces another state of consciousness in me: confusion!] David, I'm currently experiencing one kind of phenomenal consciousness, the colour green. I experience green both when awake and dreaming. The greenness is normally bound to other modes of phenomenal consciousness - the everyday objects in my world-simulation. I assume I'm fairly typical. Questions of the (in)effability or (un)communicability of experience are best kept separate from the reality of phenomenal consciousness itself. 
For example, most humans can experience millions of hues, but our language has only a few dozen colour terms. Either way, the reason that the colour-blind kid below is emotionally overwhelmed is that he experiences a new mode of consciousness, phenomenal colour: Kid sees colour. We could go on to tackle the Hard Problem, the binding problem and so on. But until we agree that there is a phenomenon that stands in need of explanation, exploring solutions will be of no interest to you.

Some NUs want to turn the world into the equivalent of paperclips, but can they solve the AI alignment problem? Superintelligence and the FLI. Minimising, preventing and (eventually) abolishing suffering is the overarching ethical framework for an NU. And achieving this neo-Buddhist vision involves working with people from very different ethical traditions and accommodating - insofar as is humanly possible - their diverse values and preferences. Hedonic uplift via set-point recalibration is the nearest I can envisage to ending suffering while respecting the (often conflicting) values and preferences of others. Few people familiar with the concept of the hedonic treadmill are opposed to enjoying a higher default hedonic set-point. But how best to raise everyone to this level?

[on effective altruism] From a comment by Question Mark, "Since biological life will almost certainly be phased out in the long run and be replaced with machine intelligence, AI safety probably has far more longtermist impact compared to biotech-related suffering reduction. Still, it could be argued that having a better understanding of valence and consciousness could make future AIs safer." The case for phasing out the biology of suffering. My thoughts on longtermism: DP on Longtermism. Cruel and unfair. But witty: The EA Mindset ("This is an unfair caricature/ lampoon of parts of the 'EA mindset' or maybe in particular, my mindset towards EA")

Human extinction could be in our future. "I am an optimist," Sandberg says, "the future could be awesome. I think the world is actually really good. And it could be even better, much better, which means that we have a reason to try to safeguard the future." Oh to run Anders' world-simulation.

Adam, if novelty is desired as well as happiness, perhaps induce perpetual jamais vu: Jamais vu. Alternatively, after we have upgraded our reward circuitry to utilitronium, you can explore a rich variety of exotic state-spaces of consciousness until Doomsday: Post-Suffering Life.

Intelligence? IMO, it's a necessary evil. On fairly modest ethical assumptions, our long-term goal should be blissful ignorance of the horrors of Darwinian life that the universal wavefunction encodes. This goal of invincible ignorance follows whether one is a classical or negative utilitarian.

Zombification? As a hardcore negative utilitarian, I think we should enshrine in law the sanctity of sentient life. The future belongs to fanatical life-lovers. In order to get rid of suffering, we need the widest possible religious and secular coalition of allies. So talk of button-pressing, efilism, "hard" antinatalism and so forth at best distracts from the goal of ending suffering. Let's not alienate people. On the other hand, if Eliezer is right, then the upshot of machine superintelligence will perfectly align with NU values. The AI alignment problem arises only for folk who believe they are wiser than superintelligence. (Not me!) As you know, I'm an AGI sceptic, but that's another story.
[on DALL·E 2] With thanks to Adam Ford: Gustav Klimt painting of sentient superintelligence doing hard core physics to determine if 'consciousness discloses the intrinsic nature of the physical' - "Gustav Klimt painting of sentient superintelligence doing hard core physics to determine if 'consciousness discloses the intrinsic nature of the physical'"

[on suffering and post-Darwinian life] Can one serve two masters? The Two Noble Pursuits ("At this point it begs the question 'if we can't get rid of suffering while remaining conscious knowers of the world, then should we want to get rid of all suffering?' If throwing out all of the bathwater requires throwing out the baby, then isn't it better to keep some of the bathwater (suffering) in order to keep the baby (conscious knowing)?") We can't get rid of suffering from reality - timelessly conceived. But we (and our "smart" machines) can prevent the physical signature of suffering in our forward light-cone. And once we have discharged all our ethical responsibilities to prevent its recurrence, wouldn't we do better to forget its very existence? To understand suffering is oneself to suffer - horribly. The risk of aiming ultimately for blissful ignorance - i.e. knowledge and understanding only of paradise - is premature defeatism about (the prevention of) suffering elsewhere. Therefore giving knowledge equal (or even supreme) value might seem safer. [Although I lean to Rare Earthism, I sometimes give the example of a VR-living civilisation in our galaxy that engineers life based entirely on gradients of bliss but mistakenly believes they are alone when they could have rescued us.] But maybe our responsibility to the future is to make Darwinian life not just impossible but inconceivable.

Sociologically, I suspect NU and SFE are going to die out with the end of suffering. What's the best way to ensure hedonic sub-zero experience can never recur in our forward light-cone? Once again, we need to take into account not just what's technically feasible, but also what's politically and societally viable in a world where (I assume) mastery of the pleasure-pain axis means everyone is innately (super)happy and an ardent life-lover. There are just too many unknowns here to speak confidently. Despite my scepticism of "AGI" as currently conceived, artificial robots will presumably play a huge role in cosmic stewardship - and AI could be used to prevent pain-ridden Darwinian life spontaneously arising again in our galactic super-cluster. This response probably makes me sound less NU than you, but really it's (in part) a question of power dynamics.

Rather than "negative utilitarian", I sometimes call myself a secular Buddhist - despite feeling a bit of an impostor and knowing the inadequacy of the label. How can the different Buddhist traditions be more effectively infused with the technical potential of biotech and AI to deliver a world without suffering? I sometimes quote the Dalai Lama ["If it was possible to become free of negative emotions by a riskless implementation of an electrode - without impairing intelligence and the critical mind - I would be the first patient." (Society for Neuroscience Congress, Nov. 2005)]. But alas we don't hear many Buddhists calling for genome reform and reprogramming the biosphere - the only (non-apocalyptic) way I know to fix the problem of suffering.

[on psi] I'm sceptical of scientific triumphalism.
Not least, materialism is inconsistent with the entirety of the empirical ("relating to experience") evidence - we ought to be zombies! Nonetheless, the existence of psi would be inconsistent with the extremely well tested Standard Model, to which all the special sciences reduce. Physicalism (but not materialism) offers the best explanation for the technological success story of modern science. Quantum theory is extremely mathematically rigid - you can't just start toying around with the formalism to make room for psi powers. The Standard Model ("The Standard Model of particle physics: The absolutely amazing theory of almost everything")

[on identity] Bad news for empty individualists? Future You ("How thinking about 'future you' can build a happier life")

[on eugenics] Transhumanists and effective altruists face a dilemma. If we don't reform the genome, then the problem of suffering can't be fixed. Yet the accepted name for genome reform is taboo. Should we aim for linguistic reappropriation? Linguistic reappropriation. Or build a stronger brand? (what exactly?) And will our critics play ball? Is eugenics moral?

Jessica, futurology via extrapolation is treacherous. So you're right to raise the issue. I was simply noting how elevated mood predisposes to being an active citizen and internalizing the role of a dominant "alpha". Low mood predisposes to keeping one's head down. The social structure of a future civilisation where everyone enjoys life animated by gradients of bliss is speculative. I was just noting we can't assume that hedonic uplift is the recipe for Brave New World. If anything, society may face the opposite problem in the wake of a biohappiness revolution. What happens when exuberant life-lovers aren't willing to slot themselves into traditional status hierarchies?

[on MDMA] Ecstasy and Honesty. MDMA consciousness is beautiful. I'd evangelize if we knew how to sustain it. Rick Doblin has done some fantastic work. IMO, MDMA-assisted psychotherapy is more promising than psychedelics. But there are zillions of pitfalls to navigate.... The Trials of Rick Doblin ("He revolutionized the way we view MDMA-assisted psychotherapy. But what does the research actually show?")

[on language] The yin and yang of language. Language and gender ("These Words That Women Know Better Than Men And Vice Versa Will Make You Question Your Grasp Of The English Language")

[on Kardashev scales for Pleasure] Well, if the hedonic scale of Darwinian life is, conventionally, -10 to 0 to +10, we can imagine, say, a Hedonic Type 1 civilization with a range of 0 to +20, a Hedonic Type 2 with a range of, say, +20 to +40, and a mature Hedonic Type 3 civilization with, say, +80 to +100. But I don't pretend to know the likely depth or shallowness of the hedonic range of future civilization. And will the information-signalling role of the pleasure axis be maintained indefinitely or will decision-making be offloaded to zombie AI? Will states of mega-bliss continue to be adulterated with the usual intentional bric-a-brac or converted to pure hedonium? Will we ever launch a hedonium / utilitronium shockwave?

[on perception] The Homunculus Problem. Homunculi are real. Consider a lucid dream. When lucid, you can know that your body-image is entirely internal to your sleeping brain. You can know that the virtual head you can feel with your virtual hands is entirely internal to your sleeping brain too. Sure, the reality of this homunculus doesn't explain how the experience is possible.
Yet such an absence of explanatory power doesn't mean that we should disavow talk of homunculi. Waking consciousness is more controversial. But (I'd argue) you can still experience only a homunculus - but now it's a homunculus that (normally) causally co-varies with the behaviour of an extra-cranial body.

[on gene drives] The sci-fi technology tackling malarial mosquitoes. The same technology of gene drives could be used to engineer a hyperthymic biosphere: Wild Animal Happiness

[on non-materialist physicalism] So-called constitutive panpsychism is physicalist. Indeed, if the mathematical formalism of physics is just transposed to an ontology of qualia, then we have non-materialist physicalism - which is idealism under another name. I explore the conjecture that mathematical physics describes patterns of qualia, though only animals have minds. The conjecture that your mind and the phenomenal world-simulation it runs consist entirely of "cat states" is crazy - I don't really expect people to take the idea seriously. But if true, then non-materialist physicalism solves the Hard Problem of consciousness, the binding problem, the problem of causal efficacy and more besides. And it's experimentally falsifiable.... David Pearce has a proposal

The reason that physicists spend billions of dollars on particle accelerators exploring exotic energy regimes is their belief that - discounting the hypothetical dark matter and dark energy posited by cosmologists – the Standard Model and General Relativity are successful in describing all currently accessible physical phenomena. All the "special sciences" (molecular biology, chemistry etc) reduce to physics. Compare Sean Carroll ("The Laws Underlying The Physics of Everyday Life Are Completely Understood") or Glenn Starkman ("The Standard Model of particle physics: The absolutely amazing theory of almost everything"). The only exception to this tale of triumph is consciousness: the Hard Problem. Is subjective experience just an anomaly - or the key to the plot?

Lepandas, allow me to offer a bit of background. The Hard Problem of consciousness arises only because we normally make a plausible metaphysical assumption. The intrinsic nature of the physical, the mysterious "fire" in the equations, is non-experiential. The mathematical machinery of quantum field theory describes fields of insentience. However, in recent years, a minority of researchers (e.g. Galen Strawson, Philip Goff) have proposed dropping this commonsense assumption. Instead, perhaps the intrinsic nature of the universe's fundamental quantum fields doesn't differ inside and outside your head(!) What makes animal minds like us special isn't experience per se, but rather, its phenomenal binding into virtual worlds of experience - like the phenomenal world-simulation your mind is running right now. This conjecture is sometimes called constitutive panpsychism. In my view, the term is misleading because "panpsychism" suggests property-dualism, i.e. consciousness is inseparably attached to all fundamental physical properties. So I prefer philosopher Grover Maxwell's term, "non-materialist physicalism", because according to this conjecture, experience discloses the essence of the physical. We should simply transpose the entire mathematical apparatus of modern physics onto an idealist ontology. Yes, crazy! But see: Crazyism

However, non-materialist physicalism faces a huge technical challenge. Even if consciousness is fundamental to the world, then why aren't we micro-experiential zombies?
For instance, consider the five hundred million or so neurons of your enteric nervous system. The "brain-in-the-gut" is a fabulously complicated information-processing system. But even if its individual neurons are membrane-bound micro-pixels of consciousness, your enteric nervous system is not a unified subject of experience, a person. So why is your awake mind-brain so radically different? If textbook neuroscience is correct, then you should be a micro-experiential zombie too - as you are when dreamlessly asleep – just c. 86 billion membrane-bound "pixels" of mind-dust. If neurons are really discrete, decohered classical objects, as crude neuroscanning suggests, then phenomenal binding should be impossible. This partial "structural mismatch" between phenomenology and neurobiology leads David Chalmers to wonder if we must consider dualism. Dualism is a desperate last resort. Presumably, we want - if at all possible - to conserve physicalism and the ontological unity of science. The "Schrödinger's neurons" proposal I explore is designed to test the conjecture that a perfect structural match exists. My best guess is that interferometry will reveal such a perfect structural match, not in four-dimensional space-time, but in the fundamental high-dimensional space required by the dynamics of the wavefunction. [For a nice introduction to wavefunction realism, maybe see "The World in the Wave Function: A Metaphysics for Quantum Physics" by Alyssa Ney]

Anyone familiar with quantum decoherence will recognise that a "Schrödinger's neurons" conjecture is (extremely!) far-fetched. Not merely is the credible effective lifetime of neuronal superpositions in the CNS less than femtoseconds. Also, the conjecture that you experience nothing but individual "cat states" inverts the measurement problem of quantum mechanics as normally posed. Common sense could very well be right. But I'm still curious. Let's use experiment rather than human intuition to lay the possibility to rest.

If you haven't watched the video already, you might enjoy Phil Goff vs. Sean Carroll. Goff defends consciousness fundamentalism: experience discloses the intrinsic nature of the physical. The strongest technical argument against constitutive panpsychism / non-materialist physicalism is the binding problem. The phenomenal binding / combination problem is the reason why Goff used to reject constitutive panpsychism / non-materialist physicalism as a possible solution to the Hard Problem (cf. "Why Panpsychism doesn't Help Us Explain Consciousness" by Philip Goff.) Why aren't we just micro-experiential zombies? Indeed the (ostensible) "structural mismatch" between phenomenology and neuroscience leads David Chalmers to wonder if we must abandon monistic physicalism.

Naively, we're discussing a "philosophical" rather than scientific question. Either you take constitutive panpsychism / non-materialist physicalism seriously as a solution to the Hard Problem of consciousness or you don't. How could we ever hope to test the proposal? However, if consciousness discloses the intrinsic nature of the physical, then we must consider the nature of reality not merely at ludicrously small distance scales, but ludicrously short temporal resolutions too. What will molecular matter-wave interferometry reveal? I don't know, but the protocol of a "Schrödinger's neurons" experiment is designed to find out. If the phenomenal binding of our minds is not a classical phenomenon, then the non-classical interference signature will tell us.
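For readers wondering where the sub-femtosecond lifetimes mentioned above come from, here is a rough order-of-magnitude sketch (my own, in the spirit of Tegmark's well-known 2000 estimates, not taken from DP's text; every number below is an assumed round figure). In the high-temperature limit of quantum Brownian motion, a superposition of a particle of mass $m$ over a separation $\Delta x$ decoheres faster than it relaxes by the squared ratio of the thermal de Broglie wavelength to the separation:

$$
\tau_{\mathrm{dec}} \;\approx\; \tau_{\mathrm{relax}}\left(\frac{\lambda_{\mathrm{th}}}{\Delta x}\right)^{2},
\qquad
\lambda_{\mathrm{th}} \;=\; \frac{\hbar}{\sqrt{2\,m\,k_{B}\,T}} .
$$

For an ion ($m \sim 10^{-26}\,\mathrm{kg}$) at body temperature ($T \approx 310\,\mathrm{K}$), $\lambda_{\mathrm{th}} \sim 10^{-11}\,\mathrm{m}$; taking $\Delta x \sim 10\,\mathrm{nm}$ (a roughly membrane-scale separation) and $\tau_{\mathrm{relax}} \sim 10^{-13}\,\mathrm{s}$ gives $\tau_{\mathrm{dec}} \sim 10^{-19}\,\mathrm{s}$ - far below a femtosecond, which is why the conjecture calls for an interferometric experiment rather than an appeal to intuition.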
Finite light, thank you. Yes, any physicalist theory of consciousness must explain qualia. Science needs to account for the existence, binding, diversity and causal efficacy of consciousness. You speak of "levels". For sure, human convenience dictates carving up reality into multiple layers - physics, chemistry, molecular biology, psychology, sociology and so forth. But ultimately reality has only one "level": physics. Ultimately, we need to "cash out" everything in terms of quantum field theory or its successor. For if we can't derive chemistry, biology (etc) from the underlying physics - at least in principle - then spooky "strong" emergence is real. By analogy, imagine if it weren't possible - again in principle - to derive the properties of software running on your desktop PC from the execution of the underlying machine code. "Strong" emergence would be magic. Anyhow, I explore a version of what philosophers call the intrinsic nature argument. Either the conjecture is too crazy for words or a live option – take your pick – but for a nice introduction see Philip Goff's "Galileo's Error" (2019) (My review: Galileo's Error?)

Just one correction to your comment above. I'm not arguing for a dynamical collapse theory of QM. Rather, whether you experience e.g. a live cat or a point-like particle incident on a perceptually experienced laboratory screen in a double-slit experiment, all you ever experience are individual "cat states", i.e. neuronal superpositions. As investigators from William James to David Chalmers have recognized, phenomenal binding is [classically] impossible. On a "Schrödinger's neurons" conjecture, only the ubiquity of the superposition principle of QM allows you to experience definite outcomes. Otherwise, you'd just be Jamesian mind-dust, incapable of perceiving anything at all. Ill-named human "observers" tend to conflate the (quantum) vehicle and its (subjectively classical) content. Yes, crazy stuff. Heaven knows if it's true. We won't know until we experiment to find out.

I agree with you. Consciousness is adaptive – or rather, phenomenally-bound consciousness is adaptive: micro-experiential zombies would soon starve or get eaten. But the fact that phenomenally-bound consciousness is fitness-enhancing doesn't explain the phenomenon in any deep sense. After all, telepathy and precognition would be fitness-enhancing too, yet psi phenomena are physically impossible. Yet the same ought to be true of consciousness; a few eliminative materialists bite the bullet. If (materialist) physicists and chemists correctly understand the properties of matter and energy, then subjective experience should be impossible. And if (materialist) neuroscientists correctly understand the properties of the central nervous system as a pack of discrete classical neurons, then phenomenally-bound subjective experience should be impossible too. The inadequacy of adaptive explanations is illustrated by how your molecular duplicate assembled from scratch – or the notorious "brain-in-a-vat" – would presumably be conscious just like you too. In other words, consciousness is an intrinsic property of configurations of matter and energy irrespective of whether these states have been harnessed by natural selection to play a computational-functional role in naturally evolved biological organisms. All the options seem absurd. I don't pretend to know if non-materialist physicalism is true. But its bizarre empirical predictions make the theory experimentally falsifiable. I'm probably mistaken.
If I understand you correctly, you are assuming a "dynamical collapse" theory. I don't. Indeed in my view, only the fact that the superposition principle of QM never breaks down allows you to experience a definite outcome in a phenomenal world-simulation where it does: How can we best resolve the problem of definite outcomes in quantum mechanics? I say a bit more about interpretations of QM e.g. here: Interpretations of QM. But I think you - and any investigator who posits a nonexperiential "fire" in the equations - need to explain precisely why we aren't zombies. And likewise, any investigator who assumes that we are a pack of decohered classical neurons needs to explain precisely why we aren't - at most - micro-experiential zombies. And above all, I think anyone with a theory of consciousness needs to focus on experiments that will (dis)confirm their theories to the satisfaction of proponents and critics alike. No worries if you think a "Schrödinger's neurons" proposal is crazy: I do too!

Well, if non-materialist physicalism is true, i.e. if experience discloses the intrinsic nature of the physical, then "p-zombies" are unphysical. Theories that assume the "fire" in the equations is non-experiential must explain why we aren't p-zombies. They fail. Hence the Hard Problem - a euphemism for the inconsistency of the ontology of (what's normally reckoned) our best theory of the world, scientific materialism, with the empirical evidence. I believe physicalism can be saved - but not materialism.

Medbud, first, thank you for the "Consciousness and the fallacy of misplaced objectivity" paper link. I essentially agree with its critique of the limitations of behavioral, functional, and neural-correlates approaches. Alas, as it stands I don't think IIT solves the Hard Problem, the binding problem, or the problem of causal efficacy. If we make the standard materialist assumption that the world's fundamental quantum fields are non-experiential, then Levine's "explanatory gap" stays unbridged. For we haven't derived sentience from insentience. Likewise, if we make the standard neuroscientific assumption that neurons are discrete, decohered, membrane-bound classical objects, then we haven't derived the properties of our phenomenally-bound minds and the world-simulations they run. No amount of functional integration or computational complexity can transmute classical Jamesian "mind-dust" into phenomenal objects or unified subjects of experience. At most, we should be micro-experiential zombies (Phil Goff's term). Anyhow, I don't for a moment expect most researchers to take seriously the conjecture I explore - the quantum-theoretic version of the intrinsic nature argument. I find a "Schrödinger's neurons" conjecture crazy too. But as I (very) belatedly came to realise, it's experimentally falsifiable. And if I'm confounded - as I probably will be - well, there is no disgrace in being proved wrong.

Perhaps you may want to unpack the meaning of (2). Have you in mind the speculative idea that the universe is a vast digital computation device? Or is the world "computational" merely in the sense that reality can formally be described by the unitary and deterministic Schrödinger evolution? IF non-materialist physicalism is true, i.e. experience is the intrinsic nature of the physical, the "fire" in the equations, then mathematical physics describes patterns of qualia. Thus only the physical is real (1). Its evolution is described by the universal Schrödinger equation (2). Only the physical has causal efficacy.
Hence (3). Yet does experience really disclose the intrinsic nature of the physical? I don't know... Galileo's Error

The conjecture that the universe/multiverse is conscious - universal mind - is often confused with the conjecture that the universe/multiverse is consciousness, i.e. non-materialist physicalism. Thus when dreamlessly asleep, your fundamental fermionic and bosonic fields may or may not be experiential - I don't know! - but there is no phenomenally-bound subject of experience. The same is true (as far as I can tell) for the universe/multiverse as a whole.

Maroš, relative to standard scientific background assumptions, your objection is spot-on. Normally, we assume two separate mysteries. Why does the physical universe exist at all? And how do physical matter and energy give rise to consciousness? (i.e. the Hard Problem of materialist metaphysics). However, if the intrinsic nature argument is sound, then consciousness discloses the essence of the physical. So there's only one fundamental mystery: why does anything exist? Both Andres and I take seriously the conjecture that mathematical physics describes patterns of qualia. Note that such non-materialist physicalism doesn't entail the animist notion that rocks or trees or connectionist systems or classical Turing machines are phenomenally-bound subjects of experience, i.e. minds. Minds and our world-simulations are an adaptation of organisms with the capacity for rapid self-propelled motion, i.e. animals. So how are minds possible? What happens when you "wake up" from a dreamless sleep?

Andrei, thanks. Any adequate theory of consciousness must explain: 1) the existence (the Hard Problem), 2) the phenomenal binding, 3) the causal efficacy, and 4) the diversity (the palette problem) of consciousness. And critically, any scientific theory of consciousness must make novel, precise, experimentally falsifiable predictions that proponents AND critics can agree will (dis)confirm the theory: Quantum Mind (Wikipedia)

So in answer to your questions:

1) Non-materialist physicalism is monist. Only the physical is real. Only the physical is causally effective. Formally, the world is exhaustively described by the equations of mathematical physics – presumably the universal Schrödinger equation if wavefunction monism is true.

2) "Non-materialist physicalism" and "physicalistic idealism" are just stylistic variants of the same idea – the intrinsic nature of the physical, the "fire" in the equations of physics, is experiential. Quantum field theory describes fields of sentience. See e.g. Phil Goff for a recent convert to the intrinsic nature argument. On this story, what makes animal minds special isn't experience per se, but rather phenomenal binding into virtual worlds of experience - like the phenomenal world-simulation your mind is running right now. Non-materialist physicalism does not propose that rocks or trees or digital computers - or indeed the whole cosmos - are subjects of experience.

3) Physicalism is the conjecture that no "element of reality" is missing from the mathematical formalism of physics – the discipline on which everything else (chemistry, biology etc) supervenes. Idealism is the conjecture that the intrinsic nature of reality is experiential. Most physicalists make the extra metaphysical assumption that the "fire" in the equations of physics is non-experiential. The upshot is "materialist" physicalism – "materialism" for short. Non-materialist physicalists drop the metaphysical assumption.
Fields of metaphysical gunk should go the way of luminiferous aether. The intrinsic nature of the world's fundamental quantum fields is no different inside and outside your head. To be stressed: no one who joins this group need sign up to any of the weird speculations of some of the admins. Thus you can believe we should use biotech to abolish suffering and be an "orthodox" materialist – although any talk of "orthodoxy" in questions of consciousness is strained. Science is baffled by consciousness.

[on S-risks and malevolent actors] Magnus, as I view it, phasing out the biology of suffering is intimately linked to the end of malevolence. I know we sometimes worry that temperamentally happy folk downplay the problem of suffering and the overriding moral urgency of its minimisation. Suffering just isn't in their life narrative. Sometimes one wants to say to ardent life-lovers, bluntly: why can't you be more compassionate?! But one thing that happiness doesn't do is embitter people, or breed malevolence - quite the opposite. Contrast the effects of suffering. Yes, sometimes suffering can deepen compassion; but frequently, suffering breeds resentment, nihilism and misanthropy. ["It is not true that suffering ennobles the character; happiness does that sometimes, but suffering, for the most part, makes men petty and vindictive." ― W. Somerset Maugham, The Moon and Sixpence] In a world in which we all enjoyed good health, bringing back ancestral horrors would be inconceivable. This reply makes it sound as though I'm unworried by s-risks. But here we come to info-hazards perhaps best passed over – info-hazards that won't arise if we implement the WHO mission of good health for all.

[on HI promotion] paradise engineering

[on phenomenal binding] Three distinct questions:
1. What is the phenomenal binding / combination problem?
2. What is the computational-functional power of binding in biological minds? Why does binding matter? How effectively can binding be computationally emulated by zombie information-processors that can't bind, e.g. classical Turing machines?
3. What are possible solutions to the binding problem? (e.g. Chalmersian dualism, quantum mind, topological segmentation, etc). Novel testable predictions and experiments?
Andrés explores (3). I do too. But unless you grok questions (1) and (2), then answers to (3) simply won't be of interest. In my view, a solution to the binding problem has profound ethical implications for EA, "AGI" and the long-term future of sentience.

[on general intelligence] Alternatively, the basis of general intelligence is the pleasure-pain axis and the ability to run a real-time, cross-modally-matched egocentric world-simulation that masquerades as the external environment populated by other cognitive agents. What is the neural architecture of intelligence?

[on mirror-touch synaesthesia] What's it like to be a mirror-touch synaesthete? Most conceptions of posthuman superintelligence - and alleged machine "AGI" - evoke a super-Asperger rather than hyper-empathetic perspective-taking... Meet the man who really feels your pain. The ability to understand other perspectives is as much a form of knowledge as the ability to do calculus. But I suspect full-spectrum (super-)intelligence will entail choosing to be selectively ignorant - which leads to obvious (and less obvious) paradoxes.

[on Longtermism] Andres, is more going on here than meets the eye? My source is second-hand, so I can't vouch I'm reporting either fairly or faithfully.
But if you're a Longtermist funder who believes our first, second and third priority should be combatting x-risks, then research into extreme suffering is potentially dangerous. If people realise how inconceivably bad suffering can be, then they may draw the "wrong" conclusions. What would you do to destroy Hell? There is an unresolved tension in EA between suffering-focused and x-risk folk - and funding is shaping directions accordingly. What's ironic and doubly frustrating is that fixing the problem of (severe) suffering may be one of the most effective ways to tackle x-risk. A world entirely of ardent life-lovers (like x-risks folk) is safer than a world where millions of people feel trapped in hell - and millions more feel ambivalent and conflicted about life. OK, I don't press this argument because it's motivated cognition on my part. But as far as I can tell, the argument stands.

Kenneth, prescribing opioids can lead to hyperalgesia - and thus even worse pain. Even "safe" nonsteroidal anti-inflammatory drugs (NSAIDs) can lead to more pain (cf. https://www.science.org/doi/10.1126/scitranslmed.abj9954). By contrast, consider ensuring that all new kids are born with a benign, low-pain version of SCN9A (gene-drives.com). Genome reform is longtermist, life-affirming and x-risk-reducing - quite aside from sparing innumerable sentient beings the burden of severe suffering. So is genome reform an EA cause priority? Not currently, alas.

Adam, what's the most effective way to mitigate, prevent and eventually abolish suffering? If you're right, then NUs should be trying to engineer a vacuum phase transition, build seed-AI paperclippers, infiltrate x-risk institutes (etc). I think such an approach would be misconceived. OK, my objection is partly technical: life is now ineradicable. But my main reason for urging life-affirmation is that our best hope of ending suffering is to build the broadest possible coalition of secular and religious support. A biohappiness revolution - presumably in the guise of good health for all in the spirit of the WHO - is potentially saleable. Retiring life on Earth will always be a minority view.

Ruth, on a theoretical note. I've long assumed that if/when we phase out the biology of suffering here, then - after multiple safeguards are established - its previous existence will become irrelevant to life in our forward light-cone. There won't be suffering - I've assumed - in the medium or distant future. That's the point of focusing on germline engineering for gradients of bliss, not just symptomatic relief. But this assumption of future irrelevance can be challenged. Maybe (super)intelligent moral agents have a responsibility to launch cosmic rescue missions if suffering sentience exists within our cosmological horizon. Maybe my views on the impossibility of digital sentience are mistaken. This debate should be viewed in the context of "longtermism" in the effective altruist movement - though longtermism means something different to classical and negative utilitarians. Is there a label for our ethical stance / policy prescriptions that is both accurate and - if not inspiring - at least doesn't alienate? "Suffering-focused ethics" may turn out to be least unsatisfactory. But it's still enough to trigger dismissive responses from people whom one might hope would be allies - as "The dismal dismissal of suffering-focused views" illustrates.

[on x-risks and s-risks] 1) Happiness is life-affirming, whereas raw suffering is nihilistic - it would rather not exist.
Replacing suffering with a more civilised signalling system would – IMO - also banish one of the biggest underlying sources of x-risk as conceived by longtermists. In practice, x-risks folk can be suspicious of suffering-focused ethics because of its seemingly life-denying, nihilistic implications. But the underlying nihilism derives from the suffering, not its messengers. Suffering drains life of meaning; happiness does the opposite. I'm not sure that philosophy of mind and/or neuroscience tells us anything directly about the ethics of asymmetries between suffering and happiness. Intuitively, severe suffering is more terrible than intense pleasure is wonderful. However, consider the _insanely_ arduous and painful things that sentient beings – both human and nonhuman animals - will sometimes do to obtain a fleeting, fitness-enhancing pleasure like sex. So I think the asymmetry between pain and pleasure is ethical rather than any inherent difference in their comparative intensity.

2) As far as I can tell, classical Turing machines and classically parallel connectionist systems have the wrong sort of architecture to generate phenomenally-bound sentience - a mind. Executing their code faster, or making the code more complicated, or devising smarter training algorithms (etc) can't generate unified subjects of experience on pain of irreducible "strong" emergence. "Strong" emergence is akin to magic. If we live in a world where "strong" emergence is real, nothing is lawfully forbidden. So philosophers and scientists don't like the idea. What's more, the phenomenal binding of biological minds is exceedingly computationally powerful, as rare partial failures of both local and global binding in humans illustrate. In other words, I'm a functionalist – just not a Turing machine functionalist. Either way, it's worth distinguishing artificial (1) sentience, (2) phenomenally-bound sentience, and (3) phenomenally-bound sentience with a pleasure-pain axis. Presumably, only (3) inherently, non-instrumentally matters. I argue that phenomenally-bound sentience with a pleasure-pain axis is (probably) peculiar to biological minds.

3) I think we already know enough technically – but not yet sociologically or politically - to sketch out blueprints for a happy biosphere. I have no idea how to create a digital mind. In my view, attributing a mind to Deep Blue or AlphaGo or GPT-3 (etc) is an anthropomorphic projection on our part.

4) Just as research into some kinds of x-risks increases their likelihood – aspiring world-destroyers should presumably burrow deeply into x-risks institutes - research into some kinds of s-risks should be discouraged lest it makes their occurrence more likely. Such risks - as I conceive them - will effectively vanish if we fix the problem of suffering by making its occurrence physically impossible. However, I understand that researchers with different conceptions of consciousness/AGI/s-risks may not share this view. How can incisive critical thinking be encouraged if one is simultaneously trying to suppress some kinds of knowledge as too dangerous? I don't know.

5) IMO, we should anticipate artificial biological minds, artificial hybrid cyborg minds, _maybe_ one day artificial nonbiological quantum minds, but not digital minds nor connectionist minds. Questions about minds are different from what digital computers and connectionist systems can and can't do: the upper bounds to zombie intelligence are unknown.
[on the Biohappiness Revolution] No prospective parent - literally zero - wants to have depressive or pain-ridden children. So fixing the problem of suffering depends, critically, on shifting the Overton window. This isn't a question of the sanctity of life. We should support its enshrinement in law. Humans can't be otherwise trusted. In my view, ALL responsible prospective parents should consider preimplantation genetic screening, counselling and germline editing for their children to ensure a high pain-threshold, hedonic range and hedonic set-point - in short, lifelong good health. The only long-term solution to the problem of suffering is germline reform. So how can we win popular support for a reproductive revolution? Can humanity rise to the challenge?

Giordano, for each of our existing core emotions, we should ask whether we want to retain (1) its "raw feels" and (2) its functional role. With jealousy, for example, we could well dispense with both. But the functional analogue of anxiety will presumably be needed for the indefinite future.

Giordano, you remark, "Seneca already taught us that the mind that is anxious about future events is dejected..." But compare the kind of functional anxiety one has, say, playing a computer program at chess. One can spend a lot of time considering potential risks and threats to one's pieces without being unhappy. Hopefully, life can one day be similar - with the difference that one sometimes wins rather than always loses!

[Thought Criminal writes] "@David Pearce Do you have any advice for people who want to make future biohappiness a reality and want to reduce S-risks? I know about the Center on Long-Term Risk and the Center for Reducing Suffering, and have donated to both of them. What else can someone like me do to optimize the far future and reduce the risks of astronomical suffering as much as possible? I think the main obstacles to creating a biohappiness utopia based on gradients of bliss in the long term are game-theoretic, rather than technical. Even if it becomes technologically possible to replace suffering with gradients of bliss, it could also become technologically possible to do the exact opposite. It's possible that Nash equilibria could emerge that result in large amounts of suffering being produced, even if it's technologically possible to avoid it. Trying to spread utilitarian and suffering-focused values may therefore be the area where we should push the most."

Thought Criminal, many thanks. In my view, the biology of suffering is like smallpox. Once we've got rid of it, we're never going to bring it back. Hence IMO advocates of suffering-focused ethics (like me and you) should work on promoting blueprints for a biohappiness revolution, presumably under the auspices of the WHO rather than some fringe group. However, as you suggest, this analysis can be challenged. For instance, I assume that classical digital computers, classically parallel connectionist systems and silicon (etc) robots can't solve the phenomenal binding problem. So such information-processing systems don't support a pleasure-pain axis. Classical computers aren't a source of s-risk, not directly at any rate. I'm a functionalist, but not a Turing machine functionalist: digital computers are zombies. But what if I'm wrong? What if digital computers can support phenomenally-bound subjects of experience and maybe astronomical amounts of suffering? I'll have to leave it to you whether you judge my dismissal of (non-trivial) digital sentience is compelling.
The view that phenomenal binding is nonclassical is controversial. Fortunately, the issue will ultimately be settled experimentally: Schrödinger's Neurons. Maybe I'm mistaken. That said, I do worry about some (pre-biohappiness revolution) s-risks: DP on S Risk. But there's a huge complication to any debate. As with x-risks, any researcher should ask: does my publicly discussing such risks diminish or exacerbate them? After all, anyone who wants to destroy the world might decide to join an x-risk institute and pick the brightest minds for apocalyptic ideas. The counterpart of such behaviour for s-risk is too depraved for words, but I don't know if the possibility can be excluded. [I suppose there's a possibility (likelihood?) of taking oneself too seriously here. Most intellectuals would probably (privately) relish the role of Dangerous Thinker. But whereas self-importance is harmless, there's also the morally serious risk of unwittingly saying something catastrophically stupid. S-risk is one of the very few topics on which I self-censor.]

Game theory? Technically speaking, life can be based (1) entirely on information-sensitive gradients of bliss, (2) entirely on information-sensitive gradients of misery, or (3) (most commonly today) a mixture of information-sensitive gradients of pleasure and pain. On the African savannah, being temperamentally hypomanic or hyperthymic was a high-risk, high-reward strategy (cf. the Rank Theory of depression). Elsewhere, I've speculated that low mood can be understood in the context of Dawkins' "extended phenotype" theory. My ancestor's ability to make other members of the tribe cowed and depressed could enhance his genetic fitness by giving him more reproductive opportunities (as a dominant alpha). A countervailing tendency here would be the adaptive value of robustness / depression-resistance in fellow tribesmen in battles with rival tribes – one wants strong allies in battle, not depressive milksops. So selection pressure for hedonic tone is complicated.

OK, so what about future selection pressure, which is what we're interested in? I argue that the nature of selection pressure itself is going to change in the coming era of designer babies: The Reproductive Revolution. Not least, the ability of prospective parents to preselect the alleles and allelic combinations of their kids in anticipation of their likely psychological and behavioural effects will exert strong selection pressure in favour of a genetic predisposition to (super)happiness – and against any predisposition to pain and depression. Alas, talks and discursive essays are very different from any rigorous game-theoretic modeling. What should you do? Well, I don't know how much of the analysis above you'd endorse.

Compare the WHO commitment to universal health, extravagantly defined as "complete physical, mental and social well-being". What about consent? Should the WHO constitution be amended explicitly to protect the rights of people to experience ill-health? Or is such a clause redundant? That said, the issue of consent does crop up as an objection to HI. So talking about phasing out "involuntary suffering" is wise. We may predict the abolition of all suffering. But that's a distinct question. Minor complications aside, all nonhuman animals show a clearly expressed wish not to be harmed - not to starve, go thirsty, get attacked by predators and so forth. The issue of lack of consent does quite often arise as an objection to modifying predatory animals.
But as with humans, there is an immense difference between a right not to be harmed and a notional "right to harm". In practice, (ex-)predators will be helped as much as non-predators by compassionate stewardship. But a pan-species welfare state is indeed paternalistic. Does this matter? Compare human care of toddlers.

Chris, if one has more than a single sovereign principle, then there will be circumstances where they can come into conflict. Truth or Pleasure? Well, if one could glimpse all the suffering in the world, then one would go insane. So I think our long-term goal should be invincible ignorance of Darwinian life - but only after we are certain that all our ethical duties have been discharged. Maybe responsible cosmic stewardship can eventually be delegated to zombie AI. The hedonic treadmill can become the hedonistic treadmill... ("an Analysis, an Evaluation and a Modest Defence")

How many genes need be targeted (1) to get rid of severe pain and suffering, and (2) to abolish experience below hedonic zero? A pessimistic answer would invoke the omnigenic model: What if almost every gene affects everything? But as far as I can tell, pain-sensitivity and hedonic tone are amenable to radical manipulation with a handful of genetic tweaks - which would make the problem of suffering more tractable. So how to shift the Overton window?

[on quantum mechanics] Are you not shocked? Why should the universe have been quantum-mechanical? ("If you want, you can divide Q into two subquestions: Q1: Why didn't God just make the universe classical and be done with it? What would've been wrong with that choice? Q2: Assuming classical physics wasn't good enough for whatever reason, why this specific alternative? Why the complex-valued amplitudes? Why unitary transformations? Why the Born rule? Why the tensor product?")

Why quantum mechanics? Because not even God can create information ex nihilo. Zero information = all possible descriptions = Everett's multiverse. Unitary-only QM is the quantum version of the Library of Babel: Why is there something rather than nothing

Peter Byrne's biography of Everett is good - though the meaning of a "biography" is different in Everettian QM. Everett himself was heavily involved in writing software targeting cities in nuclear conflict. Indeed (if unitary-only QM is true) googols of branches presumably exist where Everett's software was used to kill millions of people in thermonuclear war. Everett must have known of his complicity: he believed in his own interpretation of QM. Indeed, "our" survival (cf. the Cuban missile crisis, nuclear submariner Vasili Arkhipov "saving the world", etc) may be just an anthropic selection effect. The Many Worlds of Hugh Everett

How should we act if wavefunction realism is true? Everettian QM. As Penrose says, the only way to avoid Everett's multiverse is a "dynamical collapse" theory. QM without the collapse postulate wasn't invented to explain the appearance of fine-tuning: rather, no-collapse QM incidentally accounts for the appearance of fine-tuning as an anthropic selection effect.

[on politics] (Mao Zedong) Is the human species capable of reasoned politics? EA Magnus Vinding's new book... Reasoned Politics. Magnus, set aside transhumanism and the abolitionist project as impossibly utopian. Focus on what we both agree is most morally urgent, i.e. preventing and mitigating severe human and nonhuman animal suffering.
Any approach to socio-political reform that ignores the biological-genetic roots of severe suffering is just rearranging the deckchairs - Liberty, Equality, Justice, Democracy (etc). Embracing secular Buddhist political values alone just doesn't get to the heart of the problem. In other words, germline reform isn't just a prerequisite of some futuristic abolitionist project. Politically, it's a prerequisite of preventing the 800,000+ annual suicides, hundreds of millions of chronic pain victims, hundreds of millions of victims of chronic depression, and untold wild animal suffering - horrendous stuff that conventional AND unconventional bioconservative politics can't touch. The less the abolitionist project is associated with NU the better. If I could think of a snappier way to say it, I'd point to the WHO project as laid out in its constitution - although complete health as so defined is wildly inconsistent with preference utilitarianism. Does hedonic uplift via recalibration have any "losers"?

[on future life]
We're asked to consider "An imaginary clinic based in a country with minimal oversight of heritable human genome editing that offers these services to international clients following in vitro fertilization and preimplantation genetic diagnosis". If I were setting up such a clinic, I'd use the language of depression-resistance and health, not transhumanism - and cite the WHO constitution. But such clinics should be ubiquitous. WHO recommendations on human genome editing

Bioethicists should be calling on the government to set up a research centre dedicated to ensuring the well-being of children born with unedited genomes... How to protect the first 'CRISPR babies' prompts ethical debate ("Fears of excessive interference cloud proposal for protecting children whose genomes were edited, as He Jiankui's release from jail looks imminent.")

Dan, why conserve sentient malware when we can become full-spectrum superintelligences? The whole Darwinian era will be forgotten like a bad dream. But what are full-spectrum superintelligences? As you know, I think digital computers are zombies; classical Turing machines are invincibly ignorant; and mind uploading is a pipedream. So-called AGI is cargo-cult science. Instead, our posthuman successors will be our biological descendants.

Tim, the hedonic range and hedonic set-points of modern humans don't differ significantly from those of our hunter-gatherer ancestors. So predictions of a biohappiness revolution don't rest on naïve extrapolation - the bane of traditional futurology. Rather, I assume that we will shortly understand the molecular basis of pain, pleasure and phenomenal binding. In consequence, Darwinian life will gain mastery over its reward circuitry. The biological-genetic dial-settings of pain sensitivity, hedonic range and hedonic set-points will shortly be individually (and societally) chosen rather than the gift of God/the Devil. What settings will you choose for you and your family? Critically, hedonic uplift won't involve the messy trade-offs (e.g. increased taxes!) that bedevil traditional socio-economic reform. For sure, completion of the abolitionist project assumes mankind's circle of compassion will expand to even the humblest animal life-forms. But technology massively amplifies the effects of even minimal benevolence. Most people don't enjoy witnessing or contemplating the suffering of others - visible distress is upsetting - and we're heading for a global panopticon. Resource depletion?
Well, bitcoin mining aside, a shift to virtual lives and the metaverse promises effectively unlimited virtual resources and reduced consumption of traditional staples. Superyachts for all. I'm sounding optimistic. I'm not: I'm an über-pessimistic button-pressing NU who'd end the whole shebang in a heartbeat if he had the chance. But my dark world-picture focuses on issues such as the nature of time and the universal wavefunction rather than the usual topical doom-and-gloom. So I'll bang the drum for a biohappiness revolution as long as I can... The Imperative To Abolish Suffering

[on transhumanism]
Flying the flag for a "triple S" civilisation... Spotify & mp4

Not many transhumanists are negative utilitarians. But you don't need to be NU to think we should eradicate suffering... Transhumanism and NU

Living Forever, Gene Editing, and Psychedelics with David Pearce

Marxist transhumanism? The Latent Transhumanism of Marxism

Buddhist nirvana reminds me of the upshot of a utilitronium shockwave... Nirvana (Wikipedia)

Andreas, it can certainly seem we're tasting the transhuman. Future shock! On the other hand, we have almost exactly the same hedonic range, core emotions, egocentric world-simulations, serial thought episodes, reproductive habits and default state of consciousness as archaic humans. Indeed, contemporary transhumanists probably have more in common with a chimpanzee troop than with transhumans, let alone posthumans.

DP chats to Rajat Sirkanungo about transhumanism, spirituality and paradise engineering.

Shifting the Overton window is hard. Rather than a WHO-led Hundred Year Plan - the ideal! - we can anticipate slow, fitful, incremental change over hundreds of years, starting with obvious genetic disorders. For sure, "Only a few hundred years to go!" doesn't have an inspiring ring. Perhaps there will be revolutionary socio-political shifts. The idea of life based on gradients of bliss is ultimately too compelling to go away - it just needs more powerful and persuasive advocates.

A critique of Nietzschean transhumanism: "Infinite Monkeys: Nietzsche and the Cruel Optimism of Personal Immortality"

[on "La Revolucion de la Biofelicidad"]
Our descendants will be motivated by gradients of genetically reprogrammed well-being, orders of magnitude richer than today's most sublime experiences... La Revolucion de la Biofelicidad; La Revolución de la Biofelicidad (mp4). And the English version: The Biohappiness Revolution

Perhaps using eugenics.org is a bit edgy. But Darwinian life needs some serious recoding.
Dynamics of diamagnetic Zeeman states ionized by half-cycle pulses

We study the dynamical evolution of diamagnetic Zeeman states in hydrogen and sodium atoms ionized by half-cycle pulses. The eigenstates of the combined Coulomb-diamagnetic potential are determined by solving the Schrödinger equation using a grid-based pseudopotential method. We study states with principal quantum number n between 15 and 20 in the l-mixing regime at a magnetic field of 6 T. Diamagnetic states that are initially localized parallel and perpendicular to the magnetic field are subjected to the electric field of a half-cycle pulse (HCP) and their time evolution is monitored. We calculate the total ionized fraction, and also the spectrum of the ionized photoelectrons, keeping the total momentum transferred by the HCP constant and varying the HCP width. We find differences in both the amount of ionization and the form of the photoelectron spectrum for the two classes of localized states. In the impulsive limit, where the width of the pulse is much smaller than typical time scales in the system, the differences are due to the different initial momentum distributions of the parallel and perpendicular states. For longer pulse widths, we find that ionization is suppressed as compared with the impulsive limit. The states localized perpendicular to the magnetic field are found to be much more sensitive to the HCP width than the parallel states, which reflects the fact that the two classes of states interact with different parts of the diamagnetic potential during the HCP.

Publication source: Physical Review A - Atomic, Molecular, and Optical Physics.
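To make the impulsive limit concrete: when the pulse is much shorter than the relevant orbital times, its only effect on the wavefunction is a momentum boost q = -∫E(t)dt, applied as a position-space phase. Below is a minimal one-dimensional sketch in atomic units - an illustration of the sudden approximation only, not the paper's three-dimensional grid-based pseudopotential calculation; the Gaussian "state" and the kick strength are made-up stand-ins:

```python
import numpy as np

# 1D grid (atomic units, hbar = 1)
n, L = 2048, 400.0
dz = L / n
z = (np.arange(n) - n // 2) * dz
k = 2 * np.pi * np.fft.fftfreq(n, d=dz)   # conjugate momentum grid

# made-up stand-in for an initial bound state (a real Rydberg state is not Gaussian)
sigma = 10.0
psi = np.exp(-z**2 / (2 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dz)

# sudden approximation: the HCP multiplies psi by exp(i*q*z), boosting it by q
q = 0.05                                  # total momentum transferred by the HCP
psi_kicked = np.exp(1j * q * z) * psi

# the kick rigidly shifts the momentum distribution |psi(k)|^2 by q
pk_before = np.abs(np.fft.fft(psi))**2
pk_after = np.abs(np.fft.fft(psi_kicked))**2
print("peak momentum before:", k[np.argmax(pk_before)])
print("peak momentum after: ", k[np.argmax(pk_after)])   # ~ q
```

This is why, in the impulsive limit, differences between the parallel and perpendicular states can only come from their different initial momentum distributions: the pulse itself does nothing but translate those distributions by q.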
Sunday, March 30, 2014

Polarization-controlled Photon Emission from Site-controlled InGaN Quantum Dots

Left to Right: (top row) Chih-Wei Hsu, Anders Lundskog, K. Fredrik Karlsson, Supaluck Amloy. (bottom row) Daniel Nilsson, Urban Forsberg, Per Olof Holtz, Erik Janzén.

Authors: Chih-Wei Hsu1, Anders Lundskog1, K. Fredrik Karlsson1, Supaluck Amloy1,2, Daniel Nilsson1, Urban Forsberg1, Per Olof Holtz1, Erik Janzén1
1Department of Physics Chemistry and Biology (IFM), Linköping University, Sweden.
2Department of Physics, Faculty of Science, Thaksin University, Phattalung, Thailand.

A common requirement for realizing several optoelectronic applications, e.g. liquid-crystal displays, three-dimensional visualization, (bio)-dermatology [1] and optical quantum computers [2], is linearly-polarized light. In existing applications, linearly-polarized light is generated by passing unpolarized light through a combination of polarization-selective filters and waveguides, with an inevitable efficiency loss as the result. These losses could be drastically reduced by employing sources which directly generate photons with the desired polarization directions. Quantum dots (QDs) have validated their important role in current optoelectronic devices, and they are also seen as promising light sources for the generation of "single-photons-on-demand". Conventional QDs grown via the Stranski-Krastanov (SK) growth mode are typically randomly distributed over planar substrates and possess different degrees of anisotropy. The anisotropy in the strain field and/or the geometrical shape of each individual QD determines the polarization of the QD emission. Accordingly, a cumbersome post-selection of QDs with the desired polarization properties among the randomly-distributed QDs is required for device integration [3]. Consequently, an approach to obtain QDs with controlled site and polarization direction is highly desired.

Figure 1. Magnified SEM images of GaN EHPs with various α. The values of α are defined as the angles between the long axis of the EHPs and the underlying GaN template.

Here, we demonstrate an approach to directly generate linearly-polarized QD emission by introducing site-controlled InGaN QDs on top of GaN-based elongated hexagonal pyramids (GaN EHPs). The polarization directions of the QD emission are demonstrated to be aligned with the orientations of the EHPs (Figure 1). The reliability and consistency of this architecture are tested by a statistical analysis of InGaN QDs grown on GaN EHP arrays with different in-plane orientations of the elongations. Details of the process and optical characterizations can be found in our recent publication [4].

Figure 2. a) µPL spectra of EHPs with the polarization analyzer set to θmax (θmin), by which the maximum (minimum) intensity of the sharp emission peaks is detected. b) Distribution histograms of measured polarization directions from the GaN EHPs for various α.

Figure 2a shows representative polarization-dependent micro-photoluminescence (µPL) spectra from an EHP measured at 4 K. A broad emission band peaking at 386 nm and several emission peaks in the range between 410 and 420 nm are observed. These sharp emission lines originate from the multiple QDs formed on top of the GaN EHP.
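As an aside, the degree of linear polarization P and the polarization direction φ discussed below are the kind of quantities one extracts from such analyzer scans. A minimal sketch, assuming a Malus-type modulation I(θ) = I0(1 + P cos 2(θ - φ)) and synthetic data - this is illustrative only, not the authors' analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def malus(theta, I0, P, phi):
    # I(theta) = I0 * (1 + P*cos(2*(theta - phi))): linear-polarization modulation
    return I0 * (1.0 + P * np.cos(2.0 * (theta - phi)))

# synthetic analyzer scan standing in for one sharp QD emission line (values invented)
rng = np.random.default_rng(0)
theta = np.deg2rad(np.arange(0.0, 180.0, 10.0))
I = malus(theta, 1.0, 0.84, np.deg2rad(60.0)) + 0.02 * rng.normal(size=theta.size)

popt, _ = curve_fit(malus, theta, I, p0=(I.mean(), 0.5, 0.0))
I0_fit, P_fit, phi_fit = popt
print(f"P = {abs(P_fit):.2f}, polarization direction = {np.rad2deg(phi_fit) % 180:.1f} deg")
# equivalently, P = (Imax - Imin) / (Imax + Imin)
```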
Despite the formation of multiple QDs on a GaN EHP, the emission peaks from all QDs tend to be linearly-polarized in the same direction, as revealed in Figure 3a, and all peaks have their maximum and minimum intensities in the same direction, θ. The correlation between the outcome of the polarization-resolved measurements and the orientations of the GaN EHPs (as defined by α) reveals that the polarization direction is parallel to the elongation (α ≅ φ in Figure 2b). A polarization guiding (α ≅ φ) is unambiguously revealed for GaN EHPs with α = 0°, 60° and 120°. For the remaining group of GaN EHPs with α = 30°, 90° and 150°, preferential polarization directions are seemingly revealed, but α ≅ φ is less strictly obeyed. The polarization-guiding effect and the high degree of polarization are further elucidated in the following.

Figure 3. a) Statistical histogram showing the overall measured degree of polarization from GaN EHPs. b) The computed degree of polarization plotted as a function of the split-off energy. The QD shape is assumed to be lens-shaped with an in-plane asymmetry of b/a = 0.8. The single-particle electron (hole) eigenstates are obtained from an effective-mass Schrödinger equation (with a 6-band k·p Hamiltonian), discretized by finite differences. The Hamiltonians include strain and internal electric fields originating from spontaneous and piezoelectric polarizations. The polarized optical transitions are computed from the dipole matrix elements.

The polarization direction of the ground-state-related emission from the QDs reflects the axis of the in-plane anisotropy of the confining potential, concerning both strain and/or QD shape [5]. The same polarization direction monitored for the different QDs indicates that all grown QDs possess a unidirectional in-plane anisotropy. The polarization control observed in our work can be explained in three ways: (1) The GaN EHPs transfer an anisotropic biaxial strain field to the QDs, resulting in the formation of elongated QDs. The direction of the strain field in the EHPs should be strongly correlated with α. (2) Given that the top parts of the GaN EHPs are fully strain-relaxed, as concluded for the GaN SHPs [6], the asymmetry induced by a ridge will result in an anisotropic relaxation of the in-plane strain of the QDs on the ridge. The degree of relaxation is higher along the smallest dimension of the top area, i.e. along the direction perpendicular to the ridge elongation, resulting in a ground-state emission of the QD that is polarized in parallel with the ridge. (3) The edges of the ridges form a Schwoebel-Ehrlich barrier, which prevents adatoms from diffusing out from the (0001) facet [7,8]. Since the adatoms have a larger probability of interacting with an edge barrier parallel rather than orthogonal to the ridge elongation, the adatoms will preferentially diffuse parallel to the ridge. As the strain and the shape of the QDs are not independent factors and accurate structural information on the QDs is currently unavailable, the predominant factor determining the polarization remains to be verified. The degree of polarization of the III-Ns is more sensitive to in-plane asymmetry than that of other semiconductors, due to the significant band mixing and the identical on-axis effective masses of the A and B bands in the III-Ns [5]. A statistical investigation of the value of P performed on 145 GaN EHPs reveals that 93% of the investigated GaN EHPs possess P > 0.7, with an average value of P = 0.84 (Figure 3a).
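For readers who want to play with the idea, here is a deliberately oversimplified single-band toy - my own sketch, not the calculation behind Figure 3b, which uses a 6-band k·p Hamiltonian with strain and built-in fields. It puts a 2D effective-mass particle in an anisotropic well with in-plane asymmetry b/a = 0.8, discretized by finite differences in the same spirit; the ground state inherits the elongation of the confinement, which is the geometric origin of the polarized emission:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, L = 64, 20.0                        # grid points per axis, box size (nm, illustrative)
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

b_over_a = 0.8                          # in-plane asymmetry from the figure caption
V = 0.5 * ((X / 5.0)**2 + (Y / (5.0 * b_over_a))**2)   # well elongated along x

# 2D Laplacian by finite differences (units with hbar^2 / 2m* = 1)
D2 = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
H = -(sp.kron(D2, I) + sp.kron(I, D2)) + sp.diags(V.ravel())

vals, vecs = eigsh(H.tocsc(), k=2, which="SA")
psi0 = vecs[:, 0].reshape(n, n)

# anisotropy of the ground state tracks the elongation of the confinement
ratio = np.sqrt((psi0**2 * X**2).sum() / (psi0**2 * Y**2).sum())
print("rms extent along x / rms extent along y =", ratio)   # > 1: elongated along x
```

In the real device the dipole matrix elements between such anisotropic electron and hole states are what set P; the toy only shows the first ingredient, the unidirectional elongation of the wavefunction.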
The polarization of the emission is related to the QD asymmetry determined by the anisotropy of the internal strain and electric fields, as well as by the structural shape of the QD itself [5]. Numerical computations predict a high degree of polarization for small or moderate in-plane shape anisotropies of GaN and InGaN QDs [9]. This is related to the intrinsic valence band structure of the III-Ns. In particular, the split-off energy has been identified as the key material parameter determining the degree of polarization for a given asymmetry. Figure 3b shows the computed degree of polarization plotted against a variation of the split-off energy. Given a fixed asymmetry of the QDs, it is concluded that the material with the smallest split-off energy exhibits the highest degree of polarization. The high degree of polarization observed for InGaN QDs can be rationalized by the small split-off energies of InN and GaN, resulting in an extreme sensitivity to the asymmetry. Such a characteristic implies an inherent advantage for the generation of photons possessing a specific polarization.

In summary, we have demonstrated an effective method to achieve site-controlled QDs emitting linearly-polarized light with controlled polarization directions by growing InGaN QDs on top of elongated GaN pyramids in a MOCVD (metal organic chemical vapor deposition) system. The polarization directions of the QD emission can be guided by the orientations of the underlying elongated GaN pyramids. Such an effect can be realized because the elongated GaN pyramids provide additional in-plane confinement for the InGaN QDs, imprinting a unidirectional in-plane anisotropy into the QDs, which subsequently emit photons linearly-polarized along the elongated direction of the GaN EHPs.

[1] Zeng Nan, Jiang Xiaoyu, Gao Qiang, He Yonghong, Ma Hui, "Linear polarization difference imaging and its potential applications", Applied Optics, 48, 6734-6739 (2009). Abstract.
[2] E. Knill, R. Laflamme, G.J. Milburn, "A scheme for efficient quantum computation with linear optics", Nature, 409, 46-52 (2001). Abstract.
[3] Robert J. Young, D.J.P. Ellis, R.M. Stevenson, Anthony J. Bennett, "Quantum-dot sources for single photons and entangled photon pairs", Proceedings of the IEEE, 95, 1805-1814 (2007). Abstract.
[4] Anders Lundskog, Chih-Wei Hsu, K. Fredrik Karlsson, Supaluck Amloy, Daniel Nilsson, Urban Forsberg, Per Olof Holtz, Erik Janzén, "Direct generation of linearly-polarized photon emission with designated orientations from site-controlled InGaN quantum dots", Light: Science & Applications, 3, e139 (2014). Full Article.
[5] R. Bardoux, T. Guillet, B. Gil, P. Lefebvre, T. Bretagnon, T. Taliercio, S. Rousset, F. Semond, "Polarized emission from GaN/AlN quantum dots: single-dot spectroscopy and symmetry-based theory", Physical Review B, 77, 235315 (2008). Abstract.
[6] Q.K.K. Liu, A. Hoffmann, H. Siegle, A. Kaschner, C. Thomsen, J. Christen, F. Bertram, "Stress analysis of selective epitaxial growth of GaN", Applied Physics Letters, 74, 3122-3124 (1999). Abstract.
[7] O. Pierre-Louis, M.R. D'Orsogna, T.L. Einstein, "Edge diffusion during growth: The kink Ehrlich-Schwoebel effect and resulting instabilities", Physical Review Letters, 82, 3661-3664 (1999). Abstract.
[8] S.J. Liu, E.G. Wang, C.H. Woo, Hanchen Huang, "Three-dimensional Schwoebel-Ehrlich barrier", Journal of Computer-Aided Materials Design, 7, 195-201 (2001). Abstract.
[9] S. Amloy, K.F. Karlsson, T.G. Andersson, P.O. Holtz, "On the polarized emission from exciton complexes in GaN quantum dots", Applied Physics Letters, 100, 021901 (2012). Abstract.
Tuesday, 20 January 2015

Funeral of Schrödinger's Cat in Sweden

Swedish physics professors Karl Erik Eriksson and Bengt Gustavsson performed a symbolic academic funeral of (i) Schrödinger's cat, along with (ii) multiverses and (iii) probabilistic dice interpretations of quantum mechanics, in a worthy ceremony at the Alma-Löv art museum in Värmland in the heart of Sweden on November 20, 2014. I fully agree with these professors of physics that the modern physics of (i)-(iii) is dead and that the funeral thus puts an end to three tragic episodes of physics, the Queen of sciences and a tremendous success; see the post of Jan 16. From the ashes a new form of quantum mechanics may emerge, maybe in the form of a physical quantum mechanics based on a second-order real-valued Schrödinger equation without cat, dice and parallel worlds, as discussed in recent posts. Recall that the reason to introduce the dice, leading to the cat and parallel worlds, was that the standard first-order complex-valued form of Schrödinger's equation does not describe any physics.
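For orientation - this is my gloss, not necessarily the exact formulation advocated in the post - one standard way to obtain a second-order real-valued form is to split the wavefunction into real and imaginary parts, ψ = u + iv (with u, v real and H a real operator), and eliminate v:

```latex
i\hbar\,\partial_t \psi = H\psi,\quad \psi = u + iv
\;\Longrightarrow\;
\hbar\,\partial_t u = Hv,\quad \hbar\,\partial_t v = -Hu
\;\Longrightarrow\;
\hbar^2\,\partial_t^2 u = -H^2 u .
```

The single second-order equation for the real field u carries the same information as the first-order complex-valued equation it came from.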
Friday, February 16, 2007

Black Saturn

Henriette Elvang (MIT) spoke about the Black Saturns, a work with Pau Figueras and, very recently, also with Roberto Emparan. Because I teach during the duality seminars (today: angular dependence of the photoelectric effect in advanced QM), she was nice enough to repeat the talk for me, in a more interactive form, and it was even more interesting than I expected. A black Saturn is a black hole surrounded by a black ring. The ring's angular momentum creates a force that repels it from the black hole in the middle. The solution will have negative modes signaling instabilities, but it is a classical solution anyway. In their case, they construct solutions to five-dimensional pure gravity, i.e. Ricci-flat geometries, with an unusual structure of horizons. The basic method to calculate these solutions goes back to a 1917 paper by Hermann Weyl - a paper whose content is described in Wald's book on GR. I didn't know about it, but Weyl discovered an early version of the LLM construction 90 years ago. I find it quite impressive that a mathematician could find such an LLM-like construction in 1917 - the same year when crackpots in less advanced countries, such as Vladimir Lenin, were able to impress whole nations with their dumb leftist ideologies - just a year after general relativity was completed. Recall that in the AdS5 x S5 version of LLM, you want to fill a two-dimensional plane with two colors (black and white, for example). For each such picture, you can construct a solution. However, as we noticed, there exists a similar construction due to Weyl that one may use to construct various solutions in pure four-dimensional gravity. We have two colors, "t" and "phi", that can be used to draw a picture on a line. If you fill most of the line with the "phi" color except for a line interval whose color is "t", you obtain the Schwarzschild black hole. The length of the line interval is correlated with the size of the black hole. Evidently, the limit where the length goes to infinity corresponds to an infinitely large black hole - i.e. flat space or Rindler space in different coordinates. A similar limit - a plane divided into black and white - gives you the Penrose pp-wave in the AdS5 x S5 case. Analogous constructions for five-dimensional gravity were found more recently. They use them to construct solutions of five-dimensional pure gravity. In five spacetime dimensions, the massive little group is SO(4), whose rank is two, which is why you can have two independent angular momenta. Let's look at stationary solutions with one angular momentum only, e.g. J_{12}. The metric will have three Killing vectors - two angles generated by the J_{12} and J_{34} rotations, and time translations (that would be true even if you had both angular momenta). That's why the metric won't depend on all five coordinates but only on two. It turns out that the Einstein equations for this Ansatz are non-linear but integrable differential equations in two variables. You can use a trick that I recently heard from Zack Guralnik - if I remember correctly - to transform non-linear partial differential equations in two variables into linear differential equations for some generating functions of a higher number of variables.
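[An aside on the Weyl construction above, sketched from standard references - conventions may differ slightly. The static axisymmetric metrics take the Weyl form

```latex
ds^2 = -e^{2U}\,dt^2 + e^{-2U}\!\left[e^{2\gamma}\,(d\rho^2 + dz^2) + \rho^2\,d\varphi^2\right],
\qquad
\left(\partial_\rho^2 + \tfrac{1}{\rho}\,\partial_\rho + \partial_z^2\right) U = 0 ,
```

so U solves the flat-space axisymmetric Laplace equation and γ follows by quadratures. Schwarzschild of mass M corresponds to U being the Newtonian potential of a rod of length 2M (in G = 1 units) and linear density 1/2 on the axis: the rod is precisely the "t-colored" interval in the picture, and the rest of the axis is "phi-colored".]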
This generating-function procedure is somewhat analogous to replacing a system of non-linear differential equations for a classical system by the linear Schrödinger equation for its quantization, because the generating function is analogous to a wavefunction: such a procedure can surprisingly simplify the calculations in some cases. These integrable systems lead to equations similar to those of the planar limit of the N=4 theory, although this is most likely a mathematical coincidence only. The relevant five-dimensional version of the LLM-Weyl procedure involves three colors on a line. For a suitable configuration of colors, you may obtain a black Saturn solution. It is interesting to draw the phase diagram of these solutions. Create a two-dimensional graph and believe me that they can construct a two-parameter family of solutions. The x-axis is the total angular momentum "J" while the y-axis is the total area "A", i.e. the entropy. All points of this graph below a certain line can be identified as black Saturns with different parameters describing the ring and its distance from the black hole. Black rings themselves are special examples of black Saturns for which the size of the spherical black hole at the center vanishes. In the phase diagram, the family of the pure rings looks like a union of two semi-infinite lines connected at a cusp. The point at the cusp maximizes the entropy and minimizes the angular momentum - but there are two lines along which you can go if you want to lower the entropy or increase the angular momentum. The angle at the cusp is zero. There exists another, very similar pair of lines in the phase diagram (whose detailed position is however different) corresponding to black Saturns at equilibrium. Note that the horizon of a black Saturn is disconnected: it is made out of a three-sphere (hole) and a Cartesian product of a circle and a two-sphere (ring). These two components can therefore have different chemical potentials for energy (also known as the temperature - in the context of black objects, it is proportional to the surface gravity at the horizon) and different chemical potentials for the angular momentum (also known as the angular velocity). However, some black Saturns happen to have the same angular velocity for both components and the same surface gravity for both components. They are described by the one-dimensional pair of semi-infinite lines connected at a cusp. We discussed how to get a black bi-Saturn, among other possible solutions. You can also imagine a "black bi-Saturn" as a black hole near the Southern pole connected with another black hole near the Northern pole by a negative-tension (repulsion-inducing) cosmic string (that creates an excess angle, unlike the usual deficit angles for positive-tension strings) - and both of them are surrounded by a black ring wrapped near the equator of the Earth. Such a configuration has the same symmetry as the black Saturn, and it is conceivable that you can write an exact metric for it, too. Because the Ricci-flatness for the black Saturn Ansatz defines an integrable system, you may also believe that various other physical questions about this system - such as the perturbations, geodesics, classical worldsheets and worldvolumes of various branes in this geometry, etc. - could be exactly solvable, too. We took the speaker for a dinner. It was the first time Henriette visited Henrietta's Table in the Charles Hotel. ;-)
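[A footnote on the linearization analogy above - my gloss, not the post's. The standard worked example runs in reverse: substituting ψ = e^{iS/ħ} into the linear Schrödinger equation gives

```latex
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi,
\qquad \psi = e^{iS/\hbar}
\;\Longrightarrow\;
\partial_t S + \frac{(\nabla S)^2}{2m} + V = \frac{i\hbar}{2m}\,\nabla^2 S ,
```

whose right-hand side drops as ħ → 0, leaving the non-linear Hamilton-Jacobi equation. Read backwards, the non-linear classical problem is encoded in a linear equation for ψ - the same sense in which a linear equation for a generating function can package a non-linear integrable system.]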