How Science and Religion Come Together

This article was originally written in 1981 when I was working as a Research Associate in High Energy Physics at Ohio State University. Reading Rawel Singh's recent article that he submitted to the SikhNet News reminded me of this article I wrote 27 years ago, and so I dusted it off and updated it for publication here. – Guruka Singh Khalsa

THERE has always been science, and there has always been religion as long as the human race has walked on this mother Earth. In fact, these two are not only related to each other, but they are really, in essence, identical. Examining the meaning of the word "science," we can see that it is derived from the Latin "sciens," which is the present participle of "scire," meaning "to know or to understand." Many people completely misunderstand the word "religion." It comes from the Latin root "ligare," which means to tie or fasten. It is the same root as in the words ligature and ligament. It means to be connected, tied. So the actual meaning of religion is to experience that state of being connected or tied to the One – to our origin and our Infinity. In sum, science is the technology of knowing or understanding the essence of the universe, and religion is the actual experience of one's own identity with that essence.

The Nature of Religion

We must fundamentally understand that true religion is experiential. That is, religion is not what we usually consider it to be, i.e., a set of beliefs or ideals, or even a philosophy or manner of behavior; it is, in fact, the moment-to-moment experience of one's origin, the source of one's being, God. Where can this experience be found? The answer is: everywhere! For it is not really a question of where at all, but one of how. Obviously, if one is experiencing the living presence of God, which is by nature infinite and inexpressible, the experience can only be that, an experience. It cannot be written or told. Over and over in the Siri Guru Granth Sahib we hear the phrase: "Nanak tells that story which can never be told…" Both science and religion are the attempt to tell "that story which can never be told," to share the experience of the infinite and inexpressible with our fellow travellers on this beautiful spaceship we call Earth. Science and religion are identical in purpose, but they are opposite in method. In fact, one might almost say that science is religion without a heart, and religion is science without a head; two opposite approaches to the identical task: to express the inexpressible so that others may share it.

The Nature of Science

The task that science has set itself is to describe the essential nature of the entire universe precisely and mathematically in the totality of its creation. The irony of this task is that it is endless in nature. The supposed ultimate goal of science, to propose a theory that explains neatly and logically how everything in the universe operates and fits together into a whole, is actually an impossible task. The reason that it is impossible is that science itself is part of our constantly growing, evolving, moving, and ecstatic universe. The very act of scientific observation affects not only the specific process under observation, but also the entire creation. When, in the early decades of the twentieth century, Albert Einstein proposed his theory of relativity and Werner Heisenberg proposed his uncertainty principle, the world was stood on its ear.
Suddenly the seemingly explainable, mechanical nature of the world as understood by science until then was dissolved in a sea of uncertainty. The sudden and unexpected announcement by Einstein that everything in the universe is connected to everything else, and that every action, no matter how seemingly small, affects the whole, and Heisenberg's discovery that absolute measurements within the realm of time and space are impossible because complementary quantities cannot be measured simultaneously, made the hard facts of science suddenly seem a lot less solid. The realization that all scientific models and theories are only approximate, and that their verbal interpretations always suffer from the inaccuracy of language itself, was now inescapable. It now seems that the totality of science's accomplishments is asymptotic in nature. An asymptotic function, of course, is one that approaches a limit without ever reaching it. In other words, while the scientific method can approach closer and closer to its goal of describing the essential nature of the universe, it can never actually get there. It is like the bridge in Zeno's paradox: crossing it takes forever. The discovery and exploration of the subatomic world of modern physics has revealed a reality that transcends both language and reasoning. At the dawn of the twentieth century, science and religion entered the common ground of inexpressibility.

The Image of the Infinite

All religions seem to have begun with the transcendental experience of a single individual or a small group of individuals. Let's take as our example here Guru Nanak Dev Ji, the founder of the Sikh faith. As a young man, Guru Nanak underwent an ecstatic and transforming experience; an experience in which he understood the essential nature of existence. It is said that after undergoing this experience, his first impulse was to express in words what had happened to him. The great poem we call "Japji" was the product of that effort. The first part of the Japji, known as the root or "Mool" mantra, is the expression, in as few words as possible, of Guru Nanak's experience of reality.

Ek Ong Kar – The Creative Power and the Creation are ONE
Sat Nam – Its Identity is TRUTH
Karta Purkh – It is the Doer of everything
Nirbau – Beyond fear
Nirvair – Beyond revenge
Akal – Beyond death
Moorat – Image of the Infinite
Ajooni – Unborn
Saibhang – Full of Light
Gur Prasad – To experience this is the Guru's Gift
Jugaad Sach – TRUE FOR ALL TIME
Nanak – O NANAK
Hosee bi Sach – FOREVER TRUE

The utter simplicity of the Mool Mantra is striking. It is the very essence of which all religions are made. It is interesting to realize that the very first line of it expresses exactly the same concept which is the foundation of all modern relativistic science: E = mc². Ek Ong Kar: The Creator (energy) and the Creation (matter) are ONE! This primal truth is expressed in various ways in all the world's religions. Let's take as another example the Taoist symbol called the "Tai C'hi," which many people know as the "yin-yang" symbol. The Tai C'hi is an attempt to express the experience of the infinite in an image or picture. In reality the Tai C'hi is not a two-dimensional image but a four-dimensional one. While such a symbolic representation is difficult to express in words, I hope that you will bear with me while I attempt to do so. Examining the Tai C'hi, we notice that in the center of the black teardrop shape is a point of white, and in the center of the white teardrop is a single black point.
Now imagine that the "tail" of the white teardrop actually continues "behind" the symbol itself and is connected to the white dot at the center of the black half, while the "tail" of the black half comes around the "front" of the image and connects to the black dot at the center of the white half. Got it? Ok, now imagine that the Tai C'hi is not static, but that it is in constant flowing motion within itself, i.e., that the white portion is constantly flowing out of the center of the black portion and the black portion is continuously being born out of the center of the white portion. In other words, the Tai C'hi is not a circle divided into black and white halves, but an attempt to express in a beautifully simple image the constant process of creation, growth, and evolution itself. It is a picture or image of "Ek Ong Kar". It is an "Akal Moorat", a timeless, infinite image.

The Power of the Word

In the Christian Bible, in the book of John, is written: "In the beginning was the Word, and the Word was with God, and the Word was God." What is the Word of which John was speaking? For thousands of years the sacred texts of India, the Vedas, Shastras and Smritis, have taught that sound vibration holds the key to the mysteries of the universe, to the actual creation and sustenance of our world, and to the means of extricating ourselves from its bonds. In the Vedic tradition, the world we see, the world of phenomena, is said to be the visible manifestation of the infinite combinations of sound patterns, all derived from the soundless sound (Anahat Shabd) of the One who is both the Creator and the Creation. To express it another way, the Word of which John is speaking is the Anahat Shabd of the Vedic teachings; the continuous vibrating of the Creator/Creation in action. The latest research in astronomy validates the creation of the universe as beginning with this cosmic sound.

Three independent teams of astronomers yesterday presented the most precise measurements to date of the infant universe as it existed approximately 14 billion years ago, exposing telltale reverberations they called "the music of creation." The results represent a significant advance in scientists' efforts to understand what happened in the initial split second of cosmic creation and how the universe has evolved since, researchers said. …the cosmic microwave background, is the cold echo of the hot Big Bang fireball. Reaching Earth from all directions, it is enfeebled and stretched into the microwave range by the expansion of the cosmos. Scientists say that 1 percent of the static picked up on a home TV antenna is the echo of the Big Bang approximately 15 billion years ago. Astronomers first detected this background glow in 1965, using a ground-based radio telescope. But the radiation appeared bafflingly uniform and featureless in contrast to the present-day universe — all lumpy with stars and galaxies. Russian and American theorists soon predicted that the seeds of this lumpiness should show up in what mathematicians call a "harmonic series" of fluctuations imprinted on the embryonic glow. The primordial cosmic soup "is full of sound waves compressing and rarefying matter and light, much like sound waves compress and rarefy air inside a flute or trumpet," said Paolo deBernardis, Italian leader of the international collaboration known as BOOMERANG, or Balloon Observations of Millimetric Extragalactic Radiation and Geophysics, another high-altitude balloon project presenting new analysis.
– Washington Post, Monday, April 30, 2001

This sound continues today.
Once we accept that what we call "time" is really only a matter of appearances, and that what we know as "past", "present", and "future" are in reality just ways of describing what appear to us to be different parts of what is actually a continuous, infinite, timeless process, then we can begin to understand that the actual manifestation of that process is vibratory in nature. New research even suggests that what is conducted by our nervous system is, in fact, sound vibration. So, in order to explore the nature of creation itself, we must correlate the sciences of biology, subatomic particle physics, chemistry, and mathematics, because sound vibration is the integrating phenomenon of life, the common denominator through which and by which everything in the universe operates. The ancient Indian texts explain that sound is the cause and not the effect of vibration, and that there can be sound without any vibration, even without the usual media of conveyance such as air, water, or so-called solid matter. This is the concept of the "Anahat Shabd", the driving energy and force behind all manifestation; the "soundless sound" or "unstruck melody" which is the infinite, endless continuum… indivisible, unfragmented and unimaginably potent… the most powerful source of all… the sound of the Creator/Creation constantly creating itself. This constant hum of creation, the primal AUM, is the vibratory activity of all energy and matter. The power within the atom's core is but a minute pinpoint of this infinite energy, and yet even this is able to utterly destroy cities, nations, and even our planet itself. With the exception of the work of a few advanced scientists (most of whom were ostracized by the traditional scientific community) such as Nikola Tesla and John Keely, the relatively coarse instruments of modern science have barely begun to probe this infinite energy source. The Anahat Shabd, the "soundless sound", is the subtlest element of all. It is the etheric essence, finer than earth, air, water, or fire, beyond the speed of light… all pervasive, the source of cohesion, of electricity, of magnetism and gravitation, of all that exists. The modern physicist E.C.G. Sudarshan has described the etheric essence of the Anahat Shabd in scientific terms as follows: "The ether as superfluid is consistent with relativity and quantum theory. It is the support of all light, in it all bodies exist, it is attached to none, it is ever present beyond the limitations of time and space. It has no inertial qualities, no interactions, yet it is the very substance of illumination." E.C.G. Sudarshan (preprint, University of Texas, 1974). This is the secret of the power of the Word. If the creative activity we know as God is manifested through vibration, and if the entire universe as we perceive it is nothing but a constantly changing pattern of vibrations, then the knowledge of how to consciously create vibrations which affect the vibratory continuum in a specific manner is the knowledge of how to consciously participate in the creative process itself. This is the ancient science of sound that is known as "mantra". In the Japji, Guru Nanak speaks at great length and depth about the Anahat Shabd and about the effects of both listening and speaking:

Akharee Nam Akharee Salaah.
Akharee Gian Geet Gun Gaah.
Akharee Likhan Bolan Baan.
Akharaa Sir Sanjog Vikaan.

In sound is naming and praising.
In sound is all knowledge and song.
In sounds spoken and written.
In sound lies the destiny.
Guru Nanak, Japji Sahib, pauri 19

Ant Na Vekhan Sunan Na Ant
Ant Na Jaapai Kiaa Mant?
Ant Na Jaapai Keeta Aakaar
Ant na Jaapai Paaraavaar.

There is no end to seeing and hearing.
There is no end in sight…
What Mantra lies within God's mind?
The structure of the universe is infinite.
Endless vibrating expansion.

Guru Nanak, Japji Sahib, pauri 24

Suniai Sidh Peer Sur Nath.
Suniai Dharat Dhaval Akash
Suniai Deep Loa Pataal
Suniai Poeh Na Sakai Kaal
Nanak, Bhagataa Sadaa Vigaas
Suniai Dookh Paap Kaa Naas.

Suniai Eesar Baramaa Ind
Suniai Mukh Saalaahan Mand
Suniai Jog Jugat Tan Bhed
Suniai Saast Simrit Ved
Nanak, Bhagataa Sadaa Vigaas
Suniai Dookh Paap Kaa Naas

Suniai Sat Santokh Giaan
Suniai Ath Sath Kaa Ishnaan
Suniai Par Par Paaveh Maan
Suniai Laagai Sahej Dhiaan
Nanak, Bhagataa Sadaa Vigaas
Suniai Dookh Paap Kaa Naas

Suniai Saraa Gunaa Ke Gaah
Suniai Sekh Peer Paatishaah
Suniai Aande Paaveh Raaho
Suniai Haath Hovai Asagaaho
Nanak, Bhagataa Sadaa Vigaas
Suniai Dookh Paap Kaa Naas.

Listening… saints, heroes, masters. Listening… the earth, the power, the ethers. Listening… high and low realms, oceans of light. Listening… beyond time. O Nanak! God's lovers bloom forever. Listening destroys all pain and error. || 8 ||

Listening… men become gods. Listening… the way of yoga and the body's secrets. Listening… all holy books and scriptures. O Nanak! God's lovers bloom forever. Listening destroys all pain and error. || 9 ||

Listening… Truth, patience, wisdom. Listening… bathing at all holy places. Listening… reading and reading gains honor. Listening… concentration comes easy. O Nanak! God's lovers bloom forever. Listening destroys all pain and error. || 10 ||

Listening… deep oceans of grace. Listening… kings, emperors, saints. Listening… blind ones find the Path. Listening… the unknown is known. O Nanak! God's lovers bloom forever. Listening destroys all pain and error. || 11 ||

Guru Nanak, Japji, pauris 8 – 11

All religions use sounds to affect consciousness. From the shaman's drum at the peyote meeting to Gregorian chant and the Latin Catholic liturgy, the effects of sound on the human consciousness, and on the perception and creation of reality, are universally experienced. If all matter is in reality the interplay and patterns of waves of sound, then it requires no great leap of imagination to see that all form in nature is the outpouring of causative sound. If God as the constant Creator can form and change the vast array of the plane of matter, then we as co-creators can use the science of sound to form and change the patterns of the world and of our own inner beings. In the Vedic view of the cosmos, the whole universe is an ocean of sound and light of varying degrees of density or luminosity. It is understood that sound precedes even light.

God and the Superforce

During the latter part of the twentieth century the forefront of scientific research has been theoretical physics. Theoretical physics consists of postulating, and then testing, descriptions of the actual manner in which the universe is constructed. Most people are aware that the "stuff" of reality is made up of molecules, and that molecules are made up of atoms. Most are also aware that atoms are made of so-called particles known as protons, neutrons, and electrons. These things were understood at the turn of the century, but with the arrival of Niels Bohr's quantum mechanics in the early part of the twentieth century, a whole new world of sub-atomic particles has opened up.
As physicists began to discover and name these new particles they were looking for an underlying symmetry, a pattern that would make sense both logically and aesthetically, just as all patterns in nature seem to have an inner rhythm and beauty to them. The classical image of the atom, that of a tiny "solar system" with a sun/nucleus made up of protons and neutrons, and a whirl of little planet/electrons surrounding it, has now become obsolete. It now appears that what at first seemed to be three different basic types of particles, and later was understood to consist of these particles made up of, and interacting with, smaller component particles (mesons, neutrinos, quarks, charmed particles, etc.), is in fact simply one energy; one energy that is constantly transforming itself into different frequencies of vibration, spinning right and left, up and down… a constant flow of changing ripples and waves in one cosmic sea of energy. Compare the description of God given by Guru Arjan Dev, the fifth Nanak, in his poem "Sukhmani Sahib," The Jewel of Peace, to the quantum concept of the universe described above.

Parbrahm ke sagle thao
Jit jit ghar rakhai taisa tin nao
Aap karan kravan-jog
Prabh bhavai soi phun hog
Pasrio aap hoe anat tarang
Lakhe na jaah Parbrahm ke rang
Jaisi mat dee taisa pargaas
Parbrahm karta abinas.

The entire Creation is One with God, but we call its parts by different names. God is the Doer of everything. Whatever He wills happens. God is an ocean, pervading everywhere, filled with endless waves of creation. His play cannot be described, but each man has a vision of Him according to the light granted to him.

Guru Arjan Dev Ji, Sukhmani Sahib, Ashtapadi IX

The appearance of separate and definable "particles" is only an illusion, a trick of perception that occurs in the same manner that a baseball seems to be suspended in mid-air in a short exposure photograph, when in fact it is hurtling through the air quite fast and is, in actuality, in another place completely by the time we ever see the photograph. Even with the "high-speed photography" possible with our modern atomic accelerators such as CERN in Geneva and Fermilab near Chicago, the "snapshots" of our bubble chambers and tracking emulsions are much too slow to catch the divine dance of energy as it actually occurs. In fact, it is our minds that are too slow to catch the divine dance! But it can be experienced, and this experience, beyond the limitations of time and space, beyond the mind and its symbolic and linear thinking, is the experience of constant ecstasy and immense joy which is the very flow of the Anahat Shabd itself. In addition to the illusion of separate particles of matter, science is now beginning to discover that what was previously thought of as separate types or kinds of energy are really only one. The very latest concept of modern physics is the idea of a so-called "Superforce." What were previously thought to be the basic separate forces of the universe (electromagnetism, the weak nuclear force, the strong nuclear force, and gravitation), each with its own mathematical expression and laws, are currently beginning to be understood as merely separate expressions of ONE force, which physicists call the Superforce. These energies are constantly transforming themselves into each other, just as the "particles" of matter are constantly transforming themselves into each other. The One appears as many in all His manifested forms. Ek Ong Kar – There is One Creator/Creation.
This is the fundamental discovery of modern science: not only is all matter convertible into energy, and vice versa, but energy and matter are simply different appearances of the very same thing. And not only are energy and matter the same thing; the constant interplay of this energy is the very play of God itself, the divine play of "lila". The religious experience is the experience that we and God and God and we are one and the same. It is only our minds, our egos, that make us ever feel that we are observing, that we are separate from other forms of consciousness and energy. But science is the process of observation, and observation implies an observer. How wonderful it is to realize that there is only One observer… and that He is constantly observing Himself through His/our eyes, and enjoying it immensely! The modern scientific view that consciousness is within everything has been expressed by E.H. Walker:

"Consciousness may be associated with all quantum mechanical processes. The uniqueness of our consciousness lies in the fact that it is a part of a logic machine, which in turn is the brain of a particular kind of physical system, a living organism. That is, the terms "life", "thought", and "consciousness", properly defined, are separable. An organism does not have to be conscious or capable of thinking in order to be alive. A brain does not have to be conscious to be capable of "thought" (we are using the term in the restricted sense of "data processing"). Only the higher organisms have brains for data processing, and only under very special conditions, when a large part of the data processing functions of the brain is handled by an irreducible quantum mechanical process, does the organism become a conscious, thinking being. Any of these attributes may exist independently of the others, or in conjunction with only one other. A non-living computer that is capable of both thought and consciousness would be a real possibility. Consciousness may also exist without being associated with either a living system or a data processing system. Indeed, since everything that occurs is ultimately the result of one or more quantum mechanical events, the universe is "inhabited" by an almost unlimited number of rather discrete conscious, usually non-thinking entities that are responsible for the detailed working of the universe. These conscious entities determine (or exist concurrently with the determination) SINGLY the outcome of each quantum mechanical event, while the Schrödinger equation (to the extent that it is accurate) describes the physical constraint placed on their freedom of action COLLECTIVELY."

E.H. Walker, "The Nature of Consciousness", Mathematical Biosciences 7: 175-176, 1970

The Heart of Science and the Head of Religion

Earlier we said that science is sometimes thought of as religion without a heart, and religion as science without a head. The current evolutionary state of human consciousness is that these two are finally becoming one. Religion is giving its heart to science, and science is giving its head to religion. In the latter decades of the 20th century, and the beginning of the 21st, we are experiencing the continuing explosion of information sharing and group consciousness on this planet. A network of computers, televisions, radios, electronic cables, optical fibers, and satellites links our entire planet electronically.
Ideas and images that affect us all are instantly transmitted and shared with others of our species all over the globe. We have even begun to acknowledge and connect to the consciousness of other species, as John Lilly's research with dolphins has shown. The collective unconscious described by Carl Jung is now openly reflected in a visible planetary group consciousness. The age of blind faith is over. The age of belief without knowledge died with the advent of instantaneous global information sharing. We have irrevocably entered the age of conscious knowledge, experience, and responsibility. The union of science and religion is now taking place. The people of planet Earth can no longer march under a banner that reads: "I believe – therefore I know" but are joining together under a new banner which proudly proclaims: "I know! Therefore I believe."

The Age of Khalsa

This is the age of the joining of all the Dharmas. This is the age of sudden transformation. Not since the incredible evolutionary speedup that marked the beginning of mankind's use of the forebrain, the factor that separated us from the apes, has such a quantum leap in consciousness happened on this planet. We are entering the age of the Khalsa – an age of purified and subtle creative behavior and consciousness – an age of the self-sensory human being. The religion of the future, which is evolving out of the union of science and traditional religion, is the worship of God as Universal Truth. I'd like to close with my translation of a poem of the Khalsa, originally written in Gurmukhi, and transmitted to this Earth by the most pure, humble, and devoted channel I have ever met, Yogi Bhajan. From the volume called the "Furmaan Khalsa", The Inner Command of the Pure Ones, here is the poem called "Lohe da Mandir", The Temple of Steel:

The Temple of Steel has been built!
The Light of Wisdom is Shining!
Today the sinners are filled with fear.
Today the Sovereign Khalsa Nation has proclaimed:
"Only the Righteous will survive!"
Today Satan is dead!
The New Nation of the Earth comprises all the Dharmas.
This Order comes direct from God!
The cycle of births and deaths is ended.
Today, only TRUTH will be accepted!
Today, no one is weak.
Today, the power of the Will reigns Supreme.
Today, by the grace of the Lord of both the Worlds,
The being is reborn as Khalsa!
Strong as steel,
Steady as stone.
A face radiating sweetness…
A forehead shining with self-respect…
An inner voice that calls out to all people…
And carries them across the Ocean of fear.
The Flag of TRUTH is flying high!
Falsehood lies in ruins.
The Path of Love is simple:
There is no loss or gain.
White Hot Steel, heated in the fires of the times,
has branded "TRUTH!" upon our foreheads.
The Temple of Steel has been built!
The Light of Wisdom is Shining!
We shall live our lives in TRUTH!
For this is the Priceless Blessing of our Guru.

– By Guruka Singh Khalsa
Let {L: H \rightarrow H} be a self-adjoint operator on a finite-dimensional Hilbert space {H}. The behaviour of this operator can be completely described by the spectral theorem for finite-dimensional self-adjoint operators (i.e. Hermitian matrices, when viewed in coordinates), which provides a sequence {\lambda_1,\ldots,\lambda_n \in {\bf R}} of eigenvalues and an orthonormal basis {e_1,\ldots,e_n} of eigenfunctions such that {L e_i = \lambda_i e_i} for all {i=1,\ldots,n}. In particular, given any function {m: \sigma(L) \rightarrow {\bf C}} on the spectrum {\sigma(L) := \{ \lambda_1,\ldots,\lambda_n\}} of {L}, one can then define the linear operator {m(L): H \rightarrow H} by the formula

\displaystyle m(L) e_i := m(\lambda_i) e_i,

which then gives a functional calculus, in the sense that the map {m \mapsto m(L)} is a {C^*}-algebra isometric homomorphism from the algebra {BC(\sigma(L) \rightarrow {\bf C})} of bounded continuous functions from {\sigma(L)} to {{\bf C}}, to the algebra {B(H \rightarrow H)} of bounded linear operators on {H}. Thus, for instance, one can define heat operators {e^{-tL}} for {t>0}, Schrödinger operators {e^{itL}} for {t \in {\bf R}}, resolvents {\frac{1}{L-z}} for {z \not \in \sigma(L)}, and (if {L} is positive) wave operators {e^{it\sqrt{L}}} for {t \in {\bf R}}. These will be bounded operators (and, in the case of the Schrödinger and wave operators, unitary operators; in the case of the heat operators with {L} positive, contractions). Among other things, this functional calculus can then be used to solve differential equations such as the heat equation

\displaystyle u_t + Lu = 0; \quad u(0) = f \ \ \ \ \ (1)

the Schrödinger equation

\displaystyle u_t + iLu = 0; \quad u(0) = f \ \ \ \ \ (2)

the wave equation

\displaystyle u_{tt} + Lu = 0; \quad u(0) = f; \quad u_t(0) = g \ \ \ \ \ (3)

or the Helmholtz equation

\displaystyle (L-z) u = f. \ \ \ \ \ (4)

The functional calculus can also be associated to a spectral measure. Indeed, for any vectors {f, g \in H}, there is a complex measure {\mu_{f,g}} on {\sigma(L)} with the property that

\displaystyle \langle m(L) f, g \rangle_H = \int_{\sigma(L)} m(x) d\mu_{f,g}(x);

indeed, one can set {\mu_{f,g}} to be the discrete measure on {\sigma(L)} defined by the formula

\displaystyle \mu_{f,g}(E) := \sum_{i: \lambda_i \in E} \langle f, e_i \rangle_H \langle e_i, g \rangle_H.

One can also view this complex measure as a coefficient

\displaystyle \mu_{f,g} = \langle \mu f, g \rangle_H

of a projection-valued measure {\mu} on {\sigma(L)}, defined by setting

\displaystyle \mu(E) f := \sum_{i: \lambda_i \in E} \langle f, e_i \rangle_H e_i.

Finally, one can view {L} as unitarily equivalent to a multiplication operator {M: f \mapsto g f} on {\ell^2(\{1,\ldots,n\})}, where {g} is the real-valued function {g(i) := \lambda_i}, and the intertwining map {U: \ell^2(\{1,\ldots,n\}) \rightarrow H} is given by

\displaystyle U ( (c_i)_{i=1}^n ) := \sum_{i=1}^n c_i e_i,

so that {L = U M U^{-1}}.

It is an important fact in analysis that many of the above assertions extend to operators on an infinite-dimensional Hilbert space {H}, so long as one is careful about what "self-adjoint operator" means; these facts are collectively referred to as the spectral theorem. For instance, it turns out that most of the above claims have analogues for bounded self-adjoint operators {L: H \rightarrow H}.
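As a quick concrete illustration of the finite-dimensional functional calculus (my own sketch, not part of the original post; the matrix, the functions m, and all sizes are arbitrary choices), one can diagonalize a Hermitian matrix with numpy and define m(L) by applying m to the eigenvalues:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
L = (A + A.T) / 2                       # a self-adjoint operator on R^5

lam, E = np.linalg.eigh(L)              # eigenvalues lambda_i, orthonormal eigenbasis e_i

def m_of_L(m):
    """Functional calculus: m(L) = E diag(m(lambda_i)) E^*."""
    return E @ np.diag(m(lam)) @ E.conj().T

t = 0.7
heat = m_of_L(lambda x: np.exp(-t * x))             # heat operator e^{-tL}
schrodinger = m_of_L(lambda x: np.exp(1j * t * x))  # Schrodinger propagator e^{itL}

# Sanity checks: agreement with the matrix exponential, and unitarity.
assert np.allclose(heat, expm(-t * L))
assert np.allclose(schrodinger @ schrodinger.conj().T, np.eye(5))

The point of the sketch is that, once the spectral data (lam, E) is in hand, every operator in the list above (heat, Schrödinger, resolvent, wave) is obtained by the same one-line recipe.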
However, in the theory of partial differential equations, one often needs to apply the spectral theorem to unbounded, densely defined linear operators {L: D \rightarrow H}, which (initially, at least) are only defined on a dense subspace {D} of the Hilbert space {H}. A very typical situation arises when {H = L^2(\Omega)} is the space of square-integrable functions on some domain or manifold {\Omega} (which may have a boundary or be otherwise "incomplete"), {D = C^\infty_c(\Omega)} are the smooth compactly supported functions on {\Omega}, and {L} is some linear differential operator. It is then of interest to obtain the spectral theorem for such operators, so that one can build operators such as {e^{-tL}, e^{itL}, \frac{1}{L-z}, e^{it\sqrt{L}}} or solve equations such as (1), (2), (3), (4). In order to do this, some necessary conditions on the densely defined operator {L: D \rightarrow H} must be imposed. The most obvious is that of symmetry, which asserts that

\displaystyle \langle Lf, g \rangle_H = \langle f, Lg \rangle_H \ \ \ \ \ (5)

for all {f, g \in D}. In some applications, one also wants to impose positive definiteness, which asserts that

\displaystyle \langle Lf, f \rangle_H \geq 0 \ \ \ \ \ (6)

for all {f \in D}. These hypotheses are sufficient in the case when {L} is bounded, and in particular when {H} is finite dimensional. However, as it turns out, for unbounded operators these conditions are not, by themselves, enough to obtain a good spectral theory. For instance, one consequence of the spectral theorem should be that the resolvents {(L-z)^{-1}} are well-defined for any strictly complex {z}, which by duality implies that the image of {L-z} should be dense in {H}. However, this can fail if one just assumes symmetry, or symmetry and positive definiteness. A well-known example occurs when {H} is the Hilbert space {H := L^2((0,1))}, {D := C^\infty_c((0,1))} is the space of test functions, and {L} is the one-dimensional Laplacian {L := -\frac{d^2}{dx^2}}. Then {L} is symmetric and positive, but the operator {L-k^2} does not have dense image for any complex {k}, since

\displaystyle \langle (L-k^2) f, e^{i\overline{k}x} \rangle_H = 0

for all test functions {f \in C^\infty_c((0,1))}, as can be seen from a routine integration by parts (note that {e^{i\overline{k}x}} lies in {L^2((0,1))} and satisfies {-\frac{d^2}{dx^2} e^{i\overline{k}x} = \overline{k}^2 e^{i\overline{k}x}}). As such, the resolvent map is not everywhere uniquely defined. There is also a lack of uniqueness for the wave, heat, and Schrödinger equations for this operator (note that there are no spatial boundary conditions specified in these equations). Another example occurs when {H := L^2((0,+\infty))}, {D := C^\infty_c((0,+\infty))}, and {L} is the momentum operator {L := -i \frac{d}{dx}}. Then the resolvent {(L-z)^{-1}} can be uniquely defined for {z} in the upper half-plane, but not in the lower half-plane, due to the obstruction

\displaystyle \langle (L-z) f, e^{i \bar{z} x} \rangle_H = 0

for all test functions {f} (note that the function {e^{i\bar{z} x}} lies in {L^2((0,+\infty))} when {z} is in the lower half-plane). For related reasons, the translation operators {e^{itL}} have a problem with either uniqueness or existence (depending on whether {t} is positive or negative), due to the unspecified boundary behaviour at the origin. The key property that lets one avoid this bad behaviour is that of essential self-adjointness. Once {L} is essentially self-adjoint, then the spectral theorem becomes applicable again, leading to all the expected behaviour (e.g. existence and uniqueness for the various PDE given above).
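For the record, here is the "routine integration by parts" behind the first obstruction, written out (this computation is an added illustration, not part of the original post; it uses the convention that the inner product is conjugate-linear in the second argument). Since {\overline{e^{i\overline{k}x}} = e^{-ikx}}, one has

\displaystyle \langle (L-k^2) f, e^{i\overline{k}x} \rangle_H = \int_0^1 \left(-f''(x) - k^2 f(x)\right) e^{-ikx}\ dx,

and because {f} is compactly supported in {(0,1)}, integrating by parts twice produces no boundary terms:

\displaystyle \int_0^1 -f''(x) e^{-ikx}\ dx = -\int_0^1 f(x) (-ik)^2 e^{-ikx}\ dx = k^2 \int_0^1 f(x) e^{-ikx}\ dx.

The two terms therefore cancel, and the pairing vanishes for every test function {f}.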
Unfortunately, the concept of essential self-adjointness is defined rather abstractly, and is difficult to verify directly; unlike the symmetry condition (5) or the positivity condition (6), it is not a "local" condition that can be easily verified just by testing {L} on various inputs, but is instead a more "global" condition. In practice, to verify this property, one needs to invoke one of a number of partial converses to the spectral theorem, which roughly speaking assert that if at least one of the expected consequences of the spectral theorem is true for some symmetric densely defined operator {L}, then {L} is essentially self-adjoint. Examples of "expected consequences" include:

• Existence of resolvents {(L-z)^{-1}} (or equivalently, dense image for {L-z});
• Existence of a contractive heat propagator semigroup {e^{-tL}} (in the positive case);
• Existence of a unitary Schrödinger propagator group {e^{itL}};
• Existence of a unitary wave propagator group {e^{it\sqrt{L}}} (in the positive case);
• Existence of a "reasonable" functional calculus;
• Unitary equivalence with a multiplication operator.

Thus, to actually verify essential self-adjointness of a differential operator, one typically has to first solve a PDE (such as the wave, Schrödinger, heat, or Helmholtz equation) by some non-spectral method (e.g. by a contraction mapping argument, or a perturbation argument based on an operator already known to be essentially self-adjoint). Once one can solve one of the PDEs, then one can apply one of the known converse spectral theorems to obtain essential self-adjointness, and then by the forward spectral theorem one can then solve all the other PDEs as well. But there is no getting out of that first step, which requires some input (typically of an ODE, PDE, or geometric nature) that is external to what abstract spectral theory can provide. For instance, if one wants to establish essential self-adjointness of the Laplace-Beltrami operator {L = -\Delta_g} on a smooth Riemannian manifold {(M,g)} (using {C^\infty_c(M)} as the domain space), it turns out (under reasonable regularity hypotheses) that essential self-adjointness is equivalent to geodesic completeness of the manifold, which is a global ODE condition rather than a local one: one needs geodesics to continue indefinitely in order to be able to (unitarily) solve PDEs such as the wave equation, which in turn leads to essential self-adjointness. (Note that the domains {(0,1)} and {(0,+\infty)} in the previous examples were not geodesically complete.) For this reason, essential self-adjointness of a differential operator is sometimes referred to as quantum completeness (with the completeness of the associated Hamilton-Jacobi flow then being the analogous classical completeness).

In these notes, I wanted to record (mostly for my own benefit) the forward and converse spectral theorems, and to verify essential self-adjointness of the Laplace-Beltrami operator on geodesically complete manifolds. This is extremely standard analysis (covered, for instance, in the texts of Reed and Simon), but I wanted to write it down myself to make sure that I really understood this foundational material properly.
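As a crude numerical companion to the earlier counterexamples (my sketch, not from the post): on the incomplete domain {(0,1)}, different boundary conditions correspond to different self-adjoint extensions of the symmetric operator {-\frac{d^2}{dx^2}}, and the heat propagators they generate genuinely disagree on the same initial data. Discretizing with Dirichlet versus Neumann conditions shows this directly:

import numpy as np
from scipy.linalg import expm

# Discretize L = -d^2/dx^2 on (0,1) with n interior grid points.
n = 100
h = 1.0 / (n + 1)
L_dir = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

L_neu = L_dir.copy()
L_neu[0, 0] = L_neu[-1, -1] = 1.0 / h**2   # reflecting (Neumann) endpoints

x = np.linspace(h, 1.0 - h, n)
f = np.exp(-200.0 * (x - 0.2) ** 2)        # initial bump near the left edge

t = 0.01
u_dir = expm(-t * L_dir) @ f               # heat flow with absorbing ends
u_neu = expm(-t * L_neu) @ f               # heat flow with reflecting ends
print(np.max(np.abs(u_dir - u_neu)))       # noticeably nonzero near x = 0

Both matrices are self-adjoint and act identically on vectors supported away from the boundary; it is only the boundary data, the information missing from the symmetric operator on test functions, that selects one evolution over the other.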
I am an electronics and communication engineer, specializing in signal processing. I have some touch with the mathematics concerning communication systems and also with signal processing. I want to utilize this knowledge to study and understand quantum mechanics from the perspective of an engineer. I am not interested in reading about the historical development of QM, and I am also not interested in the particle formalism. I know things have started from the wave-particle duality, but my current interest is not to study QM from that angle. What I am interested in is starting with a treatment of QM built on very abstract notions such as 'what is an observable? (without referring to any particular physical system)' and 'what is meant by incompatible observables?', and then going on to what a state vector is and its mathematical properties. I am okay dealing with the mathematics and abstract notions, but I somehow do not like the notion of a particle, velocity, momentum and such physical things, as they directly contradict my intuition, which is based on classical mechanics (basic stuff, and not the mathematical treatment involving phase space, as I am not much aware of it). I request you to give some suggestions on advantages and pitfalls in venturing into such a thing. I also request you to provide me good reference books or textbooks which give such a treatment of QM without assuming any previous knowledge of QM.

Why not Sakurai? – user7757 Feb 3 '13 at 12:08
Possible duplicate: – Qmechanic May 23 '13 at 19:51

(Accepted answer) Try "Mathematical Foundations of Quantum Mechanics" by George Mackey. It is about 130 pages, with a chapter on classical mechanics. The author is a very well-known mathematician, and I think the book is what you are looking for. Also, on a higher level, there is the book "Quantum Mechanics for Mathematicians" by Leon Takhtadzhian. The book by Folland on quantum field theory has a chapter on quantum mechanics, which can be read independently of the rest of the book. Edit: Since this came up on the first page, I'll add one more: F. Strocchi, "An Introduction to the Mathematical Structures of Quantum Mechanics. A Short Course for Mathematicians."

An unconventional approach would be to study quantum computation or quantum information theory first. What is 'unusual' about quantum mechanics is the mathematical underpinnings, which are essentially a generalization of probability theory. (I have heard more than one colleague say that quantum mechanics is simply physics which involves 'non-commutative probability', i.e. in testing whether some collection of events is realized for some sample space, there is a pertinent sense of the order in which one tests those events.) To the extent that this is true, it is not important to be learning the actual physics alongside that mathematical underpinning, so long as you can learn about something evolving, e.g., under the Schrödinger equation or collapsing under measurement. Studying quantum information evolving under a computational process is one way you could achieve that. Because the narrative of the field is less about the crisis in physics in the 20s–40s, and more about physicists and computer scientists struggling to find a common language, the development is clearer and there is a better record of justifying the elements of the formalism from a fundamental standpoint.
By studying quantum information and/or quantum computation, you will be able to decouple the learning of the underpinnings from the learning of the physics, and thereby get to the heart of any conceptual troubles you may have; and it will give you a tidier sandbox in which to play with ideas. To this end, I recommend "Nielsen & Chuang", which is the standard introductory text of the field. It's suitable as an introduction both for those coming from a computer science background and for those from a quantum physics background; so apart from learning some of the formalism, you can get some exposure to some of the physics as well. There are other texts which I have not read, though, and about a bazillion pages of lecture notes floating around on the web.

I strongly advise you to read Quantum Theory: Concepts and Methods by Asher Peres. I think this book answers the questions you're asking, like 'what is an observable?'

That's quite an unusual request, i.e., to start with an abstract formulation of quantum mechanics, especially for someone in a profession so closely connected with the "real world". However, to answer your question, I think what you're looking for is an axiomatic approach to quantum mechanics. Such treatments keep the physical examples to a minimum and skip to the mathematics straight away. You could start with this reference (just for a chatty treatment of what the postulates look like!), and maybe search for quantum mechanics textbooks with "axiomatic" in the title. For many people, the fact that the quantum mechanical predictions of physical quantities are sometimes counterintuitive is precisely what gives the subject its appeal. Hope this helps. Edit: It appears it is not so easy to find "Axiomatic Quantum Mechanics" in textbook titles! However, Google returns a few articles featuring those words. Of course, there is also von Neumann's "Mathematical Foundations of Quantum Mechanics".

You might find articles by Leon Cohen of interest. He has considered the relationship between classical and quantum theory from a signal processing perspective since the 1960s. For example: Proceedings of the IEEE, Vol. 77, No. 7, July 1989, "Time-Frequency Distributions: A Review". This concentrates on the relationship between the Wigner function in quantum theory and various concepts in signal processing. This might not answer your question so much as point to something that you might find more broadly helpful because of its signal processing provenance. The mathematics of Hilbert spaces only enters into a small fraction of the signal processing literature, but the vast majority of it could be put into such mathematical terms (signal processing is, after all, preoccupied with Fourier and other integral transforms).

I always recommend Tony Sudbery's Quantum Mechanics and the Particles of Nature; don't be put off by the bad word in the title: he is fairly axiomatic and has both the abstract part and the concrete part. I recommend it more highly than either Mackey, already cited, or Varadarajan, both of which are idiosyncratic. Prof. Sudbery is an expert in quantum information theory but does not take a biased or idiosyncratic approach in his text.

Here is a little book by a physicist trained as an engineer in fluid dynamics applied to aircraft: "Foundations of Quantum Physics" by Toyoki Koga (1912-2010), Wood and Jones, Pasadena, CA, 1980.
This book has forewords by Henry Margenau and Karl Popper. Another book by him is "Inquiries into Foundations of Quantum Physics", 1983.
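A short numerical aside on the "non-commutative probability" remark in the quantum information answer above (this snippet is my own illustration, not from any of the cited texts): the Pauli matrices sigma_x and sigma_z are a minimal pair of incompatible observables, and a few lines of numpy make the failure of commutativity, and the resulting trade-off in uncertainties, concrete.

import numpy as np

# Pauli matrices: two incompatible (non-commuting) observables.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

commutator = sx @ sz - sz @ sx
print(commutator)                 # nonzero: the order of operations matters

# In the state |0> (an eigenstate of sz), sz is sharp while sx is
# maximally uncertain: the two cannot be sharp simultaneously.
psi = np.array([1, 0], dtype=complex)
exp_sz = np.real(psi.conj() @ sz @ psi)                    # +1 (no spread)
exp_sx = np.real(psi.conj() @ sx @ psi)                    # 0
var_sx = np.real(psi.conj() @ (sx @ sx) @ psi) - exp_sx**2 # 1 (maximal spread)
print(exp_sz, exp_sx, var_sx)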
Time evolution of the density operator

The time evolution of the density operator \rho(t) can be predicted directly from the Schrödinger equation. Since \rho(t) is given by

\displaystyle \rho(t) = \sum_k w_k |\psi_k(t)\rangle \langle \psi_k(t)|,

the time derivative is given by

\displaystyle \frac{\partial \rho}{\partial t} = \sum_k w_k \left[ \left(\frac{\partial}{\partial t}|\psi_k(t)\rangle\right)\langle\psi_k(t)| + |\psi_k(t)\rangle\,\frac{\partial}{\partial t}\langle\psi_k(t)| \right] = \sum_k w_k \left[ \frac{1}{i\hbar} H |\psi_k(t)\rangle\langle\psi_k(t)| - \frac{1}{i\hbar} |\psi_k(t)\rangle\langle\psi_k(t)| H \right] = \frac{1}{i\hbar}\left[H, \rho\right],

where the second line follows from the fact that the Schrödinger equation for the bra state vector \langle\psi_k(t)| is

\displaystyle -i\hbar \frac{\partial}{\partial t}\langle\psi_k(t)| = \langle\psi_k(t)| H.

Note that the equation of motion for \rho differs from the usual Heisenberg equation by a minus sign! Since \rho is constructed from state vectors, it is not an observable like other hermitian operators, so there is no reason to expect that its time evolution will be the same. The general solution to its equation of motion is

\displaystyle \rho(t) = e^{-iHt/\hbar}\, \rho(0)\, e^{iHt/\hbar}.

The equation of motion for \rho can be cast into a quantum Liouville equation by introducing the operator

\displaystyle iL(\cdots) = \frac{1}{i\hbar}\left[\cdots, H\right].

In terms of iL, it can be seen that \rho satisfies

\displaystyle \frac{\partial \rho}{\partial t} = -iL\rho, \qquad \rho(t) = e^{-iLt}\rho(0).

What kind of operator is iL? It acts on an operator and returns another operator. Thus, it is not an operator in the ordinary sense, but is known as a superoperator or tetradic operator (see S. Mukamel, Principles of Nonlinear Optical Spectroscopy, Oxford University Press, New York (1995)). Defining the evolution equation for \rho this way, we have a perfect analogy between the density matrix and the state vector. The two equations of motion are

\displaystyle \frac{\partial}{\partial t}|\psi(t)\rangle = -\frac{i}{\hbar} H |\psi(t)\rangle, \qquad \frac{\partial \rho}{\partial t} = -iL\rho.

We also have an analogy with the evolution of the classical phase space distribution f(x, p, t), which satisfies

\displaystyle \frac{\partial f}{\partial t} = -iL_{\rm cl} f,

with iL_{\rm cl} being the classical Liouville operator. Again, we see that the classical limit of the commutator (divided by i\hbar) is the classical Poisson bracket.

Mark Tuckerman, May 9, 2000
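As a sanity check on these formulas, here is a small numerical sketch (an addition to the notes, with hbar set to 1 and an arbitrarily chosen two-level Hamiltonian and density matrix): it propagates rho(t) = exp(-iHt) rho(0) exp(+iHt) and verifies the quantum Liouville equation against a finite difference.

import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)       # Hermitian Hamiltonian
rho0 = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)  # valid density matrix

def rho(t):
    """rho(t) = exp(-iHt/hbar) rho(0) exp(+iHt/hbar)."""
    U = expm(-1j * H * t / hbar)
    return U @ rho0 @ U.conj().T

# Check d(rho)/dt = (1/i hbar)[H, rho] against a centered finite difference.
t, dt = 0.3, 1e-6
lhs = (rho(t + dt) - rho(t - dt)) / (2 * dt)
rhs = (H @ rho(t) - rho(t) @ H) / (1j * hbar)
print(np.max(np.abs(lhs - rhs)))        # tiny: the two sides agree

# Unitary evolution preserves the trace (and hermiticity) of rho.
assert np.allclose(np.trace(rho(t)), 1.0)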
Minerals 2014, 4(2), 208-240; doi:10.3390/min4020208

Heath D. Watts 1,*, Lorena Tribe 2 and James D. Kubicki 1,3,*
1 Department of Geosciences, The Pennsylvania State University, University Park, PA 16802, USA
2 Division of Science, The Pennsylvania State University, Berks, Reading, PA 19610, USA
3 Earth and Environmental Systems Institute, The Pennsylvania State University, University Park, PA 16802, USA
* Authors to whom correspondence should be addressed; Tel.: +1-814-865-3951 (J.D.K.); Fax: +1-814-867-2378 (J.D.K.).
Received: 24 December 2013; in revised form: 25 February 2014 / Accepted: 6 March 2014 / Published: 27 March 2014

Abstract: A review of the literature about calculating the adsorption properties of arsenic onto mineral models using density functional theory (DFT) is presented. Furthermore, this work presents DFT results that show the effect of model charge, hydration, oxidation state, and DFT method on the structures and adsorption energies for AsIII and AsV onto Fe3+-(oxyhydr)oxide cluster models. Calculated interatomic distances from periodic planewave and cluster-model DFT are compared with experimental data for AsIII and AsV adsorbed to Fe3+-(oxyhydr)oxide models. In addition, reaction rates for the adsorption of AsV on α-FeOOH (goethite) (010) and Fe3+-(oxyhydr)oxide cluster models were calculated using planewave and cluster-model DFT methods.

Keywords: arsenic; density functional theory (DFT); kinetics; thermodynamics; adsorption; computational chemistry; planewave DFT; reaction rates; As—Fe bond distances

1. Introduction

1.1. Arsenic Chemistry, Geochemistry, Prevalence, and Toxicity

The study of arsenic (As) adsorption on mineral surfaces is necessary to understand both the distribution and mobility of As species in nature and to develop remediation strategies for As waste sites. Arsenic is found in a variety of geochemical environments, at aqueous concentrations varying from <0.5 to >5000 μg/L [1,2]. Natural and anthropogenically-mediated biogeochemical interactions among arsenic species, biota, and minerals can affect the distribution, mobility, and toxicity of As in the environment [2,3,4,5,6,7,8]. Although recent work has posited that arsenic could be a potential biochemical and astrobiological proxy for phosphorus during biological evolution [9], this hypothesis is controversial [10]. Arsenic can occur in both inorganic (iAs) and organic (oAs) forms; the chemical form, or species, of As, as well as its concentration, affects the solubility, mobility, reactivity, bioavailability, and toxicity of As [11,12]. iAs occurs predominantly as either arsenious acid (HnAsIIIO3n−3; sometimes called arsenous acid) in reducing environments, or arsenic acid (HnAsVO4n−3) in oxidizing environments, where n = 0, 1, 2, or 3 [12,13,14]. Both the oxidation and protonation states of iAs depend on the physiochemical conditions of the sample environment [2].
For example, the three pKa values for arsenic acid are 2.2, 6.9, and 11.5 [15]; therefore, the pH of the environment will affect the protonation state of H3AsO4, which will, in turn, affect the mobility, reactivity, and bioavailability of iAsV. In addition to iAs, methylated AsIII and AsV compounds such as monomethylarsonic acid (MMAV) and dimethylarsinic acid (DMAV) occur both naturally and due to anthropogenic sources [12,13]. Arsenic toxicity depends on the species present; a general trend of decreasing toxicity is: R3As > H3AsO3 > H3AsO4 > R4As+ > As0, where R is an alkyl group or a proton [12,16]. Arsenic originating from natural water-rock interactions of surface and groundwater [2,6] and from anthropogenic sources such as acid mine drainage [17,18] can lead to drinking water contamination. Biogeochemical processes result in organic As compounds accumulating in oil, shale, and coal [19]. DMA and MMA have been used as herbicides, pesticides, and defoliants, and pose a contamination problem to surface and groundwater [20]. Roxarsone (C6AsNH6O6) is used as a supplement in chicken feed and ends up in waste from poultry operations [21]. In addition to natural and anthropogenic groundwater contamination by As, anthropogenic As is contaminating the oceans, and subsequently seafood [16], which could have human health implications. Human diseases caused by As contamination include various cancers, liver disease, cardiovascular disease, and an increase in mortality from pulmonary tuberculosis [22,23,24,25,26,27,28]. Groundwater that contains As concentrations above the 10 μg/L limit set by the World Health Organization (WHO) places more than ten million people at risk of arsenicosis. Locales including southern Bangladesh, India, Argentina, Chile, and Vietnam have groundwater sources with As concentrations >10 μg/L [2,3,6,25,26,28,29,30,31,32,33]. The remediation of arsenic in aquifers can be challenging due to the size of contaminated groundwater systems and varying biogeochemical conditions. For instance, the shallow groundwater of the Chaco Pampean Plain of Argentina spans 10^6 km2 and contains 10 to 5300 μg As/L [34]. A variety of As remediation methods were reviewed recently [35,36]; these methods have shown varying success rates. For example, an in situ study of an alkaline aquifer showed relatively low adsorption of iAs [37]. Conversely, a permeable reactive barrier study showed As removal to <5 μg/L due to induced sulfate reduction and the presence of zero-valent Fe [38], and experiments with household sand filters in Vietnam were able to reduce As concentrations in drinking water to <10 μg/L with a 40% success rate [39]. Laboratory and field experiments with household zero-valent Fe filters showed similarly effective results in Bangladesh [40]. Field tests with Fe-coated zeolites showed 99% removal of As [41]. Mn-Fe oxide-coated diatomites were able to reduce iAs from >40 μg/L to <10 μg/L in a pilot field-scale study [42]. Although many of these studies show promise for field-scale remediation of As, the biogeochemistry, aqueous geochemistry, and mineralogy of the surface and groundwater can affect the efficiency of the methods; therefore, it is necessary to understand the As adsorption process more thoroughly, so that the remediation methods can be applied more effectively.
1.2. Arsenic Treatment Methods

Due to the prevalence of As contamination worldwide and the threat of arsenicosis [43], research into and the development of methods to understand As chemistry and to attenuate As in water are imperative [44]. Methods that have been developed to attenuate As include electrocoagulation and electrodialysis [45,46], treatment with microorganisms to affect the biogeochemical cycling of As [7], and the adsorption of As onto a variety of sorbents [47,48,49]. Among the sorbents used for As attenuation are organic polymers [50], and minerals such as dolomite [51], zeolites [52], and Al minerals such as alumina or gibbsite [53,54]. Alumina has been found to effectively remove iAsV [55], but it is necessary to use activated alumina to efficiently remove iAsIII from solution [56]. A multitude of studies on As adsorbents have been conducted using Fe-based mineral sorbents [57,58], such as magnetite [54,59,60,61,62,63], ferrihydrite [53,54,64,65,66], goethite [48,54,66,67], and zero-valent iron [62]. The focus of the current work is on the chemistry of iAs species adsorbed to Fe-oxide and Fe-hydroxide mineral surfaces. Prior experiments have found that inorganic [64] and organic [64,68,69,70] ligands may adsorb to Fe minerals and compete with As for adsorption sites, but Zn cations [59] may augment As adsorption to Fe sorbents. Therefore, if ligands that inhibit As adsorption can be removed or precipitated, and ligands that enhance As adsorption can be added to an As-containing site, it could be possible to develop improved As-remediation techniques. Arsenic adsorption may also be enhanced by the addition of magnetite to agricultural waste such as wheat straw [61], but adsorption of As can be reduced by microorganisms [7,66]. If it is possible to control the growth and metabolism of microorganisms present in As-containing water, then it could be possible to enhance As adsorption.

1.3. Studying As Adsorption with Experimental and Modeling Methods

Because of the complex and often uncharacterized biogeochemistry of arsenic-rich environments, it is useful and necessary to use a variety of experimental data and modeling results to characterize these complex matrices. For example, Figure 2 shows models of the monodentate, bidentate, and outer-sphere adsorption of HAsO42− to Fe-(oxyhydr)oxide clusters (Figure 2A–C) and periodic models (Figure 2D–F). Experimental and modeling techniques have been employed to determine the geometries, energetics, and spectroscopic properties exhibited by As species adsorbed to mineral surfaces; improved knowledge about the adsorption chemistry of As species can aid in the development of methods for attenuating As in the environment. Although the focus of our work is on the use of quantum mechanics (QM) techniques for studying the properties of As adsorption, it is necessary to frame these results in relationship to experimental and other modeling methods, because QM results can be useful for interpreting experimental data and for parameterizing other modeling methods such as classical force fields [71] and surface complexation models [72,73].

Figure 1. (A) Ferric iron (oxyhydr)oxide cluster model of Fe2(OH)6(H2O)4·4H2O; (B) monodentate mononuclear (MM) cluster model of Fe2(OH)5(H2O)4H2AsO4·4H2O; and (C) binuclear bidentate (BB) cluster model of Fe2(OH)4(H2O)4HAsO4·4H2O.
Figure 2. (A) and (D) monodentate mononuclear adsorption of HAsO42−; (B) and (E) bidentate binuclear adsorption of HAsO42−; and (C) and (F) outer-sphere complex of HAsO42−, on clusters of Fe3+-(oxyhydr)oxide (A–C) and periodic α-goethite (010) (D–F) models.

1.4. Studying As Adsorption with Experiments

Vibrational spectroscopy (e.g., Fourier transform infrared spectroscopy (FTIR) or Raman), X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectroscopies, as well as adsorption isotherm and kinetics experiments, are useful for determining the chemistry of As adsorption. FTIR and Raman studies are useful for determining the bonding configurations between As species and mineral surfaces. When As adsorbs as an inner-sphere complex, characteristic vibrational frequency shifts are observable [74,75,76,77,78,79]. XANES spectroscopy can provide information about the oxidation state of As that is adsorbed to mineral surfaces [80,81,82,83,84], and can determine if the oxidation state of As or the surface changes during the adsorption process [78,85]. EXAFS spectroscopy provides information about the coordination chemistry of adsorbed As [46,80,81,82,86,87,88,89,90,91,92]. Coordination state information is useful for determining whether the As adsorbs as a monodentate, bidentate, or outer-sphere complex (Figure 2). Moreover, studies that use two or more instrumental methods such as XANES/FTIR [78] or XANES/EXAFS [82] can increase the reliability of data interpretation. Furthermore, kinetics and isotherm studies provide information about the rates and energetics of As adsorption onto mineral surfaces that further help constrain adsorption mechanisms [55,93,94,95,96,97]. Although experimental techniques have contributed greatly to the understanding of As adsorption chemistry, computational chemistry can be used to help interpret experimental data on the mechanisms of As adsorption to mineral surfaces. In addition, computational chemistry can fill in missing information on the details of adsorption mechanisms and kinetics.

1.5. Studying As Adsorption with Mathematical Models

Surface complexation modeling (SCM) techniques such as the charge distribution multi-site complexation (CD-MUSIC) model [76,95,98,99,100,101,102,103,104,105,106], the extended triple-layer model [107], isotherm modeling [108,109], and the ligand and charge distribution (LCD) model [110,111] have been used to model As species in solution and adsorbed to mineral surfaces. In general, SCM approaches use experimental data or quantum mechanics results such as equilibrium constants, bond lengths, and surface charges to calculate the adsorption isotherms of As on Fe mineral surfaces. When integrated, the CD-MUSIC and LCD SCMs are able to model humic substances interacting with Fe surfaces and the effect they have on As adsorption [111,112]. SCM can be used to interpret interactions among As, mineral surfaces, and organic and inorganic ligands, as well as charge and protonation effects. However, the precision of these models depends on the quality of their parameterization, which is obtained either from experimental data that can be difficult to interpret or from QM results [72,73].

1.6. Studying As Adsorption with Quantum Mechanics Modeling Methods

Numerous studies applying QM, or specifically density functional theory (DFT) methods [113,114], use models similar to those shown in Figure 2A–C.
For example, DFT methods have been used to study the thermodynamics [91,115,116,117], vibrational frequencies [75,118,119,120,121], kinetics [120,121,122], ligand effects [87], oxidation-reduction reactions [123], and the coordination of adsorbed As [90,91,117,119,124,125,126,127,128,129,130,131,132]. Many of the previous computational chemistry studies relied on cluster models such as those in Figure 2A–C, but some groups have used periodic models to capture the chemistry of the mineral surface sorbates more precisely [124,125,127]. In this paper, we report structure and kinetics comparisons for the results from both cluster models (Figure 2A–C) and periodic models (Figure 2D–F). These comparisons are necessary to determine if and how the results from the molecular cluster and periodic model calculations differ. In addition to the use of cluster versus periodic models, other factors can affect the results obtained from DFT calculations. These factors include surface charge, hydration, model convergence criteria, and potentially the software used for the calculation. For example, prior studies used highly charged models [91] to study the adsorption of As to Fe clusters; however, the work presented here used neutral surface cluster models, because localized high charges are unlikely. Prior work using implicit solvation with the conductor-like screening model (COSMO) [133,134] suggests that a single water molecule can produce activation energy results that are more precise than the addition of multiple water molecules can [122]. Conversely, other studies have used anhydrous surfaces to model As adsorption to Fe clusters [91]. The work we present herein used both hydrated cluster models and hydrated periodic models in an attempt to model the natural environment of As adsorption more accurately and realistically. As the processing power and speed of computers increase, so do the size and complexity of As adsorption models, as illustrated by two Fe cluster models from papers dating from 2001 and 2006 [90,129]. The Fe cluster model in the latter paper is larger and likely more realistic than the model in the former paper. Prior DFT calculations provide contradictory results about the coordination state of adsorbed As. For example, there are DFT results that predict that monodentate [128], bidentate [90,91,119,129], or a mixture of As coordination states [130] occur on Fe-mineral surface models. Because both the experimental data and the DFT results provide ambiguous information about the As coordination state, further calculations and experiments are necessary to clarify this topic. Alternatively, As may adsorb in a variety of configurations depending upon which surfaces are present on a given mineral sample [135]. The convergence criteria of the energy minimization calculations for the models could also have an effect on the precision of the calculated results and the ability of the results to reproduce experimental data and provide insight about As adsorption chemistry. For example, we used a minimization convergence criterion of 0.03 kJ/mol, whereas Sherman and Randall [91] used a higher-tolerance energy minimization criterion of 5 kJ/mol; the tighter convergence criterion again reflects the availability and evolution of computational resources.
In the studies presented here, the thermodynamics, geometries, and kinetics of inner-sphere iAsIII and iAsV adsorbed as monodentate mononuclear (MM) or bidentate binuclear (BB) complexes to solvated Fe clusters were evaluated. The molecular cluster results show how hydration and the initial Fe cluster charge affect iAsIII and iAsV adsorption for BB models, and the results compare the adsorption energies of BB iAsIII and iAsV onto hydrated neutral Fe clusters. The DFT calculations on the cluster models also compare the calculated As—Fe distances for the BB models with pertinent experimental observations. In addition, inner- and outer-sphere As-Fe complexes were used to determine the activation energies (ΔEa) of the adsorption/desorption process; these calculations were performed using both molecular cluster models and periodic models of the α-FeOOH (goethite) (010) surface.

2. Methods

2.1. Applied Quantum Mechanics Background

The application of QM with quantum chemistry software allows one to calculate chemical properties such as thermodynamics, kinetics, molecular geometries, spectroscopic parameters, transition-state structures that might not be experimentally observable, and potentially hazardous chemistries [136,137,138]. Quantum chemistry calculations begin with an initial input of Cartesian coordinates for the molecule of interest, and these coordinates are subsequently allowed to change during energy minimization calculations. The bond lengths, bond angles, and dihedral angles of the model are systematically perturbed, followed by the calculation of the relative energy of the model. After each energy calculation, subsequent systematic perturbations of the model geometry take place until the force (F), where F = dE/dr and dE and dr are the change in the energy and the change in the model coordinates, respectively, converges to at or near "zero" (i.e., "zero" is defined as the convergence criterion chosen by the modeler). When F is zero, the model resides at a stationary point on a potential energy surface (PES). The user of computational chemistry software can specify the criterion for energy minimization convergence [136,139]. Subsequent calculation of the vibrational frequencies for the model determines the second derivative of energy with respect to atomic coordinates (d2E/dr2). If the calculated vibrational frequencies are all real (positive), then the model is at a PES minimum; if one vibrational frequency is imaginary (negative), then the model is at a transition state or PES maximum; if the model exhibits more than one imaginary frequency, then the model is unstable and a new input geometry should be used. Obtaining a PES minimum does not guarantee that the model is at a global minimum, only that the model is at a local minimum. Using multiple input models obtained from a conformational analysis can aid in the attainment of a globally minimized final geometry [136,139]. The calculation of the vibrational frequencies (d2E/dr2) also allows the calculation of thermodynamic properties such as enthalpy, Gibbs free energy, and entropy. If one imaginary frequency is present, the model is at a transition state, and the results from this model, the initial structure of the model, and the PES minimum model can be used to calculate rate constants for reactions. Note that throughout the manuscript the output from QM calculations is referred to as results, not data; we refer to the output from experiments as data.
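The force-convergence and curvature logic described above can be illustrated with a one-dimensional toy potential; the sketch below is a deliberately simplified analogue (not the workflow of any quantum chemistry package) in which F = dE/dr is driven toward a user-chosen "zero" and the sign of d2E/dr2 classifies the stationary point, mirroring the real/imaginary frequency test.

```python
# Minimal sketch: steepest-descent minimization on a 1-D potential energy
# curve, followed by a curvature test that distinguishes a PES minimum
# (positive d2E/dr2, i.e., all-real frequencies) from a transition state
# (negative d2E/dr2, i.e., one imaginary frequency).

def minimize_1d(energy, r0, step=0.01, f_tol=1e-6, max_iter=100000):
    h = 1e-5
    r = r0
    for _ in range(max_iter):
        force = (energy(r + h) - energy(r - h)) / (2 * h)  # F = dE/dr
        if abs(force) < f_tol:          # converged to a stationary point
            break
        r -= step * force               # move "downhill" along -F
    curvature = (energy(r + h) - 2 * energy(r) + energy(r - h)) / h**2
    kind = "PES minimum" if curvature > 0 else "transition state / maximum"
    return r, kind

if __name__ == "__main__":
    # A double-well toy potential with minima near r = +/-1 and a barrier
    # (transition state) at r = 0.
    E = lambda r: (r**2 - 1.0) ** 2
    print(minimize_1d(E, r0=0.8))   # converges to the minimum near r = 1
```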
The vibrational frequency calculation provides infrared and Raman frequencies for the model, and further calculations can provide NMR chemical shifts, UV-visible wavelengths, isotope effects, and temperature effects [136]. There are numerous methods available for calculating energies and other chemical properties with QM; among the most widely used are the Hartree-Fock (HF) method [140], Møller-Plesset perturbation (MP) theory [141], and density functional theory (DFT) [113,114]. All of these methods arose from the development of quantum mechanics and the Schrödinger equation (ĤΨ(r) = EΨ(r)), where Ĥ is the Hamiltonian operator, Ψ(r) is the wave function, and E is the energy of the model. If Ψ(r) is known for a model, it is possible to solve for E and thus obtain the molecular properties for any model of interest. The Schrödinger equation needs to be solved only for the electronic energy of the model, because the nuclei are far more massive and effectively stationary compared with the electrons [142]. Thus far, it has been impossible to solve the Schrödinger equation exactly for many-electron models, because Ψ(r) makes the solution of the equation untenable for larger molecules with a greater number of electrons; therefore, it has been necessary to develop approximate approaches to the Schrödinger equation such as HF and MP theory. HF theory minimizes the energy of each electron iteratively with respect to the average energy of the other electrons in a model [140]. The shortcoming of HF theory is that it does not account for electron correlation (repulsion) between the individual electrons in the model; using HF therefore overestimates the model energy and underestimates the model stability. MP theory accounts for electron correlation by systematically perturbing the molecular Hamiltonian [141]; however, the cost of using MP theory is prohibitive for models of geochemical interest. Furthermore, unlike HF theory, MP theory is not variational [143], so the calculated energy could be lower than the ground-state energy. Unlike HF and MP theories, which calculate electron interactions to obtain molecular energies, DFT calculates the electron density of the molecule to determine the energy [113,114,144]. The shortcoming of DFT is that the theory lacks a method for calculating the exact energy of the electron correlation and exchange term (Exc). Neglecting Exc, as HF theory does, or approximating Exc, as DFT does, introduces errors in the total energy of a given model. For DFT, the imprecise correlation energy results because DFT does not account exactly for the coulombic interaction (repulsion) between electrons with anti-parallel (opposite) spins, and the imprecise exchange energy results because DFT does not account exactly for the fact that electrons with parallel (same) spins cannot reside in the same orbital; neglecting or misestimating Exc therefore violates the Pauli exclusion principle. However, a variety of DFT methods have been developed to approximate Exc, and these methods can produce precise results [138,145]. Significantly, the computational cost of DFT calculations is substantially less than the cost of MP theory calculations, and the precision of DFT calculations is greater than that of HF calculations. Therefore, although DFT methods account imprecisely for Exc, they are preferable to HF and MP methods.
Many DFT methods are available, and a particular method could be useful for calculating one chemical property (e.g., energy) but not for calculating other properties (e.g., NMR chemical shifts); these differences in precision are due to the approximations used for Exc in the DFT methods [138,145]. Additional work is necessary to evaluate the efficacy of DFT methods for calculating properties such as adsorption energies, rate constants, and structures. Although DFT methods such as B3LYP [146,147] can provide accurate results for a variety of chemical properties, we suggest that computational geochemists begin to explore the use of other DFT methods that could provide improved results. In addition to using an electron correlation method such as MP or DFT, it is necessary to specify a basis set when using molecular orbital (MO) calculations [136,148,149,150,151]; planewave calculations [152,153,154,155,156,157,158], in contrast, do not use atom-centered basis sets. For DFT calculations on the clusters, the basis sets are equations that define atomic orbitals and are used in linear combinations to create molecular orbitals. In a minimal basis set, each occupied atomic orbital of a given atom is represented by one basis function [149], so for the C atom, which has six electrons and an electronic configuration of 1s22s22p2, it is necessary to have a minimum of five basis functions (i.e., 1s, 2s, 2px, 2py, and 2pz). Note that because the p-orbitals are energetically degenerate, the 2px, 2py, and 2pz are all present as basis functions. The variational principle of quantum mechanics states that Eg ≤ ⟨Ψ(r)∣Ĥ∣Ψ(r)⟩ for a normalized wave function, where Eg is the ground-state (lowest) energy of the molecule. The variational principle shows that increasing the accuracy of Ψ(r) will increase the accuracy of the calculated E relative to Eg; increased basis set size can increase the accuracy of Ψ(r). Basis set size can be increased by using double-zeta (DZ) or triple-zeta (TZ) basis sets, which double or triple the number of basis functions used relative to the minimal basis set [148]. For C, the DZ and TZ basis sets have ten and fifteen basis functions, respectively. Another method for increasing the accuracy of a basis set is to use split-valence basis sets, where more basis functions are used for the bond-forming valence electrons, while the non-bond-forming core electrons are treated with minimal basis functions. For C, this would involve one 1s function and two each of the 2s, 2px, 2py, and 2pz functions, for a total of nine basis functions. Further increases in basis set size and precision can be obtained by the addition of polarization and diffuse basis functions [148]. Polarization functions add higher angular momentum functions to a particular atomic orbital; for example, in methane each H atom has a 1s orbital, but in the C-H bonds the H atoms receive some p-orbital character from the C atom; therefore, including p-polarization functions on the H atoms increases the accuracy of the orbital description. In addition, diffuse basis functions are used to increase the radial extent within which electrons can reside. Diffuse basis functions are useful for calculations with anions and for weak interactions such as van der Waals interactions.

2.2. Molecular Orbital Theory Calculations with Fe Clusters

For the cluster model DFT calculations, all models were constructed in Materials Studio (Accelrys Inc., San Diego, CA, USA), and the energy minimization, Gibbs free energy, and transition-state calculations were performed in the gas phase using Gaussian 09 software [136].
All energy minimization calculations on the cluster models were performed without symmetry or atomic constraints. Energy minimization, frequency, and kinetics (transition state) calculations were performed using the hybrid density functional B3LYP with the 6-31G(d) basis set [148,149,150,151]. B3LYP accounts for Exc, and the 6-31G(d) basis set is a DZ split-valence basis set with d-polarization functions on the non-H atoms. Energy convergence was set to 0.03 kJ/mol during the energy minimization calculations. The frequency calculations using B3LYP/6-31G(d) ensured that each model was at either a potential energy minimum (no imaginary frequencies) or a transition state (one imaginary frequency) [136]; however, the frequency calculation does not ensure that the model is at a global energy minimum. For Gaussian calculations, it is necessary to specify an electron correlation method (e.g., B3LYP), a basis set, the type of calculation (e.g., optimization), the Cartesian coordinates of the atoms in the model, the charge of the model, and the spin multiplicity of the model [136]. The electron configuration of Fe3+ is [Ar]3d5. For the energy minimization calculations, we used high-spin Fe3+, where each 3d electron occupies one of the five d-orbitals and the electrons are all either spin up or spin down; this means that each of the two Fe atoms in the cluster has five unpaired electrons. The multiplicity is 2S + 1, where S is the total spin and each unpaired electron contributes a spin of +½ or −½. Therefore, for our high-spin clusters there are 10 unpaired electrons, each having a spin of +½, so the multiplicity = 2(10/2) + 1 = 11. For the rate constant calculations, we experimented by using high-spin multiplicity for both Fe atoms (i.e., 11) and a combination of up spin for one Fe atom (i.e., +5) and down spin for the other (i.e., −5). The Fe model surface complex clusters were designed to minimize surface charge, because charge buildup on the actual mineral surfaces is believed to be relatively small. However, surface models with charges were also calculated to demonstrate the effect of model charge on the calculated energetics and to compare with prior studies that used charged models [91]. In addition, explicit hydrating H2O molecules were included for the aqueous species, the surface Fe-OH groups, and the adsorbed arsenic acid (iAsV) molecules. Figure 1 shows examples of a tetrahydrated Fe-(oxyhydr)oxide cluster, Fe2(OH)6(H2O)4·4H2O: (A) without adsorbed HAsO42−; (B) with monodentate mononuclear (MM) adsorbed HAsO42−; and (C) with bidentate binuclear (BB) adsorbed HAsO42−. Hydration can be key to accurately predicting the structures and frequencies of anionic surface complexes because both are a function of the hydration state of the sample [75,157]. The interatomic distances calculated in each model surface complex were compared to As-O bond lengths and As—Fe distances obtained from EXAFS spectra [46,80,81,82,86,87,88,89,90,91,92]. Gibbs free energies (G) of each species were estimated by calculating G in a polarized continuum with the permittivity of water (78.4) at 298.15 K. The integral equation formalism of the polarized continuum model (IEFPCM) [159] was used to calculate the Gibbs free energy in solution. The polarized continuum places the model in the cavity of a field with a constant permittivity, in this case that of water; this field is used as a proxy for the solvent of interest.
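As a concrete illustration of the input requirements described earlier in this section (method, basis set, job type, charge, multiplicity, and coordinates), the minimal Python sketch below assembles a Gaussian 09 input file. The route section mirrors the B3LYP/6-31G(d) optimization and frequency settings used here, but the coordinates are placeholders rather than a real Fe cluster geometry, and the helper functions are illustrative rather than part of any Gaussian toolkit.

```python
# Minimal sketch: building a Gaussian 09 input of the kind described above.
# The multiplicity is computed as 2S + 1 for the ten unpaired high-spin
# Fe3+ d electrons; the atoms listed are placeholders, not a real cluster.

def multiplicity(n_unpaired):
    """2S + 1, with S = n_unpaired * 1/2."""
    return int(2 * (n_unpaired * 0.5) + 1)

def gaussian_input(route, title, charge, mult, atoms):
    lines = [route, "", title, "", f"{charge} {mult}"]
    lines += [f"{el:2s} {x:12.6f} {y:12.6f} {z:12.6f}" for el, x, y, z in atoms]
    lines.append("")  # Gaussian inputs end with a blank line
    return "\n".join(lines)

if __name__ == "__main__":
    mult = multiplicity(10)  # two high-spin Fe3+, 5 unpaired e- each -> 11
    print(gaussian_input(
        route="#P B3LYP/6-31G(d) Opt Freq",  # minimization + frequencies
        title="Fe2 (oxyhydr)oxide cluster (placeholder coordinates)",
        charge=0,
        mult=mult,
        atoms=[("Fe", 0.0, 0.0, 0.0), ("Fe", 0.0, 0.0, 3.0)],
    ))
```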
Single-point energy calculations were performed with the B3LYP functional and the 6-31+G(d,p) basis set [148,149,150,151], which adds diffuse functions on the non-H atoms and d- and p-polarization functions on the non-H atoms and H atoms, respectively; this larger basis set was used in order to improve the accuracy of the energy relative to that calculated with B3LYP/6-31G(d) during the energy minimizations, which is standard practice. The ΔGads was then determined from stoichiometrically balanced reactions. Configurational entropy terms are neglected in this approach; hence, we emphasize that these are Gibbs free energy estimates. We do not expect the precision of ΔGads to be better than ±10 kJ/mol. To illustrate the potential effect that a chosen DFT method can have on geometry and thermodynamics, we report the As—Fe, As-OH, As=O, and As-OFe bond distances for the Fe2(OH)4(OH2)4HAsO4·4H2O model and ΔGads for the reaction:

H2AsO4−·8H2O + Fe2(OH)6(OH2)4·8H2O → Fe2(OH)4(OH2)4HAsO4·4H2O + OH−·4H2O + 5H2O

For this reaction, we used the B3LYP, PBE0 [160,161,162], or M06-L [163] DFT method to minimize each structure in the reaction with the 6-31G(d) basis set, and the 6-311+G(d,p) basis set with the self-consistent reaction field (SCRF) IEFPCM and the solvent water to calculate the single-point energy of each structure. The PBE0 and M06-L methods were chosen for comparison with the results from the often-used B3LYP method, because the PBE0 functional was the method used for the periodic planewave calculations in this work, and because the M06-L method was specifically parameterized for use with transition metals such as Fe [162]. For the transition-state calculations on the clusters, the outer-sphere complexes of AsO43− on an Fe3+-(oxyhydr)oxide cluster were obtained by constraining one Fe—As distance and allowing all other atoms to relax. The constrained distance was increased incrementally, allowing for energy minimization of the system at each new distance, and the reaction path was visualized graphically. The changes in energy for the adsorption reactions (ΔEads) were inferred by using the total electronic energy plus the zero-point correction obtained from the inner-sphere frequency calculations [163].

2.3. Planewave Calculations Using α-FeOOH (010)

The starting configuration for the periodic bidentate binuclear HAsO42− on the goethite (α-FeOOH) (010) surface was taken from previous simulations of HPO42− on the same surface [135]. Phosphate and arsenate structures and chemistries are similar, so this starting configuration is a reasonable approximation. An energy minimization was performed on this starting configuration to allow the atoms to relax as necessary for the As-for-P substitution. Energy minimizations were carried out with the lattice parameters constrained to the experimental values (9.24 × 9.95 Å) [164] and with a vacuum gap between surface slabs of 10 Å. The model stoichiometry was 24FeOOH, HAsO42−, 29H2O, and 2H3O+ (Fe24O83H89As). The small model system size and the high percentage of H+ per H2O molecule severely limit the realism of the model compared to experimental systems, so the results from these calculations should be considered exploratory of model system behavior rather than an accurate portrayal of arsenate adsorption thermodynamics and kinetics. Projector augmented-wave planewave calculations [152,165] with the Perdew-Burke-Ernzerhof (PBE0) exchange correlation functional [160] were performed with the Vienna Ab-initio Simulation Package (VASP 5.2) [153,154,155,156,157].
The PAW potentials used were Fe_pv (14 valence e−), O (6 valence e−), H (1 valence e−), and As (5 valence e−), as labeled in the VASP potential library. Energy cut-offs (ENCUT in VASP input files) of 500 eV and 400 eV were used for energy minimizations and molecular dynamics simulations, respectively. The precision of the self-consistent field calculation of the electron density was PREC = Accurate (700 eV; ROPT = −2.5 × 10−4) for energy minimizations and PREC = Medium (700 eV; ROPT = −2.5 × 10−3) for molecular dynamics simulations. The PREC flag determines the energy cutoff (ENCUT) when no value is given for ENCUT in the central input file of VASP, the INCAR, and the ROPT tag controls the real-space optimization. The lower accuracy of the molecular dynamics (MD) simulations was chosen for practical reasons: thousands of configurations and their energies need to be calculated for MD simulations, so the less stringent electron density grid speeds up the energy calculation at each step. The assumption here is that although the MD simulations are less accurate, they are not dramatically in error for predicting atomic structure. Thus, the MD simulations can be used to relax the atomic positions to achieve an approximate configuration and energy, and then energy minimizations can be performed to obtain structures and energies that are more precise. Without the MD simulations, the likelihood of the energy minimizations becoming trapped in a local potential energy minimum is much greater. Periodic DFT calculations were run with one k-point generated with the Monkhorst-Pack scheme. The DFT+U correction [166,167] was employed with U = 4 eV for Fe and 0 eV for all other elements. Spin states were ordered according to the experimentally observed magnetic ordering of goethite [168]. These selections have worked reasonably well in a previous study on goethite and goethite-water [73]. No dispersion corrections were employed in these calculations, although the results of DFT calculations of adsorption onto mineral surfaces may be affected by van der Waals forces and how they are approximated [86]. The MD simulations were run at a temperature of 300 K maintained by the Nosé-Hoover thermostat [169]. Time steps were 0.5 fs (POTIM = 0.5); POTIM is a time-step variable whose meaning depends on the type of calculation being performed. Note that because some DFT methods may over-structure and effectively freeze water at 300 K, some authors have used higher temperatures to overcome this problem [170]. Another method is to use D instead of H [171], so that a 1 fs time step can be used instead of a 0.5 fs time step. Both are accepted practices, but we prefer to use the actual temperature and H atoms. This may cause error, but these errors are intrinsic to the method and as such are not different in character from other computational uncertainties. Introducing errors by giving the atoms extra kinetic energy or mass may mask the discrepancies with experiment; instead, highlighting these discrepancies is important and points to the need to improve computational methods. Calculation of the periodic surface complex structures also allows for the creation of more realistic molecular clusters that are surface specific. Figure 3 shows how an extended cluster (Figure 3B) can be extracted from the periodic model (Figure 3A). By selecting all the O atoms bonded to the Fe atoms connected to the As surface complex, a molecular cluster is created that retains surface-specific structure.
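A minimal sketch of the selection step just described is given below (the H+ termination of the edge O atoms follows in the text). The coordinates, elements, and the 2.3 Å bond-distance cutoff in the example are illustrative placeholders, not values from this work; a simple distance criterion stands in for whatever bond-detection scheme one prefers.

```python
# Minimal sketch of the cluster-extraction logic described above: find the
# Fe atoms linked to the adsorbed As through As-O-Fe bridges, then keep
# every O bonded to those Fe. The 2.3 A cutoff is a placeholder bond
# criterion; edge O atoms would subsequently be terminated with H+.
import numpy as np

def extract_cluster(elements, coords, cutoff=2.3):
    """Return indices of As + As-bound O + bonded Fe + O bonded to those Fe."""
    coords = np.asarray(coords, dtype=float)
    elements = np.asarray(elements)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    as_idx = np.where(elements == "As")[0]
    # O atoms within bonding distance of As (the As-O-Fe bridges):
    o_on_as = np.where((elements == "O") &
                       (dist[as_idx].min(axis=0) < cutoff))[0]
    # Fe atoms bonded to those bridging O atoms:
    fe_idx = np.where((elements == "Fe") &
                      (dist[o_on_as].min(axis=0) < cutoff))[0]
    # All O atoms bonded to the selected Fe atoms:
    o_on_fe = np.where((elements == "O") &
                       (dist[fe_idx].min(axis=0) < cutoff))[0]
    return sorted(set(as_idx) | set(o_on_as) | set(fe_idx) | set(o_on_fe))

if __name__ == "__main__":
    els = ["As", "O", "Fe", "O", "H"]
    xyz = [(0, 0, 0), (1.7, 0, 0), (3.5, 0, 0), (5.0, 0, 0), (9, 9, 9)]
    print(extract_cluster(els, xyz))  # -> As, bridging O, Fe, Fe-bound O
```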
The O atoms at the edge of the cluster are then terminated with H+ in order to satisfy valence and adjust the overall charge of the cluster as desired. Positions of hydrating H2O molecules can be included in the extended cluster (Figure 3B) to better mimic the aqueous phase. The combination of periodic and molecular cluster DFT results can take advantage of the strengths of each approach. For example, the periodic calculations should provide more accurate surface structures and adsorption energies, but molecular cluster models can be used to predict IR, Raman, and NMR spectra [135].

Figure 3. (A) Periodic model of goethite and (B) an extended cluster extracted from the periodic model.

3. Results and Discussion

The molecular cluster results show how the initial cluster charges affect the Gibbs free energy of adsorption (ΔGads) of iAsV and iAsIII and how hydration affects ΔGads for the iAsV models, and they compare the adsorption energies of triprotic iAsIII and of monoprotic or diprotic iAsV onto hydrated neutral Fe clusters. The molecular cluster calculations also compare the calculated As—Fe distances with EXAFS data. In addition, calculations to determine the rate constant for iAsV adsorption on Fe clusters and periodic models are discussed.

3.1. Effect of Cluster Charge on ΔGads

Reactions (1)–(7) in Table 1 show the ΔGads of H3AsO3 (Reactions (1) and (2)), HAsO42− (Reactions (3) and (4)), and H2AsO4− (Reactions (5)–(7)) onto either a neutral (Fe2(OH)6(OH2)40) or a +4-charged (Fe2(OH)2(OH2)84+) Fe cluster. Reaction (7) is stoichiometrically equivalent to one previously reported; however, for our calculation we used an energy minimization convergence criterion of 0.03 kJ/mol, whereas Sherman and Randall [91] used 5 kJ/mol. Significantly, Sherman and Randall [91] reported that the reaction:

Fe2(OH)2(OH2)6H2AsO43+ (BB) + H2O → Fe2(OH)2(OH2)7H2AsO43+ (MM)

was endothermic and required +95 kJ/mol of energy; however, our results predict that this conversion would require +17 kJ/mol of Gibbs free energy. Both calculations predict that the BB structure is energetically favorable, but our results show that the energy difference between the BB and MM structures is not as large and that these structures could co-exist in nature. The possibility of the presence of both BB and MM agrees with prior research [130], but the lower ΔGads of the BB is inconsistent with the claim that MM adsorption is dominant [128]. The methodologies used to calculate the conversion of BB to MM could account for the calculated energy differences; the methods differ by:
• Model convergence criteria;
• Implicit solvation (our work) versus gas-phase results (Sherman and Randall [91]);
• Electronic energies (ΔE) from Sherman and Randall [91] versus ΔG for our work;
• Use of a single, gas-phase minimized H2O model to balance Equation (2) [91], whereas we used 1/8 the energy of an implicitly solvated model with eight H2O molecules.
Throughout this work, unless otherwise noted, we used explicitly hydrated models for all of the reactants and products in the energy minimization calculations, and those same models for the implicitly solvated (i.e., IEFPCM) single-point energy calculations.

Table 1. Effect of Fe cluster charge on the ΔGads of iAsIII and iAsV.
Reaction (1): H3AsO3 + Fe2(OH)6(OH2)40 → Fe2(OH)4(OH2)4HAsO3 + 2H2O; ΔGads = −60 kJ/mol
Reaction (2): H3AsO3 + Fe2(OH)2(OH2)84+ + 16H2O → Fe2(OH)2(OH2)6HAsO32+ + 2H3O+·8H2O; ΔGads = −159 kJ/mol
Reaction (3): HAsO42− + Fe2(OH)6(OH2)40 + 8H2O → Fe2(OH)4(OH2)4HAsO4 + 2OH−·4H2O; ΔGads = +14 kJ/mol
Reaction (4): HAsO42− + Fe2(OH)2(OH2)84+ → Fe2(OH)2(OH2)6HAsO42+ + 2H2O; ΔGads = −263 kJ/mol
Reaction (5): H2AsO4− + Fe2(OH)6(OH2)40 + 3H2O → Fe2(OH)4(OH2)4HAsO4 + OH−·4H2O; ΔGads = −309 kJ/mol
Reaction (6): H2AsO4− + Fe2(OH)2(OH2)64+ + 7H2O → Fe2(OH)2(OH2)6HAsO42+ + H3O+·8H2O; ΔGads = −336 kJ/mol
Reaction (7): H2AsO4− + Fe2(OH)2(OH2)84+ → Fe2(OH)2(OH2)6H2AsO43+ + 2H2O; ΔGads = −338 kJ/mol

Comparing Reactions (1) with (2), (3) with (4), and (5) with (6) shows that the adsorption of iAs onto the more highly charged clusters is more energetically favorable. However, it is unlikely that a +4 localized charge would occur in nature, and the ΔGads results for the neutral Fe clusters, which predict exergonic adsorption of H3AsO3 and endergonic adsorption of HAsO42−, are more realistic. The adsorption of H2AsO4− is energetically favorable regardless of the initial Fe cluster charge (Reactions (5)–(7)).

3.2. Effect of Fe Cluster Hydration on ΔGads for Anhydrous and Octahydrated H2AsO4−

Reactions (8)–(13) in Table 2 give examples of the effect of neutral Fe cluster hydration on the ΔGads of H2AsO4−. The reactions differ only by the number of H2O molecules of hydration that are present on the initial Fe cluster (i.e., 0, 4, 8, and 8 for Reactions (8)–(11), respectively) and on the HAsO42−-Fe cluster (i.e., 0, 4, 4, and 8 for Reactions (8)–(11), respectively). Reactions (12) and (13) used octahydrated H2AsO4− as the reactant. Note that we used a tetrahydrated hydroxide model to mass and charge balance Reactions (8)–(11).

Table 2. Effect of Fe cluster hydration (Reactions (8)–(11)) and iAsV hydration (Reactions (12) and (13)) on the ΔGads of iAsV.

Reaction (8): H2AsO4− + Fe2(OH)6(OH2)4 + 3H2O → Fe2(OH)4(OH2)4HAsO4 + OH−·4H2O; ΔGads = −186 kJ/mol
Reaction (9): H2AsO4− + Fe2(OH)6(OH2)4·4H2O + 3H2O → Fe2(OH)4(OH2)4HAsO4·4H2O + OH−·4H2O; ΔGads = −195 kJ/mol
Reaction (10): H2AsO4− + Fe2(OH)6(OH2)4·8H2O → Fe2(OH)4(OH2)4HAsO4·4H2O + OH−·4H2O + H2O; ΔGads = −217 kJ/mol
Reaction (11): H2AsO4− + Fe2(OH)6(OH2)4·8H2O + 3H2O → Fe2(OH)4(OH2)4HAsO4·8H2O + OH−·4H2O; ΔGads = −223 kJ/mol
Reaction (12): H2AsO4−·8H2O + Fe2(OH)6(OH2)4·4H2O → Fe2(OH)4(OH2)4HAsO4·4H2O + OH−·4H2O + 5H2O; ΔGads = −64 kJ/mol

As the number of H2O molecules of hydration increases from Fe2(OH)6(OH2)4 to Fe2(OH)6(OH2)4·8H2O (Reactions (8)–(11)), the ΔGads of H2AsO4− becomes more negative. Reactions (10) and (11) differ only by the number of H2O molecules present on the product HAsO42−-Fe cluster, four for Reaction (10) and eight for Reaction (11). The ΔGads for Reactions (10) and (11) differ by 6 kJ/mol, which is less than the ±10 kJ/mol error associated with the thermodynamics calculations, so the results are indistinguishable. Using eight hydrating H2O molecules on the HAsO42−-Fe cluster rather than four significantly increased the time needed to minimize the model. Significantly, Reactions (12) and (13), where iAsV is present as octahydrated H2AsO4−, exhibit ΔGads values that are likely more realistic than the ΔGads values for anhydrous H2AsO4− (Reactions (8)–(11)). We used the anhydrous H2AsO4− reactant in Reactions (8)–(11) because those calculations focus on the hydration state of the clusters and are used here as a teaching tool. To attain results that are meaningful with respect to nature and experimental conditions, all of the products and reactants should be hydrated.
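For readers who wish to reproduce the bookkeeping behind Tables 1 and 2, the sketch below computes ΔGads as the difference between the summed free energies of the products and reactants of a stoichiometrically balanced reaction; the free-energy values are hypothetical placeholders, not the computed values from this work.

```python
# Minimal sketch of the reaction-energy bookkeeping used throughout this
# section: dG_ads = sum G(products) - sum G(reactants). The single-point
# Gibbs free energies below (in hartree) are placeholders for illustration.

HARTREE_TO_KJ = 2625.5  # 1 hartree in kJ/mol

G = {  # hypothetical per-species Gibbs free energies (hartree)
    "H2AsO4-":                   -1000.000,
    "Fe2(OH)6(OH2)4.8H2O":       -2000.000,
    "Fe2(OH)4(OH2)4HAsO4.4H2O":  -2700.000,
    "OH-.4H2O":                   -223.600,
    "H2O":                         -76.420,
}

def delta_g_ads(reactants, products):
    """reactants/products: lists of (stoichiometric coefficient, species)."""
    g = lambda side: sum(n * G[s] for n, s in side)
    return (g(products) - g(reactants)) * HARTREE_TO_KJ

if __name__ == "__main__":
    # Stoichiometry mirrors Reaction (10) above; numbers are placeholders.
    dg = delta_g_ads(
        reactants=[(1, "H2AsO4-"), (1, "Fe2(OH)6(OH2)4.8H2O")],
        products=[(1, "Fe2(OH)4(OH2)4HAsO4.4H2O"), (1, "OH-.4H2O"),
                  (1, "H2O")],
    )
    print(f"dG_ads = {dg:.1f} kJ/mol")  # not a value from Table 2
```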
Recently [122], a claim was made that using a single explicit H2O molecule and implicit solvation with the self-consistent reaction field (SCRF) COSMO [133,134] could produce results superior to those obtained using multiple H2O molecules. This argument is based on calculations of small, monohydrated organic and inorganic molecules in the COSMO SCRF that showed better agreement with experiment when a single H2O molecule, rather than multiple H2O molecules, was used to hydrate the models [172]. However, the work that modeled iAsV interacting with ferric hydroxide clusters assumed that the results for simple organic and inorganic molecules would be applicable to the cluster calculations [122]; models with more than one H2O molecule were not tested. One argument for the addition of multiple H2O molecules is that the model would better approximate the aqueous environment in which As adsorption occurs. However, if the multiple H2O molecules are arranged in a way that does not lead to an observable PES minimum (i.e., the model becomes trapped in a local minimum), then using more than one H2O molecule could lead to errors. On the other hand, the results for Reactions (12) and (13) suggest that multiple H2O molecules of hydration for all products and reactants could lead to results that are chemically more realistic than reactions with anhydrous reactants and products. Furthermore, chemical properties such as vibrational frequencies [75] are dependent on hydrogen bonding, so the inclusion of additional explicit H2O molecules could be necessary to calculate precise spectroscopic and structural results. Our work used the IEFPCM, not the COSMO reaction field used by Farrell and Chaudhary [122]. A particular quantum chemistry method, such as an SCRF, DFT method, or basis set, can be useful for precisely calculating particular chemical properties, such as energies, but may yield imprecise results for other properties. In this instance, the COSMO SCRF was parameterized to work most successfully with limited explicit hydration, but other SCRFs such as the IEFPCM may require the addition of more H2O molecules to obtain precise results. Furthermore, particular DFT methods have been developed that are useful for calculating energies, geometries, and kinetics [138,173,174], whereas other DFT methods are useful for calculating spectroscopic properties such as NMR chemical shifts for H and C [175,176]. Because the exact Exc for DFT is unknown, it is not yet possible to use one DFT method to calculate every chemical property. Therefore, when doing DFT calculations it is necessary to read the literature to find methodologies that are efficient for calculating the chemical properties of interest and to be willing to experiment with variations of those methods if the calculated results are imprecise when compared with experimental data. This procedure is similar to the one an experimentalist follows when deciding how to study a chemical system of interest.

3.3. Effect of As Oxidation State and DFT Method on ΔGads

The reactions in Table 3 show the ΔGads of octahydrated H3AsO3, HAsO42−, and H2AsO4− onto tetra- and octahydrated Fe2(OH)6(OH2)40 clusters. The ΔGads for iAsIII is more favorable than that for iAsV; this is fortunate, if correct, because iAsIII is more toxic than iAsV. The results for the preferential adsorption of iAsIII over iAsV at the point of zero charge for the Fe cluster are supported by experimental data that show the same trend [101].
Under acidic or basic conditions, the Fe clusters would have charges, and neutral H3AsO3 might not adsorb to Fe surfaces as favorably. Conversely, although these reactions show less favorable adsorption of octahydrated HAsO42− and H2AsO4− to the neutral Fe cluster than for iAsIII, under acidic or basic conditions when the cluster is charged, the charged iAsV ions could adsorb more favorably than uncharged iAsIII. Although the products and reactants are anhydrous, Reactions (1)–(7) support this assertion: neutral H3AsO3 adsorbs more strongly to the neutral cluster (Reaction (1)) than to the +4-charged cluster (Reaction (2)), whereas the charged iAsV reactants in Reactions (3)–(7) adsorb more strongly to the +4-charged clusters than they do to the neutral clusters. These results, which show weaker adsorption of H3AsO3 and stronger adsorption of charged iAsV to charged Fe clusters, are also supported by experimental data [101]. Furthermore, the results for Reactions (14) and (16) show that HAsO42− would adsorb less favorably to the neutral Fe cluster than does H2AsO4−. Note that a tetrahydrated hydroxide model was used to mass and charge balance Reactions (14) and (16); the hydroxide was not necessary to mass and charge balance Reactions (12) and (13).

Table 3. Effect of As oxidation state on BB adsorption to neutral Fe clusters. For Reaction (16), the density functional theory (DFT) methods used to calculate ΔGads were: a, B3LYP; b, PBE0; and c, M06-L.

Reaction (12): H3AsO3·8H2O + Fe2(OH)6(OH2)4·4H2O → Fe2(OH)4(OH2)4HAsO3·4H2O + 10H2O; ΔGads = −124 kJ/mol
Reaction (13): H3AsO3·8H2O + Fe2(OH)6(OH2)4·8H2O → Fe2(OH)4(OH2)4HAsO3·4H2O + 10H2O; ΔGads = −146 kJ/mol
Reaction (14): HAsO42−·8H2O + Fe2(OH)6(OH2)4·4H2O → Fe2(OH)4(OH2)4HAsO4·4H2O + 2OH−·4H2O; ΔGads = +15 kJ/mol
Reaction (16): H2AsO4−·8H2O + Fe2(OH)6(OH2)4·4H2O → Fe2(OH)4(OH2)4HAsO4·4H2O + OH−·4H2O + 5H2O; ΔGads = −64 a, −35 b, −3 c kJ/mol

For Reaction (16), we compared the ΔGads calculated with the B3LYP, PBE0, and M06-L DFT methods. The ΔGads results from B3LYP (−64 kJ/mol), PBE0 (−35 kJ/mol), and M06-L (−3 kJ/mol) all predict favorable, exergonic reactions. These results show that calculated thermodynamic results depend on the chosen DFT method. Because the adsorption of iAsV is experimentally observed over a wide pH range [101], the results from B3LYP, PBE0, or M06-L could be correct. Thermodynamic results from DFT calculations are typically precise within ±10 kJ/mol; therefore, the B3LYP results would range from −74 to −54 kJ/mol and the PBE0 results from −45 to −25 kJ/mol. These ranges do not overlap, so the B3LYP and PBE0 results differ beyond their nominal precision. Within the ±10 kJ/mol error range, the M06-L results would range from −13 to +7 kJ/mol; therefore, because iAsV adsorption is known to be favorable, the M06-L results could be erroneous.

3.4. As—Fe Distance and As-O Bond Length Data from Experiments Compared with Cluster and Periodic Model Results

Table 4 shows the BB As—Fe distances and As-O bond distances calculated in this work for AsIII and AsV, the calculated BB results of Sherman and Randall [91] for AsV, and EXAFS data for AsIII and AsV on four mineral surfaces. Notably, the Fe2(OH)2(OH2)6H2AsVO43+ (BB) model [91] exhibits four As-O bonds, two of which are As-OH bonds that are 1.73 Å. Sherman and Randall [91] observed a 1.62–1.64 Å As-O bond with EXAFS that is not present in their models, in which H2AsO4− has two As-OH single bonds to the As atom.
We argue that the 1.62–1.64 Å As-O bond is an As-O double bond (As=O), which our calculations predict to be 1.63 Å when HAsO42− is adsorbed to the Fe surface (model Fe2(OH)4(OH2)4HAsVO4 (BB)), rather than an As-OH single bond, and that iAsV is not present on Fe surfaces as H2AsO4− with two As-OH single bonds. However, we also note that the Fe2(OH)4(OH2)4HAsVO4 (BB) model is not hydrated with explicit H2O molecules and that, when the model is hydrated with either four or eight explicit H2O molecules, the As=O bond length increases from 1.63 to 1.67 Å, which is slightly longer than the observed 1.62–1.64 Å As=O length but still shorter than the 1.73 Å bond lengths from the model of Sherman and Randall [91]. In Table 4, we report the results for the +3-charged model that was energy minimized using the methods described in this work. The As-OFe and As-OH bond distances calculated here differ by 0.01 Å from those reported by Sherman and Randall [91], whereas the As—Fe distances calculated here are both 0.05 Å shorter than those reported by Sherman and Randall [91]. The difference in the As—Fe distances likely arises from differences in methodology, but the results from both models lie within the range of experimental uncertainty.

Table 4. As—Fe interatomic distance, As-OFe bond distance, and As-O bond distance results from this work compared with previous calculations and extended X-ray absorption fine structure (EXAFS) data for AsV and AsIII adsorption onto ferrihydrite (Fh), hematite (Hm), goethite (Gt), and lepidocrocite (Lp). For Fe2(OH)4(OH2)4HAsVO4·4H2O (BB), the DFT methods used were: x, B3LYP; y, PBE0; and z, M06-L. All values are in Å.

AsV complexes:
Fe2(OH)4(OH2)4HAsVO4 (BB): As=O 1.63
Fe2(OH)4(OH2)4HAsVO4·4H2O (BB): As—Fe 3.20 x, 3.19 y, 3.08 z and 3.28 x, 3.24 y, 3.29 z; As-OFe 1.69 x, 1.68 y, 1.69 z and 1.72 x, 1.71 y, 1.72 z; As-OH 1.76 x, 1.75 y, 1.79 z; As=O 1.67 x, 1.66 y, 1.65 z
Fe2(OH)4(OH2)4HAsVO4·8H2O (BB): As—Fe 3.30 and 3.30; As-OFe 1.70 and 1.70; As-OH 1.76; As=O 1.67
Goethite (010) periodic model (BB): As—Fe 3.56 and 3.68; As-OFe 1.72 and 1.72; As-OH 1.78; As=O 1.73
Fe2(OH)2(OH2)6H2AsVO43+ (BB) a: As—Fe 3.29 and 3.29; As-OFe 1.71 and 1.71; As-OH 1.73 and 1.73
AsV on Fh (BB) a: As—Fe 3.27 and 3.38; As-OFe 1.70 and 1.70; As-OH 1.67; As=O 1.64
AsV on Gt (BB) a: As—Fe 3.30 and 3.30; As-OFe 1.70 and 1.70; As-OH 1.70; As=O 1.63
AsV on Lp (BB) a: As—Fe 3.30 and 3.32; As-OFe 1.71 and 1.71; As-OH 1.66; As=O 1.63
AsV on Hm (BB) a: As—Fe 3.24 and 3.35; As-OFe 1.70 and 1.70; As-OH 1.70; As=O 1.62
AsV on Fh (BB) b: As—Fe 3.25 (±0.02)
AsV on Gt (BB) b: As—Fe 3.28 (±0.01)
AsV on Lp (BB) c: As—Fe 3.31 (±0.014); As-OFe 1.69 (±0.004)
AsV on Gt (BB) c: As—Fe 3.30 (±0.008); As-OFe 1.69 (±0.004)
AsV on Fh (BB) d: As—Fe 3.27
Goethite (010) periodic model (MM): As—Fe 3.54 and 5.00 †; As-O 1.78, 1.75, and 1.71 ‡; As=O 1.68
AsV on Gt e: As—Fe 3.25 §; As-O 1.689 and 1.679

AsIII complexes:
Fe2(OH)4(OH2)4HAsIIIO3·4H2O (BB): As—Fe 3.26 and 3.41; As-O 1.77, 1.74, and 1.90; As=O na
Fe2(OH)4(OH2)4HAsIIIO3·8H2O (BB): As—Fe 3.29 and 3.39; As-O 1.78, 1.72, and 1.90
AsIII on Lp (BB) c: As—Fe 3.41 (±0.013); As-O 1.78 (±0.014)
AsIII on Gt (BB) c: As—Fe 3.31 (±0.013); As-O 1.78 (±0.012)
AsIII on Fh (BB) d: As—Fe 3.41–3.44
AsIII on Fh and Hm (BB) f: As—Fe 3.35 (±0.05)
AsIII on Gt and Lp (BB) f: As—Fe 3.3–3.4
AsIII on Gt (BB) g: As—Fe 3.378 (±0.014)

Notes: a Sherman and Randall [91]; b Waychunas et al. [89]; c Farquhar et al. [82]; d Gao et al. [87]; e Loring et al. [128]; f Ona-Nguema et al. [92]; g Manning et al. [81]. † This As—Fe distance does not agree with that reported by Loring et al. [128] for a MM complex. ‡ For the MM periodic model, there is one As-OH bond and two As-O partial double bonds (1.71 and 1.68 Å), because HAsO42− is adsorbed to the surface. § For the Loring et al. [128] model, both As-O bonds are aprotic and should have partial double-bond character.
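As a simple arithmetic companion to the comparisons discussed below, the following sketch computes the deviations between a few calculated and EXAFS As—Fe distances taken directly from Table 4; this is plain deviation arithmetic, not a fitting procedure, and the pairing of values is for illustration only.

```python
# Minimal sketch: deviations of calculated AsV-Fe distances from EXAFS
# values, using numbers quoted in Table 4 (goethite data of refs [82,89]).
pairs = [  # (model label, calculated A, EXAFS A)
    ("cluster Fe2(OH)4(OH2)4HAsO4.8H2O (BB)", 3.30, 3.30),  # Gt, ref [82]
    ("cluster Fe2(OH)4(OH2)4HAsO4.8H2O (BB)", 3.30, 3.28),  # Gt, ref [89]
    ("periodic goethite (010) (BB)",          3.56, 3.30),  # Gt, ref [82]
    ("periodic goethite (010) (BB)",          3.68, 3.30),  # Gt, ref [82]
]

for label, calc, exafs in pairs:
    print(f"{label:42s} calc {calc:.2f} A, EXAFS {exafs:.2f} A, "
          f"deviation {calc - exafs:+.2f} A")
```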
For the iAsV models, the two calculated As—Fe distances within each configuration differ by 0.12, 0.08, and 0.00 Å for the Fe2(OH)4(OH2)4HAsVO4, Fe2(OH)4(OH2)4HAsVO4·4H2O, and Fe2(OH)4(OH2)4HAsVO4·8H2O models, respectively. Sherman and Randall [91] reported two As—Fe distances for ferrihydrite (Fh), lepidocrocite (Lp), hematite (Hm), and goethite (Gt), whereas the other studies report a single As—Fe distance for adsorption onto Fh, Gt, or Lp. The AsV—Fe distance results from the Fe2(OH)4(OH2)4HAsVO4·8H2O model agree within experimental uncertainty with the Gt and Lp data of Sherman and Randall [91], the Gt and Fh data of Waychunas et al. [89], and the Gt and Lp data of Farquhar et al. [82]. The AsV—Fe distance data overlap for the minerals used in these studies; therefore, determining the type of mineral to which the iAsV is bonding could be difficult; however, we can state that the Fe cluster models predict As—Fe distances that are indicative of BB adsorption of AsV. Similarly, the calculated and experimental AsV-OFe bond distances agree precisely. For the BB results from the periodic structures of α-FeOOH (010), the As-OFe bonds (1.72 Å) show precise agreement with experiment and correlate well with the cluster results. However, the As—Fe distances (3.56 and 3.68 Å) both overestimate the experimental data, and the As-OH and As=O bond lengths both overestimate the experimentally observed bond distances. For the MM periodic model, the calculated As—Fe distance overestimates the 3.25 Å distance measured by Loring et al. [128] by 0.29 Å. The errors associated with the periodic calculations could be due to systematic errors in the planewave calculations or could arise because the α-FeOOH (010) surface is not the surface where As adsorption predominantly occurs. The two AsIII—Fe distances in the Fe2(OH)4(OH2)4HAsIIIO3·4H2O model differ by 0.15 Å, and those in the octahydrated version of that model differ by 0.10 Å. The longer calculated As—Fe distances (ca. 3.4 Å) agree with most of the AsIII data within uncertainty, whereas the shorter calculated As—Fe distance agrees well with the data of Gao et al. [87]. Again, because of the data overlap and the uncertainty in the As—Fe distances observed for AsIII adsorption onto Hm, Gt, Lp, and Fh, it is difficult to resolve different sorption mechanisms among the various Fe-oxide and Fe-hydroxide minerals; however, it is possible to state that BB adsorption is occurring. Furthermore, it is possible to differentiate AsIII—Fe and AsV—Fe BB adsorption, because the former distances are approximately 3.4 Å, whereas the latter are approximately 3.3 Å; significantly, these distances are seen both experimentally and computationally. Moreover, there is good agreement between the calculated AsIII-OFe distance and the EXAFS data of Sherman and Randall [91]. For the Fe2(OH)4(OH2)4HAsVO4·4H2O (BB) model, we compared the As—Fe distance results obtained from the B3LYP, PBE0, and M06-L methods. The B3LYP (3.20 Å) and PBE0 (3.19 Å) results agree for one of the As—Fe distances, and the B3LYP (3.28 Å) and M06-L (3.29 Å) results agree for the other As—Fe distance. The As-O and As=O bond lengths agree well among the DFT methods and agree precisely with the experimental data in Table 4. This type of DFT method testing helps eliminate the possible effects of exchange-correlation functional errors on the results.
The Fe2(OH)4(OH2)4HAsVO4·4H2O (BB) model used for these DFT method comparisons is the adsorption product of Reaction (16), and the M06-L ΔGads results for Reaction (16) ranged from −13 to +7 kJ/mol, suggesting thermodynamic adsorption results from M06-L that are potentially unfavorable relative to the results from B3LYP and PBE0 (Table 3). In addition, the 3.08 Å As—Fe distance from the M06-L minimized model (Table 4) is significantly shorter than the experimental data and the results from B3LYP and PBE0. Therefore, because the As—Fe distance calculated with M06-L is imprecise, it is likely that the ΔGads results from M06-L are also imprecise. Notably, the PBE0 As—Fe distance results from the cluster and periodic calculations differed distinctly: the cluster results underestimated the As—Fe distance data by approximately 0.1 Å, whereas the periodic calculation results overestimated those data by approximately 0.25 Å. The PBE0 method, like many DFT methods, may be implemented with different parameters depending on the software package (e.g., VASP, Gaussian 09, etc.), so the results obtained with a particular DFT method in different software packages might not be directly comparable. In addition to the potential differences in the DFT methods, model size could also contribute to the discrepancies in the calculated distances obtained from the periodic and cluster models. One would presume that the larger periodic models would provide results that are more precise relative to the data than the smaller cluster models do; however, neither model size produced precise As—Fe distances. Differences between periodic and cluster model results have been discussed previously [73,135].

3.5. Sorption Kinetics for iAsV on Cluster and Periodic Models

Calculations were completed to show a possible reaction pathway for desorption of the monodentate inner-sphere complex of HAsO42− from an Fe3+-(oxyhydr)oxide cluster and from the periodic goethite (010) surface. Although the bidentate binuclear complex is likely to be more stable [88,91], the monodentate species is an intermediate between the bidentate and outer-sphere species. Figure 4 shows desorption of iAsV from a Fe2(OH)4(H2O)5-HAsO4 cluster model, where the model begins as a MM structure with an As—Fe distance of 3.27 Å (Figure 4A), moves through a transition-state structure (Figure 4B), and reaches the outer-sphere structure (Figure 4C), where the As—Fe distance is 4.36 Å. The Fe—As distances were increased manually and then held constant in each calculation until there ceased to be a bonding interaction. The energies along the monodentate reaction pathway are portrayed as a function of the Fe—As distance in Figure 5 for both the periodic and cluster models.

Figure 4. Desorption of iAsV from Fe clusters showing (A) the initial MM model, (B) the transition state model, and (C) the outer-sphere, final structure model.

Based on the results shown in Figure 5, the ΔEa for the breaking of the first bond in the monodentate complex is approximately +133 and +70 kJ/mol in the periodic and cluster models, respectively. (Note that there is a small increase in the energy of the model system near a Fe—As distance of 4.2 Å, but this energy increase is insignificant compared to the first barrier.) The energy barrier for the reverse reaction is higher, +148 kJ/mol, in the periodic model because the outer-sphere complex is lower in energy in this model.
The higher energy of the inner-sphere complex and the high energy barrier of adsorption suggest that adsorption would not occur on the goethite (010) surface under these conditions; this result corroborates the discussion of the long As—Fe distances reported for the periodic models in the previous section. We strongly remind the reader, however, that the conditions of this model are not realistic compared to experimental conditions, where lower H+ activity, lower arsenate concentrations, and a greater volume of water would exist and affect the results. In contrast, the cluster models exhibit an insignificant energy barrier to adsorption (i.e., +1 kJ/mol) from outer-sphere to monodentate, with the inner-sphere complex having a lower energy (Figure 5).

Figure 5. Constrained scan of Fe—As distances starting from the monodentate configuration to the outer-sphere configuration using periodic and molecular cluster DFT calculations, resulting in ΔEa of adsorption of +148 and +1 kJ/mol, respectively, and of desorption of +133 and +70 kJ/mol, respectively.

The discrepancies between the two types of models are illustrative of some of the problems that can be encountered with each type of approach. First, although the periodic models were run for short (i.e., 6000 steps × 0.5 fs = 3 ps) molecular dynamics simulations at 300 K to relax the atoms, the complex nature of the periodic model all but ensures that a global minimum configuration will not be obtained. This is an example of a general problem: adding more atoms to the simulation may make it more realistic, but it increases the number of potential energy minima dramatically. Thus, the transition state may overestimate the ΔEa because the system is not in the lowest possible potential energy configuration, especially with respect to the configuration of the H2O molecules. There are numerous possibilities for overcoming this metastable minimum problem. Longer simulation runtimes are one option; these longer simulations could be performed with tight-binding DFT (i.e., DFT-B; see REF for a review of DFT-B) or classical force fields. However, one problem with classical simulations is that it is difficult to create accurate parameterizations that allow for bond-making and bond-breaking, especially for configurations far from equilibrium such as transition states (see [71] for a review). Replica-exchange MD simulations are another option for exploring configuration space; these require multiple simulations at different temperatures to be run simultaneously such that higher- and lower-temperature configurations can be swapped when their potential energies overlap. Again, however, this method requires a significant expansion of computational effort to run the multiple MD simulations. Alternatively, the cluster model allows the "surface" atoms to relax without constraint from the remainder of the crystal, which may help explain the lower ΔEa. The "outer-sphere" configuration is higher in energy in this case because the HAsO42− is not completely solvated by the six extra H2O molecules in the model. In addition to the loss of solvation energy, protonation of the HAsO42− does not occur in the cluster, whereas in the periodic model H2AsO4− is the product. Even with these discrepancies in the energies and products between the periodic and cluster models, the structures of the reactants and transition states are similar.
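To put the calculated barriers on an absolute rate scale, the sketch below applies the Eyring equation of transition-state theory to the desorption barriers read from Figure 5; approximating the activation free energy by the calculated ΔEa (i.e., neglecting the entropy of activation) is an assumption made purely for illustration.

```python
# Minimal sketch: order-of-magnitude rate constants from the desorption
# barriers in Figure 5 via the Eyring equation, k = (kB*T/h)*exp(-dEa/RT),
# with the activation free energy approximated by dEa (an assumption).
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J s
R  = 8.314462        # gas constant, J/(mol K)

def eyring_k(dEa_kJ_mol, T=300.0):
    """Transition-state-theory rate constant (s^-1) for a unimolecular step."""
    return (KB * T / H) * math.exp(-dEa_kJ_mol * 1000.0 / (R * T))

if __name__ == "__main__":
    for label, barrier in [("cluster desorption", 70.0),
                           ("periodic desorption", 133.0)]:
        print(f"{label}: dEa = {barrier} kJ/mol -> "
              f"k ~ {eyring_k(barrier):.2e} s^-1")
```

Even at this crude level, the ~60 kJ/mol difference between the two models translates into many orders of magnitude in the predicted rate, which underscores why the model discrepancies discussed above matter.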
A combined approach using insights from both periodic and cluster models is useful at this time because each has strengths and weaknesses. These first simulations of the reaction path can be refined by performing longer MD simulations at various steps and by searching for lower-energy points along the reaction path. In addition, lower-energy transition states determined via the cluster approach can be used to guide the construction of transition states in the more realistic (but more complex) periodic simulations. The main problem in this case, however, is likely to be that the (010) surface is not the dominant face responsible for adsorbing arsenate onto goethite; thus, extensive calculations of any type could prove futile in terms of reproducing the observed ΔEa of adsorption/desorption, and other surfaces should be examined. We had selected the (010) surface for arsenate adsorption onto goethite based on the analogy with chromate adsorption [177,178]. Recent DFT calculations have been used to suggest that the (210) surface adsorbs phosphate more strongly than the (010) surface does [135]. None of the just-mentioned studies included arsenate, however, so we are using the other oxyanions as analogs for arsenate. Our future computational work will focus on arsenic sorption reaction mechanisms on the (210) surface, and this work would benefit from experiments similar to those that used chromate [177]. This point emphasizes that realistic model construction is one of the most important considerations in performing computational geochemistry. Too often, missing components or some inaccuracy in the original model creation leads to discrepancies with observation that cannot be resolved by even the most accurate quantum mechanical calculations.

4. Conclusions

This work explored the effects of cluster charge, hydration, As oxidation state, and DFT method on the Gibbs free energy of adsorption (ΔGads) of inorganic arsenic (iAs) species onto Fe3+-(oxyhydr)oxide models. In general, neutral, hydrated cluster models produced ΔGads results that are likely more realistic than those from charged or anhydrous models. As shown by experiments [101], iAsIII adsorption onto neutral Fe3+-(oxyhydr)oxide cluster models was more exergonic than iAsV adsorption onto the same cluster models. For the DFT calculations on the clusters, the results showed that both ΔGads and the As—Fe distances depend on the DFT method used to calculate those properties; however, the cluster model As—Fe distance and As-O bond distance results generally agreed precisely with the experimental data cited. Conversely, the periodic planewave calculation results for iAsV adsorption onto α-FeOOH (010) generally overestimated the As—Fe distance and As-O bond length data for iAsV adsorption onto goethite; other α-FeOOH surfaces could produce results that are more precise. Sorption kinetics calculations using DFT with cluster models and a periodic model of α-FeOOH (010) showed discrepancies in the calculated activation energies of iAsV adsorption. One major source of the discrepancy could be that the relatively large periodic model did not reach an energy minimum during the DFT MD simulation, whereas the smaller cluster model did attain a PES minimum.
Although the calculated activation energies for the two methods differed, the initial and transition-state structures for both calculations were similar. Longer DFT MD simulations and periodic structures other than the (010) surface of α-FeOOH could produce more accurate results. The calculated reaction rates, thermodynamics, and structural results presented in this work could lead to a better understanding of the adsorption of arsenic to Fe (oxyhydr)oxide minerals. However, further studies are necessary to better determine which DFT methods produce the most accurate results, the effect of model size on model accuracy, and the effects of model hydration and surface charge on As adsorption to Fe (oxyhydr)oxide models. Furthermore, basis set size, which was not addressed herein, could potentially affect the precision of the results for the cluster models; therefore, future studies should include the evaluation of basis set effects. Finally, increased collaboration between experimental and computational (geo)chemists could lead to improved knowledge about arsenic adsorption on Fe minerals.

Acknowledgments

The authors would like to thank Hind A. Al-Abadleh, Associate Director at the Laurier Centre for Women in Science (WinS), for her help with editing and improving this paper. We also thank Benjamin Tutolo, who was supported by a Research Experience for Undergraduates grant from the National Science Foundation under Grant No. CHE-0431328 (the Center for Environmental Kinetics Analysis, an NSF-DOE environmental molecular sciences institute), for running preliminary calculations on arsenate desorption. Computational support was provided by the Research Computing and Cyberinfrastructure group at The Pennsylvania State University. Lorena Tribe acknowledges support of the Research Collaboration Fellowship funded by The Pennsylvania State University.

Author Contributions

Heath D. Watts did the work reported in Section 3.1, Section 3.2, Section 3.3 and Section 3.4. James D. Kubicki and Lorena Tribe did the work reported in Section 3.5. All authors contributed to the preparation and writing of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References and Notes

1. Kim, K.-W.; Chanpiwat, P.; Hanh, H.T.; Phan, K.; Sthiannopkao, S. Arsenic geochemistry of groundwater in Southeast Asia. Front. Med. 2011, 5, 420–433. [Google Scholar] 2. Smedley, P.L.; Kinniburgh, D.G. A review of the source, behaviour and distribution of arsenic in natural waters. Appl. Geochem. 2002, 17, 517–568. [Google Scholar] [CrossRef] 3. Anawar, H.M.; Akai, J.; Mihaljevič, M.; Sikder, A.M.; Ahmed, G.; Tareq, S.M.; Rahman, M.M. Arsenic contamination in groundwater of Bangladesh: Perspectives on geochemical, microbial and anthropogenic issues. Water 2011, 3, 1050–1076. [Google Scholar] [CrossRef] 4. Drewniak, L.; Maryan, N.; Lewandowski, W.; Kaczanowski, S.; Sklodowska, A. The contribution of microbial mats to the arsenic geochemistry of an ancient gold mine. Environ. Pollut. 2012, 162, 190–201. [Google Scholar] [CrossRef] 5. Signes-Pastor, A.; Burló, F.; Mitra, K.; Carbonell-Barrachina, A.A. Arsenic biogeochemistry as affected by phosphorus fertilizer addition, redox potential and pH in a West Bengal (India) soil. Geoderma 2007, 137, 504–510. [Google Scholar] [CrossRef] 6. Oremland, R.S.; Stolz, J.F. The ecology of arsenic. Science 2003, 300, 939–944. [Google Scholar] [CrossRef] 7. Dhuldhaj, U.P.; Yadav, I.C.; Singh, S.; Sharma, N.K.
Microbial interactions in the arsenic cycle: Adoptive strategies and applications in environmental management. Rev. Environ. Contam. Toxicol. 2013, 224, 1–38. [Google Scholar] 8. Moreno-Jiménez, E.; Esteban, E.; Peñalosa, J.M. The fate of arsenic in soil-plant systems. Rev. Environ. Contam. Toxicol. 2012, 215, 1–37. [Google Scholar] 9. Wolfe-Simon, F.; Blum, J.S.; Kulp, T.R.; Gordon, G.W.; Hoeft, S.E.; Pett-Ridge, J.; Stolz, J.F.; Webb, S.M.; Weber, P.K.; Davies, P.C.W.; et al. A bacterium that can grow by using arsenic instead of phosphorus. Science 2011, 332, 1163–1166. [Google Scholar] [CrossRef] 10. Wolfe-Simon, F.; Blum, J.S.; Kulp, T.R.; Gordon, G.W.; Hoeft, S.E.; Pett-Ridge, J.; Stolz, J.F.; Webb, S.M.; Weber, P.K.; Davies, P.C.W.; et al. Response to comments on “A bacterium that can grow using arsenic instead of phosphorus”. Science 2011, 332, 1149. [Google Scholar] 11. Masscheleyn, P.H.; Delaune, R.D.; Patrick, W.H. Arsenic and selenium chemistry as affected by sediment redox potential and pH. J. Environ. Qual. 1991, 20, 522–527. [Google Scholar] [CrossRef] 12. Cullen, W.R.; Reimer, K.J. Arsenic speciation in the environment. Chem. Rev. 1989, 89, 713–764. [Google Scholar] [CrossRef] 13. Greenwood, N.N.; Earnshaw, A. Arsenic, Antimony and Bismuth. In Chemistry of the Elements; Pergamon Press: Oxford, UK, 1997; pp. 547–599. [Google Scholar] 14. Masscheleyn, P.H.; Delaune, R.D.; Patrick, W.H. Effect of redox potential and pH on arsenic speciation and solubility in a contaminated soil. Environ. Sci. Technol. 1991, 25, 1414–1419. [Google Scholar] [CrossRef] 15. Flis, I.E.; Mishchenko, K.P.; Tumanova, T.A. Dissociation of arsenic acid. Russ. J. Inorg. Chem. 1959, 4, 120–124. [Google Scholar] 16. Francesconi, K.A. Arsenic species in seafood: Origin and human health implications. Pure Appl. Chem. 2010, 82, 373–381. [Google Scholar] [CrossRef] 17. Beauchemin, S.; Fiset, J.-F.; Poirier, G.; Ablett, J. Arsenic in an alkaline AMD treatment sludge: Characterization and stability under prolonged anoxic conditions. Appl. Geochem. 2010, 25, 1487–1499. [Google Scholar] [CrossRef] 18. Cheng, H.; Hu, Y.; Luo, J.; Xu, B.; Zhao, J. Geochemical processes controlling fate and transport of arsenic in acid mine drainage (AMD) and natural systems. J. Hazard. Mater. 2009, 165, 13–26. [Google Scholar] [CrossRef] 19. Cramer, S.P.; Siskin, M.; Brown, L.D.; George, G.N. Characterization of arsenic in oil shale and oil shale derivatives by X-ray absorption spectroscopy. Energy Fuels 1988, 2, 175–180. [Google Scholar] [CrossRef] 20. Pelley, J. Common arsenical pesticide under scrutiny. Environ. Sci. Technol. 2005, 39, 122–123. [Google Scholar] [CrossRef] 21. Arai, Y.; Lanzirotti, A.; Sutton, S.; Davis, J.A.; Sparks, D.L. Arsenic speciation and reactivity in poultry litter. Environ. Sci. Technol. 2003, 37, 4083–4090. [Google Scholar] 22. Argos, M.; Kalra, T.; Rathouz, P.J.; Chen, Y.; Pierce, B.; Parvez, F.; Islam, T.; Ahmed, A.; Rakibuz-Zaman, M.; Hasan, R.; et al. Arsenic exposure from drinking water, and all-cause and chronic-disease mortalities in Bangladesh (HEALS): A prospective cohort study. Lancet 2010, 376, 252–258. [Google Scholar] 23. Chen, Y.; Graziano, J.H.; Parvez, F.; Liu, M.; Slavkovich, V.; Kalra, T.; Argos, M.; Islam, T.; Ahmed, A.; Rakibuz-Zaman, M.; et al. Arsenic exposure from drinking water and mortality from cardiovascular disease in Bangladesh: Prospective cohort study. Br. Med. J. 2011, 342, d2431. [Google Scholar] [CrossRef] 24.
Das, N.; Paul, S.; Chatterjee, D.; Banerjee, N.; Majumder, N.S.; Sarma, N.; Sau, T.J.; Basu, S.; Banerjee, S.; Majumder, P.; et al. Arsenic exposure through drinking water increases the risk of liver and cardiovascular diseases in the population of West Bengal, India. BMC Public Health 2012, 12, 639. [Google Scholar] [CrossRef] 25. Ferreccio, C.; Smith, A.H.; Durán, V.; Barlaro, T.; Benítez, H.; Valdés, R.; Aguirre, J.J.; Moore, L.E.; Acevedo, J.; Vásquez, M.I.; et al. Case-control study of arsenic in drinking water and kidney cancer in uniquely exposed Northern Chile. Am. J. Epidemiol. 2013, 178, 813–818. [Google Scholar] [CrossRef] 26. Meliker, J.R.; Slotnick, M.J.; AvRuskin, G.A.; Schottenfeld, D.; Jacquez, G.M.; Wilson, M.L.; Goovaerts, P.; Franzblau, A.; Nriagu, J.O. Lifetime exposure to arsenic in drinking water and bladder cancer: A population-based case-control study in Michigan, USA. Cancer Causes Control 2010, 21, 745–757. [Google Scholar] [CrossRef] 27. Paul, S.; Bhattacharjee, P.; Mishra, P.K.; Chatterjee, D.; Biswas, A.; Deb, D.; Ghosh, A.; Mazumder, D.N.G.; Giri, A.K. Human urothelial micronucleus assay to assess genotoxic recovery by reduction of arsenic in drinking water: A cohort study in West Bengal, India. Biometals 2013, 26, 855–862. [Google Scholar] [CrossRef] 28. Smith, A.H.; Marshall, G.; Yuan, Y.; Liaw, J.; Ferreccio, C.; Steinmaus, C. Evidence from Chile that arsenic in drinking water may increase mortality from pulmonary tuberculosis. Am. J. Epidemiol. 2011, 173, 414–420. [Google Scholar] [CrossRef] 29. Concha, G.; Broberg, K.; Grandér, M.; Cardozo, A.; Palm, B.; Vahter, M. High-level exposure to lithium, boron, cesium, and arsenic via drinking water in the Andes of northern Argentina. Environ. Sci. Technol. 2010, 44, 6875–6880. [Google Scholar] [CrossRef] 30. Kinniburgh, D.G.; Smedley, P.L.; Davies, J.; Milne, C.J.; Gaus, I.; Trafford, J.M.; Ahmed, K.M. The Scale and Causes of the Groundwater Arsenic Problem in Bangladesh. In Arsenic in Ground Water; Springer: Berlin, Germany, 2003; pp. 211–257. [Google Scholar] 31. Mondal, D.; Banerjee, M.; Kundu, M.; Banerjee, N.; Bhattacharya, U.; Giri, A.K.; Ganguli, B.; Sen Roy, S.; Polya, D.A. Comparison of drinking water, raw rice and cooking of rice as arsenic exposure routes in three contrasting areas of West Bengal, India. Environ. Geochem. Health 2010, 32, 463–477. [Google Scholar] [CrossRef] 32. Sun, G.; Li, X.; Pi, J.; Sun, Y.; Li, B.; Jin, Y.; Xu, Y. Current research problems of chronic arsenicosis in China. J. Health Popul. Nutr. 2011, 24, 176–181. [Google Scholar] 33. Nickson, R.; McArthur, J.; Burgess, W.; Ahmed, K.M.; Ravenscroft, P.; Rahman, M. Arsenic poisoning of Bangladesh groundwater. Nature 1998, 395, 338. [Google Scholar] [CrossRef] 34. Nicolli, H.B.; Bundschuh, J.; Blanco, M.C.; Tujchneider, O.C.; Panarello, H.O.; Dapeña, C.; Rusansky, J.E. Arsenic and associated trace-elements in groundwater from the Chaco-Pampean plain, Argentina: Results from 100 years of research. Sci. Total Environ. 2012, 429, 36–56. [Google Scholar] [CrossRef] 35. Sharma, V.K.; Sohn, M. Aquatic arsenic: Toxicity, speciation, transformations, and remediation. Environ. Int. 2009, 35, 743–759. [Google Scholar] [CrossRef] 36. Malik, A.H.; Khan, Z.M.; Mahmood, Q.; Nasreen, S.; Bhatti, Z.A. Perspectives of low cost arsenic remediation of drinking water in Pakistan and other countries. J. Hazard. Mater. 2009, 168, 1–12. [Google Scholar] [CrossRef] 37. Welch, A.H.; Stollenwerk, K.G.; Maurer, D.K.; Feinson, L.S. 
In Situ Arsenic Remediation in a Fractured, Alkaline Aquifer. In Arsenic in Ground Water; Welch, A.H., Stollenwerk, K.G., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003; pp. 403–419. [Google Scholar] 38. Beaulieu, B.; Ramirez, R.E. Arsenic remediation field study using a sulfate reduction and zero-valent iron PRB. Groundw. Monit. Remediat. 2013, 33, 85–94. [Google Scholar] 39. Berg, M.; Luzi, S.; Trang, P.T.K.; Viet, P.H.; Giger, W.; Stüben, D. Arsenic removal from groundwater by household sand filters: Comparative field study, model calculations, and health benefits. Environ. Sci. Technol. 2006, 40, 5567–5573. [Google Scholar] [CrossRef] 40. Neumann, A.; Kaegi, R.; Voegelin, A.; Hussam, A.; Munir, A.K.M.; Hug, S.J. Arsenic removal with composite iron matrix filters in Bangladesh: A field and laboratory study. Environ. Sci. Technol. 2013, 47, 4544–4554. [Google Scholar] [CrossRef] 41. Jeon, C.-S.; Park, S.-W.; Baek, K.; Yang, J.-S.; Park, J.-G. Application of iron-coated zeolites (ICZ) for mine drainage treatment. Korean J. Chem. Eng. 2012, 29, 1171–1177. [Google Scholar] [CrossRef] 42. Wu, K.; Liu, R.; Liu, H.; Chang, F.; Lan, H.; Qu, J. Arsenic species transformation and transportation in arsenic removal by Fe-Mn binary oxide–coated diatomite: Pilot-scale field Study. J. Environ. Eng. 2011, 137, 1122–1127. [Google Scholar] [CrossRef] 43. Mudhoo, A.; Sharma, S.K.; Garg, V.K.; Tseng, C.-H. Arsenic: An overview of applications, health, and environmental concerns and removal processes. Crit. Rev. Environ. Sci. Technol. 2011, 41, 435–519. [Google Scholar] [CrossRef] 44. Ng, K.-S.; Ujang, Z.; Le-Clech, P. Arsenic removal technologies for drinking water treatment. Rev. Environ. Sci. Biotechnol. 2004, 3, 43–53. [Google Scholar] [CrossRef] 45. Ali, I.; Khan, T.A.; Asim, M. Removal of arsenic from water by electrocoagulation and electrodialysis techniques. Sep. Purif. Rev. 2011, 40, 25–42. [Google Scholar] [CrossRef] 46. Van Genuchten, C.M.; Addy, S.E.A.; Peña, J.; Gadgil, A.J. Removing arsenic from synthetic groundwater with iron electrocoagulation: An Fe and As K-edge EXAFS study. Environ. Sci. Technol. 2012, 46, 986–994. [Google Scholar] [CrossRef] 47. Ali, I. Water treatment by adsorption columns: Evaluation at ground level. Sep. Purif. Rev. 2014, 43, 175–205. [Google Scholar] [CrossRef] 48. Mohan, D.; Pittman, C.U. Arsenic removal from water/wastewater using adsorbents—A critical review. J. Hazard. Mater. 2007, 142, 1–53. [Google Scholar] [CrossRef] 49. Ali, I.; Gupta, V.K. Advances in water treatment by adsorption technology. Nat. Protoc. 2006, 1, 2661–2667. [Google Scholar] [CrossRef] 50. Wei, Y.-T.; Zheng, Y.-M.; Chen, J.P. Uptake of methylated arsenic by a polymeric adsorbent: Process performance and adsorption chemistry. Water Res. 2011, 45, 2290–2296. [Google Scholar] [CrossRef] 51. Salameh, Y.; Al-Lagtah, N.; Ahmad, M.N.M.; Allen, S.J.; Walker, G.M. Kinetic and thermodynamic investigations on arsenic adsorption onto dolomitic sorbents. Chem. Eng. J. 2010, 160, 440–446. [Google Scholar] [CrossRef] 52. Chutia, P.; Kato, S.; Kojima, T.; Satokawa, S. Arsenic adsorption from aqueous solution on synthetic zeolites. J. Hazard. Mater. 2009, 162, 440–447. [Google Scholar] [CrossRef] 53. Adra, A.; Morin, G.; Ona-Nguema, G.; Menguy, N.; Maillot, F.; Casiot, C.; Bruneel, O.; Lebrun, S.; Juillot, F.; Brest, J. Arsenic scavenging by aluminum-substituted ferrihydrites in a circumneutral pH river impacted by acid mine drainage. Environ. Sci. Technol. 2013, 47, 12784–12792. 
[Google Scholar] [CrossRef] 54. Giles, D.E.; Mohapatra, M.; Issa, T.B.; Anand, S.; Singh, P. Iron and aluminium based adsorption strategies for removing arsenic from water. J. Environ. Manag. 2011, 92, 3011–3022. [Google Scholar] [CrossRef] 55. Manning, B.A.; Goldberg, S. Adsorption and stability of arsenic(III) at the clay mineral–water interface. Environ. Sci. Technol. 1997, 31, 2005–2011. [Google Scholar] [CrossRef] 56. Singh, T.S.; Pant, K.K. Equilibrium, kinetics and thermodynamic studies for adsorption of As(III) on activated alumina. Sep. Purif. Technol. 2004, 36, 139–147. [Google Scholar] [CrossRef] 57. Gallegos-Garcia, M.; Ramírez-Muñiz, K.; Song, S. Arsenic removal from water by adsorption using iron oxide minerals as adsorbents: A review. Miner. Process. Extr. Metall. Rev. 2012, 33, 301–315. [Google Scholar] [CrossRef] 58. Miretzky, P.; Cirelli, A.F. Remediation of arsenic-contaminated soils by iron amendments: A review. Crit. Rev. Environ. Sci. Technol. 2010, 40, 93–115. [Google Scholar] [CrossRef] 60. Zhang, S.; Niu, H.; Cai, Y.; Zhao, X.; Shi, Y. Arsenite and arsenate adsorption on coprecipitated bimetal oxide magnetic nanomaterials: MnFe2O4 and CoFe2O4. Chem. Eng. J. 2010, 158, 599–607. [Google Scholar] [CrossRef] 61. Tian, Y.; Wu, M.; Lin, X.; Huang, P.; Huang, Y. Synthesis of magnetic wheat straw for arsenic adsorption. J. Hazard. Mater. 2011, 193, 10–16. [Google Scholar] [CrossRef] 62. Kanel, S.R.; Manning, B.; Charlet, L.; Choi, H. Removal of arsenic(III) from groundwater by nanoscale zero-valent iron. Environ. Sci. Technol. 2005, 39, 1291–1298. [Google Scholar] [CrossRef] 63. Yavuz, C.T.; Mayo, J.T.; Suchecki, C.; Wang, J.; Ellsworth, A.Z.; D’Couto, H.; Quevedo, E.; Prakash, A.; Gonzalez, L.; Nguyen, C.; et al. Pollution magnet: Nano-magnetite for arsenic removal from drinking water. Environ. Geochem. Health 2010, 32, 327–334. [Google Scholar] [CrossRef] 64. Zhu, J.; Pigna, M.; Cozzolino, V.; Caporale, A.G.; Violante, A. Sorption of arsenite and arsenate on ferrihydrite: Effect of organic and inorganic ligands. J. Hazard. Mater. 2011, 189, 564–571. [Google Scholar] [CrossRef] 65. Villalobos, M.; Antelo, J. A unified surface structural model for ferrihydrite: Proton charge, electrolyte binding, and arsenate adsorption. Rev. Int. Contam. Ambie 2011, 27, 139–151. [Google Scholar] 66. Huang, J.-H.; Voegelin, A.; Pombo, S.A.; Lazzaro, A.; Zeyer, J.; Kretzschmar, R. Influence of arsenate adsorption to ferrihydrite, goethite, and boehmite on the kinetics of arsenate reduction by Shewanella putrefaciens strain CN-32. Environ. Sci. Technol. 2011, 45, 7701–7709. [Google Scholar] [CrossRef] 67. Mamindy-Pajany, Y.; Hurel, C.; Marmier, N.; Roméo, M. Arsenic adsorption onto hematite and goethite. Comptes Rendus Chim. 2009, 12, 876–881. [Google Scholar] [CrossRef] 68. Bowell, R.J. Sorption of arsenic by iron oxides and oxyhydroxides in soils. Appl. Geochem. 1994, 9, 279–286. [Google Scholar] [CrossRef] 69. Ko, I.; Davis, A.P.; Kim, J.-Y.; Kim, K.-W. Effect of contact order on the adsorption of inorganic arsenic species onto hematite in the presence of humic acid. J. Hazard. Mater. 2007, 141, 53–60. [Google Scholar] [CrossRef] 70. Simeoni, M.A.; Batts, B.D.; McRae, C. Effect of groundwater fulvic acid on the adsorption of arsenate by ferrihydrite and gibbsite. Appl. Geochem. 2003, 18, 1507–1515. [Google Scholar] [CrossRef] 71. Aryanpour, M.; van Duin, A.C.T.; Kubicki, J.D. Development of a reactive force field for iron-oxyhydroxide systems. J. Phys. Chem. A 2010, 114, 6298–6307. 
[Google Scholar] [CrossRef] 72. Fitts, J.P.; Machesky, M.L.; Wesolowski, D.J.; Shang, X.; Kubicki, J.D.; Flynn, G.W.; Heinz, T.F.; Eisenthal, K.B. Second-harmonic generation and theoretical studies of protonation at the water/α-TiO2 (110) interface. Chem. Phys. Lett. 2005, 411, 399–403. [Google Scholar] [CrossRef] 73. Kubicki, J.D.; Paul, K.W.; Sparks, D.L. Periodic density functional theory calculations of bulk and the (010) surface of goethite. Geochem. Trans. 2008, 9. [Google Scholar] [CrossRef] 74. Arts, D.; Sabur, M.A.; Al-Abadleh, H.A. Surface interactions of aromatic organoarsenical compounds with hematite nanoparticles using ATR-FTIR: Kinetic studies. J. Phys. Chem. A 2013, 117, 2195–2204. [Google Scholar] 75. Bargar, J.R.; Kubicki, J.D.; Reitmeyer, R.; Davis, J.A. ATR-FTIR spectroscopic characterization of coexisting carbonate surface complexes on hematite. Geochim. Cosmochim. Acta 2005, 69, 1527–1542. [Google Scholar] [CrossRef] 76. Goldberg, S.; Johnston, C.T. Mechanisms of arsenic adsorption on amorphous oxides evaluated using macroscopic measurements, vibrational spectroscopy, and surface complexation modeling. J. Colloid Interface Sci. 2001, 234, 204–216. [Google Scholar] [CrossRef] 77. Sun, X.; Doner, H. An investigation of arsenate and arsenite bonding structures on goethite by FTIR. Soil Sci. 1996, 161, 865–872. [Google Scholar] [CrossRef] 78. Zhao, K.; Guo, H. Behavior and mechanism of arsenate adsorption on activated natural siderite: Evidences from FTIR and XANES analysis. Environ. Sci. Pollut. Res. 2014, 21, 1944–1953. [Google Scholar] [CrossRef] 79. Müller, K.; Ciminelli, V.S.T.; Dantas, M.S.S.; Willscher, S. A comparative study of As(III) and As(V) in aqueous solutions and adsorbed on iron oxy-hydroxides by Raman spectroscopy. Water Res. 2010, 44, 5660–5672. [Google Scholar] [CrossRef] 80. Illera, V.; Rivera, N.A.; O’Day, P.A. Spectroscopic Characterization of Co-Precipitated Arsenic- and Iron-Bearing Sulfide Phases at Circum-Neutral pH. In Proceedings of the 2009 American Geophysical Union Fall Meeting, San Francisco, CA, USA, 14–18 December 2009. 81. Manning, B.A.; Fendorf, S.E.; Goldberg, S. Surface structures and stability of arsenic(III) on goethite: Spectroscopic evidence for inner-sphere complexes. Environ. Sci. Technol. 1998, 32, 2383–2388. [Google Scholar] [CrossRef] 82. Farquhar, M.L.; Charnock, J.M.; Livens, F.R.; Vaughan, D.J. Mechanisms of arsenic uptake from aqueous solution by interaction with goethite, lepidocrocite, mackinawite, and pyrite: An X-ray absorption spectroscopy study. Environ. Sci. Technol. 2002, 36, 1757–1762. [Google Scholar] [CrossRef] 83. Ona-Nguema, G.; Morin, G.; Wang, Y.; Foster, A.L.; Juillot, F.; Calas, G.; Brown, G.E. XANES evidence for rapid arsenic(III) oxidation at magnetite and ferrihydrite surfaces by dissolved O2 via Fe2+-mediated reactions. Environ. Sci. Technol. 2010, 44, 5416–5422. [Google Scholar] [CrossRef] 84. Tu, Y.-J.; You, C.-F.; Chang, C.-K.; Wang, S.-L. XANES evidence of arsenate removal from water with magnetic ferrite. J. Environ. Manag. 2013, 120, 114–119. [Google Scholar] [CrossRef] 85. Xu, L.; Zhao, Z.; Wang, S.; Pan, R.; Jia, Y. Transformation of arsenic in offshore sediment under the impact of anaerobic microbial activities. Water Res. 2011, 45, 6781–6788. [Google Scholar] [CrossRef] 86. Couture, R.-M.; Rose, J.; Kumar, N.; Mitchell, K.; Wallschläger, D.; van Cappellen, P. Sorption of arsenite, arsenate, and thioarsenates to iron oxides and iron sulfides: A kinetic and spectroscopic investigation. 
Environ. Sci. Technol. 2013, 47, 5652–5659. [Google Scholar] [CrossRef] 87. Gao, X.; Root, R.A.; Farrell, J.; Ela, W.; Chorover, J. Effect of silicic acid on arsenate and arsenite retention mechanisms on 6-L ferrihydrite: A spectroscopic and batch adsorption approach. Appl. Geochem. 2013, 38, 110–120. [Google Scholar] [CrossRef] 88. Waychunas, G.A.; Davis, J.A.; Fuller, C.C. Geometry of sorbed arsenate on ferrihydrite and crystalline FeOOH: Re-evaluation of EXAFS results and topological factors in predicting sorbate geometry, and evidence for monodentate complexes. Geochim. Cosmochim. Acta 1995, 59, 3655–3661. [Google Scholar] [CrossRef] 89. Waychunas, G.A.; Rea, B.A.; Fuller, C.C.; Davis, J.A. Surface chemistry of ferrihydrite: Part 1. EXAFS studies of the geometry of coprecipitated and adsorbed arsenate. Geochim. Cosmochim. Acta 1993, 57, 2251–2269. [Google Scholar] [CrossRef] 90. Ladeira, A.C.Q.; Ciminelli, V.S.T.; Duarte, H.A.; Alves, M.C.M.; Ramos, A.Y. Mechanism of anion retention from EXAFS and density functional calculations: Arsenic(V) adsorbed on gibbsite. Geochim. Cosmochim. Acta 2001, 65, 1211–1217. [Google Scholar] [CrossRef] 91. Sherman, D.M.; Randall, S.R. Surface complexation of arsenic(V) to iron(III) (hydr)oxides: Structural mechanism from ab initio molecular geometries and EXAFS spectroscopy. Geochim. Cosmochim. Acta 2003, 67, 4223–4230. [Google Scholar] [CrossRef] 92. Ona-Nguema, G.; Morin, G.; Juillot, F.; Calas, G.; Brown, G.E. EXAFS analysis of arsenite adsorption onto two-line ferrihydrite, hematite, goethite, and lepidocrocite. Environ. Sci. Technol. 2005, 39, 9147–9155. [Google Scholar] [CrossRef] 93. Fuller, C.C.; Davis, J.A.; Waychunas, G.A. Surface chemistry of ferrihydrite: Part 2. Kinetics of arsenate adsorption and coprecipitation. Geochim. Cosmochim. Acta 1993, 57, 2271–2282. [Google Scholar] [CrossRef] 94. Goldberg, S. Competitive adsorption of arsenate and arsenite on oxides and clay minerals. Soil Sci. Soc. Am. J. 2002, 66, 413–421. [Google Scholar] [CrossRef] 95. Jain, A.; Raven, K.P.; Loeppert, R.H. Arsenite and arsenate adsorption on ferrihydrite: Surface charge reduction and net OH-release stoichiometry. Environ. Sci. Technol. 1999, 33, 1179–1184. [Google Scholar] [CrossRef] 96. Maji, S.K.; Kao, Y.-H.; Liao, P.-Y.; Lin, Y.-J.; Liu, C.-W. Implementation of the adsorbent iron-oxide-coated natural rock (IOCNR) on synthetic As(III) and on real arsenic-bearing sample with filter. Appl. Surf. Sci. 2013, 284, 40–48. [Google Scholar] [CrossRef] 97. Raven, K.P.; Jain, A.; Loeppert, R.H. Arsenite and arsenate adsorption on ferrihydrite: Kinetics, equilibrium, and adsorption envelopes. Environ. Sci. Technol. 1998, 32, 344–349. [Google Scholar] [CrossRef] 98. Antelo, J.; Avena, M.; Fiol, S.; López, R.; Arce, F. Effects of pH and ionic strength on the adsorption of phosphate and arsenate at the goethite–water interface. J. Colloid Interface Sci. 2005, 285, 476–486. [Google Scholar] [CrossRef] 99. Hiemstra, T.; van Riemsdijk, W.H. A surface structural approach to ion adsorption: The charge distribution (CD) model. J. Colloid Interface Sci. 1996, 179, 488–508. [Google Scholar] [CrossRef] 100. Weng, L.; van Riemsdijk, W.H.; Hiemstra, T. Effects of fulvic and humic acids on arsenate adsorption to goethite: Experiments and modeling. Environ. Sci. Technol. 2009, 43, 7198–7204. [Google Scholar] [CrossRef] 101. Dixit, S.; Hering, J.G. Comparison of arsenic(V) and arsenic(III) sorption onto iron oxide minerals: Implications for arsenic mobility. Environ. Sci. Technol. 
2003, 37, 4182–4189. [Google Scholar] [CrossRef] 102. Ngantcha, T.A.; Vaughan, R.; Reed, B.E. Modeling As(III) and As(V) removal by an iron oxide impregnated activated carbon in a binary adsorbate system. Sep. Sci. Technol. 2011, 46, 1419–1429. [Google Scholar] [CrossRef] 103. Que, S.; Papelis, C.; Hanson, A.T. Predicting arsenate adsorption on iron-coated sand based on a surface complexation model. J. Environ. Eng. 2013, 139, 368–374. [Google Scholar] [CrossRef] 104. Jeppu, G.P.; Clement, T.P.; Barnett, M.O.; Lee, K.-K. A scalable surface complexation modeling framework for predicting arsenate adsorption on goethite-coated sands. Environ. Eng. Sci. 2010, 27, 147–158. [Google Scholar] [CrossRef] 105. Jessen, S.; Postma, D.; Larsen, F.; Nhan, P.Q.; Hoa, L.Q.; Trang, P.T.K.; Long, T.V.; Viet, P.H.; Jakobsen, R. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam. Geochim. Cosmochim. Acta 2012, 98, 186–201. [Google Scholar] [CrossRef] 106. Sharif, S.U.; Davis, R.K.; Steele, K.F.; Kim, B.; Hays, P.D.; Kresse, T.M.; Fazio, J.A. Surface complexation modeling for predicting solid phase arsenic concentrations in the sediments of the Mississippi River Valley alluvial aquifer, Arkansas, USA. Appl. Geochem. 2011, 26, 496–504. [Google Scholar] [CrossRef] 107. Pakzadeh, B.; Batista, J.R. Surface complexation modeling of the removal of arsenic from ion-exchange waste brines with ferric chloride. J. Hazard. Mater. 2011, 188, 399–407. [Google Scholar] [CrossRef] 108. Kanematsu, M.; Young, T.M.; Fukushi, K.; Green, P.G.; Darby, J.L. Arsenic(III,V) adsorption on a goethite-based adsorbent in the presence of major co-existing ions: Modeling competitive adsorption consistent with spectroscopic and molecular evidence. Geochim. Cosmochim. Acta 2013, 106, 404–428. [Google Scholar] [CrossRef] 109. Selim, H.; Zhang, H. Modeling approaches of competitive sorption and transport of trace metals and metalloids in soils: A review. J. Environ. Qual. 2013, 42, 640–653. [Google Scholar] [CrossRef] 110. Wan, J.; Simon, S.; Deluchat, V.; Dictor, M.-C.; Dagot, C. Adsorption of As(III), As(V) and dimethylarsinic acid onto synthesized lepidocrocite. J. Environ. Sci. Health Part A Tox. Hazard. Subst. Environ. Eng. 2013, 48, 1272–1279. [Google Scholar] 111. Cui, Y.; Weng, L. Arsenate and phosphate adsorption in relation to oxides composition in soils: LCD modeling. Environ. Sci. Technol. 2013, 47, 7269–7276. [Google Scholar] 112. Weng, L.; van Riemsdijk, W.H.; Koopal, L.K.; Hiemstra, T. Ligand and Charge Distribution (LCD) model for the description of fulvic acid adsorption to goethite. J. Colloid Interface Sci. 2006, 302, 442–457. [Google Scholar] [CrossRef] 113. Hohenberg, P.; Kohn, W. Inhomogeneous electron gas. Phys. Rev. 1964, 136, 864–871. [Google Scholar] [CrossRef] 114. Kohn, W.; Sham, L.J. Self-consistent equations including exchange and correlation effects. Phys. Rev. 1965, 140, 1133–1138. [Google Scholar] [CrossRef] 115. Adamescu, A.; Hamilton, I.P.; Al-Abadleh, H.A. Thermodynamics of dimethylarsinic acid and arsenate interactions with hydrated iron-(oxyhydr)oxide clusters: DFT calculations. Environ. Sci. Technol. 2011, 45, 10438–10444. [Google Scholar] [CrossRef] 116. He, G.; Zhang, M.; Pan, G. Influence of pH on initial concentration effect of arsenate adsorption on TiO2 surfaces: Thermodynamic, DFT, and EXAFS interpretations. J. Phys. Chem. C 2009, 113, 21679–21686. [Google Scholar] [CrossRef] 117.
Zhang, N.; Blowers, P.; Farrell, J. Evaluation of density functional theory methods for studying chemisorption of arsenite on ferric hydroxides. Environ. Sci. Technol. 2005, 39, 4816–4822. [Google Scholar] [CrossRef] 118. Adamescu, A.; Mitchell, W.; Hamilton, I.P.; Al-Abadleh, H.A. Insights into the surface complexation of dimethylarsinic acid on iron (oxyhydr)oxides from ATR-FTIR studies and quantum chemical calculations. Environ. Sci. Technol. 2010, 44, 7802–7807. [Google Scholar] [CrossRef] 119. Kubicki, J.D.; Kwon, K.D.; Paul, K.W.; Sparks, D.L. Surface complex structures modelled with quantum chemical calculations: Carbonate, phosphate, sulphate, arsenate and arsenite. Eur. J. Soil Sci. 2007, 58, 932–944. [Google Scholar] [CrossRef] 120. Tofan-Lazar, J.; Al-Abadleh, H. ATR-FTIR studies on the adsorption/desorption kinetics of dimethylarsinic acid on iron-(oxyhydr)oxides. J. Phys. Chem. A 2012, 116, 1596–1604. [Google Scholar] [CrossRef] 121. Tofan-Lazar, J.; Al-Abadleh, H. Kinetic ATR-FTIR studies on phosphate adsorption on iron (oxyhydr)oxides in the absence and presence of surface arsenic: Molecular-level insights into the ligand exchange mechanism. J. Phys. Chem. A 2012, 116, 10143–10149. [Google Scholar] [CrossRef] 122. Farrell, J.; Chaudhary, B.K. Understanding arsenate reaction kinetics with ferric hydroxides. Environ. Sci. Technol. 2013, 47, 8342–8347. [Google Scholar] 123. Zhu, M.; Paul, K.W.; Kubicki, J.D.; Sparks, D.L. Quantum chemical study of arsenic(III,V) adsorption on Mn-oxides: Implications for arsenic(III) oxidation. Environ. Sci. Technol. 2009, 43, 6655–6661. [Google Scholar] [CrossRef] 124. Blanchard, M.; Morin, G.; Lazzeri, M.; Balan, E.; Dabo, I. First-principles simulation of arsenate adsorption on the (1Ī 2) surface of hematite. Geochim. Cosmochim. Acta 2012, 86, 182–195. [Google Scholar] [CrossRef] 125. Blanchard, M.; Wright, K.; Gale, J.D.; Catlow, C.R.A. Adsorption of As(OH)3 on the (001) surface of FeS2 pyrite: A quantum-mechanical DFT Study. J. Phys. Chem. C 2007, 111, 11390–11396. [Google Scholar] [CrossRef] 126. Duarte, G.; Ciminelli, V.S.T.; Dantas, M.S.S.; Duarte, H.A.; Vasconcelos, I.F.; Oliveira, A.F.; Osseo-Asare, K. As(III) immobilization on gibbsite: Investigation of the complexation mechanism by combining EXAFS analyses and DFT calculations. Geochim. Cosmochim. Acta 2012, 83, 205–216. [Google Scholar] [CrossRef] 127. Goffinet, C.J.; Mason, S.E. Comparative DFT study of inner-sphere As(III) complexes on hydrated α-Fe2O3 (0001) surface models. J. Environ. Monit. 2012, 14, 1860–1871. [Google Scholar] [CrossRef] 128. Loring, J.; Sandström, M.; Norén, K.; Persson, P. Rethinking arsenate coordination at the surface of goethite. Chem. Eur. J. 2009, 15, 5063–5072. [Google Scholar] [CrossRef] 129. Oliveira, A.F.; Ladeira, A.C.Q.; Ciminelli, V.S.T.; Heine, T.; Duarte, H.A. Structural model of arsenic(III) adsorbed on gibbsite based on DFT calculations. J. Mol. Struct. Theochem. 2006, 762, 17–23. [Google Scholar] [CrossRef] 130. Otte, K.; Schmahl, W.W.; Pentcheva, R. DFT+ U study of arsenate adsorption on FeOOH surfaces: Evidence for competing binding mechanisms. J. Phys. Chem. C 2013, 117, 15571–15582. [Google Scholar] [CrossRef] 131. Stachowicz, M.; Hiemstra, T.; van Riemsdijk, W.H. Surface speciation of As(III) and As(V) in relation to charge distribution. J. Colloid Interface Sci. 2006, 302, 62–75. [Google Scholar] [CrossRef] 132. Tanaka, M.; Takahashi, Y.; Yamaguchi, N. A study on adsorption mechanism of organoarsenic compounds on ferrihydrite by XAFS. 
J. Phys. Conf. Ser. 2013, 430, 012100. [Google Scholar] [CrossRef] 133. Klamt, A.; Jonas, V.; Bürger, T.; Lohrenz, J.C.W. Refinement and parametrization of COSMO-RS. J. Phys. Chem. A 1998, 102, 5074–5085. [Google Scholar] [CrossRef] 134. Delley, B. The conductor-like screening model for polymers and surfaces. Mol. Simul. 2006, 32, 117–123. [Google Scholar] [CrossRef] 135. Kubicki, J.D.; Paul, K.W.; Kabalan, L.; Zhu, Q.; Mrozik, M.K.; Aryanpour, M.; Pierre-Louis, A.-M.; Strongin, D.R. ATR-FTIR and density functional theory study of the structures, energetics, and vibrational spectra of phosphate adsorbed onto goethite. Langmuir 2012, 28, 14573–14587. [Google Scholar] [CrossRef] 136. Frisch, M.J.; Trucks, G.W.; Schlegel, H.B.; Scuseria, G.E.; Robb, M.A.; Cheeseman, J.R.; Scalmani, G.; Barone, V.; Mennucci, B.; Petersson, G.A.; Nakatsuji, H.; Caricato, M.; Li, X.; Hratchian, H.P.; Izmaylov, A.F.; Bloino, J.; Zheng, G.; Sonnenberg, J.L.; Hada, M.; Ehara, M.; Toyota, K.; Fukuda, R.; Hasegawa, J.; Ishida, M.; Nakajima, T.; Honda, Y.; Kitao, O.; Nakai, H.; Vreven, T.; Montgomery, J.A., Jr.; Peralta, J.E.; Ogliaro, F.; Bearpark, M.; Heyd, J.J.; Brothers, E.; Kudin, K.N.; Staroverov, V.N.; Kobayashi, R.; Normand, J.; Raghavachari, K.; Rendell, A.; Burant, J.C.; Iyengar, S.S.; Tomasi, J.; Cossi, M.; Rega, N.; Millam, N.J.; Klene, M.; Knox, J.E.; Cross, J.B.; Bakken, V.; Adamo, C.; Jaramillo, J.; Gomperts, R.; Stratmann, R.E.; Yazyev, O.; Austin, A.J.; Cammi, R.; Pomelli, C.; Ochterski, J.W.; Martin, R.L.; Morokuma, K.; Zakrzewski, V.G.; Voth, G.A.; Salvador, P.; Dannenberg, J.J.; Dapprich, S.; Daniels, A.D.; Farkas, Ö.; Foresman, J.B.; Ortiz, J.V.; Cioslowski, J.; Fox, D.J. Gaussian 09, Revision B.01; Gaussian, Inc.: Wallingford CT, USA, 2009. Available online: (accessed on 10 March 2014). 137. Curtiss, L.; Redfern, P.; Raghavachari, K. Gaussian-3X (G3X) theory: Use of improved geometries, zero-point energies, and Hartree–Fock basis sets. J. Chem. Phys. 2001, 114, 108–117. [Google Scholar] [CrossRef] 138. Zhao, Y.; Truhlar, D.G. Density functionals with broad applicability in chemistry. Acc. Chem. Res. 2008, 41, 157–167. [Google Scholar] [CrossRef] 139. Leach, A.R. Energy Minimisation and Related Methods for Exploring the Energy Surface. In Molecular Modelling: Principles and Applications; Prentice Hall: Upper Saddle River, NJ, USA, 2001; pp. 253–302. [Google Scholar] 140. Szabo, A.; Ostlund, N.S. The Hartree-Fock Approximation. In Modern Quantum Chemistry; Dover Publications: Mineola, NY, USA, 1989; pp. 108–230. [Google Scholar] 141. Møller, C.; Plesset, M.S. Note on an approximation treatment for many-electron systems. Phys. Rev. 1934, 46, 618–622. [Google Scholar] [CrossRef] 142. Levine, I.N. Electronic Structure of Diatomic Molecules. In Quantum Chemistry; Pearson: Upper Saddle River, NJ, USA, 2009; pp. 369–373. [Google Scholar] 143. Levine, I.N. The Variation Method. In Quantum Chemistry; Pearson: Upper Saddle River, NJ, USA, 2009; pp. 211–247. [Google Scholar] 144. Matta, C.F.; Boyd, R.J. An Introduction to the Quantum Theory of Atoms in Molecules. In A Chemist’s Guide to Density Functional Theory; Koch, W., Holthausen, M.C., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2001; pp. 1–34. [Google Scholar] 145. Korth, M.; Grimme, S. “Mindless” DFT benchmarking. J. Chem. Theory Comput. 2009, 5, 993–1003. [Google Scholar] [CrossRef] 146. Becke, A.D. A new mixing of Hartree–Fock and local density-functional theories. J. Chem. Phys. 1993, 98, 1372. [Google Scholar] [CrossRef] 147. 
Lee, C.; Yang, W.; Parr, R.G. Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density. Phys. Rev. B 1988, 37, 785–789. [Google Scholar] [CrossRef] 148. Bachrach, S.M. Quantum Mechanics for Organic Chemistry. In Computational Organic Chemistry; John Wiley & Sons: Hoboken, NJ, USA, 2007; pp. 8–11. [Google Scholar] 149. Clark, T.; Chandrasekhar, J.; Spitznagel, G.W.; Schleyer, P.V.R. Efficient diffuse function-augmented basis sets for anion calculations. III. The 3-21+G basis set for first-row elements, Li-F. J. Comput. Chem. 1983, 4, 294–301. [Google Scholar] [CrossRef] 150. Krishnan, R.; Brinkley, J.S.; Seeger, R.; Pople, J.A. Self-consistent molecular orbital methods. XX. A basis set for correlated wave functions. J. Chem. Phys. 1980, 72, 650–654. [Google Scholar] [CrossRef] 151. Papajak, E.; Zheng, J.; Xu, X.; Leverentz, H.R.; Truhlar, D.G. Perspectives on basis sets beautiful: Seasonal plantings of diffuse basis functions. J. Chem. Theory Comput. 2011, 7, 3027–3034. [Google Scholar] [CrossRef] 153. Kresse, G.; Hafner, J. Ab initio molecular-dynamics simulation of the liquid-metal-amorphous-semiconductor transition in germanium. Phys. Rev. B 1994, 49, 14251–14269. [Google Scholar] [CrossRef] 154. Kresse, G.; Hafner, J. Ab initio molecular dynamics for open-shell transition metals. Phys. Rev. B. 1993, 48, 13115–13118. [Google Scholar] [CrossRef] 157. Kresse, G.; Furthmüller, J.; Hafner, J. Theory of the crystal structures of selenium and tellurium: The effect of generalized-gradient corrections to the local-density approximation. Phys. Rev. B 1994, 50, 13181–13185. [Google Scholar] [CrossRef] 158. Myneni, S.C.B.; Traina, S.J.; Waychunas, G.A.; Logan, T.J. Experimental and theoretical vibrational spectroscopic evaluation of arsenate coordination in aqueous solutions, solids, and at mineral-water interfaces. Geochim. Cosmochim. Acta 1998, 62, 3285–3300. [Google Scholar] [CrossRef] 159. Cancès, E.; Mennucci, B.; Tomasi, J. A new integral equation formalism for the polarizable continuum model: Theoretical background and applications to isotropic and anisotropic dielectrics. J. Chem. Phys. 1997, 107, 3032–3041. [Google Scholar] [CrossRef] 160. Perdew, J.; Burke, K.; Ernzerhof, M. Errata: Generalized gradient approximation made simple. Phys. Rev. Lett. 1996, 78, 1396. [Google Scholar] [CrossRef] 161. Perdew, J.; Burke, K.; Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 1996, 77, 3865–3868. [Google Scholar] [CrossRef] 162. Adamo, C.; Barone, V. Toward reliable density functional methods without adjustable parameters: The PBE0 model. J. Chem. Phys. 1999, 110, 6158–6170. [Google Scholar] [CrossRef] 163. Zhao, Y.; Truhlar, D.G. A new local density functional for main-group thermochemistry, transition metal bonding, thermochemical kinetics, and noncovalent interactions. J. Chem. Phys. 2006, 125, 194101. [Google Scholar] [CrossRef] 164. Szytuła, A.; Burewicz, A.; Dimitrijević, Ž.; Kraśnicki, S.; Rżany, H.; Todorović, J.; Wanic, A.; Wolski, W. Neutron diffraction studies of α-FeOOH. Phys. Status Solidi 1968, 26, 429–434. [Google Scholar] [CrossRef] 166. Dudarev, S.L.; Botton, G.A.; Savrasov, S.Y.; Humphreys, C.J.; Sutton, A.P. Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+ U study. Phys. Rev. B 1998, 57, 1505–1509. [Google Scholar] [CrossRef] 167. Rollmann, G.; Rohrbach, A.; Entel, P.; Hafner, J. First-principles calculation of the structure and magnetic phases of hematite. Phys. Rev. 
B 2004, 69, 165107. [Google Scholar] [CrossRef] 168. Coey, J.M.D.; Barry, A.; Brotto, J.; Rakoto, H.; Brennan, S.; Mussel, W.N.; Collomb, A.; Fruchart, D. Spin flop in goethite. J. Phys. Condens. Matter 1995, 7, 759–768. [Google Scholar] [CrossRef] 169. Nosé, S. A unified formulation of the constant temperature molecular dynamics methods. J. Chem. Phys. 1984, 81, 511. [Google Scholar] [CrossRef] 170. Leung, K.; Nielsen, I.M.B.; Criscenti, L.J. Elucidating the bimodal acid-base behavior of the water-silica interface from first principles. J. Am. Chem. Soc. 2009, 131, 18358–18365. [Google Scholar] [CrossRef] 171. Liu, L.; Zhang, C.; Thornton, G.; Michaelides, A. Structure and dynamics of liquid water on rutile TiO2(110). Phys. Rev. B 2010, 82, 161415. [Google Scholar] [CrossRef] 172. Kelly, C.P.; Cramer, C.J.; Truhlar, D.G. Adding explicit solvent molecules to continuum solvent calculations for the calculation of aqueous acid dissociation constants. J. Phys. Chem. A 2006, 110, 2493–2499. [Google Scholar] [CrossRef] 173. Felipe, M.A.; Xiao, Y.; Kubicki, J.D. Molecular orbital modeling and transition state theory in geochemistry. Rev. Mineral. Geochem. 2001, 42, 485–531. [Google Scholar] [CrossRef] 174. Zhao, Y.; Truhlar, D.G. Design of density functionals that are broadly accurate for thermochemistry, thermochemical kinetics, and nonbonded interactions. J. Phys. Chem. A 2005, 109, 5656–5667. [Google Scholar] [CrossRef] 175. Sarotti, A.M.; Pellegrinet, S.C. Application of the multi-standard methodology for calculating 1H NMR chemical shifts. J. Org. Chem. 2012, 77, 6059–6065. [Google Scholar] [CrossRef] 176. Sarotti, A.M.; Pellegrinet, S.C. A multi-standard approach for GIAO 13C NMR calculations. J. Org. Chem. 2009, 74, 7254–7260. [Google Scholar] [CrossRef] 177. Villalobos, M.; Pérez-Gallegos, A. Goethite surface reactivity: A macroscopic investigation unifying proton, chromate, carbonate, and lead(II) adsorption. J. Colloid Interface Sci. 2008, 326, 307–323. [Google Scholar] [CrossRef] 178. Villalobos, M.; Cheney, M.A.; Alcaraz-Cienfuegos, J. Goethite surface reactivity: II. A microscopic site-density model that describes its surface area-normalized variability. J. Colloid Interface Sci. 2009, 336, 412–422. [Google Scholar] [CrossRef]
Suppose we have a time-varying potential
$$\left( -\frac{1}{2m}\nabla^2+ V(\vec{r},t)\right)\psi = i\partial_t \psi$$
then I want to know why the general solution is written as $\psi = \displaystyle\sum_n a_n(t)\phi_n(\vec{r})e^{-iE_n t}$. In particular, why do we get a time-dependent coefficient $a_n(t)$? This confuses me because when we have a time-independent potential, we use separation of variables and the usual method to get the general solution
$$\psi = \displaystyle\sum_n a_n\phi_n(\vec{r})e^{-iE_n t}$$
However, the time-varying counterpart cannot be reduced this way by separation of variables.
EDIT: I could not find a free preview of the book I am using; however, the lectures here, for example, use the same solution.

Are you sure there is $\phi_n(t)\exp(-iE_nt)$ and not just $\phi_n\exp(-iE_nt)$? – Maksim Zholudev Feb 22 '12 at 7:22
@MaksimZholudev That was a typo. Thanks for pointing it out. – yayu Feb 22 '12 at 10:24

The basis functions $\phi_n(\vec{r})$ and the energies $E_n$ are the solutions of the stationary Schrödinger equation:
$$ \left( -\frac{1}{2m}\nabla^2+ V_0(\vec{r})\right)\phi_n(\vec{r}) = E_n \phi_n(\vec{r}) $$
If the Hamiltonian depends on time, one cannot even write this equation. But the set of functions $\phi_n(\vec{r})$ is a full basis of the Hilbert space, so one can always expand any function (from this space) over this basis. The snapshot of the wavefunction $\psi(\vec{r},t)$ at the moment $t$ is just a function of coordinates and an element of this Hilbert space. So we can expand it:
$$ \psi(\vec{r},t) = \sum_n b_n(t) \phi_n(\vec{r}) $$
If the Hamiltonian does not depend on time, the expansion coefficients can easily be derived from the general Schrödinger equation (the one with the time derivative):
$$ b_n(t) = a^{(0)}_n e^{-iE_nt} $$
In the case of a time-dependent potential these coefficients are usually treated as unknown functions of time:
$$ b_n(t) = a_n(t) e^{-iE_nt} $$
Perturbation theory is used to find approximations for these functions.

1) What the OP is looking at is known as time-dependent perturbation theory. Here the energies $E_n$ are eigenvalues of the unperturbed time-independent Hamiltonian $H^{(0)}$. The full Hamiltonian is
$$ H(t) ~=~ H^{(0)} + V(\vec{r},t). $$
2) Imagine for a second that the potential $V$ is time-independent and commutes with $H^{(0)}$. Let $v_n$ be the eigenvalues of $V$. In the time-independent case, the wavefunction solution is then of the form
$$\psi(t,\vec{r}) ~=~ \displaystyle\sum_n c_n\phi_n(\vec{r})e^{-i(E_n+v_n) t} ~=~ \displaystyle\sum_n \left(c_n e^{-iv_nt}\right) \phi_n(\vec{r})e^{-iE_n t}.$$
3) For general time-dependent perturbations $V(t)$, it is hence natural to expect that the coefficients $a_n(t)$ in the eigenfunction expansion
$$\psi(t,\vec{r}) ~=~ \displaystyle\sum_n a_n(t)\phi_n(\vec{r})e^{-iE_n t} $$
could depend on time $t$, cf. the OP's question (v2). Here $\phi_n(\vec{r})$ denote eigenfunctions of the unperturbed problem,
$$ H^{(0)}\phi_n(\vec{r})~=~E_n\phi_n(\vec{r}).$$

It is simply a matter of definition. If the time-dependent coefficients can reasonably be found from the Schrödinger equation, then this is a solution for your time-dependent wave function. One is introducing new variables $a_n(t)$ and determining them from the exact equation. It can always be done.
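For completeness, here is a short derivation sketch (standard textbook material, not taken from the thread itself) of the equations that the time-dependent coefficients satisfy. Substituting the expansion into the full Schrödinger equation and using $H^{(0)}\phi_n = E_n\phi_n$, the $E_n$ terms cancel, leaving
$$ i\sum_n \dot{a}_n(t)\,\phi_n(\vec{r})\,e^{-iE_n t} ~=~ \sum_n a_n(t)\,V(\vec{r},t)\,\phi_n(\vec{r})\,e^{-iE_n t}. $$
Projecting onto $\phi_m$ (taking the inner product and using orthonormality) gives the exact coupled equations
$$ i\,\dot{a}_m(t) ~=~ \sum_n \langle\phi_m|V(t)|\phi_n\rangle\, a_n(t)\, e^{i(E_m-E_n)t}. $$
If $V = 0$, then $\dot{a}_m = 0$ and the constant coefficients of the time-independent case are recovered; a nonzero $V(t)$ is precisely what drives the $a_n(t)$ to change in time.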
Quantum Refutations and Reproofs
May 12, 2012
One of Gil Kalai's conjectures refuted but refurbished

Niels Henrik Abel is famous for proving the impossibility of solving the quintic equation by radicals, in 1823. Finding roots of polynomials had occupied mathematicians for centuries, but unsolvability had received scant effort and few tools until the late 1700's. Abel developed tools of algebra, supplied a step overlooked by Paolo Ruffini (whose voluminous work he did not know), and focused his proof into a mere six journal pages.

Today our guest poster Gil Kalai leads us in congratulating Endre Szemerédi, who on May 22 will officially receive the 2012 prize named for Abel. He then revisits his "Conjecture C" from his first post in this series, in response to a draft paper by our other guest poster Aram Harrow with Steve Flammia.

Szemerédi's prize is great news for Discrete Mathematics and Theoretical Computer Science, areas for which he is best known, and this blog has featured his terrific work here and here. The award rivals the Nobel Peace Prize in funds and brings the same handshake from the King of Norway. Gil offers the analogy that Abel's theorem showed why a particular old technology, namely solution by radicals, could not scale upward beyond the case of degree 4. The group-theoretic technology that superseded it, particularly as formulated by Évariste Galois, changed the face of mathematics. Indeed Abelian groups are at the heart of Peter Shor's quantum algorithm. Not only did the work by Abel and Galois pre-date the proofs against trisecting angles, duplicating cubes, and squaring circles, it made them possible.

Refutations and Revisions

Gil's analogy is not perfect, because quantum computing is hardly an "old" technology, and because currently there is no compelling new positive theory to supersede it. Working toward such a theory is difficult, and there are places where it might be tilting against the power stations of quantum mechanics itself. In this regard, Aram and Steve's paper provides a concrete counter-example to a logical extension of Gil's conjecture for the larger quantum theory, in a way that casts doubt on the original.

The refutation and revision of conjectures is a big part of the process described by Imre Lakatos in his book Proofs and Refutations, which was previously discussed in this blog. Here, the conjectures are physics conjectures, related to technological capability, and the "proof" and "reproof" process refers to confronting formal mathematical models with (counter-)examples and various checks by observations of Nature.

After two sections by Aram and Steve explaining their paper and its significance, Gil assesses the effect on his original Conjecture C and re-assesses its motivation. The latter is reinforced by a line of research begun in 1980 with the following question by Sir Anthony Leggett, who won the Nobel Prize in Physics in 2003:

How far do experiments on the so-called "macroscopic quantum systems" such as superfluids and superconductors test the hypothesis that the linear Schrödinger equation may be extrapolated to arbitrarily complex systems?

Leggett's "disconnectivity measure" in his 1980 paper, "Macroscopic Quantum Systems and the Quantum Theory of Measurement," was an early attempt to define rigorously a parameter that distinguishes complicated quantum states. (source, ref1, ref2)

In this light, Gil formulates two revisions of his conjecture that stay true to his original intents while avoiding the refutation.
Then I (Ken) review lively comments that continue to further the debate in previous posts in our series.

Aram Harrow, with Steve Flammia

Recall that Gil defined an entanglement measure {K(\rho)} (there called {ENT}) on a quantum state {\rho} in a particular standard manner, where {\rho} signifies a possibly-mixed state. The statement of Conjecture C then reads,

There is a fixed constant {c}, possibly {c = 2}, such that for states {\rho_n} produced by feasible {n}-qubit quantum computers,

\displaystyle K(\rho_n) = O(n^c).

Here the technical meaning of "feasible" depends on which models of noisy quantum computers reflect the true state and capability of technology, and is hard for both sides to pin down. We can, however, still refute the conjecture by finding states {\rho} that by consensus ought to be feasible—or at least to which the barriers stated by Kalai do not apply—for which {K(\rho)} is large.

Our point of attack is that there is nothing in the definition of {K(\rho)} or in the motivation expressed for the conjecture that requires {\rho} to be an {n}-fold aggregate of binary systems. Quantum systems that represent bits, such as up/down or left/right spin, are most commonly treated, but are not exclusive to Nature. One can equally well define basic ternary systems, or 4-fold or 5-fold or {d}-fold, not even mandating that {d} be prime. Ternary systems are called qutrits, while those for general {d} are called qudits. The definition of {n}-qudit mixed states {\rho} allows {K(\rho)} to be defined the same way, and we get the same conjecture statement. Call that Conjecture C'. As Gil agrees, our note shows unconditionally that Conjecture C' is false, for {d} as low as {d = 8}.

Theorem 1 There exist intuitively feasible {n}-qudit states {\rho_n} on a 2-dimensional grid for which {K(\rho_n) = 2^{2n/3 - o(n)}}.

It is important to note that with {d=8} we cannot simply declare that we have a system on {3n} qubits, because we cannot assume a decomposition of a qudit state via tensor products of qubit states. Indeed, when the construction in our note is attempted with qubits, the resulting states {\rho'_n} have {K(\rho'_n) \sim n^2}. However, our construction speaks against both the ingredients and the purpose of the original Conjecture C.

What the Conjecture is Driving At

Conjectures of this kind, as Steve and I see it, are attempts at what Scott Aaronson calls a "Sure/Shor separator." By his definition that would distinguish states we've definitely already seen how to produce from the sort of states one would require in any quantum computer achieving an exponential speedup over (believed) classical methods. It represents an admirable attempt to formulate QC skepticism in a rigorous and testable way. However, we believe that our counterexamples are significant not especially because they refute Conjecture C, but because they do so while side-stepping Gil's main points about quantum error correction failing.

More generally, we think it is telling that it is so hard to come up with a sensible version of Conjecture C. In our view, this is because quantum computers harness phenomena, such as entanglement and interference, that are already ubiquitous. Nature makes them relatively hard to control, but it is also hard to focus sensibly on what about the control itself is difficult. The formulations of Conjecture C and related obstacles instead find themselves asserting the difficulty of creating rather than controlling.
Of course they are trying to get at the difficulty of creating the kinds of states needed for controlling, but the formulations still wind up trying to block the creation of phenomena that "just come naturally." In our view, the situation is similar to ones in classical computing. A modern data center exists in a state of matter radically unlike anything ever seen in pre-industrial times. But if you have to quantify this with a crude observable, then it's hard to come up with anything that wasn't already seen in much simpler technology, like light bulbs. Our note can be thought of as showing that Conjecture C refers to a correlation measure that is high not only for full-scale quantum computers, but even for the quantum equivalent of light bulbs—technology that is non-trivial, but by no means complex.

Gil Again: Revising Conjecture C

One of the difficult aspects of my project is to supply mathematical engines for the conjectures, which were initially expressed in informal English terms and with physical intuition. For example, in Conjecture 4 we need to define "highly entangled qubits" and "error-synchronization" formally. This crucial technical part of the project, which is the most time-consuming, witnessed much backtracking. It happened with initial formulations for Conjecture 4 that failed when extended from qubits to qudits, which was indeed a reason for me to dismiss them and look for a more robust one, and this has guided me with other conjectures. Aram and Steve's example is sufficient reason to look for another formal way to express the idea behind Conjecture C.

While rooted in quantum computer skepticism, Conjecture C expresses a common aim to find a dividing line between physical quantum states in the pre- and post-universal quantum computer eras. When Aram's grandchildren ask him, "Grandpa, how was the world before quantum computers?" he could reply: "I hardly remember, but thanks to Gil we had some conjectures regarding the old days," and the grandchildren will burst into laughter about the old days of difficult entanglements.

Conjecture C expresses the idea that "complicated pure states" cannot be approached by noisy quantum computers. More specifically, the conjecture asserts that quantum states that can be realistically created by quantum computers are "{k}-local" where {k} is bounded (and perhaps even quite small). But formally defining {k}-locality is a tricky business. (Joe Fitzsimons' 2-locality suggestions in comments beginning here and extending a long way down are related to this issue.)

We can be guided by the motivation stated on the first page of the paper by Anthony Leggett mentioned above, for his "disconnectivity measure," which intends to distinguish two kinds of quantum states:

Familiar "macroscopic quantum phenomena" such as flux quantization and the Josephson effect [correspond to states having very low] disconnectivity, while the states important to a discussion of the quantum theory of measurement have a very high value of this property.

Leggett has stayed active with this line of work in the past decade, and it may be informative to develop further the relation to his problems of quantum measurement and problems in quantum computation. In this general regard, let me discuss possible new mathematical engines for the censorship conjecture.

Conjecture C For Codes

Error-correcting codes are wonderful mathematical objects, and thinking about codes is always great.
Quantum error-correcting codes will either play a prominent role in building universal quantum computers or in explaining why universal quantum computers cannot be built, whichever comes first. The map I try to draw is especially clear for codes:

Conjecture C for codes: For some (small) constant {c}, pure states representing quantum error-correcting codes capable of correcting {c}-many errors cannot be feasibly approximated by noisy quantum computers.

As in the original version of Conjecture C, our notion of approximation is based on qubit errors. Conjecture 1 in the original post asserts that for every quantum error-correcting code we can only achieve a cloud of states, rather than essentially a Dirac delta function, even if we use many qubits for encoding. The expected qubit errors of the noisy state compared to the intended state can still be a small constant. Conjecture C for codes asserts that when the code corrects many errors, this cloud will not even concentrate near a single code word. Here "many" may well be three or even two.

Conjecture D for Depth

Conjecture C for codes deals only with special types of quantum states. What can describe general pure states that cannot be approximated?

Conjecture D: For some (small) constant {d}, pure states on {n} qubits that can be approximated by noisy quantum computers can be approximated by depth-{d} quantum circuits.

Here we adopt the ordinary description of quantum circuits where in each round some gates on disjoint sets of one or two qubits are performed. (A toy numerical sketch of such bounded-depth circuits appears at the end of this page.) Unlike the old Conjecture C, which did not exclude cluster states and thus could not serve as a Sure/Shor separator in Scott Aaronson's strict sense, the new Conjecture D may well represent such a separator in the strict sense that it does not allow efficient factoring. It deviates from the direction of earlier versions of Conjecture C since it is based on computational-complexity terms. The new Conjecture D gives poetic justice to bounded-depth circuits. In classical computation, bounded-depth circuits of polynomial size give a mathematically fascinating yet pathetically weak computational class. In quantum computation this may be a viable borderline between reality and dream.

In the Comments

The comments section of the "Quantum Super-PAC" post has seen an extremely lively discussion, for which we profusely thank all those taking part. We regret that currently we can give only the barest enumeration of some highlights—we envision a later summary of what has been learned.

Discussion of a possible refutation of Gil's conjectures via 2-local properties started in earnest with this comment by Joe Fitzsimons. See Gil's replies here and here, and further exchanges beginning next. John Sidles outlined a mathematical approach to the conjectures beginning here. Hal Swyers moved to clarify the physics involved in the discussions. Then John Preskill reviewed the goings-on, including 2-locality and the subject of Lindblad evolution as used by Gil and discussed extensively above, and continued here to head a new thread. Swyers picked up on questions about the size of controllable systems here and in a second part here. Gil outlined a reply recently here. Meanwhile, Gil rejoined a previous post's discussion of the rate of error with a comment in the "Super-PAC" post here. Alexander Vlasov re-opened the question of whether the conjectures don't already violate linearity. Sidles raised a concrete example related to earlier comments by Mikhail Katkov here.
Then Gil related offline discussions with David Deutsch here. Gil has recently reviewed the debate on his own blog. He and Jim Blair also mentioned some new papers and articles beginning here. On the technological side, Steve Flammia noted on the Quantum Pontiff blog that ion-trap technology has taken a big leap upward in scale for processes that seem hard to simulate classically, though the processes would need to be controlled more to support universal quantum computation.

Open Problems

Propose a version for Conjecture C or D, or explain why such a conjecture is misguided to start with.

108 Comments

1. May 12, 2012 9:41 am I wish some computer scientist would write about Alexander Grothendieck.

• John Sidles permalink May 12, 2012 7:18 pm Do I feel my leg being gently pulled? :)

• Serge permalink May 12, 2012 7:52 pm Ah yes, what a genius! I agree he’d have done marvels in computer science, even though his revolutionary achievements in algebra, geometry, topology, categories, philosophy, let alone his political involvements, are quite enough for one lifetime. :)

• John Sidles permalink May 12, 2012 9:56 pm Serge, a newly published book well worth reading is Elaine Riehm’s and Frances Hoffman’s Turbulent times in mathematics: the life of J. C. Fields and the history of the Fields medal (2012), in which we find the following quotation from David Hilbert, who at the 1928 ICM endeavored to restore the bonds of mathematical collegiality that had been shattered by the First World War: “Let us consider that we as mathematicians stand on the highest pinnacle of the cultivation of the exact sciences. We have no other choice than to assume this highest place, because all limits, especially national ones, are contrary to the nature of mathematics. It is a complete misunderstanding of our science to construct differences according to people and races, and the reasons for which this has been done are very shabby ones. Mathematics knows no races. … For mathematics, the whole cultural world is a single country.” The sobering failure of Hilbert’s 1928 efforts was to become evident in the sad circumstances of Hilbert’s own death, and the desperate circumstances of Grothendieck’s own childhood, in the heart of the Third Reich. Are present-day circumstances any less sobering than those of Hilbert’s era, and of Grothendieck’s era? Is the role of mathematics any less central? Appreciation and thanks are due to Elaine Riehm and Frances Hoffman, for writing a book that helps us to ponder these questions.

• May 12, 2012 10:04 pm We have a scheme to talk about Grothendieck sometime in the summer.

• John Sidles permalink May 13, 2012 11:30 am Hoorah! Ken, that’s exciting to look forward to! Serge’s post was correct (IMHO): the successes and failures of Grothendieck’s wonderful enterprises are confounding, delightful, disturbing, and instructive for all STEM disciplines.

• Serge permalink May 13, 2012 1:20 pm

• May 13, 2012 1:37 pm

2. ramirez permalink May 12, 2012 10:43 am Quantum space is defined as the extra space when the integration of the matrix P=1 gets off the boundaries. as in P=1 square . the inverse functions on radicals have to comply with the equivalence. not as an Arch function were the real numbers change value on the negative sector. positive negative positive as logic gates for a nano circuit, are god for a logic circuit in a binary system. on the hyperbolic functions c=2 has an exponential on real numbers and a radical in prime numbers.
this problem gave to Enrrico Fermi the ability to create a fermion. and Einstein created the solution using 1/2 of the sine 1/2 cosine to create a prime number that can have a solution as a radical and the quantum space could exist without problem. P=NP eliminating the bipolarity on the second factor. when reaches a speeds faster than the speed of light. the particle accelerator does not reach speeds faster than the light. using C=2 if C is the constant of the speed of light as seen in the equation of Apple computers robot Jeffrey 5000 creates a gravitational force. but does not reach the prime interface as described by Einstein were Gravity creates the inverse Matrix of p=1. is different than the Arch Matrix of P=1.

3. Rachel permalink May 12, 2012 12:25 pm This is a physics problem, and making conjectures with no basis in physics does not make sense. Are you really suggesting that Nature sees that you are trying to prepare a state encoded in a quantum error-correcting code and decides to stop you? I strongly disagree with calling these random, unsupported guesses “conjectures.” A conjecture should have at least some reason behind it, not just “gives poetic justice to bounded-depth circuits.”

• May 12, 2012 4:46 pm Hi Rachel, good question! It is a natural question to ask if in nature we can witness approximately pure states manifesting long (or high-depth) quantum processes. (Let us even allow unlimited computation power to control the process.) After all, unless there is some fault-tolerant machinery it is hard to see how a “long” quantum process can stay approximately pure. So bounded-depth processes are a natural proposal for the limit of quantum processes that do not manifest quantum fault-tolerance.

4. Serge permalink May 12, 2012 4:16 pm The impossibility of deciding whether P=NP is a direct consequence of Heisenberg’s uncertainty principle.

• May 13, 2012 1:56 pm

• May 13, 2012 1:57 pm why would anyone even study such a possibility? It is like saying I cannot count n! quickly because of Cauchy-Schwarz!

• May 13, 2012 2:00 pm why would anyone even study such a possibility? It is like saying to know whether I cannot count n! quickly because of Cauchy-Schwarz is undecidable!

• Serge permalink May 13, 2012 3:03 pm Not exactly. With P=NP you have a “speed” – that of computer processes – and a “position” – the accuracy of the output result. I believe that the product of the probability for an algorithm to output accurate results by the probability for it to be efficient is lower than some fixed constant. I claim that both phenomena – the one about quantum particles and the one about computer processes – are implied by some more general principle, though I can’t write out the details of a relationship between my principle and Heisenberg’s.

• ramirez permalink May 13, 2012 7:33 pm Couchy effect as an absorption coefficient can be used on different conditions, but specially under a gravitational force. as the Schwarzchild equation measures the compression state of the space under gravity force. Here is when Einstein observes that light behaves as matter and is affected by gravity also.calculates the strength of the inertial force produced by the black holes. as a P=NP its assumed that NP can coexist in the same time space, and this condition presumes the existence of a great gravitational force, in the form of antimatter . here N would be the Anti-quark, or anti-proton and P the real number(X), that exists inside the schwarz ring of gravity.
Couchy measures the intensity and speed of the absorption. However to consider that the quantum space does exist without that intense gravity would be uncertain as trying to decide the sex of a baby. here a radical have to be a proportional exponential to the times square. means we have to compress two dimensions, one for N and one for P.when C=Constant of the speed of Light and you take C=2 you create this uncertainty dilemma. its only when you consider C=Csquare when the time space paradox allows you N for Quantum space and a real space for P=1. see Schrodingers cat paradox. The Linear space with Reimman and Euler.The integration of two dimensions that allows you the existence of antimatter is still under test in the Linear particle accelerator that has fail to prove the existence of antimatter in the Higgs Boson concept The Tevatron cannot go faster than the speed of Light.

• Serge permalink May 13, 2012 7:42 pm

5. John Sidles permalink May 12, 2012 7:14 pm On Shtetl Optimized, in response to a well-posed question from Ajit R. Jadhav, I described a toolkit for quantum dynamical simulation in which Conjecture C holds true, and yet the framework is sufficiently accurate for many (all?) practical quantum simulation purposes. A bibliography is included. The aggregate toolkit contains perhaps not even one original idea … still it is fun, and useful too, to appreciate how naturally many existing dynamical ideas mesh together. As for whether Nature simulates herself via this toolkit, who knows? The post does sketch an alternative-universe version of the Feynman Lectures that encompasses this eventuality.

6. May 13, 2012 2:11 pm Dear all, Greetings from Lund. I am here for the Crafoord days celebrating the Crafoord Prize being given to Jean Bourgain and Terry Tao. There is a symposium entitled “from chaos to harmony” celebrating the occasion, with live video of the five lectures here. Here are some questions I am curious about regarding the topic of the post: 1) Can somebody explain Leggett’s parameter precisely? I remember that when I tried to understand it (naively perhaps) the parameter was large for certain systems with large classical correlations. In any case, I would be happy to see a clear explanation of what the parameter is. 2) What could be potential counterexamples to the suggestion that all natural (approximately) pure evolutions are of (uniformly) bounded depth? 3) Does the note by Aram and Steve give convincing evidence regarding Conjecture C in its original form? I am very thankful to Aram and Steve and overall I was quite convinced. But I am not entirely sure. This has two parts: a) Is the state W realistic? b) Is Conjecture C’ in the form refuted by them (and there is no dispute that their example refutes Conjecture C’) the right extension to qudit-operated QC of the qubit version?

• May 13, 2012 2:17 pm To add to 1), consider a quantum circuit that maps the all-|0> state to a state f. Is there an easy way—preferably gate-by-gate inductive—to compute Leggett’s D-measure of f?

• May 13, 2012 10:27 pm a) Is the state W realistic? I would find it hard to think of a reason why it wouldn’t be. It’s essentially what you get when a single photon is absorbed by a gas cloud, or when you put a single photon through a diffraction grating.

• aramharrow permalink May 13, 2012 11:33 pm Joe, you probably know this stuff better than me, but for a gas cloud of N atoms, doesn’t the temperature have to scale like 1/log(N)? For photons that’s also true, but I think with a better prefactor.
For example, modes of an X-ray probably have very little thermal noise in them.

• May 13, 2012 11:49 pm Hi Aram, I was thinking of things like vapor cell quantum memories, which store the quantum state essentially as a w-state (see and have been demonstrated with reasonable fidelities. While certainly these are essentially constant-sized devices, the constant is enormous.

• aramharrow permalink May 13, 2012 11:51 pm Cool, thanks!

• ramirez permalink May 15, 2012 12:17 pm The W- state as a receptor, it does absorb wave length frequencies and they are used as synthetic retina for digital cameras, they do absorb light and releases it, the main trick here is that the photon is turned into an electric current as in the solar panel arrays. so in this way the encoded information can be transcribed into zeros and ones.Bose-Einstein condensate obtains the harmonic state of some gases when they are under pressure and a near to absolute Zero K temperature.The solid state receptors for Wide Band Antenna does work on Microwaves capturing and releasing the information that is in the air. however the antenna position losses its grip to the sine of the wave so the new W-state receptors have multiple position on fractal arrays to correct this problem, as in your cellular.

• aramharrow permalink May 13, 2012 11:47 pm For Leggett’s parameter, it’s crucial that the parameter “a” be taken to be <1/2, so that classical systems always have disconnectivity equal to 0. If you take it to be 1/3, then this says that D is the largest N such that for all subsets of N qubits and all divisions of those qubits into subsystems A, B, we have S(AB) <= (S(A) + S(B))/3, where S() is the von Neumann entropy. For evidence of depth, I think that the presence of iron in the Earth is pretty good evidence. The only natural process we know for creating it is stellar nucleosynthesis, which (a) takes a very long time, and (b) requires quantum mechanics (and (c), whose name I had to look up on Wikipedia…). Because of (a) and (b), we have evidence of deep quantum processes. I can't prove this, since I can't rule out the possibility of a low-depth classical method of producing lots of iron. Rather I think the evidence for it is like the evidence for evolution, which is that it's the only plausible theory that is consistent with the data, and that the theory alone has predicted things that weren't originally used to derive the theory. Note that I didn't say anything about any states being pure. This is because purity is subjective, and I don't know of a way our physical theories can meaningfully depend on this. This of course is a common theme in my (and Rachel's and Peter's and others) objections to Gil's conjectures, which is that they are phrased in ways that suggest Nature may have to know which states we prefer the system to be in.

• Gil Kalai permalink May 16, 2012 8:14 am Dear Aram, this is a great comment with a lot of interesting things to think about. I am enthusiastic to see this clear explanation of Leggett’s parameter (and it would be nice to discuss this parameter), and the iron as evidence for depth is exciting and we should certainly discuss it. I suppose I do not understand the point about purity. What do you mean by purity being subjective, what is “this” that physical theory cannot depend on, and what is the critique of my conjecture that is referred to?

• May 16, 2012 3:32 pm Hi Aram, here is a remark regarding Leggett’s parameter as you described it.
In the context of Conjecture 4 and the notion of “highly entangled state”, one idea was to base the notion on entanglement between partitions of the qubits into two parts. A counterexample for this idea, but for qudits, that came up in a discussion with Greg Kuperberg some years ago looks like this: let G be an expander graph with valence 3 and with 2n vertices. Take 3n Bell pairs and arrange them into 2n qudits with d=8 according to the pattern of the graph G. Then at the d=8 qudit level, this state has a lot of excess entanglement for partitions into two parts. This is achieved simply by grouping the halves of the Bell pairs and not by doing any true quantum information processing. So maybe this is an example also of a very mundane state that represents a high value of Leggett’s D-parameter.

• June 22, 2012 1:16 am Regarding the expander counter-example: there’s something a little ambiguous about Leggett’s definition in that he states it for states that are symmetric under permutation, so that the reduced state of any N particles depends only on N. Probably the right pessimistic interpretation for non-symmetric states is that you want to choose the worst subset. So then it becomes “D := max N s.t. for any S with |S|=N there exists T\subset S s.t. H(rho_S) <= delta (H(rho_T) + H(rho_{S-T}))”. But if that’s the definition, then D will be very low for this expander construction you described, and for most non-symmetric states I can think of. It doesn’t feel like a very robust definition, though. If we replace |S|=N with |S|<=N then you get something different. Presumably also we should restrict N to be << system size. And certainly replacing “for any S” with “exists S” would be far too permissive; then just having a bunch of EPR pairs would count.

• John Sidles permalink May 19, 2012 2:19 pm Gil asks: Can somebody explain Leggett’s parameter D precisely? I am struggling with this too. Hopefully the following LaTeX will be OK (apologies in advance if there are bugs). An explicit definition of D is given in Leggett’s article (eqs. 3.1-3), and three concrete examples are worked out (the first example is marred by a typographic error: S_2=1 should be S_1=S_2=0). The part of the definition that I struggle with is the pullback-and-partition of the entropy S onto subsystems. In particular, the post-pullback partition into (spatially separated? weakly coupled?) subsystems is problematic … and such partitions are problematic in classical thermodynamics too. Presciently, Leggett’s article authorizes us to adjust the definition of D as needed: We want D to be a measure of the subtlety of the correlations we need to measure to distinguish a linear superposition from a mixture. A variety of definitions will fulfil this role; for the purpose of the present paper (though quite possibly not more generally) the following seems to be adequate … We continue as follows, with a view toward eliminating problematic references to separation. Let a system S be simulated on a Hilbert space \mathcal{H} by unraveling an ensemble of Lindbladian dynamical trajectories, and let \rho_{\mathcal{H}} be the density matrix of the trajectory ensemble thus simulated. Pull back the Lindbladian equations of motion and dynamical forms onto a rank-r tensor product manifold \mathcal{K} and let \rho_{\mathcal{K}} be the density matrix of the trajectory ensemble thus simulated.
By analogy to the Flammia/Harrow measure \Delta, define a rank-dependent Kalai-style FT separator measure \Delta'(r) to be a minimum over the choice of tensor bases of the trace-separation \Delta'(r) = \underset{\mathrm{bases\ of}\ \mathcal{K}}{\min}\ \Vert\rho_{\mathcal{H}}-\rho_{\mathcal{K}(r)}\Vert_{\mathrm{tr}} Then a Leggett-style rank-based variant of Kalai Conjecture C is Kalai-type Conjecture C’ For all physically realizable n-qubit trajectory ensembles, and for any fixed trace fidelity \epsilon, there is a polynomial P(n) such that \Delta'(P(n)) \lt \epsilon This conjecture possesses the generic virtue of most tensor-rank conjectures: computational experiments are natural and (relatively) easy. It also has the generic deficiency of tensor-rank conjectures: it is not obvious (to me) how the conjecture might be rigorously proved.

• John Sidles permalink May 19, 2012 2:24 pm Close … let’s try again … “Then a Leggett-style rank-based variant of Kalai Conjecture C is:” Kalai-type Conjecture C’ For all physically realizable n-qubit trajectories, and for any fixed trace fidelity \epsilon, there is a polynomial P(n) such that \Delta'(P(n)) \le \epsilon. Apologies for the \text{\LaTeX} glitches! :)

7. John Sidles permalink May 13, 2012 2:30 pm Aram and Steve refer in several places to “our note” … it would be helpful if a link were provided to this (otherwise mysterious) note. Or have I just overlooked a link?

• May 13, 2012 2:33 pm “Note” and “paper” are synonymous—the post went through a long edit cycle, and their April 16 ArXiv upload came in the middle of that. The link is at the top, and actually until three days ago it was at a less-stable “arxaliv” link.

• John Sidles permalink May 13, 2012 9:48 pm Thank you, Ken, for clarifying that! The Flammia/Harrow note “Counterexamples to Kalai’s Conjecture C” looks exceedingly interesting & employs several novel constructions … plausibly it will take comparably long to digest as it took to conceive! :)

8. John Sidles permalink May 14, 2012 7:16 am As I slowly digest Steve and Aram’s (really excellent and enjoyable!) arXiv note “Counterexamples to Kalai’s Conjecture C” (arXiv:1204.3404v1), one concern that arises is associated to the restriction “states ρ which have been efficiently prepared.” In designing an apparatus for efficient state preparation, it is natural to begin by generalizing the apparatus shown in Figure 1 (page 6) of Pironio et al’s much-cited “Random Numbers Certified by Bell’s Theorem” (arXiv:0911.3427v3). The natural generalization is conceptually simple: specify more ion cells that generate more outgoing photons, such that state preparation is heralded by higher-order coincidence detection, as observed through unitary-transform interferometers having larger numbers of input/output channels. Visually speaking, just add more rows to Figure 1! AFAICT, in the large-n qubit limit this natural generalization is robust with respect to validation (that is, the state heralding is reliable when we see it) but it is exponentially inefficient (the mean waiting time for state heralding is exponentially long in n). We might hope that this efficiency obstruction is purely technical, to be overcome (e.g.) with greater detection efficiency and lower-loss optical coupling between ions and detectors. But this limit is of course the limit of strong renormalization, and it is not obvious (to me) that the qubit physics remains intact following strong renormalization. These are hard questions.
Over on Shtetl Optimized, where these same issues are being discussed, I had occasion to quote the following passage: “Non-physicists often have the mistaken idea that quantum mechanics is hard. Unfortunately, many physicists have done nothing to correct that idea. But in newer textbooks, courses, and survey articles, the truth is starting to come out: if you wish to understand the central ‘paradoxes’ of quantum mechanics, together with almost the entire body of research on quantum information and computing, then you do not need to know anything about wave-particle duality, ultraviolet catastrophes, Planck’s constant, atomic spectra, boson-fermion statistics, or even Schrödinger’s equation.” (from arXiv:quant-ph/0412143v2). Among practicing researchers, this comforting belief — which has the great merit of being immensely inspiring to beginning students — was perhaps more widely held in the 20th century than at present … because the immensely long, immensely difficult struggle to build working quantum computers has slowly and patiently been teaching us humility. That the Kalai/Flammia/Harrow Conjecture C includes the phrase “efficiently prepared” (as contrasted with “efficiently described” for example) is evidence that these lessons learned are being assimilated and acted upon. Surely there is a great deal more to be said regarding these issues and obstructions, and we can all hope that one outcome of this debate will be a jointly-written note from Aram and Steve and Gil that surveys and summarizes (for 21st century students especially) the wonderfully interesting challenges and opportunities that are associated to this fine debate.

• aramharrow permalink May 14, 2012 7:42 am Hi John, those are some good points, which I won’t fully address. But I do agree that experiments that wait for multiple coincidences are not scalable, and wouldn’t work for this kind of thought experiment. On the other hand, something like what Boris Blinov’s group is doing (using entangled photons to entangle distant ions) would, I believe, address this problem. Obviously doing such an experiment once isn’t easy, and doing it N times in parallel is only harder, but it’s almost certainly harder only by a linear factor.

• John Sidles permalink May 14, 2012 8:41 am Aram, a reference would be very helpful. I had a professor who was fond of quoting Julian Schwinger to the effect that certain facts were “well known to those who knew them well.” In a similar vein, the arXiv note refers to “states whose physical plausibility is relatively uncontroversial” … and so it is natural and legitimate to wonder whether this opinion is shared by folks whose job it is to prepare these states.

• aramharrow permalink May 14, 2012 9:05 am Some of this work is planned for the future, but this paper describes those future plans. I think it’s uncontroversial that the states are physically plausible, and that any fundamental obstacle to their creation would be extraordinarily surprising, like discovering new energy levels for the hydrogen atom. But that is consistent with the fact that doing the experiment once is going to be very hard, and doing it N times will be something like N times as hard.

• John Sidles permalink May 14, 2012 9:55 am Aram, I will look carefully at the link you provided.
As we both appreciate, large-n entanglement obstructions typically are associated with the adverse scaling (1-\epsilon)^n \simeq e^{-\epsilon n} where \epsilon is some (finite) single-qudit single-operation error probability, and the proposed remediations of this adverse scaling typically are equivalent to some variant of quantum error correction … even in experiments whose intrinsic dynamics seemingly is non-computational. If there is any way to evade this generic mechanism, then I am eager to grasp it!

• aramharrow permalink May 15, 2012 10:45 pm I guess one thing I should add is that our counterexamples construct states with high entanglement (according to Gil’s measure) *without* getting into the challenging parts of scalable FTQC. So our point is not a very deep statement, it’s simply that conjecture C is unrelated to the question of whether FTQC can work. As for your point about epsilon vs n, note that for photons, \epsilon goes like e^{-\hbar\omega / k_B T}, which in one sense is constant, but in another sense exponentially small, and in practice can really be very small.

• aramharrow permalink May 16, 2012 10:28 pm One more thing along these lines. John S. points out that the tensor rank is low for the W state, meaning that such states are relatively uninteresting from the perspective of quantum computing. Based on this, you could view our counter-example as saying that Gil’s entanglement measure counts too many things as entangled, including things that are so lightly entangled as to not provide computational advantage. Thus, it does not provide the quantitative Sure/Shor separator that he is looking for.

9. Serge permalink May 14, 2012 3:18 pm Let me explain the analogy a bit further. Heisenberg’s uncertainty principle is due to the fact that, in order to locate a particle, you must shed light on it. Unfortunately, light is made of photons and photons are also particles. Similarly, in order to settle that a program is correct you have to write a proof. Unfortunately, proofs are also programs and this results in the following fact: “The more you know about the correctness of a program, the less you become able to know about its complexity class, and vice versa.” This is, IMHO, the reason why all efficient “solutions” to SAT are not known to solve every instance. They only have an acceptable probability of correctness – they’re called heuristic algorithms. Conversely, the algorithms used in artificial intelligence are often proven mathematically correct… but very little is ever said about their efficiency.

• Serge Ganachaud permalink May 14, 2012 7:05 pm I wouldn’t insist, but my preceding comment is a step towards P=NP being undecidable. :)

• Serge permalink May 26, 2012 1:39 pm In that regard, NP-completeness could be viewed as computer science’s counterpart of the quantum level.

• Serge permalink May 26, 2012 8:20 pm … and the analogy goes further, as the macroscopic level is made of the quantum level just like NP problems are polynomially reducible to NP-complete problems. I really think that defining suitable distances or topologies on the sets of problems, of proofs and of programs would suffice to prove that P=NP can’t be proved.

10. May 15, 2012 1:26 am Aram and Steve’s state W and related states

The parameter K(ρ). Here is a reminder of what K(ρ) is. Given a subset B of m qubits, consider the convex hull F[B] of all states that, for some k, factor into a tensor product of a state on some k of the qubits and a state on the other m-k qubits.
When we start with a state ψ on B we consider D(ψ, F[B]), the trace distance between ψ and F[B]. When we have a state ρ on n qubits we define K(ρ) as the sum, over all subsets B of qubits, of D(ρ[B], F[B]). Here ρ[B] is the restriction of ρ to the Hilbert space describing the qubits in B.

The states W. Next let me recall the states we are talking about. We consider the state W_n = 1/\sqrt n |00\dots 01\rangle + 1/\sqrt n |00\dots 10\rangle + \dots + 1/\sqrt n |10\dots 00\rangle. Let us also consider the more general state W_{n,k}, which is the superposition of all vectors |\epsilon_1\epsilon_2\dots \epsilon_n\rangle where the \epsilon_i are 0 or 1 and precisely k of them are 1. (So W_n=W_{n,1}.)

Dicke states. In my paper I considered the state W_{2n,n} as a potential counterexample to Conjecture C. Again let me remind you that Conjecture C asserts that for realistic quantum states ρ, K(ρ) attains a small value (polynomial in n). I thought about W_{2n,n} as a simulation of 2n bosons each having a ground state |\,0\rangle and an excited state |\,1\rangle, such that each basis state has occupation number precisely n. While K(W_{2n,n}) is exponentially large in n, a rather similar pure state, the tensor product of n copies of (1/\sqrt 2) (|0\rangle + |1\rangle), is not entangled and for it K is n. So it is quite important to understand well what state is experimentally created.

What Conjecture 1 says: Already Conjecture 1 is relevant to Aram and Steve’s W_n (and the more general W_{n,k}). The conjecture predicts that the noisy W_n states are mixtures of different W_{n,k}, where k is concentrated around 1. It can be, say, the mixture, denoted by W_n[t], of W_{n,1} with probability 1-t and W_{n,0} with probability t. (Perhaps with additional ordinary independent noise on top.) So we can ask two questions:

1) Are the noisy W_n states created in the laboratory in agreement with Conjecture 1? If we realize W_{n,k} by k photons, the question is whether the number of photons itself is stable. Joe, when you refer to the state W_n that was constructed with reasonable fidelities in the paper you have cited, what are the mixed states which are actually being created?

2) The second question is about a mathematical computation that extends what Aram and Steve did: What is the value of K(W_n[t])? Namely, if we have a noisy W_n of the type I described above, what will be its value of K? Is it still exponential in n?

Leggett’s disconnectivity parameter. If somebody is willing to write down Leggett’s definition of his disconnectivity parameter and explain it, this will make it easier to discuss. The definition is short but I don’t understand it that well.

• May 15, 2012 3:19 am Hi Gil, The paper I referred to was a survey paper, not any one experiment. However, in quantum-memory-type experiments they aren’t actually explicitly trying to generate w-states. Their ultimate goal is to basically absorb a photon for some period of time and then emit it. The physics of the situation is such that the state of the vapour is pretty close to a w-state, but that isn’t really what they care about (although it is maybe what we care about), it is simply the mechanism for their trick to work. The fidelity I was talking about was of the emitted photon. This is only indirect evidence of the w-state, and measuring the state of the vapor itself seems likely to be beyond our current technological capabilities, but I believe it is reasonable evidence that we can create w-like states on a large scale.
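A small computational aside: since several comments turn on what W_{n,k} and its restrictions ρ[B] actually look like, here is a brief numpy sketch (ours, purely illustrative; the helper names are invented). It builds W_{n,k} and the reduced states entering the definition of K(ρ). Computing the trace distance from ρ[B] to the hull F[B] is a hard optimization, so the sketch stops at the reduced states and their entanglement entropies:

```python
import numpy as np
from itertools import combinations

def w_state(n, k=1):
    """Uniform superposition of all n-bit strings of Hamming weight k."""
    psi = np.zeros(2 ** n)
    for ones in combinations(range(n), k):
        psi[sum(1 << (n - 1 - i) for i in ones)] = 1.0
    return psi / np.linalg.norm(psi)

def reduced_state(psi, n, keep):
    """rho[B]: trace out every qubit not in `keep`."""
    psi = psi.reshape([2] * n)
    drop = [i for i in range(n) if i not in keep]
    m = psi.transpose(list(keep) + drop).reshape(2 ** len(keep), -1)
    return m @ m.conj().T

def entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

n = 6
psi = w_state(n, 1)  # W_6 = W_{6,1}
for m_qubits in (1, 2, 3):
    rho = reduced_state(psi, n, list(range(m_qubits)))
    print(m_qubits, entropy(rho))
```

One can check numerically that the reduced state of W_n on m qubits is the mixture (1-m/n)|0\dots 0\rangle\langle 0\dots 0| + (m/n)|W_m\rangle\langle W_m|, which is one concrete starting point for Gil’s question about what the mixed-state approximations W_n[t] look like.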
• May 15, 2012 5:16 pm Hi Aram, Joe, all, The motivation behind the parameter K(ρ) indeed came from error-correcting codes that correct c errors. There, small subsets of qubits behave like product states, but for larger sets (of size c+1 if I remember right) you will get substantial contribution to the terms defining K. As Aram and Steve showed, much more mundane states like W_n have exponential value for the parameter K. I certainly agree that W does not look like it expresses exotically strong entanglement. And I tend to agree that W-like states can be created. What we can think about is whether the expected W-like states, like those I described above, also have an exponential value for K. Also I am not sure if we exhausted listing all possible ways that the state W can be implemented.

• May 15, 2012 11:31 pm Hi Gil, I was just trying to answer your question “is the W state realistic?”. I think we have pretty strong evidence that it is even at extremely large scales. Certainly we have not considered even a small fraction of the ways it can arise with relative ease, but I would have thought even the few examples considered thus far should be convincing enough on their own.

• Gil Kalai permalink May 16, 2012 7:53 am Right, Joe, thanks. I am quite convinced about W being realistic. My follow-up question was what realistic W-like states look like. In particular, how do they look as mixed-state approximations of W? (This may differ for different examples, which is why I thought it would be useful to consider more examples.) This follow-up question is relevant first to being sure about the parameter K, and also to Conjecture 1, which predicts how mixed-state approximations of W look.

• aramharrow permalink May 16, 2012 8:04 am I think that what makes the photon-based W states realistic is that the per-qubit noise decreases exponentially with photon frequency, or more precisely, photon frequency divided by temperature. (And this noise should be on the order of 1 / #qubits.) So large, nearly-pure, W states should be feasible. If I’ve calculated right, then this ratio should be on the order of 100, when the photons are visible light, and the temperature is room temperature. This means that thermal noise per qubit would be e^{-100}. I guess this means that other sources of noise will be more relevant. Although photon loss isn’t much of a risk. I’m not sure what the dominant source of noise would be, really.

• May 16, 2012 12:30 pm Aram: Once you bring detectors into the picture, you would need to worry about dark counts, which would become more and more important as the probability of there being a real photon in that mode decreases. This doesn’t alter the underlying state, but would decrease the fidelity of any reconstructed density matrix.

• aramharrow permalink May 16, 2012 12:39 pm Sure, but for the purpose of the conjecture, we don’t need detectors; we just need to believe that at some point, the state existed.

• ramirez permalink May 15, 2012 10:17 pm Leggett´s Dis-connectivity parameter (off line). begins when we are trying to create some programing strings on a programatic lang like de C plus , or C plus plus. it does not define the postions on a Matrix. we are considering that the zeros an ones are traveling at the speed of light. Eisenberg’s uncertainty says that in order to read a bit you have to write it first. there is nothing faster than the speed of light traveling in the empty space, once you go off boundaries of the domain of the Matrix P=1 the recording bits are unreachable.
according to Bayes any statistical notacion or bit recorded on spaces (sideral) out of the reach of a logic gate disrupts the real time connectivity, lets say that you send an spatial probe out of the system and you need to comunicate in real time with it, you send the information of the programatic strings wherever they are, but the distance the probe is moving is a couple o billion light yeas away simply the information wouldn’t be there on time and several years later you receive some static final. what does it happen? you loose a grip of real time. Here Eistein talks about the light bending coefficient when the radius of the matrix integration domain goes off the limits of connectivity of communication of a logic gate. The generated inertial force at the end of the string will catch a gravitational force as the spin on the radius goes on. this is going to create an inverse value that is considered as antimatter modifying its structure . Einstein Observes the star lights of the super nova’s detonation(gamma rays) and sees that the exploding stars generate a light pulse that travels faster than the Limit of the light constant measured in an empty space. at this condition Eistein calls it Quanta. and states that Star Light travels in Quanta. so he does not takes C plus plus. as a solution for the dis-connectivity problem he divides P=1 between the sideral time and real time, obtaining B as a radical of the space-time P=1 equals P=-1 or inverse logic gate. as a radical of 1 he obtains K=0 because the bit traves in linear regression. this considerations came with the relativity theory where the speed of the light is relative to the conductor where it does travel. and to reconnect the logic gates in time space( two computers in sideral distances) needs to accelerate to C square times. through a gravitational compression that can liberate a propulsion force that breaks the speed of the Light. (Quanta is considered a worm hole), to create stacks for programing strings According to Bayes theorem requires to consider the variance and the deviation standard. here de Hue or deepness are a primordial problem due to the current flow in a quantum solid state receptor. the compression state for the Large hadron Collider that canot reach speeds faster than the Light. Tera-Electron-Volt cannot generate this antimatter needed for this kind of propulsion.

11. John Sidles permalink May 16, 2012 10:29 am Please let me say that I too regard W-states as being realistic (that is, experimentally feasible). For me, the salient feature of W-states is not their exponential-in-n K-value, but rather their polynomial-in-n tensor rank. Respecting tensor rank as a natural measure of quantum state feasibility, two recent survey articles (that IMHO are very interesting and well-written) are Cirac, Verstraete, and Murg “Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems” (arXiv:0907.2796v1) and also Cirac and Verstraete “Renormalization and tensor product states in spin chains and lattices” (arXiv:0910.1130v1). In the former we read:

As it turns out, all physical states live on a tiny submanifold of Hilbert space. This opens up a very interesting perspective in the context of the description of quantum many-body systems, as there might exist an efficient parametrization of such a submanifold that would provide the natural language for describing those systems.
and in the latter we read:

The fact that [low rank] product states in some occasions may capture the physics of a many-body problem may look very surprising at first sight: if we choose a random state in the Hilbert space (say, according to the Haar measure) the overlap with a product state will be exponentially small with N. This apparent contradiction is resolved by the fact that the states that appear in Nature are not random states, but they have very peculiar forms. This is so because of the following reason …

These considerations from Cirac, Verstraete, and Murg suggest that perhaps Gil Kalai’s K-measure might usefully be evolved into a rank-sensitive R-measure … the granular details of this modification are what I am presently thinking about. Please let me thank everyone for helping to sustain this wonderful dialog! :)

• John Sidles permalink May 16, 2012 2:01 pm As an addendum, it turns out that the above-referenced Cirac / Verstraete / Murg preprint arXiv:0907.2796v1 is substantially the same work as reference [3] of the Flammia/Harrow note “Counterexamples to Kalai’s Conjecture C” (arXiv:1204.3404v1). It is striking that one-and-the-same article serves simultaneously to: (1) inspire counterexamples to Gil Kalai’s specific conjectures, and (2) inspire confidence too that the overall thrust of these conjectures is plausibly correct. As usual, Feynman provides an aphorism that is à propos: “A great deal more is known than has been proved” In the present instance, this would correspond to “We (believe that we?) know that Gil’s thesis is correct (but in what form?) however we have not (as yet?) proved it.”

• John Sidles permalink May 17, 2012 4:41 am Further respecting tensor rank, I tracked down the provenance of the Feynman quote (or misquote, as it turns out). From Feynman’s Nobel Lecture “The Development of the Space-Time View of Quantum Electrodynamics”: Today all physicists know from studying Einstein and Bohr, that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. … Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc. In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven. With regard to product states, we have works like J. M. Landsberg’s “Geometry and the complexity of matrix multiplication” (Bull. AMS, 2008) to remind us of how very much is known about these state-manifolds, and equally strikingly, how very much is not known. And so it seems (to me) that Gil’s conjectures are very much in accord with this honorable tradition of mathematics and physics, of seeking to state concretely and prove rigorously an understanding for which there exists an impressive-but-imperfect body of evidence.

12. May 18, 2012 4:40 am Quantum FT separators, the parameter K(ρ) and the state W.

1) Conjecture C is meant to draw a line (of asymptotic nature) between states whose construction does not require quantum fault tolerance and states which require quantum fault tolerance.
We will call such a proposed separation a quantum FT-separator.

2) The border line was supposed to leave out quantum error correction codes that correct a number of errors that grows to infinity with the number of qubits.

3) The parameter K(ρ) was based on this idea since for an error correction code correcting c errors on n qubits its value is roughly n^c. However, as Aram and Steve showed, the much more mundane state W has an exponentially large value of K. This means that K does not capture what it was supposed to capture.

4) As mundane as W is, it is interesting to examine how it can be implemented and what are the mixed state approximations that we can expect for W. (This is relevant also for my Conjecture 1.) In order to be sure that K is not appropriate to draw any reasonable line of the intended sort it will be useful to compute K for such W-like states, e.g. what I called W_n[t].

Aram and Steve’s qudit example and Leggett’s disconnectivity parameter

5) Aram and Steve’s qudit construction is based on the idea that for a state which can be created without quantum fault tolerance the parameter K(ρ) (extended to qudits) should remain low in every way we group qubits together into bounded-size blocks. They exhibit a state which certainly can be prepared on 3n qubits so that when the qubits are grouped into sets of 3, the parameter K(ρ) for the qudits becomes exponential.

6) It is certainly a nice property for a parameter proposed as a quantum FT separator to remain low under such grouping. I am not sure that it is conceptually correct to make this a requirement and I will discuss this matter in a separate comment.

7) It is noted in this remark that a different qudit example in a similar spirit to Aram and Steve’s example may be used to exhibit a very mundane state with high value of Leggett’s disconnectivity parameter (in the way Aram described this parameter in this comment). Let’s discuss it!

Bounded depth circuits as FT separators

8) The principal proposed FT separator described in the post is based on bounded depth computation. With the exception of a comment by Aram, we did not discuss this proposal so far. One counterargument raised by Aram is based on nature’s ability to create heavy atoms. This is a terrific idea. It can be interesting to discuss if the process leading to heavy atoms requires some sort of quantum FT, requires “long” (high-depth) evolutions, or perhaps even exhibits superior computational power. (I am skeptical regarding these possibilities.)

9) It will be interesting to describe experimental processes that may exhibit or require long quantum evolutions.

10) One of the nice things about bounded-depth classical computation is that it leads to functions with very restricted properties. Bounded total influence (Hastad-Boppana); exponentially decaying Fourier coefficients (Linial-Mansour-Nisan) etc. Are there analogous results in the quantum case?

11) The bounded depth parameter satisfies the grouping requirement because we can regard qubit operations as qudit operations and replace each computer cycle on the qubit level by several qudit cycles.

• John Sidles permalink May 18, 2012 7:32 am Gil, thank you for this very fine summary.
For me, the most natural candidate for an FT separator is the tensor rank, that is, “n-qubit states require FT iff their rank is exponential-in-n.” Perhaps the main objection to this separation is not that it is implausible, but rather that (with our present mathematical toolkit) it is so very difficult to prove rigorous theorems relating to tensor rank. Christopher Hillar and Lek-Heng Lim’s preprint “Most tensor problems are NP-hard” (arXiv:0911.1393v3 [cs.CC]) provides an engaging discussion of these issues, with the witty conclusion: “Bernd Sturmfels once made the remark to us that ‘All interesting problems are NP-hard.’ In light of this, we would like to view our article as evidence that most tensor problems are interesting.” From this perspective, perhaps progress in establishing (rigorous) Sure/Shor separations is destined to parallel progress in establishing (rigorous) complexity class separations. To put it another way, we would be similarly astounded at any of the following four announcements: • a rigorous resolution of \text{P}\overset{?}{=}\text{NP}, or • a rigorous proof of quantum computing infeasibility, or • a practical demonstration of a large-n quantum computation, or • experimental evidence that Nature’s state-manifold is non-Hilbert. And the mathematical reasons for our amazement would be similar in all four cases.

• John Sidles permalink May 18, 2012 7:53 am Hmmm … the concluding four-item list was truncated by a LaTeX error. The intended list was: • a rigorous rank-based FT separation, or • experimental demonstration of a large-n quantum computer, or

13. May 18, 2012 8:11 am Dear John, Let me just first make sure that we all understand the term tensor-rank in the same way. It is the minimum number of product pure states required to represent your pure state as a linear combination. (Am I right?) Since we talk about approximation we perhaps better replace “represent” by “approximate”. Anyway, tensor rank seems a natural thing to think about. (And I don’t remember if I did not or just forgot.) I would worry that Aram and Steve’s qudit example may have large tensor rank in the qudit tensor structure. I prefer talking about FT-separators and not about Sure/Shor separators mainly to avoid computational complexity issues. The issue of Sure/Shor and FT separators is much simpler and clearer if we talk about noisy states and not about pure states. The simplest FT separator is a (protected) essentially noiseless qubit. It is an interesting problem (first to put on formal grounds and then to solve) whether a single protected (essentially) noiseless qubit is a Sure/Shor separator.

14. John Sidles permalink May 18, 2012 9:02 am Yes, let’s affirm that “tensor rank” shall accord with wikipedia’s definition of tensor rank (which is the same as your post’s definition) and that “approximate” is a more precise description of what we want than “represent”.

• John Sidles permalink May 18, 2012 11:39 am Hmmm … some subtleties associated to tensor rank, that are not mentioned in the Wikipedia article — in particular the distinction between the rank and the border rank of a tensor — are discussed in J. M. Landsberg’s “Geometry and the complexity of matrix multiplication” (Bull AMS, 2008). AFAICT the rank/border-rank distinction is not materially significant to rank-based FT separations. But who knows? I have myself encountered numerical instabilities that are associated to this distinction.
The main point is that Landsberg’s definition of tensor rank, given as Definition 2.01 in his article, provides a rigorous entrée into a mathematical literature that is vast, broad, and deep. To thoroughly grasp FT separations, it seems plausible (to me) that we will have to swim in Landsberg’s mathematical ocean … or at least wade in it. :)

15. ramirez permalink May 18, 2012 8:25 pm Tensor is the term used as “dynamic energy tension” inside the electron structure. there are two considerations about it, one is the electromagnetic field that considers the electron spin due to its structure, its divided in cycles, sine and cosine and its divided into bits and Bytes.4, 8, 16, 32,64,etc. logaritm progression, and the variant and covariant tensors that do not poses an electromagnetic field, but a gravitational tension. this inertial force creates a linear regression on the atom, not necessarily on the electron, this tensor can be found on the small particles as the Gluon, muon, etc. The term Quantic does not exist before Einstein comes out with the relativity theory of time and space, and writes a chapter making evident this difference between electromagnetism and gravity. The Q-bit o Quantum bit is the subatomic charge that can be recorded in a tight space as Hilbert`s calculations, however since the gravitacional compression came to the electronics field we can store more information in the same space, one picture just to fit in a 2 mega bites chip, now the same chip can be used for 2, 4, 8 giga bites, and so on. this compression rate allows to store more information, but to use the term Quantum bit, is needed to obtain the radical compression of 1=C constant of the speed of light so the exponential should be a quadratic equation. in this case we would be recording with radicals smaller than than the nano. P=1 as a matrix value has to be a cubic exponential. This Quantum bit o q.bit would be an antiquark or antiproton inside the programatic stack, the hertz wave runs at 2.4 giga hertz but this speed would not be fast enough to bridge a logic gate for distances where C the constant of the speed of the light is squared. we will get the Femto, Yocto atomic weight. usually this is the nuclear radiation value, this anti-q.bit its aut of the boundaries real numbers( Manifolds dilemma). the quantum bit is in what is called linear regression on time and space. The polinomial equations on integrals X,Y,Z. on a Matrix P=1 square. C=square create Linear regression strings on programming quantum bits that are defined By Plank, Einstein, with the term” Momenta” and Niels Bohr has to admit one antiproton in his atomic model. The standard deviation and variance creates the indexes that are considered “Jakobs Ladder equations” deviations on time and space.

16. May 20, 2012 2:46 am Another parameter which can be relevant for distinguishing “simple” and “complicated” quantum states is based on the expansion in multi-Pauli operators. Suppose that your quantum computer is supposed to perform the unitary operator U, and let U=\sum \alpha_S S be the multi-Pauli expansion of U, namely the expansion in terms of tensor products of Pauli operators. Here S is a word of length n in the 4-letter alphabet I, X, Y, Z, and \alpha_S is a complex number. For a word S we denote by |S| the number of consonants (letters other than I) in S. Define the Pauli influence of U by: I(U) = \sum_S \alpha_S^2 |S|. We can consider I(U) as a complexity parameter.
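To see the parameter in action, here is a brute-force sketch (ours, illustrative only). It reads \alpha_S^2 as |\alpha_S|^2, since the coefficients are complex; a comment below suggests the same reading. Note that with \alpha_S = \mathrm{Tr}(S^\dagger U)/2^n, unitarity gives \sum_S |\alpha_S|^2 = 1:

```python
import numpy as np
from itertools import product

PAULI = {
    'I': np.eye(2, dtype=complex),
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_influence(U, n):
    """I(U) = sum_S |alpha_S|^2 |S|, where U = sum_S alpha_S S,
    alpha_S = Tr(S^dagger U) / 2^n, and |S| counts the non-I letters."""
    total = 0.0
    for word in product('IXYZ', repeat=n):
        S = np.array([[1.0 + 0j]])
        for letter in word:
            S = np.kron(S, PAULI[letter])
        alpha = np.trace(S.conj().T @ U) / 2 ** n
        total += abs(alpha) ** 2 * sum(letter != 'I' for letter in word)
    return total

X = PAULI['X']
print(pauli_influence(X, 1))                                   # 1.0
print(pauli_influence((PAULI['I'] + 1j * X) / np.sqrt(2), 1))  # 0.5
```

For n = 1 this gives I(X) = 1 and I((I+iX)/\sqrt{2}) = 1/2, matching the comparison raised in a comment below.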
The advantage of using Pauli expansion is that it is much simpler compared to parameters like my K and tensor rank. (In some other cases it turned out that multi-Pauli expansion is the best way to express my conjectures mathematically.)

• John Sidles permalink May 21, 2012 11:21 am Gil, suppose that for an n-spin system a family of unitary transforms U(n) is given such that I(U(n)) is \mathcal{O}(P(n)) for some polynomial P(n). Does it follow too that I(\log U(n)) also is \mathcal{O}(P'(n)) for some (different) polynomial P'(n)? Here the physical motivation is that \log U is a Hamiltonian that generates U.

• May 21, 2012 2:53 pm Such complexity I(H) for “true quantum” gate H=(I+iX)/\sqrt{2} would be less than for “pseudo-classical (swap)” gate X. Is it OK? PS. Naive technical remark: is Y considered a consonant?

• May 22, 2012 9:33 am PS2: Maybe in the expression for I(U) one should use |\alpha_S|^2?

17. ramirez permalink May 20, 2012 9:39 pm Pauli’s structure of analysis thrives on the electrons in the atomic level. The expansion factor of the atom when is stable and when is in expansion(explosion). Alfred Nobel obtains its fortune discovering the dinamite, the expansion factor when the atoms are at rest and in Pauli’s exclusion principle of the subatomic structure becomes a quantum step towards the principle of energy tensor. however the exponential reaction during its combustion (released energy) generates a wave length proportional to its compression state(solid quantum state). James clerk Maxwell separates the electricity from magnetism, what’s the deal, the electromagnetic field around an iron pigtal intensifies its force (K is the magnetic constant) this kind of compression does not have a quantum space (defined as an empty space in motion according to the relativity theory), the electron has a curly tensor as defined by Richard Feyman in Caltech in his book “The beat of a different hart “, during its expansion state releases a spin force(rip a curl), Einstein’s conjectures (Opinions) conduced him to split the atom and avoid the Fermion problem, because the tensor structure of the energy traveling in an empty space in motion had to carry a linear energy release effect and avoid the heating of the atom before it does releases the total amount of its energy,at the same time avoid the boundaries problems (manifolds) the total release of the energy in a chain reaction (obstructions on the combustion).over heating like in a nuclear reactor. The expansion factor is proportional at the inverse tensor (covariant), when this tension is on K=0 the inertial force forces the atom to travel backwards on time( this energy moving in an empty space in motion) creates a nonmagnetic tensor like antimatter. means that the nitroglycerin is condensed expansive fuel, as actinides, when they are in the presence of a detonator its energy release has a wave propulsion similar to the communications gate in your celular, reaches speeds almost like the speed of light, this is what is called a quantum bit o Q-bit. Pauli subatomic structure does not have quantic form.
the nuclear energy release in a chain reaction inserts itself in the nucleus of the other atoms reproducing the same effect as splitting the atom due to the reason that is traveling faster than the speed of light (this is when is considered a quantic operator), Quantum computers use the same principle in the Hubble sideral observations, the Microchip that measures its operations in Gigahertz (speed), the Q-bits on memory stacks are compressed in giga bites, the programming string codes only create an assigned value of an operator, but if those operators are not defined on the memories arrays you can have a dis-functional programatic response. Quantum Physics are being used to simplifie multiple and complex operations.

18. May 21, 2012 1:31 am “…we conclude because A resembles B in one or more properties, that it does so in a certain other property.” John Stuart Mill, “System of Logic Ratiocinative and Inductive” [1843], Chapter XX on analogies. Learning from analogies is a difficult matter, and often discussing analogies is not productive as it moves the discussion away from the main conceptual and technical issues. But it can also be interesting, and being as far as we are in this debate, while concentrating on the rather technical matters around Conjecture C, we can mention a few analogies. (Studying analogies was item 21 on my list of issues that were raised in our discussion.)

1) Digital computers Scott Aaronson: When people ask me why we don’t yet have quantum computers, my first response is to imagine someone asking Charles Babbage in the 1820s: “so, when are we going to get these scalable classical computers? by 1830? or maybe 1840?” In that case, we know that it took more than a century for the technology to catch up with the theory (and in particular, for the transistor to be invented). The main analogy of quantum computers is with digital computers, and of the quantum computer endeavor is with the digital computer endeavor. This is, of course, an excellent analogy. It may lead to some hidden assumptions that we need to work out.

2) Perpetual motion machines The earliest mention of this analogy (known to me) is in 2001 by Peter Shor (here): Nobody has yet found a fundamental physical principle that proves quantum computers can’t work (as the second law of thermodynamics proves that perpetual motion machines can’t work), and it’s not because smart people haven’t been looking for one. I was surprised that this provocative analogy is of some real relevance to some arguments raised in the debate. See e.g. this comment, and this one.

3) Heavier-than-air flight Chris Moore: “Syntactically, your conjectures seem to be a bit like this: ‘We know that the laws of hydrodynamics could, in principle, allow for heavier-than-air flight. However, turbulence is very complicated, unpredictable, and hard to control. Since heavier-than-air flight is highly implausible, we conjecture that in any realistic system, correlated turbulence conspires to reduce the lift of an airplane so that it cannot fly for long distances.’ Forgive me for poking fun, but doesn’t that conjecture have a similar flavor?” This is also an interesting analogy. The obvious thing to be said is that perpetual motion machines and heavier-than-air flight represent scientific debates of the past that were already settled.

4) Mission to Mars Scott: Believing quantum mechanics but not accepting the possibility of QC is somewhat like believing Newtonian physics but not accepting the possibility of humans traveling to Mars.
5) Permanents/determinants; 2-SAT/XOR-SAT
Aram: If you want to prove that 3-SAT requires exponential time, then you need an argument that somehow doesn't apply to 2-SAT or XOR-SAT. If you want to prove that the permanent requires super-polynomial circuits, you need an argument that doesn't apply to the determinant. And if you want to disprove fault-tolerant quantum computing, you need an argument that doesn't also refute fault-tolerant classical computing.
This is a very nice analogy which gives a very good motivation for and introduction to Aram's first point. I also related to it in this comment. Of course, unlike the P=NP problem, or the question about solving equations with radicals, the feasibility of universal QC is not a problem which can be decided by a mathematical proof.

6) Solving equations with radicals
When it comes to the content, I do not see much similarity between QC and solving polynomial equations. But there are two interesting points that this analogy does raise: 1) Can we work in parallel? Is it possible to divide (even unevenly) the effort and attention between two conflicting possibilities? It is quite possible that the answer is "no," because of a strong chilling effect of uncertainty. (See e.g. this comment.) 2) The failure of the centuries-long human endeavor of finding a formula for solving general degree-5 equations with radicals is not just "a flaw." It was not the case that the reason for this impossibility was a simple matter that mathematicians overlooked. The impossibility is implied by deep reasons and represents a direction that was not pursued. It required the development of a new theory over years, with considerable effort.

7) The unit-cost model
Leonid Levin (here): This development [RSA and other applications of one-way functions] was all the more remarkable as the very existence of one-way (i.e., easy to compute, infeasible to invert) functions remains unproven and subject to repeated assaults. The first came from Shamir himself, one of the inventors of the RSA system. He proved in [Inf. Process. Lett. 8(1) 1979] that factoring (on whose infeasibility RSA depends) can be done in a polynomial number of arithmetic operations. This result uses a so-called "unit-cost model," which charges one unit for each arithmetic operation, however long the operands. Squaring a number doubles its length; repeated squaring brings it quickly to cosmological sizes. Embedding a huge array of ordinary numbers into such a long one allows one arithmetic step to do much work, e.g., to check exponentially many factor candidates. The closed-minded cryptographers, however, were not convinced, and this result brought a dismissal of the unit-cost model, not of RSA.
This is an interesting analogy.

8) Analog computers
This is an analogy that is often made. See, for example, these lecture notes by Boris Tsirelson, where Boris's conclusion was that the analogies between quantum computers and both digital and analog computers are inadequate, and that quantum computers should be regarded as new, uncharted territory. I find what Boris wrote convincing. (I never understood, though, what is wrong with analog computers.) In Boris's own words: A quantum computer is neither digital nor analog: it is an accurate continuous device. Thus I do not agree with R. Landauer, whose section 3 is entitled: Quantum parallelism: a return to the analog computer. We do not return, we enter an absolutely new world of accurately continuous devices. It has no classical counterparts.
9) Magic noise-cancelling earphones
Here is an analogy of my own: we see on the market various noise-cancelling devices that reduce the noise by up to 99% or so. Is it possible, in principle, to create computer-based noise-cancelling earphones that will cancel essentially 100% of the noise? More precisely, the earphones would reduce the average noise level over a period of time T to O(1/n) times the original amount, where n is the number of computer cycles in T.

• John Sidles permalink May 22, 2012 2:15 pm
Another analogy is that we are struggling with a mismatch between "technology push" and "requirements pull". At present the "requirements pull" is relatively weak: there isn't much market-place demand for fast factoring engines, and as for quantum dynamical simulations, during the past 20 years the Moore-exponent of improvements in classical simulation capability has substantially outstripped the Moore-exponent of improvements in quantum simulation capability, and there is no obvious end in sight. As for the technology push, here too we have only barely begun to integrate existing quantum algebraic-informatic tools with differential-dynamic tools. As Vladimir Arnold expressed it: "Our brain has two halves: one half is responsible for the multiplication of polynomials and languages, and the other half is responsible for orientation of figures in space and all the things important in real life. Mathematics is geometry when you have to use both halves." Conclusion: we stand in need of a version of Conjecture C that is designed to be simultaneously (1) concretely responsive to the "requirements pull" of the 21st century, and (2) creatively amenable to an Arnold-style "technology push."

• May 22, 2012 2:22 pm
Another very good analogy, in my view, is with Bose–Einstein condensation: an idea that was theoretically proposed in 1924–25 and was first realized experimentally in 1995, after attempts to do so from the mid-fifties. This is a great "role model" for the QC endeavor, and it is also related to various technical issues in our discussion. (Also, some of the heroes of the BE story are now part of the QC efforts.)

• March 20, 2014 2:42 am
Another interesting analogy, with alchemy and the goal of transmuting lead into gold, was raised e.g. by Scott Aaronson in this discussion over at Shtetl-Optimized. What is interesting here is that the principles given by atomic theory and modern chemistry for why lead cannot be turned into gold were of course of huge importance in science, and yet one could say that, with subsequent further understanding, it is actually possible "in principle" (but we no longer care) to turn lead into gold. (You can even say that understanding the principles for why it is impossible was crucial for understanding, later on, the principles for why it is possible.) See here for a related remark by Dick Lipton over at my blog, also referring to perpetual motion machines.

• March 20, 2014 6:51 am
History repeats itself. I am not even sure that these three problems – lead transmuting into gold, P=NP, large-scale quantum computing – are theoretically so distinct from each other…

19. May 29, 2012 3:05 pm
Looking at Conjecture C for codes, I cannot help but think about the separability of pure states, and since we are talking qudits now, it is worth noting that it is possible to place upper and lower bounds on the ball of separable states around the maximally mixed state.
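That last point can be illustrated numerically (a sketch only: the radius used below is the Gurvits–Barnum purity bound, tr(rho^2) <= 1/(D-1), quoted here as an assumption, and the check relies on the Horodecki result that for two qubits positivity of the partial transpose is equivalent to separability). The sketch samples random two-qubit states, mixes each toward the maximally mixed state until the purity bound holds, and verifies that the partial transpose is then positive:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # two qubits

def random_density_matrix(d):
    """Random mixed state from the Ginibre ensemble."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def partial_transpose(rho):
    """Partial transpose on the second qubit of a two-qubit state."""
    r = rho.reshape(2, 2, 2, 2)          # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def shrink_to_purity(rho, target):
    """Mix rho with I/D until tr(rho^2) <= target."""
    for lam in np.linspace(1.0, 0.0, 1001):
        sigma = lam * rho + (1 - lam) * np.eye(D) / D
        if np.trace(sigma @ sigma).real <= target:
            return sigma
    return np.eye(D) / D

for _ in range(1000):
    rho = shrink_to_purity(random_density_matrix(D), 1.0 / (D - 1))
    # Inside the Gurvits-Barnum ball the state should be separable,
    # hence PPT; for 2x2 systems PPT is equivalent to separability.
    assert np.linalg.eigvalsh(partial_transpose(rho)).min() > -1e-12
print("all 1000 shrunken states are PPT (hence separable for two qubits)")
```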
It is also worth reviewing the following. If we think of noisy quantum systems as non-separable, with system and environment entangled, then the question seems to be whether we can identify some separable pure subsystem of the noisy system of some size, with a measure c of the number of errors the subsystem can correct. My thinking is that we really can't answer this question without thinking dynamically, i.e., without thinking about the time dependence of the system. If we are thinking in terms of computing, we have to place a time envelope around the beginning and end times of the computation. So in this sense, one can think about there being a bubble in the mixed system that has sufficient life to complete some sort of operation. This makes one want to use the constant c in the context of a measure of temporal pure states that follow a decay function like exp(-ct), where the upper bound on c is limited largely by the remaining entropy growth in the larger noisy system. Quantum teleportation gets around no-cloning by destroying one copy and creating another, with a spacetime gap between the two copies. So it doesn't seem counterintuitive to suggest that we can introduce similar temporal restrictions on the code. So if we drop any notion of eternally pure states, and begin asking questions about the scalability of these more temporal pure states, I think the size of the separable pure states will be largely dictated by the size of the larger noisy system and where that system is in its evolution with respect to some observer.

20. May 31, 2012 12:57 pm
A piece of news related to the debate: here at the Hebrew University of Jerusalem the new Quantum Information Science Center had its kick-off workshop yesterday, May 30, and I gave a lecture on the debate with Aram. Here are the slides of the lecture. It covered my initial post and Aram's three posts but did not go into the rejoinders and the discussion. There were several interesting comments related to the discussion which I will try to share later. As quantum information is an interdisciplinary area in the true sense of the word, and also in the good sense of the word, an area that involves several cutting-edge scientific topics, it is only natural that HU's authorities enthusiastically endorsed the initiative to establish this new center. The entire workshop was very interesting. Dorit Aharonov gave a beautiful talk claiming that a new scientific paradigm of verification is needed to check quantum mechanics beyond BPP. This talk is quite relevant to a discussion we had here on the post "Is this a quantum computer". The other talks were also very interesting.

21. ramirez permalink June 4, 2012 11:44 am
I've seen this dialog going from polynomials to black holes, and it seems to me that you are looking for the "Gödel" references in modern physics. The dialog with "Aram"-aic on the signature of the glyph-encrypted black door that talks about god's paradise is very similar to the quantum time-space in which the "Torah" talks about creation, "one empty space where he created the material things". Einstein said the same thing, "an empty space", but he said "(in motion)". Gödel's polynomial on the quintic equation in a bi-dimensional plane, where you take one ordinal line X to an exponential 5: "from one to five there is a gap to reach that place (here there is motion)"; this was not solved in that time. This gap created the Anne "Frank"-incense room, which was an annex to the house where she was hiding.
This gap factor is the same principle as the quintic "mystery" of the Golden Shrine and the small temple behind it in Israel. This equation was discovered by the German officers upon its denunciation. The separation of the origin of X from its radical created a linear gap called hue, or deepness. So Einstein had to go to a tri-dimensional ordinal sketch and use the principle of "gravity as two forces that attract and repel each other", where the radical is a compression state (quantum state). Some get confused by this assumption, saying gravity is = 2, where in reality it is one at ground zero. So he decides to split the exponential function of one, and have 1/2 of the sine, 1/2 cosine = 1. On the quintic equation the observer sees the variable X moving from left to right or right to left, so he changes the position to the Z variable, and he sees that the light bends under the inertial force when it is moving from X zero to X5; what you are seeing is that the point of origin moved because the place you are standing on is moving also (Galileo's paradox). So Einstein made some calculations and he discovered those two forces that counteract, and he uses the equation on 8 times the radius or the speed of light, creating the "Octil" parameter (Octli). So this inertial force would create the gravity needed to create a quantum state of particles in motion in an empty space (Bose–Einstein principle). This gravity shield (Schwarzschild ring-of-gravity equation), influenced by the spinning of the particles on a distant polynomial, would create a gravity line from X zero to X-1 = radical. The first gravity force on "Z", when the integration of the variables on a displacement towards the point of 8 times the radius at the speed of light has surpassed the boundaries of the real numbers (manifolds): the numeric perception becomes unreal, and 1/2 can be considered different from the original values (values of perception, Jean Piaget). The quantum space is considered an extra dimension when the integration values (samples) are far beyond the distance of the speed of light. The uncertainty of finding the same spot in the same time-space, in another dimension, has brought the idea of quantum bits to make terahertz-fast microchips for recording quantum bits. However, one bit-second in the fire might seem like a year, or one billion years with Ruth might feel like a second. These are the principles of the Aramaic encryption language; later on, the Hittite language shows some modification of time-space, from the present to the future. Any space that you can see and perceive with your eyes has an arch function inside your mind. The phrase "Victory is for her who wins with her sight and ankles" was used during the Roman Empire as a symbol of power. Who is she? The Tevatron is trying to generate this gravitational force to accelerate the hydrogen in the fuel cell, and use water as fuel. The water as fuel has to have this pressure; the fuel-pressure sensor has to indicate the fission point where the hydrogen atom jumps from one dimension to another, generating friction and temperature within itself. It's called the Mikamoka antimatter bit. The Borgia Codex shows some codifications that talk about this empty space in motion, but it is similar to the old Babylonian cuneiform script of Mount Sinai. Shalom.

22. June 4, 2012 1:02 pm
One interesting issue that was raised by Nir Davidson at our quantum information center kick-off workshop is the "paradox" regarding chaotic behavior in quantum and in classical systems.
In a classical chaotic system, two nearby initial states may be carried far apart by the dynamics. In fact, their locations may become statistically independent (in a certain well-defined sense). In "contrast", for two nearby states in a quantum evolution their distance (trace-distance) remains fixed along the evolution. Nir described a formulation by Asher Peres of the "paradox", as well as how to resolve it, and some related experimental work. This issue is relevant also to classical and quantum computers. If we corrupt a single bit in a digital computer, then as the computation proceeds we can assume that this error will infect more and more bits, so that the entire computer's memory will be corrupted. In "contrast", if we let quantum noise affect a single qubit and continue the quantum evolution without noise, then the trace distance between the intended state and the noisy state remains the same. What can explain this difference? The answer (I think) is quite simple. It has to do with the distinction between measuring noise in terms of trace-distance and measuring it in terms of qubit-errors. When you corrupt one qubit and let the error propagate in a complicated noiseless quantum computation, the trace distance between the intended and noisy states will be fixed, but the number of qubit-errors will grow with the computation, so just as in the classical computer case the noise will affect the entire computer memory. This is related to the fact that the main harm in correlated errors is that the error-rate itself scales up.
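This point is easy to see in a toy simulation (a sketch, with arbitrary choices: six qubits, a random noiseless circuit, and a single bit-flip on qubit 0 as the "noise" before the circuit runs). The global trace distance between the intended and corrupted runs, which for pure states is sqrt(1 - |<psi|phi>|^2), stays exactly fixed under the unitary evolution, while the number of qubits whose reduced states differ, i.e. the qubit-errors, grows toward n:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def random_two_qubit_gate():
    """Haar-ish random 4x4 unitary via QR of a Ginibre matrix."""
    G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    Q, R = np.linalg.qr(G)
    return Q * (np.diag(R) / abs(np.diag(R)))   # fix the phases of R's diagonal

def apply_gate(psi, U, i, j):
    """Apply a two-qubit unitary U to qubits i, j of the state tensor."""
    U4 = U.reshape(2, 2, 2, 2)
    psi = np.tensordot(U4, psi, axes=[[2, 3], [i, j]])
    return np.moveaxis(psi, [0, 1], [i, j])

def reduced_qubit(psi, k):
    """Single-qubit reduced density matrix of qubit k."""
    m = np.moveaxis(psi, k, 0).reshape(2, -1)
    return m @ m.conj().T

psi = np.zeros((2,) * n, dtype=complex)
psi[(0,) * n] = 1.0                     # intended start: |00...0>
phi = np.roll(psi, 1, axis=0)           # X error on qubit 0: |10...0>

for layer in range(6):
    for i, j in rng.permutation(n).reshape(-1, 2):
        U = random_two_qubit_gate()     # the SAME noiseless gate on both runs
        psi = apply_gate(psi, U, i, j)
        phi = apply_gate(phi, U, i, j)
    overlap = abs(np.vdot(psi.ravel(), phi.ravel()))
    trace_dist = np.sqrt(max(0.0, 1.0 - overlap**2))
    infected = sum(
        np.abs(np.linalg.eigvalsh(reduced_qubit(psi, k)
                                  - reduced_qubit(phi, k))).sum() / 2 > 1e-6
        for k in range(n))
    print(f"layer {layer + 1}: trace distance {trace_dist:.3f}, "
          f"infected qubits {infected}/{n}")
```

The trace distance prints 1.000 at every layer (it is invariant under the common unitary), while the infected-qubit count climbs: the same physics, measured two ways.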
23. Serge permalink June 6, 2012 5:13 am
Had computer science been a business of engineers and physicists right from its beginnings, I think that greater emphasis would have been put on processes rather than on programs. Processes are physical objects whereas programs are just mathematical ones – and processes are everywhere in Nature. For example, the fact that it's much more difficult to factor a large compound number than it is to multiply two large primes is somewhat reminiscent of the nuclear force that glues the protons together inside atoms. Breaking a nucleus apart requires a lot of energy as well. When the unsolved problems of complexity theory are considered more systematically with a physicist's eye, maybe new laws for the physics of computing will be discovered instead of new axioms and proofs about algorithms.

• Serge permalink June 6, 2012 10:07 am
To put it differently: trying to guess the behavior of a process by means of its program is like trying to guess somebody's life by means of their DNA code. Processes are executed by physical devices which are themselves subject to the laws of physics. That doesn't answer the PvsNP question – which I believe is undecidable. But it might explain why the world seems to behave as though P!=NP.

• June 6, 2012 10:31 am
Hi Serge, regarding the P=NP problem and your beliefs about it: the possibility that the question is undecidable was raised, and there is some related research agenda. Unfortunately, proving definite results in this direction appears to get "stuck" even a bit earlier than proving definite results about computational-complexity hardness. (If you want to check your reasoning regarding P=NP being undecidable, one standard thing to do is to try to see the distinction with problems like 2-SAT that are known to be feasible.) You mainly raise two other issues which seem interesting. The first is about our inability to predict the evolution of a computer program (described, say, by the DNA code) when the evolution depends on unpredictable stochastic inputs. The second is about our inability to predict the evolution of a computer program (again, a DNA code is an example) when we do not know precisely what the program is. (Also, the analogy between factoring and breaking a nucleus into parts is cute, but it is not clear how useful it can be.) The distinction between (physics and engineering) processes and (mathematical) programs is not clear.

• Serge permalink June 6, 2012 11:48 am
Hi Gil, thank you very much for your interesting answer. A clear distinction between programs and processes is useful in operating systems, a process being a specific execution of a program. One program leads to infinitely many possible executions of it. When mathematicians speak of a program, I think they also mean all its potential executions. Regarding PvsNP, there might exist a polynomial algorithm for SAT, but executing it would run counter to physical limits – such as a program too big to fit into memory, for example. Or maybe our brains just couldn't understand it and therefore could not even design it. In addition to the unpredictability of the behavior of programs due to unpredictable stochastic inputs or to unknown code, in some cases that behavior could itself be undecidable. I'm thinking of the algorithm that Ken commented on in "The Traveling Salesman's Power", saying there is an already-known algorithm A accepting TSP such that if P=NP then A runs in polynomial time.

24. June 10, 2012 7:44 am
John Preskill's recent paper Quantum computing and the entanglement frontier touches on many issues raised in our debate. Very much recommended!

• John Sidles permalink June 10, 2012 9:30 am
Gil, please let me commend too this same Preskill essay. In it we read the following thought-provoking passage (p. 5): "A quantum computer simulating evolution … might not be easy to check with a classical computer; instead one quantum computer could be checked by another, or by doing an experiment (which is almost the same thing)." Adopting Preskill's language to express the intuition that motivates Kalai Conjecture C (as I read it) leads us to the notion that classical computers suffice to verifiably reproduce any-and-all simulations of quantum computers, insofar as those simulations apply to feasible physical experiments. And here the notion of a feasible physical experiment is to be taken to mean, concretely, any-and-all physical systems whose Hamiltonian/Lindbladian generators are stationary. In the preceding, the stipulation stationary is chosen deliberately, with a view toward crafting a concrete presentation of Conjecture C that affords ample plausible scope for near-term advances in practical simulation, without definitively excluding a longer-term role for quantum computational simulation. As a colleague of mine from Brooklyn was fond of saying, such a conjecture would be "better than 'purrfect', it would be 'poifect'!" :)

• June 12, 2012 7:11 am
Dear John, I have similar sentiments regarding the role and scope of Conjecture C. The draft of my post had a long obituary of Conjecture C (in the form originally made), starting with: "Conjecture C, while rooted in quantum computer skepticism, was a uniter and not a divider!
It expressed our united aim to find a dividing line between the pre- and post-universal-quantum-computer eras." Following Ken's mathematical-formulations-as-cars'-engines metaphor, the following picture of me and Conjecture C was proposed.

• John Sidles permalink June 12, 2012 8:27 am
LOL … Gil, perhaps Conjecture C may yet be reborn as a phoenix arising from the ashes! :)

• June 12, 2012 9:03 am
Indeed, we have good reasons to give up on the parameter K(ρ), but we did raise some appealing alternative parameters. In particular, the conjecture that the depth of quantum processes is essentially bounded is interesting from both the conceptual and the technical points of view. (The idea that the emergence of iron is a counter-example is terrific, but I do not think that it is correct…)

25. June 10, 2012 11:03 am
As I am reading the Preskill paper, my thoughts are wandering to questions about examples of brute-force quantum computers. The idea is this: think of the LHC. What is it actually doing? It is trying to identify particles predicted by various models of particle physics, and it is also verifying production cross sections of those particles. So in some sense we have models that can make predictions that are in some way computable using a classical computer, and we are building a machine that can verify that those models are accurate. So what is the LHC? Is it a machine or a brute-force quantum computer? No one is questioning that by accelerating particles and smashing them together we are generating new particles that follow some sort of function; but neither is anyone questioning that what the LHC is simulating is an earlier state of the universe (and that might be a good question to ask). Another, more accessible, potential example is found in the study of fluid dynamics. Although we have fairly good classical formulas for modeling fluid flow in several situations, the modeling of complicated turbulent systems is extraordinarily difficult, and in many cases scale models must be produced in order to measure the "real" fluid flow of the system. Again, if we accept a quantum existence, what have we actually built with our model? We have resorted to a type of brute-force method in order to solve a real-world computational problem. As I think further about the question of QECC, I can't help but think of the similarity between the difficulty of developing QECC and the difficulty of building stable fusion reactors. In a fusion reactor the goal is to build a stable, long-lasting state of matter; invariably we can see that state as a quantum state, and the problem is similar: how do we keep the state stable so that "noise" from the environment doesn't collapse it? Once again, we are looking for a brute-force method of solving an otherwise computational problem. Freeman Dyson recently published a book review where he compared string cosmologists to natural philosophers and other "creative" thinkers. However, what he failed to recognize is that the questions being asked in those explorations do intersect with real questions in quantum computing, such as the relationship between axions and anyons, as highlighted by Wilczek [2]. This brings me to some of the more current questions regarding the debate surrounding SUSY and the theories that rely upon its existence. I look at the recent Straub paper [3] and see a graph with the SM as a point in a vastly larger parameter space.
Although, by design, all the other potential models contain the SM as a shared common point, I can't help but think about the situation coming from the other direction and looking at all the potential models that have the SM as a common point. Although I am not a subscriber to any notion of a multiverse as envisioned by sci-fi and pop-sci writers, I am interested in this idea of other stable solutions, or perturbations of our particular stable solution. Preskill does an excellent job of highlighting the question of what can't be simulated on a quantum computer. We can't give mass to a simulation in a quantum computer; however, we know that there are several solutions out there that could be explored that do not require mass, and I think those are worth exploring.

26. June 12, 2012 2:14 am
The universe: is it noisy? Is it a quantum computer? Why not two non-interacting quantum computers? The idea of the entire universe as a huge quantum computer was mentioned in several comments (and is an item on our long agenda). Also, the universe being described by a pure quantum evolution was mentioned, and was related to Aram's second thought experiment. It feels rather uncomfortable to talk about the entire universe, or to draw conclusions from it, but let me try to make some comments.
1) The claim that the entire universe runs a pure evolution seems reasonable but not particularly useful. (There are theories suggesting otherwise which are outside of quantum mechanics.)
2) The claim that the entire universe is a huge (noiseless) quantum computer which computes its own evolution is also made quite often. Again, it is not clear how useful this point of view is, and I am not sufficiently familiar with the literature on this. The universe as a huge noiseless quantum computer can be regarded as an argument against the claim that quantum computers are inherently noisy.
3) As we noted already, quantum computers are based on local operations, and therefore the states that can be reached by quantum computers are a tiny part of all quantum states. For example, a state described by a generic unitary operator is unfeasible. (In our off-line discussions we raised the question of whether such non-local states appear in nature.)
4) An appealing possibility (in my view) for our universe is that of two (or several) non-interacting (or, more precisely, extremely weakly interacting) quantum computers. We can have on the same Hilbert space two different independent tensor-product structures, so that every state is a superposition of two states, each described by one of the two quantum computers. In this case, states achievable by one quantum computer will be nearly orthogonal to states achieved by the other. (This possibility does not rely on the hypothesis of no quantum error-correction, although it will be "easier" for two quantum computers not to be able to interact when there is no quantum error-correction around.)
5) The idea of the universe as a quantum computer which runs quantum error-correction is used in the paper Black holes as mirrors: quantum information in random subsystems by Hayden and Preskill. From what I understand, in this paper certain quantum states in a black hole are required to behave like generic unitary states, and since such states are infeasible, states with similar properties arising from quantum error-correction are proposed instead. It will be interesting to examine whether Hayden–Preskill's idea can work with quantum error-correction replaced by a two-non-interacting-quantum-computers solution.
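Point 3 can be backed by a crude dimension count (a back-of-envelope sketch, not a proof, since it ignores the fact that circuits only need to approximate states; a careful argument would run the same count against an epsilon-net). A generic pure state of n qubits carries 2^{n+1} - 2 real parameters, while a circuit of m two-qubit gates carries at most 15m, the dimension of SU(4) per gate, so reaching everything forces m to be exponential in n:

```python
# Real parameters of a generic n-qubit pure state vs. of a circuit
# with m two-qubit gates (each gate: dim SU(4) = 15 real parameters).
for n in (10, 20, 30, 40, 50):
    state_params = 2 ** (n + 1) - 2      # 2^n complex amplitudes,
                                         # minus normalization and global phase
    gates_needed = state_params / 15     # m must be at least this large
    print(f"n = {n:2d}: generic state has {state_params:.3e} parameters; "
          f"need >= {gates_needed:.3e} two-qubit gates to cover them all")
```

So a polynomial-size circuit reaches only an exponentially thin slice of Hilbert space, which is the sense in which a state described by a generic unitary operator is unfeasible.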
27. ramirez permalink June 12, 2012 10:51 am
Usually the Chinese write "peoples" as a plural when "people" is already plural; the same mistake was made in Mao Tse-tung's biography. We are accustomed to taking somebody else's mistakes as truthful. "Weylan" means labor-camp woman: Mao's mother, and the Bolshevik holy icon that represents the mother-nation of the truthful patriots. Charles Marx was a German Jew who wrote "Das Kapital"; Einstein was a German Jew also. Both theories shocked the world with conjectures on human quality recognition and equal distribution of income, while the occidental countries constructed their kingdoms based on slavery and human degradation, arguing that they were doing good for humanity. Why can't two supercomputers be enabled to work together? Their programmers keep the security codes of the so-called Star Wars, where the code couldn't be cracked, hijacked, or erased, in order to deviate their commanding source and get rid of them in case of a confrontation. What is the problem with the quintic equation being solved by radicals? That we do not have an exact number for the square root of 2 or 1. All the operators are built on hertzian operations: how fast an electric current travels through a conductor of logic gates, flipping them to zeros or ones. The quantum bit here has been recorded in a different wavelength, not in a different code source; this wavelength is exclusive to the Pentagon or the Kremlin to operate their military satellites. It's something like the chess board: it does have two parts that are interacting with each other to find the ponderation of the code encrypted in each memory stack. However, the quantum bit presents a conflict where the antimatter is present as an antiquark in a wavelength. Einstein's equation E=MC² caused international mockery and hysteria among the mathematicians and physicists. Why? Gödel's inability to solve the quintic equation was settled upon a logical aberration. C, the constant of the speed of light, is the maximum speed of light in an empty space, so how are you going to accelerate faster than the speed of light to get C squared? Somehow the universe is noisy, because they found sounds of exploding stars, and these shock waves travel faster than the speed of light; it's what is called quanta, something like ether, or antimatter (the Micamocka chocolate chip). It's the same antimatter quantum bit that the Tevatron is looking for in the Large Hadron Collider, and it is obtained in a Higgs equation through a massive collision of particles where the expansion wave has to be similar to a supernova star; however, they do not have the expected results. This event should create a time-space distortion where two or more atoms are trying to occupy the same dimensional time-space; this is called atomic fission and is found in radioactive materials that eat the surrounding material (Chernobyl). There is an angle deviation in the equations (Bishop) that acts as a counterweight to the atom's spin when it reaches C squared; that factor is what is called gravitational spin. The Negro playing chess with the Rabbi is symbolic of the Ark of the Alliance, but that does not make them geniuses like you say. Bobby Fischer, Karpov, Kasparov and many others work on an equilibrium equation where any step of a horse changes the whole equation algorithmically. Man's visual field is 20-20 while the horse's is greater, 30-30, so this difference gives you a linear regression. Check: once you are the king, your place is the "Ara", Aramaic.
That becomes a black hole in the gravitational field when you are out of boundaries: a quantum bit (antiquark) that is present before the integration of the mass hertzian wave. Einstein used 8 times the radius of the speed of a light emitter to create the gravity field where you can encrypt any antimatter code. The Megabucks trick, the National Lotto, and other crap games. The tower J is the Joker's Club; whose club is Tower B?

28. ramirez permalink June 18, 2012 7:15 pm
Heisenberg's uncertainty is about how sure you are about hitting the nucleus of an atom in a chain reaction if you cannot come back to the same place you left when you went up to a quintic polynomial, when it does involve exponentials on C squared. The radicals are affected inversely.

29. June 11, 2013 12:59 am
I gave a talk at the HUJI CS theory seminar on matters related to my conjectures and the debate, and there were several interesting comments by Dorit Aharonov, Michael Ben-Or, Nadav Katz, and Steve Wiesner. Dorit suggested that experimental cat states with a huge number of qubits are counterexamples to the conjecture on bounded-depth computation. This is a good point!! I should certainly look at it.

30. January 23, 2014 9:13 am
One thing I never explained is why I considered Aram and Steve's example a counterexample to my Conjecture C. The setting of Conjecture C was to find limitations on states achieved by noisy quantum computers with realistic noise models. The prior assumption you need to make is that the noise on a gate is of arbitrary nature. (And, in fact, for my full set of conjectures you need to assume that information leaks on gated qubits/qudits are positively correlated.) Aram and Steve had two examples. The first is based on qudits. This is an interesting example, and certainly my Conjecture C should extend to qudits. But in Aram and Steve's example the noise on gates is not of a general nature but rather of a very structured nature. So this does not apply to the right extension of Conjecture C to qudits, although it does impose an interesting condition on "censorship conjectures." The second, qubit, example is more convincing. (Ironically, it is quite similar to an example I proposed myself in 2007.) A&S proposed a pure state which seems easy to approximate but for which my entropic parameter is exponential. What happens for mixed states which represent realistic approximations of this state? If the parameter is exponential for them, this is a counterexample to my conjecture. If it is not, it shows that the entropic parameter I defined is seriously flawed. (It will be interesting to know which possibility is correct, but in both cases I regarded my original entropy-based parameter as inappropriate.)
Until recently, I thought that graph theory is a topic which is well suited for math olympiads, but which is a very small field of current mathematical research with not so many connections to "deeper" areas of mathematics. But then I stumbled upon Béla Bollobás's "Modern Graph Theory", in which he states:

The time has now come when graph theory should be part of the education of every serious student of mathematics and computer sciences, both for its own sake and to enhance the appreciation of mathematics as a whole.

Thus, I'm wondering whether I should deepen my knowledge of graph theory. I find topics like spectral and random graph theory very interesting, but I don't think that I am ever going to do research on purely graph-theoretic questions. On the contrary, I'm mainly interested in areas like algebraic topology, algebraic number theory and differential topology, and I'm wondering if it is useful to have some knowledge of graph theory when engaging in these topics. So my question is: why should students like me, who aspire to a research career in mathematical areas which are not directly related to graph theory, study graphs?

I have a really basic grounding in graph theory from one brief topic in an undergraduate course, and I also would like to hear what people have to say about this. – Marra Apr 26 '13 at 20:45
Topology, algebraic geometry, and number theory come together when one studies dessins d'enfants. See en.wikipedia.org/wiki/Dessin_d%27enfant – Baby Dragon Apr 26 '13 at 21:21
Also, graphs are easy enough to define and think about, but are complicated enough that almost anything can be phrased in graph theory. – Baby Dragon Apr 26 '13 at 21:23
Are you surprised to see a graph theory book selling graph theory as an essential tool? (Regardless of the truthfulness of this proposition...) – Asaf Karagila Apr 26 '13 at 22:08

3 Answers

If you're more interested in algebraic topology, I suggest not spending much time studying the combinatorial aspects of graph theory. It is true that graphs in this guise do appear in such areas; for instance, one uses Dynkin diagrams (which are graphs) to classify algebraic groups and also Lie groups. It's really very elegant and useful for work in algebraic groups, but you need very little graph theory for this. Graphs are often used where there is some combinatorial structure, but again I doubt (though perhaps I am wrong) that knowing lots of graph theory (as one would find in a typical book like Bondy's) would help too much. "Graph theory" covers much more than just this, however. For instance, an esperantist family (a generalisation of an expander family) of graphs arises naturally as a certain family of Cayley graphs associated to finite groups that are quotients of fundamental groups (as Riemann surfaces) of algebraic curves, which come from any family of étale covers. This can be used to prove interesting results about families of various arithmetic objects and how they behave generically. An excellent starting point for these topics is the paper by Ellenberg, Hall, and Kowalski, "Expander graphs, gonality, and variation of Galois representations". This source should hopefully spark your imagination and encourage you to read up on these topics.
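To make the objects in that paragraph a bit more tangible, here is a minimal sketch (the group and generators are arbitrary choices for illustration, and a single graph is of course not an expander family, which requires a uniform spectral gap along an infinite sequence of growing graphs). It builds the Cayley graph of S_5 with respect to the adjacent transpositions and computes the spectral gap of its adjacency matrix:

```python
import itertools
import numpy as np

# Cayley graph of S_5 with the adjacent transpositions as generators
# (transpositions are their own inverses, so the set is symmetric).
elements = list(itertools.permutations(range(5)))
index = {g: i for i, g in enumerate(elements)}

def compose(g, h):
    """(g h)(x) = g(h(x))"""
    return tuple(g[h[x]] for x in range(5))

gens = []
for k in range(4):                     # transpositions (k, k+1)
    t = list(range(5))
    t[k], t[k + 1] = t[k + 1], t[k]
    gens.append(tuple(t))

N = len(elements)                      # 120 vertices
A = np.zeros((N, N))
for g in elements:
    for s in gens:
        A[index[g], index[compose(s, g)]] = 1

eig = np.sort(np.linalg.eigvalsh(A))[::-1]
d = len(gens)                          # the graph is d-regular
print(f"{N} vertices, {d}-regular; lambda_1 = {eig[0]:.3f}, "
      f"lambda_2 = {eig[1]:.3f}, spectral gap = {d - eig[1]:.3f}")
```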
The kind of graph theory covered in a typical undergraduate course isn't, I think, so prevalent in everyday algebraic topology and related fields, since "typical graph theory" studies properties that aren't invariant under homotopy, and homotopy invariants are the stuff that algebraic topology is built upon. There is, however, a kind of "graph theory" that is extremely useful in topology and number theory: the theory of simplicial sets (and simplicial objects in any category)! This doesn't just look at graphs, though, but at objects built from higher simplices too. The basic theory of simplicial objects in algebraic topology covers homotopy-type stuff. Simplicial objects, for instance simplicial sets, are completely combinatorially defined. For instance, "nice" simplicial sets, called fibrant ones, have a notion of fundamental group, and there is a functor from simplicial sets to spaces called "geometric realization" that sends a simplicial set to a space (which for a graph would be the obvious topological space), and the notion of fundamental group agrees with the combinatorially defined one. Simplicial sets are fundamental to many areas of algebra, such as: $K$-theory (they are typically used to define the higher $K$-groups), higher category theory (which is a generalisation of category theory and also has applications to $K$-theory), homological algebra (an essential tool; the category of nonnegative chain complexes of $R$-modules is equivalent to the category of simplicial objects in the category of $R$-modules), algebraic topology itself of course, algebraic geometry (for things like $\mathbb{A}^1$ homotopy theory), and tons more stuff that I don't know about, I'm sure. Good sources for simplicial objects are:
• May, "Simplicial Objects in Algebraic Topology"
• Ch. 8 of Weibel's "An Introduction to Homological Algebra" (you probably should start here!)
• Goerss's book "Simplicial Homotopy Theory"
• Moerdijk and Toen's "Simplicial Methods for Operads and Algebraic Geometry" (Part 2 is about algebraic geometry)
• Ferrario and Piccinini's "Simplicial Structures in Topology" (more topology)

Mathematics is not so neatly divided into different subjects as it might seem right now. It is some kind of vast mountain, and most of it is obscured by clouds and very hard to see. It is valuable to try to look at the mountain from many different perspectives; in doing so you might see some part of the mountain you couldn't see otherwise, and that helps you better understand the mountain as a whole (which is valuable even if you currently think you are only interested in one small part of the mountain). Graph theory is one of those perspectives. More specifically, here are some interesting connections I've learned about between graph theory and other fields of mathematics over the years.
• Graphs can be used to analyze decompositions of tensor products of representations in representation theory. See, for example, this blog post. This is related to a beautiful picture called the McKay correspondence; see, for example, this blog post. (There are some more sophisticated aspects of the McKay correspondence involving algebraic geometry I don't touch on in that post, though.)
• Graphs can be used as a toy model for Riemannian manifolds. For example, like a Riemannian manifold, they have a Laplacian. This lets you write down various analogues of differential equations on a graph, such as the heat equation and the wave equation. (A short numerical sketch of this appears after the answers below.)
In this blog post I describe the Schrödinger equation on a finite graph as a toy model of quantum mechanics.
• Graphs can also be used as a toy model for algebraic curves. For example, like an algebraic curve, they have a notion of divisor and divisor class group. See, for example, this paper.
• Graphs can also be used as a toy model for number fields. For example, like (the ring of integers of) a number field, they have a notion of prime and of zeta function, and there is even an analogue of the Riemann hypothesis in this setting. See, for example, this book.
But there is something to be said for learning about graphs for their own sake.

The only reason is that it is an active field nowadays, and you should have as many possibilities in front of you as you can when choosing your path. If you end up working in contact geometry, graph theory probably won't help you a whole lot.

See the abstract at mathunion.org/ICM/ICM1986.1/Main/icm1986.1.0531.0539.ocr.pdf – Baby Dragon Apr 26 '13 at 21:52
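Here is the short numerical sketch promised above (graph and initial condition are arbitrary choices): the graph Laplacian L = D - A plays the role of -∇², and both the heat equation u' = -Lu and a Schrödinger equation ψ' = -iLψ can be solved on any finite graph by diagonalizing L once.

```python
import numpy as np

# A cycle graph on 20 vertices: adjacency A, degrees D, Laplacian L = D - A.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Diagonalize once; then e^{-tL} and e^{-itL} are cheap to apply.
w, V = np.linalg.eigh(L)

def heat(u0, t):
    """Solve u' = -L u  (graph heat equation)."""
    return V @ (np.exp(-t * w) * (V.T @ u0))

def schrodinger(psi0, t):
    """Solve psi' = -i L psi  (graph Schroedinger equation, hbar = 1)."""
    return V @ (np.exp(-1j * t * w) * (V.T @ psi0))

u0 = np.zeros(n)
u0[0] = 1.0                               # all the heat starts at vertex 0
for t in (0.0, 0.5, 2.0, 10.0):
    u = heat(u0, t)
    # total heat is conserved; the profile flattens toward uniform
    print(f"t={t:5.1f}  total heat {u.sum():.6f}  max {u.max():.4f}")

psi = schrodinger(u0.astype(complex), 3.0)
print("Schroedinger norm:", np.linalg.norm(psi))  # stays 1: the evolution is unitary
```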
I've read that QM operates in a Hilbert space (where the state functions live). I don't know if it's meaningful to ask such a question: what are the answers to the analogous questions for GR and Newtonian gravity?

2 Answers

I interpreted your question differently, more like a mathematics question. In quantum mechanics, we basically have an equation, the Schrödinger equation, which is a differential equation on the space of square-integrable complex-valued functions. This space is a Hilbert space, which means that it is a vector space which also has a nice topological structure: basically, all Cauchy sequences of vectors converge in that space.
In Newtonian mechanics, the equations are defined on phase space, which is basically a $6N$-dimensional space, where $N$ is the total number of particles, on which the coordinates of a point consist of the positions and momenta of each particle you want to describe. The solution of the equations induces a flow on this phase space. The structure of phase space is usually that of a symplectic manifold.
In General Relativity, the equations are Einstein's field equations. They link the curvature (the Einstein tensor, built from the Riemann tensor) to the energy-momentum tensor. They are difficult to solve in the sense that they are nonlinear and you have to specify an energy-momentum tensor, but this tensor will also depend on the geometry of space-time, and thus on the Riemann tensor. So you have to solve in one go for the geometry and the energy-matter distribution. In practice, many simplifying assumptions will be made. But the "space" of solutions is the space of geometries and energy-matter distributions compatible with the field equations.

@Raskolnikov: Your interpretation is what I was intending when I asked the question. Is "6N" a typo? I didn't get what it is. Is it "infinite"? What are the mathematical properties of the phase space of Newtonian mechanics? – Rajesh D Dec 1 '10 at 15:16
No, it's not a typo, but I admit I have not been clear enough. $N$ is the number of particles you want to describe. You multiply by 6 because each particle has 3 spatial coordinates and 3 momenta, one along each spatial direction. The structure of phase space is that of a symplectic manifold. – Raskolnikov Dec 1 '10 at 15:22
@Raskolnikov: In Newtonian mechanics, is the trajectory of the state of the system always smooth? – Rajesh D Dec 1 '10 at 15:34
More precisely, the dimension of phase space is equal to twice the number of free generalized coordinates of the system. If you have constraints, they generally reduce the dimension of phase space, which is a very important justification for using it. – Sklivvz Dec 1 '10 at 15:34
@Rajesh: for next time, I suggest you try to formulate your questions more clearly, so that one doesn't have to waste time on answers that happen not to be what you were intending. Well, I probably shouldn't have answered such a vaguely formulated question in the first place... – Marek Dec 1 '10 at 15:40

First, I'll assume that you're talking about quantization. To understand how to quantize GR it is absolutely necessary to give an account (however sketchy) of the approach used to quantize simpler systems.
Classical mechanics
This is a procedure whereby one transfers from the classical point of view (Newtonian mechanics, or equivalently Lagrangian or Hamiltonian mechanics) to the quantum point of view. Now, there are some general prescriptions for how one can quantize classical mechanical systems.
The most common one is that one replaces the phase space by a Hilbert space, functions on phase space by operators on the Hilbert space, and Poisson brackets of functions by commutators of the operators.
Field theory
The previous paragraph was dealing only with mechanics, i.e., the case where there are only a few degrees of freedom. But GR is a field theory (of the gravitational field) and is actually a kind of gauge theory (though a little special at that). One has to first learn how to quantize classical fields and then gauge fields. To do that, you can replace the (infinite-dimensional) phase space of the field by a (very large) Hilbert space and produce an analogue of the Poisson bracket, called the Dirac bracket, which you then replace by a commutator. (The second very common approach to quantization is via the path integral, for which you don't need any operators, but I won't elaborate on that here because it is a huge area that would take us far off the topic of your question.) Then, to quantize a gauge theory with its own huge gauge symmetry, one has to carry out a very nontrivial discussion of the structure of these Dirac brackets. (There also exist other approaches to this, but none of them is particularly easy for a beginner. If you're interested, see Faddeev–Popov ghosts in path-integral gauge quantization, and BRST quantization.)
Now, the thing is that GR (as a field theory) is hard to quantize. That is, if you repeat the above approach for GR, you'll find that your quantum theory doesn't make sense (because it is not renormalizable). This suggests that something more than the naive approach is needed. And there are actually lots of candidates. For one thing, one can quantize gravity in certain special dimensions (like 2+1) if one generalizes GR a little (this was done by Witten in the '80s). There are also various reformulations that relate quantum gravity and QFT (like the AdS/CFT correspondence). There is also matrix string theory, which shows a duality between matrix quantum mechanics and GR (as pointed out to me by Matt in this question of mine). In short, quantization of GR is very hard. There are many theories, and as of yet there is no experimental evidence that would let us know which one is correct.

Thank you for pointing that out. I still found your answer very useful, in a totally different way than I was expecting. I will try to formulate my questions more clearly in the future. I think that your answer is very helpful for someone googling or browsing through this forum. – Rajesh D Dec 1 '10 at 15:49
@Rajesh: all right then. I also think my answer could be good, if only someone asked the question it addresses :-) – Marek Dec 1 '10 at 16:03
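The "replace Poisson brackets by commutators" prescription can be poked at numerically in a few lines (a sketch, with the standard caveat that the canonical relation [x, p] = iħ can never hold exactly in a finite-dimensional truncation, since the trace of a commutator vanishes while tr(iħ·1) does not; so we only test the action on a smooth wavepacket away from the grid boundary):

```python
import numpy as np

hbar = 1.0
N, Lbox = 1001, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, N)
dx = x[1] - x[0]

X = np.diag(x)                           # position operator on the grid
Dmat = np.zeros((N, N))                  # central-difference derivative
for j in range(1, N - 1):
    Dmat[j, j + 1] = 1 / (2 * dx)
    Dmat[j, j - 1] = -1 / (2 * dx)
P = -1j * hbar * Dmat                    # momentum operator p = -i hbar d/dx

psi = np.exp(-x**2) * np.exp(2j * x)     # a smooth wavepacket
lhs = X @ (P @ psi) - P @ (X @ psi)      # [x, p] acting on psi
rhs = 1j * hbar * psi
interior = slice(N // 4, 3 * N // 4)     # stay away from the grid edges
err = np.max(np.abs(lhs[interior] - rhs[interior]))
print(f"max |([x,p] - i hbar) psi| in the interior: {err:.2e}")
```

The error scales as dx², so refining the grid recovers the canonical commutation relation in the continuum limit.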
Friday, September 27, 2019
Just how conceptually economical is the Many Worlds Interpretation?

An exchange of messages with Sabine Hossenfelder about the Many Worlds Interpretation (MWI) of quantum mechanics has helped me sharpen my view of the arguments around it. (Sabine and I are both sceptics of the MWI.)

The case for Many Worlds is well rehearsed: it relates to the "measurement problem" and the idea that if you take the "traditional Copenhagen" view of quantum mechanics then you need to add to the Schrödinger equation some kind of "collapse postulate" whereby the wavefunction switches discontinuously from allowing multiple possible outcomes (a superposition) to having just one: that which we observe. In the Many Worlds view postulated by Hugh Everett, there is no need for this add-on of wavefunction collapse, because all outcomes are realized, in worlds that get disentangled from one another as the measurement proceeds via decoherence. All we need is the Schrödinger equation. The attraction of this idea is thus that it demands no unproven additions to quantum theory as conventionally stated, and it preserves unitarity because of the smooth evolution of the wavefunction at all times. This case is argued again in Sean Carroll's new book Something Deeply Hidden.

One key problem for the MWI, however, is that we observe quantum phenomena to be probabilistic. In the MW view, all outcomes occur with probability 1 – they all occur in one world or another – and we know even before the measurement that this will be so. So where do those probabilities come from? The standard view now among Everettians is that the probabilities are an illusion caused by the fact that "we" are only ever present on one branch of the quantum multiverse. There are various arguments [here and here, for example] that purport to show that any rational observer would, under these circumstances, need to assign probabilities to outcomes in just the manner quantum mechanics prescribes (that is, according to the Born rule) – even though a committed Everettian knows that these are not real probabilities.

The most obvious problem with this argument is that it destroys the elegance and economy that Everett's postulate allegedly possesses in the first place. It demands an additional line of reasoning, using postulates about observers and choices, that is not itself derivable (even in principle!) from the Schrödinger equation itself. Plainly speaking, it is an add-on. Moreover, it is one that doesn't convince everyone: there is no proof that it is correct. It is not even clear that it's something amenable to proof, imputing as it does various decisions to various "rational observers".

What's more, arguments like this force Everettians to confront what many of them seem strongly disinclined to confront, namely the problem of constructing a rational discourse about multiple selves. There is a philosophical literature around this issue that is never really acknowledged in Everettian arguments. The fact is that it becomes more or less impossible to speak coherently about an individual/observer/self in the Many Worlds, as I discuss in my book Beyond Weird. Sure, one can take a naïve view based on a sort of science-fictional "imagine if the Star Trek transporter malfunctioned" scenario, or witter on (as Everett did) about dividing amoebae. But these scenarios do not stand up to scrutiny and are simply not science.
The failure to address issues like this in observer-based rationales for apparent quantum probabilities shows that while many Everettians are happy to think hard about the issues at the quantum level, they are terribly cavalier about the issues at the macroscopic and experiential level ("oh, but that's not physics, it's psychology" is the common, slightly silly response).

So we're no better off with the MWI than with "wavefunction collapse" in the Copenhagen view? Actually, even to say this would be disingenuous. While some Everettians are still happy to speak about "wavefunction collapse" (because it sounds like a complicated and mysterious thing), many others working on quantum fundamentals don't any longer use that term at all. That's because there is now a convincing and indeed tested (or testable) story about most of what is involved in a measurement, which incorporates our understanding of decoherence (sometimes wrongly portrayed as the process that makes MWI itself uniquely tenable). For example, see here. It's certainly not the case that all the gaps are filled, but really the only thing that remains substantially unexplained about what used to be called "collapse" is that the outcome of a measurement is unique – that is, a postulate of macroscopic uniqueness. Some (such as Roland Omnès) would be content to see this added to the quantum formalism as a further postulate. It doesn't, after all, seem a very big deal. I don't quite accept that we should too casually assume it. But one can certainly argue that, if anything at all can be said to be empirically established in science, the uniqueness of outcomes of a measurement qualifies. It has never, ever been shown to be wrong!

And here is the ultimate irony about Many Worlds: this one thing we might imagine we can say for sure, from all our experience, about our physical world is that it is unique (and that is not, incidentally, thrown into doubt by any of the cosmological/inflationary multiverse ideas). We are not therefore obliged to accept it, but it doesn't seem unreasonable to do so. And yet this is exactly what the MWI denies! It says no, uniqueness is an illusion, and you are required to accept that this is so on the basis of an argument that is itself not accessible to testing! And yet we are also asked to believe that the MWI is "the most falsifiable theory ever invented." What a deeply peculiar aberration it is. (And yet – this is of course no coincidence – what a great sales hook it has!)

Sabine's objection is slightly different, although we basically agree. She says:
"Many Worlds in and by itself doesn't say anything about whether the parallel worlds 'exist' because no theory ever does that. We infer that something exists – in the scientific sense – from observation. It's a trivial consequence of this that the other worlds do not exist in the scientific sense. You can postulate them into existence, but that's an *additional* assumption. As I have pointed out before, saying that they don't exist is likewise an additional assumption that scientists shouldn't make. The bottom line is, you can believe in these worlds the same way that you can believe in God."

I have some sympathy with this, but I think I can imagine the Everettian response, which is to say that in science we infer all kinds of things that we can't observe directly, because of their indirect effects that we can observe.
The idea then is that the Many Worlds are inescapably implicit in the Schrödinger equation, and so we are compelled to accept them if we observe that the Schrödinger equation works. The only way we'd not be obliged to accept them is if we had some theory that erases them from the equation. There are various arguments to be had about that line of reasoning, but I think perhaps the most compelling is that there are no other worlds explicitly in any wavefunction ever written. They are simply an interpretation laid on top. Another, equally tenable, interpretation is that the wavefunction enumerates possible outcomes of measurement, and is silent about ontology. In this regard, I totally agree with Sabine: nothing compels us to believe in Many Worlds, and it is not clear how anything could ever compel us.

In fact, Chad Orzel suggests that the right way to look at the MWI might be as a mathematical formalism that makes no claims about reality consisting of multiple worlds – a kind of quantum book-keeping exercise, a bit like the path integrals of QED. I'm not quite sure what then is gained by looking at it this way relative to the standard quantum formalism – or indeed how it then differs at all – but I could probably accept that view. Certainly, there are situations where one interpretational model can be more useful than others. However, we have to recognize that many advocates of Many Worlds will have none of that sort of thing; they insist on multiple separate universes, multiple copies of "you" and all the rest of it – because their arguments positively require all that.

Here, then, is the key point: you are not obliged to accept the "other worlds" of the MWI, but I believe you are obliged to reject its claims to economy of postulates. Anything can look simple and elegant if you sweep all the complications under the rug.

Peter Morgan said...
If we can find a different way to eliminate collapse of the wave function, then presumably the MWI would be less compelling. One mathematically natural way to do so is to note that when we measure A followed by B, collapse of the state after a measurement of A to a mixture of eigenstates of A can also be presented mathematically as a measurement of A followed by a measurement of B-after-A, with no collapse of the state, where B-after-A commutes with A. That is, real joint measurements must be modeled by mutually commuting operators. I have a longer discussion of this on Facebook, from two days ago. All the above assumes that we do not feel compelled toward a many-worlds interpretation of ordinary probability just because coins, when tossed, always come up heads or tails.

Marty Tysanner said...
"It's certainly not the case that all the gaps [in the decoherence picture] are filled, but really the only thing that remains substantially unexplained about what used to be called 'collapse' is that the outcome of a measurement is unique – that is, a postulate of macroscopic uniqueness."
This statement confuses me, because in my understanding the crucial content of the idea of wave function collapse is that it obtains a unique outcome in all cases, even though it glosses over (ignores) the interference that presumably plays an important role in a measurement. If decoherence cannot be shown to obtain a unique outcome in all cases then it doesn't remove the mystery; employing a postulate that ensures uniqueness seems to me another way of conveying the notion of collapse, packaged differently. What am I missing here?

Coel said...
It seems to me that MWI makes sense only if one proposes that the wavefunction is not telling us *about* ontology, but that the wavefunction itself *is* ontological (and is the only thing that is ontological). Thus, the one and only thing that actually "exists" is the wavefunction, one wavefunction containing within itself everything. From there, the "many worlds" trivially exist as the different decohered terms of the wavefunction. And that's all there is to it.

Arun said...
By adding the measurement apparatus and the environment to the quantum system of interest, decoherence theory tries to build a completely quantum theory of measurement. Yet it seems to me that the theory yields only classical dice, and the unique outcome we experience requires further explanation. That is, if I have a spin-up particle and choose to measure spin along the left-right axis, decoherence theory will give me the equivalent of a fair coin toss with heads-left and tails-right, but can't tell me how the coin toss will actually fall. Perhaps the only recourse is that while identically prepared quantum systems are arguably, even provably, identical and have no hidden variables, identically prepared macroscopic measurement apparatus really form an ensemble of roughly identical quantum systems, and so the uniqueness of any particular outcome arises from which member of this ensemble the particular run of the experiment encounters.

Jayarava said...
"The most obvious problem with this argument is that it destroys the elegance and economy that Everett's postulate allegedly possesses in the first place."

I don't get this objection. If each observer ends up on a different branch for each quantum observation, one does need a line of reasoning, but one doesn't have to add an expression to the Schrödinger equation. Evolution of the system over time is still governed by one equation under all circumstances. It seems pretty basic to *all* interpretations of modern physical theories of the world at scales not immediately visible to our senses that you have to explain how the universe appears versus how it actually is. It's not special to MWI. We had to do it with relativity too.

I also don't get the objection about multiple selves. There are never multiple selves from the point of view of any given observer. There is only ever one. But worse, the objection seems to be based on an outdated idea about what constitutes a "self". As we now know from extensive experimental evidence, the sense of self is generated by the brain, from moment to moment. The target properties of a first-person perspective are also part of this same representation. There is no enduring self. Anyone on a branching timeline can look back in time and see what appears to be a continuously evolving history, though in fact it is demonstrably discontinuous at the micro-level. Each instance can't be aware of other branches, which are orthogonal in Hilbert space.

The MWI does not say that your world is not unique. You can only have one perspective; all the other perspectives are orthogonal to your reality. So your world is absolutely unique. In each possible world, that world has a unique history and will have a unique future. It's just that all possible unique worlds exist in Hilbert space. But this is why Sabine's objection is the only one that makes sense.
We can never, under any circumstances, get any information about those "other worlds". So this world is in every meaningful sense unique (and this vitiates your objection). However, it is a consequence of taking the Schrödinger equation seriously that we can infer that those "worlds" must exist. If we are not taking the Schrödinger equation seriously, then we'd have to say why, because it is the single best description of reality ever conceived and has passed every test we can think of to date. If a quantum system does not evolve according to the Schrödinger equation under some circumstance, then how does it evolve? Under what circumstances? And what is special about those circumstances? But this is just a restatement of the measurement problem.

I don't understand it when you say that "many worlds" are not in the Schrödinger equation. The Schrödinger equation does say that when you make an observation all possibilities are real. All the other formulations of the measurement problem introduce ways of eliminating the other possibilities, i.e. the real possibilities that you don't see. This is what motivated the idea of wave function collapse in the first place: it eliminates the possibilities that we do not see. All formulations do this. Are you not just saying that MWI is wildly counter-intuitive?

David Brown said...
"So where do those probabilities come from?" The answer to the preceding question might depend upon understanding Milgrom's MOND. "The failures of the standard model of cosmology require a new paradigm" by Kroupa, Pawlowski, and Milgrom, 2013. Google "riofrio pipino".
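Arun's point that decoherence yields only "classical dice" can be made concrete with a toy computation. The following is a minimal sketch of my own (not from the post or the comments), in which the off-diagonal elements of a qubit density matrix are damped by an assumed exponential factor; the diagonal probabilities remain a 50/50 coin throughout, and nothing in the evolution selects a single outcome:

```python
import numpy as np

# Decoherence illustration: off-diagonal terms of a qubit density matrix
# decay, leaving classical probabilities, but nothing here picks a unique
# outcome. The exponential decay rate is an illustrative assumption.
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # (|0> + |1>) / sqrt(2)
rho = np.outer(plus, plus)                 # pure-state density matrix

for t in [0.0, 1.0, 5.0]:
    decay = np.exp(-t)                     # environment-induced damping
    rho_t = rho.copy()
    rho_t[0, 1] *= decay
    rho_t[1, 0] *= decay
    print(f"t={t}: diag={np.diag(rho_t)}, |offdiag|={abs(rho_t[0, 1]):.3f}")
# The diagonal stays (0.5, 0.5): a fair classical coin, never one definite side.
```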
3:02 AM
Yo anyone around here good at classical Hamiltonian physics? You can kinda express Hamilton's equations of motion like this: $$ \frac{d}{dt} \left( \begin{array}{c} x \\ p \end{array} \right) = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) \nabla H(x,p) \, .$$ Is there a decent way to understand coordinate transformations in this representation? (By the way, the incredible similarity between that equation and Schrodinger's equation is pretty cool. That matrix there behaves a lot like the complex unit $i$ in that it is a rotation by 90 degrees and has eigenvalues $\pm i$.)

3:41 AM
schroedingers eqn has strong parallel(s) to the wave eqn of classical physics (part of its inception) but it seems this connection is rarely pointed out / emphasized / seriously investigated anywhere...

2 hours later…
5:54 AM
Q: Connection between Schrödinger equation and heat equation
Kevin Kwok: If we do the Wick rotation such that τ = it, then the Schrödinger equation, say of a free particle, does have the same form as the heat equation. However, it is clear that it admits the wave solution so it is sensible to call it a wave equation. Whether we should treat it as a wave equation or a heat e...

3 hours later…
8:26 AM
@JohnRennie physics.stackexchange.com/questions/468349/… What Georgio said. Can't you add additional dupe targets? Gold badgers on SO can, but maybe they haven't extended that functionality to Physics.SE.
@PM2Ring fixed! :-)
That was quick! Thanks.

9:04 AM
This question prompted me to do a price check on osmium. I was surprised that it's relatively cheap compared to the rest of the platinum group, but that seems to be because it has a small & steady supply & demand, so it's not attractive to traders in precious metals. I guess the toxicity of its oxide (& other tetravalent compounds) is also a disincentive. ;) Wikipedia says it sells for around $1000 USD per troy ounce, but other sites quote a starting price of $400. These guys sell nice looking ingots of just about every metal you can think of, apart from the alkalis & radioactives. I think I'll pass on the osmium & iridium, but I suppose I could afford a 1 oz tungsten ingot. :) Sure, it's not quite as dense as osmium / iridium, but its density is still pretty impressive.

1 hour later…
10:25 AM
@DanielSank the term you want to look for is "complexification" of a symplectic manifold
also this: In mathematics, a complex structure on a real vector space V is an automorphism of V that squares to the minus identity, −I. Such a structure on V allows one to define multiplication by complex scalars in a canonical fashion so as to regard V as a complex vector space. Every complex vector space can be equipped with a compatible complex structure; however, there is in general no canonical such structure. Complex structures have applications in representation theory as well as in complex geometry, where they play an essential role in the definition of almost complex manifolds...
I thought Arnol'd had a more in-depth discussion, but there's only a brief mention in §41.E

1 hour later…
11:44 AM
I just don't get why someone would think that proving a mathematical thing like Pythagoras' theorem is something that physics can do. physics.stackexchange.com/questions/468504/… I guess it'd be reasonable in Euclid's day, or even Newton's, but certainly not since the development of non-Euclidean geometries.

2 hours later…
1:33 PM
Hi. Am I wrong about this solution?
this question is from tensor algebra
Hi, @Leyla. I hope you don't think my previous reply is rude, but it's much better if you write equations in MathJax. My old eyes can barely read the equations in that photo, especially on my phone. And MathJax is a lot easier to search than equations in images.
@PM2Ring Ohh sure, if that's the case then I can type them in MathJax
There are bookmarklets & extensions that can be used to render MathJax in chatrooms. Stack Exchange won't build it into chatrooms because they don't want to impose the overhead on chat users...
@LeylaAlkan in a word, yes, your formulation is incorrect
unless there's some crucial bit of context that you've omitted, the tensor cannot be assumed to be symmetric
indeed, if $t_{ijk}$ were indeed totally symmetric, then $t_{[ijk]}$ would be identically zero and there would be no need to consider it
you're correct as far as $$t_{[321]} + t_{(321)} = \frac{2}{3!} \left[ t_{321} + t_{213} + t_{132} \right] $$ goes, but that's as far as you can take the calculation
this is enough to ensure that $t_{321} \neq t_{[321]} + t_{(321)} $ for an arbitrary rank-three tensor
particularly because it is perfectly possible for there to exist a rank-three tensor $t$ and a reference frame $R$ such that the components of $t$ on $R$ are such that $t_{321}=1$ and the rest of its components vanish.

2:08 PM
> Write out $t_{(321)}$ and $t_{[321]}$. Show that $t_{321}\neq t_{(321)}+t_{[321]}$

My solution:
$t_{(321)}=\frac 1 {3!}(t_{321}+t_{312}+t_{231}+t_{213}+t_{132}+t_{123})$
$t_{[321]}=\frac 1 {3!}(t_{321}-t_{312}-t_{231}+t_{213}+t_{132}-t_{123})$
$t_{(321)}+t_{[321]}=\frac 1 {3!}2(t_{321}+t_{213}+t_{132})$
Since the $(3,0)$ tensor $t_{ijk}$ is totally symmetric, it's independent of the ordering of indices. So $t_{(321)}+t_{[321]}=\frac 1 {3!}2(t_{321}+t_{321}+t_{321})=t_{321}$
That's how I did it first. For @PM2Ring
Oh okay, great @EmilioPisanty

2:32 PM
@LeylaAlkan tensors are just vectors in a vector space. It's extremely important that you understand how these linear-independence and linearity arguments work, and that you get comfortable producing them when they're needed.
i.e. the core take-home message you should be extracting from this is how the counter-example was generated and why it works.

1 hour later…
3:47 PM
@JohnRennie what do you mean by superposition?
4:08 PM
@Akash.B it's like position only better

1 hour later…
5:33 PM
@DanielSank What exactly do you want to understand? Any canonical transformation is just going to leave that equation unchanged, right?

6:19 PM
Why are superstrings so hard
@EmilioPisanty Excellent
@ACuriousMind I suppose, but I'm trying to see it algebraically. In some sense I'm asking how to represent a canonical transformation in the notation used in my comment.

6:54 PM
@DanielSank in general? it'll just be an arbitrary function
your notation won't be helped much
the case where it gets interesting is if you want a linear transformation, in which case it's required to be symplectic
does that keyword get you closer to the core of your question?

Q: When is separating the total wavefunction into a space part and a spin part possible?
mithusengupta123: The total wavefunction of an electron $\psi(\vec{r},s)$ can always be written as $$\psi(\vec{r},s)=\phi(\vec{r})\zeta_{s,m_s}$$ where $\phi(\vec{r})$ is the space part and $\zeta_{s,m_s}$ is the spin part of the total wavefunction $\psi(\vec{r},s)$. In my notation, $s=1/2, m_s=\pm 1/2$. Questio...

in other news, this random thing has been on HNQ for most of the day
colour me perplexed.
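The counter-example discussed above is quick to verify numerically. A minimal sketch (mine, not from the chat), using the same 1/3! conventions; the only nonzero component is $t_{321} = 1$, written with zero-based indices:

```python
import itertools
import numpy as np

# Check that t_321 != t_(321) + t_[321] for Emilio's counter-example tensor.
t = np.zeros((3, 3, 3))
t[2, 1, 0] = 1.0   # zero-based indices: t_{321} = 1, everything else 0

def perm_sign(p):
    # sign of a permutation of (0, 1, 2) by counting inversions
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

def sym(t, idx):
    return sum(t[tuple(idx[i] for i in p)]
               for p in itertools.permutations(range(3))) / 6.0

def antisym(t, idx):
    return sum(perm_sign(p) * t[tuple(idx[i] for i in p)]
               for p in itertools.permutations(range(3))) / 6.0

idx = (2, 1, 0)                               # the component "321"
print(t[idx], sym(t, idx) + antisym(t, idx))  # 1.0 vs 0.333...: not equal
```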
I mean, not that I don't appreciate the rep-cap hit, but still
I do look forward to the gibbous-moon question getting flooded with outsiders, though =P

is the weak isospin (quantum number) the so-called flavor (quantum number)?
7:28 PM
or does flavor (quantum number) also involve weak hypercharge (quantum number)?
7:56 PM
I don't know if there is such a rule that only particles with nonzero flavor undergo the weak interaction. I read in [Wikipedia - Weak isospin](https://en.wikipedia.org/wiki/Weak_isospin) that "Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have $T = T_3 = 0$ and form singlets that do not undergo weak interactions." and "... all the electroweak bosons have weak hypercharge $Y_w = 0$, so unlike gluons and the color force, the electroweak bosons are unaffected by the force they mediate." But $W^+$ has weak isospin 1 and $W^-$ has weak isospin -1, not zero, so they should participate in the weak interaction. So I am confused as to which quantum number determines whether a particle participates in the weak interaction.

@EmilioPisanty I guess I'm asking how to transform the gradient. Suppose I pick new variables that are related to the previous ones through a linear transformation. I know what to do on the left-hand side, but on the right I have to do something to the gradient.

8:16 PM
When two harmonic waves going in opposite directions collide, do they completely cancel each other out?

1 hour later…
9:26 PM
can we really assume spinors are more fundamental than vectors?
if a manifold by chance doesn't admit spin structures, can we still assume spinors are more fundamental than vectors?
but if a manifold doesn't admit spin structures, how do you discuss fermions?

1 hour later…
10:47 PM
@DanielSank that's what the chain rule is for, right?
11:09 PM
@EmilioPisanty Yeah yeah fine I get the point. "Do the damned calculation yourself."
11:44 PM
@CaptainBohemian If a manifold does not admit spinors, you don't discuss spinors.
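DanielSank's gradient question has a compact answer for linear changes of coordinates: with $z = (x, p)$ and $z' = Az$, the chain rule gives $\nabla_z H = A^T \nabla_{z'} H'$, so $dz'/dt = A J A^T \nabla_{z'} H'$, and the equation keeps its form exactly when $A J A^T = J$, i.e. when $A$ is symplectic. A minimal numerical sketch of that condition (the example matrices are my own, not from the chat):

```python
import numpy as np

# J is the matrix from the Hamilton's-equations message above.
# It behaves like the imaginary unit: J^2 = -I, eigenvalues +/- i.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
print(np.allclose(J @ J, -np.eye(2)))   # True
print(np.linalg.eigvals(J))             # [ i, -i ]

# A linear map z -> A z is canonical iff A J A^T = J.
A = np.diag([2.0, 0.5])                 # squeeze: x -> 2x, p -> p/2
B = np.diag([2.0, 1.0])                 # rescale x only
print(np.allclose(A @ J @ A.T, J))      # True: canonical
print(np.allclose(B @ J @ B.T, J))      # False: not canonical
```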
Schrödinger's wave equation

A linear, homogeneous partial differential equation that determines the evolution with time of a quantum-mechanical wave function. Quantum mechanics was developed in the 1920s along two different lines, by W. Heisenberg and by E. Schrödinger. Schrödinger's approach can be traced to the notion of wave-particle duality that flowed from A. Einstein's association of particlelike energy bundles (photons, as they were later called) with electromagnetic radiation, which, classically, is a wavelike phenomenon. For radiation of definite frequency f, each bundle carries energy hf. The proportionality factor, h = 6.626 × 10⁻³⁴ joule-second, is a fundamental constant of nature, introduced by M. Planck in his empirical fit to the spectrum of blackbody radiation. This notion of wave-particle duality was extended in 1923 by L. de Broglie, who postulated the existence of wavelike phenomena associated with material particles such as electrons. See Photon, Wave mechanics

There are certain purely mathematical similarities between classical particle dynamics and the so-called geometric optics approximation to the propagation of electromagnetic signals in material media. For the case of a single (nonrelativistic) particle moving in a potential V(r), this analogy leads to the association with the system of a wave function, Ψ(r), which obeys Eq. (1),

$$-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r}) + V(\mathbf{r})\,\Psi(\mathbf{r}) = E\,\Psi(\mathbf{r}). \qquad (1)$$

Here m is the mass of the particle, E its energy, ℏ = h/(2π), and ∇² is the Laplacian operator. See Geometrical optics

It is possible to ask what more general equation a time- as well as space-dependent wave function, Ψ(r, t), might obey. What suggests itself is Eq. (2), which is now called the Schrödinger equation:

$$i\hbar\,\frac{\partial\Psi(\mathbf{r},t)}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r})\,\Psi(\mathbf{r},t). \qquad (2)$$

The wave function can be generalized to a system of more than one particle, say N of them. A separate wave function is not assigned to each particle. Instead, there is a single wave function, Ψ(r₁, r₂, …, r_N, t), which depends at once on all the position coordinates as well as time. This space of position variables is the so-called configuration space. The generalized Schrödinger equation is Eq. (3),

$$i\hbar\,\frac{\partial\Psi}{\partial t} = \sum_{j=1}^{N}\left(-\frac{\hbar^2}{2m_j}\nabla_j^2\Psi\right) + V(\mathbf{r}_1,\dots,\mathbf{r}_N)\,\Psi, \qquad (3)$$

where the potential V may now depend on all the position variables. Three striking features of this equation are to be noted:

1. The complex number i (the square root of minus one) appears in the equation. Thus Ψ is in general complex.

2. The time derivative is of first order. Thus, if the wave function is known as a function of the position variables at any one instant, it is fully determined for all later times.

3. The Schrödinger equation is linear and homogeneous in Ψ, which means that if Ψ is a solution so is cΨ, where c is an arbitrary complex constant. More generally, if Ψ₁ and Ψ₂ are solutions, so too is the linear combination c₁Ψ₁ + c₂Ψ₂, where c₁ and c₂ are arbitrary complex constants. This is the superposition principle of quantum mechanics. See Superposition principle

The Schrödinger equation suggests an interpretation in terms of probabilities. Provided that the wave function is square integrable over configuration space, it follows from Eq. (3) that the norm, ⟨Ψ|Ψ⟩, is independent of time, where the norm is defined by Eq. (4),

$$\langle\Psi|\Psi\rangle = \int |\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,t)|^2\, d^3x_1\cdots d^3x_N. \qquad (4)$$

It is possible to normalize Ψ (multiply it by a suitable constant) to arrange that this norm is equal to unity.
With that done, the Schrödinger equation itself suggests that expression (5),

$$|\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N,t)|^2\, d^3x_1\cdots d^3x_N, \qquad (5)$$

is the joint probability distribution at time t for finding particle 1 in the volume element d³x₁, particle 2 in d³x₂, and so forth.
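The first-order-in-time evolution and the time-independent norm of Eq. (4) lend themselves to a quick numerical check. Below is a minimal sketch (not part of the encyclopedia entry) using the unitary Crank-Nicolson discretization for a free particle on a grid, with ℏ = m = 1 and illustrative grid parameters:

```python
import numpy as np

# Crank-Nicolson step for the free 1D Schrödinger equation (hbar = m = 1):
# (I + i H dt/2) psi_new = (I - i H dt/2) psi_old. The step is unitary,
# so the norm <psi|psi> stays constant, as stated above.
n, dx, dt = 400, 0.1, 0.01
x = (np.arange(n) - n // 2) * dx

# kinetic energy -(1/2) d^2/dx^2 as a tridiagonal matrix
H = (np.diag(np.full(n, 1.0 / dx**2))
     - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

A = np.eye(n) + 0.5j * dt * H
B = np.eye(n) - 0.5j * dt * H
step = np.linalg.solve(A, B)            # one Crank-Nicolson time step

psi = np.exp(-x**2 + 2j * x)            # Gaussian wave packet, momentum 2
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(200):
    psi = step @ psi
print("norm after 200 steps:", np.sum(np.abs(psi)**2) * dx)  # ~1.0
```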
Open access peer-reviewed chapter

Nonlinear Schrödinger Equation

By Jing Huang. Submitted: May 9th 2018. Reviewed: August 23rd 2018. Published: December 10th 2018. DOI: 10.5772/intechopen.81093

Firstly, based on small-signal analysis theory, the nonlinear Schrödinger equation (NLSE) with fiber loss is solved. The method is also adapted to the NLSE with high-order dispersion terms. Furthermore, a general theory of cross-phase modulation (XPM) intensity fluctuation, adapted to all kinds of modulation formats (continuous wave, non-return-to-zero wave, and return-to-zero pulse wave), is presented. Secondly, by the Green function method, the NLSE is solved directly in the time domain. It does not bring in any spurious effect, in contrast to the split-step method, in which the step size has to be carefully controlled. Additionally, the fourth-order dispersion coefficient of fibers can be estimated from the Green function solution of the NLSE. The fourth-order dispersion coefficient varies slightly with distance and is about 0.002 ps⁴/km, 0.003 ps⁴/km, and 0.00032 ps⁴/km for SMF, NZDSF, and DCF, respectively. In the zero-dispersion regime, the higher-order nonlinear effect (higher than self-steepening) has a strong impact on the short-pulse shape, but this effect degrades rapidly with the increase of β₂. Finally, based on the traveling wave solution of the NLSE for ASE noise, the probability density function of ASE, obtained by solving the Fokker-Planck equation including the dispersion effect, is presented.

• small-signal analysis
• Green function
• traveling wave solution
• Fokker-Planck equation
• nonlinear Schrödinger equation

1. Introduction

Numerical simulation and analytical models of the nonlinear Schrödinger equation (NLSE) play important roles in the design optimization of optical communication systems. They help in understanding the underlying physical phenomena of ultrashort pulses in nonlinear and dispersive media. The inverse scattering [1], variational, and perturbation methods [2] can obtain analytical solutions under some special conditions. These include the inverse scattering method for classical solitons [3], the dam-break approximation for non-return-to-zero pulses with extremely small chromatic dispersion [4], and perturbation theory for the multidimensional NLSE in the field of molecular physics [5]. When a large nonlinear phase is accumulated, the Volterra series approach has been adopted [6]. With the assumption of perturbations, the NLSE with varying dispersion, nonlinearity, and gain or absorption parameters was solved in [7]. In [8], the generalized Kantorovitch method was introduced for the extended NLSE. By introducing Rayleigh's dissipation function in the Euler-Lagrange equation, an algebraic modification projected the extended NLSE as a frictional problem and successfully solved soliton transmission problems [9]. Since the numerical solution of the NLSE is a hugely time-consuming process, fast algorithms and efficient implementations, focusing on (i) an accurate numerical integration scheme and (ii) an intelligent control of the longitudinal spatial step size, are required. The finite-difference method [10] and the pseudo-spectral method [11] were adopted to increase accuracy and efficiency and to suppress numerically induced spurious effects. The adaptive spatial step-size-controlling method [12] and the predictor-corrector method [13] were proposed to speed up the implementation of the split-step Fourier method (SSFM).
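As a concrete reference point for the SSFM discussed here, the following is a minimal symmetric split-step sketch for the scalar NLSE with loss, ∂u/∂z = −(iβ₂/2)∂²u/∂t² + iγ|u|²u − (α/2)u. It is my own illustration, not code from the chapter, and all parameter values are assumptions chosen to be roughly SMF-like:

```python
import numpy as np

# Minimal symmetric split-step Fourier sketch for the NLSE with loss.
T, nt = 100.0, 2**12                 # time window (ps), samples
dt = T / nt
t = (np.arange(nt) - nt // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)   # angular frequencies (rad/ps)

beta2 = -21.7e-3                     # ps^2/m (anomalous dispersion)
gamma = 1.3e-3                       # 1/(W m)
alpha = 0.21 / 4.343e3               # 0.21 dB/km converted to 1/m
dz, nz = 10.0, 1000                  # step size (m), number of steps

u = np.sqrt(0.5) / np.cosh(t / 10.0)  # sech input pulse, 0.5 W peak
half_D = np.exp((0.5j * beta2 * omega**2 - alpha / 2) * dz / 2)

for _ in range(nz):
    u = np.fft.ifft(half_D * np.fft.fft(u))          # half dispersion step
    u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)   # full nonlinear step
    u = np.fft.ifft(half_D * np.fft.fft(u))          # half dispersion step

print("output peak power:", np.abs(u).max()**2, "W")
```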
The cubic (or higher-order) B-splines were used to handle nonuniformly sampled optical pulse profiles in the time domain [14]. The Runge-Kutta method in the interaction picture was applied to calculate the effective refractive index, effective area, dispersion, and nonlinear coefficients [15]. Recently, the generalized NLSE, taking into account the dispersion of the transverse field distribution, was derived [16]. By an inhomogeneous quasi-linear first-order hyperbolic system, accurate simulations of the intensity and phase of Schrödinger-type pulse propagation were obtained [17]. It has been demonstrated that modulation instability (MI) can exist in the normal GVD regime in the higher-order NLSE in the presence of non-Kerr quintic nonlinearities [18].

In this chapter, several methods to solve the NLSE are presented: (1) the small-signal analysis theory and split-step Fourier method to solve the coupled NLSE problem; the MI intensity fluctuation caused by SPM and XPM can be derived, the procedure is also adapted to the NLSE with high-order dispersion terms, the impact of fiber loss on the MI gain spectrum can be discussed, and the initial stage of MI, and then the whole evolution of MI, can be described in this way; (2) the Green function to solve the NLSE in the time domain; with this solution, the second-, third-, and fourth-order dispersion coefficients are discussed; and (3) the traveling wave solution to solve the NLSE for ASE noise and its probability density function.

2. Small-signal analysis solution of NLSE for MI generation

2.1 Theory for continuous wave

The NLSE governing the field in a nonlinear and dispersive medium is

$$\frac{\partial u}{\partial z} = -\beta_1\frac{\partial u}{\partial t} - \frac{i\beta_2}{2}\frac{\partial^2 u}{\partial t^2} + i\gamma|u|^2 u - \frac{\alpha}{2}u,$$

where β₁ and β₂ are the dispersion coefficients, γ is the nonlinear coefficient, and α is the fiber loss. In the frequency domain, the solution over a step dz is u(z+dz, ω) = exp[(D̂+N̂)dz] u(z, ω), where D̂ = (i/2)ω²β₂ + iωβ₁ − α/2 and N̂ = iγ|u|² [19] (Figure 1).

Figure 1. Schematic illustration of the medium. u(z,t) and u(z+dz,t) correspond to the field amplitudes at z and z+dz, respectively.

Usually, the field amplitude carries a nonlinear phase φ(z,ω), caused by the nonlinear effect, with φ(z,ω) = ∫₀ᶻ γ[P(z,ω) + 2P′(z,ω)]dz [3]. Assuming P(z,ω) = P(z) + ΔP(z,ω), where P(z) is the average signal intensity and ΔP(z,ω) is the noise or modulation term, there is P(z) ≫ ΔP(z,ω) [20].

The small-signal theory implies that the frequency modulation or noise φ̇(z+dz,ω) = dφ(z+dz,ω)/dt is small enough. Finally ([21]), the operation exp[iωβ₁dz + (i/2)ω²β₂dz] can be split into its real and imaginary parts, and the modulation or noise term is ΔP(z+dz,ω) = P(z+dz,ω) − P(z+dz).

When only intensity modulation is present and no phase modulation exists, the transfer function cos[(1/2)β₂ω²dz] is obtained. The 3 dB cutoff frequency corresponds to (1/2)β₂ω²dz = π/4, as in [22, 23]. This treatment is also adaptable to the case in which only the nonlinear phase (frequency) modulation is present; the intensity modulation ΔP(z+dz,ω) due to FM-IM conversion is then obtained. This is in very good agreement with [24] for a small phase-modulation index. Even for a large modulation index, (1/2)β₂ω²dz = π/2, the difference is within 0.5 dB. Eq. (10) does not include a Bessel function, so it is simpler than the result in [24]. Obviously, the above process can be used to treat the NLSE with higher-order dispersion (β₃, β₄) [25]; the result corresponding to Eq. (10) will then include ω³ and ω⁴ terms. The corresponding MI gain g_MI in the sidebands of ω₀ (the frequency of the signal) is given by Eq. (11).

Figure 2. MI gain spectra. +++ result of small-signal analysis. ---- result of perturbation approach.
The parameters are P₀ = 10 dBm, β₂ = 15 ps²/km, λ = 1550 nm, a = 0.21 dB/km, γ = 0.015 W⁻¹/m, and z = 0 m. Figure 2 shows a comparison of the gain spectra between Eq. (11) and [6] for the case P(z)/P₀ = 1. The maximum frequency-modulation index caused by dispersion corresponds to (1/2)β₂ω²dz = π [22, 23], and the maximum extent of the sideband is ω_c = √(4γP(z)/β₂), so dz is chosen to satisfy (1/2)β₂ω²dz = π, which makes Eq. (11) cover the same frequency regime as [26]. In Figure 2, the curves are different but have the same maximum value of g_MI. In practice, researchers generally use the maximum value of g_MI to estimate the amplified noise and SNR [3]. The result of the small-signal analysis in Figure 2 has a phase delay around ω₀. Compared with the experimental result of [27], the reason is that, when the fiber loss is taken into account, the gain spectrum exhibits a phase delay close to ω₀ and the curve descends a little [27]. Fiber loss is responsible for the difference in g_MI between the small-signal analysis method and the perturbation approach.

2.2 The general theory of cross-phase modulation (XPM) intensity fluctuation

For the general case of two channels, the input optical powers are denoted by P₁(t) and P₂(t), respectively [28]. Only within the first walk-off length is the nonlinear interaction (XPM) taken into account; in the remaining fiber, the signals propagate linearly, and dispersion acts on the phase-modulated signal, resulting in intensity fluctuation. According to [4], the whole length L is separated into two parts, 0 < z < L_wo and L_wo < z < L, where L_wo is the walk-off length, L_wo = Δt/(DΔλ). Here Δt is the edge duration of the carrier wave, D is the dispersion coefficient, and Δλ is the wavelength spacing between the channels. By the small-signal analysis, the phase modulation in channel 1 originating in dz at z can be expressed as in Eq. (12). This phase shift is converted to an intensity fluctuation through the group-velocity dispersion (GVD) from z to the receiver. So, at the fiber output, the intensity fluctuation originating in dz is given in the frequency domain by Eq. (13) [29], with ⊗ representing the convolution operation and b = ω²Dλ²/(4πc), where c is the speed of light. At the fiber output, the XPM-induced intensity fluctuation is the integral of Eq. (13) with z ranging from 0 to L. The walk-off between co-propagating waves is regulated by the convolution operation.

3. Green function method for the time-domain solution of NLSE

3.1 NLSE including the resonant and nonresonant cubic susceptibility tensors

From Maxwell's equations, the field in fibers satisfies a wave equation in which E is the vector field and χ⁽¹⁾ is the linear susceptibility; P_L and P_NL represent the linear and nonlinear induced fields, respectively [30]. The cubic susceptibility tensor includes the resonant and nonresonant terms; Γ and a are the attenuation and absorption coefficients, respectively [31]. Repeating the process of [3] with E = F(x,y)A(z,t)exp(iβz), there is k₀ = ω₀/c, where ω₀ is the center frequency, A_eff is the effective core area, and n is the refractive index. The last term is responsible for the Raman scattering, self-frequency shift, and self-steepening originating from the delayed response, where g(ω₁+ω₂+ω₃) is the Raman gain and f(ω₁+ω₂+ω₃) is the Raman non-gain coefficient.

3.2 The solution by Green function

The solution has the form of a modulated carrier. Taking the operator V̂(t) as a perturbation item, we first solve the eigen equation

$$\sum_{n=2}^{k}\frac{i^n}{n!}\,\beta_n\,\frac{\partial^n\varphi}{\partial T^n} = E\varphi.$$

Assuming E = 1, we get the corresponding characteristic equation. Its characteristic roots are r₁, r₂, r₃.
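Eq. (11) itself is not reproduced in this extract, but the lossless textbook MI gain, g(Ω) = |β₂Ω|·√(Ω_c² − Ω²) with Ω_c² = 4γP₀/|β₂|, is easy to evaluate for comparison. A minimal sketch using the Figure 2 power and nonlinearity, with β₂ assumed anomalous (negative) of the quoted magnitude, since MI in this simple form requires β₂ < 0:

```python
import numpy as np

# Lossless textbook MI gain for a CW pump. This is the standard
# expression, not the chapter's Eq. (11).
beta2 = -15e-3               # ps^2/m (assumed anomalous, magnitude from Fig. 2)
gamma = 0.015                # 1/(W m)
P0 = 1e-3 * 10**(10 / 10)    # 10 dBm -> 10 mW

Wc = np.sqrt(4 * gamma * P0 / abs(beta2))        # cutoff frequency (rad/ps)
W = np.linspace(-1.2 * Wc, 1.2 * Wc, 801)
g = np.abs(beta2 * W) * np.sqrt(np.clip(Wc**2 - W**2, 0.0, None))

# Peak gain 2*gamma*P0 occurs at W = +/- Wc/sqrt(2).
print(Wc, g.max(), 2 * gamma * P0)
```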
The solution can be represented as a superposition of the eigenfunctions, where φ_m = exp(i r_m t), m = 1, 2, 3, and c₁, c₂, c₃ are determined by the initial pulse. The Green function of (30) is then obtained by the construction method; at the point t = t′ the matching conditions hold. Letting b₁ = b₂ = b₃ = 0, the solution of (27) can finally be written in terms of the eigenfunctions and the Green function. The accuracy can be estimated from the last item of (40). The algorithm is plotted in Figure 3.

Figure 3. The Green algorithm for solving the NLSE.

3.3 Estimation of the fourth-order dispersion coefficient β₄

The NLSE governing the wave's transmission in fibers is the standard equation extended with the self-steepening term, where s is the self-steepening parameter. In the frequency domain, its solution is u(z,ω) = exp[(D̂+N̂)z]u(0,ω), where D̂ = (i/2)ω²β₂ − (i/6)ω³β₃, N̂ = Γe^{−2αz}|u|² + is ∂(|u|²)/∂t, and F represents the Fourier transform [32]. Let L̂ = ∂_z − D̂ − N̂ and L̂G(z,z′,ω) = δ(z−z′); we obtain the Green function G. Constructing the iteration β₃ = β₃⁰ + δβ₃, u(z,ω) = u⁰(z,ω) + δu(z,ω), there is a source term Z(z′,ω) = (i/6)δβ₃(z′)ω³u⁰(z′,ω), where u⁰(z′,ω; β₃⁰) is determined by (42). The minimum of δu(z,ω) satisfies ∂δu(z,ω)/∂ω = 0 and ℜ[∂²δu(z,ω)/∂ω²] > 0. Next, we take the higher-order nonlinear effect into account. Constructing another iteration related to δγ, γ = γ⁰ + δγ and u(z,ω) = u⁰(z,ω) + δu(z,ω), and repeating the above process, we obtain the corresponding correction.

Now we can simulate the pulse shape affected by high-order dispersive and nonlinear effects. Assume L_D = t₀²/β₂ and u⁰(t) = ∫₋∞⁺∞ u⁰(ω)exp(iωt)dω = u₀ exp(−t²/2t₀²). Firstly, we see what is induced by the items δβ₃ and δγ. To bring out their impact, we choose small values for the other parameters in Figures 4 and 5. The deviation between the red and black lines in Figure 4(a) indicates the impact of δβ₃ and δγ; that is, they induce a symmetrical split of the pulse. This split does not belong to the SPM-induced broadening oscillation spectrum or the β₃-induced oscillation in the trailing edge of the pulse, because here γ is very small and β₃ = 0 [3]. The self-steepening effect, attributable to is ∂(|u|²u)/∂t, is also shown explicitly by the black line. When we reduce the value of s to 0.0001 in (b), the symmetry of the split pulse is improved.

Figure 4. The pulse shapes with and without δβ₃ and δγ. Red line: without δβ₃ and δγ; black line: with δβ₃ and δγ. ν = ω/2π, β₃⁰ = 0 ps³/km, γ = 1.3×10⁻² /km/W, t₀ = 80 fs, z = 3.7 t₀²/β₂, β₂ = 21.7/150 ps²/km, u₀ = √(β₂/(γt₀²)). (a) s = 0.01 and (b) s = 0.0001.

Figure 5. The evolution of the pulse. Red line: without δγ; black line: with δβ₃ and δγ. (a) s = 0, γ = 1.3×10⁻⁴ /km/W; (b) s = 0.01, γ = 1.3×10⁻⁴ /km/W; (c) s = 0.01, γ = 1.3 /km/W. Other parameters are the same as in Figure 4.

Is the pulse split in Figure 4(a) caused by δβ₃ or by δγ? The red lines in Figure 5 describe the evolution of the pulse affected by very small second-order dispersion and nonlinear (including self-steepening) coefficients. Here, δβ₃ induces the symmetrical split of the pulse, and the maximum peaks of the split pulse alternate, moving from the spectral center to the edge and back to the center again. Therefore, its effect is equivalent to that of the fourth-order dispersion β₄ [33, 34, 3]. From the deviation between the red and black lines in Figure 5, we can also detect the impact of δγ. It only accelerates the pulse's split when the self-steepening effect is ignored (s = 0 in Figure 5(a)). This is similar to the self-phase-modulation-induced spectral broadening and oscillation. That a high nonlinearity γ accelerates the pulse's split is validated in [35, 36]. If s ≠ 0 (Figure 5(b)), δγ simultaneously leads to a redshift of the split pulse. Generally, we do not take δγ into account, so we should clarify in which cases it creates an impact. Comparing (c) with (b) in Figure 5, the red lines change little, which means that δβ₃ has little relationship with γ.
But with the increase of γ (Figure 5(c)), the redshift of the split pulse is strengthened, so δγ is related to γ. In Figure 6, the pulse is not split until z = 9 L_D, and the black line with δγ is completely overlapped by the red line without δγ; the high second-order dispersion β₂ causes the impact of δγ to be covered and the impact of δβ₃ to be weakened. Therefore, only in the zero-dispersion regime should δγ be taken into account in the simulation of the pulse shape.

Figure 6. The pulse shapes with and without δγ. β₂ = 21.7 ps²/km, s = 0.01, γ = 1.3 /km/W. Other parameters are the same as in Figure 5.

So we can utilize δβ₃ to determine the fourth-order dispersion coefficient β₄. Fiber parameters are listed in Table 1. The process is shown in Figure 7, and the dispersion operator including β₄ is D̂ = (i/2)ω²β₂ − (i/6)ω³β₃ + (i/24)ω⁴β₄.

Table 1. Fiber parameters.

Figure 7. The process of calculating β₄.

Table 2 lists the averages of β₄ at Z = 1.5 L_D, Z = 5 L_D, and Z = 50 L_D. They differ from those determined by FWM or MI, where β₄ is related to the power and the broadening frequency [35, 36]. By our method, the fourth-order dispersion is also a function of distance, and every type of fiber has its own average β₄, which reveals a characteristic of the fiber. These values are similar to the experimental results in highly nonlinear fibers [35, 36]. Although we take the higher-order nonlinear effect δγ into account, which enhances the symmetrical split and redshift of the pulse, the items isδγ∂(|u|²u)/∂t and iδγe^{−2αz}|u|²u make a very tiny contribution to β₄, only of the order of 10⁻²⁶ ps⁴/km for typical SMF. Here, the impact of δγ is hidden by the relatively strong β₂.

Table 2. The average β₄. Units: ps⁴/km.

4. Traveling wave solution of NLSE for ASE noise

4.1 The in-phase and quadrature components of ASE noise

The field including the complex envelopes of signal and ASE noise is a sum over channels, where u_l(z,t) and A_l(z,t) are the complex envelopes of the signal and ASE noise, respectively [37, 38], and N is the channel number. The ASE noise generated in erbium-doped fiber amplifiers (EDFAs) is A_l(0,t) = A_lR(0,t) + iA_lI(0,t), where A_lR(0,t) and A_lI(0,t) are statistically independent, real, stationary white Gaussian processes with

$$\langle A_{lR}(0,t+\tau)A_{lR}(0,t)\rangle = \langle A_{lI}(0,t+\tau)A_{lI}(0,t)\rangle = n_{sp}\,h\nu_l\,(G_l-1)\,\Delta\nu_l\,\delta(\tau).$$

In the complete-inversion case, n_sp = 1; h is the Planck constant, and G_l is the gain for channel l. Substituting Eq. (47) into (1), we obtain the equation that A_l(z,t) satisfies, so that the in-phase and quadrature components of ASE noise obey (49) and (50). We now seek their traveling wave solution by taking [37] A_lR = φ(ξ), A_lI = ψ(ξ), and ξ = t − cz. Then (49) and (50) are converted into (51) and (52). Eq. (52) is differentiated with respect to ξ; replacing φ and φ‴ in (53) with (51) and the derivative of (51) gives (54), and from (51) and (54) the solution is easily obtained. In the above calculation, B, c, and k should be regarded as constants, and A_lR and A_lI are functions of the single variable ξ.

4.2 Probability density function of ASE noise

Because A_lR and A_lI have been solved, the time differentials of (49) and (50) can be calculated. Thus, the stochastic differential equations (Itô forms) for A_lR and A_lI are obtained. Now they can be regarded as stationary equations, and we can obtain their probabilities according to Sections (7.3) and (7.4) in [39]. By solving the corresponding Fokker-Planck equations of (60) and (61), the probabilities of ASE noise are obtained; the constants C and C′ are determined by the normalization condition ∫₋∞⁺∞ p = 1. Compared with [40], these probabilities of ASE noise take the dispersion effect into account. This is the first time that the p.d.f. of ASE noise simultaneously including dispersion and nonlinear effects has been presented.
Eqs. (66) and (67) apply to models of Gaussian and correlated non-Gaussian processes such as our (49) and (50). Obviously, the Gaussian distribution has been distorted. The distributions are no longer symmetrical, and both have phase shifts, consistent with [40], whose authors had expected that "if the dispersion effect was taken into account, the asymmetric modulation side bands occur." The reasons are that the item iβ₂ω_l ∂A_l(z,t)/∂t in (48) brings the phase shift, while the item (β₂/2)∂²A_l(z,t)/∂t² brings the expansion and induces the sidebands, the self-phase modulation effects, and the cross-phase modulation effects. Their combined impact is amplified in (66) and (67) and results in completely non-Gaussian distributions.

5. Conclusion

The NLSE is solved with small-signal analysis for the study of MI, and the method can be extended to all signal formats. The equation can also be solved by introducing the Green function in the time domain, which serves as a tool for estimating high-order dispersion and nonlinear coefficients. For the conventional fibers SMF, NZDSF, and DCF, the higher-order nonlinear contribution to β₄ can be neglected; it can be deduced that each effect has little impact on the estimation of the other coefficients. The Green function can also be used for solving the 3+1-dimensional NLSE. By the traveling wave method, the p.d.f. of ASE noise can be obtained, which provides a method for the calculation of ASE noise in WDM systems. The properties of MI, pulse fission, coefficient values, and the probability density function of ASE noise are discussed as demonstrations of the theories.

How to cite this chapter: Jing Huang (December 10th 2018). Nonlinear Schrödinger Equation. In: Boris I. Lembrikov (Ed.), Nonlinear Optics - Novel Results in Theory and Applications. IntechOpen. DOI: 10.5772/intechopen.81093.
Payback time equation physics

If you pay £6000 for double-glazed windows and save £200 a year on heating bills, then it would take 30 years for the double glazing to pay for itself. The equation is: PAYBACK TIME = COST OF INSTALLATION ÷ ANNUAL SAVING.

How long is the payback period? Work out how much is saved each year by having the new device installed, divide the total cost of the outlay by that yearly saving, and your answer is the number of years it takes for you to be 'paid back'. These questions are basically maths questions... no physics in sight! They can wrap them up in all sorts of ways.

The payback period calculation is simple: Investment ÷ Annual Net Cash Flow From Asset. It can get a bit tricky when annual net cash flow is expected to vary from year to year. If that's the case, ...

What is payback time in physics? - Answer: To calculate a more exact payback period: Payback Period = Amount to be Invested ÷ Estimated Annual Net Cash Flow. It can also be calculated using the formula Payback Period = n_y + (last negative cumulative cash flow ÷ cash flow in the following year) (unit: years), where n_y = the number of years after the initial investment at which the last negative value of cumulative cash flow occurs.

Payback Period is nothing but the number of years it takes to recover the initial cash outlay invested in a particular project. Accordingly: Payback Period formula = Full Years Until Recovery + (Unrecovered Cost at the Beginning of the Last Year ÷ Cash Flow During the Last Year). By substituting the numbers into the formula, you divide the cost of the investment ($28,120) by the annual net cash flow ($7,600) to determine the expected payback period of 3.7 years.

The Formula for Time in Physics. Simple formulas are as given below: 1) To compute the speed: Speed = Distance/Time. 2) To compute the distance: Distance = Speed × Time. 3) To compute the time: Time = Distance/Speed. In mathematical terms: s = d/t, d = s × t, t = d/s.

The payback method (also called the pay-off method or repayment method) is a method used to calculate how quickly an investment pays for itself. The method can be used either to check that an investment pays for itself before it is worn out, or to compare which of several investment alternatives is best.

Now the formula for time of flight is T = 2u sinθ / g, so T = (2 × 20 × sin 50°)/9.8 = (2 × 20 × 0.766)/9.8 = 30.64/9.8 = 3.126 s. Therefore the time of flight is 3.126 seconds.

Payback period can be calculated by dividing an initial investment by the annual cash flow from a project. The result is the number of years necessary to return the initial cost of the investment. Naturally, this number will not always be a whole number.

The formula to calculate the payback period of an investment depends on whether the periodic cash inflows from the project are even or uneven. If the cash inflows are even (such as for investments in annuities), the formula to calculate the payback period is:
Payback Period = Initial Investment ÷ Net Cash Flow per Period.

The payback period is the amount of time required for the cash inflows generated by a project to offset its initial cash outflow. There are two ways to calculate the payback period. The averaging method: divide the annualized expected cash inflows into the expected initial expenditure.

Using the payback period formula, we get: Payback Period = Initial Investment (or Original Cost of the Asset) ÷ Cash Inflows = 1 million ÷ 2.5 lakh = 4 years. Explanation: the payback period is the time required to recover the total investment put into a business.

Payback time - GCSE and A Level Physics Revision

1. Under the payback method, an investment project is accepted or rejected on the basis of its payback period. The payback period means the period of time that a project requires to recover the money invested in it. It is mostly expressed in years. Unlike the net present value and internal rate of return methods, the payback method does not take into account the time value of money.

2. Energy payback time (EPBT) for silicon and CdTe PV modules, wherein BOS is the balance of system, that is, the module supports, cabling, and power conditioning [2, 10, 13, 14, 26, 27]. Unless otherwise noted, the estimates are based on rooftop-mounted installation, Southern European insolation of 1700 kWh m⁻² yr⁻¹, a performance ratio of 0.75, and a lifetime of 30 years.

3. Payback Period Formula. To find exactly when payback occurs, the following formula can be used. Applying the formula to the example, we take the initial investment at its absolute value. The opening and closing period cumulative cash flows are $900,000 and $1,200,000, respectively.

What is Payback Time? The Rule #1 Payback Time calculator estimates the number of years it would take the earnings of the company to cover the cost of the stock price. It gives you a sense, as an owner, of how long it would take you to get your investment back, based on the company's historical earnings stream.

Then solve for v as a function of t: v = v₀ + at [1]. This is the first equation of motion. It's written like a polynomial: a constant term (v₀) followed by a first-order term (at). Since the highest order is 1, it's more correct to call it a linear function. The symbol v₀ [vee nought] is called the initial velocity, or the velocity at time t = 0. It is often thought of as the first velocity.

Payback period formula = Total initial capital investment ÷ Expected annual after-tax cash inflow. Let us see an example of how to calculate the payback period when cash flows are uniform over the full life of the asset. Example: a project costs $2M and yields a profit of $30,000 after depreciation of 10% (straight line) but before tax of ...

The payback period is the amount of time (usually measured in years) it takes to recover an initial investment outlay, as measured in after-tax cash flows. It is an important calculation used in capital budgeting.
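The even-flow and uneven-flow rules above fit in a few lines of code. A minimal sketch (the function name and the second set of numbers are illustrative; the even-flow case reuses the $28,120 / $7,600 example, which indeed gives 3.7 years):

```python
# Payback period: count full years until the cumulative inflow turns
# positive, then interpolate within the final year (uneven flows allowed).
def payback_period(investment, cash_flows):
    cumulative = -investment
    for year, cf in enumerate(cash_flows, start=1):
        if cumulative + cf >= 0:
            # full years elapsed + fraction of this year's flow needed
            return (year - 1) + (-cumulative) / cf
        cumulative += cf
    return None  # investment never recovered

# Even flows: $28,120 recovered at $7,600/yr -> 3.7 years, as above.
print(payback_period(28_120, [7_600] * 6))                    # 3.7
# Uneven flows (illustrative numbers):
print(payback_period(10_000, [2_000, 3_000, 4_000, 4_000]))   # 3.25
```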
SAT Subject Physics Formula Reference Circular Motion (continued) v = 2πr T v =velocity r =radius T =period This formula gives the veloc-ity v of an object moving once around a circle of radius r in time T (the period). f = 1 T f =frequency T =period The frequency is the number of times per second that an object moves around a circle. Torques. Important Physics Formulas. Wien displacement constant b = 2.9 × 10−3 m K . Wave = ∆x ∆t wave = average velocity ∆x = displacement ∆t = elapsed time. vf = final velocity that is another definition of the average velocity which works where letter a is constant. ∆t = elapsed time. Use this formula when you don't have vf On November 1, 1961, a number of prominent scientists converged on the National Radio Astronomy Observatory in Green Bank, West Virginia, for a three-day conference. A year earlier, this facility. In QM we have a differential equation that control the evolution of closed systems. This is the Schrödinger equation: i ℏ ∂ ψ ( x, t) ∂ t = H ψ ( x, t) where H is the system's Hamiltonian. The solution to this partial differential equation gives the wavefunction ψ ( x, t) at any later time, when ψ ( x, 0) is known Payback period is widely used when long-term cash flows are difficult to forecast, because no information is required beyond the break-even point. It may be used for preliminary evaluation or as a project screening device for high risk projects in times of uncertainty. Payback period is usually measured as the time from the start of production to recovery of the capital investment PHYS 201: Fundamentals of Physics II. Lecture 24 - Quantum Mechanics VI: Time-Dependent Schrödinger Equation Overview. The time-dependent Schrödinger Equation is introduced as a powerful analog of Newton's second law of motion that describes quantum dynamics Kinematic equations relate the variables of motion to one another. Each equation contains four variables. The variables include acceleration (a), time (t), displacement (d), final velocity (vf), and initial velocity (vi). If values of three variables are known, then the others can be calculated using the equations GCSE Physics: Revision Module 1 Therefore we use the time dilation formula to relate the proper time in the electron rest frame to the time in the television frame. Solution. Identify the knowns (from part a): Δt = 3.33 × 10 − 9s; v = 6.00 × 107m / s; d = 0.200m. Identify the unknown: τ. Express the answer as an equation: Δt = γΔτ = Δτ √1 − v2 / c2 Frequently used equations in physics. Appropriate for secondary school students and higher. Mostly algebra based, some trig, some calculus, some fancy calculus Einstein developed a new view of time first and then space. The laws of physics must be the same in all inertial reference frames. This statement is known as the principle of relativity. The speed of light in a vacuum has the same value in all inertial reference frames regardless of the velocity of the observer or the velocity of the source The discounted payback period is a better option for calculating how much time a project would get back its initial investment; because, in a simple payback period, there's no consideration for the time value of money. It can't be called the best formula for finding out the payback period Physics is all about articulating the things with real values and not memorizing them up. During applications, we may come across many concepts, problems, and mathematical formulas. 
Here we will have some basic physics formulas with examples. Once you see where all of these formulas come from, you can certainly understand what they mean and have fun with them. Indeed, when you plug in some numbers, you can really get a feel for just how weird special relativity is.

2 Time Dilation. Suppose you're sitting on a bench, on a beautiful summer morning, watching the lovely trains pass by.

Payback Period Formula: Meaning, Example and Formula. The discounted payback period is the time it will take to receive a full recovery on an investment that has a discount rate. To find the discounted payback period, two formulas are required: discounted cash flow and discounted payback period.

Distance Speed Time Formula Questions: 1) A dog runs from one side of a park to the other. The park is 80.0 meters across. The dog takes 16.0 seconds to cross the park. What is the speed of the dog? Answer: the distance the dog travels and the time it takes are given. The dog's speed can be found with the formula s = d/t = 80.0 m / 16.0 s = 5.0 m/s.

Wind turbine payback: US researchers have carried out an environmental lifecycle assessment of 2-megawatt wind turbines mooted for a large wind farm in the US Pacific Northwest. Writing in ...

The payback period is the time it will take for a business to recoup an investment. Consider a company that is deciding whether to buy a new machine. Management will need to know how long it will take to get their money back from the cash flow generated by that asset. The calculation is simple, and payback periods are expressed in years.

The equation of time describes the discrepancy between two kinds of solar time. The word equation is used in the medieval sense of 'reconcile a difference'. The two times that differ are the apparent solar time, which directly tracks the diurnal motion of the Sun, and mean solar time, which tracks a theoretical mean Sun with uniform motion. Apparent solar time can be obtained by measurement of ...

But you won't find it in modern physics textbooks. One comment from a classic work on relativity is: "Ouch! The concept of 'relativistic mass' is subject to misunderstanding. That's why we don't use it. First, it applies the name mass (belonging to the magnitude of a 4-vector) to a very different concept, the time component of a 4-vector."

Discounted Payback Period Formula. There are two steps involved in calculating the discounted payback period. First, we must discount (i.e., bring to present value) the net cash flows that will occur during each year of the project. Second, we must subtract the discounted cash flows from the initial cost figure.

Payback period in capital budgeting refers to the period of time required for the return on an investment to repay the sum of the original investment. For example, a $1000 investment which returned $500 per year would have a two-year payback period. The time value of money is not taken into account.

In non-relativistic quantum mechanics (NRQM), the dynamics of a particle is described by the time evolution of its associated wave-function ψ(t, x⃗) with respect to the non-relativistic Schrödinger equation (SE)

$$i\hbar\,\frac{\partial}{\partial t}\psi(t,\vec{x}) = H\psi(t,\vec{x}),$$

with the Hamiltonian given by $H = \frac{\hat{p}^2}{2m} + V(\hat{x})$.
You can even use this list, for a quick revision before an exam 3. ation boards have used in the past. These links will take you to a page which you can print if you want to so that you can revise these equations.. 4. Physics aims to give elegant explanations for the phenomena observed within in our universe. These explanations are encoded in the mathematical language of equations. What are the most important and beautiful equations in physics Time Formula Physics: Definition, Concepts and Example the goal of this video is to explore some of the concepts or formulas you might see in a traditional physics class but even more importantly to see that they're really just common-sense ideas so let's just let's just start with a simple example let's say that and for the sake of this video just so I stop don't have to keep saying this is the magnitude of the velocity this is the direction of. Physical Constants Name Symbol Value Unit Number π π 3,14159265 Number e e 2,718281828459 Euler's constant γ= lim n→∞ Pn k=1 1/k−ln(n) = 0,577215664 An initial investment of Rs.50000 is expected to generate Rs.10000 per year for 8 years. Calculate the discounted payback period of the investment if the discount rate is 11%. Given, Initial investment = Rs. 50000 Years(n) = 8 Rate(i) = 11 % CF = 10000 . To Find, Discounted Payback Period (DPP) Solution Our discounted payback period calculator calculates the discount cash flow accurately and provides you with the complete cash flow in the form of table. The formula for the calculations of discounted cash flow is, D C F = C F ( 1 + r) 1 + C F ( 1 + r) 2 + C F ( 1 + r) 3 +. . . + C F ( 1 + r) n. Where Physics Formulas . Acceleration Formula Acceleration Formula Gravitational Potential Energy Formula Impulse Formula Capacitance Formula Distance Speed Time Formula Orbital Velocity Formula Resistance Formula Reynold's Number Formula Angular Momentum Formula Initial Velocity Formula Inverse Square Law Formula Unit Vector Formula Work Formula. Thus we have derived the equation of the Time Period of the conical pendulum as, Time Period = 2π (h / g ) 1/2. Time period equation of conical pendulum. Time Period = T = 2π (h / g ) 1/2. How to find out the Tension in the string of a conical pendulum. You can use either of the following equations to find out the value of tension T in the. Continuity Equation describes the transport of some quantities like fluid or gas. The continuity equation in fluid dynamics describes that in any steady-state process, the rate at which mass leaves the system is equal to the rate at which mass enters a system As such, it is very important for premeds studying for the MCAT to build time for learning the important physics equations into their study schedules. Now, what is an important physics equation? An important physics equation is an equation that we've either seen on 1) AAMC practice materials from the MCAT Official Prep Hub or 2) the AAMC's list of content covered on the MCAT Payback time definition is - a time for punishment for something that was done in the past. How to use payback time in a sentence The physics formulas for Class 11 will not only help students to excel in their examination but also prepare them for various medical and engineering entrance exams. 
Physics is filled with complex formulas and students must understand the concepts behind the formulas to excel in the subject.

Bernoulli's equation is usually written as
\[ P_1 + \tfrac12 \rho v_1^2 + \rho g h_1 = P_2 + \tfrac12 \rho v_2^2 + \rho g h_2, \]
where \( P \), \( v \), and \( h \) refer to the pressure, speed, and height of the fluid at two particular points, 1 and 2, chosen in the fluid.

Excel in the physics lab. Excel has the potential to be a very useful program that will save you lots of time; it is especially useful for making repetitious calculations on large data sets, it keeps track of your numbers, and it can do the math for you. It does, however, have a learning curve that can be rather steep.

The Schrödinger time-dependent wave equation is a linear partial differential equation that describes the state function or wave function of a quantum mechanical system. It is a very important result in quantum mechanics and modern physics; Erwin Schrödinger formulated it in 1925 and published it in 1926.

AP Physics 1 exam takers have access to a table of equations and formulas to reference during the exam (often referred to as the AP Physics 1 equation sheet). But the reference tables include a lot of information; if you aren't already familiar with the formula sheet before you take the exam, you might end up wasting valuable time.

A Python script can solve the one-dimensional time-independent Schrödinger equation for bound states: it uses the Numerov method to integrate the differential equation and displays the desired energy levels together with an approximate wave function for each of them.

Neil Turok: all known physics fits into one equation.

Keeping with the celebratory demeanor of Apollo 11's 50th anniversary, having already derived the rocket equation, it's high time we put it to use, rocket-style. To recap, the rocket equation relates the velocity of a rocket to the velocity of its exhaust and the ratio of the changing mass.

Origin of the time-independent acceleration equation: one can derive the kinematics equation \( v^2 = v_0^2 + 2a\,\Delta x \), which in this form is not solved for any particular variable.

Time-dependent Schrödinger equation, one spatial dimension: for a free particle with \( U(x) = 0 \), the wave function solution can be put in the form of a plane wave; for other problems, the potential \( U(x) \) serves to set boundary conditions on the spatial part of the wave function, and it is helpful to separate the equation into a time part and a spatial part.

There are four basic equations of kinematics for linear (translational) motion:
\[ v = v_0 + at, \qquad \Delta x = \frac{(v + v_0)\,t}{2}, \qquad \Delta x = v_0 t + \tfrac12 a t^2, \qquad v^2 = v_0^2 + 2a\,\Delta x. \]
Additional point: if you are asked to find the displacement in the nth second, use the formula below.
\[ \Delta x_{n{\rm th}} = u + \tfrac12 a\,(2n-1), \]
where \( n \) labels the nth second of the motion.

Peukert's law. Peukert's law is the widely used empirical equation for the rate-dependent capacity of a battery: it expresses an exponential relationship between the discharge current and the delivered capacity over a specified range of discharge currents, and it is used to calculate battery capacity and discharge time.

The core of this physics is Newton's laws describing the motion of particles of matter: the particles are subject to forces, and Newton's second law \( F = ma \) can then be used.

Numerical integration of the time-dependent Schrödinger equation \( i\,\partial_t \varphi = \hat H \varphi \) is an important problem, in particular the case where \( \hat H \) is the self-consistent Kohn-Sham Hamiltonian that stems from time-dependent density functional theory. As the Kohn-Sham potential depends parametrically on the time-dependent density, \( \hat H \) is in general time-dependent.

Time of flight, worked example. A cricket jumps from one blade of grass to another, leaving the first blade at an angle of 36.9° with a velocity of 2.10 m/s. Its time of flight follows from \( t = 2 v_0 \sin\theta / g \), which gives 0.257 seconds.

Muon time dilation. The observer's time is known, so the amount of time that passes in the muon's reference frame can be found by rearranging the time dilation formula: in the muon's reference frame, approximately \( 2.82 \times 10^{-6} \) seconds pass between when the muon is created and when it reaches the Earth's surface.

Important equations for the IGCSE course, general physics: for constant motion, \( v = s/t \), where \( v \) is the velocity in m/s, \( s \) is the distance or displacement in meters, and \( t \) is the time in seconds; for acceleration, \( a = (v-u)/t \), where \( u \) is the initial velocity, \( v \) is the final velocity, and \( t \) is the time.

Acceleration with velocity and time: as mentioned earlier, acceleration is the change in velocity over time, so it equals the change in velocity divided by the time taken to go from the initial to the final velocity.

Gravity equations. The general gravity equation for the displacement with respect to time is \( y = \tfrac12 g t^2 + v_i t \); since \( v_i = 0 \) for a dropped object, the equation reduces to \( y = \tfrac12 g t^2 \), where \( t \) is the time in seconds. Example of horizontal projectile motion: a stone is thrown horizontally into the air with a speed of 8 m/s from the top of a 20 m high cliff and falls until it hits the ground.

Two properties of the Schrödinger equation: (1) it is a first-order differential equation in time, which means that if we prescribe the wave function \( \Psi(x, t_0) \) for all of space at an arbitrary initial time \( t_0 \), the wave function is determined for all times; (2) it is a linear equation for \( \Psi \), so superpositions of solutions are again solutions.

At low velocities, modern relativity approaches classical physics: our everyday experiences have very small relativistic effects. The equation \( \Delta t = \gamma\,\Delta t_0 \) also implies that relative velocity cannot exceed the speed of light: as \( v \) approaches \( c \), \( \Delta t \) approaches infinity, which would imply that time in the astronaut's frame stops at the speed of light.
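A small Python check of the projectile time-of-flight example above. It assumes launch and landing at the same height and \( g = 9.81\ {\rm m/s^2} \); the numbers come from the cricket example.

```python
import math

# Sketch: cricket leaving the grass at 36.9 degrees with speed 2.10 m/s.

v0 = 2.10                      # launch speed, m/s
theta = math.radians(36.9)     # launch angle
g = 9.81                       # gravitational acceleration, m/s^2

t_flight = 2 * v0 * math.sin(theta) / g
print(f"time of flight: {t_flight:.3f} s")   # ~0.257 s, matching the text
```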
And Schrödinger's equation says: given a wave function, I can determine the time derivative, the time rate of change of that wave function, and hence its time evolution; its slope (its velocity, if you will) is \( \frac{1}{i\hbar} \) times the energy operator acting on that wave function.

One can obtain analytic solutions of the time-dependent Schrödinger equation that are more complex than the well-known oscillating coherent wave packet. Such Hermite-Gaussian or initially square wave packets exist for a free particle or for one subject to the harmonic oscillator potential; in either case, the Hermite-Gaussian packets retain their nodal structure even after long times.

Integrating the equations of motion with drag, with the limits on the velocity from the initial velocity \( U_0 \) to \( u \), one obtains
\[ u = \frac{dx}{dt} = \frac{V_t^2\, U_0}{V_t^2 + g\, U_0\, t}, \]
so the horizontal velocity is inversely dependent on the time; one can similarly solve for the location \( x \) at any time by integrating the velocity equation.

The time-dependent Schrödinger equation involves \( i = \sqrt{-1} \), and the function \( \Psi \) varies with time \( t \) as well as with position \( x, y, z \). For a system with constant energy \( E \), \( \Psi \) has the form \( \Psi = \psi\, e^{-iEt/\hbar} \), and the time-dependent Schrödinger equation reduces to the time-independent form.

Teacher support note: briefly review displacement, time, velocity, and acceleration, their variables, and their units; this section introduces five equations that allow us to solve a wider range of problems than just finding acceleration from time and velocity, and it reviews graphical analysis (axes, algebraic signs, and how to designate points on a coordinate plane).

Some numerical studies [8, 38, 41] considered only systems whose Hamiltonian \( H \) is separable, i.e., expressible as the sum of the potential and kinetic energies. This is quite restrictive; in fact, most of the important Hamiltonian PDEs (e.g., the shallow water equations and the nonlinear Schrödinger equation) are not in this class.

Area under a velocity-time graph: the distance is the area of the triangle of height \( v \) (the final velocity) and base \( t \) (the total time); the area of any triangle is half the height times the base, so \( s = \tfrac12 v t \) for an object starting from rest. Suppose the object does not start from rest when the clock starts at 0 but is already moving with speed \( u \), and accelerates to speed \( v \) in time \( t \); this leads to the second equation of motion,
\[ s = ut + \tfrac12 a t^2, \]
where \( u \) is the initial velocity, \( v \) the final velocity, \( a \) the acceleration, \( s \) the displacement, and \( t \) the time taken. This equation, along with the other kinematics equations of motion, is valid for objects moving with uniform acceleration.

A simple example of a four-stroke engine operated in finite time can be analyzed with a working medium of noninteracting two-level systems or harmonic oscillators. The cycle of operation is analogous to a four-stroke Otto cycle, and the only source of irreversibility is the finite rate of heat transfer between the working medium and the cold and hot baths.

Time dilation is the lengthening of the time interval between two events when seen in a moving inertial frame rather than the rest frame of the events (in which the events occur at the same location).
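A quick Python cross-check of the graph-area argument above: the distance traveled under constant acceleration equals the area under the velocity-time line. The numbers are made up for illustration.

```python
# Sketch: trapezoid area under the velocity-time graph vs. the SUVAT equation.

u, v, t = 3.0, 11.0, 4.0          # initial speed, final speed, duration (SI units)
a = (v - u) / t                   # slope of the velocity-time line

s_area = 0.5 * (u + v) * t        # trapezoid area under the graph
s_suvat = u * t + 0.5 * a * t**2  # second equation of motion

print(s_area, s_suvat)            # both give 28.0 m
```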
Observers moving at a relative velocity \( v \) do not measure the same elapsed time between two events.

Time value of money. The time value of money is a very important concept for each individual and also for making important business decisions. Companies consider it while deciding whether to acquire new business equipment, to invest in new product development or facilities, and when establishing credit terms for selling their services.

Typical lecture-note topics: the time-independent Schrödinger equation; the free particle and the Gaussian wave packet; phase velocity and group velocity; motion of a particle in a closed tube. (These are lecture notes for Physics 430 and 431, written a number of years ago; they are still a bit incomplete, with Chapters 19 and 20 remaining to be written.)

Time reversal, in physics, is the mathematical operation of replacing the expression for time with its negative in formulas or equations, so that they describe an event in which time runs backward or all the motions are reversed. A resultant formula or equation that remains unchanged by this operation is said to be time-reversal invariant, which implies that the same laws of physics apply equally well in both directions of time.

There are a few other interesting things to note. Just as we could use a position vs. time graph to determine velocity, we can use a velocity vs. time graph to determine position. We know that \( v = d/t \); with a little algebra, \( d = v \times t \), the area under the line of a graph with velocity on the y-axis and time on the x-axis.

Time travel and modern physics: time travel has been a staple of science fiction, and with the advent of general relativity it has been entertained by serious physicists; but, especially in the philosophy literature, there have been arguments that time travel is inherently impossible.

Velocity, acceleration, and time calculators use the velocity formula \( v = v_0 + a t \), where \( a \) is the acceleration in m/s², \( v_0 \) the initial velocity in m/s, \( t \) the time in s, and \( v \) the final velocity in m/s.

Time-independent Schrödinger equation via separation of variables: substituting \( \Psi(x,t) = \psi(x)\,\varphi(t) \) into the time-dependent equation yields
\[ i\hbar\,\frac{d\varphi}{dt} = E\,\varphi \quad\Rightarrow\quad \varphi(t) = e^{-iEt/\hbar}, \qquad -\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} + V\psi = E\psi. \]
We had a partial differential equation and obtained two ordinary differential equations.

The acceleration formula is one of the basic equations in physics, something you'll want to make sure you study and practice; a motion is said to be uniformly accelerated when, starting from rest, it acquires equal amounts of speed during equal time intervals.

Simple pendulum time period: consider the bob at position B during its vibratory motion; let \( m \) be the mass of the bob and \( x \) the displacement of the bob from the mean position. There are two forces acting on the bob at this position, and the standard analysis gives the period \( T = 2\pi\sqrt{L/g} \).

Kepler's third law: one can derive the equation for Kepler's third law using the concept of the period of revolution and the equation of orbital velocity; in this process, the equation for the time period of revolution of an Earth satellite is derived as well, and sample numerical problems can be solved using this law.
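A short Python sketch of the two period formulas just mentioned. The Kepler check uses standard solar-system constants; treat the exact digits as approximate.

```python
import math

g = 9.81
L = 1.0                                  # simple pendulum length, m
T_pendulum = 2 * math.pi * math.sqrt(L / g)
print(f"1 m pendulum: T = {T_pendulum:.2f} s")   # ~2.0 s

G = 6.674e-11                            # m^3 kg^-1 s^-2
M_sun = 1.989e30                         # kg
a = 1.496e11                             # Earth's orbital radius, m
T_orbit = 2 * math.pi * math.sqrt(a**3 / (G * M_sun))   # Kepler's third law
print(f"Earth's year: {T_orbit / 86400:.1f} days")      # ~365 days
```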
Equations of motion: from our work on speed-time graphs, it should be obvious that a graph of speed against time shows something accelerating. The initial velocity is given the letter \( u \), the final velocity \( v \), and the time taken for the acceleration is \( t \); the slope, or gradient, of the line is \( (v-u)/t \), which is, of course, the acceleration.

Richard Muller is a leading physicist, but he's also intellectually restless; that's a potent combination, with the power to generate transformative ideas about ourselves and our relationship to the universe. In Now: The Physics of Time, Muller hypothesizes about how time itself works.

Modern physics topics: being a vital part of the physics syllabus for Class 12, modern physics consists of a variety of foundational topics, among them black-body radiation, atomic theory and the evolution of the atomic model in general, and the Michelson-Morley experiment.

What is the Schrödinger equation? The Schrödinger equation (also known as Schrödinger's wave equation) is a partial differential equation that describes the dynamics of quantum mechanical systems via the wave function; the trajectory, the positioning, and the energy of these systems can be retrieved by solving it.

The linear wave equation gives a total description of wave motion; it is obtained for the special case of a simple harmonic wave, but it is equally true for other periodic or non-periodic waves, and it is one of the most important equations of physics.

Amsterdam, 8 June 2021: Qu&Co, a leading European quantum computational software developer, signed a collaboration agreement with Airbus for research, development, and testing of quantum computational methods for flight physics simulations.

Simple harmonic motion: the velocity at displacement \( x \) is \( v = \pm\omega\sqrt{A^2 - x^2} \) (in m/s), which simplifies to the maximum speed \( v_{\rm max} = \omega A \) at the equilibrium point; a numerical sketch follows below. When we plot the displacement, velocity, and acceleration during SHM against time, we get characteristic sinusoidal graphs.
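The sketch promised above: a Python evaluation of the SHM speed relation \( v = \omega\sqrt{A^2 - x^2} \). The amplitude and frequency are sample numbers of my own, only for illustration.

```python
import math

A = 0.05                 # amplitude, m
f = 2.0                  # frequency, Hz
omega = 2 * math.pi * f  # angular frequency, rad/s

for x in (0.0, 0.03, 0.05):                       # displacement from equilibrium
    v = omega * math.sqrt(A**2 - x**2)
    print(f"x = {x:.2f} m -> |v| = {v:.3f} m/s")  # max ~0.628 m/s at x = 0
```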
Friday, February 03, 2017

Lindblad equation can't solve any "problems" of quantum mechanics

What I find more ludicrous is Weinberg's and Hossenfelder's suggestion that such new terms would "solve" something about what they consider mysteries, paradoxes, or problems of quantum mechanics. The first sentence of Weinberg's paper says:

In searching for an interpretation of quantum mechanics we seem to be faced with nothing but bad choices.

and the following sentences repeat some of Weinberg's by now standard critical words about Copenhagen as well as other "interpretations". The message is that this work about the extra "Lindblad terms" solves some mystery of quantum mechanics because it makes something like the wave function collapse "more real". Similarly, Hossenfelder's most positive paragraph in favor of these efforts says:

What would really solve the problem, however, is some type of fundamental decoherence, an actual collapse prescription basically. It's not a particularly popular idea, but at least it is an idea, and it's one that's worth testing.

I don't think that the right word is "unpopular" to describe the statement that such "fundamental decoherence" would "really solve the problem". Instead, this statement is self-evidently wrong. Even if the extra Lindblad parameters \(\lambda_{mn}\) were nonzero and discovered - and it won't happen - we wouldn't find any "more enlightening" version of quantum mechanics. We would still have similar equations with the same objects and with some new terms that used to be zero but now are nonzero. If a conceptual change appeared at all, the situation would clearly get more mysterious, not less so. If someone finds neutrinos mysterious, the discovery of the nonzero neutrino masses hardly makes things easier for him. Or consider the same sentence with the QCD theta-angle, CP-violating phases, the cosmological constant, or any other parameter that could have been zero but wasn't. If you couldn't understand the theory with a vanishing value of these parameters, the more complex or generalized theory with the new nonzero parameters will be even harder for you, won't it?

OK, the Lindblad equation is the following equation for a density matrix:
\[ \begin{aligned} \dot\rho(t) = {}& -i\,[H, \rho(t)] \\ &+ \sum_\alpha \left[ L_\alpha\, \rho(t)\, L_\alpha^\dagger - \frac{1}{2} \left\{ L_\alpha^\dagger L_\alpha,\, \rho(t) \right\} \right]. \end{aligned} \]
This equation is the most general linear equation for the density matrix \(\rho(t)\) that preserves its trace (the total probability), its Hermiticity, and complete positivity. The sum over \(\alpha\) runs over at most \(N^2-1\) new terms. Aside from the Hamiltonian matrix \(H\), one must pick many new operators \(L_\alpha\) and their conjugates to define the laws of physics.

I've divided the equation into two lines. The first line is the normal equation for the density matrix, one easily derived from the Schrödinger equation for \(\ket\psi\). The second line contains all the new terms that are zero according to contemporary physics but proposed to be nonzero by Weinberg (and others) and that should be tested by atomic clocks.

Note that \(\rho(t)\) is Hermitian, and so is therefore the left hand side. The first, normal term of the right hand side is a commutator with \(H\), which is Hermitian. For the commutator term to be Hermitian as well, the coefficient has to be purely imaginary. On the contrary, the new Lindblad terms have a real coefficient.
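To get a concrete feel for what the extra terms do, here is a minimal numerical sketch in Python. The choice \(H=\frac{\omega}{2}\sigma_z\), \(L=\sqrt{\lambda}\,\sigma_z\) for a single qubit is purely my illustrative assumption, not anything from Weinberg's paper; for this choice the exact solution is \(\rho_{01}(t)=\rho_{01}(0)\,e^{-i\omega t-2\lambda t}\), so the code can be checked analytically.

```python
import numpy as np

# Crude Euler integration of the Lindblad equation for one qubit.
# Assumed toy model: H = (omega/2) sigma_z, L = sqrt(lam) sigma_z (pure dephasing).

sz = np.array([[1, 0], [0, -1]], dtype=complex)
omega, lam, dt, t_max = 1.0, 0.1, 1e-3, 5.0

H = 0.5 * omega * sz
L = np.sqrt(lam) * sz
LdL = L.conj().T @ L

def rhs(rho):
    # right hand side of the Lindblad equation
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return comm + diss

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # (|0>+|1>)/sqrt(2)

for _ in range(int(t_max / dt)):
    rho = rho + dt * rhs(rho)

print(abs(rho[0, 1]))                  # numerically damped off-diagonal element
print(0.5 * np.exp(-2 * lam * t_max))  # analytic exp(-2*lam*t): ~0.184
```

The diagonal entries stay constant while the off-diagonal element is exponentially damped, exactly the structure of the damping factors discussed next.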
To see what these terms are doing or "should do", it's better to look at an Ansatz for a solution - which is Weinberg's equation (3):
\[ \rho_{mn}(t) = \rho_{mn}(0) \times \exp\left[ -i(E_m-E_n)t -\lambda_{mn}t \right]. \]
The Ansatz was written in an energy eigenstate basis. The oscillating part of the exponent looks just like in Heisenberg's papers and the frequency is \(E_m-E_n\). The diagonal elements of \(\rho(t)\) don't change at all while the off-diagonal elements have a phase that changes with time with this frequency.

What's new is the extra, exponentially decreasing factor of \(\exp(-\lambda_{mn}t)\). The off-diagonal elements don't have a constant absolute value, as they would in unitary quantum mechanics, but are exponentially damped with rates \(\lambda_{mn}\), parameters bilinear in the matrix elements of the \(L_\alpha\) matrices in the Lindblad equation.

These off-diagonal elements of the density matrix contain the information about the relative phases of the wave function. Decoherence makes them go to zero. Here they are going to zero exponentially so it's "some kind of decoherence". Except that this is proposed to be decoherence due to new terms in the fundamental laws of physics, not due to the interaction with a subsystem labeled the "environment".

The Lindblad equation may appear as an effective equation for an open system that interacts with some environment that we can't track, so instead, we trace over it. But does it make any sense to consider it as a fundamental equation? I don't think so.

First, the modification back to \(\lambda_{mn}=0\) is just prettier and better. I decided to place this objection at the top. The point is that the addition of all these \(\lambda_{mn}\neq 0\) damping factors is extremely artificial and it makes sense to cut this whole line of generalization by Occam's razor. If the Lindblad equation for some \(H\) and some \(L_\alpha\) has some nice properties, you may be pretty sure that the equation where you simply set \(L_\alpha=0\) is at least equally pretty. You can't lose any virtue by that. On the contrary, you lose virtues when you consider nonzero \(L_\alpha\).

Second, lots of new operators have to be defined on top of the Hamiltonian. This is an addition to the first complaint but it may be viewed as an independent one. In normal quantum mechanics, we only determine one matrix on the Hilbert space, the Hamiltonian (or directly the S-matrix etc.). Here we must choose the Hamiltonian and about \(N^2-1\) additional operators \(L_\alpha\) on the Hilbert space. Who are they? What deeper principle could possibly determine or at least constrain them?

Third, the Lindblad equation doesn't allow any Heisenberg picture at all. The normal equation has \(L_\alpha=0\) and only contains the commutator with \(H\) in the evolution. Consequently, the evolution in time is a unitary transformation. You may pick a time-dependent basis of the Hilbert space in which the coordinates of \(\ket\psi\) or \(\rho\) will look constant and the operators such as \(x(t),p(t)\) will be time-dependent instead. This is the Heisenberg picture. With the Lindblad equation, you can't do that. There's no basis in the Hilbert space in which \(\rho(t)\) could be constant - after all, its eigenvalues are changing with time. Consequently, you won't be able to write this theory in any Heisenberg picture. This is a far deeper problem than people like Weinberg may realize.
One reason is that the equations for the operators in the Heisenberg picture basically emulate the classical evolution equations for \(x(t),p(t)\) etc. The Heisenberg picture is an elegant way to see that quantum mechanics reduces to classical physics. Now, because you can't write the Weinberg-Lindblad theory in the Heisenberg picture, you won't be able to show the right classical limit. So in fact, by adding the new Weinberg-Lindblad terms, you have made the theory less compatible with the classical physics that Weinberg loves so much, not more so! For this reason, I also suspect that you wouldn't need any atomic clocks to falsify this theory. This theory almost certainly makes completely wrong predictions, of unobserved effects, for physical systems that are highly classical.

Fourth, the new terms are pretty much by definition proofs that "you are missing something". I've mentioned that the Lindblad equation may be obtained as an effective equation if you eliminate some environment you can't track. I would argue that the converse is true, too. If you have the Lindblad equation, it shows that it's some effective equation, you have eliminated some degrees of freedom, and you should return to the blackboard and see what this deeper physics that you have ignored is and where it is hiding! Weinberg is acting as if he believes that the opposite is true: if he found the ugly new terms that normally emerge in effective theories only, he would be led to believe that he has found a more fundamental theory. This thinking clearly seems upside down. OK, what are you missing when you see these new effective terms?

Bonus: the Lindblad equation is a quantum counterpart of "classical physics with Brownian random forces". In classical deterministic physics, if you know the point \(x_i(t),p_i(t)\) in the phase space at one moment, you may calculate it at later moments \(t\), too. To explain the Brownian motion, Einstein (and Smoluchowski, the Polish guy) considered a generalization of deterministic classical physics in which the particle is also affected by classical but random forces (from the surrounding atoms) which are described by some distributions. So even if the precise position and momentum were known at one moment, they would be unknown after some time of the Brownian motion. The peaked distribution on the phase space would get "dissolved".

This is exactly how you should think about the effect of the new Lindblad terms. They're like some random forces described in terms of the density matrix. Is something getting dissolved as well? Is the exponential decrease of the off-diagonal elements equivalent to the classical spreading of the distribution on the phase space? You bet. It's not obvious in the basis that Weinberg chose - in which the diagonal entries of \(\rho\) don't change. But if you pick any different basis, even the diagonal entries will change - they will be evolving towards values that are closer to each other, and that's equivalent to the dissolution of the peaked distribution in the phase space. So there should be some molecules etc. that are causing this randomization of the pollen particle etc.!

Fifth, the new terms violate the conservation laws and/or locality. In a 1983 paper that Weinberg is aware of, Banks, Susskind, and Peskin argued that the equation violates either locality or energy-momentum conservation. Weinberg mentions this paper as well as a 1995 paper by Unruh and Wald which claims to have found some counterexamples to Banks et al.
I don't quite understand what those guys have done but I am pretty sure that the counterexamples would have to be extremely artificial. Look at the formula for \(\rho_{mn}(t)\) above. You see that if you want to preserve the energy conservation law, you really want the exponential decrease to affect the off-diagonal elements in an energy basis only. It means that the matrices \(L_\alpha\) in the extra terms must be able to determine or "calculate" what the energy eigenvectors are. If you just place some generic matrices there, the conservation laws will be violated.

Sixth, CPT theorem trouble. Also, the solution to the Lindblad equation has entries that are exponentially decreasing in time. That's an intrinsic time-reversal asymmetry. Well, the legality of these solutions and the elimination of the opposite ones contradicts the existence of any CPT-symmetry. So the CPT theorem just couldn't hold in any generalized Weinberg-Lindblad theory of this kind.

You could ask whether it should hold at all. Well, I think it should. The CPT transformation is just a continuation of the Lorentz group, the rotation of the \(t_E z\)-plane by 180 degrees, which just happens to make sense even in the Minkowski signature. So the CPT symmetry is closely linked to the Lorentz symmetry. None of this reasoning may be quite applied to the Weinberg-Lindblad theory because operations (in particular, the evolution operations) are not identified with unitary transformations in that theory etc. But I think it must lead to inconsistencies - either non-locality or a violation of the conservation laws. I am convinced that under reasonable assumptions, it leads to problems with both - conservation laws as well as locality and/or Lorentz symmetry. One "morally non-relativistic" aspect of the Lindblad laws is that the evolution in time isn't represented just by a unitary operator while the translation, i.e. the evolution in space, is still just a unitary transformation. So the temporal and spatial components of a four-vector (the energy-momentum) seem to be qualitatively different. I would be surprised if the Lorentz invariance could be preserved by laws like that - at least if these laws are determined by some principles, instead of just by an artificial construction designed to prove me wrong.

Seventh, it just doesn't help you with any "mysteries of quantum mechanics". But as I said, the most important problem isn't any particular technical flaw in the equations, even though I do believe that the troubling observations above are flaws of the theory. The main problem is that these analyses have nothing to say about the "broader problem" that Weinberg talks about, namely his problems with the foundations of quantum mechanics.

Imagine that the new terms exist and are nonzero. So there exists an experiment, e.g. one with an atomic clock, that may show that some \(\lambda_{mn}\neq 0\). This experiment must be accurate enough - so far, similar experiments couldn't see any violation of normal quantum mechanics, i.e. they couldn't have proven any \(\lambda_{mn}\neq 0\). The evidence that the new parameters are nonzero increases with time - because these terms cause some intrinsic decoherence that deepens with time.
OK, so even if you said that an experiment at times \(t\gt t_C\) long enough to see the new Weinberg-Lindblad effects proves that "things are less mysterious" because the relative phases have dropped almost to zero, it would still be true that for \(t\lt t_C\), the damping is small or negligible and the system basically follows the good old unitary rules of quantum mechanics. So the "trouble with quantum mechanics" when applied to your experiment at \(t\lt t_C\) would be exactly the same as it was before you introduced the new terms! The effect of all the new terms would be small or negligible, just like in all experiments that have been confirming unitary quantum mechanics so far.

The idea that the damping of some elements of the density matrix reduces the mystery of quantum mechanics is utterly irrational. At most, the Lindblad-Weinberg equation - if a natural version of it could exist, and I feel certain that it can't - could pick a preferred basis of the Hilbert space, e.g. of your brain, that would tell you which things you may feel and which you can't. Except that even in normal quantum mechanics, it's not needed. Even without decoherence, any density matrix may be diagonalized in some basis. So you may always view that basis as the one defining the would-be classical perceptions, if you adopt the viewpoint that the non-vanishing off-diagonal elements clash with the perception.

And like ordinary decoherence, this Lindblad-induced decoherence doesn't actually pick one of the outcomes. Decoherence makes a density matrix diagonal but it doesn't bring it to the form \({\rm diag}(0,0,1,0,0,0)\) or a similar one. To summarize, even if pieces of the analyses of atomic clocks are correct, the broader talk about all these things is completely wrong. None of these hypothesized new terms can "solve" any of the "problems" that Weinberg talks about.

Weinberg has confined these wrong comments about the interpretation to the first paragraph of his paper. But Hossenfelder didn't confine them. Let me mention her sentences that aren't right:

Each time a quantum state interacts with an environment - air, light, neutrinos, what have you - it becomes a little less quantum. So how come on large scales our world is distinctly un-quantum?

Our world is never un-quantum. Our world - and both small and large objects in it - obeys the laws of quantum mechanics. If you think that any observation of large objects we know disagrees with quantum mechanics, and it's the only meaning of "un-quantum" I can imagine, then you misunderstand what quantum mechanics actually does and predicts.

It seems that besides this usual decoherence, quantum mechanics must do something else, that is explaining the measurement process.

Decoherence is not "needed" for anything. It's just an effective re-organization of the dynamics in situations where a part of the physical system may be viewed as an environment, a re-organization that explains why the relative phases are being forgotten - and therefore one of the first steps needed to explain why a classical theory is sufficient to approximately describe everything (decoherence is needed for that because the main thing that classical physics refuses to remember are the relative quantum phases). But the forgetting still obeys the laws of quantum mechanics; it in no way contradicts them. If "someone" is doing something else, it's just not quantum mechanics.
The dynamical laws of quantum mechanics perform the evolution of the probability amplitudes - either in the state vector, the density matrix, or the operators. The rest is to connect these probability amplitudes with the observations. But this isn't done by Nature. Instead, it's done by the physicist. It's the physicist who must understand what a probability amplitude or a probability means, and that's what allows him to apply the calculations of the unitary evolution to objects around him. But the application of the laws isn't something that "Nature does". Instead, it is what a "physicist does". And if she doesn't know how to do it right, or if she has some religious or psychological obstacles that prevent her from doing it at all, it's her f*cking defect, not Nature's. (Note that I have used "she" and "her" in order to be politically correct.)
Wednesday, September 30, 2009

Beaten with hockey sticks: Yamal tree fraud by Briffa et al.

I will open a discussion thread about this development, too. Steve McIntyre has broken another hockey stick: Yamal: a divergence problem (click) ... a copy at Climate Audit (click). Because Climate Audit is overloaded, here's the Google cache.

The finding is very easy to describe. Briffa et al. (Science, published September 2009; see also Briffa et al., Philosophical Transactions 2008) offered another version of a "hockey stick graph", a would-be reconstruction of the temperatures in the last 2000 years that claimed to show a "sudden" warming in the later part of the 20th century, much like the discredited paper by Michael Mann et al.

Papers by Mann, Bradley, and Hughes in 1998 and 1999, included as a symbol of global warming in the previous IPCC report in 2001, indicated constant temperatures before 1900 and a dramatic warming afterwards. However, the papers have been proven wrong. If you haven't heard about the lethal bug of the Mann methodology yet: the problem of the MBH98 and MBH99 papers was that the algorithm preferred proxies - or trees (or their equivalents) - that showed a warming trend in the 20th century, assuming that this condition guaranteed that the trees were sensitive to temperature.

Tuesday, September 29, 2009

Political racketeering

Special welcome to the Swedish EU presidency. Two interesting examples of blackmailing in politics emerged today.

Iran vs West (click): A hardcore Iranian lawmaker said that Iran could quit the nuclear non-proliferation treaty if the pressure from the West continues.

Eurocrats vs Czechia (click): Mirek Topolánek, the leader of the Czech center-right ODS party, said that he was effectively told by Jose Barroso that all EU countries but Czechia will have a commissioner if President Klaus doesn't become another puppet of the EU bureaucracy and doesn't sign the Treaty of Lisbon. ;-)

Monday, September 28, 2009

Four degrees Celsius in 50 years?

Last week, Yugratna Srivastava, a 13-year-old Indian girl, was hired by the United Nations to present a poem to the world's leaders and the humanity. In the tradition of Nazi and Soviet methods of propaganda, a kid was asked to explain that our world is gonna fry unless everyone buys all the ideology and policies that her propagandistic employers wanted her to disseminate. There apparently exist adults whose skulls are comparably unhinged. The girl wasn't strong enough to convince the world about the looming catastrophe - and they need much stronger "momentum" for the Copenhagen negotiations that should efficiently cripple the world's economy.

2009 physics Nobel prize: speculations

Update: The 2009 physics Nobel prize went to Charles Kuen Kao (1/2) and Willard Boyle (1/4) and George Smith (1/4): see a newer blog article.

Next week, Scandinavia will tell us about their choice of Nobel prizes for 2009. The physics Nobel prize will be announced on Tuesday, October 6th, at 11:45 a.m., Swedish time. Who is going to win the physics award that has preserved its exceptional status because the prize has never been flagrantly misdirected, unlike the peace Nobel prize, so far?
First, let us summarize the winners since October 2004 when this blog was born. Now, it may be fun to recall some predictions made in the previous years. Very soon, I will review some older scenarios which may still be possible in 2009. Meanwhile, Thomson Scientific offered their own, new predictions based on their algorithm analyzing the network of citations. They managed to accurately guess the 2007 winners - Fert, Grünberg - although they did so already in 2006 and F+G were not their top choice.

Sunday, September 27, 2009

First Czecho-Slovak Superstar

See also: Dominika Stará vs Martin Chodúr
See also: Dominika Stará: Je suis Malade

After a couple of Czech (CZ) Pop Idols and Slovak (SK) Pop Idols and one year with the Czech X-Factor, the Czech and Slovak contests were wisely unified. This guy had only been training the song for 1 hour - during the reduction from 118 to 90 contestants. In my opinion, Martin Chodúr's rendition of "Supreme" was more convincing and testosterone-loaded than Robbie Williams's original version.

The moderators are Mr Leoš Mareš (CZ) and Ms Adéla Banášová (SK) and they're doing a superb job. I used to dislike Mareš because he seemed excessively pompous concerning his extraordinarily high income etc. - but these negative emotions of mine are gone by now. There are two Czech and two Slovak judges - with all four sex/nation combinations: Mr Palo Habera (SK, younger), Mr Ondřej Hejma (CZ, older), Ms Dara Rollins (SK, blonde), Ms Marta Jandová (CZ, brunette).

Friday, September 25, 2009

Pope visits the Czech infidels

The leaders of the Czech Republic and the Vatican in their characteristic hats. Note the similarity between the two.

Tomorrow, the Holy Father arrives in Czechia, which is probably the most atheist country in the world. The Reference Frame wishes him a lot of good luck and a nice, relaxing stay. On Monday, we celebrate a national holiday, St Wenceslaus Day (from the Christmas carol, Good King Wenceslaus), our patron and one of the first dukes (and de facto kings), who was murdered by his brother in the town of Boleslav that the Holy Father will visit. For 95% of the Czechs, it's just another work-free day, as we will explain.

D-braneworlds strike back

Today, Mirjam Cvetič, James Halverson, and Robert Richter wrote the first hep-th paper (that might normally be a hep-ph one, I think): Mass hierarchies from MSSM orientifold compactifications.

Recall that the main detailed classes of phenomenological scenarios within string theory are:

• weakly coupled heterotic strings on Calabi-Yau three-folds
• its strongly coupled version, Hořava-Witten heterotic M-theory on Calabi-Yau three-folds
• M-theory on singular G2 holonomy manifolds
• F-theory on Calabi-Yau four-folds and its type IIB descriptions
• type IIA braneworlds with D6-branes and orientifolds (and lots of quiver diagrams)

Their subsets are related by various dualities; they have various advantages and disadvantages.

Thursday, September 24, 2009

Google Chrome Frame for Internet Explorer

Microsoft Internet Explorer users are recommended to install Google Chrome Frame: download, info, a plug-in for MSIE 6/7/8 that replaces the Microsoft JavaScript engine by a much faster Chrome JavaScript engine. The Chrome engine also adds support for HTML5, canvas, and other features.
The plug-in is only activated for websites whose webmasters have inserted the following meta-tag to their pages:

<meta content='chrome=1' http-equiv='X-UA-Compatible'/>

But The Reference Frame is among them. As far as my measurements go, it used to take 10 seconds from pressing the "TRF" button to seeing the top of the right sidebar in Internet Explorer. This rather long time makes TRF an excellent benchmark. ;-) With Google Chrome Frame, the time was reduced to 6 seconds. That's an improvement. But my Google Chrome 4.0 shows the sidebar in 3 seconds, much like the newest official Mozilla Firefox, namely 3.5.3. Chrome is much faster in some respects: for example, its startup is literally immediate.

Poland, Estonia win: indulgences for free

Breaking news: Reuters is finally learning how to write balanced and attractive articles. The article called "U.N. climate meeting was propaganda: Czech president" is currently the most popular article on the Reuters website, ahead of the sex of Mackenzie Phillips (see the list in the right lower corner of any Reuters article): they switched the places (screenshot). I guess that Drudge Report did help a bit. ;-) See also Klaus's U.N. speech about the ways (not) to solve the crises. The Guardian's most popular article is dedicated to the same U.N. climate meeting and is called "Obama the Impotent".

EurActiv, Times, and others inform that Poland and Estonia have won: the Court of First Instance ruled that the European Commission didn't have the right to cut the carbon quotas for these two countries because the countries themselves should set the numbers and the commission may only review them. :-)

Israel: optimizing strike on Iran

David Petrla and some Pentagon sources cited in the media have convinced me that Israel is completing its plans to attack Iranian nuclear and military facilities. According to Dmitry Medvedev, who is not a spokesman for Israel, Peres is telling the people that Israel has no such plans but Netanyahu clearly thinks differently. ;-)

A typical Israeli soldier.

Israel knows that Obamaland and many other Western or otherwise powerful countries suck as allies, that the mostly self-sufficient Iran doesn't really care about sanctions (especially not the homeopathic ones), and that the verbal attacks from Iran, combined with its accelerating nuclear efforts, represent a genuine existential threat to their very existence. Iran's freedom to manipulate dangerous materials ends where the freedom - and life - of others begins. And I agree that they have already crossed the border.

Pictures from the anti-Obama rally in D.C.

This is not a full-fledged article. But Ross Hedvíček of Florida posted pretty cool pictures of the anti-Obama rally in Washington D.C. that took place a week ago or so. Click the picture above to get to the article ("Comrade Obama has only been caressed in Czechia") to see many more photographs like that. About a million (more pix!) witty people of all races, ages, and sexes attended the rally but only the protester above has won the Rally TRF Hottie award. Congratulations.

Climate in the U.N.

By the way, there was a climate meeting somewhere in New York City today. Its purpose was for Prof Václav Klaus to teach his students, other politicians, something about the society, economics, politics, and their interactions with science, taking the global warming hoax as the main example. But most of them are bad students, so they were far too distracted by pornographic thoughts and didn't learn almost anything.
For instance, a little Nicolas has proposed one more intercourse with his friends in November. The media are pretty much full of their pornographic thoughts. The Guardian, a British socialist daily, decided that Obama can give a bad, awfully ho-hum speech, too. Yes, that's the speech. The ordering of the words is pretty much irrelevant so you don't have to watch the video with the hogwash.

Reuters managed to publish some sensible information about the meeting in the article called "Reuters: U.N. climate meeting was propaganda..." The president said: "It was sad and it was frustrating. It's a propagandistic exercise where 13-year-old girls from some far-away country perform a pre-rehearsed poem. It's simply not dignified." Oh, OK, I meant the Czech president. ;-)

Monday, September 21, 2009

Kenya: rainmakers key to consensus on climate change

AFP reports that Kenya's Nganyi rainmakers are being enlisted to mitigate the effects of climate change: Kenya rainmakers called to the rescue (click).

Alexander Okonda's great-grandfather was also a rainmaker. In the 1910s, he was arrested by the British because they determined that he had been responsible for poor rainfall. Now, the great-grandson is getting the credit he deserves. As the methods of climatology have been strikingly transformed, he is appreciated as a top scientist.

Alexander Okonda blows through a reed into a pot embedded in a tree hollow and containing a secret mixture of sacred water and herbs. "This contains so much information. It is something I feel from my head right down to my toes," says Alexander, after completing his ritual. The young man is a member of the Nganyi community, a clan of traditional rainmakers that for centuries has made its living disseminating precious forecasts to local farmers.

Nothingness spreading in de Sitter space

Maulik Parikh (now Pune, India) posted the first hep-th preprint today, and I think it is the most interesting one: Enhanced Instability of de Sitter Space in Einstein-Gauss-Bonnet Gravity (click).

He argues that the Gauss-Bonnet term - the topological Euler density (in 4D) - may look inconsequential perturbatively, yet it decides about the life and death of de Sitter backgrounds. Recall that the Lagrangian of the Einstein-Gauss-Bonnet system is
\[ \mathcal{L} = \frac{1}{16\pi G} \left[ R + \alpha \left( R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma} - 4 R_{\mu\nu} R^{\mu\nu} + R^2 \right) \right]. \]
Besides the Einstein-Hilbert term, you can see the topological term multiplied by the coefficient \(\alpha\), which has units of area. Because the pair-creation of black holes involves some topology change, the last term matters and increases the nucleation rate by an extra factor:
\[ \Gamma = \Gamma_{\rm orig} \exp\left( \frac{4\pi\alpha}{G} \right). \]
The enhancing factor becomes huge if the Gauss-Bonnet area \(\alpha\) is much bigger than the Planck area \(G\). That's expected to be the case even in perturbative string theory where \(\alpha\) is comparable to the squared string scale, or at least Maulik says so.

When the enhancement is large, you should care about the original decay rate,
\[ \Gamma_{\rm orig} = \exp\left( -\frac{\pi L^2}{3G} \right), \]
where \(L\) is the curvature radius of the de Sitter space. Without the \(\alpha\)-enhancement, this rate would be negligible for any de Sitter space that is visibly bigger than the Planck scale. However, with the \(\alpha\)-enhancement, the decay rate becomes significant. For an inflating Universe, the Hubble radius \(1/H\) has to be greater than \(\sqrt{12\alpha}\), otherwise the instanton creates lots of black holes which are probably unhealthy for the inflationary mechanism. In the example above, this means that the radius must exceed the string scale (with a particular numerical prefactor). This doesn't sound too dramatic a constraint but because the inflation scale is often close to the string scale, it could be a nontrivial constraint. A crude numerical comparison of the two exponents appears below.
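A rough Python sketch of the two competing exponents, in Planck units with \(G=1\). The sample values of \(\alpha\) and \(L\) are arbitrary illustrations of mine, not taken from the paper.

```python
import math

G = 1.0
alpha = 50.0          # Gauss-Bonnet coefficient, in units of the Planck area
L = 30.0              # de Sitter curvature radius, in Planck lengths

gamma_orig_log = -math.pi * L**2 / (3 * G)      # log of the unenhanced rate
enhancement_log = 4 * math.pi * alpha / G       # log of the Gauss-Bonnet factor

print("log Gamma_orig  =", gamma_orig_log)      # ~ -942: utterly negligible
print("log enhancement =", enhancement_log)     # ~ +628
print("log Gamma total =", gamma_orig_log + enhancement_log)
```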
Of course, it would be even more interesting to discover that there is a new, unexpectedly huge contribution to the Gauss-Bonnet term that makes \(\alpha\) close to the squared neutrino Compton wavelength. If this were the case, one could derive a constraint on the cosmological constant. ;-) Such a huge \(\alpha\) is probably impossible but it would be fun if there were one.

There could exist similar enhancements and instabilities of this kind - and maybe their higher-dimensional counterparts - that could eliminate many kinds of compactifications with too small radii, too complicated topologies, and so on. Quantum cosmologists should try to study these possibly neglected mechanisms intensely.

By the way, this is related to one point that I dislike about the current approach of the anthropic people. For most features of the Universe, they can't find any strong and accurate enough anthropic constraint. But if they can "explain" something using this anthropic reasoning, they're satisfied. This is fundamentally unscientific thinking because one should always try to find "all" conceivable constraints - and the "other solutions" (such as the black hole creation) could actually be more important, more stringent, more predictive, and more true than the ones that the anthropic people "guess" by chance.

ISS with NS5-branes

By the way, the second hep-th paper is also interesting and it is also about the vacuum selection. Kutasov, Lunin, McOrist, and Royston study the landscape of vacua obtained by D4-branes (and other D-branes) stretched between NS5-branes. They end up with some Intriligator-Seiberg-Shih-like SUSY breaking setup and argue that the early cosmology pushes the Universe towards a particular SUSY-breaking local minimum.

Sunday, September 20, 2009

The Age of Stupid

The filmmakers from the Horrifying Anthropogenic Global Warming Activist Socialist Hysteria (HAGWASH for short) are trying to create a new hit, The Age of Stupid. The world is gonna burn and the mankind dies as soon as in 2055: see the realistic countdown before the final solution, the extinction of life in 2055. An old guy, Pete Postlethwaite, who is the last person alive ;-), looks through his media collections from 2008 or so and decides that everyone was stupid because he didn't save the world. Check that all famous buildings are gonna be destroyed by a few tenths of a degree of warming.

But the people who are ready to consider this piece of dirty unscientific shrill propaganda as a serious documentary - which is how it's being marketed at many places of the world - are not just stupid. They deserve a far stronger term. The wiser ones may consider reading the NIPCC (Non-governmental International Panel for Climate Change) report, which is a truly comprehensible, nonsense-free, and comprehensive 880-page-long summary of the state-of-the-art research in climate science. Click the cover to initiate the purchase. Hat tip: Alexander Ač

Saturday, September 19, 2009

The Da Vinci Code

I have finally watched The Da Vinci Code, based on the 2003 bestselling book by Dan Brown. And it was pretty impressive. Spoilers follow.
If you don't know: in this novel, some mysterious murders turn out to be results of a big battle between two social or religious groups. One of them is supposed to protect the descendants of Jesus Christ and his wife, Mary Magdalene, who could prove that Jesus was a human being. The other one wants to protect the big dirty secret of the Christian Churches, namely Jesus's humanity.

Klaus: Is there a common European idea?

I am thankful for the invitation to these inspiring "Passau Dialogues". And I happily add that it is an honor to be given the opportunity to lead a discussion with such an important personality of contemporary Europe as - beyond any doubt - cardinal Schönborn surely is. We will certainly discuss neither the details of the church orthodoxy - in which I wouldn't be an appropriate partner - nor the ever returning questions about the relationships between the state and the church. Also, I will avoid temptations to offer alternative hypotheses about the origin of the financial and economic crisis or similar topics of my discipline, the economic science.

Friday, September 18, 2009

China's top climatologist: 2 °C probably no problem

The Guardian informs about the opinions of the top climatologist of a group of 1.35 billion people that calls itself by a funny name, People's Republic of China. Mr Xiao Ziniu says that it has not been determined whether the warming by 2 °C - which is often talked about as the "cutoff" that is forbidden before 2050 (it won't happen, anyway!) - is dangerous. China has experienced warmer periods than today and each change of the temperature brings some advantages and some disadvantages.

TBBT & Sheldon Cooper: Xmas scene runs for Emmy

After having won the corresponding TCA award in August 2009, Jim Parsons (Dr Sheldon Cooper of The Big Bang Theory) has also been nominated in the "best actor in a comedy series" category of the Emmy awards. He's excellent, flawless, and - let me admit - in many ways better than the original. ;-) This Christmas or Saturnalian scene (from 2x11, The Bath Item Gift Hypothesis) remains my most favorite one. It's just touching. As an Emmy n00b, Parsons probably won't follow quite a straightforward path to his Emmy. And maybe he will. Kind of wisely, however, the scene above has been chosen as his bath item gift to the Emmy voters and as the trademark example of his unusual skills as an actor.

Thursday, September 17, 2009

ESA: Planck sends first images

If you remember, ESA launched Planck in May 2009. Four months later, we have the first images that should eventually (after six months) supersede the well-known WMAP images. BBC and others report. Click to zoom in. The temperature variations measured by Planck in nine frequency ranges are depicted inside the strip, by the usual WMAP-like mottled colors. Planck rotates roughly once a minute.

Czech, Polish missile defense system shelved

CERN wants a linear collider

The LHC is not yet operating - it will begin in mid November, with reduced-energy collisions added a few weeks later - but the CERN director, Rolf-Dieter Heuer, already wants to build a new linear collider at CERN. In his modest office with socialist-style furniture, he also explains the difficult cleaning procedures and even more difficult preemptive policies.
Heuer is optimistic about their control over the LHC which seems much smoother than LEP (the previous Lot of Extra Problems collider) even though LEP was simpler. In a few years, the LHC will have years of experience of running at 14 TeV, he says, plus important discoveries, he hopes. Also, the European-American symmetry has been spontaneously broken and people suddenly come to CERN. ;-)

Heuer thinks that science needs global, continental, as well as national projects to preserve the expertise of the people. CERN has the capacity to host the International Linear Collider (ILC) or the 3-TeV, 48-km Compact Linear Collider (CLIC; and click the word haha): see the picture. But competition is always welcome, Heuer says - as long as the symmetry is broken and others have no chance. ;-)

Wednesday, September 16, 2009

Kyoto II: Obama vs Eurocrats

An entertaining split between Europe and America has emerged concerning the question how the carbon emissions reductions should be achieved in individual nations.

Obama and Barroso in Prague, April 2009. Things may have been different then.

As The Telegraph, The Guardian, and everyone else reports, Europe and America differ in their opinion how the internal rules to reduce the CO2 production should be set. The European politicians think that Kyoto I has been such an amazing success ;-) that it should be repeated and its successes should be amplified. Among other things, it means that all nations should adopt the same internal mechanisms to punish the CO2 emissions. The U.S. economy should be controlled by the Eurocrats in Brussels in the same way as any other decent EU country and Barack Obama should remain what he is appreciated for, namely a puppet of the global political correctness headquarters that should stay in Brussels.

On the other hand, Barack Obama himself dared to disagree. Kyoto I hasn't been a sufficiently huge disaster so the U.S. president wants to engineer an even better scheme. As the first post-Hoover protectionist president of a country that rejected Kyoto I and is going to reject Kyoto II as long as it is isomorphic (and gives a free pass to the poorer emerging markets), he thinks that every country should be allowed to decide about its own methods to achieve the targets and the carbon flows in America should remain uncontrollable by the EU and the U.N. That's quite a heresy for the EU, comrade Obama! ;-)

Even Steven Chu has warned that deep CO2 reductions cannot be achieved politically in the U.S. Why doesn't he follow the example of the tall and strong Napoleon in France who defeated 74% of the French citizens and imposed a carbon tax upon them? ;-) Sarkozy also wants to start a world trade war by a new CO2 border tax. The Swedish EU presidency also urges the U.S. Senate to behave; if they won't, the U.S. Senators will be spanked just like any bad EU kids. ;-)

It's not hard to understand Europe's newly gained self-confidence with respect to America. The Made-In-America downturn has allowed Europe to surpass North America as the wealthiest region of the world. And the future fate of the U.S. dollar (now at 1.475 per euro, or 17 crowns per dollar) - whose reserve status is being questioned by all members of BRIC as well as others (everyone can see that the U.S. may suffer from the same kind of an irresponsible socialist government as everyone else) - may turn out to have something to do with this picture.
The declared purpose of the December 2009 negotiations in Copenhagen, which will hopefully fail completely, is to save the Earth if not the multiverse. The UAH AMSU data show the average annual and global brightness temperature of the Earth to be close to minus 15.5 °C. Ban Ki-Moon and similar stellar scientists have calculated that if the temperature exceeds the f***ing frying point of minus 13.5 °C, which is 2 °C higher, all of us are going to evaporate or transform into plasma, and the Universe may decay into a different state, too. And I don't have to explain to you the staggering statistical implications for the whole multiverse. ;-)

During the year, the brightness temperature oscillates approximately between -17 °C in January and -14 °C in July - because the temperature variations of the landmass, which is mostly in the Northern Hemisphere, are more pronounced than the variations of the oceanic temperatures. The recent, 30-year trends indicate that the temperature is increasing roughly by 1 °C per century, so the catastrophic level, when the temperature will oscillate between -15 °C and -12 °C, could occur around the year 2200 or so - whether or not we continue to use fossil fuels. If you have ever experienced how brutally much hotter -12 °C is relative to -14 °C, you must agree with all these guys that we're all doomed already next year - because we can already predict that the year 2200 will come - unless Obama and his compatriots join the EU as obedient members. :-)

Myths about the minimal length

Many people interested in physics keep on believing all kinds of evidently incorrect mystifications related to the notion of a "minimal length" and its logical relationship with Lorentz invariance. Let's look at them.

Myth: The breakdown of the usual geometric intuition near the Planck scale - sometimes nicknamed the "minimum length" - implies that the length, area, and other geometric observables have to possess a discrete spectrum.

Reality: This implication is incorrect. String theory is a clear counterexample: distances shorter than the Planck scale (and, perturbatively, even the string scale) cannot be probed because there exist no probes that could distinguish them. Consequently, the scattering amplitudes become very soft near the Planck scale and the divergences disappear.

Blog2Print: print blogs as books

There are many reasons why people may prefer good old paper over web pages, especially when it comes to long essays. Click to zoom in.

Tuesday, September 15, 2009

Smartkit: On the edge game

Click the screenshot for the game. Jump on each white square once before you end up on the red square.

Monday, September 14, 2009

Murray Gell-Mann: 80th birthday and interview

On Tuesday, Murray Gell-Mann celebrates his 80th birthday. Big congratulations! This article will summarize some old achievements of the great physicist but also discuss some of his recent opinions about string theory. Murray Gell-Mann was born on September 15th, 1929 on the Lower East Side of New York to a family of Western Ukrainian Jewish immigrants. When he was fifteen, he joined Yale. ;-) See some pictures from his early life. In the 1950s, when he was in his 20s, he studied cosmic rays and discovered/invented strangeness in order to make sense of isospin, other quantum numbers, and their relationships (e.g. using the key Gell-Mann-Nishijima formula).
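The Gell-Mann-Nishijima formula itself is simple enough to verify numerically. Here is a minimal Python sketch - the particle table below is just a handful of illustrative entries with textbook quantum numbers, not anything specific to Gell-Mann's papers - checking the relation Q = I3 + (B+S)/2 for a few light hadrons:

# Gell-Mann-Nishijima formula: Q = I3 + Y/2 with hypercharge Y = B + S
# (valid for hadrons built from u, d, s quarks).
particles = {
    # name: (I3, B, S, expected charge Q)
    "proton":  (+0.5, 1,  0, +1),
    "neutron": (-0.5, 1,  0,  0),
    "pi+":     (+1.0, 0,  0, +1),
    "K+":      (+0.5, 0, +1, +1),
    "Sigma-":  (-1.0, 1, -1, -1),
    "Xi0":     (+0.5, 1, -2,  0),
}

for name, (i3, b, s, q_expected) in particles.items():
    q = i3 + (b + s) / 2                # the Gell-Mann-Nishijima relation
    assert q == q_expected, name        # every entry should pass
    print(f"{name:8s} Q = {q:+.0f}")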
I wrote his biography one year ago, in Oskar Klein and Murray Gell-Mann: birthdays, so I won't write everything again. Let me just say that Murray Gell-Mann was the most important one among the first pioneers who realized that there were quarks inside hadrons, which is what earned him the 1969 physics Nobel prize. Note that all these things, including the award, had been completed years before the discovery of QCD.

Clifford Johnson: LASER

A pretty good, non-technical explanation of how lasers work. Well, the reason why the photons end up going in the same direction is slightly underexplained, but the very idea of a particle physics choreography is neat. Via Asymptotia.

Global warming affects beer, eggs, corn, pork

Rafa has pointed out that Nude Socialist as well as lots of other media have reported that global warming makes beer suck: some Czech researchers think that the concentration of (bitter) alpha acids in hops was recently dropping by a whopping 0.06 percent per year (...) which they attribute to global warming (...). That's a true catastrophe (...) which finally proves that we are all doomed. Click the sentence below to read more.

Saturday, September 12, 2009

Schrödinger's virus and decoherence

The physics arXiv blog, Nature, Ethiopia, Softpedia, and many people on Facebook were thrilled by a new preprint about the preparation of Schrödinger's virus, a small version of Schrödinger's cat. The preprint is called Towards quantum superposition of living organisms (click) and it was written by Oriol Romero-Isart, Mathieu L. Juan, Romain Quidant, and J. Ignacio Cirac. They wrote down some basic stuff about the theory and a pretty clear recipe for how to cool down the virus and how to manipulate it (imagine a discussion of the usual "atomic physics" devices with microcavities, lasers, ground states, and excited states of a virus, and a purely technical selection of the most appropriate virus species).

It is easy to understand the excitement of many people. The picture is pretty and the idea is captivating. People often think that living objects should be different from the "dull" objects studied by physics. People often think that living objects - and viruses may or may not be included in this category - shouldn't ever be described by superpositions of well-known "privileged" wave functions. Except that they can be, and it is sometimes necessary. Quantum mechanics can be baffling but it's true.

Friday, September 11, 2009

CO2 makes Earth greenest in decades

In June 2009, Anthony Watts reposted an article by Lawrence Solomon that pointed out that the Earth is greener than it has been in decades if not centuries. See also NASA's animations of the Earth (the map of its biological production), for example the low-resolution one. In less than 20 years, the "gross primary production" (GPP) quantifying the daily output of the biosphere jumped more than 6%. About 25% of the landmass saw significant increases while only 7% showed significant declines. Note that the CO2 concentration grows by 1.8 ppm a year, which is about 0.5% a year. It adds up to approximately 10% per 20 years. In other words, the relative increase of the GPP is more than one half of the relative increase of the CO2 concentration.
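The compounding in the last two sentences is worth a one-minute check. Here is a minimal Python sketch, using only the figures quoted above (a 385 ppm baseline, 1.8 ppm/year growth, and the 6% GPP jump):

# Does 1.8 ppm/year on a ~385 ppm baseline add up to ~10% in 20 years?
baseline_ppm = 385.0
annual_growth_ppm = 1.8

rate = annual_growth_ppm / baseline_ppm              # ~0.47% per year
co2_growth_20y = (1.0 + rate) ** 20 - 1.0            # compounded over 20 years
print(f"CO2 growth over 20 years: {co2_growth_20y:.1%}")    # roughly 10%

gpp_growth_20y = 0.06                                # the >6% GPP jump quoted above
print(f"GPP/CO2 growth ratio: {gpp_growth_20y / co2_growth_20y:.2f}")  # comes out > 0.5

The ratio indeed comes out above one half, which is the comparison made in the text.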
Plants also need solar radiation and other inputs that haven't increased (or at least not that much), which is why the comparison above says "one half" and not "the same as". Because the CO2 concentration in 2100 (around 560 ppm) may be expected to be 50% higher than today (around 385 ppm), it is reasonable to expect that the GPP will be more than 25% higher than it is today. Even by a simple proportionality law, assuming no improvements in quality, transportation, and efficiency for a whole century, the GPP in 2100 should be able to feed 1.25 * 6.8 = 8.5 billion people, besides other animals. Of course, in reality, there will be lots of other improvements, so I find it obvious that the Earth will be able to support at least 20 billion people in 2100 if needed. On the other hand, I think that the population will be much smaller than 20 billion, and perhaps closer to those 8.5 billion mentioned previously.

Back to the present: oxygen

Now, in September 2009, Anthony Watts mentions a related piece of work that some Danish researchers just published in Nature: see the Copenhagen press release and the paper in Nature. The authors have studied chromium (not chrome!) isotopes in iron-rich stones to determine some details about the oxygenation of the oceans and the atmosphere that occurred 2+ billion years ago. In two different contexts, they are forced to conclude that an increased concentration of oxygen in the oceans and the atmosphere led to cooling.

The authors say a couple of things about the ice ages that are manifestly incorrect. They say that the oxygen concentration could have been the key driver behind the temperature swings during the glaciation cycles: a higher amount of oxygen allowed the organisms to consume more CO2 and other greenhouse gases, which reduced the temperature via a weaker greenhouse effect. That's clearly incompatible with the fact that the temperature was changing roughly 800 years before the concentrations of the greenhouse gases did. The temperature variations couldn't have been an effect caused by the greenhouse gases, not even if you try to add oxygen to the sequence of all the correlated phenomena. However, it's plausible that the oxygen levels influenced the temperature more directly (which consequently influenced the concentrations of trace gases, via outgassing).

A simple additional comment I can make is that higher concentrations of oxygen may increase the albedo (reflectivity) of the oceans and the landmass by adding life forms which may be optically brighter than the dead soil and oceans and/or than the life forms that don't need oxygen (or because of another inequality in the energy balance of photosynthesis and/or respiration). Even if that is the case, it remains largely unknown whether the oxygen variations in the glaciation periods were sufficient to drive the temperatures (I guess they weren't), and even if they were sufficient, it would remain to be seen what their cause was.

Thursday, September 10, 2009

Abiogenic birth of oil

At least a large portion of petroleum is believed to originate from biological processes.
However, an article in Nature - Kolesnikov, Kutcherov, Goncharov: Methane-derived hydrocarbons produced under upper-mantle conditions - uses spectroscopic methods applied to laser-heated diamond anvil cells to argue that at temperatures around 750-1250 °C and pressures around 20,000 atmospheres, methane transforms into ethane or propane or butane, combined with graphite and hydrogen. Under the same conditions, ethane decomposes into methane: the transition is reversible. It should also mean that it is easier to find oil, as The Swedish Royal Institute of Technology puts it.

New oil reserves

Such a statement is not too shocking: two days ago, 1-2 billion new barrels of light oil were announced by BG in Brazil, increasing the world's proven reserves by 0.1-0.2%. One week ago, BP found 4-6 billion new barrels in the Gulf of Mexico, previously thought to be "finished".

Review of the membrane minirevolution and other hep-th papers

Today, there are twelve new papers primarily labeled as hep-th papers. The first one, and the one that may attract the highest number of readers, is a review of the membrane minirevolution by Klebanov and Torri. However, I will mention the remaining eleven preprints, too.

Membrane uprising: a review

The membrane minirevolution was discussed on this blog as a minirevolution long before most people noticed that there was a minirevolution going on. Important papers by Bagger + Lambert and by Gustavsson (BLG) introduced a new, unusual Chern-Simons-like theory with 16 supercharges in 2+1 dimensions. It was argued that it had to describe two coincident M2-branes. It used to be thought that the CFT theories dual to M-theory on "AdS4 x S7/G" had no Lagrangian description - but BLG found one.

Upgraded: Hubble Space Telescope

Carina Nebula in the visible (top) and infrared (bottom) perspective. That's where stars are being born. The Hubble Space Telescope is alive, well, and upgraded. Click the picture above to see 7 pretty new pictures (via BBC) or see Google News or Blog Search. The book advertised on the left side is just one among many other books with pretty colorful photographs that the Hubble Space Telescope has produced over the years. Let me recall that the gadget should eventually be replaced by the James Webb Space Telescope.

Wednesday, September 09, 2009

ASU: Origins of the Universe

On April 6th, 2009, six Nobel prize winners discussed the origins of the Universe in Arizona. If you have 64 extra minutes, and/or if you liked a similar ASU discussion of whether our Universe is unique, here I bring you a new one. Baruch Blumberg got a medicine Nobel prize for a virus and he is an astrobiologist. Sheldon Glashow, David Gross, and Frank Wilczek are particle physicists who need no introduction. Wally Gilbert is a biochemist, Chemistry Nobel prize winner in 1980, founder of Biogen etc., capitalist, chairman of the Harvard Society of Fellows, and a photographic artist.

Frank Wilczek and Sheldon Glashow have a small fight about supersymmetry around 26:00. Wilczek explains that "axions" were named after a detergent whose name Wilczek liked so much that he waited for an opportunity to name a particle after it. Glashow reveals that WIMP stands for "Women in Maths and Physics at Harvard", which may be an actual secret organization.
:-)

9:09:09 09/09/09

This is not a real posting. Instead, it is just a placeholder posted on 09/09/09 at 09:09:09. Sorry for that! The comment thread can be used for any discussions. ;-) By the way, the numbers could lead you to ask whether 0.9999... is equal to 1.0000... Well, you may define your numbers in any way you want. But if you want these particular, possibly infinite sequences of decimal digits to represent a number system (namely the set of real numbers) that satisfies (x/3)*3=x, then you're forced to accept that 0.9999... must be identified with 1.0000..., simply because 1/3=0.3333... and 0.3333...*3 = 0.9999... ;-)

Tuesday, September 08, 2009

Hideki Yukawa: an anniversary

Today, several mathematicians and physicists would celebrate their birthday or deathday. (Some cosmologists are still confused why people don't celebrate their deathdays too often: such an asymmetry shamefully breaks the politically correct equivalence between the different arrows of time! Well, it indeed does: the breaking comes from the so-called "logical arrow of time".) Marin Mersenne was born in 1588, Joseph Liouville died in 1882, Hermann von Helmholtz died in 1894. But let us look at this guy.

Hideki Yukawa was born in Tokyo on January 23rd, 1907 and died in Kyoto on September 8th, 1981. Just like death is the time reversal of birth, Kyo-To is the time reversal of To-Kyo, so it makes sense in this case. When he was 26, he was hired as an assistant professor in Osaka, which was a great choice because two years later, in 1935, he published his theory of mesons. The pion was observed in 1947 and Yukawa received his Nobel prize in 1949: that was the first Japanese Nobel prize. He also predicted K-capture, i.e. the absorption of a low-lying, "n=1" electron by the nucleus of a complicated atom.

Sunday, September 06, 2009

Schellnhuber: West has exceeded quotas

In his previous life, Hans Joachim Schellnhuber used to be a fairly good theoretical physicist. For example, he solved the Schrödinger equation with an almost periodic potential in 1983. He spent a year or so as a postdoc at KITP in Santa Barbara (1981-82). But the times have changed. For a couple of years, he has been the director of the Potsdam Institute for Climate Impact Research and the German government's main climate protection adviser.

What he has just told Spiegel, in Industrialized nations are facing CO2 insolvency (click), is just breathtaking, and it helps me to understand how crazy political movements such as the Nazis or communists could have so easily taken over a nation that is as sensible as Germany. A few rotten steps in the hierarchy are enough for a loon to get to the very top. He is proposing the creation of a CO2 budget for every person on the planet, regardless of whether they live in Berlin or Beijing. Let us allow him to speak:

Saturday, September 05, 2009

Mojib Latif warns IPCC of cooling

Nude Socialist reports that Mojib Latif, a member of the IPCC, has warned his fellow IPCC members that we could see 10-20 years of cooling that will make people question the global warming orthodoxy. Highly trustworthy sources of mine describe Latif as one of the "better ocean modelers".
He used to say that the models were perfect, but when someone told him that perfect models meant that no extra funding for modelers was necessary, he "developed a deeper appreciation for the model shortcomings." ;-) So he appreciates that the ocean cycles and other effects may drive the climate in a different direction than the greenhouse effect for a decade or two. "Short-term" predictions are unreliable, he admits.

But it took me quite some time to understand the atmosphere of expectations among those people. At the beginning, I thought that Latif was just another quasi-religious guy who says that people should be afraid of global warming regardless of the observations and their consistency with the models. Later, I realized that I was probably right, but I also realized that Latif was a sort of hero at the same time. It is actually a heresy among the IPCC members to even think about the possibility that the next 10-20 years won't see any discernible global warming - despite the fact that this is precisely what has happened in the previous 10 years (and even 15 years, when you insist on statistical significance).

Friday, September 04, 2009

Magnetic monopoles seen in CM physics

Science Magazine has published a paper by 14 British and German authors, Dirac strings and magnetic monopoles in spin ice Dy2Ti2O7 (click), who claim to have seen, via diffuse neutron scattering, emergent magnetic monopoles in a spin ice on the highly frustrated pyrochlore lattice. These magnetic monopoles appear at the ends of "observable Dirac strings". This is way too bizarre a terminology, to say the least, because a basic defining property of the Dirac strings, as realized by Paul Dirac, is that they must be unobservable! ;-) OK, fine, they mean some magnetic flux tubes that actually don't respect the Dirac flux quantization rule. See also Nature (popular), Physics World, PhysOrg, Science Daily (click).

Let me say a few words about the Dirac strings. If you imagine a magnetic monopole of charge Q, i.e. an isolated North (or South) pole of a magnet (which normally comes in the dipole form only - with both poles), the magnetic field around it is radial and goes like "Q / R^2". Remember the letter "Q". The vector function "(X,Y,Z)/R^3" in three dimensions has the feature that its divergence equals zero. Well, not quite: it is a multiple of a delta-function.

Is our Universe unique, and how can we find out?

Thursday, September 03, 2009

Japanese voters may have committed economic harakiri

Wednesday, September 02, 2009

Trillions a year to be wasted on CO2 madness

Washington Post hails Obama as a climate skeptic

Marc Morano has pointed out an interesting article in the Washington Post, Obama Needs to Give a Climate Speech - ASAP, in which Marc Morano and Barack Obama are credited with the gradual fall of the climate hysteria or, if you want to use the original wording, with the "growing defection of experts from the scientific consensus view". ;-) You might think: what a strange pair of bedfellows. But is it really so strange? Of course, the author, Andrew Freedman, thinks that Barack Obama is obliged to give a fiery alarmist speech to please the movement of the little green men like Freedman himself. Well, I am not 100% sure whether Freedman is the U.S. Überpresident who can control the U.S. President. ;-)
After their private conversations, President Klaus was pleasantly surprised by Obama's charm and energy. Climate realist Klaus noted that Obama has complained that his aides and his environment have no sense of economic reality when it comes to policies focusing on CO2. It sounded like music from heaven to Klaus's ears, he said. I think that Freedman is right. Barack Obama has given less space to climate change in his speeches than George W. Bush did at the same stage of his presidency because Barack Obama is actually a climate crypto-realist. He is just surrounded by hordes of wrong, fearmongering people - and he has become a symbol of all their wrong plans. But at the very depth of his soul, he doesn't think that it's a good idea to regulate carbon. Am I wrong?

Tuesday, September 01, 2009

An unexpected constitutional crisis in Czechia

I would bet that the situation will be clarified pretty soon, but the news from the Constitutional Court of the Czech Republic, whose headquarters are located in the town of Brno, Moravia, sounds pretty shocking. All the big and not-so-big parties have begun the campaign for the early elections on October 9th-10th, 2009. Except that the Constitutional Court has just decided that the early elections and all the laws that allow them - and that shorten the mandate of the current Parliament - are unconstitutional, despite the fact that the bill about the early elections was adopted as a constitutional bill.

What happened? Mr Miloš Melčák was elected as a deputy for the social democratic party in 2006, except that, much like a dozen similar deputies in recent years, he "betrayed" the bulk of his party by allowing the center-right government to exist. Obviously, he was kicked out of the social democratic party. The "traitors" are being punished in a straightforward way: the parties won't include them on their lists, so they will lose their jobs and feeding troughs right after the following elections. Of course, Mr Miloš Melčák decided that any new elections that would remove him from the Parliament are bad, so they must be unconstitutional. He sent a complaint to the Constitutional Court. In a stunning development, the court ruled today that Mr Melčák is right. Congratulations. :-)

We are learning that according to the basic charter of human rights, Mr Melčák and others who are at risk enjoy the right to an "uninterrupted execution of a public appointment". They can't be removed by anyone, the court claims! ;-) The communist party used a similarly "uninterrupted" definition of democracy for four decades. The court believes that the early elections would be an example of an "unacceptable change of the critical attributes of a democratic rule of law" - wow - and it's such important stuff for the court that the court - except for two "dissenters" - thinks that the early elections can't take place before the court publishes its final verdict about the complaint! ;-) So the elections have been postponed indefinitely.

Now, this is obviously strong stuff. On the one hand, it's good that the constitutional court is trying to verify things, including the decisions that no one in the Parliament dares to doubt.
On the other hand, it's kind of crazy that it considers the early elections a "brutal violation of the basic attributes of democracy" and that it claims to have the right to judge which constitutional bill is more important than the other ones. Even if there were an inconsistency between the basic charter of human rights and freedoms on one side and the bill that declares the early elections on the other, both of them are constitutional bills, and the constitutional court would have to operate within this possibly perceived inconsistency.

I think that it's clear that the Parliament has the "moral" right to dissolve itself, via the expected steps involving the President, and the early elections are the obvious democratic solution (or an attempt at a solution) to the otherwise "unsolvable" situation. The interpretation of the "uninterrupted execution of a public appointment" is bizarre, speculative, reminiscent of the undemocratic regimes, and secondary. But the court is making this strangely interpreted right more important than the right of the citizens - and the bulk of their representatives - to democratically choose a new Parliament, which is clearly more important according to a basic common-sense understanding of democracy.

It's not clear how they will solve it. The court may try to delay the elections indefinitely - or not. Clearly, the lawmakers should search for a very speedy way to reshuffle the laws so that the complaint becomes moot. I am no lawyer, but I guess it must be possible to revoke all the laws that were claimed to lead to inconsistencies, cancel or update some paragraphs in the charter that lead to similar inconsistencies, and adopt a new bill about the early elections that will be consistent but effectively equivalent to the current one. Also, I think that the constitution is imperfectly designed if it doesn't allow early elections as a standard procedure. At any rate, early elections have been considered legitimate for quite some time - and even without a canonical wording in the constitutional "core", we've had some early elections in the past - so the sudden realization that they're unconstitutional is strange.

World War II began 70 years ago

It's been 70 years since Poland was invaded by Germany, which ignited the most brutal global conflict that the world has seen as of 2009. One day earlier, on August 31st, Germany staged an attack by would-be Polish troops on a radio station in Gleiwitz, in order to create a "justification" for the attack against Poland. Poland, with its underdeveloped and relatively weak army, had no real chance to win. It was surrounded by bastards on the West and on the East. The Ribbentrop-Molotov Pact (which Putin considers immoral) guaranteed that the Soviet Union would not protect Poland. In fact, it occupied the Baltic states and grabbed a piece of Poland, too.
Mathematical analysis is concerned with the study of infinite processes, and the differential calculus of Newton and Leibniz lies at its heart. It provided the foundation and the language for Newtonian mechanics and the whole of mathematical physics. Over the past three centuries it has permeated much of mathematics and science.

Associated with the limiting process there are many technically difficult "estimates" or inequalities, of a combinatorial or algebraic nature, which prepare the ground and justify passing to the limit. Such estimates are often extremely hard since they address some subtle and important aspect of the problem at hand. Establishing such an estimate becomes a key step, opening the door to a wide variety of applications. Over the past thirty years this study has undergone a mini-revolution in which a succession of hard problems of this nature have been solved, using a variety of novel techniques and ideas which often cross disciplinary boundaries and stimulate cross-fertilization.

Jean Bourgain is one of the leading analysts in the world today and he has played a major role in this revolution. He is much admired, especially by those who make regular use of the multitude of powerful techniques that he has provided. He has written over 350 papers, each of which is first rate, and a number of which contain solutions of central long-standing problems. The fields in which he has made such fundamental contributions include harmonic analysis, functional analysis, ergodic theory, partial differential equations, mathematical physics, combinatorics and theoretical computer science. Some of the well-known problems that he has solved include the embedding, with least distortion, of finite metric spaces in Hilbert space; extending the validity of Birkhoff's ergodic theorem to very general sparse arithmetic sequences; and the boundedness in Lp of the circular maximal function in two dimensions. He has also made a fundamental breakthrough in the study of the non-linear Schrödinger equation in the critical-exponent defocusing case, introducing new tools which have led to significant progress on this difficult problem.

A whole area where Bourgain has led the way, and which deserves special mention, is the field of arithmetic combinatorics and its applications. A notable example is his solution of the "local" version of the Erdös-Volkmann conjecture. The original conjecture asserts that any measurable subring of the real line has dimension either 0 or 1. This was proved by Edgar and Miller in 2003 and, around the same time, Bourgain established the local version. This provides a sharp and powerful quantification of this phenomenon and is technically a tour de force.

In 2004 Bourgain, Katz and Tao proved their celebrated finite field analogue, known as the "Sum-Product theorem". This is an elementary and fundamental quantification of the fact that finite fields have no subrings, and it measures a basic disjunction between the operations of addition and multiplication in a finite field. Bourgain has developed and extended this phenomenon, making it into a theory. A first application is to estimating algebro-geometric character sums. For this the standard tool has been the famous solution of the Weil conjectures, established by Deligne using Grothendieck's cohomology theory. However, for these methods to give non-trivial information one needs the Betti numbers of the corresponding varieties to be small compared to the size of the finite field.
What is remarkable about Bourgain's results is that they give results even when the Betti numbers are big.

Another application of Bourgain's theory, developed in collaboration with Gamburd, is a proof of the expander conjecture of Lubotzky for the group SL2(Fp) and the spectral gap conjecture for elements in the group SU(2). These are concerned with the spectra of the images, in high-dimensional representations of these groups, of elements of their group rings. They yield exponentially sharp equidistribution rates for random walks in these groups and are central to problems such as classical sieving in number theory and to aperiodic tilings of 3-dimensional space.

Bourgain has also developed some striking applications of his theory to theoretical computer science by giving a much sought-after explicit construction of pseudorandom objects called extractors. These, as well as the expanders, are basic building blocks used in fast derandomization algorithms.

Bourgain's spectacular contributions to modern mathematics make him a very deserving winner of the 2010 Shaw Prize in the Mathematical Sciences.

Mathematical Sciences Selection Committee
The Shaw Prize
28 September 2010, Hong Kong
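To see concretely what the sum-product disjunction mentioned in the citation means, here is a small numerical illustration in Python (my own sketch, not part of the citation): for a subset A of the finite field F_p, at least one of the sumset A+A and the product set A*A must be substantially larger than A itself. Sets with additive structure have large product sets, and sets with multiplicative structure have large sumsets:

# Sum-product phenomenon in F_p: for a smallish set A,
# max(|A+A|, |A*A|) is much larger than |A|.
p = 101  # a prime; 2 is a primitive root mod 101

def sumset(A):
    return {(a + b) % p for a in A for b in A}

def productset(A):
    return {(a * b) % p for a in A for b in A}

AP = set(range(1, 17))                    # arithmetic progression {1, ..., 16}
GP = {pow(2, k, p) for k in range(16)}    # geometric progression {1, 2, 4, ...}

for name, A in (("arithmetic", AP), ("geometric", GP)):
    s, m = len(sumset(A)), len(productset(A))
    print(f"{name}: |A|={len(A)}, |A+A|={s}, |A*A|={m}, max={max(s, m)}")

The arithmetic progression has a tiny sumset (31 elements) but a large product set; the geometric progression has a tiny product set (31 elements) but a large sumset. In neither case are both small, which is the qualitative content of the theorem.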
Time is treated differently in special relativity and quantum mechanics. What is the exact difference, and why does relativistic quantum mechanics (the Dirac equation etc.) work?

Er... time is treated differently in relativistic mechanics and non-relativistic quantum mechanics, but that is the same as saying that time is treated differently in relativistic and non-relativistic classical mechanics. – dmckee Jul 15 '11 at 2:04

Quantum mechanics doesn't per se imply relativity. – C.R. Jul 15 '11 at 3:18

The Schrödinger equation of non-relativistic QM is second order in the time derivative and is not Lorentz invariant. On the other hand, the Dirac equation is first order in the time derivative and is invariant under Lorentz transformations. So I think this is the main difference between non-relativistic and relativistic QM. In the latter, the time is treated in (almost) the same way as spatial coordinates. Also the spin is a relativistic effect because it emerges naturally only in relativistic QM. – Andyk Jul 15 '11 at 14:33

Dear @ANKU: Your above comment "the Schrodinger equation of non-relativistic QM is second order in the time derivative" was probably written in a bit of a hurry. :-) More importantly, is it possible to formulate the main question using precise terms? – Qmechanic Jul 17 '11 at 15:51

Oops, it's second order in space and first order in time. But the point is this: we make it first order in both the space and time derivatives so that it becomes Lorentz invariant. Right? – Andyk Jul 18 '11 at 2:12

3 Answers

(accepted answer) Quantum mechanics can be reconciled with special relativity to make quantum field theory, but there are some awkward things going on in that marriage. SR treats time symmetrically with position, but in quantum mechanics, position is an operator and time isn't. Baez at UCR has a nice discussion of that here: http://math.ucr.edu/home/baez/uncertainty.html

Well, QFT reconciles this by disposing of the position as an operator. – Marek Jul 19 '11 at 20:18

And Dirac's approach (and Feynman's) makes time an "operator" (or equivalently an integration variable in the path integral). This answer is no more satisfying than saying "In classical mechanics, position is a function, and time is a parameter". That's true, but only if you choose to parametrize by time and not proper time. The same is true in quantum mechanics. – Ron Maimon Aug 13 '11 at 20:09

Time is always time. It is special. Another thing is its involvement in transformations of measured data from one reference system to another. This involvement does not change its meaning. In a given reference frame the time is unique and the space coordinates are multiple - according to the number of particles to be observed. Concerning the Dirac equation, it took some effort to make it work after its invention. It works because it was made to work, if you like. Besides, it depends on what exactly you mean by "relativistic QM". QED, for example, is rather difficult to make work. Its sensible results only appear at page 500 or so, when the infrared catastrophe is resolved.

I am sending you my article in PDF format as an attachment. Please can you analyze it? And please visit my web site 'www.timeflow.org'. Thanks.

Combining General Relativity Theory with Quantum Theory

"The lifetime of a mass or an energy in space is its Mc^2 energy" Ref. [3].
Due to this characteristic feature of a substance, conversions of photons of the wave-particle (energy-mass) type, or of electrons, continue consistently. Hence, the binary conversion behavior of a photon implies the binary conversion behavior of a big-mass space object. In other words, the behavior of a photon is a miniature version of the behavior of a big-mass space object. Because of General Relativity Theory, one hour in the Sun remains behind with respect to one hour on the Earth. One hour on the Earth remains behind with respect to one hour on the Moon. One hour on the Moon remains behind with respect to one hour in an Alpha Ray. One hour in the Alpha Ray remains behind with respect to one hour in a Beta Ray. One hour in the Beta Ray remains behind with respect to one hour in a Gamma Ray. They all show the same physical behaviour.

Let us observe the behaviors of two photons, say one with a big mass and the other with a small mass. Since the photon with small mass has a short lifetime, it will transform faster from mass into energy and vice versa. The big-mass photon has a longer lifetime. Hence, the speed of transformation from mass to energy or from energy to mass is slower. The smaller the mass of a photon is, the bigger its kinetic energy is. The kinetic energy of a photon is given by e = hf.

"In order to calculate the lifetime of a mass or an energy in space, we can assume the time flow to be time/energy; in any case, no matter what value we assign to the time flow, that will not change the present result: the lifetime of a mass or an energy in space is its Mc^2 energy. When this is calculated, the lifetime of a 1 kg mass in space is 2,851,927,903.26... years, or 9 x 10^16 s" Ref. [3].

All photons' and all free sub-atomic particles' lifetimes are their periods, or 1/f. In other words, the periods are the lifetimes for photons and for free sub-atomic particles. And a period is equal to its Mc^2 particle energy x 1 s/joule (or erg). If the period is high, the lifetime is high, and Mc^2 is high - or vice versa, like astronomical objects. This is a universal law. The mass of a low-frequency photon has a big value. For example, substituting an Alpha Ray with a frequency of 1.67 x 10^9 Hz into the formula yields

e = hf = (6.62 x 10^-34) x (1.67 x 10^9), time = (energy) x (time flow), t = 1/(1.67 x 10^9) = Mc^2 x 1 s/joule; M = 6.64 x 10^-27 kg.

On the other hand, for a high-frequency photon the mass has a small value. As an example to show this is the case, substituting a Beta Ray with a frequency of 1.22 x 10^13 Hz into the formula gives

e = hf = (6.62 x 10^-34) x (1.22 x 10^13), t = 1/(1.22 x 10^13) = Mc^2 x 1 s/joule; M = 9.109 x 10^-31 kg.

These two examples show us that equating the period of a photon to its Mc^2 energy in a unit time flow provides us with the actual values of the mass. The same holds for X-rays, gamma rays, and light rays. When the mass decreases, the frequency increases, and the transformation from mass to energy becomes more uncertain. We understand that Quantum Mechanics is not different from Classical Mechanics and Relativistic Mechanics, in fact. The characteristics of the particles in Quantum Theory are the same as the character of the mass in the General Relativity Theory. They are subject to the same physical processes. So, Einstein's expression "God does not throw dice!" is still valid.

References:

[1] Salih Kircalar, "Utilization of Time: Time Flow", Galilean Electrodynamics 13, SI 1, 2 (2002).

[2] Salih Kircalar, "Time Effects Caused by Mass or Energy", Galilean Electrodynamics 15, SI 1, 8 (2004).

[3]
Salih Kircalar, "Mass or Energy & Quantum Mechanics", Galilean Electrodynamics 18, J/F, 2 (2007).

Salih Kircalar, Güzel Otomotiv, Kizilelma Cad. No: 99/B, Findikzade-Istanbul, TURKEY, e-mail: kircalars@hotmail.com

This answer (v1) seems to be just rambling text with no relation to the question (v1). – Qmechanic Sep 23 '12 at 19:51
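A side note for readers of the comment thread above: the point about derivative orders is easiest to see by writing the standard equations side by side (in units with hbar = c = 1):

\begin{align}
i\,\partial_t \psi &= -\tfrac{1}{2m}\nabla^2 \psi
  && \text{Schr\"odinger: 1st order in } t,\ \text{2nd in } x;\ \text{not Lorentz invariant} \\
(\partial_t^2 - \nabla^2 + m^2)\,\phi &= 0
  && \text{Klein-Gordon: 2nd order in both; Lorentz invariant} \\
(i\gamma^\mu \partial_\mu - m)\,\psi &= 0
  && \text{Dirac: 1st order in both; Lorentz invariant}
\end{align}

What Lorentz invariance requires is that time and space derivatives enter on an equal footing, not that the common order be first: the Klein-Gordon equation is second order in both and perfectly Lorentz invariant.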
electron pair

The topic electron pair is discussed in the following articles:

electrophilic reactions

• TITLE: electrophile (chemistry)
in chemistry, an atom or a molecule that in chemical reactions seeks an atom or molecule containing an electron pair available for bonding. Electrophilic substances are Lewis acids (compounds that accept electron pairs), and many of them are Brønsted acids (compounds that donate protons). Examples of electrophiles are hydronium ion (H3O+, from Brønsted...

Lewis theory of covalent bonding

• TITLE: Gilbert N. Lewis (American chemist)

• TITLE: chemical bonding (chemistry)
SECTION: Lewis formulation of a covalent bond

• TITLE: chemical bonding (chemistry)
SECTION: Electron-deficient compounds
...bonds or their lengths can be assessed. In the form in which it has been presented, it also fails to suggest the shapes of molecules. Furthermore, the theory offers no justification for regarding an electron pair as the central feature of a covalent bond. Indeed, there are species that possess bonds that rely on the presence of a single electron. (The one-electron transient species...

magnetic resonance

• TITLE: magnetic resonance (physics)

molecular orbitals

• TITLE: spectroscopy (science)
SECTION: Electronic transitions
...The ordering of MO energy levels as formed from the atomic orbitals (AOs) of the constituent atoms is shown in Figure 8. In compliance with the Pauli exclusion principle, each MO can be occupied by a pair of electrons having opposite electron spins. The energy of each electron in a molecule will be influenced by the motion of all the other electrons. So that a reasonable treatment of electron...

• TITLE: chemical bonding (chemistry)
SECTION: Molecular orbitals of H2 and He2
The central importance of the electron pair for bonding arises naturally in MO theory via the Pauli exclusion principle. A single electron pair is the maximum number that can occupy a bonding orbital and hence give the greatest lowering of energy. However, MO theory goes beyond Lewis' approach by not ascribing bonding to electron pairing; some lowering of energy is also achieved if only one...

nitrogen group elements

• TITLE: nitrogen group element (chemical elements)
SECTION: Similarities in orbital arrangement
Another similarity among the nitrogen elements is the existence of an unshared, or lone, pair of electrons, which remains after the three covalent bonds, or their equivalent, have been formed. This lone pair permits the molecule to act as an electron pair donor in the formation of molecular addition compounds and complexes. The availability of the lone pair depends upon various factors, such as...

quantum mechanics of bonding

• TITLE: chemical bonding (chemistry)
SECTION: The quantum mechanics of bonding
A full theory of the chemical bond needs to return to the roots of the behaviour of electrons in molecules. That is, the role of the electron pair and the quantitative description of bonding must be based on the Schrödinger equation and the Pauli exclusion principle. This section describes the general features of such an approach. Once again, the discussion will be largely qualitative and...

valence bond theory

• TITLE: chemical bonding (chemistry)
SECTION: Valence bond theory
The basis of VB theory is the Lewis concept of the electron-pair bond.
Broadly speaking, in VB theory a bond between atoms A and B is formed when two atomic orbitals, one from each atom, merge with one another (the technical term is overlap), and the electrons they contain pair up (so that their spins are ↓↑). The merging of orbitals gives rise to constructive...

VSEPR theory

• TITLE: chemical bonding (chemistry)
SECTION: Molecular shapes and VSEPR theory
A Lewis structure, as shown above, is a topological portrayal of bonding in a molecule. It ascribes bonding influences to electron pairs that lie between atoms and acknowledges the existence of lone pairs of electrons that do not participate directly in the bonding. The VSEPR theory supposes that all electron pairs, both bonding pairs and lone pairs, repel each other—particularly if they...
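The VSEPR counting described in the last entry is mechanical enough to capture in a few lines of code. Below is a minimal Python sketch under the usual textbook simplifications; the geometry table and the example molecules are illustrative choices of mine, not taken from the articles above:

# VSEPR: the electron geometry follows from the number of electron domains
# (bonding pairs + lone pairs) around the central atom, since all pairs repel.
ELECTRON_GEOMETRY = {2: "linear", 3: "trigonal planar", 4: "tetrahedral",
                     5: "trigonal bipyramidal", 6: "octahedral"}

def vsepr(molecule, bonding_pairs, lone_pairs):
    domains = bonding_pairs + lone_pairs
    print(f"{molecule}: {domains} domains -> {ELECTRON_GEOMETRY[domains]} "
          f"electron geometry, {lone_pairs} lone pair(s)")

vsepr("CO2", 2, 0)  # linear molecule (each double bond counts as one domain)
vsepr("H2O", 2, 2)  # tetrahedral domains; the molecular shape is bent
vsepr("NH3", 3, 1)  # tetrahedral domains; the shape is trigonal pyramidal
vsepr("CH4", 4, 0)  # tetrahedral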
Sunday, April 30, 2006

The Final Theory: two stars

A short comment: a reader has pointed out that right now, the crackpot book by Mark McCutcheon has an average rating of 2 stars because of 200 one-star reviews that suddenly appeared on the website. The new reviews are a lot of fun: many reviews come from Brian Powell, Jack Sarfatti, Greg Jones, Quantoken, David Tong, me, and many others. Many of the readers have written several reviews - and you can see how they struggled to make their reviews acceptable. ;-) When we first reported on the strange system, McCutcheon's book had an average of 5 stars and no bad reviews at all. The previous blog article about this story is here.

Saturday, April 29, 2006

How to spend 6 billion dollars?

Related: What can you buy for 300 billion dollars?

What is the best way to spend 6 billion dollars?
• Two weeks of Kyoto
• ILC: the linear collider
• One month of war in Iraq
• Millions of free PCs for kids
• Ten space shuttle flights

Additional comments: the world pays about 6 billion dollars for two weeks of the Kyoto protocol, which cools down the Earth by 0.00006 degrees or so. The International Linear Collider would have the capacity to measure physics at several TeV more accurately than the LHC, but it is also more expensive - about 6 billion dollars. The U.S. pays 6 billion dollars for one month of its military presence in Iraq. One could buy 60 million computers for kids if the price were $100 as MIT promises. Whenever you launch the space shuttle, you pay about 600 million USD.

Klaus meets Schwarzenegger

Figure 1: Californian leader Schwarzenegger with his Czech counterpart during a friendly encounter in Sacramento. Arnold has accepted Klaus' invitation to the Czech Republic.

When Czech president Václav Klaus visited Harvard, he complained that the capitalism of the European Union is not the genuine capitalism that he always believed in - the capitalism as taught by the Chicago school - but rather a kind of distorted, socialized capitalism, something that could be taught here at Harvard. ;-) Finally, he could speak to his peers - at the Graduate School of Business at the University of Chicago. The other speakers over there agree with Klaus' opinions. In his speech, he explained that the Velvet Revolution was done by the people inside the country: it was not imported. Equally importantly, Americans have a naive understanding of the European unification because they don't see the centralized, anti-liberal dimension of this process.

Twenty years after Chernobyl

On Wednesday morning, it's been 20 years since the Chernobyl disaster. The communist regimes could not pretend that nothing had happened (although in the era before Gorbachev, they could have tried to do so), but they attempted to downplay the impact of the meltdown. At least this is what we used to say for twenty years. You may want to look at how the BBC news about the Chernobyl tragedy looked 20 years ago. Ukraine remembered the event (see the pictures) and Yushchenko wants to attract tourists to Chernobyl. You may see a photo gallery here. Despite the legacy, Ukraine has plans to expand nuclear energy.

Today I think that the communist authorities did more or less exactly what they should have done - for example, trying to avoid irrational panic. It seems that only 56 people were killed directly and 4,000 people indirectly. See here.
On the other hand, about 300,000 people were evacuated, which was a reasonable decision, too. And animals are perhaps the best witnesses for my statements: the exclusion zone - now an official national park - has become a haven for wildlife, as National Geographic also explains:

• Reappeared: Lynx, eagle owl, great white egret, nesting swans, and possibly a bear
• Introduced: European bison, Przewalski's horse
• Booming mammals: Badger, beaver, boar, deer, elk, fox, hare, otter, raccoon dog, wolf
• Booming birds: Aquatic warbler, azure tit, black grouse, black stork, crane, white-tailed eagle (the birds especially like the interior of the sarcophagus)

Ecoterrorists in general and Greenpeace in particular are very wrong whenever they say that the impact of technology on wildlife must always have a negative sign. In other words, the impact of that event has been exaggerated for many years. Moreover, it is much less likely that a similar tragedy would occur today. Nuclear power has so many advantages that I would argue that even if the probability of a Chernobyl-like disaster in the next 20 years were around 10%, it would still be worth using nuclear energy.

Friday, April 28, 2006

Yuval Ne'eman died

Because this is a right-wing physics blog, it is necessary to inform you about the saddening news - news I heard from Ari Pakman yesterday - that Yuval Ne'eman (*1925), an eminent Israeli physicist and right-wing politician, died yesterday. If you're interested, you can read the article about him on Wikipedia and Peter Woit's blog, much like the text of Yisrael Medad, Ne'eman's political advisor. News summarized by Google is here.

In 1961, Ne'eman published a paper with a visionary title:

• Derivation of strong interactions from a gauge invariance

As far as I understand, the symmetry he was talking about was the flavor symmetry, which is not really a gauge symmetry. Ne'eman co-authored the book "The Eightfold Way" with Murray Gell-Mann, contributed tremendously to the development of nuclear and subnuclear physics in Israel (which includes the nuclear weapons), and was the president of Tel Aviv University, among many other organizations.

Science and fundamental science

Chad Orzel did not like the proposals to build the ILC because they are derived from the assumption that high-energy physics is a more fundamental part of physics than other parts - and he disagrees with this assumption. Instead, he argues that technology is what matters and it does not depend on particle physics. Also, Chad explains that one can have a long career without knowing anything about high-energy physics - which seems to be a rather lousy method to determine the fundamental value of different things.

There are three main motivations why people stretch their brains and think about difficult things and science. We may describe the corresponding branches of science as follows:

• recreational mathematics
• applied science
• pure science

Recreational mathematics is studied by people to entertain themselves and show others (and themselves) that they are bright. Chess, in Flash or without it, may be viewed as a part of this category. People do this sort of activity because it is fun. Comedians are doing similar things although their work requires rather different skills. In this category, entertainment value is probably the main factor that determines the importance. People do whatever makes them happy and excited.
If someone else does things on their behalf, they prefer those with a higher entertainment value. The invisible hand of freedom and the free market pretty much takes care of this activity. The rules of chess depend on many historical coincidences. Other civilizations could have millions of other games with different rules, and the details really don't matter: what matters is that you have a game that requires you to turn your brain on.

Applied science is studied because scientific insights can lead to economic benefits. They can improve people's lives, their health, give them new gadgets, and so forth. The practical applications are the driving factor behind applied science. People, corporations, and scientists pay for applied science because it brings them practical benefits. It is often (but not always) the case that the benefits occur at shorter time scales, and it is possible for many corporations and individuals to provide applied scientists with funding. And if you look around, you will see that many fields of applied science are led by laboratories of large corporations - such as IBM, drug companies, and others.

Pure science is studied because human beings have an inherent desire to learn the truth. In our Universe, the truth turns out to be hierarchical in nature. It is composed of a large number of particular statements and insights that can typically be derived from others. For equivalent insights, the derivations can work in both directions. In many other cases, one can only derive A from B but not B from A. The primary axioms, equations, and principles that can be used to derive many others are, by definition, more fundamental.

The word "fundamental" means "elementary, related to the foundation or base, forming an essential component or a core of a system, entailing major change". If you respect the dictionaries, the physics of polymers may be interesting, useful, and important - but it is not too fundamental. If Chad Orzel or anyone else offers a contradictory statement, he or she abuses the language. Among the disciplines of physics, high-energy physics is more fundamental than low-energy physics. Moreover, I think that as long as we talk about pure science, being "fundamental" in this sense is a key component of being important. If we want to learn the scientific truth about the world, we want the most fundamental and accurate truth we can get.

I am not saying that other fields should be less supported. Nor am I proposing a hierarchical structure among the people who choose different specializations. What I am saying is that other fields that avoid fundamental questions about Nature are being chosen as interesting not only because of their pure scientific value but also because of their practical or entertainment value. You may be trying to figure out what happens with a particular superconductor composed of 150-atom molecules under particular conditions. The number of similar problems may exceed the number of F-theory flux compactifications. How can you decide whether a problem like that - or any other problem in science - is important? As argued above, there are many different factors that determine the answer: entertainment value, practical applications, and the ability to reveal major parts of the general truth. I guess that practical applications will remain the most likely justification of specialized research on a very particular type of superconductor.
People and societies may have different motivations to study different questions of science. If you extend this line of reasoning, you will realize that people can also do many things - and indeed, they do many things - that have no significant relation to science. And they can spend - and indeed, do spend - their money on many things that have nothing to do with science, especially pure science. And it's completely legitimate, and many of these things are important or cool.

When you think about the support of science in general, what kind of activity do you really have in mind? I think that pure science is the primary category that we consider. Pure science is the most "scientific" part of science - one that is not motivated by practical applications. As we explained above, pure science has a rather hierarchical structure of insights.

If something belongs to pure science, it does not mean that it won't have any applications in the future. In the 1910s-1930s, radioactivity was abstract science. By various twists and turns, nuclear energy became pretty useful. There are surely many examples of this kind. The criterion that divides science into pure science and applied science is not the uncertain answer to the question of whether the research will ever be practically useful: the criterion is whether the hypothetical practical applications are the main driving force behind the research.

Societies may be more interested in pure science or less interested in pure science. The more they are interested in pure science, the more money they are willing to pay for it. A part of this money is going to pure science that is only studied as pure science; another part will end up in fields that are partly pure and partly applied. Chad Orzel thinks that if America saves half a billion dollars on the initial stages of the ILC, low-energy physics will get an extra half a billion dollars. I think he is not right. The less a society cares about pure science - even about the most fundamental questions in pure science such as those in high-energy physics - the less it is willing to pay for other things without predictable practical applications or entertainment value. Eliminating high-energy experimental physics in the U.S. would be a step towards the suppression of experimental pure science in general.

Thursday, April 27, 2006

Iran may nuke Czechia, Italy, Romania

US told to invest in particle physics

The National Academy of Sciences has also recommended that the U.S. invest in neutrino experiments and high-precision tests of the Standard Model to stop the motion of the center of mass of particle physics away from the U.S.

New York Times

Dennis Overbye from the New York Times describes the same story: the ILC must be on American soil. See also Nature.

CERN new tax

Meanwhile, CERN has adopted the digital solidarity principle: 1% of ICT-related transactions must be paid to CERN.

Matt Strassler has just described his fascinating work on the Pomeron with Richard Brower, Chung-I Tan, and Joe Polchinski. Return 40 years into the past. The research that eventually evolves into string theory is proposed as a theory of strong interactions: something that would be known as a failed theory of strong interactions for the following 30 years.
Things only started to change slowly after the 1997 discovery by Juan Maldacena, when a steady flow of new insights eventually led to a nearly full revival of the description of strong interactions using a "dual" string theory, albeit one more complicated than what was envisioned in the late 1960s. QCD can be equivalently described as the old string theory with some modern updates: higher-dimensional and braney updates.

The basic concepts of Regge physics included the Regge trajectory, a linear relation between the maximum spin "J" that a particle of squared mass "m^2" can have and that squared mass; the slope - the coefficient "alphaprime" of the linear term "alphaprime times m^2" - is comparable to the inverse squared QCD scale. The dependence of "J" on "m^2" could a priori be a general Taylor expansion, but both experimentally and theoretically, the linear relation was always preferred. Note that "alphaprime" in "the" string theory that unifies all forces is a much, much smaller area than the inverse squared QCD scale (roughly the cross section of the proton). We are talking about a different setup in AdS/QCD, where four-dimensional gravity may be forgotten. This picture is not necessarily inconsistent with the full picture of string theory with gravity as long as you appreciate the appropriately warped ten-dimensional geometry.

At this moment, you should refresh your memory of chapter 1 of the Green-Schwarz-Witten textbook. There is an interesting limit of scattering in string theory (a limit of the Veneziano amplitude) called the Regge limit: the center-of-mass energy "sqrt(s)" is sent to infinity while the other Mandelstam variable "t" - which is negative in physical scattering - is kept finite. The scattering angle, of order "sqrt(-t/s)", therefore goes to zero. In this limit, the Veneziano amplitude is dominated by the exchange of intermediate particles of spin "J". Because the indices from the spin must be contracted, the interaction contains "J" derivatives, and it therefore scales like "Energy^J". Because there are two cubic vertices like that in the simple exchange-type Feynman diagram, the full amplitude goes like "Energy^{2J} = s^J", where the most important value of the spin "J" is the linear function of "t" given by the linear Regge relation above.

The amplitude therefore behaves in the Regge limit like "s^{J(t)}", where "J(t)" is the appropriate linear Regge relation. You can also write it as "exp(J(t).ln(s))". Because "t = -s.angle^2", you see that the amplitude is Gaussian in the angle. The width of this Gaussian, measured in terms of the transferred momentum "sqrt(-t)", goes like "1/sqrt(ln(s))" in string units. Correspondingly, the width of the amplitude Fourier-transformed into transverse position space goes like "sqrt(ln(s))" in string units. That should not be surprising: "sqrt(ln(s))" is exactly the typical transverse size of the string that you obtain by regulating the integral "int dsigma x^2", which equals, in terms of the oscillators, "sum (1/n)", whose logarithmic divergence must be regulated. The sum goes like "ln(n_max)", where "n_max" must be chosen proportional to "alphaprime.s" or so.

If you scatter two heavy quarkonia (or 7-7 "flavored" open strings in an AdS/CFT context - think of the Polchinski-Strassler N=1* theory), which is the example you want to consider, the interaction contains a lot of contributions from various particles running in the channel. But the formula for the amplitude can be written as a continuous function of "s,t". So it seems that you are effectively exchanging an object whose angular momentum "J" is continuous.
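To see the Gaussian behavior concretely, here is a minimal numerical sketch - my own toy illustration of the kinematics above, not anything from the paper being described. It assumes string units (alphaprime = 1) and the simplest linear trajectory J(t) = 1 + t:

```python
import numpy as np

# Toy Regge-limit amplitude |A| ~ s^{J(t)} with a linear trajectory
# J(t) = 1 + alphaprime*t and t = -s*theta^2, in string units (alphaprime = 1).
# Then |A| = s * exp(-theta^2 * s * ln s): a Gaussian in the angle whose
# width, in terms of the transferred momentum sqrt(-t) ~ theta*sqrt(s),
# shrinks like 1/sqrt(ln s).
def amplitude(s, theta):
    t = -s * theta**2
    return s ** (1.0 + t)          # = s^{J(t)}

for s in [1e2, 1e4, 1e8]:
    sigma = 1.0 / np.sqrt(s * np.log(s))       # angular width of the Gaussian
    ratio = amplitude(s, sigma) / amplitude(s, 0.0)
    print(f"s = {s:.0e}: angular width {sigma:.1e}, A(sigma)/A(0) = {ratio:.3f}")
# The ratio is exp(-1) ~ 0.368 for every s, confirming the Gaussian shape;
# the momentum-space width sigma*sqrt(s) = 1/sqrt(ln s) shrinks only logarithmically.
```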
Whatever this "object" is, you will call it a pomeron. In perturbative gauge theory, such pomeron exchange is conveniently and traditionally visualized in terms of Feynman diagrams that are proportional to the minimum power of "alpha_{strong}" that is allowed for a given power of "ln(s)" that these diagrams also contain: you want to maximize the powers of "ln(s)" and minimize the power of the coupling constant and keep the leading terms. When you think for a little while, this pomeron exchange leads to the exchange of DNA-like diagrams: the diagrams look like ladder diagrams or DNA. There are two vertical strands - gluons - stretched in between two horizontal external quarks in the quarkonia scattering states. And you may insert horizontal sticks in between these two gluons, to keep the diagrams planar. If you do so, every new step in the ladder adds a factor of "alpha_{strong}.ln(s)". You can imagine that "ln(s)" comes from the integrals over the loops. What is the spin of the particles being exchanged for small values of "t", the so-called intercept (the absolute term in the linear relation)? It is a numerical constant between one and two. Matt essentially confirmed my interpretation that you can imagine QCD to be something in between an open string exchange (whose intercept is one) and a closed string exchange (whose intercept is two). The open string exchange with "J=1" is valid at the weak QCD coupling - it corresponds to a gluon exchange. At strong coupling, you are exchanging closed strings with "J=2". For large positive values of "t", you are in the deeply unphysical region because the physical scattering requires negative values of "t" (spacelike momentum exchange). But you can still talk about the analytical structure of the scattering amplitude - Mellin-transformed from "(s,t)" to "(s,J)". For large positive "t", you will discover the Regge behavior which agrees with string theory well. Unfortunately, this is the limit of scattering that can't be realized experimentally. Nevertheless, for every value of "t", you find a certain number of effective "particles" that can be exchanged - with spins up to "J" which is linear in "t". The negative values of "t" can be probed experimentally, and this is where string theory failed drastically in the 1970s: string theory gave much too soft (exponentially decreasing) behavior of the amplitude at high energies even though the experimental data only indicated a much harder (power law) behavior. So now you isolate two different classes of phenomena: • the naive string theory is OK for large positive "t" • the old string theory description of strong interactions fails for negative "t"; the linear Regge relation must break down here But the old string theory only fails for negative "t" if you don't take all the important properties of that string theory into account. The most important property that was forgotten 35 years ago was the new, fifth dimension. The spectrum of particles - eigenvalues of "J" - is related to the Laplacian but it is not just a four-dimensional Laplacian; it also includes a term in the additional six dimensions, especially the fifth holographic dimension of the anti de Sitter space. And this term can become - and indeed, does become - important. What is the spectrum of allowed values of "J" of intermediate states that you can exchange at a given value of "t"? 
Recall that each allowed value of "J" of the intermediate objects generates a pole in the complex "J" plane - or a cut whenever the spectrum of allowed "J" becomes continuous. For large positive "t", the spectrum contains a few (roughly "alphaprime.t") eigenvectors with positive "J"s, and a continuum with "J" being anything below "J=1". For negative values of "t", you only see the continuum spectrum (a cut) for "J" smaller than one. Don't forget that the value of "J" appears as the exponent of "s" in the amplitude for the Regge scattering. We are talking about something like "s^{1.08}" or "s^{1.3}" - both of these exponents appear in different kinds of experiments and can't be calculated theoretically at this moment. Matt argues convincingly that the Regge behavior for large positive "t", with many poles plus the cut below "J=1", is universal. The "empty" behavior at large negative "t" where you only see the continuum below "J=1" is also universal. It is only the crossover region around "t=0" that is model-dependent and where the details of the string-theoretical background enter. And they can calculate the spectrum of "J" as a function of "t" in toy models from string theory. They assume that the string-theoretical scattering in the AdS space takes place locally in ten dimensions, and just multiply the corresponding amplitudes by various kinematical and warp factors - the usual Polchinski-Strassler business. The spectrum of poles and cuts in the "J" plane reduces to the problem to find the eigenvalues of a Laplacian - essentially to a Schrödinger equation for a particle propagating on a line. You just flip the sign of the energy eigenvalues "E" from the usual quantum mechanical textbooks to obtain the spectrum of possible values of "J". And they can determine a lot of things just from the gravity subsector of string theory - where you exchange particles of spin two (graviton) plus a small epsilon that arises as a string-theoretical correction. For large positive "t", you obtain a quantum mechanical problem with a locally negative (binding) potential that leads to the discrete states - those that are seen at the Regge trajectory. When all these things are put together, they can explain a lot about physics observed at HERA. The calculation is not really a calculation from the first principles because they are permanently looking at the HERA experiments to see what they should obtain. But they are not the first physicists who use these dirty tricks: in the past, most physicists were constantly cheating and looking at the experiments most of their time. ;-) Wednesday, April 26, 2006 ... Français/Deutsch/Español/Česky/Japanese/Related posts from blogosphere Rae Ann: Alien recycling By Rae Ann, one of the four winners who have seen the #400,000 figure. My first grader brought home some interesting EPA publications for school children. While I totally support teaching children to recycle and be mindful of wise use of resources I think it's a little off to tell them that 'garbage leads to climate change'. And what's with the little flying saucers and aliens (graphics in the publications)? What do they have to do with climate change and garbage?? One publication does open with the statement, "Space creatures might think the idea of reusing containers is an alien concept but here on Earth it's easy to keep an old jar out of the trash and give it new life." (That is a direct quote and the missing comma is their punctuation error.) Well, how does the government know that aliens don't recycle? 
Is it because they have left a bunch of their stuff here? Hmm? Sounds like a very prejudiced and discriminatory attitude to me. What is that teaching our kids about aliens??

The Czech Fabric of the Cosmos

My friend Olda Klimánek has translated Brian Greene's book "The Fabric of the Cosmos" into Czech - well, I was checking him a bit, reading his translation twice - and the book has just been released by Paseka, a Czech publisher, under the boring name "Struktura vesmíru" (The Structure of the Universe). The other candidate titles were just far too poetic. I think he is a talented writer and translator, and there will surely be many aspects in which his translation is gonna be better than my "Elegantní vesmír" (The Elegant Universe).

What I find very entertaining is the different number of pages of this book (in its standard hardcover editions) in various languages:

• Czech: Struktura vesmíru, 488 pages
• Polish: Struktura kosmosu, 552 pages
• English: The Fabric of the Cosmos, 576 pages
• Portuguese: O tecido do cosmo, 581 pages
• Italian: La trama del cosmo, 612 pages
• French: La magie du Cosmos, 666 pages
• Korean: 우주의 구조, 747 pages
• German: Der Stoff, aus dem der Kosmos ist, 800 pages

I am not kidding, and as far as I know, Olda's translation is complete. If you need to know, 800/488 = 1.64. ;-) The Czech Elegant Universe was also much shorter than the German one, but the ratio was less dramatic. I like the rigid rules of German, but this inflation of the volume is simply off base. The Czech language has similar grammar rules, but it avoids articles and has a much freer word order. A slightly more complex system of declension removes many prepositions. And Olda may simply be a more concise translator. :-)

Tuesday, April 25, 2006

Uncle Al: on the equivalence principle

By Uncle Al, who has submitted a #400,000 screenshot.

Does the Equivalence Principle have a parity violation? Weak interactions (e.g., the Weak Interaction) routinely violate parity conservation. Gravitation is the weakest interaction. Either way, half of contemporary gravitation theory is dead wrong. Gravitation theory can be written parity-even or parity-odd; spacetime curvature or spacetime torsion. Classical gravitation has Green's-function Newton and metric Einstein, or affine Weitzenböck and teleparallel Cartan. String theory has otherwise and heterotic subsets. Though their maths are wildly different, testable empirical predictions within a class are exactly identical...

...with one macroscopic disjoint exception: Do local left and right hands of identical chemical composition vacuum free-fall identically? Parity-even spacetime is blind to geometric parity (chirality simultaneously in all directions). Parity-odd spacetime would manifest as a background pseudoscalar field. The left foot of spacetime would be energetically differently fit by a sock or a left shoe compared to a right shoe. String theory could be marvelously pruned. Does a single-crystal solid sphere of space group P3(1)21 quartz (right-handed screw axes) vacuum free-fall identically to an otherwise macroscopically identical single-crystal solid sphere of space group P3(2)21 quartz (left-handed screw axes)? Both will fall along minimum-action paths. In parity-odd spacetime those local paths will be diastereotopic and measurably non-parallel - a background left foot fit with left and right shoes.
Frank Wilczek: Fantastic Realities

Technical note: Everyone who was visitor number 400,000 and who submitted a URL with a screenshot proving the number today is allowed to post any article on this blog, up to 6 kilobytes. The reader #400,000 was Rae Ann, who had just returned from a trip - what timing. :-) Uncle Al still had the page open (after a reload) when it was showing #400,000, much like Doug McNeil. I have no way to tell who was the first one. The others just reloaded the page and obtained the same number because it was not their first visit of the day, and it thus generated no increase of the counter. Congratulations to all three.

Yes, I just saw this irresistible book cover at Betsy Devine's blog. The book is called "Fantastic Realities", and Frank Wilczek is apparently using a QCD laser. The journeys include many of Wilczek's award-winning Reference Frame columns. Have you heard of Wilczek's Reference Frame columns in Physics Today? Let me admit that I have not. ;-) Because of the highly positive reviews, your humble correspondent has just decided to double the number of copies that Frank Wilczek is going to sell. Right now, yesterday's rank is 100,000 and today's rank is 130,000. Look at the promotional web pages of the book, buy the book, and see tomorrow what it does with the rank. Remember that the rank is approximately inversely proportional to the rate at which the book sells.

Update: At 7:00 p.m., the rank was about 11,000, better than the 136,000 in the morning. On Wednesday at 8:30 a.m., the rank was 9,367, an improvement by a factor of fifteen from the rank 24 hours earlier. The promotional web pages also reveal that Betsy is proud to be the 4th Betsy found by Google. Congratulations, and I wish her to capture the most important Frank Wilczek blog award, too. ;-)

Monday, April 24, 2006

Bruce Rosen: brain imaging

Bruce Rosen started the colloquium by saying that it is useful to have two degrees - a PhD and an MD - because every time he gives a talk to physicians, he may impress them with physics, and every time he speaks in front of physicists, he may impress them with medicine. And he did. Although there are many methods to study the anatomy and physiology of the brain - such as EEG and/or flattening the brain with a hammer, which is what some of Rosen's students routinely do - Rosen considers NMR to be the epicenter of all these methods. (This is a conservative physics blog, so we still refer to these procedures as NMR and not MRI.) This bias should not be unexpected, because Rosen's advisor was Ed Purcell. Some of the results he showed were obtained by George Bush, who is an extremely smart scientist as well as a psychiatrist, besides being a good expert in B-physics.

Rosen showed a lot of pictures and video sequences revealing how brain activity depends on time in various situations, on the presence of various diseases, on age, and on precisely how the brains are being monitored. Many of these pictures were very detailed, and methods already exist to extract useful data from the pictures and videos that can't be seen by the naked eye. Human brains are being observed at 10 tesla or so, and a magnetic field of 15 tesla is the state-of-the-art environment to scan the brains of smaller animals. The frequency used in these experiments is about half a gigahertz.
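That frequency is just the proton Larmor frequency. A quick check - the field values are the ones quoted above; the gyromagnetic ratio is the standard constant:

```python
# Consistency check of the quoted NMR numbers: the proton Larmor
# frequency is f = gamma_bar * B, with gamma_bar = 42.577 MHz per tesla.
gamma_bar = 42.577e6  # Hz/T, proton gyromagnetic ratio divided by 2*pi

for B in (10.0, 15.0):   # field strengths quoted in the talk, in tesla
    f = gamma_bar * B
    print(f"B = {B:4.1f} T  ->  f = {f/1e6:6.1f} MHz")
# 10 T gives ~426 MHz and 15 T gives ~639 MHz - indeed "about half a gigahertz".
```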
Many tricks have been found to drastically reduce the required amount of drugs (contrast agents) that the subject must take before the relevant structures become visible. Most of the data comes from observations of water, which is a dominant compound in the human body - and not only the human body. It turns out that blood carrying oxygen is diamagnetic, while blood carrying carbon dioxide is paramagnetic. That simplifies the NMR analysis considerably. There's a lot of data in the field and fewer ways to draw the right conclusions and interpretations out of the data.

OVV in higher dimensions?

Brett McInnes proposes a generalization of the Hartle-Hawking approach to the vacuum selection problem pioneered by Ooguri, Vafa, and Verlinde (OVV) - described in this blog article - to higher dimensions. McInnes identifies the existence of two possible Lorentzian geometries associated with one Euclidean geometry as the key idea of the OVV paradigm. He argues that the higher-dimensional geometries must have flat compact sections, which is certainly a non-trivial and possibly incorrect statement:

Everything you wanted to know about Langlands ...

... geometric duality but you were afraid to ask could be answered in this 225-page-long paper by Edward Witten and Anton Kapustin: Previous blog articles about the Langlands program include the following ones: A semi-relevant discussion about related topics occurs at Not Even Wrong.

Translation and related news

Just a technical detail: I've added two utilities to the web pages of individual articles:

• related news and searches, powered by Google (blue box under each article)
• translations of the blog articles into German, French, and Spanish, powered by Google (three flags at the top of the articles)

I apologize to the readers from the remaining 142 countries that also visit this website - according to the Neocounter - besides the three countries indicated above, that their language has yet to be included. :-)

Recent comments

Also, "recent comments" were added to the sidebar of the main page. The recent slow comments in the lower Manhattan (skyscraper) area are sorted according to the corresponding article. You may find out which article a comment belongs to if you hover over the timestamp. You can also click it. There are also ten "recent fast comments" in a scrolling window in the upper portion of the sidebar.

Sunday, April 23, 2006

Leonard Susskind Podcast

I am just listening to a podcast with Leonard Susskind. You can find the link somewhere on this page. I will add it here later. Then you click "Podcasts" on the left side, and the second one is Susskind: the 5.57 MB file is 23:55 long. Entertaining, recommended.

Manic Miner

Manic Miner... The Manic Miner flash game was removed from the page because it was making a lot of noise. Please click the second "Manic miner". How many people used to play such things 20 years ago? Links to previous flash games on this blog can be found here.

PageRank algorithm finds physics gems

Several colleagues from Boston University and from Brookhaven have proposed a method to look for influential papers using the same algorithm that Google uses to rank web pages. This algorithm uses the list of web pages (or papers) and the links between them (or citations) as input. The web pages or papers are nodes of a graph and the citations are oriented links. It works as follows: You have lots of "random walkers". Each of them sits at some paper XY.
In each step, each random walker either jumps to a random paper in the full database, with probability "d", or it jumps to a random paper mentioned in the references of its previous paper XY, with probability "1-d". Once the number of random walkers associated with each paper reaches (approximate) equilibrium, the algorithm terminates. The number of walkers at each paper gives you the rank.

Illinois: Particle Accelerator Day

Illinois' governor has declared this Saturday (or Friday?) to be Particle Accelerator Day, and everyone must celebrate. Mr. Blagojevich is trying to attract the future linear ILC collider to his state. Congratulations, Argonne and Fermilab. Some of the ILC-related efforts of these two facilities are illuminated here. Meanwhile, on the same day, the celebrations of Earth Day, invented by John McConnell, dominate in Massachusetts. Those who are already fed up with the Earth - and with Google Earth - may try Google Mars. Via JoAnne.

Detlev Buchholz: algebraic quantum field theory

Evolving proton-electron mass ratio?

Update: In 2008, a new experiment with ammonia found no time-dependence over the last 6 billion years.

Klaus Lange has pointed out a story describing a Dutch experiment performed primarily at the European Southern Observatory - hold your breath, this observatory is located in Chile. They measured the spectrum of molecular hydrogen, which depends on the proton-electron mass ratio "mu". Note that this ratio is about 1836.15. Twenty years ago I played with the calculator, and it turned out that this number can be written as

• 6.pi^5 = 1836.12.

This agreement promoted me to the king of all crackpots: with only three characters - six, pi, five - I could match around 5 or 6 significant figures of the correct result. My calculator only had 8 significant figures (with no hidden figures), and the result agreed with the value of the ratio written in the mathematical tables of that time. Later I learned that someone else had actually published this "discovery" fifty years earlier, and the agreement got worse with better calculators and better measurements in particle physics.

More seriously, the Dutchmen now claim that the ratio was 1.00002 times higher twelve billion years ago. The New Scientist immediately speculates that this could prove extra dimensions or string theory. I, for one, have absolutely no idea where this statement comes from. I personally believe that these constants have been constant for the last 12 billion years - and moreover, this opinion is completely and naturally compatible with string theory.

George Bush meets Prof. Albert Einstein

As soon as Lee Smolin asked the question

Meeting a robot in 1999

Thursday, April 20, 2006

Jefferson Physics Laboratory becomes a historic site

Jefferson Physical Laboratory, where we have offices, has been declared a historic site by the American Physical Society, mostly because it is the first building ever built in the U.S. for physics research.

Figure 1: The picture is mine

See the letter that President Lawrence Summers and the department chair John Huth received here. The picture above is from 2002, but you can already see the new attic, which is pretty these days. Recall that it is exactly this Jefferson tower where the first gravitational red shift experiment was done by Pound, Rebka, and Snider in the early 1960s. Its 22.6 meters were enough to measure the 4.92 x 10^{-15} relative change of the frequency of 14.4 keV gamma rays from iron-57.
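Since the post invites the reader to verify the numbers, here is the one-line check. It uses only standard constants; the factor of two appears because the experiment compared photons going up with photons going down:

```python
# Verifying the Pound-Rebka numbers quoted above. The one-way gravitational
# shift over a height h is delta_nu/nu = g*h/c^2; comparing rising and
# falling photons doubles the effect.
g, c, h = 9.81, 2.998e8, 22.6   # SI units; h = height of the Jefferson tower

one_way = g * h / c**2
print(f"one-way shift     : {one_way:.3e}")     # ~2.47e-15
print(f"up-down difference: {2*one_way:.3e}")   # ~4.93e-15, the quoted 4.92e-15
```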
The prediction of general relativity - a frequency shift by the factor "(1+gh/c^2)", as checked in the sketch above - was confirmed with a 1% accuracy.

Integrability: giant magnons

Diego Hofman and Juan Maldacena - these two physicists should not be confused with Diego Maradona - study the

• excitations of N=4 gauge theory in d=4 in the planar limit.

Recall that according to the gauge-gravity holographic correspondence, the strong coupling limit describes type IIB string theory on the product space "AdS5 x S5". A few years ago, Berenstein, Maldacena, and Nastase showed that the gauge theory is equivalent not just to pure supergravity but to the full string theory; they identified the strings with long traces. This research direction has been transformed into the studies of integrability and spin chains (these are the discretized strings), and we have talked about this topic at various places, for example here.

This spin chain itself carries excitations, and the most important ones are called magnons: a magnon is an excitation that flips the direction of a single spin (or "magnetic moment", if you wish) in the spin chain and propagates as a wave along the chain. In the planar limit, i.e. up to the leading terms in the "1/N" expansion, physics should simplify. Many people have believed for some time that a full exact solution of string theory in this limit should exist. This task is equivalent to a full understanding of the worldsheet of a string propagating in the "AdS5 x S5" background for the simplest choice of its topology. In the variables mentioned above, the question is reduced to the spectrum, the dispersion relations, and the S-matrix of the magnons. Effectively, one needs to study the S-matrix for various polarizations and encounters a "256 x 256" matrix. Its form was recently fixed by Niklas Beisert, up to an overall normalization. Moreover, one month ago, Romuald Janik of Poland showed how the crossing symmetry emerges from the formulae for the S-matrix. Hofman and Maldacena confirm the results but add something extremely interesting: the adjective "giant". In analogy with giant gravitons, you may suspect that there will be a new picture that replaces the original point-like magnon excitations by something big.

Harvard Crimson: environmentalism is dead

The Harvard Crimson has looked at environmentalism with critical and bright Harvard eyes and concluded that environmentalism, as an ideology, is dead - and that we have a chance to enter a new era in which the environment itself, not an ideology, is the winner. Piotr Brzezinski '07, a member of the Resource Efficiency Program, argues that all the dire predictions have so far been falsified, and that if our care about the environment is supposed to impact reality in the future, those who care must abandon some methods - such as the authoritarian Soviet style manifested in the Kyoto protocol - open some taboos for debate, and start publishing realistic appraisals of reality even if they lead to less exciting headlines in the newspapers.

Wednesday, April 19, 2006

Michele Papucci: neutrino optics

Michele Papucci from Berkeley gave a talk about neutrino optics. There will be a preprint about it with Gilad Perez, Hitoshi Murayama, and one more author whose name will be completed here if necessary. When we test cosmological models, we rely on the regular optics of photons. Are there other "eyes" we could use?
They must be weakly interacting, so the only possibilities are

• gravitational waves
• neutrinos

Michele only focused on the latter, the neutrinos. More precisely, it is the electron antineutrinos he is interested in. They are produced by supernovae (yes, there is some neutrino oscillation physics you must take into account); on the other hand, the Sun only creates neutrinos - not antineutrinos - so the solar neutrino background does not affect the proposed experiments. You can't really measure the direction from which they come (pretty fuzzy optics), because the particles created in the inverse beta decay have momenta that are virtually uncorrelated with the antineutrinos' momenta. So the only thing you can measure is the distribution of energies.

BBC climate software confuses 200,000 computers

This story is a good example of how the climate models work in the most optimistic case. The idle time of most PCs is wasted. About 30,000 people like me run software such as MPRIME - the worldwide search for the largest known prime numbers. It is a well-defined activity, and there are very good reasons to trust this software. Actually, there exist other programs built on the BOINC platform, and some of them can be found in this list: However, some people don't like things like LHC@home too much. Instead, they want to save the world and help humanity. So they download the third program in the list above, namely climateprediction.net. You can also join the group of 200,000 enthusiasts, the saviors of the planet, if you click the link above and continue with "Taking part in CPDN". This community will calculate the date of the armageddon. ;-) But wait a minute. The Reference Frame has been saying that the existing climate models are not trustworthy and that those who run them often fail to respect basic principles of science.

Ignore bloggers at your peril

Clifford Johnson has pointed out an article in the Guardian. The article discusses some kind of research about the influence of bloggers. It mentions three companies that were affected by bloggers: the bloggers described the physics of Kryptonite locks, McDonald's abracadabra, and Dell, whose last CEO was possibly fired. Well, the visitor data indicates that very different segments of society are being influenced. For example, many people were looking for Angela Merkel semi-naked today. And of course, people are still interested in Mary Winkler as well as a potential massive nuclear strike. More demanding readers look for "physics blogs uncertainty" as well as the sad story of John Brodie, the physicist.

Tuesday, April 18, 2006

Cosmological breaking of SUSY and the seesaw

Tonight, Michael McGuigan made a new step in his attempt to make the seesaw mechanism for the cosmological constant realistic. The paper combines McGuigan's previous work - which we discussed here and which was mostly based on this blog article and/or the comments of this article by Sean Carroll - with the brave proposal of my (former) adviser Tom Banks. Recall that Tom has proposed to interpret the cosmological constant - the curvature of empty space - as the primary effect, and the supersymmetry breaking in particle physics as its consequence. This changes the question from "why is the cosmological constant so small" to the question "why is the supersymmetry breaking in particle physics so strong".
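To get a feeling for why the strength of SUSY breaking is the hard part, here is a back-of-envelope sketch in reduced Planck units. The dimensional-analysis relation "m/M_P = (Lambda/M_P^4)^p" is my own gloss on the logic, not Banks' precise formula; the exponents p = 1/4 and p = 1/8 are discussed right below.

```python
# Back-of-envelope scales for a SUSY-breaking mass tied to the cosmological
# constant by a power law m/M_P = (Lambda/M_P^4)^p (illustrative only).
M_P = 2.4e18    # reduced Planck mass in GeV
lam = 1e-120    # observed vacuum energy in units of M_P^4 (order of magnitude)

for p, label in [(1/4, "classical exponent 1/4"), (1/8, "corrected exponent 1/8")]:
    m = M_P * lam**p
    print(f"{label}: m ~ {m:.1e} GeV")
# 1/4 gives ~2e-12 GeV (a milli-eV - hopelessly small), while 1/8 gives
# a few TeV - the ballpark where SUSY breaking is actually expected.
```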
The supersymmetry breaking induced by the tiny curvature of our Universe would normally be negligible, and Tom circumvents this problem by suggesting that an important exponent in his power law is corrected from the classical value of 1/4 to the value of 1/8 by huge effects of virtual black holes whose loops are localized near the de Sitter horizon. The relation with the seesaw mechanism is not quite clear to me - although both methods of course try to obtain the same kind of result for the vacuum energy (but via different effects, I think). Right now I don't have enough time to tell you exactly what I think about the proposal, but the paper is rather concrete and tries to apply the Wheeler-DeWitt equation to various string-theoretical backgrounds. He seems to show that the off-diagonal elements of the vacuum energy (transitions) exist in three spacetime dimensions or fewer. Can you obtain these off-diagonal elements from Coleman-De Luccia-like instantons? I believe that the proposal is interesting enough to be looked at.

Incidentally, Apple finally offers Mac users a decent operating system. It is called Windows XP.

Monday, April 17, 2006

Stanislav Petrov supersedes Easter Bunny and Jesus Christ

During the pagan era, people would celebrate Easter as the holidays of spring, fertility, and the Easter bunny. The Christians cleverly overwrote this special season with the anniversary of the resurrection of Jesus Christ, our savior. However, things changed again in 2006. The liberal blogosphere, including Cosmic Variance and In Search of 42, among hundreds of other blogs, has replaced the Easter bunny and Jesus Christ with a Soviet military officer: Easter has become Stanislav Petrov Day. It is not exactly clear why the Easter season was chosen.

Well, Stanislav Petrov (*1939) saved the world on September 26th, 1983. He realized that the Soviet computer system was crappy - because it was a technology developed in a left-wing political system - and discarded the warning of his computers that American missiles were approaching the Soviet targets. By having failed to inform his superiors, he arguably saved half a billion lives. ;-) The details remained secret until 1998. However, the rough story was not. I remember that on Monday, September 26th, 1983, when I was in the 4th grade, during Andropov's era, we were just playing volleyball in the gym or something like that when the school radio announced that the international situation had deteriorated and a conflict was imminent. We never learned anything beyond this single message in the school radio, and the worries faded away completely.

Today, Petrov lives in relative poverty as a Russian pensioner. A San Francisco peace organization named him the new savior of the world (only one of his two predecessors enjoys the same honor; don't confuse the honor with the true savior of modern music) and awarded him a breathtaking amount of $1,000. Congratulations. If someone wants to send him more money, let me know.

Back to 2006

But we live in 2006, and the main target right now is not Moscow but Tehran. Professor James Miller, who is a game theory expert and a candidate for the president of Harvard - one who vows to defeat feminism - has offered a smooth scenario for how the U.S. attacks on Iran will be started and justified.
The Israeli prime minister will inform Bush that Israel is threatened and will have to nuke Iran unless the nuclear program of the crazy mullahs is stopped. Because Iran wants to wipe Israel off the map, Israel has a kind of moral right to make such an announcement. The U.S. weapons are much stronger and cleaner than the Israeli weapons. By using both types against Iran, Bush will save not only Israel but also millions of Iranian lives that would otherwise be lost to the dirty Israeli nukes. Next year, Easter Bunny, Jesus Christ, and Stanislav Petrov will be replaced by George Bush (and James Miller), the new savior.

Mahmoud is probably a nail

Meanwhile, it's been announced that Mahmoud Ahmadinejad is probably a nail. What kind of nail? He is a nail of the Hidden Imam, who is secretly the Sovereign of the World and who has been hiding since 941. :-) See here. Mahmoud received the presidency from the Hidden Imam for promising to provoke a clash of civilizations. Mahmoud realizes that the U.S. is the last infidel country whose military is not impotent, and Mahmoud, supported by God, will defeat the U.S. in a long asymmetric war. But he will wait until 2008, when Bush is out of office, because Bush is clearly an aberration - everyone else since Truman would run away. A divine anthropic coincidence puts the triumph of the Iranian Manhattan project, secretly pursued by Imam Hossein Nuclear University, in the same year 2008, Mahmoud argues. Wow. These people are real nutcases, which is not a good combination with the advanced P-2 centrifuges that, according to the New York Times, are suddenly being developed again in Iran. Mahmoud has just given Hamas the same amount that Harvard has pledged for the feminist programs: 50 million dollars. Finally, Reuel Gerecht from AEI asks the question: The U.S. and the U.K. have already been rehearsing an occupation, set in 2015, of a fictitious Middle East country called "Korona" whose territory happens to coincide with Iran and whose citizens are Iranians. Well, obviously, there is some dynamics on both sides.

Saturday, April 15, 2006

Carlo Rovelli and graviton propagator

Several readers have asked me what I think about a new paper in loop quantum gravity, an attempt described as a groundbreaking paper by a fellow blogger and included in the unfinished revolution by another blogger. It would be far too dramatic to say that I am flabbergasted, but one thing is clear. The work is so manifestly incorrect that I just can't fully comprehend how someone who has attended at least one quantum field theory course can fail to see it. But of course, yes, I am happy that people are still trying different things and that some of them don't get discouraged by decades of failure - and I always open such papers with an enthusiastic hope that a new breakthrough will appear in front of my eyes. ;-)

The paper linked above is supposed to be a more complete version of Rovelli's previous graviton propagator paper. Indeed, you can see that several pages of these two papers are identical. Most of these two papers' assumptions are misguided, nearly all the nontrivial steps are erroneous, and the results are incorrect, too.

Semiclassical GR

Let us start with semiclassical gravity. At this level, the graviton propagator is philosophically analogous to the propagators of all other quantum fields you can think of - for example the electromagnetic field.
You must start with a background; the simplest background is the flat Minkowski space. This means that you write the full metric as

• g_{mn} = eta_{mn} + sqrt(Gnewton) h_{mn}

Here, eta_{mn} is the background, i.e. a classical vacuum expectation value of the quantum field, while h_{mn} is the fluctuation around this background that remains a quantum field and is treated as a set of small numbers. The full gravitational action can be expanded in "h_{mn}" to get a quadratic (free) action - whose inversion defines the graviton propagator - plus cubic and higher interaction vertices suppressed by increasing powers of "sqrt(Gnewton)".

Happy Easter

Something analogous to annihilating letters, jumping frog, shooting frog, and stained glass. Click here for Easter eggs in full screen.

Cuba vs. Czechia 1:1

Meanwhile, Cuba has expelled a Czech diplomat, Mr. Stanislav Kázecký, for spying on behalf of the U.S. - which is most likely not true. The Czech Republic has followed all the decent traditions and refused to extend the visa of a Cuban diplomat, too. :-) While the Czechoslovak Socialist Republic was one of Cuba's closest friends, the Czech Republic is its #2 foe. A large portion of the U.N. resolutions criticizing the situation in Cuba, as well as of the trade restrictions for the European Union members, have been proposed by the Czech Republic. There have been many recent incidents between the two countries.

For example, a countrymate of mine, Helena Houdová, is a psychologist. (In fact, she is my citymate, from Pilsen.) She is a former Miss Czech Republic 1999 and the Dean's World hero of the week. In January 2006, she decided to take pictures of the Cuban slums, something that Fidel Castro pretends do not exist. She was immediately arrested (together with her friend, Mariana Kroftová, who is also a model) - for taking the pictures - and the commies confiscated her film. As you can imagine, those communist morons can't really compete with a modern capitalist young woman from the Czech Republic and her state-of-the-art technologies. She stored the memory card from her digital camera in her bra. Today, she is showing the alarming pictures of the "island of freedom" all around the world.

Cuba has canceled various celebrations of the Czech national holidays and expelled or temporarily arrested many Czech citizens - the aristocrat Schwarzenberg and the politician Ivan Pilip (with his friend Filip Bubeník) are the two best-known examples. You can try to liberate Pilip by shooting 50 Cuban agents here.

Friday, April 14, 2006

La Griffe du Lion: prison ratios

La Griffe du Lion has a new technical analysis of a sociological issue. He asks: His answer is based on mathematics that is more or less equivalent to his previous analysis of women in science. The conservative states impose a lower threshold for arrest - they tolerate less crime. This makes the population behind bars less selective. Because the black crime Gaussian is broader and higher than the white one, in the same way as the male math-aptitude Gaussian is broader and higher than the female one, smaller selectivity translates into a less dramatic ratio between the black and white percentages. It is therefore logical and inevitable that the racial disparity is more striking in the left-wing states. The identity of La Griffe du Lion remains a mystery to us.

Is George W. Bush a feminist?

David Goss has sent me an insightful column that starts with the announcement that the Bush administration is going to investigate universities with fewer women in math and science than feminists such as Barbara Boxer would like.
Schlafly notices that even though Bush has been the president for more than five years, Bill Clinton's feminist policies are apparently still in force. She asks: Is Bush a feminist or just a gentleman who is intimidated by the feminists? At the practical level of policies, there is no real difference between the two answers. 171 wrestling teams have already been intentionally destroyed by these dumb policies, and math and science may follow. Schlafly explains how this mindless feminist mentality, based on a striking misunderstanding of the differences between men and women, can have a devastating effect on universities and beyond. There is of course not a shred of evidence of any discrimination, she writes: men are simply more interested in competitive sports, math, and science. Moreover, when it comes to muscle growth, testosterone is the key to success. After having explained how unreasonable the feminist approach is, she says that the Bush administration is ignoring one example of increasing gender disparity that can indeed have bad consequences: the decreasing percentage of male schoolteachers.

With all my respect for George W. Bush, let me offer an obvious answer to Schlafly's basic question. Yes, Bush is a feminist, and he in fact does think that women are brighter in many respects, including science and math - and most discussions he has with the First Lady must reinforce this belief. ;-)

Bert Schroer vs. path integral

Prof. Bert Schroer has publicized his essay in which he argues that there is something wrong with path integrals and that they should be universally replaced by algebraic methods. Because half of the Internet is going to decide that he must be right, at least in some sense, let me also post the correct answers to his doubts - which include the trivial assertion that his statements are nonsensical. The first couple of pages are filled with content-free bitterness about path integrals and an unsubstantiated promotion of algebraic quantum field theory: the kind of silly, unphysical whining that all of us know very well from "Not Even Wrong" and other places on the Internet. The author is upset about the "string theory caravan" that does not support "great" ideas - such as the "great" idea of Prof. Schroer himself that path integrals are bad.

The first non-trivial statement appears on page 3. Prof. Schroer essentially claims that the path integrals give a wrong result if you use them to describe a spinning top. The critical sentence is the following:

• The paradoxical situation consists in the fact that although the higher fluctuation terms (higher perturbations) are nonvanishing, one must ignore them in order to arrive at the rigorous result.

Wow. The path integral fails at higher orders, he says. Of course, this statement is complete nonsense. Path integrals are better, not worse, at computing loop effects, especially if one has to deal with non-Abelian gauge symmetries. By introducing the Faddeev-Popov ghosts, one obtains the best available formalism for calculating higher-order effects in such theories. Moreover, the path integral is also the superior approach for obtaining non-perturbative corrections such as instanton corrections. Path integrals also make the Lorentz symmetry of quantum field theories manifest, and they have other advantages, too.

Thursday, April 13, 2006

Google calendar

A new service by Google is Google Calendar. You need to have a Google account - for example a Gmail account.
With Gmail, you may also incorporate the calendar - with the list of things you have to do - into the corner of your Gmail inbox. The interface is based on a traditionally fresh, Google-like, no-nonsense environment. See the Calendar help for more details. Incidentally, you will also be able to make Google searches using your voice and telephone:

Flux compactifications of M-theory and F-theory

Today we had an oral exam, some minor progress in the calculations of the black hole corrections, and I attended Cumrun Vafa's class, which is always a good opportunity to refresh one's knowledge of various things. He started with the Dijkgraaf-Vafa correspondence and finished with flux compactifications. I will write comments about Dijkgraaf-Vafa later, but let me start with the following:

Flux compactifications

As the Becker sisters explained, the compactification of M-theory on Calabi-Yau four-folds (which are eight-real-dimensional, leaving three large spacetime dimensions) actually requires nonzero values of the four-form field strength G4. It is because the eleven-dimensional action contains terms of the form

• S = int C3 /\ ( G4 /\ G4 - I8(R) ) + ...

The first term is a tree-level Chern-Simons term needed for the classical supersymmetry of eleven-dimensional supergravity, while the second term, depending on the Riemann tensor R, may be viewed as a one-loop correction. Note that one-loop terms are often determined independently of the UV details of physics, and M-theory is no exception.

Wednesday, April 12, 2006

Richard Lindzen: Climate of Fear

Prof. Richard Lindzen of MIT is one of the world's most respected climate scientists or, if you at least allow me to use the alarmists' words, he is considered by them to be the world's most respectable climate skeptic. See also: Lindzen 2008: Climate Science: Is it currently designed to answer questions? Today, in his Wall Street Journal article, he describes not only the reasons why the public should not believe the statements that carbon dioxide emissions are bringing us closer to the armageddon, but especially the intense intimidation campaign faced by scientists who reach politically incorrect conclusions. One of the topics Lindzen talks about is the double standard in the journals, where non-alarmist articles about the climate are commonly refused without review as being without interest. I have already learned how it works, which is why I recommended Steve McIntyre not to spend too much time trying to get his articles published in "mainstream" journals. But the main focus of Lindzen's discussion seems to be funding. Funding is something that is cut for all of those who indicate the obvious - namely that science offers no justification for bizarre policies such as the Kyoto protocol.

Harvard energy initiative

On Monday, we had a faculty lunch meeting at the Faculty Club, and one of the topics was the so-called "Harvard energy initiative". The short story is that a large amount of money was given to something described by these three words - and up to 10 new faculty positions are expected to be created - except that no one knows what the "Harvard energy initiative" means and which people should be hired.

Tuesday, April 11, 2006

Microsoft: competition for Google Scholar

Everyone knows a search engine that is used for more than 50 percent of the searches in the world.
Many of us find another service comparably priceless: Google Scholar. That's a place where you can search through the full text of scientific articles in all fields you can imagine, and get the results sorted according to relevance - a criterion that includes the number of citations. In the 1980s, IBM was a very important company in the computer industry, but Microsoft took over. Is Google going to make Microsoft obsolete in a similar way? What will be the result of the Microsoft vs. Google competition? Well, the guys at Microsoft seem to be smarter than those at IBM 20 years ago, and they don't want to give up. The counterpart of Google Scholar will be Windows Live Academic. It became available tonight before 9 p.m. Eastern time - but the website so far fails to give any scholarly results. Instead, I get the standard search results. Also, the arXiv seems to be absent from their list of journals - the only journal with "arxi" in it is "Rethinking Marxism".

Monday, April 10, 2006

Elizabeth Lada: stars born in clusters

Elizabeth Lada from the University of Florida is an astronomer who is most famous for defending the statement that most stars are born in clusters. This statement has brought closer together two communities that studied star formation - those who only wanted the overall rate and those who investigated individual cases microscopically. It was interesting to see a nice colloquium from an adjacent field - a field whose conclusions are slightly less theoretical, quantitative, universal, and principled than ours, but one that can offer nicer pictures. Some of the main messages of the talk are the following:

CDF excitement - press conference

This is what good P.R. looks like at Fermilab. ;-) Or is it more than good P.R.?

From: June Matthews

We've received advance word on some exciting new results from the CDF experiment at Fermilab, where Christoph Paus heads up the MIT effort. This is the wording: Fermilab will hold a press conference at 4:00 pm today (Central Time) with details on the precision measurement of extremely rapid transitions between matter and antimatter. It has been known for 50 years that very special species of subatomic particles can make spontaneous transitions between matter and antimatter. In this exciting new result, CDF physicists measured the rate of these matter-antimatter transitions for the B sub s meson, which consists of the heavy bottom quark bound by the strong nuclear interaction to a strange anti-quark - a staggering rate that challenges the imagination: 200 billion times per second. There will be a live feed of the press conference available on the web at:

Marc Kastner

P.S. You can click the envelope icon two lines below this one to send the announcement about the press conference to all your friends who might be interested.

The online press conference started at 5:00 p.m. Eastern Daylight Saving Time, or 2:00 p.m. Californian time, and ended one hour later. Main content: they determined, at the 99.5% confidence level, that they have seen oscillations between matter and antimatter with a frequency of 17.33 plus minus something inverse picoseconds. The results are consistent with the Standard Model and place new upper bounds on the flavor violation of new physics such as supersymmetry.
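A quick unit conversion of that number may be useful - this is just hbar-arithmetic on the quoted value of 17.33 inverse picoseconds, which is the mass difference "Delta m_s" between the two B_s mass eigenstates in hbar = 1 units:

```python
# Converting the CDF result. Delta m_s = 17.33 / ps is an angular frequency;
# multiplying by hbar gives the mass splitting of the two B_s eigenstates.
hbar_eVs = 6.582e-16     # eV * s
dm = 17.33e12            # 1/s

print(f"Delta m_s           ~ {hbar_eVs * dm:.2e} eV")      # ~1.1e-2 eV
print(f"oscillation period  ~ {2*3.14159/dm*1e12:.3f} ps")  # ~0.36 ps
```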
Harper under pressure: scrap Kyoto

While many other politicians have experienced pressure from the activists, Stephen Harper, the prime minister of Canada, is under pressure from scientists who urge him to scrap the "pointless" Kyoto protocol. See the full letter and signatories here. They explain that "global climate change" is an emerging science and that the Kyoto treaty would not have been negotiated in the 1990s if the parties had known what we know today. The cliche "climate change is real" is a meaningless phrase used by the activists to fool the public into believing that a climate catastrophe is looming. Climate is changing all the time for natural reasons, and the human impact still can't be disentangled from the natural noise.

Meanwhile, Rona Ambrose has reviewed the situation and concluded that the targets can't be met by Canada: it's impossible. The Canadian economy has recently been doing very well, which is of course very bad for such anti-growth policies: the emissions are growing while, according to the protocol, they should be shrinking. I think that Canada itself should also honestly admit that if we hypothetically face warming, Canada will benefit from it. The goal should be to isolate the countries that are supposed to be the "losers" of the hypothetical warming and help them. And also to help those countries that face problems unrelated to warming, which is far more often the case. ;-) But help them not with the crazy egalitarian policies according to which the whole planet must be heated up or cooled down simultaneously - help them with rational, focused, meaningful projects. The U.N. should allow Canada to change what its contributions will be, and the whole U.N. framework for climate change should be rebuilt on new principles. See also: Kyoto hopes vanish. Rona Ambrose now intends to challenge the international focus on setting emission targets. I am sure she has enough intelligence and charm to do important things. Incidentally, the last link explains some proposed biomass projects that could actually make at least some sense, but even these things should be studied and planned rationally.

Prof. Bob Carter, a paleoclimate geologist, explains in the London Telegraph that the main problem with the global warming is that it stopped in 1998. Meanwhile, Al Gore has admitted that global warming is no longer a scientific or political issue: it is a moral or, if you will, religious issue, and Al Gore is a prophet.

Figure 1: The picture from the Boston Globe shows what the alarmists consider "balanced journalism" and fair reporting about the climate.

John Brodie - sad story

Today, the Rutland Herald offers a very sad story about John Brodie, whom many people - not only at Princeton University, Stanford University, and the Perimeter Institute - knew pretty well. John suffered from bipolar mental illness - the same disorder that Mary Winkler has been treated for - and jumped into a cold river on January 28th, 2006. Technically, his most well-known paper was his work with Amihay Hanany about brane boxes, but the paper one can't forget is the one he co-authored with Bernevig, Susskind, and Toumbas about the construction of the quantum Hall effect from D-branes. Via Not Even Wrong.

Sunday, April 09, 2006

Readying a massive (nuclear) strike on Iran

Update: Iran claims to have shot down an unmanned airplane from Iraq on Sunday. In the Czech Republic, it is the #1 news item at the major servers, but no one seems to care in the U.S.
According to the April 17th issue of The New Yorker and its investigative journalist Seymour Hersh, the White House is finalizing plans for a major air attack against selected targets in Iran. The situation has developed quite a bit during the last year. The theory behind these plans is that an attack is the only way to stop Mahmoud Ahmadinejad - a modern potential counterpart of Adolf Hitler, as the White House officials describe him in private discussions - from developing nuclear weapons and using them against Israel and, with the help of terrorists, against the whole civilized world. The attacks are meant to humiliate the Iranian religious government and to make the people overthrow it. I personally don't believe that the bombing would encourage Iranians to follow America. I did not believe similar idealistic predictions in Iraq either. The support of Hussein was clearly significant. Environmentalists who like sustainability should like the bombing campaign, because the "coercion" attacks will be "sustained".

Another theory is that Ahmadinejad sees the West as "wimps who will cave in". Some sources argue that it is a public misconception that Bush has been mostly thinking about Iraq since 9/11 - the main and more ambitious ideas were always about Iran. Even Quantoken agrees that the real danger is Iran. The White House is secretly communicating with the members of the U.S. Senate, and no one really objects to the idea of a war. There is no international opposition either, because no one really likes the regime of Iran, Hersh argues. Even ElBaradei agrees that the Iranian leaders are 100% certified nutcases. On the other hand, no other country - not even Great Britain - is going to actively support nuking.

Some plans are already underway. Some of the Iranian nuclear facilities are deep underground (25 meters), and the Pentagon believes that they will require a bunker-busting tactical nuclear weapon such as the B61-11, the "earth penetrating" thermonuclear daughter of the old B61-7 gravity bomb, developed in 1997 under Clinton. The energy from this key nuclear product is able to penetrate up to 100 meters of soil (not rock), and the bomb explodes 6 meters beneath the surface. One of the main targets is Natanz, 300 kilometers south of Tehran. This particular plan is not technologically new, because the U.S. was thinking about bombing a similar facility near Moscow in the early 1980s. Rather detailed plans already exist for how big a part of Iran's air force would have to be eliminated and what to do with the mess that would probably emerge in Iran and Southern Iraq. Controversy exists over how many places would have to be bombed and whether the nuclear option is useful. I definitely recommend reading the article.

What about the Reference Frame? I am always afraid of a war - and I am always repelled by its obvious negative consequences. On the other hand, there seems to be a rather clear danger in the air (although I can't rigorously prove it), and if this operation became necessary and remained a job for the air forces, avoiding ground battles, I would be moderately optimistic, because all such operations in the past were rather successful. Incidentally, the U.S. troops in Iran will mark the facilities with lasers to increase the accuracy of the operation and reduce the civilian casualties. Nuclear weapons have been silent for 60 years, but they're not really a hot new technology. In high school, during the first Gulf War, our classes were often cancelled and we were watching the war.
Most of the boys in our class were truly impressed. Whenever the U.S. technology edge is on display, one can always see the natural authority of America, especially if a maximum effort is made to minimize civilian casualties. The Reference Frame recommends all readers in Iran - and everyone they know - to move at least 50 kilometers away from the neighborhood of the potential targets, especially Natanz (plus the other targets enumerated in Wikipedia in the link at the bottom). We also recommend all citizens of Iran to start a revolution and establish democracy and freedom in Iran.

This blog can't guarantee that the story from The New Yorker is accurate, but there are very good reasons to think that it might be true. Hersh, the author of the article, won a Pulitzer Prize in 1970 for uncovering a massacre in Vietnam by U.S. troops, and he was also the reporter who broke the Abu Ghraib prison scandal. That's a pretty good record, I think. He likes to expose things that look anti-Bush, but whether his new article is really anti-Bush remains to be seen. The contingency planning is obviously what many people in the Pentagon are paid for, but I can tell you neither how many decisions have actually been made, nor whether such a thing could work out as smoothly as some successful operations in the past.

Other sources: The hypothetical bombing poses many dilemmas - moral, strategic, tactical, psychological, economic - and question marks, but the psychological pressure could not be that bad. The Reference Frame also recommends Mahmoud Ahmadinejad to establish democracy, give up nuclear ambitions, and resign. Such a reasonable decision could hypothetically save millions of lives.
zbMATH — the first resource for mathematics

Liouville theorems for generalized harmonic functions. (English) Zbl 1078.35020

Summary: Each nonzero solution of the stationary Schrödinger equation $\Delta u(x) - c(r)u(x) = 0$ in $\mathbb{R}^n$ with a nonnegative radial potential $c(r)$ must have certain minimal growth at infinity. If $r^2 c(r) = O(1)$ as $r \to \infty$, then a solution having power growth at infinity is a generalized harmonic polynomial.

MSC:
35B40 Asymptotic behavior of solutions of PDE
31B05 Harmonic, subharmonic, superharmonic functions (higher-dimensional)
35J10 Schrödinger operator
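For readability, the reviewed statement can also be set out in display form (a transcription of the summary above, not material added from the review):

```latex
\[
  \Delta u(x) - c(r)\,u(x) = 0 \quad \text{in } \mathbb{R}^{n},
  \qquad c(r)\ \text{radial},\ c(r) \ge 0,
\]
\[
  r^{2} c(r) = O(1) \ (r \to \infty)
  \;\Longrightarrow\;
  \text{every solution of power growth at infinity is a generalized harmonic polynomial.}
\]
```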
Nanoelectronic Modeling Lecture 20: NEGF in a Quasi-1D Formulation

By Gerhard Klimeck (1), Samarth Agarwal (2), Zhengping Jiang (2)
1. Purdue University
2. Electrical and Computer Engineering, Purdue University, West Lafayette, IN

This lecture introduces a spatial discretization scheme for the Schrödinger equation which represents a 1D heterostructure, such as a resonant tunneling diode, with spatially varying band edges and effective masses. Open boundary conditions are introduced with the Quantum Transmitting Boundary Method (QTBM). The QTBM is related to the NEGF-based self-energy treatment, and the complete list of NEGF equations that are typically solved is given. This lecture is not intended to truly teach the NEGF approach; we refer to Prof. Datta's extensive lectures on nanoHUB for a formal introduction to NEGF.

Learning Objectives:
1. Effective Mass Tight-Binding Hamiltonian in the 1D discretized Schrödinger equation
2. Quantum Transmitting Boundary Method (QTBM) Open Boundary Conditions
3. Fundamental NEGF Equations

Cite this work
Researchers should cite this work as follows:
• Gerhard Klimeck; Samarth Agarwal; Zhengping Jiang (2010), "Nanoelectronic Modeling Lecture 20: NEGF in a Quasi-1D Formulation."
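Since the abstract compresses three technical steps (discretized effective-mass Hamiltonian, open boundaries, NEGF transmission), a compact numerical sketch may help. The Python script below is illustrative only: it is not the lecture's code, and the device parameters (grid spacing, GaAs-like effective mass, two 0.3 eV barriers) are assumptions. It builds the tridiagonal effective-mass Hamiltonian, attaches analytic contact self-energies (the NEGF counterpart of the QTBM open boundaries), and evaluates the coherent transmission.

```python
# Minimal quasi-1D NEGF sketch, illustrative only (not the lecture's code).
# Assumed: 2 A grid, GaAs-like effective mass, two 0.3 eV barriers.
import numpy as np

hbar, m0, q = 1.0546e-34, 9.1094e-31, 1.6022e-19
a = 2e-10                             # grid spacing [m] (assumed)
mstar = 0.067 * m0                    # effective mass (assumed uniform)
t = hbar**2 / (2 * mstar * a**2) / q  # hopping energy [eV]

N = 60
Ec = np.zeros(N)                      # conduction band edge profile [eV]
Ec[20:25] = 0.3                       # left barrier (assumed)
Ec[35:40] = 0.3                       # right barrier (assumed)

# Tridiagonal effective-mass Hamiltonian: on-site 2t + Ec, hopping -t
H = np.diag(2*t + Ec) + np.diag(-t*np.ones(N-1), 1) + np.diag(-t*np.ones(N-1), -1)

def transmission(E):
    """Coherent transmission T(E) = Tr[Gamma_L G Gamma_R G^dag] at E [eV]."""
    def sigma(Ec_lead):
        # Semi-infinite 1D lead: E = Ec_lead + 2t(1 - cos ka) => Sigma = -t e^{ika}
        ka = np.arccos(1 - (E - Ec_lead) / (2*t) + 0j)
        return -t * np.exp(1j * ka)
    SigL = np.zeros((N, N), complex); SigL[0, 0] = sigma(Ec[0])
    SigR = np.zeros((N, N), complex); SigR[-1, -1] = sigma(Ec[-1])
    G = np.linalg.inv(E*np.eye(N) - H - SigL - SigR)   # retarded Green's function
    GamL = 1j * (SigL - SigL.conj().T)                 # broadening matrices
    GamR = 1j * (SigR - SigR.conj().T)
    return float(np.real(np.trace(GamL @ G @ GamR @ G.conj().T)))

for E in np.linspace(0.01, 0.40, 5):
    print(f"E = {E:5.3f} eV   T(E) = {transmission(E):.3e}")
```

Scanning T(E) over a fine energy grid would reveal the sharp resonance characteristic of a double-barrier resonant tunneling diode; the self-energy used here plays exactly the role the lecture assigns to the QTBM open boundary conditions.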
Is String Theory Testable?

I've been traveling in Italy for the past ten days, and gave talks in Rome and Pisa on the topic "Is String Theory Testable?". The slides from my talks are here (I'll fix a few minor things about them in a few days when I'm back in New York, including adding credits for where some of the graphics were stolen from). It seemed to me that the talks went well, with fairly large audiences and good questions. In Pisa, string theorist Massimo Porrati was there and made some extensive and quite reasonable comments afterwards, and this led to a bit of a discussion with some others in the audience.

I don't think the points I was making in the talk were particularly controversial. It was an attempt to explain, without too much editorializing, the state of the effort to connect the idea of string-based unification of gravity and particle physics with the real world. This is something that has not worked out as people had hoped, and I think it is important to acknowledge this and examine the reasons for it. In one part of the talk I go over a list of the many public claims made in recent years for some sort of "experimental tests" of string theory and explain what the problems with these are. My conclusion, as you'd expect, is that string theory is not testable in any conventional scientific use of the term.

The fundamental problem is that simple versions of the string theory unification idea, the ones often sold as "beautiful", disagree with experiment for some basic reasons. Getting around these problems requires working with much more complicated versions, which have become so complicated that the framework becomes untestable, as it can be made to agree with virtually anything one is likely to experimentally measure. This is a classic failure mode of a speculative framework: the rigid initial version doesn't agree with experiment, and making it less rigid to avoid this kills off its predictivity. Some string theorists refuse to acknowledge that this is what has happened and that this has been a failure. Most, I think, just take the point of view that the structures uncovered are so rich that they are worth continuing to investigate despite this failure, especially given the lack of successful alternative ideas about unification of particle physics and gravity. Here we get into a very different kind of argument.

It was very interesting to talk to the particle physicists in Rome and Pisa. They are facing many of the same issues as elsewhere about what sort of research directions to support, with string theory often being pursued as an almost separate subject from the rest of particle theory, leading to conflict over resources and sometimes heated debates between string theorists and the rest of the particle physics community. Many people were curious about how things were different in the US than in Europe, but I'm afraid I couldn't enlighten them a great deal, mainly because I just don't know as much about the European situation, although I've started to learn more about this on the trip. Several wondered if the phenomenon of theorists going to the press to make overhyped claims about string theory was an American phenomenon. I hadn't really noticed this, but it does seem to be true. While the hype starts in the US, it does travel to Europe, with the US very influential in this aspect of culture as in many others.
In the latest issue of the main Italian magazine about science, there's an article explaining how certain US theorists have finally figured out how to test string theory with the new LHC…

This entry was posted in Uncategorized.

47 Responses to Is String Theory Testable?

1. Levi says: This seems similar to the situation with Grand Unified Theories. I gather that SU(5) was the "beautiful" version, and when that version ran into problems much of the beauty went out of GUTs. It's interesting to contrast this with cosmic inflation, where Guth's original version didn't quite work, but Linde and others found forms of inflation which worked better, and WMAP data gives a reality check. I should mention that I'm not a physicist, just a casual reader, so if I'm misinformed I hope somebody will point it out.

2. Arun says: It would be nice to know what Porrati said, if at all possible.

3. Joseph Smidt says: Great post. I thought your comments on US/Europe string culture were interesting. Thanks for the slides.

4. Irish physicist says: Off-topic – but congratulations on 3 years of blogging, and Happy St. Patrick's Day too!

5. woit says: Irish Physicist, Thanks! I hadn't realized that the blog was started on a St. Patrick's Day. Surely some sort of homage to the Irish was unconsciously intended. I can't recall exactly what Porrati's points were, except that he said that he had five of them, and none of them were things that I really had a substantive disagreement with. Some of them were (from memory, and in loose translation; surely he would express these differently):
1. String theory shouldn't be thought of as a theory that leads to a unique, predictive model, but instead as a very general framework, like QFT, valuable for the different kinds of models it allows.
2. He mentioned the "swampland" idea, that one could try and characterize those low energy theories that come from an ultraviolet completion like string theory.
3. His main point I think was that as long as there was no alternative way to unify particle theory with quantum gravity, string theory would continue to be a main focus for people to pursue. Kind of the "only game in town" argument.
4. He may also have mentioned the use of string theory in heavy-ion physics, in regimes where lattice gauge theory has trouble providing results.
I guess I'm missing at least one…

6. anon. says: Porrati's 1st point is, with all due respect, exactly the argument that defended the use of epicycles by both Ptolemy and Copernicus: it seemed to be a very useful framework of ideas. (Ptolemy used epicycles in the earth-centred universe, c. 150 AD. In 1543, Copernicus used epicycles in his final model of the solar system.) As a 'general framework of ideas', the false theory of epicycles was invaluable to Ptolemy, Copernicus and generations of physicists. But that useful approximate framework was really false, as Kepler eventually discovered. So in the end both the earth-centred universe and its general framework of ideas were discredited. Will the string theory framework of ideas similarly mislead generations? What is so interesting is that it seems to be disconnected from reality not just with regard to its failure to make testable predictions, but also at the input end. Instead of having solid input, everything which has been put into string theory is completely speculative. It is less testable than either of the epicycle theories, and has less solid evidence.
People now laugh at the idea that a theory was once constructed in which the stars and planets were carried around the earth while embedded in closed crystalline shells. At least that false model was an attempt to interpret data. Perhaps people will cry with pity in the future, reading how physicists defended 10/11 dimensional M-theory in the 21st century, without providing any evidence at all.

7. Chris W. says: The pre-Copernican astronomers could be excused on the basis of epistemological naivete; their successors largely invented the understanding of science that is now being invoked in discussions of string theory. String theorists can't be so excused. They should have known better, and should know better now. Certainly 'general frameworks of ideas' are important; they set the context for formulating problems. This is why metaphysics is important in science, even though most metaphysics ultimately proves worthless. The questions that must be asked now with respect to quantum gravity and unification concern the problem formulation. (Shiing-Shen Chern, who discussed the matter with Einstein in the 1940s, recognized this as the essential work of the physicist.) The alternatives to string theory in quantum gravity challenge the received wisdom in this regard, and for this reason alone are important. In this context Porrati's main point (as stated by Peter, and echoed by many of Porrati's colleagues) strikes me as a complete crock. The string theorists who adopt this attitude are the least likely to arrive at the crucial insights into the problem. One can hope they'll at least have the good sense and simple honesty to recognize those insights when they appear, although I'm less and less optimistic about that.

8. Vijay Shankar says: There seem to be many differences in opinion, unfortunately based on nationality. What people would want physicists to come up with is a theory that holds in all frames or an experimental method that would help us test all the theories. Until then, we can't stop someone crying foul whenever there is news about 'revolutionary' theories.

9. tomj says: My question is: what is the difference between no theory and one which cannot be tested? I cannot figure out why string theory is a theory. It barely ranks as a hypothesis, and a poor one, very close to what my teenager would come up with. It is 100% mental. A theory, at minimum, should cover all the facts known, but as Einstein once said (something like): a theory should be as simple as possible, _but_ no simpler. The implication is that there has to be a careful balance, and the theory _must_ track data. How else could the complexity of a theory be measured? Yes, you can predict new facts, but first you have to account for known facts. We have to start with the abilities of the observer. And the first ability is that of objectivity, and objectivity begins with the repudiation of belief. If a theory cannot be any simpler than necessary, how … really … how can a theory be more complex than necessary? If oversimplification is a sin, complexity is beyond sin. A 'theory' (or set of words and math) which can 'explain' everything 'after the fact' is useless. Can someone please explain to me this: do physicists really believe that it is possible to formulate a complete description of the universe which will be testable? Because one possible reality is that we are incapable of this. We have thousands of years of data to suggest this conclusion, and only wishful thinking to suggest otherwise.
I like the name of the book; it is important to echo prior thinking. But it might have been even more valid to call it 'Beyond Reason'. Everyone seems to think that they have reason, that they think logically. And as long as we can avoid testing our reason and logic, we can continue to 'think' and 'believe' whatever we want. And if we become dogmatic in these untested beliefs, what is this? Science is not belief. Science is experiment. And experiment is based upon question, the antithesis of belief. Science is not an answer, science is a method.

10. Ptolemy says: 'I cannot figure out why string theory is a theory.' – tomj
Gerard 't Hooft: 'Actually, I would not even be prepared to call string theory a "theory" – rather a model or not even that: just a hunch. After all, a theory should come together with instructions on how to deal with it to identify the things one wishes to describe, in our case the elementary particles, and one should, at least in principle, be able to formulate the rules for calculating the properties of these particles, and how to make new predictions for them. Imagine that I give you a chair, while explaining that the legs are still missing, and that the seat, back and armrest will perhaps be delivered soon; whatever I did give you, can I still call it a chair?'
Peter Woit's argument of why a non-predictive framework is not science can be found on p211 of Not Even Wrong (UK ed.): 'An explanation that allows one to predict successfully in detail what will happen when one goes out and performs a feasible experiment that has never been done before is the sort of explanation that most clearly can be labelled ‘scientific'. Explanations that are grounded in … systems of belief and which cannot be used to predict what will happen are the sort of thing that clearly does not deserve this label. This is also true of … wishful thinking or ideology, where the source of belief … is something other than rational thought.'

11. r hofmann says: There are many examples of theoretical physicists working in the US (foreigners and US citizens) who do extraordinarily good work but in the short run are overshadowed by those that produce overhyped newspaper headlines. My general impression is that the US culture supports going to extremes in generating scientific opinion, publicizing of 'results', and network formation. This may be helpful in projects where a focus of resources is needed (COBE, WMAP, …). On the theoretical side, however, it may at times just produce entropy, a lack of well-fermented originality, and thus no gain in robust knowledge.

12. Stacy says: A note on inflation, inspired by Levi's comment: Actually, the situation with Inflation is quite analogous to that with string theory. The original idea was beautiful, and made a simple prediction (the universe should be flat) which solved the coincidence problem (to do with the evolution of the density of the universe). These together were compelling and propelled the theory to the dominance it enjoys today. But it did suffer problems (like a graceful exit from inflating) which have not entirely been solved. Worse, the compelling aspect of the flatness prediction – confirmed by the WMAP satellite – was that the density parameter should be unity – all in mass – in order to solve the coincidence problem. But it isn't all in mass – we now have to invoke dark energy. This makes the coincidence problem worse.
In other words, the compelling part of Inflation that led us all to believe it not only doesn't work, but has made worse the problem it originally seemed to solve. I can't help wondering if future generations of sociologists will debate whether speculative theories like string theory and Inflation were ever distinguishable from some sort of mathematically motivated religion.

13. matteoeo says: I agree with Stacy; I think cosmology suffers just the same problems as string theory. Cosmologists can produce potentials that would suit any possible dynamics of inflation and produce the desired spectrum of cosmic background radiation, without actually deriving them from the properties of the known QFT particles. What's worse, cosmology at the moment is a melting pot of the most unscientific theories and hypotheses in town: dark energy, cosmological constant, strings and GUTs (early universe), inflation, Higgs boson, quintessence, supersymmetry. In cosmology it seems one could just say whatever he wants without too much care about established scientific facts. I was impressed once reading some articles that showed that accelerated expansion of the universe could be explained without any reference to cosmological constant and dark energy, but just owing to some very peculiar relativistic effect (I can give references if any of you is interested). The point is: before inventing theories about the universe, shouldn't we study general relativity a lot better? And, before unifying gravity and the quantum, shouldn't we try to understand the basis of QFT and the geometrical structure of QM, and the very profound implications of GR itself?

14. woit says: Please, cosmology is off-topic. I'm not a cosmologist and don't want to moderate discussions about cosmology.

15. Alex Nichols says: I don't think the epicycles analogy is correct. That's an example of an incorrect theory that was disproved by subsequent observation, rather like the ether theory. The suggestion being made is that string theory is incapable of falsification because it can't be tested. Possibly true, but there are compelling reasons for believing that extended entities that fluctuate are the only possible basis for observable space-time. This could include strings, loop quantum gravity, spin foams, spin networks etc… Were the Higgs boson found by the LHC to be a fundamental particle, all of these would be disproved. But aren't the problems of falsifiability at high energy (Planck or horizon size) equally true for all the other theories? Perhaps all the effort shouldn't be going into one avenue of research. When it comes to funding though, governments may simply decide that we need more effort in applied physics, such as energy production.

16. Alex Nichols says: BTW, could this finding have any heterotic implications? :-

17. The problem with particle physics, if it is a problem, is that we don't have any new particles, and the very good theory we have for those particles looks pretty much like a kludge - all those undefined parameters hovering there like epicycles - which were very highly predictive, by the way. String theory is a heroic attempt to go beyond the SM, but so far hasn't proven predictive in a confirmable sense. My guess is that we might be stuck without more input from the Universe, which is why everybody is pinning their hopes on the LHC.
Maybe it will provide some clue that makes it possible to turn ST into a predictive theory, maybe it will make it more unlikely that ST has any reality, and maybe it will be mute on ST and other subjects. Only the last would be a bad outcome.

18. off topic says: sorry to be off-topic, but let me point out that simplest inflation predicts Omega_total = 1: it is what we measure, and the fact that the total involves some components we don't understand has nothing to do with inflation. Simplest inflation models also naturally produce a spectrum of scalar adiabatic Gaussian cosmological perturbations with spectral index n_s = 1 ± 1/60. Each word has a precise meaning, and it agrees with data. (The deviation of n_s from 1 is not yet safely seen.) People tried and try to invent alternatives to inflation, but it is not easy, because inflation turned out to be good, successful physics. For example, alternative models based on "simple string cosmologies" suggested wrong kinds of perturbations (isoentropic, n_s not close to 1, etc.), and a significant amount of additional complications seems needed to get what inflation naturally does.

19. matteoeo says: I'm sorry I went off topic and I will refrain from writing again, but nevertheless I think it's interesting to see how the scientific method has been mistreated and pseudo-scientific claims are made in almost any field of natural sciences and humanistic "sciences". Or do you think that this bad string theory story is just an occasional mistake soon to be corrected? My question was: do we know enough of the physics of the 20th century before venturing into the physics of the 21st? I don't think this question is off-topic.

20. Peter Woit says: There are all sorts of problematic claims made in different sciences. I just don't want this blog turned into a discussion forum about all of them, but want to keep it focused on things I know about and am willing to moderate discussions of. The question of the evidence for inflation is an interesting one, and "off-topic" makes to-the-point comments, but I'm not an expert on this, and there are good blogs out there run by people who are, so that's where the discussion should really take place. My point of view is certainly that the Standard Model QFT remains poorly understood in many ways, and that problem deserves more attention. There are lots of other issues in physics that aren't well-understood, but again, I don't want to moderate discussions of issues I don't know much about.

21. Robert says: If your words were as reasonable as your slides, congratulations on this nice presentation. For the philosophy of science section of the German Physics Society I had intended to give a talk with a very similar subject (but of course slightly different conclusions). Unfortunately, for personal reasons I could not attend the conference. Just a minor point of nitpicking (and we have discussed this before): When you say there is no clear-cut experimental prediction, I would qualify that with "to be performed with currently available experimental technology". Otherwise I strongly believe your claim is wrong, at least if a weakly coupled description exists (that is, there is — possibly after a duality — a stringy description with g

22. Robert says: Sorry for the sudden end of the previous comment. I wanted to say g less-than less-than (i.e., \ll in TeX) 1, but typing that froze my firefox (probably the script that does the preview.
Luckily, I did not lose the post, as after a few minutes it popped up a box asking me if I wanted to cancel a script. So I could still press the submit button. But there seems to be a bug either in the script or in firefox…

23. Robert Musil says: Please correct me if I mistake your views, but I believe you have several times made clear that while you harbor skepticism over many aspects of string theory as physics, you believe that much extraordinary and important mathematics has resulted from string theory. The "Mirror Conjecture" is one such example. Admiration for string theory mathematics spin-offs is widely shared by many of the world's leading mathematicians. But there are some very troubling aspects to even this, very real, admiration for string theory inspired mathematics – at least to my eye. It's trivial to formulate the Mirror Conjecture: Just flip the Hodge array on the diagonal and ask for a variety. But nobody bothered to ask the question before M-theory was posited. Moreover, the first few examples of the Mirror Conjecture are not hard to prove (although the entire conjecture is), yet nobody bothered to investigate them before M-Theory was posited. One main (or at least common) example that supposedly demonstrates the mathematical importance of the Mirror Conjecture – finding those curves – was being pursued (apparently) by exactly two Norwegians on a computer before the Mirror Conjecture came up. Yet the Mirror Conjecture is supposed to be ultra-important mathematics. There is something very strange here. Perhaps what is strange here is reflected (oops! an unintentional pun) in the constant references to physics in all mathematical programs regarding the Mirror Conjecture (or at least the ones with which I am familiar). "Golly," the mathematicians seem to say, "What I'm noodling over has relevance to the real world! It must be important mathematics!" But if it turns out that string theory is not important physics, I believe it would be a first if the associated mathematics were really all that important – regardless of the level of enthusiasm it has inspired. After all, string theory inspired quite a lot of ill-considered, unchallenged enthusiasm as physics for quite a while. In other words, I can't shake the sense that the enthusiasm over the Mirror Conjecture (for example) has itself a hall-of-mirrors aspect: Mathematicians (even very good ones) love it supposedly because it is "intrinsically" wonderful mathematics. But it's a strange kind of intrinsically wonderful mathematics that nobody gave a damn about before the physics came along in the form of string theory – even though it's wonderful mathematics whose formulation is trivial and whose first few examples are easy and whose supposedly important applications nobody cared about enough to work on but two Norwegians (not that I have anything against Norwegians, mind you). Of course, on the other side of the hall of mirrors we find the string theorists reassuring themselves that their theory must be important (or even correct) because the mathematics is so wonderful. Bing, bing, bing goes the wonderful image across the hall – each time a little more distorted as it recedes. Personally, I find this hall of mirrors aspect of things disturbing, perhaps because I associate halls of mirrors with lower-budget hotel lobbies trying to look bigger than they are. Somehow I get a similar feeling from the mathematical spin-offs of string theory. Do you have anything to say on this?

24.
r hofmann says: Dear Robert Musil, although I have no idea about the Mirror Conjecture, what you say about it and its embedding into the modern relationship between physics and mathematics strikes me as an intelligent observation. Thanks for the info.

25. David B. says: Dear Robert: Mirror symmetry is not just about "flipping the Hodge diamond". When you say …, you are trivializing the contribution from physicists and mathematicians. The truth is that mathematicians had not suspected that the problem of counting curves in Calabi-Yau manifolds (a typical problem of enumerative geometry) could be related to the theory of deformations of the complex structure on the mirror geometry. You are also trivializing the problem by making statements like "even though it's wonderful mathematics whose formulation is trivial and whose first few examples are easy". The formulation is not trivial at all, and it took quite a while before someone produced a complete mathematical proof of the first few examples. I don't like these misinformed statements about the relationship between research in string theory and mathematics. They seem to be crafted for purposefully misleading the public at large. Many professionals use simple statements like "flipping the Hodge diamond" when giving presentations, in order to explain the simplest aspects of mirror symmetry to an uninformed audience and to try to give them something they might relate to. In this way they can share the excitement of the subject. Don't mistake those statements for the research that is done in the subject.

26. Peter Woit says: Robert (non-Musil), The slides pretty accurately reflect what I said. In this talk I wanted to just as clearly as possible state the facts of the matter and avoid any editorializing. One thing that I should have put in the slides was a comment about the issue you raise, the claim that the testability problem for string theory only arises at low energy, that if we could do Planck scale experiments, it would be testable. I think we've probably discussed this before, but I would claim that the string theory framework continues to be not testable even at that scale. As you acknowledge, even a qualitative prediction of the kind I assume you have in mind (standard distinctive aspects of perturbative string spectra or scattering amplitudes) relies on the string coupling being small enough for the perturbation approximation to be good. Such a prediction is not falsifiable, since it could be evaded simply by saying "well, maybe the string coupling really is not small enough". In practice, it is true that if we could do experiments at arbitrarily high scales, we'd presumably see what the structure of quantum gravitational effects is, and would see whether this looked at all like anything that had ever shown up in studies of string theory.
Robert (Musil), David B. is right. The "Mirror Conjecture" and the associated mathematics it has generated go far, far beyond what you mention and are much deeper than "flipping the Hodge diamond". As an example of this, next week at the IAS there'll be an important mathematics workshop on "Homological Mirror Symmetry", focusing on relations to the geometric Langlands program. This is a very active and important area in mathematics. It has pretty much nothing to do with attempts to unify physics via string theory, but it's great mathematics, and maybe someday it will turn around and inspire some physics.

27. Robert Musil says: Thank you for your as-always thoughtful response. David B.
seems a very intelligent and knowledgeable (if somewhat excitable) fellow, but he is certainly not right in mischaracterizing me as asserting that the Mirror Conjecture ends with the Hodge Diamond formulations. Indeed, I'm not aware of any comprehensive formulation of the Mirror Conjecture. Manifolds with mirror-symmetric Hodge tables are called geometrical mirrors. My point in this regard is (and was) that the Hodge Diamond formulation is trivial to state and notice, and that nobody had bothered to do either prior to the positing of M Theory. Yet now that very formulation is deemed to be inherently wonderful mathematics. Of course, this is not an argument for dismissing or downgrading the significance of any version of the Mirror Conjecture. But to start the discussion it does help to get the question right. Nor is David B.'s assertion that there are no easy examples of Mirror Symmetry right. Indeed, it is not that hard to find references to this fact in papers by central practitioners in the field. Of course, some of the known examples were by no means easy. As for the geometric Langlands program, I'm not knowledgeable in that area of mathematics. I realize that geometric Langlands is an active area of research considered promising by many very smart people. But promise and "rich" structure alone didn't make string theory great – or even important – physics. I'm not sure if I see why one can already conclude that Geometric Langlands is great mathematics – and evaluating the importance of Mirror Conjecture relationships to GL is another step after that.

28. Peter Shor says: If somebody comes up tomorrow with a beautiful new theory which unifies gravity and QM and is much simpler than string theory, and if the LHC produces results that agree with its predictions, I assume that nearly all the string theorists will drop their current research and jump on the bandwagon. The real question is (a) without any hints of an alternative, are any of them going to abandon string theory research, no matter how unpromising it looks, and (b) whether a hint of a promising alternative is enough, or whether it takes a fully formed theory. For instance, if the LHC produces a Higgs mass close to that predicted by Connes, are any of the string theorists going to take this as a hint that maybe they're on the wrong track, and Connes on the right one? Any wagers on this?

29. Kea says: Any wagers on this? What are we betting on? How long it will take the String theorists to figure out what's going on? Actual experimental outcomes at the LHC? Oooohhh, this is fun.

30. A.J. says: Several comments for Robert Musil:
1) The relationship between Hodge diamonds predates M-theory by several years. It's part of the story physicists like to tell about M-theory and a simple example of a mirror phenomenon, but I don't think it's of deep importance. More of a decorative note.
2) I've never heard anyone claim that the existence of manifolds with mirror Hodge diamonds was the important or deep part of the story. Complaining that others are calling it "inherently wonderful mathematics" seems like a bit of a straw man. Who exactly has said this?
3) What is important, as David B. more or less pointed out, is that we can relate moduli spaces of complex structures to moduli spaces of symplectic structures. This is incredibly non-trivial, and potentially very useful.
4) While I agree that "This shows up in string theory" isn't necessarily a good rationale for a mathematical research problem, I think it's a poor reason to dislike good mathematical ideas. And the notion that there's a topological field theory which carries information about the space of curves and maps to a fixed target has proven to be a fertile source of algebro-geometric ideas.

31. A.J. says: Peter (Shor): If the Connes et al prediction comes out right, I imagine that some people will take the hint and start working on it. On the other hand, I've also seen some stringy speculation around the fact that the noncommutative space in Connes, Marcolli, & Chamseddine has KO dimension 6.

32. Robert Musil says: You are quite right in that current interest in mirror manifolds is due to the idea that along with the equality $h^{1,1}(X) = h^{2,1}(Y)$ of moduli numbers of Kähler structures on X and of complex structures on Y, the whole symplectic topology on X is equivalent to complex geometry on Y, and vice versa. In that sense perhaps I should have been more explicit about the means of establishing the Hodge equivalences. But the first examples of this equivalence are not hard, nobody was looking at them, etc. I'm not sure what you mean by "The relationship between Hodge diamonds predates M-theory by several years," unless you are referring to the earlier computer results. I'm not aware of any general Hodge diamond conjecture that predated the positing of M Theory. All that being said, I don't see why my points don't still stand. For example, while I don't mean to be snide or obtuse, neither do I see why the assertion that something is "a fertile source of algebro-geometric ideas" is a very good basis for concluding that those ideas or their source are important. The argument seems to completely assume its conclusion. Am I missing part of your point? There is clearly great and broad enthusiasm for some mathematics derived from (or spun off from) string theory – much of it among very smart and accomplished people. But there was (and is) just such enthusiasm for string theory itself – an enthusiasm only recently seriously challenged. That challenge has been made from one redoubt: In physics one at least has the check on the products of such enthusiasms that at some point or other those products must be EXPERIMENTALLY TESTABLE (although, as this blog cogently points out, some string theory practitioners are struggling mightily to avoid even that check). There is no such check in mathematics. So how do we know that the mathematics spun off from string theory is not just empty enthusiasm? It's just silly to deny that a lot (as Peter points out, perhaps not all) of the enthusiasm is derived directly or indirectly from string theory itself. To make matters worse, some of the best mathematicians speaking to the public about mathematics spun off from string theory often make claims for its importance that are absurdly over the top (Michael Atiyah, for example). Certainly just asserting that one thing or another is "great mathematics" or the like doesn't advance matters, does it? What does?

33. A.J. says: OK first, most of the ideas of mirror symmetry predate M-theory by several years. The former is part of the body of evidence for the latter. If you want more direct evidence: Kontsevich's homological mirror symmetry lecture is from the summer of '94; Witten's M-theory announcement from the fall of 1995. Second, why are we still talking about Hodge equivalences?
This is a hint that there's something interesting going on, not the end goal of any major research efforts. I don't understand what your metric for "importance" is. But it seems to me that algebraic geometers have judged Gromov-Witten theory to be important and interesting because of the ideas it's brought into their field, not because it's connected in some way to a much larger program in a different field. So, yes, by this standard, it's important. If you mean important in some other sense, I really don't have anything to say to you. My point basically is this: You have a reasonable abstract point about a potential relationship between relative levels of enthusiasm about physics and mathematics, and a cute metaphor about hotel mirrors to go with it. But I think you're quite wrong to single out mirror symmetry as an example of the phenomenon you're talking about. And I suspect you will have a hard time finding actual examples. It's true that some mathematicians like to talk and daydream about important physics connections, but I think you'll find that the physics-derived ideas which mathematicians have really taken the time to develop intensely have been those which are useful and interesting as mathematics.

34. Robert Musil says: You mention "I don't understand what your metric for 'importance' is." Well, let's take that seriously. Terry Tao advanced a set of criteria for "bad mathematics" that I believe were discussed in this blog a while back:
• A field which becomes increasingly ornate and baroque, in which individual results are generalised and refined for their own sake, but the subject as a whole drifts aimlessly without any definite direction or sense of progress; or
• A field which becomes filled with many astounding conjectures, but with no hope of rigorous progress on any of them; or
• A field which now consists primarily of using ad hoc methods to solve a collection of unrelated problems, which have no unifying theme, connections, or purpose; or
• A field which has become overly dry and theoretical, continually recasting and unifying previous results in increasingly technical formal frameworks, but not generating any exciting new breakthroughs as a consequence; or
• A field which reveres classical results, and continually presents shorter, simpler, and more elegant proofs of these results, but which does not generate any truly original and new results beyond the classical literature.
Is it clear that the mathematics spun off from string theory has avoided each of these? It seems at least arguable that one, perhaps more, of these criteria fit uncomfortably well. Not that an answer to this would end the discussion, of course.

35. Robert Musil says: I first want to be very clear that I appreciate your thoughtfulness and intelligent comments. I also want to apologize in advance for popping in this second post before you have a chance to respond to or digest the first. With respect to Kontsevich's seminal address at ICM, Zurich 1994, it is worth keeping in mind that Kontsevich himself characterized what he was doing as follows (I quote from his address): "Mirror Symmetry was discovered several years ago in string theory as a duality between families of 3-dimensional Calabi-Yau manifolds (more precisely, complex algebraic manifolds possessing holomorphic volume elements without zeroes). The name comes from the symmetry among Hodge numbers. For dual Calabi-Yau manifolds V, W of dimension n (not necessarily equal to 3) one has $\dim H^p(V, \Omega^q) = \dim H^{n-p}(W, \Omega^q)$. …."
"We describe here a not yet completely constructed theory which has potentially wider domain of applications than mirror symmetry. It is based on pioneering ideas of M. Gromov on the role of ∂-equations in symplectic geometry, and certain physical intuition proposed by E. Witten." The relevant references to Witten's "intuitions" are to two papers: Topological sigma models, Commun. Math. Phys. 118 (1988), 411-449, and Two-dimensional gravity and intersection theory on moduli space, Surveys in Diff. Geom. 1 (1991), 243–310. I believe these quotes address several questions and concerns expressed in your posts above (why we are talking about Hodge numbers, for example). I also believe these passages support my points.

36. A.J. says: Tao didn't give that list as criteria for bad mathematics. It's just a list of dangers (somewhat exaggerated, as Tao admits) which might have detrimental effects on the development of a field. I think it's misleading to treat it as a checklist for identifying "bad mathematics". That said, the only danger I see being remotely applicable is the 2nd one. But I don't think it's a particularly great danger. For one thing, judicious borrowing of physical intuition has a pretty good track record. (Donaldson theory, Chern-Simons, knot polynomials, mirror symmetry, Seiberg-Witten theory, and so on.) And for another, mathematicians have a habit of concentrating on problems they think are solvable. No one is butting heads with 4d Yang-Mills theory right now, because it's probably out of reach. But there's lots of motion in the Gromov-Witten theory of orbifolds right now; people are getting things done.

37. A.J. says: I don't see how the Kontsevich quotes support your points. Perhaps you'd care to explain? You'll probably have to take some care to spell out carefully what you mean, since we seem to be talking at angles. Some of the confusion may stem from the term mirror symmetry. The symmetry gets its name from the duality of the Hodge diamonds, but it's just a name. The actual set of ideas involved is considerably richer than the name implies. Most of it has been developed in the years since Kontsevich's lecture.

38. David Williams says: Please have mercy on an old social science PhD. I have had, basically, only a pragmatic and professional education except for a couple of biology courses and a stint as a biology teaching assistant (where I first encountered the scientific method), but I have indulged my interest in popularized science writing. I use this information in debating the champions of religion. In debate the basic successful argument is that Science is not based on belief but on questioning and testing. Recently, String theory has become widely accepted in physics. I love the idea in the sense that it tells us that the universe is a symphony. HOWEVER, String theory appears to arrive at the position of a Unified Field Theory only by relying upon (a) mathematical solutions and (b) solutions that require positing multiple universes. May I ask you these questions. In your opinion, are mathematical solutions the equivalent of an empirical test? Although I'm told (and I simply have to accept or not – at the level of my math and science skills) that M theory will offer an opportunity to empirically test String Theory, I cannot, to my satisfaction, imagine an empirical test for multiple universes. And, if empirical tests are not available by the very nature of String Theory, is this idea no better than religious belief?
I am inclined, therefore, to simply leave String theory to its own devices and conclude that it lacks scientific credibility and that we are stuck with the contradictions between General Relativity and Quantum Dynamics. We would otherwise be as lacking in evidence as the religious. Why have physicists so departed from scientific standards? Hope you will be able to spare the time to answer this query. David P. Williams, PhD, 3181 Micmac St., Halifax, Nova Scotia, Canada B3L 3W3, (902) 454

39. tomj says: I have similar concerns. String theory is hyped or hoped beyond belief. This is serious, because if a scientist is supposed to be exact and careful about their theory, their work, etc., why doesn't this carry over into their public descriptions? I somehow stumbled across this site a few weeks ago. But for several years I have firmly believed that there was something not right about the 'theory of everything' crowd. At that point I was trying to track down some more concrete details about these theories. But nothing concrete ever appeared. Instead, I ran across some made-up cafeteria dialog between a string theorist and (I guess) an LQG theorist. I think the point of the dialog was to highlight the lack of evidence for either theory, but more important for me was another principle: science is about the unknown, not the known or the unknowable. If science is expanded to cover the unknowable, you forfeit the ability to apply Occam's razor. Occam's razor isn't a theory, it isn't a law of nature, it is a check on logic: it requires experiment. If a theory has no experimental results, how can you compare it to one that does? If a theory predicts unknowables like multiple universes, how can this win out over a theory that predicts only the one we experience? My problem with the proponents of string theory is that their ideas fall into the category of 'known' or 'unknowable'. That is, their statements lead me to believe that they know something (strings are the basic building blocks of everything) or their theory covers stuff we can't know (multiple universes, etc.). In the first case, they are lying, or using language in a very sloppy way. If they are sloppy with English, why should I think that they are not sloppy in their math or logic? What I don't understand is that if a scientist makes wild statements that they 'know' something or that their theory implies 'unknowable' realities, why shouldn't I remember their unscientific approach? Either put up, or shut up.
Known = technology
Unknown = science
Unknowable = fantasy

40. r hofmann says: it's of undeniable educational value to follow the debate between Robert Musil and yourself. This statement is, however, outright false and confirms pretty much the relevance of Terry Tao's above-quoted criteria. Best, RH

41. Ralf says: David and tomj, String theory/M-theory is a speculative research program which therefore would not be covered at all in the popularized science press if the latter were responsible. Officially, string theory is "accepted" exactly as that. In practice this doesn't stand in the way of string theorists taking over high energy physics, in part because the field has been short of new ideas for three decades. In such cases the subjective criteria for what can be regarded as "reasonable" ideas become rather flexible. The "scientific method" exists only in the imagination of philosophers of science. "Occam's razor" cannot be "applied" like a theoretical analog of a lab test.
People can be honestly deluded about things that are crucial to their identity, like their love life, their social life or their professional world. In addition, string theorists view themselves as intellectually superior to everybody else, which automatically degrades any objections brought up by them. Finally, in reality there is simply nothing about string theory that can be related to laypeople. Anybody who writes about it is only heaping nonsense on a foundation of nonsense which again rests on a foundation of nonsense. It is utter intellectual dishonesty to pretend otherwise. Supersymmetry, by itself, cannot possibly be assessed by a non-physicist. Every account makes it appear much more reasonable than it is. Grand Unification sounds almost like a no-brainer if one doesn't know the details. The technical details on which string theory is built—and which are never even mentioned in the popular press—render it, in my opinion, deranged and demented. And it is exactly this wide gap between actual physics and string theory that—perversely—facilitates the public's susceptibility to it. The public never registered anything from the Schrödinger equation onward because they don't like the absence of visualizability. That is why they prefer the faux visualizability of General Relativity—and of string theory, of course. Physics is not the Riemannian geometry of the 19th century. It was Einstein, after all, who commented that one should explain everything as simply as possible—but not simpler.

42. a says: dear David, let me try to give an answer to your "Why have physicists so departed from scientific standards?". It is oversimplified and caricatural, but I think it captures a relevant aspect of the question. Do you believe that an average rational human being would choose option A or B? Option A is what is happening now. Option B is "I spent my life working on strings, but, contrary to what the press said, initial hopes mostly disappeared. Maybe I could start doing some other physics, but I only have expertise in strings, which is a highly specialized topic: so I resign from my academic job."

43. A.J. says: R. Hofmann: Sorry about that. I was not expressing myself clearly. (Why can't you people just read my mind?!) A precise formulation: "Few if any mathematicians are attempting to construct 4d Yang-Mills theory in the sense required by the Clay Millennium prizes." Obviously plenty of people are thinking about 4d Yang-Mills in a non-rigorous fashion, or trying to work out various facts about its topological analogues. But no one's managed to do anything interesting as far as construction & mass gap goes.

44. r hofmann says: I see … That problem was formulated by E. Witten and a famous Harvard mathematical physicist, right? Best, RH

45. A.J. says: I don't know anything about how the Clay Foundation works, but at the least the problem description was written by Edward Witten and Arthur Jaffe.

46. Ari Heikkinen says: Just a couple of questions, when you say: Do you mean by this that what's in Greene's book (that "beautiful idea") of particles being tiny vibrating strings, whose amplitude and wavelength correspond to their different masses and force charges, and that those "extra" dimensions are curled up in Calabi-Yau shapes, are what disagree with experiment? And by this something that Greene's book doesn't mention?

47.
Peter Woit says: One of the main problems is that you have to do something to fix the size and shape of the Calabi-Yaus, and the only ways people have found to do this involve introducing a lot of complex, ad hoc structure. This is the “moduli problem”, and I don’t remember what Brian says about it in his book. His book was written now quite a few years ago, before people had any solution at all to the problem. Back then I suspect there was a lot more optimism that a simple solution could be found. Comments are closed.
International Journal of Molecular Sciences (Int. J. Mol. Sci.), ISSN 1422-0067, Molecular Diversity Preservation International (MDPI)

The Bondons: The Quantum Particles of the Chemical Bond

Mihai V. Putz (1,2)
1. Laboratory of Computational and Structural Physical Chemistry, Chemistry Department, West University of Timişoara, Pestalozzi Street No. 16, Timişoara, RO-300115, Romania; E-Mail: mvputz@cbg.uvt.ro or mv_putz@yahoo.com; Tel.: ++40-256-592-633; Fax: ++40-256-592-620; Web: www.mvputz.iqstorm.ro
2. Theoretical Physics Institute, Free University Berlin, Arnimallee 14, 14195 Berlin, Germany

Int. J. Mol. Sci. 2010, 11(11), 4227-4256; doi:10.3390/ijms11114227. Received 23 August 2010; revised 11 October 2010; accepted 21 October 2010; published 28 October 2010. © 2010 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland.

Abstract: By employing the combined Bohmian quantum formalism with the U(1) and SU(2) gauge transformations of the non-relativistic wave-function and the relativistic spinor, within the Schrödinger and Dirac quantum pictures of electron motions, the existence of the chemical field is revealed along with the associated bondon particle, characterized by its mass (m), velocity (v), charge (e), and life-time (t). This is quantized either in ground or excited states of the chemical bond in terms of the reduced Planck constant ħ, the bond energy $E_{\text{bond}}$ and length $X_{\text{bond}}$, respectively. The mass-velocity-charge-time quaternion properties of the bondon particles were used in discussing various paradigmatic types of chemical bond towards assessing their covalent, multiple bonding, metallic and ionic features. The bondonic picture was completed by discussing the relativistic charge and life-time (the actual zitterbewegung) problem, i.e., showing that the bondon equals the benchmark electronic charge through moving with almost light velocity. It carries negligible, although non-zero, mass in special bonding conditions and towards observable femtosecond life-time as the bonding length increases in nanosystems and the bonding energy decreases according to the bonding length-energy relationship $E_{\text{bond}}[\text{kcal/mol}] \times X_{\text{bond}}[\text{Å}] = 182019$, providing in this way the predictive framework in which the particle may be observed. Finally, its role in establishing the virtual states in Raman scattering was also established.

Keywords: de Broglie-Bohm theory; Schrödinger equation; Dirac equation; chemical field; gauge/phase symmetry transformation; bondonic properties; Raman scattering

One of the first attempts to systematically use the electron structure as the basis of the chemical bond is due to the discoverer of the electron itself, J.J. Thomson, who published in 1921 an interesting model for describing one of the most puzzling molecules of chemistry, benzene, by the aid of C–C portioned bonds, each with three electrons [1] that were further separated into 2(σ) + 1(π) lower and higher energy electrons, respectively, in the light of Hückel σ-π and of subsequent quantum theories [2,3]. On the other side, the electronic theory of valence developed by Lewis in 1916 [4] and expanded by Langmuir in 1919 [5] had mainly treated the electronic behavior like a point-particle that nevertheless embodies considerable chemical information, due to the semiclassical behavior of the electrons on the valence shells of atoms and molecules.
Nevertheless, the consistent quantum theory of the chemical bond was advocated and implemented by the works of Pauling [6-8] and Heitler and London [9], which gave rise to the wave-function characterization of bonding through the fashioned molecular wave-functions (orbitals), mainly coming from the superposition principle applied on the atomic wave-functions involved. The success of this approach, especially reported by spectroscopic studies, encouraged further generalization toward treating more and more complex chemical systems by the self-consistent wave-function algorithms developed by Slater [10,11], Hartree-Fock [12], Löwdin [13-15], Roothaan [16], Pariser, Parr and Pople (in PPP theory) [17-19], until the turn towards the density functional theory of Kohn [20,21] and Pople [22,23] in the second half of the XX century, which marked the subtle feed-back to the earlier electronic point-like view by means of the electronic density functionals and localization functions [24,25]. The compromised picture of the chemical bond may be widely comprised by the emerging Bader atoms-in-molecules theory [26-28], the fuzzy theory of Mezey [29-31], along with the chemical reactivity principles [32-43] as originating in Sanderson's electronegativity [34] and Pearson's chemical hardness [38] concepts, and their recent density functionals [44-46] that eventually characterize it.

Within this modern quantum chemistry picture, it seems that the Dirac dream [47] of characterizing the chemical bond (in particular) and chemistry (in general) by means of the chemical field related with the Schrödinger wave-function [48] or the Dirac spinor [49] was somehow avoided by collapsing the undulatory quantum concepts into the (observable) electronic density. Here is the paradoxical point: the dispersion of the wave function was replaced by the delocalization of density, and the chemical bonding information is still beyond a decisive quantum clarification. Moreover, the quantum theory itself was challenged as to its reliability by the Einstein-Podolsky-Rosen(-Bohr) entanglement formulation of quantum phenomena [50,51], qualitatively explained by the Bohm reformulation [52,53] of the de Broglie wave packet [54,55] through the combined de Broglie-Bohm wave-function [56,57]

$$\Psi_0(t,x) = R(t,x)\exp\left(\frac{i\,S(t,x)}{\hbar}\right) \tag{1}$$

with the R-amplitude and S-phase action factors given, respectively, as

$$R(t,x) = \sqrt{\left|\Psi_0(t,x)\right|^2} = \rho^{1/2}(x) \tag{2}$$

$$S(t,x) = px - Et \tag{3}$$

in terms of electronic density ρ, momentum p, total energy E, and time-space (t, x) coordinates, without spin.

On the other side, although many of the relativistic effects were explored by considering them in the self-consistent equations of atomic and molecular structure computation [58-62], the recent reloaded thesis of Einstein's special relativity [63,64] into the algebraic formulation of chemistry [65-67] widely asks for a further reformation of the chemical bonding quantum-relativistic vision [68]. In this respect, the present work advocates making these required steps toward assessing the quantum particle of the chemical bond, based on the derived chemical field released in its turn by the fundamental electronic equations of motion, either within the Bohmian non-relativistic (Schrödinger) or relativistic (Dirac) pictures, and to explore the first consequences. If successful, the present endeavor will contribute to celebrating the dream of unifying the quantum and relativistic features of the electron at the chemical level, while unveiling the true particle-wave nature of the chemical bond.
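Before the method is unfolded, it may help to fix the orders of magnitude implied by the abstract's observability condition $E_{\text{bond}}[\text{kcal/mol}] \times X_{\text{bond}}[\text{Å}] = 182019$: since the product is fixed, longer (nanosystem) bonds correspond to proportionally smaller bonding energies. The short script below is purely illustrative; the sampled bond lengths are hypothetical inputs, and only the product value comes from the paper.

```python
# Illustrative only: the bond lengths sampled below are hypothetical inputs;
# the product value 182019 [kcal/mol * Angstrom] is taken from the abstract.
EX_PRODUCT = 182019.0

for x_bond in (1.0, 10.0, 100.0, 1000.0):   # bond length in Angstrom
    e_bond = EX_PRODUCT / x_bond            # bond energy in kcal/mol implied
    print(f"X_bond = {x_bond:8.1f} A  ->  E_bond = {e_bond:10.1f} kcal/mol")
```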
If successful, the present endeavor will contribute to celebrating the dream of unifying the quantum and relativistic features of the electron at the chemical level, while unveiling the true particle-wave nature of the chemical bond.

Method: Identification of Bondons (B̶)

The search for the bondons follows the algorithm:

(i) Considering the de Broglie-Bohm electronic wave-function/spinor Ψ₀ formulation of the associated quantum Schrödinger/Dirac equation of motion.

(ii) Checking for the recovery of the charge current conservation law

$\frac{\partial\rho}{\partial t} + \vec{\nabla}\cdot\vec{j} = 0$  (4)

which assures the circulatory nature of the electronic fields under study.

(iii) Recognizing the quantum potential V_qua and its equation, if it eventually appears.

(iv) Reloading the electronic wave-function/spinor under the augmented U(1) or SU(2) group form

$\Psi_G(t,x) = \Psi_0(t,x)\exp\left(\frac{i}{\hbar}\frac{e}{c}\aleph(t,x)\right)$  (5)

with the standard abbreviation $e = e_0^2/(4\pi\varepsilon_0)$, in terms of the chemical field ℵ considered as being of the order of the inverse fine-structure constant:

$\aleph_0 = \frac{\hbar c}{e} \cong 137.03599976\;\left[\frac{\text{Joule}\times\text{meter}}{\text{Coulomb}}\right]$  (6)

since it is upper bounded, in principle, by the atomic number of the ultimate stable chemical element (Z = 137). Although apparently small enough to be neglected at ordinary quantum scales, the quantity (6) plays a crucial role in chemical bonding, where the energies involved are of the order of 10⁻¹⁹ Joules (electron-volts)! (A quick numerical check of this quantum is sketched right after this list.) Nevertheless, for establishing the physical significance of such a chemical bonding quantum, one can proceed with the chain of equivalences

$\aleph \sim \frac{\text{energy}\times\text{distance}}{\text{charge}} \sim \frac{(\text{charge}\times\text{potential difference})\times\text{distance}}{\text{charge}} \sim (\text{potential difference})\times\text{distance}$  (7)

revealing that the chemical bonding field carries bondons with unit quanta ħc/e along the bonding distance, within the potential gap of stability, or by tunneling the potential barrier between the encountered bonding attractors.

(v) Rewriting the quantum wave-function/spinor equation with the group object Ψ_G, while separating the terms containing the real and the imaginary ℵ chemical field contributions.

(vi) Identifying the chemical field charge current and term within the actual group transformation context.

(vii) Establishing the global/local gauge transformations that resemble the de Broglie-Bohm wave-function/spinor ansatz Ψ₀ of steps (i)-(iii).

(viii) Imposing invariance conditions on the Ψ_G wave-function with respect to the pattern quantum equation and the Ψ₀ wave-function/spinor action of steps (i)-(iii).

(ix) Establishing the specific equations of the chemical field ℵ.

(x) Solving the system of chemical field ℵ equations.

(xi) Assessing the stationary chemical field condition

$\frac{\partial\aleph}{\partial t} = 0$  (8)

which is the case for chemical bonds at equilibrium (the ground-state condition), to simplify the quest for the solution of the chemical field ℵ.

(xii) Identifying the manifested bondonic chemical field ℵ_bondon along the bonding distance (or space).

(xiii) Checking the eventual charge flux condition of Bader within the vanishing chemical bonding field [26]

$\vec{\nabla}\aleph = 0 \;\Rightarrow\; \vec{\nabla}\rho = 0$  (9)

(xiv) Employing the Heisenberg time-energy relaxation-saturation relationship through the kinetic energy of the electrons in bonding

$v = \sqrt{\frac{2T}{m}} \cong \sqrt{\frac{2}{m}\frac{\hbar}{t}}$  (10)

(xv) Equating the bondonic chemical bond field with the chemical field quantum (6) to get the bondons' mass:

$\aleph_{bondon}(m_{B̶}) = \aleph_0$  (11)

This algorithm will next be unfolded both for non-relativistic and for relativistic electronic motion, in the quest for the bondonic existence, eventually emphasizing the differences between the respective bondonic manifestations.

Types of Bondons

Non-Relativistic Bondons

For the non-relativistic quantum motion, we will treat the above steps (i)-(iii) at once.
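As flagged at step (iv) above, a minimal numerical sketch of the quantum (6) may be useful before unfolding the algebra. It assumes the Gaussian-style abbreviation e = e₀²/(4πε₀) stated at step (iv); under this reading ħc/e becomes the familiar inverse fine-structure constant, and the [J·m/C] units quoted in (6) merely bookkeep the "potential difference × distance" interpretation of the chain (7). The constants and names in the snippet are ours, not the paper's.

```python
# Numerical aside: with e = e0^2/(4*pi*eps0) as in step (iv), the ratio
# hbar*c/e reproduces the inverse fine-structure constant ~137.036 quoted
# in Equation (6).
from math import pi

hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
e0 = 1.602176634e-19      # C
eps0 = 8.8541878128e-12   # F/m

e_abbrev = e0**2 / (4*pi*eps0)    # J*m, the paper's charge abbreviation
aleph0 = hbar*c / e_abbrev        # dimensionless in this reading
print(f"aleph_0 = {aleph0:.6f}")  # -> 137.035999
```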
As such, when considering the de Broglie-Bohm electronic wave-function in the Schrödinger equation [48]

$i\hbar\frac{\partial}{\partial t}\Psi_0 = -\frac{\hbar^2}{2m}\nabla^2\Psi_0 + V\Psi_0$  (12)

it separates into real and imaginary components as [52,53,68]

$\frac{\partial}{\partial t}R^2 + \vec{\nabla}\cdot\left(R^2\frac{\vec{\nabla}S}{m}\right) = 0$  (13a)

$\frac{\partial S}{\partial t} - \frac{\hbar^2}{2m}\frac{\nabla^2R}{R} + \frac{1}{2m}(\vec{\nabla}S)^2 + V = 0$  (13b)

While Equation (13a) is recognized as the charge current conservation law (4) through the identification

$\vec{j}_S = \frac{R^2}{m}\vec{\nabla}S$  (14)

the second equation helps in detecting the quantum (or Bohm) potential

$V_{qua} = -\frac{\hbar^2}{2m}\frac{\nabla^2R}{R}$  (15)

contributing to the total energy

$E = T + V + V_{qua}$  (16)

once the momentum-energy correspondences

$\frac{1}{2m}(\vec{\nabla}S)^2 = \frac{p^2}{2m} = T, \qquad \frac{\partial S}{\partial t} = -E$  (17)

are engaged. Next, when employing the associated U(1) gauge wave-function of the Equation (5) type, its partial derivatives look like

$\vec{\nabla}\Psi_G = \left[\vec{\nabla}R + \frac{i}{\hbar}R\left(\vec{\nabla}S + \frac{e}{c}\vec{\nabla}\aleph\right)\right]\exp\left[\frac{i}{\hbar}\left(S + \frac{e}{c}\aleph\right)\right]$  (18a)

$\nabla^2\Psi_G = \left\{\nabla^2R + \frac{2i}{\hbar}\vec{\nabla}R\cdot\left(\vec{\nabla}S + \frac{e}{c}\vec{\nabla}\aleph\right) + \frac{i}{\hbar}R\left(\nabla^2S + \frac{e}{c}\nabla^2\aleph\right) - \frac{R}{\hbar^2}\left[(\vec{\nabla}S)^2 + \left(\frac{e}{c}\right)^2(\vec{\nabla}\aleph)^2\right] - \frac{2e}{\hbar^2c}R\,\vec{\nabla}S\cdot\vec{\nabla}\aleph\right\}\exp\left[\frac{i}{\hbar}\left(S + \frac{e}{c}\aleph\right)\right]$  (18b)

$\frac{\partial}{\partial t}\Psi_G = \left[\frac{\partial R}{\partial t} + \frac{i}{\hbar}R\left(\frac{\partial S}{\partial t} + \frac{e}{c}\frac{\partial\aleph}{\partial t}\right)\right]\exp\left[\frac{i}{\hbar}\left(S + \frac{e}{c}\aleph\right)\right]$  (18c)

Now the Schrödinger Equation (12) for Ψ_G in the form (5) is decomposed into imaginary and real parts,

$\frac{\partial R}{\partial t} = -\frac{1}{m}\left(\vec{\nabla}R\cdot\vec{\nabla}S + \frac{R}{2}\nabla^2S\right) - \frac{e}{mc}\left(\vec{\nabla}R\cdot\vec{\nabla}\aleph + \frac{R}{2}\nabla^2\aleph\right)$  (19a)

$-R\frac{\partial S}{\partial t} - R\frac{e}{c}\frac{\partial\aleph}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2R + \frac{R}{2m}\left[(\vec{\nabla}S)^2 + \left(\frac{e}{c}\right)^2(\vec{\nabla}\aleph)^2\right] + \frac{e}{mc}R\,\vec{\nabla}S\cdot\vec{\nabla}\aleph + VR$  (19b)

which can be rearranged as

$\frac{\partial R^2}{\partial t} = -\frac{1}{m}\vec{\nabla}\cdot\left(R^2\vec{\nabla}S\right) - \frac{e}{mc}\vec{\nabla}\cdot\left(R^2\vec{\nabla}\aleph\right)$  (20a)

$-\left(\frac{\partial S}{\partial t} + \frac{e}{c}\frac{\partial\aleph}{\partial t}\right) = -\frac{\hbar^2}{2m}\frac{\nabla^2R}{R} + \frac{1}{2m}\left[(\vec{\nabla}S)^2 + \left(\frac{e}{c}\right)^2(\vec{\nabla}\aleph)^2\right] + \frac{e}{mc}\vec{\nabla}S\cdot\vec{\nabla}\aleph + V$  (20b)

to reveal some interesting features of chemical bonding. Firstly, through comparing Equation (20a) with the charge current conservation law (4) of the general chemical field algorithm, step (ii), the conserved charge current now takes the expanded expression

$\vec{j}_{U(1)} = \frac{R^2}{m}\left(\vec{\nabla}S + \frac{e}{c}\vec{\nabla}\aleph\right) = \vec{j}_S + \vec{j}_\aleph$  (21)

suggesting that the additional current

$\vec{j}_\aleph = \frac{e}{mc}R^2\vec{\nabla}\aleph$  (22)

is the one responsible for activating the chemical field; it vanishes when the global gauge condition

$\vec{\nabla}\aleph = 0$  (23)

is adopted. Therefore, in order that chemical bonding be created, the local gauge transformation should be used, which exists under the condition

$\vec{\nabla}\aleph \neq 0$  (24)

In this framework, the chemical field current $\vec{j}_\aleph$ carries specific bonding particles that can be appropriately called bondons, closely related with electrons, in fact with those electrons involved in bonding, whether single, lone-pair or delocalized, having an oriented direction of movement and an action depending on the chemical field ℵ itself. Another important idea abstracted from the above results is that in the search for the chemical field ℵ no global gauge condition is required. It is also worth noting that the presence of the chemical field does not change the Bohm quantum potential, which is recovered untouched in (20b), thus preserving the entanglement character of the interaction. With these observations, it follows that in order for the de Broglie-Bohm-Schrödinger formalism to be invariant under the U(1) transformation (5), a couple of gauge conditions have to be fulfilled by the chemical field in Equations (20a) and (20b), namely

$\frac{e}{mc}\vec{\nabla}\cdot\left(R^2\vec{\nabla}\aleph\right) = 0$  (25a)

$\frac{e}{c}\frac{\partial\aleph}{\partial t} + \frac{1}{2m}\left(\frac{e}{c}\right)^2(\vec{\nabla}\aleph)^2 + \frac{e}{mc}\vec{\nabla}S\cdot\vec{\nabla}\aleph = 0$  (25b)

Next, the chemical field ℵ is to be expressed by combining its spatial and temporal information contained in Equations (25a,b). From the first condition (25a) one finds that

$\vec{\nabla}\aleph = -\frac{R}{2\vec{\nabla}R}\left(\nabla^2\aleph\right)\vec{i}$  (26)

where the vectorial feature of the chemical field gradient was emphasized along the direction of its associated charge current, fixed by the versor $\vec{i}$ (i.e., the unit vector of the propagation direction, $\vec{i}^{\,2} = 1$).
We will apply such writing whenever necessary, in order to avoid scalar-to-vector ratios and to preserve the physical sense of the whole construction. Replacing the gradient (26) of the chemical field into its temporal Equation (25b), one gets the unified description of the chemical field motion

$\frac{e}{8mc}\frac{R^2}{(\vec{\nabla}R)^2}\left(\nabla^2\aleph\right)^2 - \frac{R}{2m}\frac{\vec{\nabla}S\cdot\vec{i}}{\vec{\nabla}R}\left(\nabla^2\aleph\right) + \frac{\partial\aleph}{\partial t} = 0$  (27)

which can be further rewritten as

$\frac{e}{2mc}\frac{\rho^2}{(\vec{\nabla}\rho)^2}\left(\nabla^2\aleph\right)^2 - \frac{\rho\,v}{\vec{\nabla}\rho}\vec{i}\left(\nabla^2\aleph\right) + \frac{\partial\aleph}{\partial t} = 0$  (28)

upon calling the relations abstracted from Equations (2) and (3),

$R = \rho^{1/2},\quad \vec{\nabla}S = \vec{p} \;\Rightarrow\; \vec{\nabla}R = \frac{\vec{\nabla}\rho}{2\rho^{1/2}},\quad (\vec{\nabla}R)^2 = \frac{(\vec{\nabla}\rho)^2}{4\rho},\quad \frac{R}{2m}\frac{\vec{\nabla}S\cdot\vec{i}}{\vec{\nabla}R} = \frac{\rho\,\vec{v}\cdot\vec{i}}{\vec{\nabla}\rho}$  (29)

The (quadratic, undulatory) chemical field Equation (28) can firstly be solved for the general Laplacian solutions

$\left(\nabla^2\aleph\right)_{1,2} = \left[\frac{\rho v}{\vec{\nabla}\rho}\vec{i} \pm \sqrt{\frac{\rho^2v^2}{(\vec{\nabla}\rho)^2} - \frac{2e}{mc}\frac{\rho^2}{(\vec{\nabla}\rho)^2}\frac{\partial\aleph}{\partial t}}\,\right]\Bigg/\left[\frac{e}{mc}\frac{\rho^2}{(\vec{\nabla}\rho)^2}\right]$  (30)

which give special propagation equations for the chemical field, since they link the spatial Laplacian with the temporal evolution (∂tℵ)^(1/2) of the chemical field; however, they are considerably simplified when assuming the stationary chemical field condition (8), step (xi) of the bondonic algorithm, providing the working equation for the stationary bondonic field

$\nabla^2\aleph = \frac{2mc}{e}\,v\,\frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho}$  (31)

Equation (31) may be further integrated between two bonding attractors, say X_A and X_B, to primarily give

$\vec{\nabla}\aleph = \frac{2mc}{e}v\int_{X_A}^{X_B}\frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho}\,dx = \frac{mc}{e}v\left[\int_{X_A}^{X_B}\frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho}\,dx - \int_{X_B}^{X_A}\frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho}\,dx\right]$  (32)

from where the generic bondonic chemical field is manifested with the form

$\aleph_{bondon} = \frac{mc}{e}\,v\,X_{bond}\left(\int_{X_A}^{X_B}\frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho}\,dx\right)$  (33)

The expression (33) has two important consequences. Firstly, it recovers the Bader zero-flux condition for defining the basins of bonding [26], here represented by the vanishing chemical bonding field

$\aleph = 0 \;\Leftrightarrow\; \vec{\nabla}\rho\cdot\vec{i} = 0$  (34)

Secondly, it furnishes the bondonic (chemical field) analytical expression

$\aleph_{bondon} = \frac{mc}{e}\,v\,X_{bond}$  (35)

within the natural framework in which

$X_B - X_A = X_{bond}, \qquad \frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho} \sim \frac{1}{X_{bond}}$  (36)

i.e., when one has $\int_{X_A}^{X_B}\frac{\vec{\nabla}\rho\cdot\vec{i}}{\rho}\,dx = 1$. Step (xiv) of the bondonic algorithm may now be immediately implemented by inserting Equation (10) into Equation (35), yielding the simple chemical field form

$\aleph_{bondon} = \frac{c\hbar}{e}\sqrt{\frac{2m}{\hbar t}}\,X_{bond}$  (37)

Finally, through applying expression (11) of the bondonic algorithm, step (xv), upon the result (37) with the quantum (6), the mass of the bondons carried by the chemical field over a given distance is obtained:

$m_{B̶} = \frac{\hbar t}{2}\frac{1}{X_{bond}^2}$  (38)

Note that the bondons' mass (38) depends directly on the time the chemical information "travels" from one bonding attractor to the other, while decreasing rapidly as the bonding distance increases. This phenomenological behavior will be cross-checked in the sequel by considering the generalized relativistic version of the electronic motion by means of the Dirac equation; further quantitative considerations will be discussed afterwards.
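Before moving to the relativistic treatment, the U(1) bookkeeping above can also be machine-checked. A minimal extension of the earlier symbolic sketch (ours, illustrative) verifies that the imaginary part of the gauged Schrödinger residual reproduces the extended continuity law (20a), i.e., the current decomposition j⃗_S + j⃗_ℵ of Equations (21)-(22), simply through the shift S → S + (e/c)ℵ.

```python
# Gauge-extended version of the earlier symbolic sketch: with the U(1)
# phase of Eq. (5), the imaginary part of the Schrodinger residual gives
# the extended continuity law (20a), i.e., the current j_S + j_aleph of
# Eqs. (21)-(22), via the shift S -> S + (e/c)*aleph.
import sympy as sp

t, x = sp.symbols('t x', real=True)
m, hbar, e, c = sp.symbols('m hbar e c', positive=True)
R = sp.Function('R')(t, x)
S = sp.Function('S')(t, x)
aleph = sp.Function('aleph')(t, x)   # the chemical field
V = sp.Function('V')(x)

Seff = S + e*aleph/c                 # gauge-shifted action
phase = sp.exp(sp.I*Seff/hbar)
Psi_G = R*phase

res = sp.I*hbar*sp.diff(Psi_G, t) \
      + hbar**2/(2*m)*sp.diff(Psi_G, x, 2) - V*Psi_G
res = sp.expand(sp.powsimp(sp.expand(res/phase)))
im_part = sp.expand((res - res.subs(sp.I, -sp.I))/(2*sp.I))

# continuity with the extended current (R^2/m)*(grad S + (e/c)*grad aleph)
target = hbar*(sp.diff(R, t) + (sp.diff(R, x)*sp.diff(Seff, x)
                                + R*sp.diff(Seff, x, 2)/2)/m)
assert sp.expand(im_part - sp.expand(target)) == 0
print("U(1)-gauged continuity (20a) verified")
```

Imposing the global gauge condition (23) then switches j⃗_ℵ off, in line with the discussion around Equation (24).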
Relativistic Bondons

For treating the quantum relativistic electronic behavior, the consecrated starting point is the Dirac equation for a scalar, real-valued potential w, seen as a general function of the (ct, x⃗) dependency [49]

$i\hbar\frac{\partial}{\partial t}\Psi_0 = \left[-i\hbar c\sum_{k=1}^{3}\hat{\alpha}_k\partial_k + \hat{\beta}mc^2 + \hat{\beta}w\right]\Psi_0$  (39)

with the spatial coordinate derivative notation ∂_k ≡ ∂/∂x_k and the special operators assuming the Dirac 4D representation

$\hat{\alpha}_k = \begin{bmatrix}0 & \hat{\sigma}_k\\ \hat{\sigma}_k & 0\end{bmatrix}, \qquad \hat{\beta} = \begin{bmatrix}\hat{1} & 0\\ 0 & -\hat{1}\end{bmatrix}$  (40a)

in terms of the bi-dimensional Pauli and unit matrices

$\hat{\sigma}_1 = \begin{bmatrix}0&1\\1&0\end{bmatrix},\quad \hat{\sigma}_2 = \begin{bmatrix}0&-i\\i&0\end{bmatrix},\quad \hat{\sigma}_3 = \begin{bmatrix}1&0\\0&-1\end{bmatrix},\quad \hat{1}\equiv\hat{\sigma}_0 = \begin{bmatrix}1&0\\0&1\end{bmatrix}$  (40b)

Written within the de Broglie-Bohm framework, the spinor solution of Equation (39) looks like

$\Psi_0 = \frac{1}{\sqrt{2}}R(t,x)\begin{bmatrix}\varphi\\ \phi\end{bmatrix} = \frac{1}{\sqrt{2}}R(t,x)\begin{bmatrix}\exp\left\{+\frac{i}{\hbar}\left[S(t,x)+s\right]\right\}\\ \exp\left\{-\frac{i}{\hbar}\left[S(t,x)+s\right]\right\}\end{bmatrix}, \qquad s = \pm\frac{1}{2}$  (41)

which from the beginning satisfies the necessary electronic density condition

$\Psi_0^\dagger\Psi_0 = R^*R = \rho$  (42)

Going on, aiming at separating the Dirac Equation (39) into its real/imaginary spinorial contributions, one firstly calculates the terms

$\frac{\partial\Psi_0}{\partial t} = \frac{1}{\sqrt{2}}\frac{\partial R}{\partial t}\begin{bmatrix}\varphi\\ \phi\end{bmatrix} + \frac{1}{\sqrt{2}}R\frac{i}{\hbar}\frac{\partial S}{\partial t}\begin{bmatrix}\varphi\\ -\phi\end{bmatrix}$  (43a)

$\sum_{k=1}^{3}\hat{\alpha}_k\partial_k\Psi_0 = \frac{1}{\sqrt{2}}\sum_k(\partial_kR)\hat{\sigma}_k\begin{bmatrix}\phi\\ \varphi\end{bmatrix} + \frac{1}{\sqrt{2}}R\frac{i}{\hbar}\sum_k(\partial_kS)\hat{\sigma}_k\begin{bmatrix}-\phi\\ \varphi\end{bmatrix}$  (43b)

$\hat{\beta}mc^2\Psi_0 = \frac{mc^2}{\sqrt{2}}R\begin{bmatrix}\varphi\\ -\phi\end{bmatrix}, \qquad \hat{\beta}w\Psi_0 = \frac{w}{\sqrt{2}}R\begin{bmatrix}\varphi\\ -\phi\end{bmatrix}$  (43c)

which are then combined in (39), producing the actual de Broglie-Bohm-Dirac spinorial equation

$\begin{bmatrix} i\hbar\varphi\,\partial_tR - R\varphi\,\partial_tS\\ i\hbar\phi\,\partial_tR + R\phi\,\partial_tS \end{bmatrix} = \begin{bmatrix} -i\hbar c\,\phi\sum_k(\partial_kR)\hat{\sigma}_k - Rc\,\phi\sum_k(\partial_kS)\hat{\sigma}_k + (mc^2+w)R\varphi\\ -i\hbar c\,\varphi\sum_k(\partial_kR)\hat{\sigma}_k + Rc\,\varphi\sum_k(\partial_kS)\hat{\sigma}_k - (mc^2+w)R\phi \end{bmatrix}$  (44)

When equating the imaginary parts of (44) one yields the system

$\begin{cases} \varphi\,\partial_tR + c\,\phi\sum_k(\partial_kR)\hat{\sigma}_k = 0\\ c\,\varphi\sum_k(\partial_kR)\hat{\sigma}_k + \phi\,\partial_tR = 0 \end{cases}$  (45)

which has non-trivial spinorial solutions only upon canceling the associated determinant, i.e., by forming the equation

$(\partial_tR)^2 = c^2\left[\sum_k(\partial_kR)\hat{\sigma}_k\right]^2$  (46)

of which the minus sign of the square root corresponds to the conservation of the electronic charge, while the positive sign is specific to the relativistic treatment of the positronic motion. To prove this, the specific relationship for the electronic charge conservation (4) may be unfolded, adapted to the present Bohmian spinorial case, by the chain of equivalences

$0 = \frac{\partial\rho}{\partial t} + \vec{\nabla}\cdot\vec{j} = \frac{\partial}{\partial t}(R^2) + \sum_k\partial_k\left(c\,\Psi_0^\dagger\hat{\alpha}_k\Psi_0\right) = 2R\,\partial_tR + \frac{c}{2}\sum_k\hat{\sigma}_k\left(\varphi\phi + \phi\varphi\right)\partial_kR^2 = 2R\,\partial_tR + 2Rc\sum_k\hat{\sigma}_k(\partial_kR)$  (47)

where the unitary phase product φϕ = 1 was used. The result

$\partial_tR = -c\sum_k\hat{\sigma}_k(\partial_kR)$

indeed corresponds to the square root of (46) with the minus sign, thus certifying the validity of the present approach, i.e., its accordance with step (ii) of the bondonic algorithm of the Method section.
Next, let us see what information is conveyed by the real part of the Bohmian-decomposed spinors of the Dirac Equation (44); one obtains the system

$\begin{cases} \varphi\left(\partial_tS + mc^2 + w\right) - c\,\phi\sum_k(\partial_kS)\hat{\sigma}_k = 0\\ c\,\varphi\sum_k(\partial_kS)\hat{\sigma}_k - \left(\partial_tS + mc^2 + w\right)\phi = 0 \end{cases}$  (48)

which, as was previously the case with its imaginary counterpart (45), has non-trivial spinor solutions only if the associated determinant vanishes, which gives the equation

$c^2\left[\sum_k(\partial_kS)\hat{\sigma}_k\right]^2 = \left(\partial_tS + mc^2 + w\right)^2$  (49)

Now, considering the Bohmian momentum-energy correspondences (17), Equation (49) successively becomes

$c^2\left[\sum_kp_k\hat{\sigma}_k\right]^2 = \left(-E + mc^2 + w\right)^2 \;\Leftrightarrow\; c^2\left(\vec{p}\cdot\hat{\vec{\sigma}}\right)^2 = \left(-E + mc^2 + w\right)^2 \;\Leftrightarrow\; c^2p^2 = \left(-E + mc^2 + w\right)^2$  (50)

from where, while retaining the minus sign through the square rooting (as prescribed above by the imaginary spinorial treatment in relation with the charge conservation), one recovers the relativistic electronic energy-momentum conservation relationship

$E = cp + mc^2 + w$  (51)

thus confirming in full the reliability of the Bohmian approach for relativistic spinors. Moreover, the present Bohmian treatment of the relativistic motion is remarkable in that, unlike the non-relativistic case, it does not produce the additional quantum (Bohm) potential (15), the one responsible for entangled phenomena or hidden variables. This may be justified by the fact that within the Dirac treatment of the electron the entanglement phenomenology is somehow already included through the Dirac Sea and the existence of the positron. Another important difference with respect to the Schrödinger picture is that the spinor equations underlying the total charge and energy conservation do not mix the amplitude (2) and the phase (3) of the de Broglie-Bohm wave-function; they now govern, in an independent manner, the flux and the energy of the electronic motion. For these reasons, it seems that the relativistic Bohmian picture offers the natural environment in which the chemical field and the associated bondon particles may be treated without involving additional physics. Let us see, therefore, whether the Dirac-Bohmian framework reveals (or not) new insight into the bondonic (Schrödinger) reality.
This will be done by reconsidering the working Bohmian spinor (41) as transformed by the internal SU(2) gauge symmetry driven by the chemical field ℵ-related phase, in accordance with Equation (5) of step (iv) of the bondonic algorithm,

$\Psi_G(t,x) = \Psi_0(t,x)\exp\left(\frac{i}{\hbar}\frac{e}{c}\aleph(t,x)\right) = \frac{1}{\sqrt{2}}R(t,x)\begin{bmatrix}\varphi_G\\ \phi_G\end{bmatrix} = \frac{1}{\sqrt{2}}R(t,x)\begin{bmatrix}\exp\left\{+\frac{i}{\hbar}\left[S(t,x) + \frac{e}{c}\aleph(t,x) + s\right]\right\}\\ \exp\left\{-\frac{i}{\hbar}\left[S(t,x) + \frac{e}{c}\aleph(t,x) + s\right]\right\}\end{bmatrix}$  (52)

Here, it is immediate that the expression (52) still preserves the electronic density formulation (42), as was previously the case with the gaugeless field (41),

$\Psi_G^\dagger\Psi_G = R^*R = \rho$  (53)

However, when employed in the terms of the Dirac equation, the field (52) modifies the previous expressions (43a)-(43c) as follows:

$\frac{\partial\Psi_G}{\partial t} = \frac{1}{\sqrt{2}}\frac{\partial R}{\partial t}\begin{bmatrix}\varphi_G\\ \phi_G\end{bmatrix} + \frac{1}{\sqrt{2}}R\frac{i}{\hbar}\left(\frac{\partial S}{\partial t} + \frac{e}{c}\frac{\partial\aleph}{\partial t}\right)\begin{bmatrix}\varphi_G\\ -\phi_G\end{bmatrix}$  (54a)

$\partial_k\Psi_G = \frac{1}{\sqrt{2}}\partial_kR\begin{bmatrix}\varphi_G\\ \phi_G\end{bmatrix} + \frac{1}{\sqrt{2}}R\frac{i}{\hbar}\left(\partial_kS + \frac{e}{c}\partial_k\aleph\right)\begin{bmatrix}\varphi_G\\ -\phi_G\end{bmatrix}$  (54b)

$\sum_{k=1}^{3}\hat{\alpha}_k\partial_k\Psi_G = \frac{1}{\sqrt{2}}\sum_k(\partial_kR)\hat{\sigma}_k\begin{bmatrix}\phi_G\\ \varphi_G\end{bmatrix} + \frac{1}{\sqrt{2}}R\frac{i}{\hbar}\sum_k\left(\partial_kS + \frac{e}{c}\partial_k\aleph\right)\hat{\sigma}_k\begin{bmatrix}-\phi_G\\ \varphi_G\end{bmatrix}$  (54c)

producing the gauge spinorial equation

$\begin{bmatrix} i\hbar\varphi_G\,\partial_tR - R\varphi_G\left(\partial_tS + \frac{e}{c}\partial_t\aleph\right)\\ i\hbar\phi_G\,\partial_tR + R\phi_G\left(\partial_tS + \frac{e}{c}\partial_t\aleph\right) \end{bmatrix} = \begin{bmatrix} -i\hbar c\,\phi_G\sum_k(\partial_kR)\hat{\sigma}_k - Rc\,\phi_G\sum_k\left(\partial_kS + \frac{e}{c}\partial_k\aleph\right)\hat{\sigma}_k + (mc^2+w)R\varphi_G\\ -i\hbar c\,\varphi_G\sum_k(\partial_kR)\hat{\sigma}_k + Rc\,\varphi_G\sum_k\left(\partial_kS + \frac{e}{c}\partial_k\aleph\right)\hat{\sigma}_k - (mc^2+w)R\phi_G \end{bmatrix}$  (55)

Now it is clear that, since the imaginary part of (55) is not at all changed with respect to Equation (44) by the presence of the chemical field, the total charge conservation (4) is naturally preserved; instead, the real part is modified, with respect to the case (44), by the presence of the chemical field (through the internal gauge symmetry). Nevertheless, in order that the chemical field rotation produce no modification of the total energy conservation, the gauge spinorial system of the chemical field must read

$\begin{cases} \varphi_G\,\partial_t\aleph - c\,\phi_G\sum_k(\partial_k\aleph)\hat{\sigma}_k = 0\\ c\,\varphi_G\sum_k(\partial_k\aleph)\hat{\sigma}_k - \phi_G\,\partial_t\aleph = 0 \end{cases}$  (56)

According to the already customary procedure, for the system (56) to have non-trivial gauge spinorial solutions, the associated determinant must vanish, which brings to light the chemical field equation

$c^2\left[\sum_k(\partial_k\aleph)\hat{\sigma}_k\right]^2 = \left(\partial_t\aleph\right)^2$  (57a)

equivalently rewritten as

$c^2\left[\vec{\nabla}\aleph\cdot\hat{\vec{\sigma}}\right]^2 = \left(\partial_t\aleph\right)^2$  (57b)

which simply reduces to

$c^2\left(\vec{\nabla}\aleph\right)^2 = \left(\partial_t\aleph\right)^2$  (57c)

upon considering the unitary feature of the Pauli matrices (40b) under squaring. At this point, one has to decide upon the sign of the square root of (57c); this was previously clarified to be minus for the electronic and plus for the positronic motion. Therefore, the electronic chemical bond is modeled by the resulting chemical field equation projected on the bonding length direction,

$\frac{\partial\aleph}{\partial X_{bond}} = -\frac{1}{c}\frac{\partial\aleph}{\partial t}$  (58)

Equation (58) is of the undulatory kind, with the chemical field solution having the general plane-wave form

$\aleph = \frac{\hbar c}{e}\exp\left[i\left(kX_{bond} - \omega t\right)\right]$  (59)

which agrees both with the general definition (6) of the chemical field and with the relativistic "traveling" of the bonding information. In fact, this is the paradox of the Dirac approach to the chemical bond: it aims to deal with the electrons in bonding, while these have to transmit the chemical bonding information, as waves, propagating with the light velocity between the bonding attractors. This is another argument for the necessity of the bondonic reality: a specific mode of existence of the electrons in the chemical bond is compulsory so that such a paradox can be solved.
Note that within the Dirac approach the Bader flux condition (9) is no longer related to the chemical field, being included in the total conservation of the charge; this is again natural since, in the relativistic case, the chemical field explicitly propagates with a percentage of the light velocity (see the Discussion in Section 4 below), so that it cannot drive the (stationary) electronic frontiers of bonding. Further on, when rewriting the chemical field of bonding (59) within the consecrated de Broglie and Planck corpuscular-undulatory quantifications,

$\aleph(t, X_{bond}) = \frac{\hbar c}{e}\exp\left[\frac{i}{\hbar}\left(pX_{bond} - Et\right)\right]$  (60)

it may be further combined with the unitary quantum form (6) in Equation (11) of step (xv) of the bondonic algorithm, to produce the phase condition

$1 = \exp\left[\frac{i}{\hbar}\left(pX_{bond} - Et\right)\right]$  (61)

which implies the quantification

$pX_{bond} - Et = 2\pi n\hbar, \qquad n \in \mathbb{N}$  (62)

By the subsequent employment of the Heisenberg time-energy saturated indeterminacy at the level of the kinetic energy abstracted from the total energy (to focus on the motion of the bondonic plane waves),

$E = \frac{\hbar}{t}, \qquad p = m_{B̶}v = \sqrt{2m_{B̶}T} \cong \sqrt{\frac{2m_{B̶}\hbar}{t}}$  (63)

the bondonic Equation (62) becomes

$X_{bond}\sqrt{\frac{2m_{B̶}\hbar}{t}} = (2\pi n + 1)\hbar$  (64)

which, when solved for the bondonic mass, yields the expression

$m_{B̶} = \frac{\hbar t}{2}\frac{1}{X_{bond}^2}(2\pi n + 1)^2, \qquad n = 0, 1, 2, \ldots$  (65)

which appears to correct the previous non-relativistic expression (38) with the full quantification. In fact, the Schrödinger bondonic mass of Equation (38) is recovered from the Dirac bondonic mass (65) in the ground state, i.e., by setting n = 0. Therefore, the Dirac picture assures the complete characterization of the chemical bond, revealing the bondonic existence through the internal chemical field symmetry, with the quantification of the mass either in the ground or in the excited states (n ≥ 0, n ∈ ℕ). Moreover, as always happens when dealing with the Dirac equation, the positronic bondonic mass may be immediately derived as well, for the case in which the chemical bonding is considered also in the anti-particle world; it emerges by reloading the square root of the Dirac chemical field Equation (57c) with the plus sign, which then propagates through all the subsequent considerations, e.g., with a positronic incoming plane wave replacing the departing electronic one of (59), until delivering the positronic bondonic mass

$m_{B̶}^{+} = \frac{\hbar t}{2}\frac{1}{X_{bond}^2}(2\pi n - 1)^2, \qquad n = 0, 1, 2, \ldots$  (66)

It nevertheless differs from the electronic bondonic mass (65) only in the excited spectrum, while both collapse into the non-relativistic bondonic mass (38) for the ground state of the chemical bond. Remarkably, in both the electronic and the positronic cases the associated bondons in excited states display heavier masses than those specific to the ground state, a behavior confirming once more that the bondons encompass all the bonding information, i.e., have the excitation energy converted into a mass added-value, in full agreement with the customary relativistic Einstein mass-energy equivalence [64].

Discussion

Let us analyze the consequences of the bondon's existence, starting from its mass formulation (38) in the ground state of the chemical bond. At one extreme, when considering atomic parameters in bonding, i.e., when assuming a bonding distance of the size of the Bohr radius, a₀ = 0.52917×10⁻¹⁰ m, the corresponding binding time would be t → t₀ = a₀/v₀ = 2.41889×10⁻¹⁷ s (with v₀ the velocity on the first Bohr orbit), while the involved bondonic mass would be half of the electronic one, m₀/2, so as to assure a fast transfer of the bonding information.
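The atomic-scale estimate just quoted can be reproduced in a few lines; a minimal numerical sketch (the constants and names are ours, not the paper's) evaluating Equation (38) at the Bohr radius, with the first Bohr-orbit velocity v₀ = αc:

```python
# Check of Eq. (38) at atomic parameters: m_bondon = hbar*t/(2*X^2)
# with X = a0 (Bohr radius) and t = t0 = a0/v0, where v0 = alpha*c.
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m0 = 9.1093837015e-31    # kg, electron mass
alpha = 1/137.035999     # fine-structure constant
a0 = 0.52917721e-10      # m, Bohr radius

v0 = alpha*c                      # ~2.1877e6 m/s, first Bohr-orbit velocity
t0 = a0/v0                        # ~2.4189e-17 s, as quoted in the text
m_bondon = hbar*t0/(2*a0**2)      # Eq. (38), i.e., Eq. (65) with n = 0
print(f"t0 = {t0:.5e} s, m_bondon/m0 = {m_bondon/m0:.4f}")  # -> ~0.5000
```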
Of course, this is not a realistic binding situation; to approach one, let us check the hypothetical case in which the electronic mass m₀ is combined, within the bondonic formulation (38), with the bond distance $X_{bond} = \sqrt{\hbar t/(2m_0)}$, resulting in the binding phenomenon being completed in the picosecond time t_bonding ∼ 10⁻¹² s for the customary nanometric bonding distance X_bonding ∼ 10⁻⁹ m. Still, when both the femtosecond and the nanometer time-space scales of bonding are assumed in (38), the bondonic mass comes out in the range of the electronic mass, m_B̶ ∼ 10⁻³¹ kg, although not necessarily with the exact value of the electron mass, nor with the same value for each bonding case considered. Further insight into the temporal existence of the bondons will be given for molecular systems below, after discussing the related specific properties, namely the bondonic velocity and charge. For enlightenment on the latter perspective, let us rewrite the bondonic mass (65) within the spatial-energetic frame of bonding, i.e., by replacing the time with the associated Heisenberg energy, t_bonding ≅ ħ/E_bond, thus delivering another working expression for the bondonic mass:

$m_{B̶} = \frac{\hbar^2}{2}\frac{(2\pi n + 1)^2}{E_{bond}X_{bond}^2}, \qquad n = 0, 1, 2, \ldots$  (67)

which is more practical than the traditional characterization of bonding types in terms of the length and energy of bonding; it may further assume the numerical ground-state ratio form

$\zeta_m = \frac{m_{B̶}}{m_0} = \frac{87.8603}{\left(E_{bond}[\text{kcal/mol}]\right)\left(X_{bond}[\text{Å}]\right)^2}$  (68)

when the available bonding energy and length are considered (as is the custom for chemical information) in kcal/mol and Ångström, respectively. Note that having the bondon's mass expressed in terms of the bond energy implies the inclusion of the electronic pairing effect in the bondonic existence, without the constraint that the bonding pair accumulate in the internuclear region [69]. Moreover, since the general bondonic mass formulation (65) resulted from the relativistic treatment of the electron, the companion velocity of the bondonic mass, reached while propagating the bonding information between the bonding attractors, must also be considered. As such, when the Einstein-type relationship [70]

$\frac{m_{B̶}v_{B̶}^2}{2} = h\upsilon$  (69)

is employed together with the relativistic bondonic velocity-mass relationship [63,64]

$m_{B̶} = \frac{m}{\sqrt{1 - \dfrac{v_{B̶}^2}{c^2}}}$  (70)

and with the frequency of the associated bond wave

$\upsilon = \frac{v_{B̶}}{X_{bond}}$  (71)

it provides the quantified bondon-to-light velocity ratio

$\frac{v_{B̶}}{c} = \frac{1}{\sqrt{1 + \dfrac{1}{64\pi^2}\dfrac{\hbar^2c^2}{E_{bond}^2X_{bond}^2}(2\pi n + 1)^4}}, \qquad n = 0, 1, 2, \ldots$  (72)

or, numerically, in the bonding ground state,

$\zeta_v = \frac{v_{B̶}}{c} = \frac{100}{\sqrt{1 + \dfrac{3.27817\times10^6}{\left(E_{bond}[\text{kcal/mol}]\right)^2\left(X_{bond}[\text{Å}]\right)^2}}}\;[\%]$  (73)

Next, dealing with a new matter particle, one will also be interested in its charge, with respect to the benchmark charge of the electron.
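For concreteness, the ground-state ratios (68) and (73) can be evaluated directly from tabulated bond data; a minimal sketch (ours; the bond values are those of Table 1 below, the function names are illustrative):

```python
# Ground-state bondonic mass and velocity ratios, Eqs. (68) and (73),
# from the bond energy (kcal/mol) and bond length (Angstrom).
def zeta_m(E_bond, X_bond):
    return 87.8603 / (E_bond * X_bond**2)                          # Eq. (68)

def zeta_v(E_bond, X_bond):
    return 100.0 / (1 + 3.27817e6/(E_bond**2 * X_bond**2))**0.5    # Eq. (73), %

for bond, X, E in [("H-H", 0.60, 104.2), ("C-C", 1.54, 81.2),
                   ("N#N", 1.10, 225.0)]:
    print(f"{bond}: zeta_m = {zeta_m(E, X):.5f}, zeta_v = {zeta_v(E, X):.3f} %")
# -> H-H: 2.34219, 3.451 %;  C-C: 0.45624, 6.890 %;  N#N: 0.32272, 13.544 %
#    matching the corresponding rows of Table 1.
```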
To this end, one re-employs step (xv) of the bondonic algorithm, Equation (11), in the form emphasizing the bondonic charge,

$\aleph_{bondon}(e_{B̶}) = \aleph_0$  (74)

Next, when considering, for the left-hand side of (74), the form provided by Equation (35), and, for the right-hand side, the fundamental hyperfine value of Equation (6), one gets the working equality

$c\,\frac{m_{B̶}v_{B̶}}{e_{B̶}}X_{bond} = 137.036\;\left[\frac{\text{Joule}\times\text{meter}}{\text{Coulomb}}\right]$  (75)

from where the bondonic charge appears immediately, once the associated expressions for the mass and the velocity are taken from Equations (67) and (72), respectively, yielding the quantified form

$e_{B̶} = \frac{4\pi\hbar c}{137.036}\frac{1}{\sqrt{1 + \dfrac{64\pi^2E_{bond}^2X_{bond}^2}{\hbar^2c^2(2\pi n + 1)^4}}}, \qquad n = 0, 1, 2, \ldots$  (76)

However, even for the ground state, and the more so for excited states, one may see that, when forming the practical ratio with respect to the unitary electronic charge, the expression (76) actually approaches a referential value, namely

$\zeta_e = \frac{e_{B̶}}{e} = \frac{4\pi}{\sqrt{1 + \dfrac{\left(E_{bond}[\text{kcal/mol}]\right)^2\left(X_{bond}[\text{Å}]\right)^2}{3.27817\times10^6\,(2\pi n + 1)^4}}} \to 4\pi$  (77)

for, in principle, any common energy and length of chemical bonding. On the other side, for the bondons to have different masses and velocities (kinetic energies) associated with specific bonding energies, but an invariant (universal) charge, seems a bit paradoxical. Moreover, it appears that with Equation (77) the predicted charge of a bonding, even in small molecules such as H₂, considerably surpasses the available charge of the system, although this might eventually be explained by the continuous matter-antimatter balance of the Dirac Sea, to which the present approach belongs. However, to circumvent such problems, one may further use the result (77) and map it onto the Poisson-type charge-field equation,

$e_{B̶} \sim 4\pi\times e \;\leftrightarrow\; \nabla^2V \sim 4\pi\times\rho$  (78)

from where the bondonic charge may be reshaped, by appropriate dimensional scaling in terms of the bonding parameters (E_bond and X_bond), successively as

$e_{B̶} \sim \frac{1}{4\pi}\left[X^2\nabla^2V\right]_{X = X_{bond}} \sim \frac{1}{4\pi}E_{bond}X_{bond} \to 0$  (79)

Now, Equation (79) may be employed towards the working ratio between the bondonic and the electronic charges in the ground state of bonding,

$\zeta_e = \frac{e_{B̶}}{e} \approx \frac{1}{32\pi}\frac{\left(E_{bond}[\text{kcal/mol}]\right)\left(X_{bond}[\text{Å}]\right)}{\sqrt{3.27817\times10^6}}$  (80)

With Equation (80) the situation is reversed with respect to the previous paradoxical one, in the sense that now, for most chemical bonds (those of Table 1, for instance), the resulting bondonic charge is small enough not to have been observed yet, or to be considered as belonging to the bonding wave spreading among the binding electrons. Instead, aiming to explore the specific bonding information reflected by the bondonic mass and velocity, the associated ratios of Equations (68) and (73) for some typical chemical bonds [71,72] are computed in Table 1. They may eventually be accompanied by the predicted life-times of the corresponding bondons, obtained from the working expressions (68) and (73) for the bondonic mass and velocity through the basic time-energy Heisenberg relationship, here restrained to the level of the kinetic energy of the bondonic particle; this way one successively yields the analytical forms

$t_{B̶} = \frac{\hbar}{T_{B̶}} = \frac{2\hbar}{m_{B̶}v_{B̶}^2} = \frac{2\hbar}{(m_0\zeta_m)\left(c\,\zeta_v\,10^{-2}\right)^2} = \frac{0.0257618}{\zeta_m\zeta_v^2}\times10^{-15}\;[\text{s}]$  (81)

and the specific values for the various bonding types are displayed in Table 1.
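Analogously, the charge ratio (80) and the life-time (81) admit one-line implementations; a minimal sketch (ours), in which, following the convention that reproduces the tabulated entries, ζ_v enters the life-time as the dimensionless fraction v_B̶/c:

```python
from math import pi

def zeta_e(E_bond, X_bond):
    # Eq. (80): bondonic-to-electronic charge ratio, ground state
    return (E_bond * X_bond) / (32*pi * 3.27817e6**0.5)

def lifetime(E_bond, X_bond):
    # Eq. (81) with zeta_v taken as the fraction v/c; the result is in
    # units of 10^-15 s, the convention matching Tables 1 and 2.
    zm = 87.8603 / (E_bond * X_bond**2)
    zv = 1.0 / (1 + 3.27817e6/(E_bond**2 * X_bond**2))**0.5
    return 0.0257618 / (zm * zv**2)

print(f"{1e3*zeta_e(104.2, 0.60):.4f}")  # H-H: 0.3435 (x10^-3), as in Table 1
print(f"{lifetime(104.2, 0.60):.2f}")    # H-H: ~9.24 (cf. 9.236 in Table 1)
```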
Note that defining the bondonic life-time by Equation (81) is the most adequate choice, since it involves the basic bondonic (particle!) information, mass and velocity; when, instead, the bondonic life-time is evaluated directly from the bonding energy alone, one deals with the working formula

$t_{bond} = \frac{\hbar}{E_{bond}} = \frac{1.51787}{E_{bond}[\text{kcal/mol}]}\times10^{-14}\;[\text{s}]$  (82)

which usually produces values at least one order of magnitude lower than those reported in Table 1 upon employing the more complex Equation (81). This is nevertheless reasonable, because in the latter case no particle information is considered, so that Equation (82) gives the time of the associated wave representation of bonding; this departs from the case when the time is computed by Equation (81), where the bonding information is contained within the particle (bondonic) mass and velocity, thus predicting longer life-times and, consequently, a timescale more amenable to bondonic observation. Therefore, as long as the chemical bonding is modeled by the associated bondonic particle, the specific time of Equation (81), rather than that of Equation (82), should be considered. While analyzing the values in Table 1, it is generally observed that the larger the bondonic mass ratio, the lower the velocity and electric charge ratios, with respect to the light velocity and the benchmark electronic charge, respectively, yet with some irregularities that allow further discrimination among the sub-bonding types. The life-time tendency records further irregularities, due to its complex and inverse bondonic mass-velocity dependency in Equation (81), and will be given a special role in bondonic observation, see the discussion of Table 2 below. Nevertheless, in all cases the bondonic velocity is a considerable (non-negligible) percentage of the photonic velocity, therefore confirming its combined quantum-relativistic nature. This explains why the bondonic reality appears even in the non-relativistic case of the Schrödinger equation, when augmented with the Bohmian entangled motion through the hidden quantum interaction. Turning now to the particular cases of chemical bonding in Table 1, the hydrogen molecule maintains its special behavior by providing a bondonic mass slightly greater than double the electron mass, i.e., exceeding the summed mass of the only two electrons contained in the whole system. This is not a paradox, but a confirmation of the fact that the bondonic reality is not just the sum or partition of the available valence atomic electrons in molecular bonds, but a distinct (although related) existence that fully involves the undulatory nature of the electronic and nuclear motions in producing the chemical field. Remember that the chemical field was associated, in the Schrödinger as well as in the Dirac picture, with the internal rotations of the (Bohmian) wave-function or spinors, being thus merely a phase property, hence inherently of undulatory nature. It is therefore natural that the bondons risen in bonding preserve the wave nature of the chemical field, traveling the bond-length distance with a significant percentage of the light velocity. Moreover, the bondonic mass value may determine the kind of chemical bond created; in this respect, H₂ constitutes the most covalent bonding considered in Table 1, since it is the most closely situated to the electronic pairing at the mass level. The excess of the H₂ bondonic mass with respect to the two electrons of the isolated H atoms comes from the nuclear motion energy being (relativistically) converted and added to the two-sided electronic masses, while the resulting heavier mass of the bondon is responsible for the stabilization of the formed molecule with respect to the separated atoms.
The H₂ bondon also seems to be among the least circulated ones (along with the bondon of the F₂ molecule) as regards the traveled bonding information, due to its low velocity and charge records, thereby offering another criterion of covalency, i.e., one associated with a better localization of the bonding space. The same happens with the C-C bonding, which is predicted to be more covalent for its simple (single) bondon, which moves with the smallest velocity (ζ_v ≪), or fraction of the light velocity, among all the C-C types of bonding; in this case also the criteria of the highest bondonic mass (ζ_m ≫), the smallest charge (ζ_e ≪), and the highest (observable) life-time (t_B̶ ≫) seem to work well. Other bonds with high covalent character, according to the bondonic velocity criterion only, are present in the N≡N and C=O bonding types, and less so in the O=O and C-O ones. Instead, one may establish the criteria for multiple (double and triple) bonds as having the series of bondonic properties {ζ_m <, ζ_v >, ζ_e >, t_B̶ <}. However, the diamond C-C bondon, although with the smallest recorded mass (ζ_m ≪), is characterized by the highest velocity (ζ_v >) and charge (ζ_e >) in the C-C series (and also among all the cases of Table 1). This is an indication that the bond is very much delocalized, thus recognizing the solid-state or metallic crystallized structure of this kind of bond, in which the electronic pairings (the bondons) are distributed over all the atomic centers of the unit cell. It is, therefore, a special case of bonding that widely informs us on the existence of conduction bands in a solid; the metallic character is therefore generally associated with the bondonic series of properties {ζ_m ≪, ζ_v >, ζ_e >, t_B̶ <}, thus having trends similar to the corresponding properties of the multiple bonds, with the only particularity of the lower mass behavior, due to the higher delocalization of the associated bondons. Very interestingly, the series of C-H, N-H, and O-H bonds behave similarly among themselves, since they display shrunk, medium-range variations of the mass (moderately high), velocity, charge, and life-time (moderately high) of their bondons, {ζ_m ∼>, ζ_v ∼, ζ_e ∼, t_B̶ ∼>}; this may explain why these bonds are the most preferred ones in DNA and in the genomic construction of proteins, being, however, situated towards the ionic character of the chemical bond by their lower computed bondonic velocities; they also have bondonic masses closest to unity, this feature being due to the manifested polarizability and inter-molecular effects that allow the 3D proteomic and specific interactions to take place. Instead, along the series of halogen molecules F₂, Cl₂, and I₂, only the observable life-times of the bondons show high and somewhat similar values, while from the point of view of the velocity and charge realms only the last two bonding types display compatible properties, both with drastic differences in their bondonic masses with respect to the F-F bond, probably due to the most electronegative character of the fluorine atoms. Nevertheless, judging upon their life-times, higher with respect to the other types of bonding, the classification may be decided in favor of covalent behavior. At this point, one notes traces of covalent bonding nature also in the rest of the halogen-carbon bindings (C-Cl, C-Br, and C-I in Table 1) from the bondonic life-time perspective, while the ionic manifestation is displayed through the velocity and charge criteria {ζ_v ∼, ζ_e ∼}, and even a bit of metallic character through the small bondonic mass (ζ_m <).
All these mixed features may be attributed to the joint presence of inner electronic shells, participating in bonding by electronic induction, and of the electronegativity difference potential. Remarkably, the present results are in accordance with the recently signaled new binding class between electronic pairs, somewhat different from the traditional ionic and covalent ones in the sense that it appears as a kind of resonance, as encountered in molecular systems like F₂, O₂, N₂ (with impact in environmental chemistry), in polar compounds like C-F (specific to ecotoxicology), or in reactions that imply a competition between hydrogen and halogen exchange (e.g., in HF). The valence explanation relied on the possibility of higher orders of orbitals existing when additional shells of atomic orbitals are involved, such as <f> orbitals, thus reaching the charge-shift bonding concept [73]; the present bondonic treatment of the chemical bond overcomes the charge-shift paradoxes through the relativistic nature of the bondonic particles of bonding, which have, as their inherent nature, the time-space or energy-space spanning towards electronic pairing stabilization between the centers of bonding or the atomic adducts in molecules. However, we can also make predictions regarding the values of the bonding energy and length required for a bondon to acquire either the unity of the electronic charge or the electronic mass (with the consequent fraction of its velocity out of the light velocity) in the ground state, by setting Equations (80) and (68) to unity, respectively. These predictions are summarized in Table 2 (a short script regenerating its rows is sketched after this paragraph). From Table 2 one notes that the situation in which the bondon carries the same charge as the electron is quite improbable, at least for common chemical bonds, since in such a case it would feature almost the light velocity (and almost no mass, which, moreover, continuously decreases as the bonding energy decreases and the bonding length increases). This is natural, since a longer distance has to be spanned by a lower binding energy, yet carrying the same unit charge of the electron, transmitted with the same relativistic velocity! Such behavior may be regarded as the present version of the zitterbewegung ("trembling in motion") phenomenon, here at the bondonic level. However, one records a systematic increase of the bondonic life-time towards observability in the femtosecond regime, for increasing bond length and decreasing bonding energy, under the proviso that the chemical bonding itself still exists for the given {X_bond, E_bond} combination. On the other side, the situation in which the bondon weighs as much as one electron is a current one (see Table 1); it is nevertheless accompanied by quite reasonable chemical bonding length and energy information, carried at a low fraction of the light velocity, yet also with a very low charge. Nevertheless, the bonding energy-length relationship discovered in Table 2 on the basis of Equation (80), namely

$E_{bond}[\text{kcal/mol}]\times X_{bond}[\text{Å}] = 182019$  (83)

should be used in setting the appropriate experimental conditions under which the bondonic particle may be observed as carrying the unit electronic charge, yet with almost zero mass. In this way, the bondon is affirmed as a special particle of Nature: behaving like an electron in charge, like a photon in velocity, and like a neutrino in mass, while having an observable (at least femtosecond) life-time for nanosystems featuring chemical bonding in the range of hundreds of Ångströms and thousands of kcal/mol!
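As announced above, the entries of Table 2 follow directly from the constraint (83) together with Equations (68), (73) and (81); a minimal sketch (ours, illustrative) regenerating the unit-charge rows:

```python
# Regenerate the unit-charge (zeta_e = 1) rows of Table 2: for a chosen
# X_bond, Eq. (83) fixes E_bond = 182019/X_bond, after which the mass,
# velocity and life-time ratios follow from Eqs. (68), (73) and (81).
def table2_row(X_bond):
    E_bond = 182019.0 / X_bond                              # Eq. (83)
    zm = 87.8603 / (E_bond * X_bond**2)                     # Eq. (68)
    zv = 100.0 / (1 + 3.27817e6/(E_bond*X_bond)**2)**0.5    # Eq. (73), %
    t = 0.0257618 / (zm * (zv/100)**2)                      # Eq. (81), 10^-15 s
    return E_bond, t, zv, zm

for X in (1.0, 10.0, 100.0):
    E, t, zv, zm = table2_row(X)
    print(f"X = {X:5.0f} A: E = {E:8.1f} kcal/mol, t = {t:8.2f}e-15 s, "
          f"zeta_v = {zv:.4f} %, zeta_m = {zm:.3e}")
# X = 1 A -> E = 182019, t ~ 53.4, zeta_v ~ 99.995 %, zeta_m ~ 4.827e-4,
# in agreement with the corresponding row of Table 2.
```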
Such a peculiar nature of the bondon, as the quantum particle of chemical bonding, the central theme of Chemistry, is not so surprising when noting that Chemistry seems to need both a particle view (such as relativity offers) and a wave view (such as quantum mechanics offers), although nowadays these two physical theories are not yet fully compatible with each other, nor even each fully internally coherent. Maybe the concept of "bondons" will help to improve the situation for all concerned through its further conceptual applications. Finally, just to give a conceptual glimpse of how the present bondonic approach may be employed, the scattering phenomena are considered within their Raman realization, viewed as a sort of generalized Compton scattering process, i.e., extracting structural information from various systems (atoms, molecules, crystals, etc.) by modeling the inelastic interaction between an incident IR photon and a quantum system (here, the bondons of the chemical bonds in molecules), leaving a scattered wave with a different frequency and the resulting system in its final state [74]. Quantitatively, one firstly considers the interaction Hamiltonian as being composed of two parts,

$H^{(1)} = \frac{e_{B̶}}{m_{B̶}}\sum_j\left[\vec{p}_{B̶j}\cdot\vec{A}(\vec{r}_j, t)\right]$  (84)

$H^{(2)} = \frac{e_{B̶}^2}{2m_{B̶}}\sum_j\vec{A}^2(\vec{r}_j, t)$  (85)

accounting, respectively, for the linear and quadratic dependence on the light field vector potential A⃗(r⃗_j, t) acting on the bondons "j", which carry the kinetic momentum p⃗_B̶j = m_B̶v⃗_B̶, the charge e_B̶ and the mass m_B̶. Note that, for the quantified incident (q⃗₀, υ₀) and scattered (q⃗, υ) light beams, the interactions driven by H^(1) and H^(2) model the changes by one and by two units, respectively, in the occupation numbers of the photonic trains. In this context, the transition probability between the initial |i_B̶⟩ and final |f_B̶⟩ bondonic states is written by squaring the sum of all the scattering quantum probabilities that include the absorption (A, with n_A photons) and the emission (E, with n_E photons) of the scattered light on bondons, see Figure 1. Analytically, one has the initial-to-final total transition probability [75] given as

$d^2\Pi_{fi} \sim \frac{1}{\hbar}\left|\langle f_{B̶}; n_A{-}1, n_E{+}1|H^{(2)}|n_A, n_E; i_{B̶}\rangle + \sum_{v_{B̶}}\frac{\langle f_{B̶}; n_A{-}1, n_E{+}1|H^{(1)}|n_A{-}1, n_E; v_{B̶}\rangle\langle v_{B̶}; n_A{-}1, n_E|H^{(1)}|n_A, n_E; i_{B̶}\rangle}{E_{|i_{B̶}\rangle} - E_{|v_{B̶}\rangle} + h\upsilon_0} + \sum_{v_{B̶}}\frac{\langle f_{B̶}; n_A{-}1, n_E{+}1|H^{(1)}|n_A, n_E{+}1; v_{B̶}\rangle\langle v_{B̶}; n_A, n_E{+}1|H^{(1)}|n_A, n_E; i_{B̶}\rangle}{E_{|i_{B̶}\rangle} - E_{|v_{B̶}\rangle} - h\upsilon}\right|^2\delta\left(E_{|i_{B̶}\rangle} + h\upsilon_0 - E_{|f_{B̶}\rangle} - h\upsilon\right)\upsilon^2\,d\upsilon\,d\Omega$  (86)

At this point, the conceptual challenge is to infer the existence of the Raman process itself from the bondonic description of the chemical bond, which turns the incoming IR photon into the (induced, stimulated, or spontaneous) structural frequencies

$\upsilon_{vi} = \frac{E_{|i_{B̶}\rangle} - E_{|v_{B̶}\rangle}}{h}$  (87)

As such, the problem may be reshaped into expressing the virtual state energy E_|v_B̶⟩ in terms of the bonding energy associated with the initial state,

$E_{|i_{B̶}\rangle} = E_{bond}$  (88)

which can eventually be measured or computationally predicted by other means. However, this further implies the necessity of expressing the incident IR photon with the aid of the bondonic quantification; to this end, the Einstein relation (69) is appropriately reloaded in the form

$h\upsilon_{vi} = \frac{m_{B̶}v_{B̶}^2}{2} = \frac{1}{4}v_{B̶}^2\frac{\hbar^2}{E_{bond}X_{bond}^2}(2\pi n_v + 1)^2$  (89)

where the bondonic mass (67) was firstly implemented.
Next, for representing the turn of the incoming IR photon into the structural wave-frequency related with the bonding energy of the initial state, see Equation (88), the wave-bond time (82) is considered, further transforming Equation (89) into

$h\upsilon_{vi} = \frac{1}{4}v_{B̶}^2\frac{E_{bond}^2t_{bond}^2}{E_{bond}X_{bond}^2}(2\pi n_v + 1)^2 = \frac{1}{4}E_{bond}\frac{v_{B̶}^2}{v_{bond}^2}(2\pi n_v + 1)^2$  (90)

where also the corresponding wave-bond velocity was introduced:

$v_{bond} = \frac{X_{bond}}{t_{bond}} = \frac{1}{\hbar}E_{bond}X_{bond}$  (91)

It is worth noting that, as was previously the case with the dichotomy between the bonding and bondonic times, Equations (81) vs. (82), the bonding velocity of Equation (91) clearly differs from the bondonic velocity of Equation (72), since the actual working expression

$\frac{v_{bond}}{c} = \left(E_{bond}[\text{kcal/mol}]\right)\left(X_{bond}[\text{Å}]\right)\times2.19758\times10^{-3}\;[\%]$  (92)

provides considerably lower values than those listed in Table 1, again due to the missing particle-mass information, unlike the case of the bondonic velocity. Returning to the bondonic description of the Raman scattering, one replaces the virtual photonic frequency of Equation (90), together with Equation (88), back into the Bohr-type Equation (87), to yield the sought quantified form of the virtual bondonic energies entering Equation (86) and Figure 1, analytically

$E_{|v_{B̶}\rangle} = E_{bond}\left[1 - \frac{1}{4}\frac{v_{B̶}^2}{v_{bond}^2}(2\pi n_v + 1)^2\right] = E_{bond}\left[1 - \frac{16\pi^2(2\pi n_v + 1)^2}{\dfrac{64\pi^2E_{bond}^2X_{bond}^2}{\hbar^2c^2} + (2\pi n_v + 1)^4}\right]$  (93)

or numerically

$E_{|v_{B̶}\rangle} = E_{bond}\left[1 - \frac{16\pi^2(2\pi n_v + 1)^2}{0.305048\times10^{-6}\left(E_{bond}[\text{kcal/mol}]\right)^2\left(X_{bond}[\text{Å}]\right)^2 + (2\pi n_v + 1)^4}\right], \qquad n_v = 0, 1, 2, \ldots$  (94)

Remarkably, the bondonic quantification (94) of the virtual states of Raman scattering varies from negative to positive energies as one moves from the ground state to more and more excited states of the initial bonding state approached by the incident IR photon towards the virtual ones, as may easily be verified upon the particular bonding data of Table 1. In this way, more space is given for future considerations of inverse or stimulated Raman processes, therefore proving the direct involvement of the bondonic reality in the combined scattering of light on chemical structures. Overall, the bondonic characterization of the chemical bond is fully justified by quantum and relativistic considerations, and is to be advanced as a useful tool in characterizing chemical reactivity and times of reactions, i.e., when tunneling or entangled effects may be rationalized in an analytical manner. Note that further corrections of this bondonic model may be achieved when the present point-like approximation of the nuclear system is abandoned and replaced by a bare-nuclear assumption in which an additional dependence on the bonding distance is involved. This is left for future communications.

Conclusions

The chemical bond, perhaps the greatest challenge of theoretical chemistry, has generated many inspiring theses over the years, although none definitive. A few of the most preeminent regard the orbital-based explanation of electronic pairing in the valence shells of atoms and molecules, rooted in the hybridization concept [8] and then extended to the valence-shell electron-pair repulsion (VSEPR) model [76].
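To see the negative-to-positive trend of the virtual levels concretely, Equation (94) can be scanned over n_v; a minimal sketch (ours), using the H-H data of Table 1:

```python
from math import pi

def E_virtual(E_bond, X_bond, n_v):
    # Eq. (94): bondonic quantification of the Raman virtual-state energy
    q = (2*pi*n_v + 1)**2
    denom = 0.305048e-6 * (E_bond*X_bond)**2 + q**2
    return E_bond * (1 - 16*pi**2 * q / denom)

E, X = 104.2, 0.60   # H-H row of Table 1 (kcal/mol, Angstrom)
for n in range(4):
    print(f"n_v = {n}: E_v = {E_virtual(E, X, n):10.2f} kcal/mol")
# n_v = 0 yields a large negative value; increasing n_v climbs back
# toward +E_bond, illustrating the sign change noted in the text.
```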
Alternatively, when the electronic density is considered, the atoms-in-molecule paradigms were formulated through the geometrical partition of forces by Berlin [69], in terms of core, bonding, and lone-pair lodges by Daudel [77], by the zero local flux in the gradient field ∇ρ of the density by Bader [26], and, most recently, through the employment of the chemical action functional in bonding [78,79]. Yet all these approaches do not depart significantly from the undulatory nature of electronic motion in bonding, either by direct consideration of the wave-function or through its probability information manifested in the electronic density (the latter still being considered a condensed, observable version of the undulatory manifestation of the electron). In other words, while the passage from the Lewis point-like ansatz to the undulatory modeling of the electrons in bonding has been accomplished, the reverse passage was still missing in an analytical formulation. Only recently was a first attempt formulated, based on the broken-symmetry approach applied to the Schrödinger Lagrangian with the electronegativity-chemical hardness parabolic energy dependence, showing that a systematic quest for the creation of particles from the chemical bonding fields is possible [80]. Following this line, the present work makes a step forward and considers the gauge transformations of the electronic wave-function and spinor over the de Broglie-Bohm augmented non-relativistic and relativistic quantum pictures of the Schrödinger and Dirac electronic (chemical) fields, respectively. As a consequence, the reality of the chemical field in bonding was proved in both frameworks, while the corresponding bondonic particle was furnished with its associated mass and velocity in fully quantified forms, see Equations (67) and (72). In fact, the Dirac bondon (65) was found to be a natural generalization of the Schrödinger one (38), supplemented by its anti-bondon particle (66) for the positronic existence within the Dirac Sea. The bondon is the quantum particle corresponding to the superimposed electronic pairing effects or distributions in the chemical bond; accordingly, through the values of its mass and velocity it may indicate the type of bonding (in particular) and characterize the electronic behavior in bonding (in general). However, one of the most important consequences of the bondonic existence is that the chemical bonding may be described in a more complex manner than by relying only on the electrons, namely by eventually employing the fermionic (electronic)-bosonic (bondonic) mixture: the first preeminent application, currently in progress, explores the effect that Bose-Einstein condensation has on chemical bonding modeling [81,82]. Such a possibility arises because the question of whether the Pauli principle is an independent axiom of quantum mechanics, or whether it depends on other quantum descriptions of matter, is still open [83], as is the actual case of involving hidden variables and the entanglement or non-localization phenomenology, which may eventually be mapped onto the delocalization and fractional charge provided by quantum chemistry over and on the atomic centers of a molecular complex or chemical bond, respectively.
As an illustration of the bondonic concept and of its properties (mass, velocity, charge, and life-time), the fundamental Raman scattering process was described by analytically deriving the involved virtual energy states of the scattering sample (the chemical bond) in terms of the above bondonic properties, thereby proving the necessary existence of the bondons and, consequently, of the associated Raman effect itself, while leaving space for further applied analyses based on the spectroscopic data at hand. On the other side, the mass, velocity, charge, and life-time properties of the bondons were employed for analyzing some typical chemical bonds (see Table 1), revealing a sort of fuzzy classification of the chemical bonding types in terms of the bondonic-to-electronic mass and charge ratios ζ_m and ζ_e, the bondonic-to-light velocity percentage ratio ζ_v, and the observable bondonic life-time t_B̶, respectively, here summarized in Table 3. These rules are expected to be further refined through considering the new paradigms of special relativity in computing the bondons' velocities, especially within modern algebraic chemistry [84]. Yet, since the bondonic masses of the chemical bonding ground states seem untouched by the Dirac relativistic considerations with respect to the Schrödinger picture, it is expected that their analytical values may make a difference among the various types of compounds, while their experimental detection is hoped to be one day completed.

Acknowledgements

The author kindly thanks Hagen Kleinert and Axel Pelster for their hospitality at the Free University of Berlin on many occasions, and during the summer of 2010 in particular, when important discussions on fundamental quantum ideas were undertaken in completing this work, as well as for their continuous friendship through the last decade. Both anonymous referees are kindly thanked for stimulating the revised version of the present work, especially regarding the inclusion of the quantum-relativistic charge (zitterbewegung) discussion and the description of Raman scattering by bondonic particles, respectively. This work was supported by CNCSIS-UEFISCSU, project number PN II-RU TE16/2010.

References

1. Thomson, J.J. On the structure of the molecule and chemical combination. Philos. Mag. 1921, 41, 510-538.
2. Hückel, E. Quantentheoretische Beiträge zum Benzolproblem. Z. Physik 1931, 70, 204-286.
3. Doering, W.V.; Detert, F. Cycloheptatrienylium oxide. J. Am. Chem. Soc. 1951, 73, 876-877.
4. Lewis, G.N. The atom and the molecule. J. Am. Chem. Soc. 1916, 38, 762-785.
5. Langmuir, I. The arrangement of electrons in atoms and molecules. J. Am. Chem. Soc. 1919, 41, 868-934.
6. Pauling, L. Quantum mechanics and the chemical bond. Phys. Rev. 1931, 37, 1185-1186.
7. Pauling, L. The nature of the chemical bond. I. Application of results obtained from the quantum mechanics and from a theory of paramagnetic susceptibility to the structure of molecules. J. Am. Chem. Soc. 1931, 53, 1367-1400.
8. Pauling, L. The nature of the chemical bond. II. The one-electron bond and the three-electron bond. J. Am. Chem. Soc. 1931, 53, 3225-3237.
9. Heitler, W.; London, F. Wechselwirkung neutraler Atome und homöopolare Bindung nach der Quantenmechanik. Z. Phys. 1927, 44, 455-472.
10. Slater, J.C. The self consistent field and the structure of atoms. Phys. Rev. 1928, 32, 339-348.
11. Slater, J.C. The theory of complex spectra. Phys. Rev. 1929, 34, 1293-1322.
12. Hartree, D.R. The Calculation of Atomic Structures; Wiley & Sons: New York, NY, USA, 1957.
13. Löwdin, P.O. Quantum theory of many-particle systems. I. Physical interpretations by means of density matrices, natural spin-orbitals, and convergence problems in the method of configurational interaction. Phys. Rev. 1955, 97, 1474-1489.
14. Löwdin, P.O. Quantum theory of many-particle systems. II. Study of the ordinary Hartree-Fock approximation. Phys. Rev. 1955, 97, 1490-1508.
15. Löwdin, P.O. Quantum theory of many-particle systems. III. Extension of the Hartree-Fock scheme to include degenerate systems and correlation effects. Phys. Rev. 1955, 97, 1509-1520.
16. Roothaan, C.C.J. New developments in molecular orbital theory. Rev. Mod. Phys. 1951, 23, 69-89.
17. Pariser, R.; Parr, R. A semi-empirical theory of the electronic spectra and electronic structure of complex unsaturated molecules. I. J. Chem. Phys. 1953, 21, 466-471.
18. Pariser, R.; Parr, R. A semi-empirical theory of the electronic spectra and electronic structure of complex unsaturated molecules. II. J. Chem. Phys. 1953, 21, 767-776.
19. Pople, J.A. Electron interaction in unsaturated hydrocarbons. Trans. Faraday Soc. 1953, 49, 1375-1385.
20. Hohenberg, P.; Kohn, W. Inhomogeneous electron gas. Phys. Rev. 1964, 136, B864-B871.
21. Kohn, W.; Sham, L.J. Self-consistent equations including exchange and correlation effects. Phys. Rev. 1965, 140, A1133-A1138.
22. Pople, J.A.; Binkley, J.S.; Seeger, R. Theoretical models incorporating electron correlation. Int. J. Quantum Chem. 1976, 10, 1-19.
23. Head-Gordon, M.; Pople, J.A.; Frisch, M.J. Quadratically convergent simultaneous optimization of wavefunction and geometry. Int. J. Quantum Chem. 1989, 36, 291-303.
24. Putz, M.V. Density functionals of chemical bonding. Int. J. Mol. Sci. 2008, 9, 1050-1095.
25. Putz, M.V. Path integrals for electronic densities, reactivity indices, and localization functions in quantum systems. Int. J. Mol. Sci. 2009, 10, 4816-4940.
26. Bader, R.F.W. Atoms in Molecules: A Quantum Theory; Oxford University Press: Oxford, UK, 1990.
27. Bader, R.F.W. A bond path: A universal indicator of bonded interactions. J. Phys. Chem. A 1998, 102, 7314-7323.
28. Bader, R.F.W. Principle of stationary action and the definition of a proper open system. Phys. Rev. B 1994, 49, 13348-13356.
29. Mezey, P.G. Shape in Chemistry: An Introduction to Molecular Shape and Topology; VCH Publishers: New York, NY, USA, 1993.
30. Maggiora, G.M.; Mezey, P.G. A fuzzy-set approach to functional-group comparisons based on an asymmetric similarity measure. Int. J. Quantum Chem. 1999, 74, 503-514.
31. Szekeres, Z.; Exner, T.; Mezey, P.G. Fuzzy fragment selection strategies, basis set dependence and HF-DFT comparisons in the applications of the ADMA method of macromolecular quantum chemistry. Int. J. Quantum Chem. 2005, 104, 847-860.
32. Parr, R.G.; Yang, W. Density Functional Theory of Atoms and Molecules; Oxford University Press: Oxford, UK, 1989.
33. Putz, M.V. Contributions within Density Functional Theory with Applications in Chemical Reactivity Theory and Electronegativity. Ph.D. dissertation, West University of Timişoara, Romania, 2003.
34. Sanderson, R.T. Principles of electronegativity. Part I. General nature. J. Chem. Educ. 1988, 65, 112-119.
35. Mortier, W.J.; van Genechten, K.; Gasteiger, J. Electronegativity equalization: Application and parametrization. J. Am. Chem. Soc. 1985, 107, 829-835.
36. Parr, R.G.; Donnelly, R.A.; Levy, M.; Palke, W.E. Electronegativity: The density functional viewpoint. J. Chem. Phys. 1978, 68, 3801-3808.
37. Sen, K.D.; Jørgenson, C.D. Structure and Bonding; Springer: Berlin, Germany, 1987; Volume 66.
38. Pearson, R.G. Hard and Soft Acids and Bases; Dowden, Hutchinson & Ross: Stroudsberg, PA, USA, 1973.
39. Pearson, R.G. Hard and soft acids and bases: The evolution of a chemical concept. Coord. Chem. Rev. 1990, 100, 403-425.
40. Putz, M.V.; Russo, N.; Sicilia, E. On the applicability of the HSAB principle through the use of improved computational schemes for chemical hardness evaluation. J. Comput. Chem. 2004, 25, 994-1003.
41. Chattaraj, P.K.; Lee, H.; Parr, R.G. Principle of maximum hardness. J. Am. Chem. Soc. 1991, 113, 1854-1855.
42. Chattaraj, P.K.; Schleyer, P.v.R. An ab initio study resulting in a greater understanding of the HSAB principle. J. Am. Chem. Soc. 1994, 116, 1067-1071.
43. Chattaraj, P.K.; Maiti, B. HSAB principle applied to the time evolution of chemical reactions. J. Am. Chem. Soc. 2003, 125, 2705-2710.
44. Putz, M.V. Maximum hardness index of quantum acid-base bonding. MATCH Commun. Math. Comput. Chem. 2008, 60, 845-868.
45. Putz, M.V. Systematic formulation for electronegativity and hardness and their atomic scales within density functional softness theory. Int. J. Quantum Chem. 2006, 106, 361-386.
46. Putz, M.V. Absolute and Chemical Electronegativity and Hardness; Nova Science Publishers: New York, NY, USA, 2008.
47. Dirac, P.A.M. Quantum mechanics of many-electron systems. Proc. Roy. Soc. (London) 1929, A123, 714-733.
48. Schrödinger, E. An undulatory theory of the mechanics of atoms and molecules. Phys. Rev. 1926, 28, 1049-1070.
49. Dirac, P.A.M. The quantum theory of the electron. Proc. Roy. Soc. (London) 1928, A117, 610-624.
50. Einstein, A.; Podolsky, B.; Rosen, N. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 1935, 47, 777-780.
51. Bohr, N. Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 1935, 48, 696-702.
52. Bohm, D. A suggested interpretation of the quantum theory in terms of "hidden" variables. I. Phys. Rev. 1952, 85, 166-179.
53. Bohm, D. A suggested interpretation of the quantum theory in terms of "hidden" variables. II. Phys. Rev. 1952, 85, 180-193.
54. de Broglie, L. Ondes et quanta. Compt. Rend. Acad. Sci. (Paris) 1923, 177, 507-510.
55. de Broglie, L. Sur la fréquence propre de l'électron. Compt. Rend. Acad. Sci. (Paris) 1925, 180, 498-500.
56. de Broglie, L.; Vigier, M.J.P. La Physique Quantique Restera-t-elle Indéterministe? Gauthier-Villars: Paris, France, 1953.
57. Bohm, D.; Vigier, J.P. Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations. Phys. Rev. 1954, 96, 208-216.
58. Pyykkö, P.; Zhao, L.-B. Search for effective local model potentials for simulation of QED effects in relativistic calculations. J. Phys. B 2003, 36, 1469-1478.
59. Pyykkö, P. Relativistic Theory of Atoms and Molecules. III. A Bibliography 1993-1999; Lecture Notes in Chemistry, Volume 76; Springer-Verlag: Berlin, Germany, 2000.
60. Snijders, J.G.; Pyykkö, P. Is the relativistic contraction of bond lengths an orbital contraction effect? Chem. Phys. Lett. 1980, 75, 5-8.
61. Lohr, L.L., Jr.; Pyykkö, P. Relativistically parameterized extended Hückel theory. Chem. Phys. Lett. 1979, 62, 333-338.
62. Pyykkö, P. Relativistic quantum chemistry. Adv. Quantum Chem. 1978, 11, 353-409.
63. Einstein, A. On the electrodynamics of moving bodies. Ann. Physik (Leipzig) 1905, 17, 891-921.
64. Einstein, A. Does the inertia of a body depend upon its energy content? Ann. Physik (Leipzig) 1905, 18, 639-641.
65. Whitney, C.K. Closing in on chemical bonds by opening up relativity theory. Int. J. Mol. Sci. 2008, 9, 272-298.
66. Whitney, C.K. Single-electron state filling order across the elements. Int. J. Chem. Model. 2008, 1, 105-135.
67. Whitney, C.K. Visualizing electron populations in atoms. Int. J. Chem. Model. 2009, 1, 245-297.
68. Boeyens, J.C.A. New Theories for Chemistry; Elsevier: New York, NY, USA, 2005.
69. Berlin, T. Binding regions in diatomic molecules. J. Chem. Phys. 1951, 19, 208-213.
70. Einstein, A. On a heuristic viewpoint concerning the production and transformation of light. Ann. Physik (Leipzig) 1905, 17, 132-148.
71. Oelke, W.C. Laboratory Physical Chemistry; Van Nostrand Reinhold Company: New York, NY, USA, 1969.
72. Findlay, A. Practical Physical Chemistry; Longmans: London, UK, 1955.
73. Hiberty, P.C.; Megret, C.; Song, L.; Wu, W.; Shaik, S. Barriers of hydrogen abstraction vs halogen exchange: An experimental manifestation of charge-shift bonding. J. Am. Chem. Soc. 2006, 128, 2836-2843.
74. Freeman, S. Applications of Laser Raman Spectroscopy; John Wiley and Sons: New York, NY, USA, 1974.
75. Heitler, W. The Quantum Theory of Radiation, 3rd ed.; Cambridge University Press: New York, NY, USA, 1954.
76. Gillespie, R.J. The electron-pair repulsion model for molecular geometry. J. Chem. Educ. 1970, 47, 18-23.
77. Daudel, R. In Electron and Magnetization Densities in Molecules and Crystals; Becker, P., Ed.; NATO ASI Series B-Physics, Volume 40; Plenum Press: New York, NY, USA, 1980.
78. Putz, M.V. Chemical action and chemical bonding. J. Mol. Struct. (THEOCHEM) 2009, 900, 64-70.
79. Putz, M.V. Levels of a unified theory of chemical interaction. Int. J. Chem. Model. 2009, 1, 141-147.
Putz, M.V. The chemical bond: Spontaneous symmetry-breaking approach. Symmetr. Cult. Sci. 2008, 19, 249–262.
Putz, M.V. Hidden side of chemical bond: The bosonic condensate. In Chemical Bonding; NOVA Science Publishers: New York, NY, USA, 2011; to be published.
Putz, M.V. Conceptual density functional theory: From inhomogeneous electronic gas to Bose-Einstein condensates. In Chemical Information and Computational Challenges in the 21st Century: A Celebration of the 2011 International Year of Chemistry; Putz, M.V., Ed.; NOVA Science Publishers: New York, NY, USA, 2011; to be published.
Kaplan, I.G. Is the Pauli exclusive principle an independent quantum mechanical postulate? Int. J. Quantum Chem. 2002, 89, 268–276.
Whitney, C.K. Relativistic dynamics in basic chemistry. Found. Phys. 2007, 37, 788–812.

Figure and Tables

[Figure: The Feynman diagrammatical sum of interactions entering the Raman effect, connecting the single- and double-photonic particle events in absorption (incident light wave q⃗0, υ0) and emission (scattered light wave q⃗, υ) induced by the quantum first-order H(1) and second-order H(2) interaction Hamiltonians of Equations (84) and (85) through the initial |i〉, final |f〉, and virtual |v〉 bondonic states. The first term accounts for absorption (A) and emission (E) at once, the second term sums over the virtual states connecting absorption followed by emission, while the third term sums over virtual states connecting absorption following the emission events.]

Table 1. Ratios for the bondon-to-electron mass and charge and for the bondon-to-light velocity, along with the associated bondonic life-time, for typical chemical bonds in terms of their basic characteristics (bond length and energy [71,72]), employing the basic formulas (68), (73), (80) and (81) for the ground states.

Bond type          | Xbond (Å) | Ebond (kcal/mol) | ζm = mB/m0 | ζv = vB/c [%] | ζe = eB/e [×10^3] | t [×10^15] (s)
H–H                | 0.60      | 104.2            | 2.34219    | 3.451         | 0.3435            | 9.236
C–C                | 1.54      | 81.2             | 0.45624    | 6.890         | 0.687             | 11.894
C–C (in diamond)   | 1.54      | 170.9            | 0.21678    | 14.385        | 1.446             | 5.743
C=C                | 1.34      | 147              | 0.33286    | 10.816        | 1.082             | 6.616
C≡C                | 1.20      | 194              | 0.31451    | 12.753        | 1.279             | 5.037
N≡N                | 1.10      | 225              | 0.32272    | 13.544        | 1.36              | 4.352
O=O                | 1.10      | 118.4            | 0.61327    | 7.175         | 0.716             | 8.160
F–F                | 1.28      | 37.6             | 1.42621    | 2.657         | 0.264             | 25.582
Cl–Cl              | 1.98      | 58               | 0.3864     | 6.330         | 0.631             | 16.639
I–I                | 2.66      | 36.1             | 0.3440     | 5.296         | 0.528             | 26.701
C–H                | 1.09      | 99.2             | 0.7455     | 5.961         | 0.594             | 9.724
N–H                | 1.02      | 93.4             | 0.9042     | 5.254         | 0.523             | 10.32
O–H                | 0.96      | 110.6            | 0.8620     | 5.854         | 0.583             | 8.721
C–O                | 1.42      | 82               | 0.5314     | 6.418         | 0.64              | 11.771
C=O (in CH2O)      | 1.21      | 166              | 0.3615     | 11.026        | 1.104             | 5.862
C=O (in O=C=O)     | 1.15      | 191.6            | 0.3467     | 12.081        | 1.211             | 5.091
C–Cl               | 1.76      | 78               | 0.3636     | 7.560         | 0.754             | 12.394
C–Br               | 1.91      | 68               | 0.3542     | 7.155         | 0.714             | 14.208
C–I                | 2.10      | 51               | 0.3906     | 5.905         | 0.588             | 18.9131

Table 2. Predicted basic values for bonding energy and length, along with the associated bondonic life-time and velocity fraction of the light velocity, for a system featuring unity ratios of bondonic mass and charge with respect to the electron values, employing the basic formulas (81), (73), (68), and (80), respectively.
Xbond (Å) | Ebond (kcal/mol) | t [×10^15] (s) | ζv = vB/c [%] | ζm = mB/m0       | ζe = eB/e
1         | 87.86            | 10.966         | 4.84691       | 1                | 0.4827 × 10^-3
1         | 182019           | 53.376         | 99.9951       | 4.82699 × 10^-4  | 1
10        | 18201.9          | 533.76         | 99.9951       | 4.82699 × 10^-5  | 1
100       | 1820.19          | 5337.56        | 99.9951       | 4.82699 × 10^-6  | 1

Table 3. Phenomenological classification of the chemical bonding types by bondonic (mass, velocity, charge and life-time) properties abstracted from Table 1; the symbols used are: > and >> for 'high' and 'very high' values; < and << for 'low' and 'very low' values; ~ and ~> for 'moderate' and 'moderately high and almost equal' values in their class of bonding.

Chemical bond  | ζm | ζv | ζe | t
Covalence      | >> | << | << | >>
Multiple bonds | <  | >  | >  | <
Metallic       | << | >  | >  | <
Ionic          | ~> | ~> |    |
Scaling properties of ballistic nano-transistors
Ulrich Wulf*, Marcus Krahlisch and Hans Richter
This article is part of the series Advanced Materials Nanocharacterization. Open Access, Nano Express.
Received: 5 November 2010. Accepted: 28 April 2011. Published: 28 April 2011.
© 2011 Wulf et al; licensee Springer.

In the past years, channel lengths of field-effect transistors in integrated circuits were reduced to currently about 40 nm [1]. Smaller conventional transistors have been built [2-9] with gate lengths down to 10 nm and below. As is well known, with decreasing channel length the desired long-channel behavior of a transistor is degraded by short-channel effects [10-12]. One major source of these short-channel effects is the multi-dimensional nature of the electrostatic field, which causes a reduction of the gate voltage control over the electron channel. A second source is the advent of quantum transport. The most obvious quantum short-channel effect is the formation of a source-drain tunneling regime below the threshold gate voltage. Here, the ID-VD traces show a positive bending, as opposed to the negative bending resulting for classically allowed transport [13,14]. The source-drain tunneling and the classically allowed transport regimes are separated by a close-to-linear threshold trace (LTT). Such a behavior is found in numerous MOSFETs with channel lengths in the range of a few tens of nanometers (see, for example, [2-9]). Starting from a three-dimensional formulation of the transport problem it is possible to construct a one-dimensional effective model [14] which allows one to derive scale-invariant expressions for the drain current [15,16]. Here, the quantity λ = ħ/√(2m*εF) arises as a natural scaling length for quantum transport, where εF is the Fermi energy in the source contact and m* is the effective mass of the charge carriers. The quantum short-channel effects were studied as a function of the dimensionless characteristic length l = L/λ of the transistor channel, where L is its physical length. In this conference contribution, we discuss the physics of the major quantities in our scale-invariant model, which are the chemical potential, the supply function, and the scale-invariant current transmission. We specify its range of applicability: generally, for a channel length up to a few tens of nanometers an LTT is definable up to room temperature. For higher temperatures, an LTT can only be found below a channel length of 10 nm. An inspection of the ID-VG traces shows, in qualitative agreement with experiments, that at low drain voltages transport becomes thermally activated below the threshold gate voltage, while it does not for large drain voltages. Though our model reproduces interesting qualitative features of the experiments, it fails to provide a quantitative description: the theoretical values are larger than the experimental ones by a little less than a decade. Such a finding is expected for our simple model.

Tsu-Esaki formula for the drain current

In Refs. [13,14], the transport problem in a nano-FET was reduced to a one-dimensional effective problem invoking a "single-mode abrupt transition" approximation. Here, the electrons move along the transport direction in an effective potential Veff given in Equation 1 (see Figure 1b). The energy zero in Equation 1 coincides with the position of the conduction band minimum in the highly n-doped source contact.
[Figure 1. Generic n-channel nano-field-effect transistor. (a) Schematic representation. (b) One-dimensional effective potential Veff.]

As shown in [14], Veff is characterized by E1, the bottom of the lowest two-dimensional subband resulting from the z-confinement potential of the electron channel at zero drain voltage (see Figure 4b of Ref. [13]). The parameter W is the width of the transistor. Finally, VD = eUD is the drain potential at drain voltage UD, which is assumed to fall off linearly. Experimentally, one measures in a wide transistor the current density J, which is the current per width of the transistor, expressed in Equation 3. Here gv is the number of equivalent conduction band minima ('valleys') in the electron channel and I0 = 2eεF/h. In Refs. [15,16] a scale-invariant expression for the drain current, Equation 4, was derived. Here, m = μ/εF is the normalized chemical potential in the source contact, vD = VD/εF is the normalized drain voltage, and vG = VG/εF is the normalized gate voltage. As illustrated in Figure 1(b), the gate voltage is defined through the energy difference μ - V0 = VG, i.e., for VG > 0 the transistor operates in the ON-state regime of classically allowed transport and for VG < 0 in the source-drain tunneling regime. The control variable VG is used to eliminate the unknown variable V0. For the chemical potential in the source contact one finds a function m = m(u) of the normalized thermal energy u = kBT/εF alone (Equation 5; see the next section). Equation 4 has the form of a Tsu-Esaki formula with the normalized supply function of Equation 6. Here, F-1/2 is the Fermi-Dirac integral of order -1/2, and the inverse function of F1/2 enters through m(u). The effective current transmission depends on the normalized energy of the electron motion in the y-z-plane as well as on the energy of the motion in the x-direction. In the next sections, we will discuss the quantities that occur in detail.

Chemical potential in source- and drain-contact

For a wide enough transistor and a sufficient junction depth a (see Figure 1) the electrons in the contacts can be treated as a three-dimensional non-interacting electron gas. Furthermore, we assume that all donor impurities of density Ni are ionized. From charge neutrality it then follows that the electron density n0 = Ni is independent of the temperature and is given by the Fermi-Dirac expression of Equation 7, where me is the effective mass and NV is the valley-degeneracy factor in the contacts. In the zero-temperature limit a Sommerfeld expansion of the Fermi-Dirac integral leads to Equation 8. Equating (7) and (8) results in Equation 9, which is identical with (5) and is plotted in Figure 2. As is well known, with increasing temperature the chemical potential falls off because the high-energy tail of the Fermi distribution reaches up to ever higher energies.

[Figure 2. Normalized chemical potential vs. thermal energy according to Equation 9 (green solid line) and its parabolic approximation (red dash-dotted line).]

Supply function

As shown in Ref. [14], the supply function for a wide transistor can be written as in Equation 10. This expression can be interpreted as the partition function (loosely speaking, the "number of occupied states") in the grand canonical ensemble of a non-interacting homogeneous three-dimensional electron gas, restricted to the subsystem of electrons with a given lateral wave vector (ky, kz) yielding the energy in the y-z-direction. Formally equivalently, it can be interpreted as the full partition function in the grand canonical ensemble of a one-dimensional electron gas at the chemical potential μ - ε. Performing the limit of large transistor width, the Riemann sum over the lateral wave vectors can be replaced by the Fermi-Dirac integral F-1/2.
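The temperature dependence m(u) derived in the section above is easy to check numerically. The sketch below is an illustration rather than the authors' code; it assumes SciPy is available and fixes the normalization through the zero-temperature electron density rather than through any particular convention for the Fermi-Dirac integrals:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit   # numerically stable Fermi function 1/(1 + exp(-x))

def density(m, u):
    # Contact electron density in units where eps_F = 1: the integral of
    # sqrt(x) times the Fermi function at chemical potential m and
    # temperature u. Its T = 0 value is the integral of sqrt(x) up to 1, i.e. 2/3.
    integrand = lambda x: np.sqrt(x) * expit((m - x) / u)
    val, _ = quad(integrand, 0.0, 30.0, limit=200)
    return val

def m_of_u(u):
    # Charge neutrality: solve density(m, u) = 2/3 for the normalized
    # chemical potential m (the content of Equation 9).
    return brentq(lambda m: density(m, u) - 2.0 / 3.0, -10.0, 2.0)

for u in (0.01, 0.1, 0.2):
    print(f"u = {u:4.2f}  ->  m(u) = {m_of_u(u):+.4f}")
# m(0.1) comes out close to 0.992, the value quoted in the caption of
# Figure 4, and m(u) falls off with increasing u as in Figure 2.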
The result, Equation 11, is proportional to the normalized transistor width w = W/λ. For the scaling of the supply function in Equation 11 we define the normalized supply function of Equation 12 (see Ref. [14]), where we use the identity V0/εF = m - vG. For the source contact we evaluate the normalized supply function at the chemical potential μ, leading to the first factor in the square bracket of the Tsu-Esaki equation 4. In the drain contact, the chemical potential is lower by VD; replacing μ → μ - VD yields the second factor. Below we will show that for transistor operation the low-temperature limit is relevant (see Figure 2). Here, one may apply in leading order the square-root behavior F-1/2(x → ∞) ∝ √x (resulting from a Sommerfeld expansion) and F-1/2(x → -∞) → exp(x). Since V0 > 0 the factor vG - m is negative, and from (12) we obtain the low-temperature form of Equation 15. From Figure 3 it is seen that for ε below the chemical potential the supply function is well described by the square-root dependence of the low-temperature limit. If ε lies above the chemical potential one obtains a small exponential tail due to thermal activation.

[Figure 3. Supply function in the source contact (see Equation 6) for u = 0.1 and vG = 0 (black line), and the low-temperature limit according to Equation 15 for α < 0 (red dashed line) and α > 0 (green dashed line). Because of the small temperature m(u) ≈ 1, so that the crossover between the two limits occurs at ε ≈ 1.]

Current transmission

The effective current transmission in Equation 16 is given by Equation 17. It is calculated from the scattering solutions of the scaled one-dimensional Schrödinger equation, Equation 18, with β = 2m*V0L²/ħ² = l²(m - vG) and ŷ = y/L. The scaled effective potential vanishes in the source contact, drops linearly across the channel from its barrier value at the source end, and is lowered by the scaled drain potential in the drain contact (see Figure 4a). As usual, the scattering functions emitted from the source contact obey plane-wave asymptotic conditions: an incident and a reflected wave in the source contact and a transmitted wave in the drain contact, with wave numbers fixed by the kinetic energy in the respective contact. As can be seen from Figure 4, around the top of the effective barrier the current transmission changes from near zero to near one. For weak barriers there is a relatively large current transmission already below the barrier top, leading to drain leakage currents. For strong barriers this remnant transmission vanishes and we can approximate the current transmission by an ideal one.

[Figure 4. Scaled effective model. (a) Scaled effective potential. (b) Effective current transmission at u = 0.1, vD = 0.5, and vG = 0 (… = 0.504 and m = 0.992). The considered characteristic lengths are l = 4 (red, weak barrier, β = 15.87) and l = 25 (green, strong barrier, β = 619.8). The ideal limit (Equation 19) in blue.]

To a large extent the Fowler-Nordheim oscillations in the numerical transmission average out when performing the integration in Equation 4.

Parameters in experimental nano-FETs

Heavily doped contacts

In the heavily doped contacts the electrons can be approximated as a three-dimensional non-interacting Fermi gas. Then from (8) the Fermi energy above the bottom of the conduction band is given by Equation 20, εF = (ħ²/2me)(3π²n0/NV)^(2/3). For n++-doped Si contacts the valley degeneracy is NV = 6 and the effective mass is taken as the density-of-states mass me = (m1²m2)^(1/3). Here m1 = 0.19 m0 and m2 = 0.98 m0 are the effective masses corresponding to the principal axes of the constant-energy ellipsoids. In our later numerical calculations we set εF = 0.35 eV, assuming a level of source doping as high as Ni = n0 = 10^21 cm^-3.

Electron channel

In the electron channel a strong lateral subband quantization exists. As is well known [17], at low temperatures only the two constant-energy ellipsoids with the heavy mass m2 perpendicular to the (100)-interface are occupied, leading to a valley degeneracy of gv = 2. The in-plane effective mass is therefore the light mass m* = m1 entering the scaling length λ = ħ/√(2m*εF) of Equation 21. Here εF = 0.35 eV was assumed. One then has in Equation 3 I0 = 2eεF/h ≈ 27 μA, and with λ ~ 1 nm as well as gv = 2 one obtains J0 = 5.4 × 10^4 μA/μm.
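These parameter values can be cross-checked in a few lines of Python. The closed forms used below, εF = (ħ²/2me)(3π²n0/NV)^(2/3) for Equation 20 with the density-of-states mass me = (m1²m2)^(1/3), and λ = ħ/√(2m*εF) for Equation 21, are reconstructions; they are consistent with every number quoted in this section:

import math

# Physical constants (SI)
hbar = 1.054571817e-34    # J s
h    = 6.62607015e-34     # J s
e    = 1.602176634e-19    # C
m0   = 9.1093837015e-31   # kg, free-electron mass

# Heavily doped Si contacts: NV = 6, n0 = 1e21 cm^-3
m1, m2 = 0.19 * m0, 0.98 * m0
me = (m1**2 * m2) ** (1.0 / 3.0)          # density-of-states mass (assumed form)
n0 = 1e21 * 1e6                           # cm^-3 converted to m^-3
epsF = hbar**2 / (2 * me) * (3 * math.pi**2 * n0 / 6) ** (2.0 / 3.0)
print(f"eps_F = {epsF / e:.2f} eV")       # ~0.34 eV, i.e. the quoted 0.35 eV

# Electron channel: light in-plane mass m* = m1, valley degeneracy gv = 2
epsF = 0.35 * e
lam = hbar / math.sqrt(2 * m1 * epsF)
print(f"lambda = {lam * 1e9:.2f} nm")     # ~0.76 nm, rounded to ~1 nm in the text

I0 = 2 * e * epsF / h
print(f"I0 = {I0 * 1e6:.1f} uA")          # ~27 uA
J0 = 2 * I0 / 1e-9                        # gv * I0 / lambda, with lambda ~ 1 nm
print(f"J0 = {J0:.3g} A/m")               # ~5.4e4; A/m is numerically equal to uA/um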
Drain characteristics

Typical drain characteristics are plotted in Figure 5 for a low temperature (u = 0.01) and at room temperature (u = 0.1). It is seen that for both temperatures an LTT can be identified. We define the LTT as the j-vD trace which can be best fitted with a linear regression j = σth vD in the given interval 0 ≤ vD ≤ 2. The best fit is determined by the minimum relative mean square deviation. The gate voltage associated with the LTT is denoted by vGth. It turns out that at room temperature vGth lies slightly below zero and at low temperatures slightly above (see Figure 5c). In general, the temperature dependence of the drain current is small. The most significant temperature effect is the enhancement of the resonant Fowler-Nordheim oscillations found at negative vG at low temperatures. From Figure 5d it can be seen that the slope of the LTT, σth, decreases with increasing l and increasing temperature. For "hot" transistors (u = 0.2) an LTT can only be defined up to l ~ 10.

[Figure 5. Calculated drain characteristics for l = 10, vG starting from 0.5 with decrements of 0.1 (solid lines), at the temperatures (a) u = 0.1 and (b) u = 0.01; the green dashed lines mark the LTT. For u = 0.1 the LTT occurs at a gate voltage of vGth = -0.05 and for u = 0.01 at vGth = 0.05. (c) vGth and (d) σth versus characteristic length for u = 0.01 (black), u = 0.1 (red), and u = 0.2 (green).]

Threshold characteristics

The threshold characteristics at room temperature are plotted in Figure 6 for a "small" drain voltage (vD = 0.1) and a "large" drain voltage (vD = 2.0). For the largest considered characteristic length, l = 60, it is seen that below zero gate voltage the drain current is thermally activated for both considered drain voltages. A comparison with the results for l = 25 and l = 10 shows that for the small drain voltage the ID-VG trace is only weakly affected by the change in the barrier strength. In contrast, at the high drain voltage the drain current below vG = 0 grows strongly with decreasing barrier strength. The drain current does not reach the thermal activation regime any more; it falls off much more smoothly with increasingly negative vG. As can be gathered from Figure 8, this effect is seen in experiments as well. We attribute it to the weakening of the tunneling barrier with increasing vD. To confirm this point, the threshold characteristics for a still weaker barrier strength (l = 3) are considered. No thermal activation is found in this case even for the small drain voltage.

[Figure 6. Calculated threshold characteristics at u = 0.1 for (a) l = 60, (b) l = 25, and (c) l = 3. The dashed straight lines in blue are guides to the eye exhibiting a slope corresponding to thermal activation.]

We discuss our numerical results against the background of experimental characteristics for a 10 nm gate length transistor [4,5], reproduced in Figure 7. As demonstrated in Sect. "Parameters in experimental nano-FETs", one obtains from Equation 21 a scaling length of λ ~ 1 nm under reasonable assumptions. For the experimental 10 nm gate length, we thus obtain l = L/λ = 10. Furthermore, Equation 20 yields the value of εF = 0.35 eV. The conversion of the experimental drain voltage UD into the theoretical parameter vD is given by vD = eUD/εF (Equation 22).

[Figure 7. Drain characteristics in experiment and theory. (a) Experimental drain characteristics for a nano-transistor with L = 10 nm [4,5]; our assumption for the LTT is marked with a green dashed line, leading to a threshold gate voltage of VGth = 0.15 V. (b) Theoretical drain characteristics for l = 10 and u = 0.1 (see Figure 5a), with the green dashed threshold characteristic at vGth = -0.05.]
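The LTT selection rule formulated under "Drain characteristics" (the j-vD trace best fitted by j = σth vD, judged by the minimum relative mean square deviation) is simple to implement. The sketch below applies it to toy traces with the right qualitative bending; the toy current formula is purely illustrative and is not the Tsu-Esaki integral of Equation 4:

import numpy as np

def trace(vd, vg, a=0.8):
    # Toy stand-in for j(vD; vG): convex (positive bending) for vG < 0,
    # concave (negative bending) for vG > 0, exactly linear at vG = 0,
    # mimicking the tunneling vs. classically allowed regimes.
    return vd ** (1.0 - a * vg)

def fit_through_origin(vd, j):
    # Least-squares slope for the one-parameter model j = sigma * vD,
    # plus the relative mean square deviation used to rank the fits.
    sigma = np.sum(j * vd) / np.sum(vd**2)
    rel_msd = np.mean((j - sigma * vd) ** 2) / np.mean(j**2)
    return sigma, rel_msd

vd = np.linspace(0.05, 2.0, 50)            # the interval 0 <= vD <= 2
candidates = np.arange(-0.3, 0.31, 0.05)   # trial gate voltages
results = [(vg, *fit_through_origin(vd, trace(vd, vg))) for vg in candidates]
vg_th, sigma_th, _ = min(results, key=lambda r: r[2])
print(f"LTT found at vG ~ {vg_th:+.2f} with slope sigma_th = {sigma_th:.3f}")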
The maximum experimental drain voltage of 0.75 V then sets the scale for vD, ranging from zero to vD = 0.75 eV/0.35 eV ~ 2. For the conversion of the experimental gate voltage VG to the theoretical parameter vG we make the linear ansatz vG = (VG + β)/(1 eV) (Equation 23), where VGth is the experimental threshold gate voltage (see Figure 8a). The constant β is chosen so that VGth converts into vGth. In our example, from Figure 8a VGth = 0.15 V and from Figure 8b vGth = -0.05, so that β = -0.2 eV. To match the experimental drain characteristic to the theoretical one we first convert the highest experimental value for VG into the corresponding theoretical one: inserting VG = 0.75 V in (23) yields vG ~ 0.5. Second, we adjust the experimental and the theoretical drain current scales so that in Figure 7 the curve for the experimental current at VG = 0.7 and the theoretical curve at vG = 0.5 agree. It then turns out that the other corresponding experimental and theoretical traces agree as well. This agreement carries over to the range of negative gate voltages with thermally activated transport, as can be gathered from the ID-VG traces in Figure 8. We note that the constant of proportionality in Equation 23, given by 1 eV, is larger than the value εF which one would expect from the theoretical definition vG = VG/εF. Here, we emphasize that the experimental value of eVG corresponds to the change of the potential at the transistor gate, while the parameter vG describes the position of the bottom of the lowest two-dimensional subband in the electron channel. The linear ansatz in Equation 23, and especially the constant of proportionality of 1 eV, can thus only be justified in a self-consistent calculation of the subband levels as has been provided, e.g., by Stern [18].

[Figure 8. Threshold characteristics in experiment and theory. (a) Experimental threshold characteristics for the nano-transistor in Figure 7a. (b) Theoretical threshold characteristics for l = 10 and u = 0.1, with the blue dashed lines corresponding to thermal activation.]

The experimental and the theoretical drain characteristics in Figure 7 look structurally very similar. For a quantitative comparison we recall from Sect. "Parameters in experimental nano-FETs" the value of J0 = 5.4 × 10^4 μA/μm. Then the maximum value j = 0.15 in Figure 7b corresponds to a theoretical current per width of 8 × 10^3 μA/μm. To compare with the experimental current per width we assume that the y-axis labels in Figures 7a and 8a should read μA/μm instead of A/μm. The former unit is the usual one in the literature on comparable nanotransistors (see Refs. [2-9]), and with this correction the order of magnitude of the drain current per width agrees with that of the comparable transistors. It is found that the theoretical results are larger than the experimental ones by about a factor of ten. Such a failure has to be expected given the simplicity of our model. First, for an improvement it is necessary to proceed from potentials resulting from a self-consistent calculation. Second, our representation of the transistor by an effectively one-dimensional system probably underestimates the backscattering caused by the relatively abrupt transition between contacts and electron channel. Third, the drain current in a real transistor is reduced by impurity interaction, in particular, by inelastic scattering.
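The unit conversions entering this comparison can be bundled into a small helper. The explicit form of Equation 23 used below, vG = (VG + β)/(1 eV), is a reconstruction; it is adopted here because it reproduces both quoted checkpoints, VGth = 0.15 V → vGth = -0.05 and VG = 0.75 V → vG ~ 0.5:

EPS_F = 0.35   # eV, Fermi energy of the source contact (Equation 20)
BETA  = -0.2   # eV, fixed by requiring VG_th = 0.15 V -> vG_th = -0.05

def v_d(u_d_volts):
    # Equation 22: normalized drain voltage vD = e*UD / eps_F
    return u_d_volts / EPS_F

def v_g(v_g_volts):
    # Equation 23 (reconstructed): vG = (VG + beta) / (1 eV)
    return (v_g_volts + BETA) / 1.0

print(v_d(0.75))   # ~2.1, the upper end of the considered vD range
print(v_g(0.15))   # -0.05, the theoretical threshold gate voltage
print(v_g(0.75))   # 0.55 ~ 0.5, matching the highest theoretical trace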
As a final remark we note that in transistors with a gate length on the micrometer scale, short-channel effects may occur which are structurally similar to the ones discussed in this article (see Sect. 8.4 of [10]). Therefore, a quantitatively more reliable quantum calculation would be desirable, allowing one to distinguish between the short-channel effects on the micrometer scale and quantum short-channel effects. After a detailed discussion of the physical quantities in our scale-invariant model we show that an LTT is present not only in the low-temperature limit but also at room temperature. In qualitative agreement with the experiments, below the threshold voltage the ID-VG traces exhibit thermally activated transport at small drain voltages. At large drain voltages the gate-voltage dependence of the traces is much weaker. It is found that the theoretical drain current is larger than the experimental one by a little less than a decade. Such a finding is expected for our simple model.

Abbreviations: LTT: linear threshold trace.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
UW worked out the theoretical model, carried out numerical calculations and drafted the manuscript. MK carried out numerical calculations and drafted the manuscript. HR drafted the manuscript. All authors read and approved the final manuscript.

1. Auth C, Buehler H, Cappellani A, Choi H-h, Ding G, Han W, Joshi S, McIntyre B, Prince M, Ranade P, Sandford J, Thomas C: 45 nm High-k+Metal Gate Strain-Enhanced Transistors. Intel Technol J 2008, 12:77-85.
2. Yu B, Wang H, Joshi A, Xiang Q, Ibok E, Lin M-R: 15 nm Gate Length Planar CMOS Transistor. IEDM Tech Dig 2001, 937.
3. Doris B, Ieong M, Kanarsky T, Zhang Y, Roy RA, Dokumaci O, Ren Z, Jamin F-F, Shi L, Natzle W, Huang H-J, Mezzapelle J, Mocuta A, Womack S, Gribelyuk M, Jones EC, Miller RJ, Wong HSP, Haensch W: Extreme Scaling with Ultra-Thin Si Channel MOSFETs. IEDM Tech Dig 2002, 267.
4. Doyle B, Arghavani R, Barlage D, Datta S, Doczy M, Kavalieros J, Murthy A, Chau R: Transistor Elements for 30 nm Physical Gate Lengths. Intel Technol J 2002, 6:42.
5. Chau R, Doyle B, Doczy M, Datta S, Hareland S, Jin B, Kavalieros J, Metz M: Silicon Nano-Transistors and Breaking the 10 nm Physical Gate Length Barrier. 61st Device Research Conference 2003; Salt Lake City, Utah (invited talk).
6. Tyagi S, Auth C, Bai P, Curello G, Deshpande H, Gannavaram S, Golonzka O, Heussner R, James R, Kenyon C, Lee S-H, Lindert N, Miu M, Nagisetty R, Natarajan S, Parker C, Sebastian J, Sell B, Sivakumar S, St Amur A, Tone K: An advanced low power, high performance, strained channel 65 nm technology. IEDM Tech Dig 2005, 1070.
7. Natarajan S, Armstrong M, Bost M, Brain R, Brazier M, Chang C-H, Chikarmane V, Childs M, Deshpande H, Dev K, Ding G, Ghani T, Golonzka O, Han W, He J, Heussner R, James R, Jin I, Kenyon C, Klopcic S, Lee S-H, Liu M, Lodha S, McFadden B, Murthy A, Neiberg L, Neirynck J, Packan P, Pae S, Parker C, Pelto C, Pipes L, Sebastian J, Seiple J, Sell B, Sivakumar S, Song B, Tone K, Troeger T, Weber C, Yang M, Yeoh A, Zhang K: A 32 nm Logic Technology Featuring 2nd-Generation High-k + Metal-Gate Transistors, Enhanced Channel Strain and 0.171 μm2 SRAM Cell Size in a 291 Mb Array. IEDM Tech Dig 2008, 1.
8. Fukutome H, Hosaka K, Kawamura K, Ohta H, Uchino Y, Akiyama S, Aoyama T: Sub-30-nm FUSI CMOS Transistors Fabricated by Simple Method Without Additional CMP Process. IEEE Electron Dev Lett 2008, 29:765.
9. Bedell SW, Majumdar A, Ott JA, Arnold J, Fogel K, Koester SJ, Sadana DK: Mobility Scaling in Short-Channel Length Strained Ge-on-Insulator P-MOSFETs. IEEE Electron Dev Lett 2008, 29:811.
10. Sze SM: Physics of Semiconductor Devices. New York: Wiley; 1981.
11. Thompson S, Packan P, Bohr M: MOS Scaling: Transistor Challenges for the 21st Century. Intel Technol J 1998, Q3:1.
12. Brennan KF: Introduction to Semiconductor Devices. Cambridge: Cambridge University Press; 2005.
13. Nemnes GA, Wulf U, Racec PN: Nano-transistors in the Landauer-Büttiker formalism. J Appl Phys 2004, 96:596.
14. Nemnes GA, Wulf U, Racec PN: Nonlinear I-V characteristics of nanotransistors in the Landauer-Büttiker formalism. J Appl Phys 2005, 98:84308.
15. Wulf U, Richter H: Scaling in quantum transport in silicon nanotransistors. Solid State Phenomena 2010, 156-158:517.
16. Wulf U, Richter H: Scale-invariant drain current in nano-FETs. J Nano Res 2010, 10:49.
17. Ando T, Fowler AB, Stern F: Electronic properties of two-dimensional systems. Rev Mod Phys 1982, 54:437.
18. Stern F: Self-Consistent Results for n-Type Si Inversion Layers. Phys Rev B 1972, 5:4891.
Enrico Fermi
From Wikipedia, the free encyclopedia

Enrico Fermi (1901–1954)
Born: 29 September 1901, Rome, Italy
Died: 28 November 1954 (aged 53), Chicago, Illinois, United States
Citizenship: Italy (1901–54), United States (1944–54)
Fields: Physics
Alma mater: Scuola Normale Superiore
Spouse: Laura Capon Fermi

Enrico Fermi (Italian: [enˈriːko ˈfermi]; 29 September 1901 – 28 November 1954) was an Italian physicist who created the world's first nuclear reactor, the Chicago Pile-1. He has been called the "architect of the nuclear age"[1] and the "architect of the atomic bomb".[2] He was one of the few physicists to excel both theoretically and experimentally. Fermi held several patents related to the use of nuclear power, and was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity by neutron bombardment and the discovery of transuranic elements. He made significant contributions to the development of quantum theory, nuclear and particle physics, and statistical mechanics.

Early life

Via Gaeta 19, Rome, where Fermi was born

Enrico Fermi was born in Rome, Italy, on 29 September 1901. He was the third child of Alberto Fermi, a division head (Capo Divisione) in the Ministry of Railways, and Ida de Gattis, an elementary school teacher.[3][4] His only sister, Maria, was two years older than he was, and his brother Giulio was a year older. After the two boys were sent to a rural community to be wet nursed, Enrico rejoined his family in Rome when he was two and a half.[5] Although he was baptised a Roman Catholic in accordance with his grandparents' wishes, his family was not particularly religious; Enrico was an agnostic throughout his adult life. As a young boy he shared the same interests as his brother Giulio, building electric motors and playing with electrical and mechanical toys.[6] Giulio died during the administration of an anesthetic for an operation on a throat abscess in 1915.[7] One of Fermi's first sources for his study of physics was a book he found at the local market at Campo de' Fiori in Rome. Published in 1840, the 900-page Elementorum physicae mathematicae was written in Latin by Jesuit Father Andrea Caraffa, a professor at the Collegio Romano. It covered mathematics, classical mechanics, astronomy, optics, and acoustics, insofar as these disciplines were understood when the book was written.[8][9] Fermi befriended another scientifically inclined student, Enrico Persico,[10] and together the two worked on scientific projects such as building gyroscopes and trying to accurately measure the acceleration of Earth's gravity.[11] Fermi's interest in physics was further encouraged by his father's colleague Adolfo Amidei, who gave him several books on physics and mathematics, which he read and assimilated quickly.[12]

Scuola Normale Superiore in Pisa

Enrico Fermi as a student in Pisa

Fermi graduated from high school in July 1918 and, at Amidei's urging, applied to the Scuola Normale Superiore in Pisa. Having lost one son, his parents were reluctant to let him move away from home for four years while attending the Sapienza University of Rome, but in the end they acquiesced. The school provided free lodging for students, but candidates had to take a difficult entrance exam that included an essay.
The given theme was "Specific characteristics of Sounds". The 17-year-old Fermi chose to derive and solve the partial differential equation for a vibrating rod, applying Fourier analysis in the solution. The examiner, Professor Giuseppe Pittarelli from the Sapienza University of Rome, interviewed Fermi and predicted that he would become an outstanding physicist. Fermi achieved first place in the ranking of the entrance exam.[13] During his years at the Scuola Normale Superiore, Fermi teamed up with a fellow student named Franco Rasetti with whom he would indulge in light-hearted pranks and who would later become Fermi's close friend and collaborator. In Pisa, Fermi was advised by the director of the physics laboratory, Luigi Puccianti, who acknowledged that there was little that he could teach Fermi, and frequently asked Fermi to teach him something instead. Fermi's knowledge of quantum physics reached such a high level that Puccianti asked him to organize seminars on the topic.[14] During this time Fermi learned tensor calculus, a mathematical technique invented by Gregorio Ricci and Tullio Levi-Civita that was needed to demonstrate the principles of general relativity.[15] Fermi initially chose mathematics as his major, but soon switched to physics. He remained largely self-taught, studying general relativity, quantum mechanics, and atomic physics.[16] In September 1920, Fermi was admitted to the Physics department. Since there were only three students in the department—Fermi, Rasetti, and Nello Carrara—Puccianti let them freely use the laboratory for whatever purposes they chose. Fermi decided that they should research X-ray crystallography, and the three worked to produce a Laue photograph—an X-ray photograph of a crystal.[17] During 1921, his third year at the university, Fermi published his first scientific works in the Italian journal Nuovo Cimento. The first was entitled "On the dynamics of a rigid system of electrical charges in translational motion" (Sulla dinamica di un sistema rigido di cariche elettriche in moto traslatorio). A sign of things to come was that the mass was expressed as a tensor—a mathematical construct commonly used to describe something moving and changing in three-dimensional space. In classical mechanics, mass is a scalar quantity, but in relativity it changes with velocity. The second paper was "On the electrostatics of a uniform gravitational field of electromagnetic charges and on the weight of electromagnetic charges" (Sull'elettrostatica di un campo gravitazionale uniforme e sul peso delle masse elettromagnetiche). Using general relativity, Fermi showed that a charge has a weight equal to U/c2, where U is the electrostatic energy of the system and c is the speed of light.[16] The first paper seemed to point out a contradiction between the electrodynamic theory and the relativistic one concerning the calculation of the electromagnetic masses, as the former predicted a value of 4/3 U/c2. Fermi addressed this the next year in a paper "Concerning a contradiction between electrodynamic and the relativistic theory of electromagnetic mass" in which he showed that the apparent contradiction was a consequence of relativity.
This paper was sufficiently well-regarded that it was translated into German and published in the German scientific journal Physikalische Zeitschrift in 1922.[18] That year, Fermi submitted his article "On the phenomena occurring near a world line" (Sopra i fenomeni che avvengono in vicinanza di una linea oraria) to the Italian journal I Rendiconti dell'Accademia dei Lincei. In this article he examined the Principle of Equivalence, and introduced the so-called "Fermi coordinates". He proved that on a world line close to the time line, space behaves as if it were a Euclidean space.[19][20]

A light cone is a three-dimensional surface of all possible light rays arriving at and departing from a point in spacetime. Here, it is depicted with one spatial dimension suppressed. The time line is the vertical axis.

Fermi submitted his thesis, "A theorem on probability and some of its applications" (Un teorema di calcolo delle probabilità ed alcune sue applicazioni), to the Scuola Normale Superiore in July 1922, and received his laurea at the unusually young age of 20. The thesis was on X-ray diffraction images. Theoretical physics was not yet considered a discipline in Italy, and the only thesis that would have been accepted was one on experimental physics. For this reason, Italian physicists were slow in embracing the new ideas like relativity coming from Germany. Since Fermi was quite at home in the lab doing experimental work, this did not pose insurmountable problems for him.[20] While writing the appendix for the Italian edition of the book Fundamentals of Einstein Relativity by August Kopff in 1923, Fermi was the first to point out that hidden inside the famous Einstein equation (E = mc2) was an enormous amount of nuclear potential energy to be exploited. "It does not seem possible, at least in the near future", he wrote, "to find a way to release these dreadful amounts of energy—which is all to the good because the first effect of an explosion of such a dreadful amount of energy would be to smash into smithereens the physicist who had the misfortune to find a way to do it."[20] In 1924 Fermi was initiated into Freemasonry in the Masonic Lodge "Adriano Lemmi" of the Grand Orient of Italy.[21] Fermi decided to travel abroad, and spent a semester studying under Max Born at the University of Göttingen, where he met Werner Heisenberg and Pascual Jordan. Fermi then studied in Leiden with Paul Ehrenfest from September to December 1924 on a fellowship from the Rockefeller Foundation obtained through the intercession of the mathematician Vito Volterra. Here Fermi met Hendrik Lorentz and Albert Einstein, and became good friends with Samuel Goudsmit and Jan Tinbergen. From January 1925 to late 1926, Fermi taught mathematical physics and theoretical mechanics at the University of Florence, where he teamed up with Rasetti to conduct a series of experiments on the effects of magnetic fields on mercury vapour. He also participated in seminars at the Sapienza University of Rome, giving lectures on quantum mechanics and solid state physics.[22] While lecturing on the new quantum mechanics, impressed by the remarkable accuracy of the predictions of the Schrödinger equation, Fermi would often say, "It has no business to fit so well!"[23] After Wolfgang Pauli announced his exclusion principle in 1925, Fermi responded with a paper "On the quantisation of the perfect monoatomic gas" (Sulla quantizzazione del gas perfetto monoatomico), in which he applied the exclusion principle to an ideal gas.
The paper was especially notable for Fermi's statistical formulation, which describes the distribution of particles in systems of many identical particles that obey the exclusion principle. This was independently developed soon after by the British physicist Paul Dirac, who also showed how it was related to the Bose–Einstein statistics. Accordingly, it is now known as Fermi–Dirac statistics.[24] Following Dirac, particles that obey the exclusion principle are today called "fermions", while those that do not are called "bosons".[25]

Professor in Rome

Fermi and his students (the Via Panisperna boys) in the courtyard of Rome University's Physics Institute in Via Panisperna, about 1934. From left to right: Oscar D'Agostino, Emilio Segrè, Edoardo Amaldi, Franco Rasetti and Fermi

Professorships in Italy were granted by competition (concorso) for a vacant chair, the applicants being rated on their publications by a committee of professors. Fermi applied for a chair of mathematical physics at the University of Cagliari on Sardinia, but was narrowly passed over in favour of Giovanni Giorgi.[26] In 1926, at the age of 24, he applied for a professorship at the Sapienza University of Rome. This was a new chair, one of the first three in theoretical physics in Italy, that had been created by the Minister of Education at the urging of Professor Orso Mario Corbino, who was the University's professor of experimental physics, the Director of the Institute of Physics, and a member of Benito Mussolini's cabinet. Corbino, who also chaired the selection committee, hoped that the new chair would raise the standard and reputation of physics in Italy.[27] The committee chose Fermi ahead of Enrico Persico and Aldo Pontremoli,[28] and Corbino helped Fermi recruit his team, which was soon joined by notable students such as Edoardo Amaldi, Bruno Pontecorvo, Ettore Majorana and Emilio Segrè, and by Franco Rasetti, whom Fermi had appointed as his assistant.[29] They were soon nicknamed the "Via Panisperna boys" after the street where the Institute of Physics was located.[30] Fermi married Laura Capon, a science student at the University, on 19 July 1928.[31] They had two children: Nella, born in January 1931, and Giulio, born in February 1936.[32] On 18 March 1929, Fermi was appointed a member of the Royal Academy of Italy by Mussolini, and on 27 April he joined the Fascist Party. He later opposed Fascism when the 1938 racial laws were promulgated by Mussolini in order to bring Italian Fascism ideologically closer to German National Socialism. These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work.[33][34][35][36][37] During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his Introduction to Atomic Physics (Introduzione alla fisica atomica), which provided Italian university students with an up-to-date and accessible text. Fermi also conducted public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible.[38] Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research.[38][39] A sign of success was that foreign students now began to come to Italy.
The most notable of these was the German physicist Hans Bethe,[40] who came to Rome as a Rockefeller Foundation fellow, and collaborated with Fermi on a 1932 paper "On the Interaction between Two Electrons" (German: Über die Wechselwirkung von Zwei Elektronen).[38] At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was also emitted at the same time. Fermi took up this idea, which he developed in a tentative paper in 1933, and then a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino".[41][42][43] His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was detected after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal Nature, that journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers".[42] Thus Fermi saw the theory published in Italian and German before it was published in English.[29] In the introduction to the 1968 English translation, physicist Fred L. Wilson noted that: Fermi's theory, aside from bolstering Pauli's proposal of the neutrino, has a special significance in the history of modern physics. One must remember that only the naturally occurring β emitters were known at the time the theory was proposed. Later when positron decay was discovered, the process was easily incorporated within Fermi's original framework. On the basis of his theory, the capture of an orbital electron by a nucleus was predicted and eventually observed. With time much experimental data has accumulated. Although peculiarities have been observed many times in β decay, Fermi's theory always has been equal to the challenge. The consequences of the Fermi theory are vast. For example, β spectroscopy was established as a powerful tool for the study of nuclear structure. But perhaps the most influential aspect of this work of Fermi is that his particular form of the β interaction established a pattern which has been appropriate for the study of other types of interactions. It was the first successful theory of the creation and annihilation of material particles. Previously, only photons had been known to be created and destroyed.[43] In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them.[44][45] By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932.[46] In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. 
This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have.[47][48]

Enrico Fermi between Franco Rasetti (left) and Emilio Segrè in academic dress

Fermi had the idea of replacing the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas, supplied by Giulio Cesare Trabacchi.[49][50] This created a much stronger neutron source, the effectiveness of which declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. He started by bombarding platinum, an element with a high atomic number that was readily available, without success. He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements.[51] Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal La Ricerca Scientifica on 25 March 1934.[50][52][53] The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called hesperium and ausonium.[54][48] The chemist Ida Noddack criticised this work, suggesting that some of the experiments could have produced lighter elements than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium, and its claim to have discovered masurium (technetium) was disputed. At that time, fission was thought to be improbable if not impossible on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light element fragments in the manner that Noddack suggested.[55][54]

Beta decay. A neutron decays into a proton, and an electron is emitted. In order for the total energy in the system to remain the same, Pauli and Fermi postulated that a neutrino (ν̄) was also emitted.

The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than a marble table top. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try that. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver compared with when it was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin. Those in wood similarly explained the difference between the wooden and the marble table tops. This was confirmed by repeating the effect with water.
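The paraffin and water observations have a simple quantitative core. The sketch below uses the textbook mean logarithmic energy loss per elastic collision for a nucleus of mass number A (standard neutron-moderation theory, offered here as an illustration rather than Fermi's own calculation); it shows why hydrogen-rich materials slow neutrons so much more effectively than heavy nuclei, the conclusion Fermi drew next:

import math

def xi(A):
    # Mean logarithmic energy decrement per elastic collision for a
    # nucleus of mass number A, assuming isotropic scattering in the
    # centre-of-mass frame; xi = 1 exactly for hydrogen (A = 1).
    if A == 1:
        return 1.0
    return 1.0 + (A - 1) ** 2 / (2.0 * A) * math.log((A - 1) / (A + 1))

# Mean number of collisions to slow a 2 MeV fission neutron to thermal
# energy (0.025 eV): ln(2e6 / 0.025) / xi
for name, A in [("H", 1), ("C", 12), ("Si", 28), ("U", 238)]:
    n = math.log(2e6 / 0.025) / xi(A)
    print(f"{name:2s} (A = {A:3d}): xi = {xi(A):.4f}, ~{n:.0f} collisions to thermalize")
# Hydrogen needs ~18 collisions, carbon ~115, uranium over 2000: the
# hydrogen atoms in paraffin, wood, and water are the best moderators.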
He concluded that collisions with hydrogen atoms slowed the neutrons.[56][48] The lower the atomic number of the nucleus it collides with, the more energy a neutron loses per collision, and therefore the fewer collisions are required to slow a neutron down by a given amount.[57] Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation.[56][48] In 1938 Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons".[58] After Fermi received the prize in Stockholm, he did not return home to Italy, but rather continued on to New York City with his family in December 1938, where they applied for permanent residency. The decision to move to America and become U.S. citizens was primarily a result of the racial laws in Italy.[33]

Manhattan Project

Fermi arrived in New York City on 2 January 1939.[59] He was immediately offered posts at five different universities, and accepted a post at Columbia University,[60] where he had already given summer lectures in 1936.[61] He received the news that in December 1938, the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons,[62] which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939.[63][64] The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Enrico Fermi, but Fermi later gave the credit to Lamb.[65] Noddack was proven right after all. Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron.[55] For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech.[65][67]

Illustration of Chicago Pile-1, the first nuclear reactor to achieve a self-sustaining chain reaction. Designed by Fermi, it consisted of uranium and uranium oxide in a cubic lattice embedded in graphite.

The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, in the basement of Pupin Hall at Columbia, an experimental team including Fermi conducted the first nuclear fission experiment in the United States. The other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack.[68] The next day, the Fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of George Washington University and the Carnegie Institution of Washington.
There, the news on nuclear fission was spread even further, which fostered many more experimental demonstrations.[69] French scientists Hans von Halban, Lew Kowarski, and Frédéric Joliot-Curie had demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, implying that a chain reaction might be possible.[70] Fermi and Anderson did so too a few weeks later.[71][72] Leó Szilárd obtained 200 kilograms (440 lb) of uranium oxide from Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct experiments with fission on a much larger scale.[73] Fermi and Szilárd collaborated on a design of a device to achieve a self-sustaining nuclear reaction—a nuclear reactor. Due to the rate of absorption of neutrons by the hydrogen in water, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator. Fermi suggested, based on his work with neutrons, that the reaction could be achieved with uranium oxide blocks and graphite as a moderator instead of water. This would reduce the neutron capture rate, and in theory make a self-sustaining chain reaction possible. Szilárd came up with a workable design: a pile of uranium oxide blocks interspersed with graphite bricks.[74] Szilárd, Anderson, and Fermi published a paper on "Neutron Production in Uranium".[73] But their work habits and personalities were different, and Fermi had trouble working with Szilárd.[75] Fermi was among the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia.[76] Later that year, Szilárd, Eugene Wigner, and Edward Teller sent the famous letter signed by Einstein to U.S. President Roosevelt, warning that Nazi Germany was likely to build an atomic bomb. In response, Roosevelt formed the Advisory Committee on Uranium to investigate the matter.[77]

Fermi's ID badge photo from Los Alamos

The Advisory Committee on Uranium provided money for Fermi to buy graphite,[78] and he built a pile of graphite bricks on the seventh floor of the Pupin Hall laboratory.[79] By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in Schermerhorn Hall at Columbia.[80] The S-1 Section of the Office of Scientific Research and Development, as the Advisory Committee on Uranium was now known, met on 18 December 1941, with the U.S. now engaged in World War II, making its work urgent. Most of the effort sponsored by the Committee had been directed at producing enriched uranium, but Committee member Arthur Compton determined that a feasible alternative was plutonium, which could be mass-produced in nuclear reactors by the end of 1944.[81] He decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there.[82] The possible results of a self-sustaining nuclear reaction were unknown, so it seemed inadvisable to build the first nuclear reactor on the University of Chicago campus in the middle of the city. Compton found a location in Argonne Woods Forest Preserve, about 20 miles (32 km) from Chicago. Stone & Webster was contracted to develop the site, but the work was halted by an industrial dispute.
Fermi then persuaded Compton that he could build the reactor in the squash court under the stands of the University of Chicago's Stagg Field. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December.[83] The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned.[84] This experiment was a landmark in the quest for energy, and it was typical of Fermi's approach. Every step was carefully planned, every calculation meticulously done.[83] When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee. I picked up the phone and called Conant. He was reached at the President's office at Harvard University. "Jim," I said, "you'll be interested to know that the Italian navigator has just landed in the new world." Then, half apologetically, because I had led the S-1 Committee to believe that it would be another week or more before the pile could be completed, I added, "the earth was not as large as he had estimated, and he arrived at the new world sooner than he had expected." "Is that so," was Conant's excited response. "Were the natives friendly?" "Everyone landed safe and happy."[85]

Fermi (centre), with Ernest O. Lawrence (left) and Isidor Isaac Rabi (right)

To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne Woods site. There Fermi directed experiments on nuclear reactions, revelling in the opportunities provided by the reactor's abundant production of free neutrons.[86] The laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944.[87] When the air-cooled X-10 Graphite Reactor at Oak Ridge went critical on 4 November 1943, Fermi was on hand just in case something went wrong. The technicians woke him early so that he could see it happen.[88] Getting X-10 operational was another milestone in the plutonium project. It provided data on reactor design, training for DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium.[89] Fermi became an American citizen in July 1944, the earliest date the law allowed.[90] In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory, and built by DuPont, but it was much larger, and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first all appeared to be well, but around 03:00, the power level started to drop and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers.
The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning from xenon-135, a fission product with a half-life of 9.2 hours. Fortunately, DuPont had deviated from the Metallurgical Laboratory's original design in which the reactor had 1,500 tubes arranged in a circle, and had added 504 tubes to fill in the corners. The scientists had originally considered this over-engineering a waste of time and money, but Fermi realized that by loading all 2,004 tubes, the reactor could reach the required power level and efficiently produce plutonium.[91][92]

The FERMIAC, an analog device invented by Enrico Fermi to implement studies of neutron transport

In mid-1944, Robert Oppenheimer persuaded Fermi to join his Project Y at Los Alamos, New Mexico.[93] Arriving in September, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division had four branches: F-1 Super and General Theory under Teller, which investigated the "Super" (thermonuclear) bomb; F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" aqueous homogeneous research reactor; F-3 Super Experimentation under Egon Bretscher; and F-4 Fission Studies under Anderson.[94]

Fermi observed the Trinity test on 16 July 1945, and conducted an experiment to estimate the bomb's yield by dropping strips of paper into the blast wave. He paced off how far they were blown by the explosion, and calculated the yield as ten kilotons of TNT; the actual yield was about 18.6 kilotons.[95]

Along with Oppenheimer, Compton, and Ernest Lawrence, Fermi was part of the scientific panel that advised the Interim Committee on target selection. The panel agreed with the committee that atomic bombs would be used without warning against an industrial target.[96] Like others at the Los Alamos Laboratory, Fermi found out about the atomic bombings of Hiroshima and Nagasaki from the public address system in the technical area. Fermi did not believe that atomic bombs would deter nations from starting wars, nor did he think that the time was ripe for world government. He therefore did not join the Association of Los Alamos Scientists.[97]

Post-war work

Fermi became the Charles H. Swift Distinguished Professor of Physics at the University of Chicago on 1 July 1945,[98] although he did not depart the Los Alamos Laboratory with his family until 31 December 1945.[99] He was elected a member of the U.S. National Academy of Sciences in 1945.[100] The Metallurgical Laboratory became the Argonne National Laboratory on 1 July 1946, the first of the national laboratories established by the Manhattan Project.[101] The short distance between Chicago and Argonne allowed Fermi to work at both places.
At Argonne he continued experimental physics, investigating neutron scattering with Leona Marshall.[102] He also discussed theoretical physics with Maria Mayer, helping her develop insights into spin–orbit coupling that would lead to her receiving the Nobel Prize.[103] The Manhattan Project was replaced by the Atomic Energy Commission (AEC) on 1 January 1947.[104] Fermi served on the AEC General Advisory Committee, an influential scientific committee chaired by Robert Oppenheimer.[105] He also liked to spend a few weeks of each year at the Los Alamos National Laboratory,[106] where he collaborated with Nicholas Metropolis,[107] and with John von Neumann on Rayleigh–Taylor instability, the science of what occurs at the border between two fluids of different densities.[108]

Laura and Enrico Fermi at the Institute for Nuclear Studies, Los Alamos, 1954

Following the detonation of the first Soviet fission bomb in August 1949, Fermi, along with Isidor Rabi, wrote a strongly worded report for the committee, opposing the development of a hydrogen bomb on moral and technical grounds.[109] Nonetheless, Fermi continued to participate in work on the hydrogen bomb at Los Alamos as a consultant. Along with Stanislaw Ulam, he calculated that not only would the amount of tritium needed for Teller's model of a thermonuclear weapon be prohibitive, but a fusion reaction could still not be assured to propagate even with this large quantity of tritium.[110] Fermi was among the scientists who testified on Oppenheimer's behalf at the Oppenheimer security hearing in 1954, which resulted in the denial of Oppenheimer's security clearance.[111]

In his later years, Fermi continued teaching at the University of Chicago. His PhD students in the post-war period included Owen Chamberlain, Geoffrey Chew, Jerome Friedman, Marvin Goldberger, Tsung-Dao Lee, Arthur Rosenfeld and Sam Treiman.[112][113] Jack Steinberger was a graduate student.[114] Fermi conducted important research in particle physics, especially related to pions and muons. He made the first predictions of pion-nucleon resonance,[107] relying on statistical methods, since he reasoned that exact answers were not required when the theory was wrong anyway.[115] In a paper co-authored with Chen Ning Yang, he speculated that pions might actually be composite particles.[116] The idea was elaborated by Shoichi Sakata. It has since been supplanted by the quark model, in which the pion is made up of quarks, which completed Fermi's model and vindicated his approach.[117]

Fermi wrote a paper "On the Origin of Cosmic Radiation" in which he proposed that cosmic rays arose through material being accelerated by magnetic fields in interstellar space, which led to a difference of opinion with Teller.[115] Fermi examined the issues surrounding magnetic fields in the arms of a spiral galaxy.[118] He mused about what is now referred to as the "Fermi paradox": the contradiction between the presumed probability of the existence of extraterrestrial life and the fact that contact has not been made.[119]

Toward the end of his life, Fermi questioned his faith in society at large to make wise choices about nuclear technology.

Fermi died at age 53 of stomach cancer in his home in Chicago,[2] and was interred at Oak Woods Cemetery.[121]

Impact and legacy

As a person, Fermi seemed simplicity itself. He was extraordinarily vigorous and loved games and sport. On such occasions his ambitious nature became apparent.
He played tennis with considerable ferocity and when climbing mountains acted rather as a guide. One might have called him a benevolent dictator. I remember once at the top of a mountain Fermi got up and said: "Well, it is two minutes to two, let's all leave at two o'clock"; and of course, everybody got up faithfully and obediently. This leadership and self-assurance gave Fermi the name of "The Pope" whose pronouncements were infallible in physics. He once said: "I can calculate anything in physics within a factor 2 on a few sheets: to get the numerical factor in front of the formula right may well take a physicist a year to calculate, but I am not interested in that." His leadership could go so far that it was a danger to the independence of the person working with him. I recollect once, at a party at his house when my wife cut the bread, Fermi came along and said he had a different philosophy on bread-cutting and took the knife out of my wife's hand and proceeded with the job because he was convinced that his own method was superior. But all this did not offend at all, but rather charmed everybody into liking Fermi. He had very few interests outside physics and when he once heard me play on Teller's piano he confessed that his interest in music was restricted to simple tunes.

Egon Bretscher[122]

Fermi received numerous awards in recognition of his achievements, including the Matteucci Medal in 1926, the Nobel Prize for Physics in 1938, the Hughes Medal in 1942, the Franklin Medal in 1947, and the Rumford Prize in 1953. He was awarded the Medal for Merit in 1946 for his contribution to the Manhattan Project.[123] Fermi was elected a Foreign Member of the Royal Society (FRS) in 1950.[122] The Basilica of Santa Croce, Florence, known as the Temple of Italian Glories for its many graves of artists, scientists and prominent figures in Italian history, has a plaque commemorating Fermi.[124] In 1999, Time named Fermi on its list of the top 100 persons of the twentieth century.[125]

Fermi was widely regarded as an unusual case of a 20th-century physicist who excelled both theoretically and experimentally. The historian of physics C. P. Snow wrote that "if Fermi had been born a few years earlier, one could well imagine him discovering Rutherford's atomic nucleus, and then developing Bohr's theory of the hydrogen atom. If this sounds like hyperbole, anything about Fermi is likely to sound like hyperbole".[126] Fermi was known as an inspiring teacher, and was noted for his attention to detail, simplicity, and careful preparation of his lectures.[127] Later, his lecture notes were transcribed into books.[128] His papers and notebooks are today at the University of Chicago.[129] Victor Weisskopf noted how Fermi "always managed to find the simplest and most direct approach, with the minimum of complication and sophistication."[130]

Fermi's ability and success stemmed as much from his appraisal of the art of the possible as from his innate skill and intelligence. He disliked complicated theories, and while he had great mathematical ability, he would never use it when the job could be done much more simply. He was famous for getting quick and accurate answers to problems that would stump other people.
Later on, his method of getting approximate and quick answers through back-of-the-envelope calculations became informally known as the "Fermi method", and is widely taught.[131] (A toy illustration appears after the reference list below.) Fermi was fond of pointing out that Alessandro Volta, working in his laboratory, could have had no idea where the study of electricity would lead.[132]

Fermi is generally remembered for his work on nuclear power and nuclear weapons, especially the creation of the first nuclear reactor, and the development of the first atomic and hydrogen bombs. His scientific work has stood the test of time. This includes his theory of beta decay, his work with non-linear systems, his discovery of the effects of slow neutrons, his study of pion-nucleon collisions, and his Fermi–Dirac statistics. His speculation that a pion was not a fundamental particle pointed the way towards the study of quarks and leptons.[133]

Things named after Fermi

The sign at Enrico Fermi Street in Rome

Many things have been named in Fermi's honour. These include the Fermilab particle accelerator and physics lab in Batavia, Illinois, which was renamed in his honour in 1974,[134] and the Fermi Gamma-ray Space Telescope, which was named after him in 2008, in recognition of his work on cosmic rays.[135] Three nuclear reactor installations have been named after him: the Fermi 1 and Fermi 2 nuclear power plants in Newport, Michigan, the Enrico Fermi Nuclear Power Plant at Trino Vercellese in Italy,[136] and the RA-1 Enrico Fermi research reactor in Argentina.[137]

A synthetic element isolated from the debris of the 1952 Ivy Mike nuclear test was named fermium, in honour of Fermi's contributions to the scientific community.[138][139] This makes him one of 16 scientists who have elements named after them.[140] Since 1956, the United States Atomic Energy Commission has named its highest honour, the Fermi Award, after him. Recipients of the award include well-known scientists like Otto Hahn, Robert Oppenheimer, Edward Teller and Hans Bethe.[141]

Publications

• Introduzione alla Fisica Atomica (in Italian). Bologna: N. Zanichelli. 1928. OCLC 9653646.
• Fisica per i Licei (in Italian). Bologna: N. Zanichelli. 1929. OCLC 9653646.
• Molecole e cristalli (in Italian). Bologna: N. Zanichelli. 1934. OCLC 19918218.
• Thermodynamics. New York: Prentice Hall. 1937. OCLC 2379038.
• Fisica per Istituti Tecnici (in Italian). Bologna: N. Zanichelli. 1938.
• Fisica per Licei Scientifici (in Italian). Bologna: N. Zanichelli. 1938. (with Edoardo Amaldi)
• Elementary particles. New Haven: Yale University Press. 1951. OCLC 362513.

For a full list of his papers, see pages 75–78 in [122]

References

2. ^ a b "Enrico Fermi Dead at 53; Architect of Atomic Bomb". New York Times. 29 November 1954. Retrieved 21 January 2013.
3. ^ Segrè 1970, pp. 3–4, 8.
4. ^ Amaldi 2001, p. 23.
5. ^ Cooper 1999, p. 19.
6. ^ Segrè 1970, pp. 5–6.
7. ^ Fermi 1954, pp. 15–16.
8. ^ Segrè 1970, p. 7.
9. ^ Bonolis 2001, p. 315.
10. ^ Amaldi 2001, p. 24.
11. ^ Segrè 1970, pp. 11–12.
12. ^ Segrè 1970, pp. 8–10.
13. ^ Segrè 1970, pp. 11–13.
14. ^ Segrè 1970, pp. 15–18.
15. ^ Bonolis 2001, p. 320.
16. ^ a b Bonolis 2001, pp. 317–319.
17. ^ Segrè 1970, p. 20.
18. ^ "Über einen Widerspruch zwischen der elektrodynamischen und relativistischen Theorie der elektromagnetischen Masse". Physikalische Zeitschrift (in German). 23: 340–344. Retrieved 17 January 2013.
19. ^ Bertotti 2001, p. 115.
20. ^ a b c Bonolis 2001, p. 321.
21. ^ "Enrico Fermi L'Uomo, lo Scienziato e il Massone" (in Italian). Retrieved 4 March 2015.
22. ^ Bonolis 2001, pp. 321–324.
23. ^ Hey & Walters 2003, p. 61.
24. ^ Bonolis 2001, pp. 329–330.
25. ^ Cooper 1999, p. 31.
26. ^ Fermi 1954, pp. 37–38.
27. ^ Segrè 1970, p. 45.
28. ^ Fermi 1954, p. 38.
29. ^ a b Alison 1957, p. 127.
30. ^ "Enrico Fermi e i ragazzi di via Panisperna" (in Italian). University of Rome. Retrieved 20 January 2013.
31. ^ Segrè 1970, p. 61.
32. ^ Cooper 1999, pp. 38–39.
33. ^ a b Alison 1957, p. 130.
34. ^ "About Enrico Fermi". University of Chicago. Retrieved 20 January 2013.
35. ^ Mieli, Paolo (2 October 2001). "Così Fermi scoprì la natura vessatoria del fascismo". Corriere della Sera (in Italian). Archived from the original on 19 October 2013. Retrieved 20 January 2013.
36. ^ Direzione generale per gli archivi (2005). "Reale accademia d'Italia: inventario dell'archivio" (PDF) (in Italian). Rome: Ministero per i beni culturali e ambientali. p. xxxix. Archived from the original (PDF) on 7 September 2012. Retrieved 20 January 2013.
37. ^ "A Legal Examination of Mussolini's Race Laws". Printed Matter. Centro Primo Levi. Retrieved 7 August 2015.
38. ^ a b c Bonolis 2001, pp. 333–335.
39. ^ Amaldi 2001, p. 38.
40. ^ Fermi 1954, p. 217.
41. ^ Amaldi 2001, pp. 50–51.
42. ^ a b Bonolis 2001, p. 346.
43. ^ a b Fermi, E. (1968). "Fermi's Theory of Beta Decay (English translation by Fred L. Wilson, 1968)" (PDF). American Journal of Physics. 36 (12): 1150. Bibcode:1968AmJPh..36.1150W. doi:10.1119/1.1974382. Retrieved 20 January 2013.
44. ^ Joliot-Curie, Irène; Joliot, Frédéric (15 January 1934). "Un nouveau type de radioactivité" [A new type of radioactivity]. Comptes rendus hebdomadaires des séances de l'Académie des Sciences (in French). 198 (January–June 1934): 254–256.
45. ^ Joliot, Frédéric; Joliot-Curie, Irène (1934). "Artificial Production of a New Kind of Radio-Element" (PDF). Nature. 133 (3354): 201–202. Bibcode:1934Natur.133..201J. doi:10.1038/133201a0.
46. ^ Amaldi 2001a, pp. 152–153.
47. ^ Bonolis 2001, pp. 347–351.
48. ^ a b c d Amaldi 2001a, pp. 153–156.
49. ^ Segrè 1970, p. 73.
50. ^ a b De Gregorio, Alberto G. (2005). "Neutron physics in the early 1930s". Historical Studies in the Physical and Biological Sciences. 35 (2): 293–340. arXiv:physics/0510044. doi:10.1525/hsps.2005.35.2.293.
51. ^ Guerra, Francesco; Robotti, Nadia (December 2009). "Enrico Fermi's Discovery of Neutron-Induced Artificial Radioactivity: The Influence of His Theory of Beta Decay". Physics in Perspective. 11 (4): 379–404. Bibcode:2009PhP....11..379G. doi:10.1007/s00016-008-0415-1.
52. ^ Fermi, Enrico (25 March 1934). "Radioattività indotta da bombardamento di neutroni". La Ricerca scientifica (in Italian). 1 (5): 283.
53. ^ Fermi, E.; Amaldi, E.; d'Agostino, O.; Rasetti, F.; Segrè, E. (1934). "Artificial Radioactivity Produced by Neutron Bombardment". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 146 (857): 483. Bibcode:1934RSPSA.146..483F. doi:10.1098/rspa.1934.0168.
54. ^ a b Bonolis 2001, pp. 347–349.
55. ^ a b Amaldi 2001a, pp. 161–162.
56. ^ a b Bonolis 2001, pp. 347–352.
57. ^ "A Few Good Moderators: The Numbers". The Energy From Thorium Foundation. Retrieved 24 September 2013.
58. ^ Cooper 1999, p. 51.
59. ^ Cooper 1999, p. 52.
60. ^ Persico 2001, p. 40.
61. ^ Bonolis 2001, p. 352.
63. ^ Frisch, O. R. (1939). "Physical Evidence for the Division of Heavy Nuclei under Neutron Bombardment". Nature. 143 (3616): 276–276. Bibcode:1939Natur.143..276F. doi:10.1038/143276a0.
Archived from the original on 23 January 2009.
64. ^ Meitner, L.; Frisch, O.R. (1939). "Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction". Nature. 143 (3615): 239–240. Bibcode:1939Natur.143..239M. doi:10.1038/143239a0.
65. ^ a b Rhodes 1986, p. 267.
66. ^ Segrè 1970, pp. 222–223.
67. ^ Fermi, Enrico (12 December 1938). "Artificial radioactivity produced by neutron bombardment (Nobel Lecture)" (PDF). Retrieved 19 October 2013.
68. ^ Anderson, H.L.; Booth, E.; Dunning, J.; Fermi, E.; Glasoe, G.; Slack, F. (16 February 1939). "The Fission of Uranium". Physical Review. 55 (5): 511–512. Bibcode:1939PhRv...55..511A. doi:10.1103/PhysRev.55.511.2.
69. ^ Rhodes 1986, pp. 269–270.
70. ^ Von Halban, H.; Joliot, F.; Kowarski, L. (22 April 1939). "Number of Neutrons Liberated in the Nuclear Fission of Uranium". Nature. 143 (3625): 680–680. Bibcode:1939Natur.143..680V. doi:10.1038/143680a0.
74. ^ Salvetti 2001, pp. 186–188.
75. ^ Bonolis 2001, pp. 356–357.
76. ^ Salvetti 2001, p. 185.
77. ^ Salvetti 2001, pp. 188–189.
78. ^ Rhodes 1986, pp. 314–317.
79. ^ Salvetti 2001, p. 190.
80. ^ Salvetti 2001, p. 195.
81. ^ Salvetti 2001, pp. 194–196.
82. ^ Rhodes 1986, pp. 399–400.
83. ^ a b Salvetti 2001, pp. 198–202.
84. ^ Fermi, E. (1946). "The Development of the First Chain Reaction Pile". Proc. Am. Philos. Soc. 90: 20–24. JSTOR 3301034.
85. ^ Compton 1956, p. 144.
86. ^ Bonolis 2001, p. 366.
87. ^ Hewlett & Anderson 1962, p. 207.
88. ^ Hewlett & Anderson 1962, pp. 208–211.
89. ^ Jones 1985, p. 205.
90. ^ Segrè 1970, p. 104.
91. ^ Hewlett & Anderson 1962, pp. 304–307.
92. ^ Jones 1985, pp. 220–223.
93. ^ Bonolis 2001, pp. 368–369.
94. ^ Hawkins 1961, p. 213.
95. ^ Rhodes 1986, pp. 674–677.
96. ^ Jones 1985, pp. 531–532.
97. ^ Fermi 1954, pp. 244–245.
98. ^ Segrè 1970, p. 157.
99. ^ Segrè 1970, p. 167.
100. ^ "Enrico Fermi" on NASOnline.org
101. ^ Holl, Hewlett & Harris 1997, pp. xix–xx.
102. ^ Segrè 1970, p. 171.
103. ^ Segrè 1970, p. 172.
104. ^ Hewlett & Anderson 1962, p. 643.
105. ^ Hewlett & Anderson 1962, p. 648.
106. ^ Segrè 1970, p. 175.
107. ^ a b Segrè 1970, p. 179.
108. ^ Bonolis 2001, p. 381.
109. ^ Hewlett & Duncan 1969, pp. 380–385.
110. ^ Hewlett & Duncan 1969, pp. 527–530.
111. ^ Cooper 1999, pp. 102–103.
112. ^ Enrico Fermi at the Mathematics Genealogy Project
113. ^ "Jerome I. Friedman – Autobiography". The Nobel Foundation. 1990. Retrieved 16 March 2013.
114. ^ "Jack Steinberger – Biographical". Nobel Foundation. Retrieved 15 August 2013.
115. ^ a b Bonolis 2001, pp. 374–379.
116. ^ Fermi, E.; Yang, C. (1949). "Are Mesons Elementary Particles?". Physical Review. 76 (12): 1739. Bibcode:1949PhRv...76.1739F. doi:10.1103/PhysRev.76.1739.
117. ^ Jacob & Maiani 2001, pp. 254–258.
118. ^ Bonolis 2001, p. 386.
119. ^ Jones 1985a, pp. 1–3.
120. ^ Fermi 2004, p. 142.
121. ^ Hucke & Bielski 1999, pp. 147, 150.
122. ^ a b c Bretscher, E.; Cockcroft, J. D. (1955). "Enrico Fermi. 1901–1954". Biographical Memoirs of Fellows of the Royal Society. 1: 68. doi:10.1098/rsbm.1955.0006. JSTOR 769243.
123. ^ Alison 1957, pp. 135–136.
124. ^ "Enrico Fermi in Santa Croce, Florence". gotterdammerung.org. Retrieved 10 May 2015.
125. ^ "Time 100 Persons of the Century". Time. 6 June 1999. Retrieved 2 March 2013.
126. ^ Snow 1981, p. 79.
127. ^ Ricci 2001, pp. 297–302.
128. ^ Ricci 2001, p. 286.
129. ^ "Enrico Fermi Collection". University of Chicago. Retrieved 22 January 2013.
130. ^ Salvini 2001, p. 5.
131. ^ Von Baeyer 1993, pp. 3–8.
132. ^ Fermi 1954, p. 242.
133. ^ Salvini 2001, p. 17.
134. ^ "About Fermilab – History". Fermilab. Retrieved 21 January 2013.
135. ^ "First Light for the Fermi Space Telescope". National Aeronautics and Space Administration. Retrieved 21 January 2013.
136. ^ "Nuclear Power in Italy". World Nuclear Association. Retrieved 21 January 2013.
137. ^ "Report of the National Atomic Energy Commission of Argentina (CNEA)" (PDF). CNEA. November 2004. Archived from the original (PDF) on 14 May 2013. Retrieved 21 January 2013.
138. ^ Seaborg 1978, p. 2.
139. ^ Hoff 1978, pp. 39–48.
140. ^ Kevin A. Boudreaux. "Derivations of the Names and Symbols of the Elements". Angelo State University.
141. ^ "The Enrico Fermi Award". United States Department of Energy. Retrieved 25 August 2010.
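As promised above, here is a toy illustration of the back-of-the-envelope "Fermi method": the classic "piano tuners in Chicago" estimate. Every number below is an assumed order-of-magnitude input chosen purely for illustration, not a sourced figure.

```python
# Classic Fermi problem: roughly how many piano tuners work in Chicago?
# Every number is an assumed order-of-magnitude input, not a sourced fact.
population           = 3_000_000  # people in Chicago, roughly
people_per_household = 3
piano_fraction       = 1 / 20     # households owning a piano
tunings_per_year     = 1          # tunings per piano per year

tunings_needed = (population / people_per_household
                  * piano_fraction * tunings_per_year)

tunings_per_tuner_per_day = 4     # including travel time
working_days_per_year     = 250
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year

# ~50: the point is the order of magnitude, not the exact count.
print(f"~{tunings_needed / tuner_capacity:.0f} piano tuners")
```

The point of the method is that multiplying several roughly estimated factors tends to cancel individual errors, landing within a factor of a few of the true answer.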
I want to make a diffusion kernel, which involves $e^{\beta A}$, where A is a large matrix (25k by 25k). It is an adjacency matrix, so it's symmetric and very sparse. Does anyone have a recommendation of a tool to solve this? I use the term "tool" loosely - if you know that transforming it in this way first or whatever is useful then I'd like to know that.

I am going with a hack - since the kernel "diffuses" relatively quickly, I just take only the neighbourhood around the two vertices that I want. This gives me a much reduced adjacency matrix which I can then exponentiate without difficulty. I'm not familiar enough with the kernel function though to know how severely this is screwing up my results, and it's imperfect at best, so I'm still interested if anyone has a better idea.

docs.google.com/… – Steve Huntsman May 28 '10 at 14:02
You might find this article of interest cs.cornell.edu/cv/ResearchPDF/19ways+.pdf – Guy Katriel May 28 '10 at 14:26
The two links posted so far are identical. – j.c. May 28 '10 at 14:47
In MATLAB you'll want to sparsify explicitly if you haven't already; the "sparse" command does this. Then use "eigs" (not "eig") to return the eigenvectors. Do what everyone else is saying (if your matrix is really that sparse, MATLAB should be up to it on a modern laptop) and then compare the results you obtain with "expm" (if you can). I'd be surprised if the calculation took more than a few minutes. – Steve Huntsman May 28 '10 at 20:40
Xodarap, why are you exponentiating this matrix, and where does the problem come from? In other words, do you want the object $e^A$, or are you interested in computing its action on a given vector? These are (at the numerical linear algebra level) somewhat different questions. I'll be happy to point you to some references if you specify what you're trying to do. – Nilima Nigam Jun 23 '11 at 5:26

8 Answers

Surprised that no one mentioned Expokit, http://www.maths.uq.edu.au/expokit/ It does exactly what was requested, and is available in several different implementations (including Matlab).

The book by Higham and the "nineteen dubious ways" paper deal with the dense case only. For the sparse case, the best way to go is using an algorithm that computes the so-called action, i.e., the map $v \mapsto \exp(A)v$. See e.g. Al-Mohy, http://epubs.siam.org/sisc/resource/1/sjoce3/v33/i2/p488_s1?isAuthorized=no. The matrix $\exp(A)$ itself is full and unstructured, and generally you do not want to use it. (A short SciPy sketch of this action-based approach appears at the end of this thread.) If you really need it, though, check out a series of papers by Benzi and coauthors: they show that the off-diagonal elements of many matrix functions decay exponentially, and thus your matrix might be "nearly banded".

Al-Mohy and Higham's paper is great if you are dealing with sparse matrices. A preprint of the paper can be found on Higham's website, and he has MATLAB code that implements the algorithm. – Marcus P S Jan 15 '14 at 0:50

This is not an answer, but it's too long for a comment. First, you need advice from a numerical analyst, not me. Computing matrix exponentials is a well-studied problem with a large literature. For one example, the recent book by Higham "Functions of matrices. Theory and computation" devotes a chapter to it. Matlab has a builtin routine for it. The trick will be to take advantage of the sparseness, which almost certainly rules out an approach based on diagonalization.
Taylor series are not likely to help---try computing $\exp(100)$ using the series expansion about $0$. Also, just because you can write down the problem you want to solve using a matrix exponential, does not guarantee this is the best way to solve it. (To give a crude example, the solution to the linear system $Ax=b$ is $A^{-1}b$, but no-one in their right mind solves linear systems by computing inverses.)

Glad somebody sees this my way. I googled "diffusion kernel," this problem is very far from being simply about exponentiating matrices. Then I deleted my answer, nobody seemed interested. Paper by Kondor and Lafferty, presentations by Liang Sun and then Bruno Jedynak. – Will Jagy May 29 '10 at 18:26
Yeah, I must admit that when I asked this question I didn't realize it was so unsolved. I thought the answer would be "use the really-big-sparse-matrix add-on to Matlab" or something. That being said, sparse adjacency graphs (e.g. the web, genome mapping, etc.) appear all the time, and so I don't believe that there is no acceptable solution - I will accept that there is no perfect solution, but the problem seems too common for there to be no standard toolkit. – Xodarap May 30 '10 at 20:54
@Xodarap, to be clear: there are scores of excellent algorithms out there for sparse matrix operations. What we need to get from you is a categorical statement like 'I want the matrix exponential itself' or 'I want the solution of the diffusion equation $u_t=Au$ with given data'. There are lots of acceptable approaches in either case, but they are not the same. As Will and Chris point out, it is rare for someone to genuinely need $e^A$ for a large, symmetric and sparse $A$. – Nilima Nigam Jun 23 '11 at 15:02

I've asked for some clarification in a comment. In the meanwhile, if you're looking for software, I'll assume you've tried PETSc or Trilinos already? Here's a link to the freeware by Jiri Pittner, which links to BLAS routines as well: http://www.pittnerovi.com/la/ Here's a site from INRIA http://verdandi.gforge.inria.fr/doc/linear_algebra_libraries.pdf

If you have a sparse matrix with localized effect (e.g. small valences), fast eigenvalue drop-off and are required to compute the full matrix exponential, then you might be interested in 'diffusion wavelets'. While calculating the exponential they are as well calculating a basis where the result is still sparse. Yet I am not aware of a ready-to-use implementation.

You can use the Chebyshev polynomial expansion to calculate the effect of the matrix exponential on a vector, which is a standard technique in the quantum chemistry community; the method is extremely stable and fast. This method was developed by Tal-Ezer and Kosloff in an article named "An accurate and efficient scheme for propagating the time dependent Schrödinger equation". You can also see a Reviews of Modern Physics article by Alexander Weiße which deals with the Kernel Polynomial Method (a generalization of the Chebyshev-type algorithms). I assume that to access these references you have a subscription to these scientific journals.

If your matrix is diagonalizable, say $A = PDP^{-1}$, then $\exp(A) = P \exp(D) P^{-1}$. If your matrix is not diagonalizable and you need the more general Jordan Canonical Form, this approach may not work.
JCF is not suitable for numerical computation since forming the JCF is a discontinuous process: arbitrarily close matrices can map to canonical forms that differ by an integer in one entry. You could calculate $\exp(A)$ directly by its Taylor series. Then the problem becomes how to efficiently calculate powers of $A$. Maybe you could take advantage of your particular sparsity structure to calculate these powers.

Do you know of a good way to diagonalize such a large matrix? Figuring out all 25k eigenvalues seems very time-consuming. – Xodarap May 28 '10 at 15:42
You don't have to calculate all of the Taylor series. If you let P be the characteristic polynomial of the matrix, then you can write exp(A) = g(A) * P(A) + rest, where g is entire, and Cayley-Hamilton then gives exp(A) = rest (you can divide entire functions of matrices by polynomials). The rest can be calculated by finite differences, if I remember correctly. – Gunnar Þór Magnússon May 28 '10 at 16:33
Xodarap says A is real symmetric, so it is indeed diagonalizable. So as Xodarap points out above, the real question is how to go about diagonalizing. – Mark Meckes May 28 '10 at 16:58
A full diagonalization will not take advantage of sparsity. – Terry Loring Dec 22 '13 at 18:20

Have a look at a recent paper discussing how matrix sparseness and locality go together: "Decay Properties of Spectral Projectors with Applications to Electronic Structure" by Benzi et al. in SIAM Review, 55(1), 3--64, (2013). The paper has applications that go beyond what the title indicates. Much of the paper covers continuous functions applied to sparse Hermitian matrices. If you have some way of determining a priori which matrix elements will be small, you can compute a polynomial of the matrix quickly. If your graph is related to a surface, you have an idea of how far apart on the graph two vertices need to be before they can be neglected.

To decide what polynomial to use, I would suggest you get an approximation of the operator norm. This is fast for a sparse matrix. In MATLAB you use normest. In other languages see: "Estimating the matrix p-norm" by Nicholas J. Higham, Numerische Mathematik, 62(1), 539--555, (1992). The code there simplifies in the case $p=2$, which is the case you want. This norm estimate, rounded up a bit for good measure, tells you where the spectrum of your matrix sits. Now get (say from a truncated power series) a polynomial that is close enough for your purposes to the actual exponential on the spectrum of your matrix.

Even if you can't figure which matrix elements of the answer you will zero-out, if you can accept a modest error and so deal with a polynomial of relatively small degree, then you are just needing to compute several powers of a sparse matrix. It is then a question of how sparse you start with vs. how high a power you need. I will warn you that I find MATLAB does not do so well taking products of sparse matrices. I think it is optimized for minimizing data storage, not matrix multiplication.
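To make the action-based suggestion above concrete, here is a minimal sketch using SciPy's scipy.sparse.linalg.expm_multiply, an implementation of the Al-Mohy–Higham algorithm cited in one of the answers. The random graph, the value of beta, and the vertex pair below are illustrative stand-ins for the asker's actual data:

```python
# Minimal sketch of the "compute the action, not the matrix" approach,
# using scipy.sparse.linalg.expm_multiply (Al-Mohy & Higham).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

n = 25_000
beta = 0.5  # assumed diffusion parameter

# Toy sparse symmetric adjacency matrix standing in for the real graph.
rng = np.random.default_rng(0)
r = rng.integers(0, n, size=100_000)
c = rng.integers(0, n, size=100_000)
A = sp.coo_matrix((np.ones_like(r, dtype=float), (r, c)), shape=(n, n))
A = ((A + A.T) > 0).astype(float).tocsr()  # symmetrize and binarize

# Kernel entry K(i, j) = [exp(beta * A)]_{ij}: apply the exponential to
# the j-th unit vector and read off component i.  Only one column of the
# (otherwise dense) exponential is ever materialized.
i, j = 7, 42
e_j = np.zeros(n)
e_j[j] = 1.0
col_j = expm_multiply(beta * A, e_j)
print(f"K({i}, {j}) ~ {col_j[i]:.3e}")
```

For a handful of vertex pairs this never forms the dense 25k-by-25k exponential; an all-pairs kernel would still require one such solve per column.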
Sunday, November 08, 2009

Google can render your equations for you!

In my last post I mentioned that Knol and Google Docs now have equation editing. What I didn't mention is that this is an undocumented feature of the public Google Chart API, and it's easy to use. For instance, if I wanted to include the Schrödinger equation on this blog, I would construct a URL like this:

http://chart.apis.google.com/chart?cht=tx&chf=bg,s,FFFFFF00&chco=AACCFF&chl=i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},\,t) = \hat H \Psi(\mathbf{r},t)

Put your code in the chl parameter. The chf parameter lets you specify a background color in RGBA, chco lets you set the foreground color in RGB. When you drop it inside an image tag you get this:

If you anticipate making 250,000 calls to the chart server a day, contact Google first. There's no limit to how much you can use it, but they reserve the right to turn you off.

Anonymous said...
Firefox can render my equations for me! (It is called MathML.)

tangentsoft said...
A-w-e-s-o-m-e. Several years ago I hacked up a LaTeX-to-GIF equation formatter for my own site. It works reasonably well, but never did figure out how to get it to composite onto anything but paper white. Fortunately my site's background is close enough to white that you can't tell without zooming in, which normal visitors don't do. Even ignoring that, the Google output looks better than mine, probably more because it's using PNG transparency, rather than the GIF "single magic color" transparency that my scheme uses. I'm now itching to replace my current lash-up.

Edward Kmett said...
Only on some platforms and if the client has the right fonts installed when Mercury is in retrograde and even when it can, the spacing and alignment is nowhere near as good as it is in LaTeX. Did they ever fix it on MacOS X? Speaking as someone who once mistakenly assumed that he could use Firefox MathML support gainfully in a firefox plugin, let alone a web page, eight months of wasted development later, I find myself much more jaded.
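A small helper of my own (not from the post) that assembles the same URL. The cht=tx chart type and the chf/chco/chl parameters are the ones described above; percent-encoding keeps the LaTeX backslashes and braces URL-safe:

```python
# Sketch of a URL builder for the equation-rendering trick described above.
# The parameters (cht=tx, chf, chco, chl) are the ones from the post; this
# helper itself is illustrative, not part of any official client library.
from urllib.parse import quote

CHART_BASE = "http://chart.apis.google.com/chart"

def equation_img_url(latex: str, bg: str = "FFFFFF00", fg: str = "AACCFF") -> str:
    """Return a URL that renders `latex` as an image via the Chart API."""
    return (f"{CHART_BASE}?cht=tx"
            f"&chf=bg,s,{bg}"          # background fill, RGBA
            f"&chco={fg}"              # foreground (equation) color, RGB
            f"&chl={quote(latex)}")    # the LaTeX source itself

print(equation_img_url(
    r"i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t)"
    r"=\hat H \Psi(\mathbf{r},t)"))
```

Dropping the returned string into an img tag's src attribute reproduces the rendered equation shown in the post.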
This article is about the chemistry of hydrogen. For the physics of atomic hydrogen, see Hydrogen atom. For other uses, see Hydrogen (disambiguation).

Hydrogen, 1H

Purple glow in its plasma state
Spectral lines of hydrogen

General properties
Name, symbol: hydrogen, H
Pronunciation: /ˈhaɪdrədʒən/[1]
Appearance: colorless gas
Atomic number (Z): 1
Group, block: group 1, s-block
Period: period 1
Element category: diatomic nonmetal
Standard atomic weight (Ar): 1.008[2] (1.00784–1.00811)[3]
Electron configuration: 1s1
Electrons per shell: 1

Physical properties
Color: colorless
Phase: gas
Melting point: 13.99 K (−259.16 °C, −434.49 °F)
Boiling point: 20.271 K (−252.879 °C, −423.182 °F)
Density at STP (0 °C and 101.325 kPa): 0.08988 g/L
Density when liquid, at m.p.: 0.07 g/cm3 (solid: 0.0763 g/cm3)[4]
Density when liquid, at b.p.: 0.07099 g/cm3
Triple point: 13.8033 K, 7.041 kPa
Critical point: 32.938 K, 1.2858 MPa
Heat of fusion (H2): 0.117 kJ/mol
Heat of vaporization (H2): 0.904 kJ/mol
Molar heat capacity (H2): 28.836 J/(mol·K)

Atomic properties
Oxidation states: −1, +1 (an amphoteric oxide)
Electronegativity: 2.20 (Pauling scale)
Ionization energy: 1st: 1312.0 kJ/mol
Covalent radius: 31±5 pm
Van der Waals radius: 120 pm
Crystal structure: hexagonal
Speed of sound: 1310 m/s (gas, 27 °C)
Thermal conductivity: 0.1805 W/(m·K)
Magnetic ordering: diamagnetic[5]
Magnetic susceptibility (χmol): −3.98·10−6 cm3/mol (298 K)[6]
CAS Number: 1333-74-0
Discovery: Henry Cavendish[7][8] (1766)
Named by: Antoine Lavoisier[9] (1783)

Most stable isotopes of hydrogen:
1H: 99.98% natural abundance, stable with 0 neutrons
2H: 0.02% natural abundance, stable with 1 neutron
3H: trace, half-life 12.32 y, β− decay (0.01861 MeV) to 3He

Hydrogen is a chemical element with chemical symbol H and atomic number 1. With a standard atomic weight of circa 1.008, hydrogen is the lightest element on the periodic table. Its monatomic form (H) is the most abundant chemical substance in the Universe, constituting roughly 75% of all baryonic mass.[10][note 1] Non-remnant stars are mainly composed of hydrogen in the plasma state. The most common isotope of hydrogen, termed protium (name rarely used, symbol 1H), has one proton and no neutrons.

Hydrogen gas was first artificially produced in the early 16th century by the reaction of acids on metals. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance,[12] and that it produces water when burned, the property for which it was later named: in Greek, hydrogen means "water-former". Industrial production is mainly from steam reforming natural gas, and less often from more energy-intensive methods such as the electrolysis of water.[13] Most hydrogen is used near the site of its production, the two largest uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals,[14] complicating the design of pipelines and storage tanks.[15]

The Space Shuttle Main Engine burnt hydrogen with oxygen, producing a nearly invisible flame at full thrust.
Explosion of a hydrogen–air mixture.

Hydrogen gas (dihydrogen or molecular hydrogen)[16] is highly flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume.[17] The enthalpy of combustion is −286 kJ/mol:[18]

2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)

Electron energy levels

Main article: Hydrogen atom
Depiction of a hydrogen atom with size of central proton shown, and the atomic diameter shown as about twice the Bohr model radius (image not to scale)

The ground state energy level of the electron in a hydrogen atom is −13.6 eV,[22] which is equivalent to an ultraviolet photon of roughly 91 nm wavelength.[23] The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the atomic electron and proton are held together by electromagnetic force, while planets and celestial objects are held by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies.[24]

A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation, Dirac equation or even the Feynman path integral formulation to calculate the probability density of the electron around the proton.[25] The most complicated treatments allow for the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum at all—illustrating how the "planetary orbit" differs from electron motion.

Elemental molecular forms

First tracks observed in liquid hydrogen bubble chamber at the Bevatron

There exist two different spin isomers of hydrogen diatomic molecules that differ by the relative spin of their nuclei.[26] In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular spin quantum number of 1 (1/2 + 1/2); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (1/2 − 1/2). At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form".[27] The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but because the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed more fully in spin isomers of hydrogen.[28] The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal properties.[29]

The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly.[30] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss of liquefied material.
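As an aside on the 75%/25% split quoted above, the equilibrium ratio follows from counting spin and rotational states (a standard statistical-mechanics sketch, not part of the original article):

```latex
% Equilibrium ortho:para ratio from spin and rotational degeneracies.
% Ortho H2 occupies odd rotational levels J, para H2 even ones.
\frac{n_\mathrm{ortho}}{n_\mathrm{para}}
  = \frac{3 \sum_{J\,\mathrm{odd}} (2J+1)\, e^{-E_J / k_B T}}
         {\sum_{J\,\mathrm{even}} (2J+1)\, e^{-E_J / k_B T}},
\qquad E_J = \frac{\hbar^2}{2I}\, J(J+1)
```

At high temperature the rotational sums cancel and the ratio tends to 3:1 (75% ortho); as the temperature falls toward zero only the J = 0 para level survives, matching the statements above.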
Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel[31] compounds, are used during hydrogen cooling.[32]

Compounds

Further information: Category:Hydrogen compounds

Covalent and organic compounds

While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I), or oxygen; in these compounds hydrogen takes on a partial positive charge.[33] When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with the hydrogen of other similar molecules, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules.[34][35] Hydrogen also forms compounds with less electronegative elements, such as metals and metalloids, where it takes on a partial negative charge. These compounds are often known as hydrides.[36]

Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array with heteroatoms that, because of their general association with living things, are called organic compounds.[37] The study of their properties is known as organic chemistry[38] and their study in the context of living organisms is known as biochemistry.[39] By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry.[37] Millions of hydrocarbons are known, and they are usually formed by complicated synthetic pathways that seldom involve elementary hydrogen.

Hydrides

Main article: Hydride

Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like hydrides, was demonstrated by Moers in 1920 by the electrolysis of molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode.[40] For hydrides other than group 1 and 2 metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group 2 hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III).

Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary aluminium hydride.[41] Binary indium hydride has not yet been identified, although larger complexes exist.[42]

Protons and acids

Further information: Acid–base reaction

Oxidation of hydrogen removes its electron and gives H+, which contains no electrons and a nucleus which is usually composed of one proton. That is why H+ is often called a proton.

A bare proton, H+, cannot exist in solution or in ionic crystals because of its unstoppable attraction to other atoms or molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will remain attached to them.
However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a species. To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+).[44] Other oxonium ions are found when water is in acidic solution with other solvents.[45]

Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation.[46]

Isotopes

Main article: Isotopes of hydrogen

Hydrogen discharge (spectrum) tube
Deuterium discharge (spectrum) tube

Protium, the most common isotope of hydrogen, has one proton and one electron. Unique among all stable isotopes, it has no neutrons (see diproton for a discussion of why others do not exist). Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.[47][48]

• 1H is the most common hydrogen isotope, with an abundance of more than 99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium.
• 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus. Deuterium compounds have applications in chemistry and biology, for example in 2H-NMR spectroscopy.[50] Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.[51]
• 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years.[43] It is so radioactive that it can be used in luminous paint, making it useful in such things as watches. The glass prevents the small amount of radiation from getting out.[52] Small amounts of tritium are produced naturally by the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests.[53] It is used in nuclear fusion reactions,[54] as a tracer in isotope geochemistry,[55] and in specialized self-powered lighting devices.[56] Tritium has also been used in chemical and biological labeling experiments as a radiolabel.[57]

Hydrogen is the only element that has different names for its isotopes in common use today. During the early study of radioactivity, various heavy radioactive isotopes were given their own names, but such names are no longer used, except for deuterium and tritium. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for protium.[58] In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred.[59]

The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, discovered in 1960,[61] is also sometimes considered as a light radioisotope of hydrogen, due to the mass difference between the antimuon and the electron.[60] During the muon's 2.2 µs lifetime, muonium can enter into compounds such as muonium chloride (MuCl) or sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively.[62]

Discovery and use

In 1671, Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas.[63][64] In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid reaction "inflammable air".
He speculated that "inflammable air" was in fact identical to the hypothetical substance called "phlogiston"[65][66] and further found in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element.[7][8] In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek ὑδρο- hydro meaning "water" and -γενής genes meaning "creator")[9] when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned.[8]

Antoine-Laurent de Lavoisier

Lavoisier produced hydrogen for his experiments on mass conservation by reacting a flux of steam with metallic iron through an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the set of following reactions:

Fe + H2O → FeO + H2
2 Fe + 3 H2O → Fe2O3 + 3 H2
3 Fe + 4 H2O → Fe3O4 + 4 H2

Many metals such as zirconium undergo a similar reaction with water leading to the production of hydrogen.

Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask.[8] He produced solid hydrogen the next year.[8] Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck.[7] Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932.[8] François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen, in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.[8]

The first hydrogen-filled balloon was invented by Jacques Charles in 1783.[8] Hydrogen provided the lift for the first reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard.[8] German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of which had its maiden flight in 1900.[8] Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war.

The first hydrogen-cooled turbogenerator, with gaseous hydrogen as a coolant in the rotor and the stator, went into service in 1937 at Dayton, Ohio, with the Dayton Power & Light Co.;[67] because of the thermal conductivity of hydrogen gas, this is the most common type in its field today.

The nickel hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation technology satellite-2 (NTS-2).[68] For example, the ISS,[69] Mars Odyssey[70] and the Mars Global Surveyor[71] are equipped with nickel-hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009,[72] more than 19 years after launch and 13 years beyond their design life.[73]

Role in quantum theory

Hydrogen emission spectrum in the visible range: narrow lines of violet, blue, cyan, and red on a black background.
Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure.[74] Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s.

Antihydrogen (H̄) is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. Antihydrogen is the only type of antimatter atom to have been produced as of 2015.[76][77]

Natural occurrence

Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and more than 90% by number of atoms. (Most of the mass of the universe, however, is not in the form of chemical-element type matter, but rather is postulated to occur as yet-undetected forms of mass such as dark matter and dark energy.[78]) This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton-proton reaction and the CNO cycle nuclear fusion.[79]

Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite different from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the interstellar medium. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the Universe up to redshift z=4.[80]

Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. However, hydrogen is the third most abundant element on the Earth's surface,[81] mostly in the form of chemical compounds such as hydrocarbons and water.[43] Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of increasing importance.[82]

A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This charged ion has also been observed in the upper atmosphere of the planet Jupiter. The ion is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium.[83] Neutral triatomic hydrogen H3 can exist only in an excited form and is unstable.[84] By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe.
Production

Main article: Hydrogen production

Steam reforming

Hydrogen can be prepared in several different ways, but economically the most important processes involve removal of hydrogen from hydrocarbons; about 95% of hydrogen production around the year 2000 came from steam reforming.[85] Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[86] At high temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2:

CH4 + H2O → CO + 3 H2

This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure H2 is the most marketable product and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon:

CH4 → C + 2 H2

Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the steam by using carbon monoxide through the water gas shift reaction:

CO + H2O → CO2 + H2

Other important methods for H2 production include partial oxidation of hydrocarbons:[87]

2 CH4 + O2 → 2 CO + 4 H2

and the coal reaction, which can serve as a prelude to the shift reaction above:[86]

C + H2O → CO + H2

Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia, hydrogen is generated from natural gas.[88] Electrolysis of brine to yield chlorine also produces hydrogen as a co-product.[89]

In the laboratory, H2 is usually prepared by the reaction of dilute non-oxidizing acids on some reactive metals such as zinc, with Kipp's apparatus:

Zn + 2 H+ → Zn2+ + H2

Aluminium can also produce H2 upon treatment with bases:

2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2

The electrolysis of water is a simple method of producing hydrogen:

2 H2O(l) → 2 H2(g) + O2(g)

An alloy of aluminium and gallium in pellet form added to water can be used to generate hydrogen. The process also produces alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used. This has important potential implications for a hydrogen economy, as hydrogen can be produced on-site and does not need to be transported.[91]

There are more than 200 thermochemical cycles which can be used for water splitting; around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine cycle, copper–chlorine cycle and hybrid sulfur cycle, are under research and in the testing phase for producing hydrogen and oxygen from water and heat without using electricity.[92] A number of laboratories (including in France, Germany, Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen from solar energy and water.[93]

Anaerobic corrosion

Under anaerobic conditions, iron and steel alloys are slowly oxidized by the protons of water, which are concomitantly reduced to molecular hydrogen (H2). The anaerobic corrosion of iron leads first to the formation of ferrous hydroxide (green rust) and can be described by the following reaction:

Fe + 2 H2O → Fe(OH)2 + H2

In its turn, under anaerobic conditions, the ferrous hydroxide (Fe(OH)2) can be oxidized by the protons of water to form magnetite and molecular hydrogen.
This process is described by the Schikorr reaction:

3 Fe(OH)2 → Fe3O4 + 2 H2O + H2
(ferrous hydroxide → magnetite + water + hydrogen)

The well-crystallized magnetite (Fe3O4) is thermodynamically more stable than the ferrous hydroxide (Fe(OH)2). This process occurs during the anaerobic corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table.

Geological occurrence: the serpentinization reaction

In the absence of atmospheric oxygen (O2), in deep geological conditions prevailing far away from the Earth's atmosphere, hydrogen (H2) is produced during the process of serpentinization by the anaerobic oxidation, by the water protons (H+), of the ferrous (Fe2+) silicate present in the crystal lattice of fayalite (Fe2SiO4, the olivine iron end-member). The corresponding reaction leading to the formation of magnetite (Fe3O4), quartz (SiO2) and hydrogen (H2) is the following:

3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2
(fayalite + water → magnetite + quartz + hydrogen)

This reaction closely resembles the Schikorr reaction observed in the anaerobic oxidation of the ferrous hydroxide in contact with water.

Formation in transformers

Of all the fault gases formed in power transformers, hydrogen is the most common and is generated under most fault conditions; thus, formation of hydrogen is an early indication of serious problems in the transformer's life cycle.[94]

Consumption in processes

Large quantities of H2 are needed in the petroleum and chemical industries. The largest applications of H2 in petrochemical plants include hydrodealkylation, hydrodesulfurization, and hydrocracking. H2 has several other important uses; for example, it is used as a reducing agent of metallic ores.[95]

Hydrogen is highly soluble in many rare earth and transition metals[96] and is soluble in both nanocrystalline and amorphous metals.[97] Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice.[98] These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals,[14] complicating the design of pipelines and storage tanks.[15]

Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding.[99][100] H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies.[101] Because H2 is lighter than air, having a little more than 1/14 of the density of air, it was once widely used as a lifting gas in balloons and airships.[102]

In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer gas for minute leak detection. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries.[103] Hydrogen is an authorized food additive (E 949) that allows food-package leak testing, among other anti-oxidizing properties.[104]

Hydrogen's rarer isotopes also each have specific applications.
Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions.[8] Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects.[105] Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs,[106] as an isotopic label in the biosciences,[57] and as a radiation source in luminous paints.[107]

Hydrogen is commonly used in power stations as a coolant in generators, due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases.

Energy carrier

Hydrogen is not an energy resource,[109] except in the hypothetical context of commercial nuclear fusion power plants using deuterium or tritium, a technology presently far from development.[110] The Sun's energy comes from nuclear fusion of hydrogen, but this process is difficult to achieve controllably on Earth.[111] Elemental hydrogen from solar, biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery. Hydrogen may be obtained from fossil sources (such as methane), but these sources are unsustainable.[109]

The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is higher.[109] Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible future carrier of energy on an economy-wide scale.[112] For example, CO2 sequestration followed by carbon capture and storage could be conducted at the point of H2 production from fossil fuels.[113] Hydrogen used in transportation would burn relatively cleanly, with some NOx emissions,[114] but without carbon emissions.[113] However, the infrastructure costs associated with full conversion to a hydrogen economy would be substantial.[115] Fuel cells can convert hydrogen and oxygen directly to electricity more efficiently than internal combustion engines.[116]

Semiconductor industry

Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties.[117] It is also a potential electron donor in various oxide materials, including ZnO,[118][119] SnO2, CdO, MgO,[120] ZrO2, HfO2, La2O3, Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3.[121]

Biological reactions

H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its component two protons and two electrons. Creation of hydrogen gas occurs in the transfer of reducing equivalents, produced during pyruvate fermentation, to water.[122] The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle.[123]

Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms.
Some such organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast.[124] Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen.[125] Efforts have also been undertaken with genetically modified algae in a bioreactor.[126]

Safety and precautions

Main article: Hydrogen safety

Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form.[127] In addition, liquid hydrogen is a cryogen and presents dangers (such as frostbite) associated with very cold liquids.[128] Hydrogen dissolves in many metals and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[129] leading to cracks and explosions.[130] Hydrogen gas leaking into external air may spontaneously ignite. Moreover, hydrogen fire, while being extremely hot, is almost invisible, and thus can lead to accidental burns.[131]

Notes

2. ^ 286 kJ/mol: energy per mole of the combustible material (molecular hydrogen)

References

1. ^ Simpson, J. A.; Weiner, E. S. C. (1989). "Hydrogen". Oxford English Dictionary. 7 (2nd ed.). Clarendon Press. ISBN 0-19-861219-2.  7. ^ a b c "Hydrogen". Van Nostrand's Encyclopedia of Chemistry. Wylie-Interscience. 2005. pp. 797–799. ISBN 0-471-61525-0.  9. ^ a b Stwertka, Albert (1996). A Guide to the Elements. Oxford University Press. pp. 16–21. ISBN 0-19-508083-1.  11. ^ Laursen, S.; Chang, J.; Medlin, W.; Gürmen, N.; Fogler, H. S. (27 July 2004). "An extremely brief introduction to computational quantum chemistry". Molecular Modeling in Chemical Engineering. University of Michigan. Retrieved 4 May 2015.  12. ^ Presenter: Professor Jim Al-Khalili (21 January 2010). "Discovering the Elements". Chemistry: A Volatile History. 25:40 minutes in. BBC. BBC Four.  13. ^ "Hydrogen Basics — Production". Florida Solar Energy Center. 2007. Retrieved 5 February 2008.  14. ^ a b Rogers, H. C. (1999). "Hydrogen Embrittlement of Metals". Science. 159 (3819): 1057–1064. Bibcode:1968Sci...159.1057R. doi:10.1126/science.159.3819.1057. PMID 17775040.  15. ^ a b Christensen, C. H.; Nørskov, J. K.; Johannessen, T. (9 July 2005). "Making society independent of fossil fuels — Danish researchers reveal new technology". Technical University of Denmark. Retrieved 19 May 2015.  16. ^ "Dihydrogen". O=CHem Directory. University of Southern Maine. Retrieved 6 April 2009.  17. ^ Carcassi, M. N.; Fineschi, F. (2005). "Deflagrations of H2–air and CH4–air lean mixtures in a vented multi-compartment environment". Energy. 30 (8): 1439–1451. doi:10.1016/  18. ^ Committee on Alternatives and Strategies for Future Hydrogen Production and Use, US National Research Council, US National Academy of Engineering (2004). The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs. National Academies Press. p. 240. ISBN 0-309-09163-2.  19. ^ Patnaik, P. (2007). A Comprehensive Guide to the Hazardous Properties of Chemical Substances. Wiley-Interscience. p. 402. ISBN 0-471-71458-5.  20. ^ Schefer, E. W.; Kulatilaka, W. D.; Patterson, B. D.; Settersten, T. B. (June 2009). "Visible emission of hydrogen flames". Combustion and Flame. 156 (6): 1234–1241. doi:10.1016/j.combustflame.2009.01.011.  21. ^ Clayton, D. D. (2003). Handbook of Isotopes in the Cosmos: Hydrogen to Gallium.
Cambridge University Press. ISBN 0-521-82381-1.  22. ^ NAAP Labs (2009). "Energy Levels". University of Nebraska Lincoln. Retrieved 20 May 2015.  23. ^ "photon wavelength 13.6 eV". Wolfram Alpha. 20 May 2015. Retrieved 20 May 2015.  24. ^ Stern, D. P. (16 May 2005). "The Atomic Nucleus and Bohr's Early Model of the Atom". NASA Goddard Space Flight Center (mirror). Retrieved 20 December 2007.  25. ^ Stern, D. P. (13 February 2005). "Wave Mechanics". NASA Goddard Space Flight Center. Retrieved 16 April 2008.  26. ^ Staff (2003). "Hydrogen (H2) Properties, Uses, Applications: Hydrogen Gas and Liquid Hydrogen". Universal Industrial Gases, Inc. Retrieved 5 February 2008.  27. ^ Tikhonov, V. I.; Volkov, A. A. (2002). "Separation of Water into Its Ortho and Para Isomers". Science. 296 (5577): 2363. doi:10.1126/science.1069513. PMID 12089435.  28. ^ Hritz, J. (March 2006). "CH. 6 – Hydrogen" (PDF). NASA Glenn Research Center Glenn Safety Manual, Document GRC-MQSA.001. NASA. Retrieved 5 February 2008.  29. ^ Shinitzky, M.; Elitzur, A. C. (2006). "Ortho-para spin isomers of the protons in the methylene group". Chirality. 18 (9): 754–756. doi:10.1002/chir.20319. PMID 16856167.  30. ^ Milenko, Yu. Ya.; Sibileva, R. M.; Strzhemechny, M. A. (1997). "Natural ortho-para conversion rate in liquid and gaseous hydrogen". Journal of Low Temperature Physics. 107 (1–2): 77–92. Bibcode:1997JLTP..107...77M. doi:10.1007/BF02396837.  31. ^ Amos, Wade A. (1 November 1998). "Costs of Storing and Transporting Hydrogen" (PDF). National Renewable Energy Laboratory. pp. 6–9. Retrieved 19 May 2015.  32. ^ Svadlenak, R. E.; Scott, A. B. (1957). "The Conversion of Ortho- to Parahydrogen on Iron Oxide-Zinc Oxide Catalysts". Journal of the American Chemical Society. 79 (20): 5385–5388. doi:10.1021/ja01577a013.  33. ^ Clark, J. (2002). "The Acidity of the Hydrogen Halides". Chemguide. Retrieved 9 March 2008.  34. ^ Kimball, J. W. (7 August 2003). "Hydrogen". Kimball's Biology Pages. Retrieved 4 March 2008.  35. ^ IUPAC Compendium of Chemical Terminology, Electronic version, Hydrogen Bond 36. ^ Sandrock, G. (2 May 2002). "Metal-Hydrogen Systems". Sandia National Laboratories. Retrieved 23 March 2008.  37. ^ a b "Structure and Nomenclature of Hydrocarbons". Purdue University. Retrieved 23 March 2008.  38. ^ "Organic Chemistry". Lexico Publishing Group. 2008. Retrieved 23 March 2008.  39. ^ "Biochemistry". Lexico Publishing Group. 2008. Retrieved 23 March 2008.  40. ^ Moers, K. (1920). "Investigations on the Salt Character of Lithium Hydride". Zeitschrift für Anorganische und Allgemeine Chemie. 113 (191): 179–228. doi:10.1002/zaac.19201130116.  41. ^ Downs, A. J.; Pulham, C. R. (1994). "The hydrides of aluminium, gallium, indium, and thallium: a re-evaluation". Chemical Society Reviews. 23 (3): 175–184. doi:10.1039/CS9942300175.  42. ^ Hibbs, D. E.; Jones, C.; Smithies, N. A. (1999). "A remarkably stable indium trihydride complex: synthesis and characterisation of [InH3P(C6H11)3]". Chemical Communications (2): 185–186. doi:10.1039/a809279f.  43. ^ a b c Miessler, G. L.; Tarr, D. A. (2003). Inorganic Chemistry (3rd ed.). Prentice Hall. ISBN 0-13-035471-6.  44. ^ Okumura, A. M.; Yeh, L. I.; Myers, J. D.; Lee, Y. T. (1990). "Infrared spectra of the solvated hydronium ion: vibrational predissociation spectroscopy of mass-selected H3O+•(H2O)n•(H2)m". Journal of Physical Chemistry. 94 (9): 3416–3427. doi:10.1021/j100372a014.  45. ^ Perdoncin, G.; Scorrano, G. (1977). 
"Protonation Equilibria in Water at Several Temperatures of Alcohols, Ethers, Acetone, Dimethyl Sulfide, and Dimethyl Sulfoxide". Journal of the American Chemical Society. 99 (21): 6983–6986. doi:10.1021/ja00463a035.  46. ^ Carrington, A.; McNab, I. R. (1989). "The infrared predissociation spectrum of triatomic hydrogen cation (H3+)". Accounts of Chemical Research. 22 (6): 218–222. doi:10.1021/ar00162a004.  47. ^ Gurov, Y. B.; Aleshkin, D. V.; Behr, M. N.; Lapushkin, S. V.; Morokhov, P. V.; Pechkurov, V. A.; Poroshin, N. O.; Sandukovsky, V. G.; Tel'kushev, M. V.; Chernyshev, B. A.; Tschurenkova, T. D. (2004). "Spectroscopy of superheavy hydrogen isotopes in stopped-pion absorption by nuclei". Physics of Atomic Nuclei. 68 (3): 491–97. Bibcode:2005PAN....68..491G. doi:10.1134/1.1891200.  48. ^ Korsheninnikov, A.; Nikolskii, E.; Kuzmin, E.; Ozawa, A.; Morimoto, K.; Tokanai, F.; Kanungo, R.; Tanihata, I.; et al. (2003). "Experimental Evidence for the Existence of 7H and for a Specific Structure of 8He". Physical Review Letters. 90 (8): 082501. Bibcode:2003PhRvL..90h2501K. doi:10.1103/PhysRevLett.90.082501.  49. ^ Urey, H. C.; Brickwedde, F. G.; Murphy, G. M. (1933). "Names for the Hydrogen Isotopes". Science. 78 (2035): 602–603. Bibcode:1933Sci....78..602U. doi:10.1126/science.78.2035.602. PMID 17797765.  50. ^ Oda, Y.; Nakamura, H.; Yamazaki, T.; Nagayama, K.; Yoshida, M.; Kanaya, S.; Ikehara, M. (1992). "1H NMR studies of deuterated ribonuclease HI selectively labeled with protonated amino acids". Journal of Biomolecular NMR. 2 (2): 137–47. doi:10.1007/BF01875525. PMID 1330130.  51. ^ Broad, W. J. (11 November 1991). "Breakthrough in Nuclear Fusion Offers Hope for Power of Future". The New York Times. Retrieved 12 February 2008.  52. ^ Traub, R. J.; Jensen, J. A. (June 1995). "Tritium radioluminescent devices, Health and Safety Manual" (PDF). International Atomic Energy Agency. p. 2.4. Retrieved 20 May 2015.  53. ^ Staff (15 November 2007). "Tritium". U.S. Environmental Protection Agency. Retrieved 12 February 2008.  54. ^ Nave, C. R. (2006). "Deuterium-Tritium Fusion". HyperPhysics. Georgia State University. Retrieved 8 March 2008.  55. ^ Kendall, C.; Caldwell, E. (1998). "Fundamentals of Isotope Geochemistry". US Geological Survey. Retrieved 8 March 2008.  56. ^ "The Tritium Laboratory". University of Miami. 2008. Retrieved 8 March 2008.  57. ^ a b Holte, A. E.; Houck, M. A.; Collie, N. L. (2004). "Potential Role of Parasitism in the Evolution of Mutualism in Astigmatid Mites". Experimental and Applied Acarology. Lubbock: Texas Tech University. 25 (2): 97–107. doi:10.1023/A:1010655610575.  58. ^ van der Krogt, P. (5 May 2005). "Hydrogen". Elementymology & Elements Multidict. Retrieved 20 December 2010.  59. ^ § IR-3.3.2, Provisional Recommendations, Nomenclature of Inorganic Chemistry, Chemical Nomenclature and Structure Representation Division, IUPAC. Accessed on line 3 October 2007. 60. ^ IUPAC (1997). "Muonium". In A.D. McNaught, A. Wilkinson. Compendium of Chemical Terminology (2nd ed.). Blackwell Scientific Publications. doi:10.1351/goldbook.M04069. ISBN 0-86542-684-8.  61. ^ V.W Hughes; et al. (1960). "Formation of Muonium and Observation of its Larmor Precession". Physical Review Letters. 5 (2): 63–65. Bibcode:1960PhRvL...5...63H. doi:10.1103/PhysRevLett.5.63.  62. ^ W.H. Koppenol (IUPAC) (2001). "Names for muonium and hydrogen atoms and their ions" (PDF). Pure and Applied Chemistry. 73 (2): 377–380. doi:10.1351/pac200173020377.  63. ^ Boyle, R. (1672). 
"Tracts written by the Honourable Robert Boyle containing new experiments, touching the relation betwixt flame and air..." London. 64. ^ Winter, M. (2007). "Hydrogen: historical information". WebElements Ltd. Retrieved 5 February 2008.  65. ^ Musgrave, A. (1976). "Why did oxygen supplant phlogiston? Research programmes in the Chemical Revolution". In Howson, C. Method and appraisal in the physical sciences. The Critical Background to Modern Science, 1800–1905. Cambridge University Press. Retrieved 22 October 2011.  66. ^ Cavendish, Henry (12 May 1766). "Three Papers, Containing Experiments on Factitious Air, by the Hon. Henry Cavendish, F. R. S.". Philosophical Transactions. The Royal Society. 56: 141–184. JSTOR 105491.  67. ^ National Electrical Manufacturers Association (1946). A chronological history of electrical development from 600 B.C. p. 102.  68. ^ "NTS-2 Nickel-Hydrogen Battery Performance 31". Retrieved 6 April 2009.  69. ^ Jannette, A. G.; Hojnicki, J. S.; McKissock, D. B.; Fincannon, J.; Kerslake, T. W.; Rodriguez, C. D. (July 2002). Validation of international space station electrical performance model via on-orbit telemetry (PDF). IECEC '02. 2002 37th Intersociety Energy Conversion Engineering Conference, 2002. pp. 45–50. doi:10.1109/IECEC.2002.1391972. ISBN 0-7803-7296-4. Retrieved 11 November 2011.  70. ^ Anderson, P. M.; Coyne, J. W. (2002). "A lightweight high reliability single battery power system for interplanetary spacecraft". Aerospace Conference Proceedings. 5: 5–2433. doi:10.1109/AERO.2002.1035418. ISBN 0-7803-7231-X.  71. ^ "Mars Global Surveyor". Retrieved 6 April 2009.  72. ^ Lori Tyahla, ed. (7 May 2009). "Hubble servicing mission 4 essentials". NASA. Retrieved 19 May 2015.  73. ^ Hendrix, Susan (25 November 2008). Lori Tyahla, ed. "Extending Hubble's mission life with new batteries". NASA. Retrieved 19 May 2015.  74. ^ Crepeau, R. (1 January 2006). Niels Bohr: The Atomic Model. Great Scientific Minds. Great Neck Publishing. ISBN 1-4298-0723-7.  75. ^ Berman, R.; Cooke, A. H.; Hill, R. W. (1956). "Cryogenics". Annual Review of Physical Chemistry. 7: 1–20. Bibcode:1956ARPC....7....1B. doi:10.1146/annurev.pc.07.100156.000245.  76. ^ Charlton, Mike; Van Der Werf, Dirk Peter (1 March 2015). "Advances in antihydrogen physics". Science Progress. 98 (1): 34–62. doi:10.3184/003685015X14234978376369.  77. ^ Kellerbauer, Alban (29 January 2015). "Why Antimatter Matters". European Review. 23 (01): 45–56. doi:10.1017/S1062798714000532.  78. ^ Gagnon, S. "Hydrogen". Jefferson Lab. Retrieved 5 February 2008.  79. ^ Haubold, H.; Mathai, A. M. (15 November 2007). "Solar Thermonuclear Energy Generation". Columbia University. Archived from the original on 2011-12-11. Retrieved 12 February 2008.  80. ^ Storrie-Lombardi, L. J.; Wolfe, A. M. (2000). "Surveys for z > 3 Damped Lyman-alpha Absorption Systems: the Evolution of Neutral Gas". Astrophysical Journal. 543 (2): 552–576. arXiv:astro-ph/0006044Freely accessible. Bibcode:2000ApJ...543..552S. doi:10.1086/317138.  81. ^ Dresselhaus, M.; et al. (15 May 2003). "Basic Research Needs for the Hydrogen Economy" (PDF). Argonne National Laboratory, U.S. Department of Energy, Office of Science Laboratory. Retrieved 5 February 2008.  82. ^ Berger, W. H. (15 November 2007). "The Future of Methane". University of California, San Diego. Retrieved 12 February 2008.  83. ^ McCall Group; Oka Group (22 April 2005). "H3+ Resource Center". Universities of Illinois and Chicago. Retrieved 5 February 2008.  84. ^ Helm, H.; et al. 
"Coupling of Bound States to Continuum States in Neutral Triatomic Hydrogen" (PDF). Department of Molecular and Optical Physics, University of Freiburg, Germany. Retrieved 25 November 2009.  85. ^ Ogden, J. M. (1999). "Prospects for building a hydrogen energy infrastructure". Annual Review of Energy and the Environment. 24: 227–279. doi:10.1146/  86. ^ a b c Oxtoby, D. W. (2002). Principles of Modern Chemistry (5th ed.). Thomson Brooks/Cole. ISBN 0-03-035373-4.  87. ^ "Hydrogen Properties, Uses, Applications". Universal Industrial Gases, Inc. 2007. Retrieved 11 March 2008.  88. ^ Funderburg, E. (2008). "Why Are Nitrogen Prices So High?". The Samuel Roberts Noble Foundation. Retrieved 11 March 2008.  89. ^ Lees, A. (2007). "Chemicals from salt". BBC. Archived from the original on 26 October 2007. Retrieved 11 March 2008.  90. ^ Kruse, B.; Grinna, S.; Buch, C. (2002). "Hydrogen Status og Muligheter" (PDF). Bellona. Retrieved 12 February 2008.  91. ^ Venere, E. (15 May 2007). "New process generates hydrogen from aluminum alloy to run engines, fuel cells". Purdue University. Retrieved 5 February 2008.  92. ^ Weimer, Al (25 May 2005). "Development of solar-powered thermochemical production of hydrogen from water" (PDF). Solar Thermochemical Hydrogen Generation Project.  93. ^ Perret, R. "Development of Solar-Powered Thermochemical Production of Hydrogen from Water, DOE Hydrogen Program, 2007" (PDF). Retrieved 17 May 2008.  94. ^ Hirschler, M. M. (2000). Electrical Insulating Materials: International Issues. ASTM International. pp. 89–. ISBN 978-0-8031-2613-8. Retrieved 13 July 2012.  95. ^ Chemistry Operations (15 December 2003). "Hydrogen". Los Alamos National Laboratory. Retrieved 5 February 2008.  96. ^ Takeshita, T.; Wallace, W. E.; Craig, R. S. (1974). "Hydrogen solubility in 1:5 compounds between yttrium or thorium and nickel or cobalt". Inorganic Chemistry. 13 (9): 2282–2283. doi:10.1021/ic50139a050.  97. ^ Kirchheim, R.; Mutschele, T.; Kieninger, W.; Gleiter, H.; Birringer, R.; Koble, T. (1988). "Hydrogen in amorphous and nanocrystalline metals". Materials Science and Engineering. 99: 457–462. doi:10.1016/0025-5416(88)90377-1.  98. ^ Kirchheim, R. (1988). "Hydrogen solubility and diffusivity in defective and amorphous metals". Progress in Materials Science. 32 (4): 262–325. doi:10.1016/0079-6425(88)90010-2.  99. ^ Durgutlu, A. (2003). "Experimental investigation of the effect of hydrogen in argon as a shielding gas on TIG welding of austenitic stainless steel". Materials & Design. 25 (1): 19–23. doi:10.1016/j.matdes.2003.07.004.  100. ^ "Atomic Hydrogen Welding". Specialty Welds. 2007. Archived from the original on 16 July 2011.  101. ^ Hardy, W. N. (2003). "From H2 to cryogenic H masers to HiTc superconductors: An unlikely but rewarding path". Physica C: Superconductivity. 388–389: 1–6. Bibcode:2003PhyC..388....1H. doi:10.1016/S0921-4534(02)02591-1.  102. ^ Almqvist, Ebbe (2003). History of industrial gases. New York, N.Y.: Kluwer Academic/Plenum Publishers. pp. 47–56. ISBN 0306472775. Retrieved 20 May 2015.  103. ^ Block, M. (3 September 2004). Hydrogen as Tracer Gas for Leak Detection. 16th WCNDT 2004. Montreal, Canada: Sensistor Technologies. Retrieved 25 March 2008.  104. ^ "Report from the Commission on Dietary Food Additive Intake" (PDF). European Union. Retrieved 5 February 2008.  105. ^ Reinsch, J.; Katz, A.; Wean, J.; Aprahamian, G.; MacFarland, J. T. (1980). "The deuterium isotope effect upon the reaction of fatty acyl-CoA dehydrogenase and butyryl-CoA". J. Biol. Chem. 
255 (19): 9093–97. PMID 7410413.  106. ^ Bergeron, K. D. (2004). "The Death of no-dual-use". Bulletin of the Atomic Scientists. Educational Foundation for Nuclear Science, Inc. 60 (1): 15. doi:10.2968/060001004.  107. ^ Quigg, C. T. (March 1984). "Tritium Warning". Bulletin of the Atomic Scientists. 40 (3): 56–57.  108. ^ International Temperature Scale of 1990 (PDF). Procès-Verbaux du Comité International des Poids et Mesures. 1989. pp. T23–T42. Retrieved 25 March 2008.  109. ^ a b c McCarthy, J. (31 December 1995). "Hydrogen". Stanford University. Retrieved 14 March 2008.  110. ^ "Nuclear Fusion Power". World Nuclear Association. May 2007. Retrieved 16 March 2008.  111. ^ "Chapter 13: Nuclear Energy — Fission and Fusion". Energy Story. California Energy Commission. 2006. Retrieved 14 March 2008.  112. ^ "DOE Seeks Applicants for Solicitation on the Employment Effects of a Transition to a Hydrogen Economy". Hydrogen Program (Press release). US Department of Energy. 22 March 2006. Archived from the original on 19 July 2011. Retrieved 16 March 2008.  113. ^ a b "Carbon Capture Strategy Could Lead to Emission-Free Cars" (Press release). Georgia Tech. 11 February 2008. Retrieved 16 March 2008.  114. ^ Heffel, J. W. (2002). "NOx emission and performance data for a hydrogen fueled internal combustion engine at 1500 rpm using exhaust gas recirculation". International Journal of Hydrogen Energy. 28 (8): 901–908. doi:10.1016/S0360-3199(02)00157-X.  115. ^ Romm, J. J. (2004). The Hype About Hydrogen: Fact And Fiction In The Race To Save The Climate (1st ed.). Island Press. ISBN 1-55963-703-X.  116. ^ Garbak, John (2011). "VIII.0 Technology Validation Sub-Program Overview" (PDF). DOE Fuel Cell Technologies Program, FY 2010 Annual Progress Report. Retrieved 20 May 2015.  117. ^ Le Comber, P. G.; Jones, D. I.; Spear, W. E. (1977). "Hall effect and impurity conduction in substitutionally doped amorphous silicon". Philosophical Magazine. 35 (5): 1173–1187. Bibcode:1977PMag...35.1173C. doi:10.1080/14786437708232943.  118. ^ Van de Walle, C. G. (2000). "Hydrogen as a cause of doping in zinc oxide". Physical Review Letters. 85 (5): 1012–1015. Bibcode:2000PhRvL..85.1012V. doi:10.1103/PhysRevLett.85.1012. PMID 10991462.  119. ^ Janotti, A.; Van De Walle, C. G. (2007). "Hydrogen multicentre bonds". Nature Materials. 6 (1): 44–47. Bibcode:2007NatMa...6...44J. doi:10.1038/nmat1795. PMID 17143265.  120. ^ Kilic, C.; Zunger, Alex (2002). "n-type doping of oxides by hydrogen". Applied Physics Letters. 81 (1): 73–75. Bibcode:2002ApPhL..81...73K. doi:10.1063/1.1482783.  121. ^ Peacock, P. W.; Robertson, J. (2003). "Behavior of hydrogen in high dielectric constant oxide gate insulators". Applied Physics Letters. 83 (10): 2025–2027. Bibcode:2003ApPhL..83.2025P. doi:10.1063/1.1609245.  122. ^ Cammack, R.; Robson, R. L. (2001). Hydrogen as a Fuel: Learning from Nature. Taylor & Francis Ltd. pp. 202–203. ISBN 0-415-24242-8.  123. ^ Rhee, T. S.; Brenninkmeijer, C. A. M.; Röckmann, T. (19 May 2006). "The overwhelming role of soils in the global atmospheric hydrogen cycle". Atmospheric Chemistry and Physics. 6 (6): 1611–1625. doi:10.5194/acp-6-1611-2006. Retrieved 20 May 2015.  124. ^ Kruse, O.; Rupprecht, J.; Bader, K.; Thomas-Hall, S.; Schenk, P. M.; Finazzi, G.; Hankamer, B. (2005). "Improved photobiological H2 production in engineered green algal cells". The Journal of Biological Chemistry. 280 (40): 34170–7. doi:10.1074/jbc.M503840200. PMID 16100118.  125. ^ Smith, Hamilton O.; Xu, Qing (2005). 
"IV.E.6 Hydrogen from Water in a Novel Recombinant Oxygen-Tolerant Cyanobacteria System" (PDF). FY2005 Progress Report. United States Department of Energy. Retrieved 6 August 2016.  126. ^ Williams, C. (24 February 2006). "Pond life: the future of energy". Science. The Register. Retrieved 24 March 2008.  127. ^ a b Brown, W. J.; et al. (1997). "Safety Standard for Hydrogen and Hydrogen Systems" (PDF). NASA. Retrieved 5 February 2008.  128. ^ "Liquid Hydrogen MSDS" (PDF). Praxair, Inc. September 2004. Retrieved 16 April 2008.  129. ^ "'Bugs' and hydrogen embrittlement". Science News. Washington, D.C. 128 (3): 41. 20 July 1985. doi:10.2307/3970088. JSTOR 3970088.  130. ^ Hayes, B. "Union Oil Amine Absorber Tower". TWI. Retrieved 29 January 2010.  131. ^ Walker, James L.; Waltrip, John S.; Zanker, Adam (1988). John J. McKetta; William Aaron Cunningham, eds. Lactic acid to magnesium supply-demand relationships. Encyclopedia of Chemical Processing and Design. 28. New York: Dekker. p. 186. ISBN 082472478X. Retrieved 20 May 2015.  Further reading • Ferreira-Aparicio, P.; Benito, M. J.; Sanz, J. L. (2005). "New Trends in Reforming Technologies: from Hydrogen Industrial Plants to Multifuel Microreformers". Catalysis Reviews. 47 (4): 491–588. doi:10.1080/01614940500364958.  External links Listen to this article (2 parts) · (info) Part 1 • Part 2 This audio file was created from a revision of the "Hydrogen" article dated 2006-10-28, and does not reflect subsequent edits to the article. (Audio help) More spoken articles
A bunch of basic questions on electrons

1. Feel free to answer any subset. I'm not a science person, so apologies in advance for some of the absurdity.

1. When we heat up some piece of matter (e.g., a rock), do the electrons (of the atoms of which the rock is composed) start moving faster? If so, then why do we observe temperature as a continuous phenomenon - or, at least, pseudo-continuous with millions of different possibilities (e.g., 25.743 °C, 245.24565435 °C etc.)? After all, the elements that are in the rock are bound at very low energy levels (say, n=3). Therefore, the angular momentum can only be 0, 1, 2, which would mean that only three different temperatures are possible (assuming you would introduce the heat uniformly throughout the entire rock). What am I misunderstanding? Another reason I am wrong is that I've read that temperature does not have an upper limit; however, the angular momentum does have an upper limit...

2. I've seen pictures of electron clouds that appear to diffuse outwards in a continuous fashion (like a picture of the bivariate normal distribution) to positive infinity and negative infinity. I've also seen pictures where it's just a geometric shape (a sphere or an oval sphere or a spherical cone) that's bounded in space. Which one is more accurate? Asking in another way: For a particular atom in Melbourne, is there a nonzero probability (albeit extremely small) of its electron appearing in the US (while the nucleus stays in Melbourne and nobody does anything to it)?

3. Is there a mathematical pattern that produces the standing waves of the subshells we know of? Or is it just arbitrary and we don't understand it yet?

4. What does it take to kick out an electron? How often does it happen in our daily lives? What kind of wavelength and what kind of intensity is required to kick out an electron of a hydrogen atom, say?

5. Why are noble gases and other non-bonded atoms colorless? Why is it that only after bonding the molecules assume color?

6. Why do we not have color photographs of molecules? Shouldn't the wavelength be visible at that scale?

7. Why do the planets of our solar system have such large variation in color (blue, red etc.)? The visible spectrum is so extremely small - why does the emission/absorption just happen to take place in that range? Or are the emission/absorption characteristics so extremely idiosyncratic for different matters at all wavelength ranges?

8. When I look at the 2p (x,y,z) subshells I notice that when you put them together, there will be some overlapping of the clouds (does that mean that the overlapping regions have additive probabilities of the two subshells)?

9. I once asked why the electrons and protons don't collide due to the electromagnetic attraction. Someone said that the laws at the atomic scale are different. However, I then read on Wikipedia that electrons obey electromagnetic rules. What gives?

Last edited: Aug 20, 2011

3. 1. Yes and no. The temperature is due to the motion of the atoms, not their electrons. The temperature only appears continuous in systems with a large number of atoms, because the gaps between the energy levels become very small.

2. The diffuse cloud is more accurate. The "bubbles" you mention are constructed such that there is a certain probability of finding the electron inside them.

3. We have a theory which predicts the states of quantum systems and is in agreement with experiment. If this is what you mean, then yes, we understand it.

4. This happens all the time.
You could say it happens in any chemical redox-reaction.

5. The colour of gases arises from their absorption/emission spectra. You can read about this in any basic physics textbook or on Wikipedia.

6. See above. You can only define the colour of an object from its emission spectrum. A molecule absorbs and emits light as a whole, so you cannot obtain a colour photograph of one the same way you can for macroscopic objects.

8. Not necessarily. It depends on the coefficients in their sum. This question may be difficult to answer precisely without knowledge of linear algebra.

4. Bill_K (Science Advisor)

9. The electrons and protons in an atom most certainly do "collide." Or at least they overlap. The s-wave orbital for an electron is nonzero at the origin, and consequently there is a nonzero probability of finding the electron inside the nucleus. In most cases there is no interaction simply because there's not enough energy available. For example, in the hydrogen atom the electron and proton cannot unite to form a neutron, because the neutron outweighs the proton and electron combined. For other atoms, such as Al-26, the electron may indeed be captured, and one of the protons is turned into a neutron, causing a neutrino to be emitted.

5. jtbell (Staff: Mentor)

These waves arise from solving the Schrödinger equation. The solutions for the hydrogen atom are well-understood and are described with varying levels of detail in many "modern physics" and quantum mechanics textbooks, from the second-year university level on upward, and on many Web sites.

6. To expand on espen180's answer: For a single atom or molecule in vacuum (far from any other atoms or electromagnetic fields), the energy levels available to its electrons are indeed few and highly discrete. However, if you bring more atoms in to create a crystalline lattice, new electron energy levels are created. This is because each electron now feels the potential of not only its parent nucleus, but of all other nuclei and electrons in the lattice. The number of new electron energy levels is proportional to the number of atoms in the lattice. For a crystal of reasonable mass, there will be in excess of [itex]10^{20}[/itex] atoms, and a correspondingly large number of electron energy levels. These energy levels are so numerous and closely spaced as to merge into a near continuum of energy levels, called a band. See Wikipedia. For materials other than crystalline solids, the relationship between the number of atoms/molecules and the number of available electron energy levels is probably more complex than a simple linear proportionality, but I would presume that the basic principle is the same.

Take a look at, or derive, the electron energy levels for the Bohr hydrogen atom:

[tex]E_n = - \frac{m_e e^4}{8 \varepsilon_0^2 h^2 n^2} = - \frac{13.6\ \mathrm{eV}}{n^2}[/tex]

You can see that it takes a photon of at least 13.6 eV to liberate a ground-state electron from a hydrogen atom (photoelectric effect). This corresponds to a photon of 91 nm, which is in the ultraviolet. The visible spectrum (~ 400-700 nm) corresponds to the range of photon energies ~ 1.8-3.1 eV. This also happens to be the approximate range in which the absorption/emission spectra of most materials are at their most detailed (i.e. have the most spikes). This is not a coincidence: electron transition energies are in the ~ 0.1-10 eV range.
Since every material has a unique electronic absorption spectrum, it is likely that (say) a trichromatic eye or camera sensitive to visible light will perceive two different materials as different colours. Hence it is not so unlikely that the planets and moons of the solar system, each having a different surface composition/atmosphere etc., should appear differently coloured when viewed in visible light by the human eye.

Just beyond the visible spectrum you have IR and UV. Near-IR and near-UV can be absorbed and emitted electronically by most materials just like visible light, but (if I remember correctly) the absorptivity/emissivity in this range is typically lower. Deep-IR can only be absorbed and emitted thermally, so in this range you see very little visual contrast between materials of similar temperature. Far-UV and X-rays tend to interact via scattering. I am not sure how much variation in "colour" one would see when viewing the world in this band, but probably not very much. Microwaves and radio waves are absorbed weakly or not at all. Again, I am not sure how much variation in "colour" would be seen between objects when viewed in microwave, but I would guess very little.

As for whether the absorption/emission spectrum of every material is extremely idiosyncratic across all wavelengths, I am fairly sure the answer is no: the absorption/emission spectrum is generally at its most characteristic in the visible part of the spectrum. But, as I say, I am not positive about this. Good question!

7. Many thanks for your answers. Especially the "visible spectrum sensitivity" answer I found very revealing, meta. Some follow-up:

So, all else being equal, if you increase the temperature of an object, will the average principal quantum number of its atoms increase? If this question is absurd because you can only view the object as one collective of atoms, what will happen to the orbital as you increase temperature? What about the angular momentum? If it's not temperature, what determines what angular momentum an atom will adopt?

espen, I don't understand why you say "yes and no" when you say that the atomic vibrations are mainly responsible for the heat, and not the electron clouds. Or are they the same phenomenon?

I phrased my question in such a generic way that it was impossible to infer my intentions. I was looking at the different angular momentum shapes of the orbitals at n=3. So, when you vary the angular momentum from 0...2, is there an "elegant"/simple equation which will give you the new electron field by just varying over l=0 to 2?

Does the photon (that kicks out the electron) get absorbed by the atom and replace the kicked-out electron? When I compare the properties of electrons vs. photons I see vast differences: How can the photon just "emulate" being an electron? After all, the forces at that range are extremely picky/sensitive to the orbiting particle's properties. On the other hand, if it does not get absorbed, the object under light exposure would get ionized very fast (unless the kicked-out electrons were absorbed by a neighboring ionized atom of the wood immediately thereafter). What gives?

For a typical object (say, a piece of wood in direct sunlight), roughly at what frequency does this happen (i.e., an electron is kicked out of an orbital) and roughly what % of the wood's surface atoms are affected? From your equation it looks like it would happen way over 99.9% of the object's surface at an extremely high frequency.
But what is the approximate mean time for an individual atom from one photoelectric event to the next, given typical sunlight exposure?

I still don't understand how we cannot see the color at that range. What determines at which scale you start seeing colors?

This is something I never understood. What causes the transformation of elementary particles on collision? What triggers the "I take this amount of this particle and that amount of the other particle and give you this particle and so much kinetic energy"? It seems extremely unlikely that the reacting particles just happen to have the same amount of property x as the resulting particle... and why do they not just bounce off each other with conversion of energy?

What does this mean? Also, I saw that the mass of elementary particles is given in eV. Since when is mass provided in volts? Is this because of E=mc^2?

8. The temperature is dependent on the kinetic energy of the constituent atoms. However, there can be energy transfer to the atoms' electrons, so the average energy of the electrons is dependent on the temperature.

Yes, there is such an equation. You can view it here: http://en.wikipedia.org/wiki/Hydrogen_atom#Wavefunction

An electron volt is a unit of energy, equivalent to 1.602×10^-19 J, which is equivalent to roughly 1.8×10^-36 kg.

9. Positive and negative infinity are limits; they don't physically exist. Infinities of temperature are a consequence of the integral calculus and entropy; nothing there is really physically infinite either, although some people will tell you it is. I wouldn't believe physics allows for infinities in any energy system if I were you, only that a number sometimes has peculiarities that reality does not exhibit infinitely. Quantisation is pretty much an obvious logical deduction: when energy is transferred it can be lost in several ways, all of which are usually just conceptual issues that boil down to things like momentum etc. The energy of lifting a book onto a shelf is really no different from other forms of energy; it's just a way to explain context. Energy types are more of a philosophy of kinds than an absolute condition of reality, a way of delineating reactions and interactions.

10. I think m.e.t.a. is hinting towards Compton scattering (wiki it).
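As a quick numerical check of two figures quoted in this thread (the 13.6 eV / 91 nm ionization threshold, and the eV-to-kilogram conversion in post 8), here is a minimal sketch using rounded physical constants:

```python
# Check two conversions quoted above, using rounded physical constants.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # one electron volt, in joules

# Photon wavelength needed to ionize ground-state hydrogen (13.6 eV):
wavelength = h * c / (13.6 * eV)
print(f"ionization wavelength: {wavelength * 1e9:.1f} nm")  # ~91 nm, ultraviolet

# Mass equivalent of 1 eV, via E = m c^2:
print(f"1 eV/c^2 = {eV / c**2:.2e} kg")                     # ~1.8e-36 kg
```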
Collapse Theories

First published Thu Mar 7, 2002; substantive revision Tue Feb 16, 2016

Quantum mechanics, with its revolutionary implications, has posed innumerable problems to philosophers of science. In particular, it has suggested reconsidering basic concepts such as the existence of a world that is, at least to some extent, independent of the observer, the possibility of getting reliable and objective knowledge about it, and the possibility of taking (under appropriate circumstances) certain properties to be objectively possessed by physical systems. It has also raised many other questions which are well known to those involved in the debate on the interpretation of this pillar of modern science. One can argue that most of the problems are not only due to the intrinsic revolutionary nature of the phenomena which have led to the development of the theory. They are also related to the fact that, in its standard formulation and interpretation, quantum mechanics is a theory which is excellent (in fact it has met with a success unprecedented in the history of science) at telling us everything about what we observe, but it meets with serious difficulties in telling us what is. We are making here specific reference to the central problem of the theory, usually referred to as the measurement problem or, with a more appropriate term, as the macro-objectification problem. It is just one of the many attempts to overcome the difficulties posed by this problem that has led to the development of Collapse Theories, i.e., to the Dynamical Reduction Program (DRP). As we shall see, this approach consists in accepting that the dynamical equation of the standard theory should be modified by the addition of stochastic and nonlinear terms. The nice fact is that the resulting theory is capable, on the basis of a single dynamics which is assumed to govern all natural processes, of accounting at the same time for all well-established facts about microscopic systems as described by the standard theory, as well as for the so-called postulate of wave packet reduction (WPR). As is well known, such a postulate is assumed in the standard scheme just in order to guarantee that measurements have outcomes but, as we shall discuss below, it meets with insurmountable difficulties if one takes the measurement itself to be a process governed by the linear laws of the theory. Finally, the collapse theories account in a completely satisfactory way for the classical behavior of macroscopic systems.

Two specifications are necessary in order to make clear from the beginning what are the limitations and the merits of the program. The only satisfactory explicit models of this type (which are essentially variations and refinements of the one proposed in Ghirardi, Rimini, and Weber (1986), and usually referred to as the GRW theory) are phenomenological attempts to solve a foundational problem. At present, they involve phenomenological parameters which, if the theory is taken seriously, acquire the status of new constants of nature. Moreover, the problem of building satisfactory relativistic generalizations of these models, which seemed extremely difficult until a few years ago, has seen some significant improvements. More importantly, such improvements have elucidated some crucial points and have made clear that there is no reason of principle preventing one from reaching this goal.
In spite of their phenomenological character, we think that Collapse Theories have a remarkable relevance, since they have made clear that there are new ways to overcome the difficulties of the formalism, to close the circle in the precise sense defined by Abner Shimony (1989), ways which until a few years ago were considered impracticable and which, on the contrary, have been shown to be perfectly viable. Moreover, they have allowed a clear identification of the formal features which should characterize any unified theory of micro and macro processes. Last but not least, Collapse Theories qualify as rival theories to quantum mechanics, and one can easily identify some of their physical implications which, in principle, would allow crucial tests discriminating between the two. Getting really stringent indications from such tests requires experiments involving technological techniques which have been developed only very recently. Actually, it is precisely due to remarkable improvements in dealing with mesoscopic systems and to important practical steps forward that some specific bounds have already been obtained for the parameters characterizing the theories under investigation and, more importantly, that precise families of physical processes in which a violation of the linear nature of the standard formalism might emerge have been clearly identified and are the subject of systematic investigations which might lead, in the end, to relevant discoveries.

1. General Considerations

As stated already, a very natural question which all scientists who are concerned about the meaning and the value of science have to face is whether one can develop a coherent worldview that can accommodate our knowledge concerning natural phenomena as it is embodied in our best theories. Such a program meets serious difficulties with quantum mechanics, essentially because of two formal aspects of the theory which are common to all of its versions, from the original nonrelativistic formulations of the 1920s to the quantum field theories of recent years: the linear nature of the state space and of the evolution equation, i.e., the validity of the superposition principle, and the related phenomenon of entanglement, which, in Schrödinger's words:

is not one but the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought (Schrödinger, 1935, p. 807).

These two formal features have embarrassing consequences, since they imply:

• objective chance in natural processes, i.e., the nonepistemic nature of quantum probabilities;
• objective indefiniteness of physical properties both at the micro and macro level;
• objective entanglement between spatially separated and non-interacting constituents of a composite system, entailing a sort of holism and a precise kind of nonlocality.

For the sake of generality, we shall first of all present a very concise sketch of 'the rules of the quantum game'.

2. The Formalism: A Concise Sketch

Let us recall the axiomatic structure of quantum theory:

1. States of physical systems are associated with normalized vectors in a Hilbert space, a complex, infinite-dimensional, complete and separable linear vector space equipped with a scalar product. Linearity implies that the superposition principle holds: if \(\ket{f}\) is a state and \(\ket{g}\) is a state, then (for \(a\) and \(b\) arbitrary complex numbers) also

\[ \ket{K} = a\ket{f} + b\ket{g} \]

is a state.
Moreover, the state evolution is linear, i.e., it preserves superpositions: if \(\ket{f,t}\) and \(\ket{g,t}\) are the states obtained by evolving the states \(\ket{f,0}\) and \(\ket{g,0}\), respectively, from the initial time \(t=0\) to the time \(t\), then \(a\ket{f,t} + b\ket{g,t}\) is the state obtained by the evolution of \(a\ket{f,0} + b\ket{g,0}\). Finally, the completeness assumption is made, i.e., that the knowledge of its statevector represents, in principle, the most accurate information one can have about the state of an individual physical system. 2. The observable quantities are represented by self-adjoint operators \(B\) on the Hilbert space. The associated eigenvalue equations \(B\ket{b_k} = b_k \ket{b_k}\) and the corresponding eigenmanifolds (the linear manifolds spanned by the eigenvectors associated to a given eigenvalue, also called eigenspaces) play a basic role for the predictive content of the theory. In fact: 1. The eigenvalues \(b_k\) of an operator \(B\) represent the only possible outcomes in a measurement of the corresponding observable. 2. The square of the norm (i.e., the length) of the projection of the normalized vector (i.e., of length 1) describing the state of the system onto the eigenmanifold associated to a given eigenvalue gives the probability of obtaining the corresponding eigenvalue as the outcome of the measurement. In particular, it is useful to recall that when one is interested in the probability of finding a particle at a given place, one has to resort to the so-called configuration space representation of the statevector. In such a case the statevector becomes a square-integrable function of the position variables of the particles of the system, whose modulus squared yields the probability density for the outcomes of position measurements. We stress that, according to the above scheme, quantum mechanics makes only conditional probabilistic predictions (conditional on the measurement being actually performed) for the outcomes of prospective (and in general incompatible) measurement processes. Only if a state belongs already before the act of measurement to an eigenmanifold of the observable which is going to be measured, can one predict the outcome with certainty. In all other cases—if the completeness assumption is made—one has objective nonepistemic probabilities for different outcomes. The orthodox position gives a very simple answer to the question: what determines the outcome when different outcomes are possible? Nothing—the theory is complete and, as a consequence, it is illegitimate to raise any question about possessed properties referring to observables for which different outcomes have non-vanishing probabilities of being obtained. Correspondingly, the referent of the theory are the results of measurement procedures. These are to be described in classical terms and involve in general mutually exclusive physical conditions. As regards the legitimacy of attributing properties to physical systems, one could say that quantum mechanics warns us against requiring too many properties to be actually possessed by physical systems. However—with Einstein—one can adopt as a sufficient condition for the existence of an objective individual property that one be able (without in any way disturbing the system) to predict with certainty the outcome of a measurement. 
This implies that, whenever the overall statevector factorizes into the product of a state of the Hilbert space of the physical system \(S\) and of the rest of the world, \(S\) does possess some properties (actually a complete set of properties, i.e., those associated to appropriate maximal sets of commuting observables).

Before concluding this section we must add some comments about the measurement process. Quantum theory was created to deal with microscopic phenomena. In order to obtain information about them one must be able to establish strict correlations between the states of the microscopic systems and the states of objects we can perceive. Within the formalism, this is described by considering appropriate micro-macro interactions. The fact that when the measurement is completed one can make statements about the outcome is accounted for by the already mentioned WPR postulate (Dirac 1948): a measurement always causes a system to jump into an eigenstate of the observed quantity. Correspondingly, the statevector of the apparatus also 'jumps' into the manifold associated to the recorded outcome.

3. The Macro-Objectification Problem

In this section we shall clarify why the formalism we have just presented gives rise to the measurement or macro-objectification problem. To this purpose we shall, first of all, discuss the standard oversimplified argument based on the so-called von Neumann ideal measurement scheme. Let us begin by recalling the basic points of the standard argument:

Suppose that a microsystem \(S\), just before the measurement of an observable \(B\), is in the eigenstate \(\ket{b_j}\) of the corresponding operator. The apparatus (a macrosystem) used to gain information about \(B\) is initially assumed to be in a precise macroscopic state, its ready state, corresponding to a definite macro property, e.g., its pointer points at 0 on a scale. Since the apparatus \(A\) is made of elementary particles, atoms and so on, it must be described by quantum mechanics, which will associate to it the state vector \(\ket{A_0}\). One then assumes that there is an appropriate system-apparatus interaction lasting for a finite time, such that when the initial apparatus state is triggered by the state \(\ket{b_j}\) it ends up in a final configuration \(\ket{A_j}\), which is macroscopically distinguishable from the initial one and from the other configurations \(\ket{A_k}\) in which it would end up if triggered by a different eigenstate \(\ket{b_k}\). Moreover, one assumes that the system is left in its initial state. In brief, one assumes that one can arrange things in such a way that the system-apparatus interaction can be described as:

\[\begin{align} \tag{1} \textit{(initial state)}{:}\ & \ket{b_k} \ket{A_0} \\ \textit{(final state)}{:}\ & \ket{b_k} \ket{A_k} \end{align}\]

Equation (1) and the hypothesis that the superposition principle governs all natural processes tell us that, if the initial state of the microsystem is a linear superposition of different eigenstates (for simplicity we will consider only two of them), one has:

\[\begin{align} \tag{2} \textit{(initial state)}{:}\ & (a\ket{b_k} + b\ket{b_j})\ket{A_0 } \\ \textit{(final state)}{:}\ & (a\ket{b_k} \ket{A_k} + b\ket{b_j} \ket{A_j}).
\end{align}\] Some remarks about this are in order: • The scheme is highly idealized, both because it takes for granted that one can prepare the apparatus in a precise state, which is impossible since we cannot have control over all its degrees of freedom, and because it assumes that the apparatus registers the outcome without altering the state of the measured system. However, as we shall discuss below, these assumptions are by no means essential to derive the embarrassing conclusion we have to face, i.e., that the final state is a linear superposition of two states corresponding to two macroscopically different states of the apparatus. Since we know that the + representing linear superpositions cannot be replaced by the logical alternative either … or, the measurement problem arises: what meaning can one attach to a state of affairs in which two macroscopically and perceptively different states occur simultaneously? • As already mentioned, the standard solution to this problem is given by the WPR postulate: in a measurement process reduction occurs: the final state is not the one appearing in the second line of equation (2) but, since macro-objectification takes place, it is \[ \begin{align} \tag{3} \text{either } &\ket{b_k} \ket{A_k} \text{ with probability } \lvert a\rvert^2 \\ \text{or } &\ket{b_j} \ket{A_j} \text{ with probability } \lvert b\rvert^2. \end{align}\] Nowadays, there is a general consensus that this solution is absolutely unacceptable for two basic reasons: 1. It corresponds to assuming that the linear nature of the theory is broken at a certain level. Thus, quantum theory is unable to explain how it can happen that the apparata behave as required by the WPR postulate (which is one of the axioms of the theory). 2. Even if one were to accept that quantum mechanics has a limited field of applicability, so that it does not account for all natural processes and, in particular, it breaks down at the macrolevel, it is clear that the theory does not contain any precise criterion for identifying the borderline between micro and macro, linear and nonlinear, deterministic and stochastic, reversible and irreversible. To use J.S. Bell’s words, there is nothing in the theory fixing such a borderline and the split between the two above types of processes is fundamentally shifty. As a matter of fact, if one looks at the historical debate on this problem, one can easily see that it is precisely by continuously resorting to this ambiguity about the split that adherents of the Copenhagen orthodoxy or easy solvers (Bell 1990) of the measurement problem have rejected the criticism of the heretics (Gottfried 2000). For instance, Bohr succeeded in rejecting Einstein’s criticisms at the Solvay Conferences by stressing that some macroscopic parts of the apparatus had to be treated fully quantum mechanically; von Neumann and Wigner displaced the split by locating it between the physical and the conscious (but what is a conscious being?), and so on. Also other proposed solutions to the problem, notably certain versions of many-worlds interpretations, suffer from analogous ambiguities. It is not our task to review here the various attempts to solve the above difficulties. One can find many exhaustive treatments of this problem in the literature. On the contrary, we would like to discuss how the macro-objectification problem is indeed a consequence of very general, in fact unavoidable, assumptions on the nature of measurements, and not specifically of the assumptions of von Neumann’s model. 
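Before proceeding, the logical structure of equations (1)-(3) can be displayed in toy numerical form. The following Python sketch is only an illustration (the dimensions, amplitudes and random seed are our own choices): linearity produces the entangled state of equation (2), and the WPR postulate is then imposed by hand as a random selection:

```python
# Toy von Neumann scheme: linearity yields the superposition of
# equation (2); WPR (equation (3)) is imposed by hand.
import numpy as np

rng = np.random.default_rng(0)

b_k, b_j = np.eye(2)            # system eigenstates |b_k>, |b_j>
A_0, A_k, A_j = np.eye(3)       # apparatus states |A_0>, |A_k>, |A_j>
a, b = 0.6, 0.8                 # amplitudes, |a|^2 + |b|^2 = 1

# Equation (2), final state: a|b_k>|A_k> + b|b_j>|A_j>. This vector is
# entangled: it is NOT a product of a system state and an apparatus state.
final = a * np.kron(b_k, A_k) + b * np.kron(b_j, A_j)
print("norm of the final superposed state:", np.linalg.norm(final))

# Equation (3): reduction to one definite outcome with Born probabilities.
outcome = "k" if rng.random() < a**2 else "j"
reduced = np.kron(b_k, A_k) if outcome == "k" else np.kron(b_j, A_j)
print("recorded outcome:", outcome)
```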
This was established in a series of theorems of increasing generality, notably the ones by Fine (1970), d'Espagnat (1971), Shimony (1974), Brown (1986) and Busch and Shimony (1996). Possibly the most general and direct proof is given by Bassi and Ghirardi (2000), whose results we briefly summarize. The assumptions of the theorem are:

1. that a microsystem can be prepared in two different eigenstates of an observable (such as, e.g., the spin component along the z-axis) and in a superposition of two such states;

2. that one has a sufficiently reliable way of 'measuring' such an observable, meaning that when the measurement is triggered by each of the two above eigenstates, the process leads in the vast majority of cases to macroscopically and perceptually different situations of the universe. This requirement allows for cases in which the experimenter does not have perfect control of the apparatus, the apparatus is entangled with the rest of the universe, the apparatus makes mistakes, or the measured system is altered or even destroyed in the measurement process;

3. that all natural processes obey the linear laws of the theory.

From these very general assumptions one can show that, repeating the measurement on systems prepared in the superposition of the two given eigenstates, in the great majority of cases one ends up in a superposition of macroscopically and perceptually different situations of the whole universe. If one wishes to have an acceptable final situation, one mirroring the fact that we have definite perceptions, one is arguably compelled to break the linearity of the theory at an appropriate stage.

4. The Birth of Collapse Theories

The debate on the macro-objectification problem continued for many years after the early days of quantum mechanics. In the early 1950s an important step was taken by D. Bohm, who presented (Bohm 1952) a mathematically precise deterministic completion of quantum mechanics (see the entry on Bohmian Mechanics). In the area of Collapse Theories, one should mention the contribution by Bohm and Bub (1966), which was based on the interaction of the statevector with Wiener-Siegel hidden variables. But let us come to Collapse Theories in the sense currently attached to this expression. Various investigations during the 1970s can be considered as preliminary steps for the subsequent developments. In the early 1970s we were seriously concerned with quantum decay processes and in particular with the possibility of deriving, within a quantum context, the exponential decay law. For an exhaustive review of our approach see (Fonda, Ghirardi, and Rimini 1978). Some features of this approach are extremely relevant for the DRP. Let us list them:

• One deals with individual physical systems;

• The statevector is supposed to undergo random processes at random times, inducing sudden changes driving it either within the linear manifold of the unstable state or within that of the decay products;

• To make the treatment quite general (the apparatus does not know which kind of unstable system it is testing) one is led to identify the random processes with localization processes of the relative coordinates of the decay fragments. Such an assumption, combined with the peculiar resonant dynamics characterizing an unstable system, yields, completely in general, the desired result.
The ‘relative position basis’ is the preferred basis of this theory;

• Analogous ideas have been applied to measurement processes.

Obviously, in these papers the reduction processes involved were not assumed to be ‘spontaneous and fundamental’ natural processes, but due to system-environment interactions. Accordingly, these attempts did not represent original proposals for solving the macro-objectification problem, but they paved the way for the elaboration of the GRW theory. In roughly the same years, P. Pearle (1976, 1979), and subsequently N. Gisin (1984) and others, entertained the idea of accounting for the reduction process in terms of a stochastic differential equation. These authors were really looking for a new dynamical equation and for a solution to the macro-objectification problem. Unfortunately, they were unable to give any precise suggestion about how to identify the states to which the dynamical equation should lead. Indeed, these states were assumed to depend on the particular measurement process one was considering. Without a clear indication on this point there was no way to identify a mechanism whose effect could be negligible for microsystems but extremely relevant for all the macroscopic ones. N. Gisin subsequently gave an interesting (though not uncontroversial) argument (Gisin 1989) that nonlinear modifications of the standard equation without stochasticity are unacceptable, since they imply the possibility of sending superluminal signals. Soon afterwards, G. C. Ghirardi and R. Grassi proved that stochastic modifications without nonlinearity can at most induce ensemble and not individual reductions, i.e., they do not guarantee that the state vector of each individual physical system is driven into a manifold corresponding to definite properties.

5. The Original Collapse Model

As already mentioned, the Collapse Theory we are going to describe amounts to accepting a modification of the standard evolution law of the theory such that microprocesses and macroprocesses are governed by a single dynamics. Such a dynamics must imply that the micro-macro interaction in a measurement process leads to WPR. Bearing this in mind, recall that the characteristic feature distinguishing quantum evolution from WPR is that, while Schrödinger's equation is linear and deterministic (at the wave function level), WPR is nonlinear and stochastic. It is then natural to consider, as was suggested for the first time in the above quoted papers by P. Pearle, the possibility of nonlinear and stochastic modifications of the standard Schrödinger dynamics. However, the initial attempts to implement this idea were unsatisfactory for various reasons. The first, which we have already discussed, concerns the choice of the preferred basis: if one wants to have a universal mechanism leading to reductions, to which linear manifolds should the reduction mechanism drive the statevector? Or, equivalently, which of the (generally) incompatible ‘potentialities’ of the standard theory should we choose to make actual? The second, referred to as the trigger problem by Pearle (1989), is the problem of how the reduction mechanism can become more and more effective in going from the micro to the macro domain. The solution to this problem constitutes the central feature of the Collapse Theories of the GRW type. To discuss these points, let us briefly review the first consistent Collapse model to appear in the literature.
Within such a model, originally referred to as QMSL (Quantum Mechanics with Spontaneous Localizations), the problem of the choice of the preferred basis is solved by noting that the most embarrassing superpositions, at the macroscopic level, are those involving different spatial locations of macroscopic objects. Actually, as Einstein stressed, this is a crucial point which has to be faced by anybody aiming to take a macro-objective position about natural phenomena: ‘A macro-body must always have a quasi-sharply defined position in the objective description of reality’ (Born, 1971, p. 223). Accordingly, QMSL considers the possibility of spontaneous processes, which are assumed to occur instantaneously and at the microscopic level, which tend to suppress the linear superpositions of differently localized states. The required trigger mechanism must then follow consistently. The key assumption of QMSL is the following: each elementary constituent of any physical system is subjected, at random times, to random and spontaneous localization processes (which we will call hittings) around appropriate positions. To have a precise mathematical model one has to be very specific about the above assumptions; in particular one has to make explicit HOW the process works, i.e., which modifications of the wave function are induced by the localizations, WHERE it occurs, i.e., what determines the occurrence of a localization at a certain position rather than at another one, and finally WHEN, i.e., at what times, it occurs. The answers to these questions are as follows. Let us consider a system of \(N\) distinguishable particles and let us denote by \(F(\boldsymbol{q}_1, \boldsymbol{q}_2 , \ldots ,\boldsymbol{q}_N )\) the coordinate representation (wave function) of the state vector (we disregard spin variables since hittings are assumed not to act on them).

1. The answer to the question HOW is: if a hitting occurs for the \(i\)-th particle at point \(\boldsymbol{x}\), the wave function is instantaneously multiplied by an (appropriately normalized) Gaussian function \[ G(\boldsymbol{q}_i, \boldsymbol{x}) = K \exp[-\{1/(2d^2)\}(\boldsymbol{q}_i -\boldsymbol{x})^2], \] where \(d\) represents the localization accuracy. Let us denote by \[ L_i (\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N ; \boldsymbol{x}) = F(\boldsymbol{q}_1, \boldsymbol{q}_2, \ldots, \boldsymbol{q}_N) G(\boldsymbol{q}_i, \boldsymbol{x}) \] the wave function immediately after the localization, as yet unnormalized.

2. As concerns the specification of WHERE the localization occurs, it is assumed that the probability density \(P(\boldsymbol{x})\) of its taking place at the point \(\boldsymbol{x}\) is given by the square of the norm of the state \(L_i\) (i.e., by the integral of the modulus squared of the function \(L_i\) over the \(3N\)-dimensional configuration space). This implies that hittings occur with higher probability at those places where, in the standard quantum description, there is a higher probability of finding the particle. Note that the above prescription introduces nonlinear and stochastic elements in the dynamics. The constant \(K\) appearing in the expression of \(G(\boldsymbol{q}_i, \boldsymbol{x})\) is chosen in such a way that the integral of \(P(\boldsymbol{x})\) over the whole space equals 1.

3. Finally, the question WHEN is answered by assuming that the hittings occur at randomly distributed times, according to a Poisson distribution, with mean frequency \(f\).
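The HOW and WHERE prescriptions can be illustrated numerically. The following is a minimal sketch, assuming a single spinless particle in one dimension and arbitrary units (the grid, the value of \(d\) and the initial two-bump wave function are illustrative choices of ours, not prescribed by the model; the WHEN prescription, i.e., the Poisson-distributed times, is not simulated here):

```python
# Minimal sketch of a single QMSL 'hitting' for one particle in 1D.
import numpy as np

rng = np.random.default_rng(1)

q = np.linspace(-10.0, 10.0, 2001)   # position grid (arbitrary units)
dq = q[1] - q[0]
d = 1.0                              # localization accuracy (illustrative)

# Superposition of two far-apart localized bumps ('h' near -5, 't' near +5)
psi = np.exp(-(q + 5.0)**2) + np.exp(-(q - 5.0)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)

def L(x):
    """HOW: multiply the wave function by the Gaussian G(q, x)."""
    return psi * np.exp(-(q - x)**2 / (2 * d**2))

# WHERE: the hit centre x is sampled with probability ~ ||L(x)||^2
weights = np.array([np.sum(np.abs(L(x))**2) * dq for x in q])
weights /= weights.sum()
x_hit = rng.choice(q, p=weights)

psi = L(x_hit)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)   # renormalize after the hit
print(f"hit centred at x = {x_hit:+.2f}; only the bump near it survives")
```

Running this, the hit centre is almost always sampled near one of the two bumps, and the post-hit, renormalized wave function survives only there, which is exactly the behaviour described in the next paragraph.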
It is straightforward to convince oneself that the hitting process leads, when it occurs, to the suppression of the linear superpositions of states in which the same particle is well localized at different positions separated by a distance greater than \(d\). As a simple example we can consider a single particle whose wavefunction is different from zero only in two small and far apart regions \(h\) and \(t\). Suppose that a localization occurs around \(h\); the state after the hitting is then appreciably different from zero only in a region around \(h\) itself. A completely analogous argument holds for the case in which the hitting takes place around \(t\). As concerns points which are far from both \(h\) and \(t\), one easily sees that the probability density for such hittings, according to the multiplication rule determining \(L_i\), turns out to be practically zero, and moreover, that if such a hitting were to occur, the wave function of the system, once normalized, would remain almost unchanged. We can now discuss the most important feature of the theory, i.e., the Trigger Mechanism. To understand the way in which the spontaneous localization mechanism is enhanced by increasing the number of particles which are in far apart spatial regions (as compared to \(d\)), one can consider, for simplicity, the superposition \(\ket{S}\), with equal weights, of two macroscopic pointer states \(\ket{H}\) and \(\ket{T}\), corresponding to two different pointer positions \(H\) and \(T\), respectively. Taking into account that the pointer is ‘almost rigid’ and contains a macroscopic number \(N\) of microscopic constituents, the state can be written, in obvious notation, as: \[\tag{4} \ket{S} = [\ket{1 \text{ near } h_1} \ldots \ket{N \text{ near } h_N} + \ket{1 \text{ near } t_1} \ldots \ket{N \text{ near } t_N}], \] where \(h_i\) is near \(H\), and \(t_i\) is near \(T\). The states appearing in the first term on the right-hand side of equation (4) have coordinate representations which are different from zero only when their arguments \((1,\ldots ,N)\) are all near \(H\), while those of the second term are different from zero only when they are all near \(T\). It is now evident that if any of the particles (say, the \(i\)-th particle) undergoes a hitting process, e.g., near the point \(h_i\), the multiplication prescription leads practically to the suppression of the second term in (4). Thus any spontaneous localization of any of the constituents amounts to a localization of the pointer. The hitting frequency is therefore effectively amplified proportionally to the number of constituents. Notice that, for simplicity, the argument makes reference to an almost rigid body, i.e., to one for which all particles are around \(H\) in one of the states of the superposition and around \(T\) in the other. It should however be obvious that what really matters in amplifying the reductions is the number of particles which are in different positions in the two states appearing in the superposition itself. Under these premises we can now proceed to choose the parameters \(d\) and \(f\) of the theory, i.e., the localization accuracy and the mean localization frequency. The argument just given allows one to understand how one can choose the parameters in such a way that the quantum predictions for microscopic systems remain fully valid, while the embarrassing macroscopic superpositions in measurement-like situations are suppressed in very short times.
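The resulting amplification is easy to quantify: since the first hitting suffered by any one of the \(N\) constituents reduces the whole pointer, reductions occur at the amplified rate \(Nf\), i.e., with mean waiting time \(1/(Nf)\). A minimal sketch (using the GRW value of \(f\) quoted just below; the values of \(N\) are illustrative choices of ours):

```python
# The first hitting on ANY of the N constituents localizes the pointer,
# so reductions occur at the amplified rate N*f.
f = 1e-16                        # hitting frequency per particle (s^-1)
for N in (1.0, 1e6, 1e23):       # one particle, a macromolecule, a pointer
    print(f"N = {N:.0e}: mean reduction time ~ {1.0 / (N * f):.1e} s")
# N = 1    : ~1e+16 s (hundreds of millions of years)
# N = 1e23 : ~1e-07 s
```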
Accordingly, as a consequence of the unified dynamics governing all physical processes, individual macroscopic objects acquire definite macroscopic properties. The choice suggested in the GRW-model is: \[\begin{align} \tag{5} f &= 10^{-16} \text{ s}^{-1} \\ d &= 10^{-5} \text{ cm} \end{align}\] It follows that a microscopic system undergoes a localization, on average, every hundred million years, while a macroscopic one undergoes a localization every \(10^{-7}\) seconds. With reference to the challenging version of the macro-objectification problem presented by Schrödinger with the famous example of his cat, J.S. Bell comments (1987, p. 44): [within QMSL] the cat is not both dead and alive for more than a split second. Besides the extremely low frequency of the hittings for microscopic systems, the fact that the localization width is large compared to the dimensions of atoms (so that even when a localization occurs it does very little violence to the internal economy of an atom) also plays an important role in guaranteeing that no violation of well-tested quantum mechanical predictions is implied by the modified dynamics. Some remarks are appropriate. QMSL, being precisely formulated, allows one to locate precisely the ‘split’ between micro and macro, reversible and irreversible, quantum and classical. The transition between the two types of ‘regimes’ is governed by the number of particles which are well localized at positions further apart than \(10^{-5}\) cm in the two states whose coherence is going to be dynamically suppressed. In principle, the model is testable against quantum mechanics. However, for the above choice of the values of the parameters, its predictions do not contradict any already established fact about microsystems and macrosystems. Concerning the choice of the parameters of the model, it has to be stressed that, obviously, the just mentioned quantum-to-classical transition region depends crucially on their values. The situation concerning the two parameters is rather different; in fact \(d\) cannot be made smaller than \(10^{-5}\) cm without inducing unacceptable effects on the internal dynamics, e.g., of solids, and it cannot be made much larger if one wants macrosystems to end up rather accurately localized. On the contrary, an appreciable variation of \(f\) turns out to be possible. With reference to this point we would like to mention that Adler (2003) has suggested changing its value by a factor of the order of \(10^9\). The reasons for this derive from the requirement that latent image formation in photography occur immediately after a grain of the emulsion has been excited, and that, when a human eye is hit by a few photons (the perceptual threshold being very low), reduction take place in the rods of the eye. As we will discuss in what follows, if one takes the original GRW value for \(f\), reduction cannot occur in the rods (because a relatively small number of molecules—less than \(10^5\)—are affected), but only during the transmission of the nervous signal within the brain, a process which involves the displacement of a number of ions of the order of \(10^{12}\). It is interesting to remark that the drastic change suggested by Adler (2003) has physical implications which have already been experimentally falsified; see Curceanu et al. 2015, Bassi et al. 2010, Vinante et al. 2015 (Other Internet Resources), and Toros & Bassi 2016 (Other Internet Resources).
6. The Continuous Spontaneous Localization Model (CSL)

The model just presented (QMSL) has a serious drawback: it does not allow one to deal with systems containing identical constituents, because it does not respect the symmetry or antisymmetry requirements for such particles. A quite natural idea to overcome this difficulty would be that of relating the hitting process not to the individual particles but to the particle number density averaged over an appropriate volume. This can be done by introducing a new phenomenological parameter in the theory, which however can be eliminated by an appropriate limiting procedure (see below). Another way to overcome this problem derives from injecting the physically appropriate principles of the GRW model into the original approach of P. Pearle. This line of thought has led to a quite elegant formulation of a dynamical reduction model, usually referred to as CSL (Pearle 1989; Ghirardi, Pearle, and Rimini 1990), in which the discontinuous jumps which characterize QMSL are replaced by a continuous stochastic evolution in the Hilbert space (a sort of Brownian motion of the statevector). We will not enter into the rather technical details of this interesting development of the original GRW proposal, since the basic ideas and physical implications are precisely the same as those of the original formulation. Actually, one could argue that the above idea of tackling the problem of identical particles by considering the average particle number within an appropriate volume is correct. In fact it has been proved (Ghirardi, Pearle, and Rimini 1990) that for any CSL dynamics there is a hitting dynamics which, from a physical point of view, is ‘as close to it as one wants’. Instead of entering into the details of the CSL formalism, it is useful, for the discussion below, to analyze a simplified version of it.

7. A Simplified Version of CSL

With the aim of understanding the physical implications of the CSL model, such as the rate of suppression of coherence, we now make some simplifying assumptions. First, we assume that we are dealing with only one kind of particle (e.g., the nucleons); second, we disregard the standard Schrödinger term in the evolution; and finally, we divide the whole space into cells of volume \(d^3\). We denote by \(\ket{n_1, n_2 ,\ldots}\) a Fock state in which there are \(n_i\) particles in cell \(i\), and we consider a superposition of two states \(\ket{n_1, n_2 , \ldots}\) and \(\ket{m_1, m_2 , \ldots}\) which differ in the occupation numbers of the various cells of the universe. With these assumptions it is quite easy to prove that the rate of suppression of the coherence between the two states (so that the final state is one of the two and not their superposition) is governed by the quantity: \[\tag{6} \exp\{-f [(n_1 - m_1)^2 + (n_2 - m_2)^2 +\ldots]t\}, \] all cells of the universe appearing in the sum within the square brackets in the exponent. Apart from differences relating to the identity of the constituents, the overall physics is quite similar to that implied by QMSL. Equation (6) offers the opportunity of discussing the possibility of relating the suppression of coherence to gravitational effects. In fact, with reference to this equation we notice that the worst case scenario (from the point of view of the time necessary to suppress coherence) is the one corresponding to the superposition of two states for which the occupation numbers of the individual cells differ only by one unit.
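A short numerical sketch of the suppression factor of equation (6) may be useful here (we use the GRW value of \(f\); the numbers of displaced nucleons are illustrative and anticipate the estimate worked out in the next paragraph). In this worst case, \(k\) cells with occupation numbers differing by one unit give \(\sum_i (n_i - m_i)^2 = k\):

```python
# Suppression factor exp(-f*k*t) of equation (6) in the worst case in
# which k cells have occupation numbers differing by one unit each.
import numpy as np

f = 1e-16    # localization frequency (s^-1)
t = 1e-2     # time scale of human perception (s)

for k in (1e12, 1e18):
    print(f"k = {k:.0e} displaced nucleons: "
          f"residual coherence {np.exp(-f * k * t):.3g}")
# k = 1e12 leaves the superposition essentially untouched (factor ~ 1),
# while k = 1e18 suppresses it appreciably within the perception time.
```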
Indeed, in this case the amplifying effect of taking the square of the differences disappears. Let us then raise the question: how many nucleons (at worst) should occupy different cells, in order for the given superposition to be dynamically suppressed within the time which characterizes human perceptual processes? Since such a time is of the order of \(10^{-2}\) sec and \(f = 10^{-16}\) sec\(^{-1}\), the number of displaced nucleons must be of the order of \(10^{18}\), which corresponds, as regards the order of magnitude, to a Planck mass. This figure seems to point in the same direction as Penrose's attempts to relate reduction mechanisms to quantum gravitational effects (Penrose 1989). Obviously, the model theory we are discussing implies various further physical effects which deserve to be discussed, since they might allow a test of the theory with respect to standard quantum mechanics. For reviews, see (Bassi and Ghirardi 2003; Adler 2007; Bassi et al. 2013). We briefly list the most promising types of experiments which in the future might allow such a crucial test.

1. Effects in superconducting devices. A detailed analysis has been presented in (Ghirardi and Rimini 1990). As shown there, and as follows from estimates about possible effects for superconducting devices (Rae 1990; Gallis and Fleming 1990; Rimini 1995) and for the excitation of atoms (Squires 1991), it turns out not to be possible, with present technology, to perform clear-cut experiments allowing one to discriminate the model from standard quantum mechanics.

2. Loss of coherence in diffraction experiments with macromolecules. The group of Arndt and Zeilinger in Vienna has performed several diffraction experiments involving macromolecules. The most well known include C\(_{60}\) (720 nucleons) (Arndt et al. 1999), C\(_{70}\) (840 nucleons) (Hackermueller et al. 2004) and C\(_{30}\)H\(_{12}\)F\(_{30}\)N\(_2\)O\(_4\) (1030 nucleons) (Gerlich et al. 2007). These experiments aim at testing the validity of the superposition principle towards the macroscopic scale. The challenge is very exciting: near-future technology will probably allow one to perform experiments with systems containing up to \(10^6\) nucleons and, accordingly, such experiments will impose the most severe limitations on the parameters of Collapse theories.

3. Loss of coherence in opto-mechanical interferometers. Recently, an interesting proposal for testing the superposition principle by resorting to an experimental set-up involving a (mesoscopic) mirror has been advanced (Marshall et al. 2003). This stimulating proposal has led a group of scientists directly interested in Collapse Theories (Bassi et al. 2005) to check whether the proposed experiment might be a crucial one for testing dynamical reduction models versus quantum mechanics. The problem is extremely subtle because the extension of the oscillations of the mirror is much smaller than the localization accuracy of GRW, so that the localization processes become almost ineffective. However, quite recently a detailed reconsideration of the physics of such systems has been performed, and it has allowed one to draw the relevant conclusion that the proposal by Adler (2007) of changing the frequency of the GRW theory by a factor like the one he considered is untenable.

4. Spontaneous X-ray emission from Germanium. Collapse models not only forbid macroscopic superpositions to be stable; they also exhibit several other features which are forbidden by the standard theory.
One of these is the spontaneous emission of radiation from otherwise stable systems, like atoms. While the standard theory predicts that such systems—if not excited—do not emit radiation, collapse models allow for radiation to be produced. The emission rate has been computed both for free charged particles (Fu 1997) and for hydrogenic atoms (Adler et al. 2007). The theoretical predictions were compatible with current experimental data (Fu 1997). At any rate, the importance of such experiments lies in the fact that—so far—they provide the strongest upper bounds on the collapse parameters (Adler et al. 2007). But this is not the whole story: very recently Curceanu et al. (2015), following this line of research, have been able to prove experimentally that the proposal by Adler (2007) of a drastic change of the frequency of the localizations with respect to that of the original GRW paper is definitely incompatible with the experimental data.

5. In recent years, another line of research has been proposed, one which makes direct reference to the way in which collapse models account for the psycho-physical correspondence, which we will discuss in Section 10. The suggested approach might lead to completely new and fundamentally different practical tests of Collapse theories. The basic facts concerning the proposal deserve to be mentioned. In almost all physical situations we have analyzed, the appreciable dynamical changes of the system (typically, the spreading of the center-of-mass position of a macroscopic object) take a time (years) which is enormously longer than the one between two localizations \((10^{-7}\) sec). On the contrary, as we will discuss below, in the case of conscious perceptions, the collapse time of two brain states in a superposition and the time which is necessary for the emergence of a definite perception are quite similar, and this has some (small but significant) implications concerning the probabilities of the outcomes. This point has been analyzed in detail and explicitly evaluated by resorting to a simple model of a quantum system subjected to reduction processes (Ghirardi et al. 2014). The idea is to consider a spin-1/2 particle whose spin rotates around the \(x\)-axis with a frequency of about one hundredth of that of the random measurements ascertaining whether its spin is UP or DOWN with respect to the \(z\)-axis. It turns out that, for a superposition with amplitudes \(a\) and \(b\) of the two eigenstates of S\(_z\), the probabilities of the two supervening perceptions associated to the two outcomes will differ by about 1% from those predicted by quantum mechanics, i.e., \(\lvert a\rvert^2\) and \(\lvert b\rvert^2\), respectively. The test would also be quite interesting for the general meaning of collapse theories, because it would give practical evidence that, when a superposition of two different microscopic states able to trigger two precise (and different) perceptions is presented, the brain actually collapses the wavefunction, yielding only one perception, a clear-cut indication that the standard theory cannot govern the whole process.

Summarizing, we stress that, due to recent technological improvements, experiments in which one might test the deviations from Standard Quantum Theory implied by Collapse Models seem to have become more feasible. Actually, a lot of work has been done and is still going on in this direction.
The subject is developing rapidly: important papers have appeared, and interesting experimental work has been, and is being, performed. For a detailed technical analysis and for a precise specification of the limits for the parameters \(d\) and \(f\) which have been derived, we refer the reader to the papers by Bassi et al. (2013), Donadi et al. (2013 a,b), Bahrami et al. (2014), Großardt et al. (2015, Other Internet Resources), and Vinante et al. (2015).

8. Some Remarks about Collapse Theories

A. Pais famously recalls in his biography of Einstein: We often discussed his notions on objective reality. I recall that during one walk Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it (Pais 1982, p. 5). In the context of Einstein's remarks in Albert Einstein, Philosopher-Scientist (Schilpp 1949), we can regard this reference to the moon as an extreme example of ‘a fact that belongs entirely within the sphere of macroscopic concepts’, as is also a mark on a strip of paper that is used to register the outcome of a decay experiment, so that, as a consequence, there is hardly likely to be anyone who would be inclined to consider seriously […] that the existence of the location is essentially dependent upon the carrying out of an observation made on the registration strip. For, in the macroscopic sphere it simply is considered certain that one must adhere to the program of a realistic description in space and time; whereas in the sphere of microscopic situations one is more readily inclined to give up, or at least to modify, this program (p. 671). Einstein adds, however, that the ‘macroscopic’ and the ‘microscopic’ are so inter-related that it appears impracticable to give up this program in the ‘microscopic’ alone (p. 674). One might speculate that Einstein would not have taken the DRP seriously, given that it is a fundamentally indeterministic program. On the other hand, the DRP allows precisely for this middle ground, between giving up a ‘classical description in space and time’ altogether (the moon is not there when nobody looks), and requiring that it be applicable also at the microscopic level (as within some kind of ‘hidden variables’ theory). It would seem that the pursuit of ‘realism’ for Einstein was more a program that had been very successful than an a priori commitment, and that in principle he would have accepted attempts requiring a radical change in our classical conceptions concerning microsystems, provided they would nevertheless allow one to take a macrorealist position matching our definite perceptions at this scale. In the DRP, we can say of an electron in an EPR-Bohm situation that ‘when nobody looks’, it has no definite spin in any direction, and in particular that when it is in a superposition of two states localised far away from each other, it cannot be thought to be at a definite place (see, however, the remarks in Section 11). In the macrorealm, however, objects do have definite positions and are generally describable in classical terms. That is, in spite of the fact that the DRP program is not adding ‘hidden variables’ to the theory, it implies that the moon is definitely there even if no sentient being has ever looked at it. In the words of J. S. Bell, the DRP allows electrons (in general microsystems) to enjoy the cloudiness of waves, while allowing tables and chairs, and ourselves, and black marks on photographs, to be rather definitely in one place rather than another, and to be described in classical terms (Bell 1986, p.
364). Such a program, as we have seen, is implemented by assuming only the existence of wave functions, and by proposing a unified dynamics that governs both microscopic processes and ‘measurements’. As regards the latter, no vague definitions are needed. The new dynamical equations govern the unfolding of any physical process, and the macroscopic ambiguities that would arise from the linear evolution are theoretically possible, but only of momentary duration, of no practical importance and no source of embarrassment. We have not yet analyzed the implications about locality, but since in the DRP program no hidden variables are introduced, the situation can be no worse than in ordinary quantum mechanics: ‘by adding mathematical precision to the jumps in the wave function’, the GRW theory ‘simply makes precise the action at a distance of ordinary quantum mechanics’ (Bell 1987, p. 46). Indeed, a detailed investigation of the locality properties of the theory becomes possible, as shown by Bell himself (Bell 1987, p. 47). Moreover, as will become clear when we discuss the interpretation of the theory in terms of mass density, the QMSL and CSL theories lead in a natural way to an account of the behaviour of macroscopic objects corresponding to our definite perceptions of them, the main objective of Einstein's requirements. The achievements of the DRP which are relevant for the debate about the foundations of quantum mechanics can also be concisely summarized in the words of H.P. Stapp: The collapse mechanisms so far proposed could, on the one hand, be viewed as ad hoc mutilations designed to force ontology to kneel to prejudice. On the other hand, these proposals show that one can certainly erect a coherent quantum ontology that generally conforms to ordinary ideas at the macroscopic level (Stapp 1989, p. 157).

9. Relativistic Dynamical Reduction Models

As soon as the GRW proposal appeared, it attracted the attention of J.S. Bell and stimulated him to look at it from the point of view of relativity theory. As he stated subsequently (Bell 1989a): When I saw this theory first, I thought that I could blow it out of the water, by showing that it was grossly in violation of Lorentz invariance. That's connected with the problem of ‘quantum entanglement’, the EPR paradox. Actually, he had already investigated this point by studying the effect on the theory of a transformation mimicking a nonrelativistic approximation of a Lorentz transformation, and he arrived (Bell 1987) at a surprising conclusion: … the model is as Lorentz invariant as it could be in its nonrelativistic version. It takes away the ground of my fear that any exact formulation of quantum mechanics must conflict with fundamental Lorentz invariance. What Bell had actually proved, by resorting to a two-times formulation of the Schrödinger equation, is that the model violates locality by violating outcome independence and not, as deterministic hidden variable theories do, parameter independence. Indeed, with reference to this point we recall that, as is well known (Suppes and Zanotti 1976; van Fraassen 1982; Jarrett 1984; Shimony 1983; see also the entry on Bell's Theorem), Bell's locality assumption is equivalent to the conjunction of two other assumptions, viz., in Shimony's terminology, parameter independence and outcome independence. In view of the experimental violation of Bell's inequality, one has to give up either or both of these assumptions.
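For the reader's convenience, the two conditions can be stated explicitly in the standard Jarrett-Shimony form (here \(\lambda\) denotes the complete state of the pair, \(A, B\) the apparatus settings and \(a, b\) the outcomes in the two wings):

\[\begin{align} \textit{parameter independence:}\quad & P(a \mid A, B, \lambda) = P(a \mid A, \lambda), \\ \textit{outcome independence:}\quad & P(a \mid A, B, b, \lambda) = P(a \mid A, B, \lambda), \end{align}\]

and their conjunction is equivalent to Bell's factorizability condition \(P(a, b \mid A, B, \lambda) = P(a \mid A, \lambda)\,P(b \mid B, \lambda)\).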
The above splitting of the locality requirement into two logically independent conditions is particularly useful in discussing the different status of CSL and deterministic hidden variable theories with respect to relativistic requirements. Actually, as proved by Jarrett himself, when parameter independence is violated, if one had access to the variables which specify completely the state of individual physical systems, one could send faster-than-light signals from one wing of the apparatus to the other. Moreover, in Ghirardi and Grassi (1996) it has been proved that it is impossible to build a genuinely relativistically invariant theory which, in its nonrelativistic limit, exhibits parameter dependence. Here we use the term genuinely invariant to denote a theory for which there is no (hidden) preferred reference frame. On the other hand, if locality is violated only by the occurrence of outcome dependence, then faster-than-light signaling cannot be achieved (Eberhard 1978; Ghirardi, Rimini, and Weber 1980). A few years after the just mentioned proof by Bell, it was shown in complete generality (Ghirardi, Grassi, Butterfield, and Fleming 1993) that the GRW and CSL theories, just as standard quantum mechanics, exhibit only outcome dependence. This is to some extent encouraging, and shows that there are no reasons of principle making unviable the project of building a relativistically invariant dynamical reduction model. Let us be more specific about this crucial problem. P. Pearle was the first to propose (Pearle 1990) a relativistic generalization of CSL to a quantum field theory describing a fermion field coupled to a meson scalar field, enriched with the introduction of stochastic and nonlinear terms. A quite detailed discussion of this proposal was presented in (Ghirardi et al. 1990a), where it was shown that the theory enjoys all the properties which are necessary in order to meet the relativistic constraints. Pearle's approach requires the precise formulation of the idea of stochastic Lorentz invariance. The proposal can be summarized in the following terms: One considers a fermion field coupled to a meson field and puts forward the idea of inducing localizations for the fermions through their coupling to the mesons and a stochastic dynamical reduction mechanism acting on the meson variables. In practice, one considers Heisenberg evolution equations for the coupled fields and a Tomonaga-Schwinger CSL-type evolution equation with a skew-hermitian coupling to a c-number stochastic potential for the state vector. This approach has been systematically investigated in Ghirardi, Grassi, and Pearle (1990), to which we refer the reader for a detailed discussion. Here we limit ourselves to stressing that, under certain approximations, one obtains in the non-relativistic limit a CSL-type equation inducing spatial localization. However, due to the white noise nature of the stochastic potential, novel renormalization problems arise: the increase per unit time and per unit volume of the energy of the meson field is infinite, due to the fact that infinitely many mesons are created. This point has also been lucidly discussed by Bell (1989b) in the talk he delivered at Trieste on the occasion of the 25th anniversary of the International Centre for Theoretical Physics. This talk appeared under the title The Trieste Lecture of John Stewart Bell. For these reasons one cannot consider this a satisfactory example of a relativistic reduction model.
In the years following the just mentioned attempts there has been a flourishing of research aimed at obtaining the desired result. Let us briefly comment on it. As already mentioned, the source of the divergences is the assumption of point interactions between the quantum field operators in the dynamical equation for the statevector, or, equivalently, the white character of the stochastic noise. With this aspect in mind, P. Pearle (1989), L. Diosi (1990) and A. Bassi and G.C. Ghirardi (2002) reconsidered the problem from the beginning by investigating nonrelativistic theories with nonwhite Gaussian noises. The problem turns out to be very difficult from the mathematical point of view, but steps forward have been made. In recent years, a precise formulation of the nonwhite generalization (Bassi and Ferialdi 2009) of the so-called QMUPL model, which represents a simplified version of GRW and CSL, has been proposed. Moreover, a perturbative approach for the CSL model has been worked out (Adler and Bassi 2007, 2008). Further work is necessary. This line of thought is very interesting at the nonrelativistic level; however, it is not yet clear whether it will lead to a real step forward in the development of relativistic theories of spontaneous collapse. In the same spirit, Nicrosini and Rimini (Nicrosini 2003) tried to smear out the point interactions, without success, because in their approach a preferred reference frame had to be chosen in order to circumvent the nonintegrability of the Tomonaga-Schwinger equation. Other interesting and different approaches have also been suggested. Among them we mention the one by Dove and Squires (Dove 1996), based on discrete rather than continuous stochastic processes, and those by Dowker and Herbauts (Dowker 2004a) and Dowker and Henson (Dowker 2004b), formulated on a discrete space-time. Before going on we consider it important to call attention to the fact that precisely in the same years similar attempts to get a relativistic generalization of the other existing ‘exact’ theory, i.e., Bohmian Mechanics, were going on, and that they too have encountered some difficulties. Relevant steps are represented by a paper (Dürr 1999) resorting to a preferred spacetime slicing, by the investigations of Goldstein and Tumulka (Goldstein 2003) and by other scientists (Berndl et al. 1996). However, we must recognize that none of these attempts has led to a fully satisfactory solution of the problem of having a theory without observers, like Bohmian mechanics, which is perfectly satisfactory from the relativistic point of view, precisely due to the fact that they are not genuinely Lorentz invariant in the sense we have made precise before. Mention should also be made of the attempt by Dewdney and Horton (Dewdney 2001) to build a relativistically invariant model based on particle trajectories. Let us come back to the relativistic DRP. Some important changes have occurred quite recently. Tumulka (2006a) succeeded in proposing a relativistic version of the GRW theory for N non-interacting distinguishable particles, based on the consideration of a multi-time wavefunction whose evolution is governed by Dirac-like equations, and which adopts as its Primitive Ontology (see the next section) the one attaching a primary role to the space and time points at which spontaneous localizations occur, as originally suggested by Bell (1987).
To our knowledge this represents the first proposal of a relativistic dynamical reduction mechanism which satisfies all relativistic requirements. In particular, it is divergence free and foliation independent. However, it can deal only with systems containing a fixed number of noninteracting fermions. At this point explicit mention should be made of the most recent steps which concern our problem. D. Bedingham (2011), following strictly the original proposal by Pearle (1990) of a quantum field theory inducing reductions based on a Tomonaga-Schwinger equation, has worked out an analogous model which, however, overcomes the difficulties of the original model. In fact, Bedingham has circumvented the crucial problems deriving from point interactions by (paying the price of) introducing, besides the fields characterizing the Quantum Field Theories he is interested in, an auxiliary relativistic field that amounts to a smearing of the interactions whilst preserving Lorentz invariance and frame independence. Adopting this point of view, and taking advantage also of the proposal by Ghirardi (2000) concerning the appropriate way to define objective properties at any space-time point \(x\), he has been able to work out a fully satisfactory and consistent relativistic scheme for quantum field theories in which reduction processes may occur. It has also to be mentioned that, taking once more advantage of the ideas of the paper by Ghirardi (2000), several of the just quoted authors (see Bedingham et al. 2013) have been able to prove that it is possible to work out a relativistic generalization of Collapse models when their primitive ontology is taken to be the one given by the mass density interpretation for the nonrelativistic case, which we will present in what follows. In view of these results, and taking into account the interesting investigations concerning relativistic Bohmian-like theories, the conclusions that Tumulka has drawn concerning the status of attempts to account for the macro-objectification process from a relativistic perspective are well-founded: A somewhat surprising feature of the present situation is that we seem to arrive at the following alternative: Bohmian mechanics shows that one can explain quantum mechanics, exactly and completely, if one is willing to pay with using a preferred slicing of spacetime; our model suggests that one should be able to avoid a preferred slicing of spacetime if one is willing to pay with a certain deviation from quantum mechanics, a conclusion that he has rephrased and reinforced in (Tumulka 2006c): Thus, with the presently available models we have the alternative: either the conventional understanding of relativity is not right, or quantum mechanics is not exact. Very recently, a thorough and illuminating discussion of the important approach by Tumulka has been presented by Tim Maudlin (2011) in the third revised edition of his book Quantum Non-Locality and Relativity. Tumulka's position is perfectly consistent with the present ideas concerning the attempts to transform relativistic standard quantum mechanics into an ‘exact’ theory in the sense which has been made precise by J. Bell.
Since the only unified, mathematically precise and formally consistent formulations of the quantum description of natural processes are Bohmian mechanics and GRW-like theories, if one chooses the first alternative one has to accept the existence of a preferred reference frame, while in the second case one is not led to such a drastic change of position with respect to relativistic concepts, but must accept that the ensuing theory disagrees with the predictions of quantum mechanics and acquires the status of a rival theory with respect to it. In spite of the fact that the situation is, to some extent, still open and requires further investigation, it has to be recognized that the efforts which have been spent on such a program have made possible a better understanding of some crucial points and have thrown light on some important conceptual issues. First, they have led to a completely general and rigorous formulation of the concept of stochastic invariance. Second, they have prompted a critical reconsideration, based on the discussion of smeared observables with compact support, of the problem of locality at the individual level. This analysis has brought out the necessity of reconsidering the criteria for the attribution of objective local properties to physical systems. In specific situations, one cannot attribute any local property to a microsystem: any attempt to do so gives rise to ambiguities. However, in the case of macroscopic systems, the impossibility of attributing to them local properties (or, equivalently, the ambiguity associated to such properties) lasts only for time intervals of the order of those necessary for the dynamical reduction to take place. Moreover, no objective property corresponding to a local observable, even for microsystems, can emerge as a consequence of a measurement-like event occurring in a space-like separated region: such properties emerge only in the future light cone of the considered macroscopic event. Finally, recent investigations (Ghirardi and Grassi 1996; Ghirardi 2000) have shown that the very formal structure of the theory is such that it does not allow one, even conceptually, to establish cause-effect relations between space-like separated events. The conclusion of this section is that the question of whether a relativistic dynamical reduction program can find a satisfactory formulation seems to admit a positive answer. A last comment. Recently, a paper by Conway and Kochen (Conway 2006, 2006b), which has raised a lot of interest, has been published. A few words about it are in order, to clarify possible misunderstandings. The first and most important aim of the paper is the derivation of what the authors have called The Free Will Theorem, putting forward the provocative idea that if human beings are free to make their choices about the measurements they will perform on one of a pair of far-away entangled particles, then one must admit that the elementary particles involved in the experiment also have free will. One might make several comments on this statement. For what concerns us here, the relevant fact is that the authors claim that their theorem implies, as a byproduct, the impossibility of elaborating a relativistically invariant dynamical reduction model. A lively debate has arisen. In the end, Goldstein et al. (Goldstein 2010) have made clear why the argument of Conway and Kochen is not pertinent.
We may conclude that nothing in principle forbids a perfectly satisfactory relativistic generalization of the GRW theory and, as repeatedly stressed, there are many elements which indicate that this is actually feasible.

10. Collapse Theories and Definite Perceptions

Some authors (Albert and Vaidman 1989; Albert 1990, 1992) have raised an interesting objection concerning the emergence of definite perceptions within Collapse Theories. The objection is based on the fact that one can easily imagine situations leading to definite perceptions which nevertheless do not involve the displacement of a large number of particles up to the stage of the perception itself. These cases would then constitute actual measurement situations which cannot be described by the GRW theory, contrary to what happens for the idealized (according to the authors) situations considered in many presentations of it, i.e., those involving the displacement of some sort of pointer. To be more specific, the above papers consider a ‘measurement-like’ process whose output is the emission of a burst of a few photons, triggered by the position in which a particle hits a screen. This can easily be devised by considering, e.g., a Stern-Gerlach set-up in which a spin-1/2 microsystem, according to the value of its spin component, hits a fluorescent screen in different places and excites a small number of atoms which subsequently decay, emitting a small number of photons. The argument goes as follows: if one triggers the apparatus with a superposition of two spin states, since only a few atoms are excited, since the excitations involve displacements which are smaller than the characteristic localization distance of GRW, since GRW does not induce reductions on photon states and, finally, since the photon states immediately overlap, there is no way for the spontaneous localization mechanism to become effective in suppressing the ensuing superposition of the states ‘photons emerging from point \(A\) of the screen’ and ‘photons emerging from point \(B\) of the screen’. On the other hand, since the visual perception threshold is quite low (about 6-7 photons), there is no doubt that the naked eye of a human observer is sufficient to detect whether the luminous spot on the screen is at \(A\) or at \(B\). The conclusion follows: in the case under consideration no dynamical reduction can take place, and as a consequence no measurement is over and no outcome is definite, up to the moment in which a conscious observer perceives the spot. Aicardi et al. (1991) have presented a detailed answer to this criticism. The crucial points of the argument are the following: it is agreed that in the case considered the superposition persists for long times (actually the superposition must persist since, the system under consideration being microscopic, one could perform interference experiments which everybody would expect to confirm quantum mechanics). However, to deal in the appropriate and correct way with such a criticism, one has to consider all the systems which enter into play (electron, screen, photons and brain) and the universal dynamics governing all relevant physical processes.
A simple estimate of the number of ions which are involved in the transmission of the nervous signal up to the higher visual cortex makes it perfectly plausible that, in the process, a sufficient number of particles are displaced by a sufficient spatial amount to satisfy the conditions under which, according to the GRW theory, the suppression of the superposition of the two nervous signals will take place within the time scale of perception. To avoid misunderstandings, this analysis by no means amounts to attributing a special role to the conscious observer or to perception. The observer's brain is the only system present in the set-up in which a superposition of two states involving different locations of a large number of particles occurs. As such, it is the only place where the reduction can and actually must take place according to the theory. It is extremely important to stress that if, in place of the eye of a human being, one puts in front of the photon beams a spark chamber or a device leading to the displacement of a macroscopic pointer, or producing ink spots on a computer output, reduction will equally take place. In the given example, the human nervous system is simply a physical system, a specific assembly of particles, which performs the same function as one of these devices, if no other such device interacts with the photons before the human observer does. It follows that it is incorrect and seriously misleading to claim that the GRW theory requires a conscious observer in order for measurements to have a definite outcome. A further remark may be appropriate. The above analysis could be taken by the reader as indicating a very naive and oversimplified attitude towards the deep problem of the mind-brain correspondence. There is no claim and no presumption that GRW allows a physicalist explanation of conscious perception. It is only pointed out that, for what we know about the purely physical aspects of the process, one can state that before the nervous pulses reach the higher visual cortex, the conditions guaranteeing the suppression of one of the two signals are verified. In brief, a consistent use of the dynamical reduction mechanism in the above situation accounts for the definiteness of the conscious perception, even in the extremely peculiar situation devised by Albert and Vaidman.

11. The Interpretation of the Theory and its Primitive Ontologies

As stressed in the opening sentences of this contribution, the most serious problem of standard quantum mechanics lies in its being extremely successful in telling us about what we observe, but being basically silent on what is. This specific feature is closely related to the probabilistic interpretation of the statevector, combined with the completeness assumption of the theory. Notice that what is under discussion is the probabilistic interpretation, not the probabilistic character, of the theory. Collapse theories, too, have a fundamentally stochastic character but, due to their most specific feature, i.e., that of driving the statevector of any individual physical system into appropriate and physically meaningful manifolds, they allow for a different interpretation. One could even say (if one wants to avoid having them too, like the standard theory, speak only of what we find) that they require a different interpretation, one that accounts for our perceptions at the appropriate, i.e., macroscopic, level. We must admit that this opinion is not universally shared.
According to various authors, the 'rules of the game' embodied in the precise formulation of the GRW and CSL theories represent all there is to say about them. However, this cannot be the whole story: stricter and more precise requirements than the purely formal ones must be imposed for a theory to be taken seriously as a fundamental description of natural processes (an opinion shared by J. Bell). This request to go beyond the purely formal aspects of a theoretical scheme has been denoted as (the necessity of specifying) the Primitive Ontology (PO) of the theory in an extremely interesting recent paper (Allori et al. 2008). The fundamental requisite of the PO is that it should make absolutely precise what the theory is fundamentally about. This is not a new problem; as already mentioned, it has been raised by J. Bell since his first presentation of the GRW theory. Let me summarize the terms of the debate. Given that the wavefunction of a many-particle system lives in a (high-dimensional) configuration space, which is not endowed with a direct physical meaning connected to our experience of the world around us, Bell wanted to identify the 'local beables' of the theory, the quantities on which one could base a description of the perceived reality in ordinary three-dimensional space. In the specific context of QMSL, he (Bell 1987, p. 45) suggested that the 'GRW jumps', which we called 'hittings', could play this role. In fact, they occur at precise times in precise positions of three-dimensional space. As suggested in Allori et al. (2008), we will denote this position concerning the PO of the GRW theory as the 'flashes ontology'. However, later, Bell himself suggested that the most natural interpretation of the wavefunction in the context of a collapse theory would be that it describes the 'density […] of stuff' in the 3N-dimensional configuration space (Bell 1990, p. 30), the natural mathematical framework for describing a system of \(N\) particles. Allori et al. (2008) have appropriately pointed out that this position amounts to avoiding commitment about the PO of the theory and, consequently, to leaving vague the precise and meaningful connections it permits one to establish between the mathematical description of the unfolding of physical processes and our perception of them. The interpretation which, in the opinion of the present writer, is most appropriate for collapse theories has been proposed in (Ghirardi, Grassi and Benatti 1995) and has been referred to in Allori et al. (2008) as 'the mass density ontology'. Let us briefly describe it. First of all, various investigations (Pearle and Squires 1994) had made clear that QMSL and CSL needed a modification, i.e., the characteristic localization frequency of the elementary constituents of matter had to be made proportional to the mass characterizing the particle under consideration. In particular, the original frequency for the hitting processes, \(f = 10^{-16}\) sec\(^{-1}\), is the one characterizing the nucleons, while, e.g., electrons would suffer hittings with a frequency reduced by a factor of about 2000. Unfortunately we have no space to discuss here the physical reasons which make this choice appropriate; we refer the reader to the above paper, as well as to the recent detailed analysis by Peruzzi and Rimini (2000). With this modification, what the nonlinear dynamics strives to make 'objectively definite' is the mass distribution in the whole universe.
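Numerically, the mass-proportionality just mentioned fixes the electron hitting frequency immediately; a one-line check, using only the figures quoted in the text:

```python
# Mass-proportional hitting frequency: nucleons set the scale quoted above,
# and the electron rate follows from m_e / m_nucleon ~ 1/1836.
f_nucleon = 1e-16                 # s^-1, the original QMSL value
f_electron = f_nucleon / 1836     # ~5.4e-20 s^-1, i.e. reduced by a factor
print(f_electron)                 # of about 2000, as stated in the text
```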
Second, a deep critical reconsideration (Ghirardi, Grassi, and Benatti 1995) has made evident how the concept of 'distance' that characterizes the Hilbert space is inappropriate in accounting for the similarity or difference between macroscopic situations. Just to give a convincing example, consider three states \(\ket{h}, \ket{h^*}\) and \(\ket{t}\) of a macrosystem (let us say a massive macroscopic bulk of matter), the first corresponding to its being located here, the second to its having the same location but one of its atoms (or molecules) being in a state orthogonal to the corresponding state in \(\ket{h}\), and the third having exactly the same internal state as the first but being differently located (there). Then, despite the fact that the first two states are indistinguishable from each other at the macrolevel, while the first and the third correspond to completely different and directly perceivable situations, the Hilbert space distance between \(\ket{h}\) and \(\ket{h^*}\) is equal to that between \(\ket{h}\) and \(\ket{t}\). When the localization frequency is related to the mass of the constituents, then, in complete generality (i.e., even when one is dealing with a body which is not almost rigid, such as a gas or a cloud), the mechanism leading to the suppression of the superpositions of macroscopically different states is fundamentally governed by the integral of the squared differences of the mass densities associated with the two superposed states. Actually, in the original paper the mass density at a point was identified with its average over the characteristic volume of the theory, i.e., \(10^{-15}\) cm\(^3\) around that point. It is however easy to convince oneself that there is no need to do so and that the mass density at any point, directly identified by the statevector (see below), is the appropriate quantity on which to base an appropriate ontology. Accordingly, we take the following attitude: what the theory is about, what is real 'out there' at a given space point \(\boldsymbol{x}\), is just a field, i.e., a variable \(m(\boldsymbol{x},t)\) given by the expectation value of the mass density operator \(M(\boldsymbol{x})\) at \(\boldsymbol{x}\), obtained by multiplying the mass of any kind of particle by the number density operator for the considered type of particle and summing over all possible types of particles which can be present: \[\begin{align} \tag{7} m(\boldsymbol{x},t) &= \langle F,t \mid M(\boldsymbol{x}) \mid F,t \rangle; \\ M(\boldsymbol{x}) &= {\sum}_{(k)} m_{(k)}a^*_{(k)}(\boldsymbol{x})a_{(k)}(\boldsymbol{x}). \end{align}\] Here \(\ket{F,t}\) is the statevector characterizing the system at the given time, and \(a^*_{(k)}(\boldsymbol{x})\) and \(a_{(k)}(\boldsymbol{x})\) are the creation and annihilation operators for a particle of type \(k\) at point \(\boldsymbol{x}\). It is obvious that within standard quantum mechanics such a function cannot be endowed with any objective physical meaning, due to the occurrence of linear superpositions which give rise to values that do not correspond to what we find in a measurement process or what we perceive. In the case of the GRW or CSL theories, if one considers only the states allowed by the dynamics, one can give a description of the world in terms of \(m(\boldsymbol{x},t)\), i.e., one recovers a physically meaningful account of physical reality in the usual 3-dimensional space and time.
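The contrast just described is easy to exhibit numerically. In the toy sketch below, a macroscopic 'body' is \(N\) atoms of unit mass on a grid; \(\ket{h^*}\) differs from \(\ket{h}\) by a single orthogonal internal factor and \(\ket{t}\) by a rigid displacement, so \(\langle h|h^*\rangle = \langle h|t\rangle = 0\) and the two Hilbert-space distances coincide, while the integral of the squared mass-density difference separates them sharply. Everything here is a deliberately crude stand-in for the operators of Equation 7, not an implementation of them:

```python
import numpy as np

# Toy model: N atoms, each localized at one grid cell, each carrying an
# internal two-level degree of freedom.  The mass density m(x) simply
# counts unit-mass atoms per cell.

N, L = 1000, 4000

def mass_density(positions):
    m = np.zeros(L)
    for x in positions:
        m[x] += 1.0          # one unit of mass per atom
    return m

pos_here  = np.arange(0, N)            # body located "here"
pos_there = np.arange(2000, 2000 + N)  # same body located "there"

m_h  = mass_density(pos_here)    # |h>
m_hs = mass_density(pos_here)    # |h*>: one internal factor flipped, same positions
m_t  = mass_density(pos_there)   # |t>

# Hilbert-space distances: <h|h*> = 0 (one orthogonal internal factor) and
# <h|t> = 0 (disjoint supports), so ||h - h*|| = ||h - t|| = sqrt(2).
d_hilbert_h_hs = np.sqrt(2.0)
d_hilbert_h_t  = np.sqrt(2.0)

# Mass-density distance: integral (here: sum) of the squared difference.
d_mass_h_hs = np.sum((m_h - m_hs)**2)   # = 0: macroscopically identical
d_mass_h_t  = np.sum((m_h - m_t)**2)    # = 2N: macroscopically distinct

print(d_hilbert_h_hs, d_hilbert_h_t)    # sqrt(2), sqrt(2): blind to the difference
print(d_mass_h_hs, d_mass_h_t)          # 0.0, 2000.0
```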
To illustrate this crucial point we consider, first of all, the embarrassing situation of a macroscopic object in a superposition of two differently located position states. We have then simply to recall that in a collapse model relating reductions to mass density differences, the dynamics suppresses in extremely short times the embarrassing superpositions of such states, to recover the mass distribution corresponding to our perceptions. Let us come now to a microsystem, and let us consider the equal-weight superposition of two states \(\ket{h}\) and \(\ket{t}\) describing a microscopic particle in two different locations. Such a state gives rise to a mass distribution corresponding to 1/2 of the mass of the particle in each of the two considered space regions. This seems, at first sight, to contradict what is revealed by any measurement process. But in such a case we know that the dynamics running all natural processes within GRW ensures that whenever one tries to locate the particle one will always find it in a definite position, e.g., one and only one of the Geiger counters which might be triggered by the passage of the particle will fire, just because a superposition of 'a counter which has fired' and 'one which has not fired' is dynamically forbidden. This analysis shows that one can consider at all levels (the micro and the macroscopic ones) the field \(m(\boldsymbol{x},t)\) as accounting for 'what is out there', as originally suggested by Schrödinger with his realistic interpretation of the square of the wave function of a particle as representing the 'fuzzy' character of the mass (or charge) of the particle. Obviously, within standard quantum mechanics such a position cannot be maintained because 'wavepackets diffuse, and with the passage of time become infinitely extended … but however far the wavefunction has extended, the reaction of a detector … remains spotty', as appropriately remarked in (Bell 1990). As we hope to have made clear, the picture is radically different when one takes into account the new dynamics, which succeeds perfectly in reconciling the spread and sharp features of the wavefunction and of the detection process, respectively. It is also extremely important to stress that, by resorting to the quantity (7), one can define an appropriate 'distance' between two states as the integral over the whole 3-dimensional space of the square of the difference of \(m(\boldsymbol{x},t)\) for the two given states, a quantity which turns out to be perfectly appropriate to ground the concept of macroscopically similar or distinguishable Hilbert space states. In turn, this distance can be used as a basis to define a sensible psychophysical correspondence within the theory.

12. The Problem of the Tails of the Wave Function

In recent years, there has been a lively debate around a problem which has its origin, according to some of the authors who have raised it, in the fact that the localization processes, which correspond to multiplying the wave function by a Gaussian and thus lead to wave functions strongly peaked around the position of the hitting, nevertheless allow the final wave function to be different from zero over the whole of space. The first criticism of this kind was raised by A. Shimony (1990) and can be summarized by his sentence: one should not tolerate 'tails in wave functions which are so broad that their different parts can be discriminated by the senses, even if very low probability amplitude is assigned to them'.
After a localization of a macroscopic system, typically the pointer of the apparatus, its centre of mass will be associated with a wave function which is different from zero over the whole of space. If one adopts the probabilistic interpretation of the standard theory, this means that even when the measurement process is over, there is a nonzero (even though extremely small) probability of finding the pointer in an arbitrary position, instead of the one corresponding to the registered outcome. This is taken as unacceptable, as indicating that the DRP does not actually overcome the macro-objectification problem. Let us state immediately that the (alleged) problem arises entirely from keeping the standard interpretation of the wave function unchanged, in particular assuming that its modulus squared gives the probability density of the position variable. However, as we have discussed in the previous section, there are much more serious reasons of principle which require one to abandon the probabilistic interpretation and replace it either with the 'flash ontology' or with the 'mass density ontology' which we have discussed above. Before entering into a detailed discussion of this subtle point, we need to bring the problem into better focus. We cannot avoid making two remarks. Suppose one adopts, for the moment, the conventional quantum position. We agree that, within such a framework, the fact that wave functions never have strictly compact spatial support can be considered puzzling. However, this is an unavoidable problem arising directly from the mathematical features (spreading of wave functions) and from the probabilistic interpretation of the theory, and not at all a problem peculiar to the dynamical reduction models. Indeed, the fact that, e.g., the wave function of the center of mass of a pointer or of a table does not have compact support has never been taken to be a problem for standard quantum mechanics. When, e.g., the wave function of the center of mass of a table is extremely well peaked around a given point in space, it has always been accepted that it describes a table located at a certain position, and that this corresponds in some way to our perception of it. It is obviously true that, for the given wave function, the quantum rules entail that if a measurement were performed the table could be found (with an extremely small probability) to be kilometers away, but this is not the measurement or the macro-objectification problem of the standard theory. The latter concerns a completely different situation, i.e., that in which one is confronted with a superposition, with comparable weights, of two macroscopically separated wave functions, both of which possess tails (i.e., have non-compact support) but are appreciably different from zero only in far-away narrow intervals. This is the really embarrassing situation which conventional quantum mechanics is unable to make understandable. To which perception of the position of the pointer (of the table) does this wave function correspond? The implications for this problem of the adoption of the QMSL theory should be obvious. Within GRW, superpositions of two states which, when considered individually, are assumed to lead to different and definite perceptions of macroscopic locations, are dynamically forbidden. If some process tends to produce such superpositions, then the reducing dynamics induces the localization of the centre of mass (the associated wave function being appreciably different from zero only in a narrow and precise interval).
Correspondingly, the possibility arises of attributing to the system the property of being in a definite place and thus of accounting for our definite perception of it. Summarizing, we stress once more that the criticism about the tails, as well as the requirement that the appearance of macroscopically extended (even though extremely small) tails be strictly forbidden, is exclusively motivated by uncritically committing oneself to the probabilistic interpretation of the theory, even for what concerns the psycho-physical correspondence: when this position is taken, states assigning non-exactly-vanishing probabilities to different outcomes of position measurements should correspond to ambiguous perceptions about these positions. Since neither within the standard formalism nor within the framework of dynamical reduction models can a wave function have compact support, taking such a position leads one to conclude that it is just the linear character of the Hilbert space description of physical systems which has to be given up. It ought to be stressed that there is nothing in the GRW theory which forbids, or makes it problematic, to assume that the localization function has compact support, but it also has to be noted that following this line would be totally useless: since the evolution equation contains the kinetic energy term, any function, even if it has compact support at a given time, will instantaneously spread, acquiring a tail extending over the whole of space. If one sticks to the probabilistic interpretation and one accepts the completeness of the description of the states of physical systems in terms of the wave function, the tail problem cannot be avoided. The solution to the tails problem can only derive from abandoning completely the probabilistic interpretation and from adopting a more physical and realistic interpretation relating 'what is out there' to, e.g., the mass density distribution over the whole universe. In this connection, the following example will be instructive. Take a massive sphere of normal density and mass of about 1 kg. Classically, the mass of this body would be totally concentrated within the radius of the sphere, call it \(r\). In QMSL, after the extremely short time interval in which the collapse dynamics leads to a 'regime' situation, and if one considers a sphere with radius \(r + 10^{-5}\) cm, the integral of the mass density over the rest of space turns out to be an incredibly small fraction (of the order of \(10^{-10^{15}}\)) of the mass of a single proton. In such conditions, it seems quite legitimate to claim that the macroscopic body is localised within the sphere. However, even this quite reasonable conclusion has been questioned: it has been claimed (Lewis 1997) that the very existence of the tails implies that the enumeration principle (i.e., the fact that the claim 'particle 1 is within this box & particle 2 is within this box & … & particle \(n\) is within this box & no other particle is within this box' implies the claim 'there are \(n\) particles within this box') does not hold, if one takes seriously the mass density interpretation of collapse theories. This paper has given rise to a long debate which it would be inappropriate to reproduce here.
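The quoted order of magnitude can be checked against Gaussian tail asymptotics. The sketch below works in logarithms, since the numbers underflow every floating-point format, and assumes, purely for illustration, an effective spread of \(2 \times 10^{-13}\) cm for the collective coordinate; this width is a placeholder, not the value actually derived in Ghirardi, Grassi and Benatti (1995):

```python
import numpy as np

# Order-of-magnitude check on the mass lying outside a localized macroscopic
# body, via the Gaussian tail asymptotics P(|x| > D) ~ exp(-D^2 / (2 s^2)),
# computed in log10.  The spread s is an assumed illustrative value.

n_nucleons = 1.0 / 1.67e-27    # nucleons in a 1 kg body, ~6e26
D = 1e-5                       # cm: distance beyond the sphere radius
s = 2e-13                      # cm: assumed effective spread (illustration)

log10_tail = -(D**2 / (2 * s**2)) * np.log10(np.e)
log10_fraction = np.log10(n_nucleons) + log10_tail   # in units of m_proton

print(f"tail mass / proton mass ~ 10^({log10_fraction:.2e})")
# ~ 10^(-5.4e14): the same fantastic suppression, of the order of
# "1 over 10 to the power 10^15", claimed in the text.
```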
We conclude this brief analysis by stressing once more that, in the opinion of the present writer, all the disagreements and misunderstandings concerning this problem have their origin in the fact that the idea that the probabilistic interpretation of the wave function must be abandoned has not been fully accepted by the authors who find some difficulties in the proposed mass density interpretation of the Collapse Theories. For a recent reconsideration of the problem we refer the reader to the paper by Lewis (2003).

13. The Status of Collapse Models and Recent Positions about them

We recall that, as stated in Section 3, the macro-objectification problem has been at the centre of the most lively and most challenging debate originated by the quantum view of natural processes. According to the majority of those who adhere to the orthodox position, such a problem does not deserve particular attention: classical concepts are a logical prerequisite for the very formulation of quantum mechanics and, consequently, the measurement process itself, the dividing line between the quantum and the classical world, cannot and must not be investigated, but simply accepted. This position has been lucidly summarized by J. Bell himself (1981): 'Making a virtue of necessity and influenced by positivistic and instrumentalist philosophies, many came to hold not only that it is difficult to find a coherent picture but that it is wrong to look for one—if not actually immoral then certainly unprofessional.' The situation has seen many changes in the course of time, and the necessity of making a clear distinction between what is quantum and what is classical has given rise to many proposals for 'easy solutions' to the problem, which are based on the possibility, for all practical purposes (FAPP), of locating the splitting between these two faces of reality at different levels. Then came Bohmian mechanics, a theory which has made clear, in a lucid and perfectly consistent way, that there is no reason of principle requiring a dichotomic description of the world. A universal dynamical principle runs all physical processes and, even though 'it completely agrees with standard quantum predictions', it implies effective wave-packet reduction in micro-macro interactions and the classical behaviour of classical objects. As we have mentioned, the other consistent proposal, at the nonrelativistic level, of a conceptually satisfactory solution of the macro-objectification problem is represented by the Collapse Theories which are the subject of these pages. Contrary to Bohmian mechanics, they are rival theories of quantum mechanics, since they make different predictions (even though quite difficult to put into evidence) concerning various physical processes. Let us now analyze other recent critical positions concerning the two approaches just mentioned (in what follows I will take advantage of the nice analysis of a paper which I have been asked to referee and of which I do not know the author). Various physicists have criticized Bohm's approach on the basis that, being empirically indistinguishable from quantum mechanics, such an approach is an example of 'bad science' or of 'a degenerate research program'.
Needless to say, I do not consider such criticisms appropriate; the conceptual advantages and the internal consistency of the approach render it an extremely appealing theoretical scheme (incidentally, one should not forget that it was precisely the critical investigation of such a theory that led Bell to derive his famous and conceptually extremely relevant inequality). On the contrary, I am fully convinced that to consider as acceptable a theory like the standard one, which is incapable of accounting for the way in which it assumes the measurement apparatuses to work, and which, to deal with them, introduces a postulate that plainly contradicts the other assumptions of the theory, is not a scientifically tenable position. This being the situation, one would think that theories like the GRW model would be exempt from an analogous charge, since they actually are (in principle) empirically different from the standard theory. For instance, they disagree with such a theory since they forbid the occurrence of macroscopic massive entangled states. In spite of this, they have been the object of an analogous attack by the adherents of the 'new orthodoxy' (Bub 1997; Joos et al. 1996; Zurek 1993), who point out that environment-induced decoherence shows that, FAPP, collapse theories are simply phenomenological accounts of the reduced state to which one has to resort since one has no control over the degrees of freedom of the environment. When one takes such a position, one is claiming that, essentially, GRW cannot be taken as a fundamental description of nature, mainly because it suffers from the limitation of being empirically indistinguishable from the standard theory, provided such a theory is correctly applied taking into account the actual physical situation. Also in this case, and even at the level at which such an analysis is performed, the practical indistinguishability from the standard approach should not be regarded as a sufficient reason not to take collapse models seriously. In fact, there are many very well known and compelling reasons (see, e.g., Bassi and Ghirardi 2000; Adler 2003) to prefer a logically consistent unified theory to one which makes sense only due to the alleged practical impossibility of detecting the superpositions of macroscopically distinguishable states. At any rate, in principle, such theories can be tested against the standard one, and it seems that such a challenge is already under investigation. But this is not the whole story. Another criticism, aimed at denying the potential interest of collapse theories, refers to the fact that within any such theory the ensuing dynamics for the statistical operator can be considered as the reduced dynamics deriving from a unitary (and, consequently, essentially a standard quantum) dynamics for the states of an enlarged Hilbert space of a composite quantum system \(S+E\) involving, besides the physical system \(S\) of interest, an ancilla \(E\) whose degrees of freedom are completely inaccessible: due to the quantum dynamical semigroup nature of the evolution equation for the statistical operator, any GRW-like model can always be seen as a phenomenological model deriving from a standard quantum evolution on a larger Hilbert space. In this way, the unitary deterministic evolution characterizing quantum mechanics would be fully restored.
Apart from the obvious remark that such a critical attitude completely fails to grasp—and indeed, purposefully ignores—the most important feature of collapse theories, i.e., that of dealing with individual quantum systems and not with statistical ensembles, and of yielding a perfectly satisfactory description matching our perceptions concerning individual macroscopic systems, invoking an inaccessible ancilla to account for the nonlinear and stochastic character of GRW-type theories is once more a purely verbal way of avoiding facing the real puzzling aspects of the quantum description of macroscopic systems. This is not the only negative aspect of such a position; any attempt that considers it legitimate to introduce inaccessible entities into the theory (when one takes into consideration that there are infinitely many possible and inequivalent ways of doing so) really amounts to embarking on a 'degenerate research program'. Other reasons for ignoring the dynamical reduction program have been put forward recently by the community of scientists involved in the interesting and exciting field of quantum information. We will not spend too much time analyzing and discussing this new position about the foundational issues which have motivated the elaboration of collapse theories. The crucial fact is that, from this perspective, one takes the theory not to be about something real 'occurring out there' in a real world, but simply about information. This point is made extremely explicit in a recent paper (Zeilinger 2005): information is the most basic notion of quantum mechanics, and it is information about possible measurement results that is represented in the quantum state. Measurement results are nothing more than states of the classical apparatus used by the experimentalist. The quantum system then is nothing other than the consistently constructed referent of the information represented in the quantum state. It is clear that if one takes such a position, almost all motivations to be worried by the measurement problem disappear, and with them the reasons to work out what Bell has denoted as 'an exact version of quantum mechanics'. The most appropriate reply to this type of criticism is to recall that J. Bell (1990) included 'information' among the words which must have no place in a formulation with any pretension to physical precision. In particular, he stressed that one cannot even mention information unless one has given a precise answer to the two following questions: Whose information? and Information about what? A much more serious attitude is to call attention, as many serious authors do, to the fact that since collapse theories represent rival theories with respect to standard quantum mechanics, they lead to the identification of experimental situations which would allow, in principle, crucial tests to discriminate between the two. As we have discussed above, presently, fully discriminating tests seem not to be completely out of reach.

14. Summary

We hope to have succeeded in giving a clear picture of the ideas, the implications, the achievements and the problems of the DRP. We conclude by stressing once more our position with respect to the Collapse Theories.
Their interest derives entirely from the fact that they have given some hints about a possible way out from the difficulties characterizing standard quantum mechanics, by proving that explicit and precise models can be worked out which agree with all known predictions of the theory and nevertheless allow one, on the basis of a universal dynamics governing all natural processes, to overcome in a mathematically clean and precise way the basic problems of the standard theory. In particular, the Collapse Models show how one can work out a theory that makes it perfectly legitimate to take a macrorealistic position about natural processes, without contradicting any of the experimentally tested predictions of standard quantum mechanics. Finally, they might give precise hints about where to look in order to put into evidence, experimentally, possible violations of the superposition principle.

Bibliography

• Adler, S., 2003, "Why Decoherence has not Solved the Measurement Problem: A Response to P. W. Anderson", Studies in History and Philosophy of Modern Physics, 34: 135.
• Adler, S., 2007, "Lower and Upper Bounds on CSL Parameters from Latent Image Formation and IGM Heating", Journal of Physics, A40: 2935.
• Adler, S. and Bassi, A., 2007, "Collapse models with non-white noises", Journal of Physics, A40: 15083.
• –––, 2008, "Collapse models with non-white noises II", Journal of Physics, A41: 395308.
• Adler, S. and Ramazanoglu, F.M., 2007, "Photon emission rate from atomic systems in the CSL model", Journal of Physics, A40: 13395.
• Aicardi, F., Borsellino, A., Ghirardi, G.C., and Grassi, R., 1991, "Dynamic models for state-vector reduction—Do they ensure that measurements have outcomes?", Foundations of Physics Letters, 4: 109.
• Albert, D.Z., 1990, "On the Collapse of the Wave Function", in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• –––, 1992, Quantum Mechanics and Experience, Harvard University Press, Cambridge, Mass.
• Albert, D.Z. and Vaidman, L., 1989, "On a proposed postulate of state reduction", Physics Letters, A139: 1.
• Allori, V., Goldstein, S., Tumulka, R., and Zanghi, N., 2008, "On the Common Structure of Bohmian Mechanics and the Ghirardi-Rimini-Weber Theory", British Journal for the Philosophy of Science, 59: 353–389.
• Arndt, M., Nairz, O., Voss-Andreae, J., van der Zouw, G. and Zeilinger, A., 1999, "Wave-particle duality of C60 molecules", Nature, 401: 680.
• Bahrami, M., Donadi, S., Ferialdi, L., Bassi, A., Curceanu, C., Di Domenico, A., Hiesmayr, B.C., 2014, "Are collapse models testable with quantum oscillating systems? The case of neutrinos, kaons, chiral molecules", Scientific Reports, 3: 1952.
• Bassi, A. and Ferialdi, L., 2009, "Non-Markovian quantum trajectories: An exact result", Physical Review Letters, 103: 050403.
• –––, 2009, "Non-Markovian dynamics for a free quantum particle subject to spontaneous collapse in space: general solution and main properties", Physical Review, A80: 012116.
• Bassi, A., Deckert, D.-A., and Ferialdi, L., 2010, "Breaking quantum linearity: constraints from human perception and cosmological implications", Europhysics Letters, 92: 50006.
• Bassi, A. and Ghirardi, G.C., 2000, "A general argument against the universal validity of the superposition principle", Physics Letters, A275: 373.
• –––, 2001, "Counting marbles: Reply to Clifton and Monton", British Journal for the Philosophy of Science, 52: 125.
• –––, 2002, "Dynamical reduction models with general Gaussian noises", Physical Review, A65: 042114.
• –––, 2003, "Dynamical Reduction Models", Physics Reports, 379: 257.
• Bassi, A., Ippoliti, E. and Adler, S., 2005, "Towards Quantum Superpositions of a Mirror: an Exact Open Systems Analysis", Journal of Physics, A38: 2715.
• Bassi, A., Lochan, K., Satin, S., Singh, T.P., and Ulbricht, H., 2013, "Models of Wave-function Collapse, Underlying Theories, and Experimental Tests", Reviews of Modern Physics, 85: 471.
• Bedingham, D., 2011, "Relativistic State Reduction Dynamics", Foundations of Physics, 41: 686.
• Bedingham, D., Duerr, D., Ghirardi, G.C., Goldstein, S., Tumulka, R. and Zanghi, N., 2014, "Matter Density and Relativistic Models of Wave Function Collapse", Journal of Statistical Physics, 154: 623.
• Bell, J.S., 1981, "Bertlmann's socks and the nature of reality", Journal de Physique, Colloque C2, suppl. au numero 3, Tome 42: 41.
• –––, 1986, "Six possible worlds of quantum mechanics", in Proceedings of the Nobel Symposium 65: Possible Worlds in Arts and Sciences, de Gruyter, New York.
• –––, 1987, "Are there quantum jumps?", in Schrödinger—Centenary Celebration of a Polymath, C.W. Kilmister (ed.), Cambridge University Press, Cambridge.
• –––, 1989a, "Towards an Exact Quantum Mechanics", in Themes in Contemporary Physics II, S. Deser, R.J. Finkelstein (eds.), World Scientific, Singapore.
• –––, 1989b, "The Trieste Lecture of John Stewart Bell", Journal of Physics, A40: 2919.
• –––, 1990, "Against 'measurement'", in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• Berndl, K., Duerr, D., Goldstein, S., Zanghi, N., 1996, "Nonlocality, Lorentz Invariance, and Bohmian Quantum Theory", Physical Review, A53: 2062.
• Bohm, D., 1952, "A suggested interpretation of the quantum theory in terms of hidden variables. I & II", Physical Review, 85: 166; ibid., 85: 180.
• Bohm, D. and Bub, J., 1966, "A proposed solution of the measurement problem in quantum mechanics by a hidden variable theory", Reviews of Modern Physics, 38: 453.
• Born, M., 1971, The Born-Einstein Letters, Walker and Co., New York.
• Brown, H.R., 1986, "The insolubility proof of the quantum measurement problem", Foundations of Physics, 16: 857.
• Bub, J., 1997, Interpreting the Quantum World, Cambridge University Press, Cambridge.
• Busch, P. and Shimony, A., 1996, "Insolubility of the quantum measurement problem for unsharp observables", Studies in History and Philosophy of Modern Physics, 27B: 397.
• Clifton, R. and Monton, B., 1999a, "Losing your marbles in wavefunction collapse theories", British Journal for the Philosophy of Science, 50: 697.
• –––, 1999b, "Counting marbles with 'accessible' mass density: A reply to Bassi and Ghirardi", British Journal for the Philosophy of Science, 51: 155.
• Conway, J. and Kochen, S., 2006, "The Free Will Theorem", Foundations of Physics (to appear); also quant-ph/0604079.
• –––, 2006b, "On Adler's Conway Kochen Twin Argument", Foundations of Physics (to appear); also quant-ph/0610147.
• –––, 2007, "Reply to Comments of Bassi, Ghirardi and Tumulka on the Free Will Theorem", Foundations of Physics (to appear); also quant-ph/0701016.
• Curceanu, C., Hiesmayr, B.C., and Piscicchia, K., 2015, "X-rays help to unfuzzy the concept of measurement", Journal of Advances in Physics, 4: 263.
• Dowker, F. and Herbauts, I., 2004a, "Simulating Causal Collapse Models", Classical and Quantum Gravity, 21: 2936.
• –––, 2004b, "A Spontaneous Collapse Model on a Lattice", Journal of Statistical Physics, 115: 1394.
• Dowker, F. and Henson, J., 2004, "Spontaneous collapse models on a lattice", Journal of Statistical Physics, 115: 1327.
• d'Espagnat, B., 1971, Conceptual Foundations of Quantum Mechanics, Reading, MA: W.A. Benjamin.
• Dirac, P.A.M., 1948, The Principles of Quantum Mechanics, Oxford: Clarendon Press.
• Dewdney, C. and Horton, G., 2001, "A non-local, Lorentz-invariant, hidden-variable interpretation of relativistic quantum mechanics based on particle trajectories", Journal of Physics, A34: 9871.
• Diosi, L., 1990, "Relativistic theory for continuous measurement of quantum fields", Physical Review, A42: 5086.
• Donadi, S., Bassi, A., Curceanu, C., Di Domenico, A. and Hiesmayr, B.C., 2013a, "Are Collapse Models Testable via Flavor Oscillations?", Foundations of Physics, 43: 1066.
• Donadi, S., Bassi, A., Curceanu, C., Ferialdi, L., 2013b, "The effect of spontaneous collapses on neutrino oscillations", Foundations of Physics, 43: 1066.
• Dove, C. and Squires, E.J., 1995, "Symmetric Versions of Explicit Wavefunction Collapse Models", Foundations of Physics, 25: 1267.
• Dürr, D., Goldstein, S., Münch-Berndl, K., Zanghi, N., 1999, "Hypersurface Bohm-Dirac models", Physical Review, A60: 2729.
• Eberhard, P., 1978, "Bell's theorem and different concepts of locality", Nuovo Cimento, 46B: 392.
• Fine, A., 1970, "Insolubility of the quantum measurement problem", Physical Review, D2: 2783.
• Fonda, L., Ghirardi, G.C., and Rimini, A., 1978, "Decay theory of unstable quantum systems", Reports on Progress in Physics, 41: 587.
• Fu, Q., 1997, "Spontaneous radiation of free electrons in a nonrelativistic collapse model", Physical Review, A56: 1806.
• Gallis, M.R. and Fleming, G.N., 1990, "Environmental and spontaneous localization", Physical Review, A42: 38.
• Gerlich, S., Hackermüller, L., Hornberger, K., Stibor, A., Ulbricht, H., Gring, M., Goldfarb, F., Savas, T., Müri, M., Mayor, M. and Arndt, M., 2007, "A Kapitza-Dirac-Talbot-Lau interferometer for highly polarizable molecules", Nature Physics, 3: 711.
• Ghirardi, G.C., 2000, "Local measurements of nonlocal observables and the relativistic reduction process", Foundations of Physics, 30: 1337.
• –––, 2007, "Some reflections inspired by my research activity in quantum mechanics", Journal of Physics, A40: 2891.
• Ghirardi, G.C. and Grassi, R., 1996, "Bohm's Theory versus Dynamical Reduction", in Bohmian Mechanics and Quantum Theory: an Appraisal, J. Cushing et al. (eds.), Kluwer, Dordrecht.
• Ghirardi, G.C., Grassi, R., and Benatti, F., 1995, "Describing the macroscopic world—Closing the circle within the dynamical reduction program", Foundations of Physics, 25: 5.
• Ghirardi, G.C., Grassi, R., Butterfield, J., and Fleming, G.N., 1993, "Parameter dependence and outcome dependence in dynamic models for state-vector reduction", Foundations of Physics, 23: 341.
• Ghirardi, G.C., Grassi, R., and Pearle, P., 1990, "Relativistic dynamic reduction models—General framework and examples", Foundations of Physics, 20: 1271.
• Ghirardi, G.C., Pearle, P., and Rimini, A., 1990, "Markov-processes in Hilbert-space and continuous spontaneous localization of systems of identical particles", Physical Review, A42: 78.
• Ghirardi, G.C. and Rimini, A., 1990, "Old and New Ideas in the Theory of Quantum Measurement", in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• Ghirardi, G.C., Rimini, A., and Weber, T., 1980, "A general argument against superluminal transmission through the quantum-mechanical measurement process", Lettere al Nuovo Cimento, 27: 293.
• –––, 1986, "Unified dynamics for microscopic and macroscopic systems", Physical Review, D34: 470.
• Ghirardi, G.C. and Romano, R., 2014, "Collapse Models and Perceptual Processes", Journal of Physics: Conference Series, 504: 012022.
• Gisin, N., 1984, "Quantum measurements and stochastic processes", Physical Review Letters, 52: 1657, and "Reply", ibid., 53: 1776.
• –––, 1989, "Stochastic quantum dynamics and relativity", Helvetica Physica Acta, 62: 363.
• Goldstein, S. and Tumulka, R., 2003, "Opposite arrows of time can reconcile relativity and nonlocality", Classical and Quantum Gravity, 20: 557.
• Goldstein, S., Tausk, D.V., Tumulka, R., and Zanghi, N., 2010, "What does the Free Will Theorem Actually Prove?", Notices of the American Mathematical Society, 57: 1451.
• Gottfried, K., 2000, "Does Quantum Mechanics Carry the Seeds of its own Destruction?", in Quantum Reflections, D. Amati et al. (eds.), Cambridge University Press, Cambridge.
• Hackermüller, L., Hornberger, K., Brezger, B., Zeilinger, A. and Arndt, M., 2004, "Decoherence of matter waves by thermal emission of radiation", Nature, 427: 711.
• Jarrett, J.P., 1984, "On the physical significance of the locality conditions in the Bell arguments", Nous, 18: 569.
• Joos, E., Zeh, H.D., Kiefer, C., Giulini, D., Kupsch, J., and Stamatescu, I.-O., 1996, Decoherence and the Appearance of a Classical World, Springer, Berlin.
• Lewis, P., 1997, "Quantum mechanics, orthogonality and counting", British Journal for the Philosophy of Science, 48: 313.
• –––, 2003, "Four strategies for dealing with the counting anomaly in spontaneous collapse theories of quantum mechanics", International Studies in the Philosophy of Science, 17: 137.
• Marshall, W., Simon, C., Penrose, R. and Bouwmeester, D., 2003, "Towards quantum superpositions of a mirror", Physical Review Letters, 91: 130401.
• Maudlin, T., 2011, Quantum Non-Locality and Relativity, Wiley-Blackwell.
• Nicrosini, O. and Rimini, A., 2003, "Relativistic spontaneous localization: a proposal", Foundations of Physics, 33: 1061.
• Pais, A., 1982, Subtle is the Lord, Oxford University Press, Oxford.
• Pearle, P., 1976, "Reduction of statevector by a nonlinear Schrödinger equation", Physical Review, D13: 857.
• –––, 1979, "Toward explaining why events occur", International Journal of Theoretical Physics, 18: 489.
• –––, 1989, "Combining stochastic dynamical state-vector reduction with spontaneous localization", Physical Review, A39: 2277.
• –––, 1990, "Toward a Relativistic Theory of Statevector Reduction", in Sixty-Two Years of Uncertainty, A. Miller (ed.), Plenum, New York.
• –––, 1999, "Collapse Models", in Open Systems and Measurement in Relativistic Quantum Theory, H.P. Breuer and F. Petruccione (eds.), Springer, Berlin.
• –––, 1999b, "Relativistic Collapse Model With Tachyonic Features", Physical Review, A59: 80.
• Pearle, P. and Squires, E., 1994, "Bound-state excitation, nucleon decay experiments, and models of wave-function collapse", Physical Review Letters, 73: 1.
• Penrose, R., 1989, The Emperor's New Mind, Oxford University Press, Oxford.
• Peruzzi, G. and Rimini, A., 2000, "Compoundation invariance and Bohmian mechanics", Foundations of Physics, 30: 1445.
• Rae, A.I.M., 1990, "Can GRW theory be tested by experiments on SQUIDs?", Journal of Physics, A23: 57.
• Rimini, A., 1995, "Spontaneous Localization and Superconductivity", in Advances in Quantum Phenomena, E. Beltrametti et al. (eds.), Plenum, New York.
• Schrödinger, E., 1935, "Die gegenwärtige Situation in der Quantenmechanik", Naturwissenschaften, 23: 807.
• Schilpp, P.A. (ed.), 1949, Albert Einstein: Philosopher-Scientist, Tudor, New York.
• Shimony, A., 1974, "Approximate measurement in quantum-mechanics. 2", Physical Review, D9: 2321.
• –––, 1983, "Controllable and uncontrollable non-locality", in Proceedings of the International Symposium on the Foundations of Quantum Mechanics, S. Kamefuchi et al. (eds.), Physical Society of Japan, Tokyo.
• –––, 1989, "Search for a worldview which can accommodate our knowledge of microphysics", in Philosophical Consequences of Quantum Theory, J.T. Cushing and E. McMullin (eds.), University of Notre Dame Press, Notre Dame, Indiana.
• –––, 1990, "Desiderata for modified quantum dynamics", in PSA 1990, Volume 2, A. Fine, M. Forbes and L. Wessels (eds.), Philosophy of Science Association, East Lansing, Michigan.
• Squires, E., 1991, "Wave-function collapse and ultraviolet photons", Physics Letters, A158: 431.
• Stapp, H.P., 1989, "Quantum nonlocality and the description of nature", in Philosophical Consequences of Quantum Theory, J.T. Cushing and E. McMullin (eds.), University of Notre Dame Press, Notre Dame, Indiana.
• Suppes, P. and Zanotti, M., 1976, "On the determinism of hidden variables theories with strict correlation and conditional statistical independence of observables", in Logic and Probability in Quantum Mechanics, P. Suppes (ed.), Reidel, Dordrecht.
• Tumulka, R., 2006a, "A Relativistic Version of the Ghirardi-Rimini-Weber Model", Journal of Statistical Physics, 125: 821.
• –––, 2006b, "On Spontaneous Wave Function Collapse and Quantum Field Theory", Proceedings of the Royal Society, London, A462: 1897.
• –––, 2006c, "Collapse and Relativity", in Quantum Mechanics: Are there Quantum Jumps? and On the Present Status of Quantum Mechanics, A. Bassi, D. Dürr, T. Weber and N. Zanghi (eds.), AIP Conference Proceedings 844, American Institute of Physics.
• –––, 2007, "Comment on 'The Free Will Theorem'", Foundations of Physics (to appear); also quant-ph/0611283.
• van Fraassen, B., 1982, "The Charybdis of Realism: Epistemological Implications of Bell's Inequality", Synthese, 52: 25.
• Zeilinger, A., 2005, "The message of the quantum", Nature, 438: 743.
• Zurek, W.H., 1993, "Decoherence—A reply to comments", Physics Today, 46: 81.

Copyright © 2016 by Giancarlo Ghirardi
Ramón Manjarres-García (1), Gene Elizabeth Escorcia-Salas (1), Javier Manjarres-Torres (1), Ilia D. Mikhailov (2) and José Sierra-Ortega (1, corresponding author)

Nanoscale Research Letters 2012, 7:531. DOI: 10.1186/1556-276X-7-531. Received: 10 July 2012; Accepted: 29 August 2012; Published: 26 September 2012.

Keywords: Quantum dots; Adiabatic approximation; Artificial molecule. PACS: 78.67.-n; 78.67.Hc; 73.21.-b

An important feature in low-dimensional systems is the electron-electron interaction, because it plays a crucial role in understanding the electrical transport properties of quantum dots (QDs) at low temperatures [1]. Such systems may involve small or large numbers of electrons as well as being confined in one or more dimensions. The number of electrons in a QD can be varied over a considerable range. It is possible to control the size and the number of electrons and to observe their spatial distributions in QDs. The energy spectrum of a two-electron QD with a parabolic confinement, for which the two-particle wave equation can be separated completely, has been analyzed previously by using different methods [2-5]. In the present work, we propose another exactly solvable two-electron heterostructure in which two separated electrons are confined in vertically coupled QDs with a special lens-like morphology. Together with two on-axis donors, these two electrons generate an artificial hydrogen-like molecule whose properties can be controlled by varying the geometric parameters and the strength of the magnetic field applied along the symmetry axis. The model which we analyze below consists of two identical, axially symmetrical and vertically coupled QDs with an on-axis donor located in each one of them (see Figure 1). The dimensions of the heterostructure are defined by the QDs' radii \(R\), height \(W\), and the separation \(d\) between them along the z-axis. We assume that the QDs have the shape of very thin layers whose profiles are given by the following dependence of the thickness \(w\) of the layers on the distance \(\rho\) from the axis:

\[\tag{1} w(\rho) = \frac{W}{\sqrt{1+(\rho/R)^2}}\]

Figure 1. Scheme of the artificial hydrogen-like molecule.

In addition, for the sake of simplicity, we consider a model with infinite barrier confinement, which is defined in cylindrical coordinates as \(V(\boldsymbol{r}) = 0\) if \(0 < z < w(\rho)\), and \(V(\boldsymbol{r}) = \infty\) otherwise. Given that the thicknesses of the layers are much smaller than their lateral dimensions, one can take advantage of the adiabatic approximation in order to exclude from consideration the rapid particle motion along the z-axis [6, 7] and obtain the following expression for the effective Hamiltonian in polar coordinates:

\[\tag{2}\begin{align} H &= \sum_{i=1,2} H_0(\boldsymbol{\rho}_i) + V(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) + \frac{2\pi^2}{W^2}; \\ H_0(\boldsymbol{\rho}_i) &= -\Delta_i^{2D} + i\gamma\frac{\partial}{\partial\vartheta_i} + \frac{\omega^2\rho_i^2}{4}; \quad \omega^2 = \left(\frac{2\pi}{WR}\right)^2 + \gamma^2; \\ V(\boldsymbol{\rho}_1,\boldsymbol{\rho}_2) &= \frac{2}{\sqrt{d^2+(\boldsymbol{\rho}_1-\boldsymbol{\rho}_2)^2}} - \sum_{i=1,2}\left[\frac{2}{\sqrt{d^2+\rho_i^2}} + \frac{2}{\rho_i}\right] \end{align}\]

The effective Bohr radius \(a_0 = \hbar^2\epsilon/m^*e^2\) as the unit of length, the effective Rydberg \(Ry^* = e^2/2\epsilon a_0 = \hbar^2/2m^*a_0^2\) as the energy unit, and \(\gamma = e\hbar B/2m^*c\,Ry^*\) as the unit of the magnetic field strength have been used in the Hamiltonian (Equation 2), with \(m^*\) being the electron effective mass and \(\epsilon\) the dielectric constant. The polar coordinates \(\boldsymbol{\rho}_k = (\rho_k, \vartheta_k)\), labeled by \(k = 1, 2\), correspond to the first and the second electrons, respectively. It is seen that for the selected particular profile given by Equation 1, the Hamiltonian (Equation 2) coincides with one which describes two particles in a 2D quantum dot with parabolic confinement and renormalized interaction.
It is well known that such a Hamiltonian may be separated by using the center-of-mass, \(\boldsymbol{R} = (\boldsymbol{\rho}_1 + \boldsymbol{\rho}_2)/2\), and the relative, \(\boldsymbol{\rho} = \boldsymbol{\rho}_1 - \boldsymbol{\rho}_2\), coordinates [8]:

\[\tag{3} H = H_R + 2H_\rho; \quad H_R = -\frac{\Delta_R^{2D}}{2} + \frac{1}{2}\omega^2 R^2; \quad H_\rho = -\Delta_\rho^{2D} + \frac{\omega^2\rho^2}{16} - \frac{3}{\rho} - \frac{4}{\sqrt{\rho^2 + 4d^2}}\]

The wave function is factorized into two parts, \(\psi(\boldsymbol{R},\boldsymbol{\rho}) = \Phi(\boldsymbol{R})\varphi(\boldsymbol{\rho})\), describing the center-of-mass and the relative motions, respectively. Meanwhile, the total energy splits into two terms depending on two radial \((N_R, n_\rho)\) and two azimuthal \((L_R, l_\rho)\) quantum numbers:

\[\tag{4} E(N_R, L_R; n_\rho, l_\rho) = E_R(N_R, L_R) + 2E_\rho(n_\rho, l_\rho) = (2N_R + |L_R| + 1)\,\omega + 2E_\rho(n_\rho, l_\rho)\]

where the first term represents the well-known expression for the exact energy levels of a two-dimensional harmonic oscillator, labeled by the radial \(N_R = 0, 1, 2, \ldots\) and azimuthal \(L_R = 0, \pm 1, \pm 2, \ldots\) quantum numbers for the center-of-mass motion, and the relative-motion energy \(2E_\rho(n_\rho, l_\rho)\) must be found by solving the following one-dimensional Schrödinger equation:

\[\tag{5} -u''(\rho) + V(\rho)u(\rho) = E_\rho(n_\rho, l_\rho)u(\rho); \quad V(\rho) = \frac{\omega^2\rho^2}{4} + \frac{l_\rho^2 - 1/4}{\rho^2} - \frac{3}{\rho} - \frac{4}{\sqrt{\rho^2 + 4d^2}}\]

In our numerical work, the trigonometric sweep method [8] is used to solve this equation.

Results and discussion

Before the results are shown and discussed, it is useful to specify the labeling of the quantum levels of the two-electron molecular complex. According to Equation 4, the energy levels \(E(N_R, L_R; n_\rho, l_\rho)\) can be labeled by the four symbols \((N_R, L_R; n_\rho, l_\rho)\). Even and odd \(l_\rho\) correspond to the spin singlet and triplet states, respectively, consistent with the Pauli exclusion principle. We have performed numerical calculations of the energy levels of complexes with radii \(R\) between 20 and 100 nm for different separations between the layers. In all presented calculation results, the peak thickness \(W\) is taken as 0.4 nm. In order to highlight the role of the interplay between the quantum size and correlation effects in the formation of the energy spectrum of our artificial system, as distinct from the natural hydrogen molecular complex, we have plotted in Figure 2 the potential curves \(\tilde{E}(d) = E(N_R, L_R; n_\rho, l_\rho) + 2/d\), similar to those of the hydrogen molecule, in which the complex energies, with the electrostatic repulsion between the donors included, are shown as functions of the separation \(d\) between the QDs. When comparing them with the corresponding potential curves of the hydrogen molecule, one should take into account that in the structure analyzed here, the electron motion, in contrast to the hydrogen molecule, is restricted to two separated thin layers. The energy dependencies of different levels (labeled by the four quantum numbers \(N_R, L_R; n_\rho, l_\rho\)) are shown in Figure 2 for QDs with two different radii, \(R = 40\) nm and \(R = 100\) nm. A clear difference in the behavior of the potential curves is readily seen. While the curves are smooth, without any crossovers, for QDs of small radius, the corresponding potential curves suffer a drastic change as the QD radius becomes large. In the latter case, the energy levels become very sensitive to the variation of the separation between the QDs, and the quantum size effect becomes essential, providing alteration of the energy gaps, multiple crossovers of levels with the same or different spins, and the reordering of levels, as the distance between the QDs increases from 5 to 20 nm.

Figure 2. Energies \(\tilde{E}(d)\) of the double-donor complex corresponding to some low-lying levels in vertically coupled QDs, as functions of the distance between them.
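Before turning to those curves, a note on the numerics: Equation 5 is a standard one-dimensional eigenvalue problem and can be reproduced with elementary tools. The sketch below uses finite-difference diagonalization rather than the authors' trigonometric sweep method; the values of \(\omega\), \(d\) and \(l_\rho\) are placeholders, and the signs of the attractive terms follow the reconstruction of Equation 5 given above:

```python
import numpy as np

# Finite-difference sketch of the radial problem in Equation 5:
#   -u''(rho) + V(rho) u(rho) = E_rho u(rho),
# with V = w^2 rho^2/4 + (l^2 - 1/4)/rho^2 - 3/rho - 4/sqrt(rho^2 + 4 d^2).
# omega, d, l are placeholder values in effective atomic units; the paper
# itself uses the trigonometric sweep method rather than this approach.

omega, d, l = 0.5, 2.0, 0
n, rho_max = 2000, 40.0
rho = np.linspace(rho_max / n, rho_max, n)   # grid avoiding rho = 0
h = rho[1] - rho[0]

V = (omega**2 * rho**2 / 4 + (l**2 - 0.25) / rho**2
     - 3.0 / rho - 4.0 / np.sqrt(rho**2 + 4 * d**2))

# Dirichlet boundaries u(0) = u(rho_max) = 0; -u'' via the 3-point stencil.
H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))

E = np.linalg.eigvalsh(H)[:4]
print("lowest relative-motion levels E_rho:", E)
```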
We ascribe the dramatic alteration of the potential curves, observed in Figure 2 as the separation between the QDs increases from 5 to 20 nm, to the interplay between the structural confinement and the electron-electron repulsion. When the QDs' radii are small (\(R \to 0\)), the confinement is strong, and the kinetic energy (\(\sim 1/R^2\)) is larger than the electron-electron repulsion energy (\(\sim 1/R\)); the opposite holds for QDs with large radii. Therefore, as the QDs' radii increase, the arrangement of the electronic structure for different energy levels changes from one typical of a gas-like system to a crystal-like one, accompanied by crossovers of the curves and reordering of the levels. As the two-electron arrangement for large separation between the electrons becomes almost rigid, the relative motion of the electrons is frozen out, and the two-electron structure transforms into a rigid rotator with a practically fixed separation between the electrons. The electrons' motion in this case becomes similar to that in a 1D ring, and therefore the energy dependencies on the external magnetic field applied along the symmetry axis should be similar to those which exhibit the Aharonov-Bohm effect. In order to verify this hypothesis, we present in Figure 3 the calculated molecular complex energies \(E(N_R, L_R; n_\rho, l_\rho)\) of some lower levels as functions of the magnetic field strength for QDs with small (\(R = 40\) nm, upper curves) and large (\(R = 100\) nm, lower curves) radii.

Figure 3. Energies \(E(N_R, L_R; n_\rho, l_\rho)\) of some low-lying levels of the double-donor complex in vertically coupled QDs, as functions of the magnetic field.

It is seen that for the QD of small radius, the energies increase smoothly, with very few intersections. Such a dependence is typical of gas-like systems, where the paramagnetic term contribution is negligible in comparison with the diamagnetic one. On the contrary, the energy dependency curves for the QD of large radius present multiple crossovers and level-ordering inversion as the magnetic field strength increases from 0 to 1. This is due to a competition between the diamagnetic (positive) and paramagnetic (negative) terms of the Hamiltonian, whose contributions to the total two-electron energy in QDs of large radii are of the same order when the electron arrangement is similar to a rigid rotator. In other words, the correlation in this case becomes so strong that the electrons are mainly located on opposite sides within a narrow ring-like region. Finally, in Figures 4 and 5, we present results of the calculation of the density of electronic states for the double-donor molecular complex confined in vertically coupled QDs. It is clear from the discussion above that the presence of the magnetic field should provide a significant change of the density of the electronic states when the QDs' radii are sufficiently large. Indeed, it is seen from Figure 4 that under a relatively weak magnetic field (\(\gamma = 0.5\)), when the molecular complex is confined in QDs of 100-nm radius with 6-nm separation between them, the density of states becomes essentially more homogeneous, since the widths of the individual lines are broadened and the gaps between them are reduced. Such a change of the density of states is observed due to a splitting and displacement of the individual lines, accompanied by their crossovers and the reordering of the energy levels.

Figure 4. Density of states for two different values of the magnetic field, corresponding to low-lying levels of the double-donor complex in vertically coupled QDs.
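The Aharonov-Bohm-like crossover pattern invoked above for the rigid-rotator regime can be sketched directly: for a particle frozen on a ring of radius \(a\) threaded by the field, the levels go as \((l - \gamma a^2/2)^2/a^2\) and neighbouring \(l\)-levels become degenerate periodically as the field grows. The radius below is an illustrative value, not one fitted to Figure 3:

```python
import numpy as np

# Aharonov-Bohm-like spectrum of a rigid planar rotator in a magnetic field:
#   E_l(gamma) = (l - gamma * a**2 / 2)**2 / a**2   (dimensionless sketch;
# the radius a is illustrative, not a fitted value from the paper).

a = 5.0
gammas = np.linspace(0.0, 1.0, 11)   # field range, as in Figure 3

for l in range(-2, 3):               # a few azimuthal quantum numbers
    E = (l - gammas * a**2 / 2)**2 / a**2
    print(f"l = {l:+d}:", np.round(E, 3))
# Levels l and l' cross whenever gamma = (l + l') / a**2, producing the
# multiple crossovers seen for large-radius dots, and none for small ones,
# where the diamagnetic term dominates.
```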
Figure 5. Density of states for three different distances between the layers, corresponding to low-lying levels of the double-donor complex in vertically coupled QDs.

In Figure 5, we present similar curves of the molecular complex density of states for three different separations between the QDs. It is seen that the curves of the density of states are modified only slightly, essentially less than under variation of the magnetic field. In particular, the lower-energy peak positions are almost insensitive to any change of the distance between the dots, while the upper-energy peaks are noticeably displaced toward higher energy regions. In short, we propose a simple numerical procedure for calculating the energies and wave functions of a molecular complex formed by two separated on-axis donors located in vertically coupled quantum dots with a particular lens-type morphology which produces in-plane parabolic confinement. We show that in the adiabatic approximation, the Hamiltonian of this two-electron system, including the external magnetic field, is separable. The curves of the energy dependencies on the external magnetic field and on the separation between the quantum dots are presented. Analyzing the curves of the low-lying energies as functions of the magnetic field applied along the symmetry axis, we find that the two-electron configuration evolves from one similar to a rigid rotator to a gas-like one as the dot radii decrease. This quantum size effect is accompanied by a significant modification of the density of the energy states and of the energy dependencies on the external magnetic field and the geometric parameters of the structure.

Authors' Affiliations: (1) Group of Investigation in Condensed Matter Theory, Universidad del Magdalena; (2) Universidad Industrial de Santander

1. Kramer B: Proceedings of a NATO Advanced Study Institute on Quantum Coherence in Mesoscopic Systems: 1990 April 2-13; Les Arcs, France. New York: Plenum; 1991.
2. Maksym PA, Chakraborty T: Quantum dots in a magnetic field: role of electron-electron interactions. Phys Rev Lett 1990, 65: 108-111. doi:10.1103/PhysRevLett.65.108
3. Pfannkuche D, Gudmundsson V, Maksym P: Comparison of a Hartree, a Hartree-Fock, and an exact treatment of quantum-dot helium. Phys Rev B 1993, 47: 2244-2250. doi:10.1103/PhysRevB.47.2244
4. Zhu JL, Yu JZ, Li ZQ, Kawazoe Y: Exact solutions of two electrons in a quantum dot. J Phys Condens Matter 1996, 8: 7857. doi:10.1088/0953-8984/8/42/005
5. Mikhailov ID, Betancur FJ: Energy spectra of two particles in a parabolic quantum dot: numerical sweep method. Phys stat sol (b) 1999, 213: 325-332. doi:10.1002/(SICI)1521-3951(199906)213:2<325::AID-PSSB325>3.0.CO;2-W
6. Peeters FM, Schweigert VA: Two-electron quantum disks. Phys Rev B 1996, 53: 1468-1474. doi:10.1103/PhysRevB.53.1468
7. Mikhailov ID, Marín JH, García F: Off-axis donors in quasi-two-dimensional quantum dots with cylindrical symmetry. Phys stat sol (b) 2005, 242(8): 1636-1649. doi:10.1002/pssb.200540053
8. Betancur FJ, Mikhailov ID, Oliveira LE: Shallow donor states in GaAs-(Ga, Al)As quantum dots with different potential shapes. J Phys D Appl Phys 1998, 31: 3391. doi:10.1088/0022-3727/31/23/013

© Manjarres-García et al.; licensee Springer, 2012
This website http://www7b.biglobe.ne.jp/~kcy05t/ appears to refute quantum mechanics using some proof. An important paper involved is 'Calculation of Helium Ground State Energy by Bohr's Theory-Based Methods' http://arxiv.org/abs/0903.2546 (written by the website author). How can the author's claims be disproved, assuming his refutation of QM is unacceptable/false?

Note: I don't know if this question belongs here.

Edit: It may take considerable effort to refute or support his claims.

No need to refute anything. He found a solution for a particular atomic configuration in the Bohr model framework. So? The Bohr model was superseded not because it was "wrong" but because the same data can be beautifully fitted within a formal quantum mechanical theory, a much larger enterprise. – anna v Oct 20 '12 at 8:02

I have checked several pages of the website and it is full of misconceptions and false claims. – juanrga Oct 21 '12 at 10:44

@juanrga: While I agree that it is full of misconceptions, one should be precise regarding false claims, because a lot of the claims are not false except that they rub you the wrong way if you know the accepted story; they make sense for a person who (in my opinion courageously and sensibly, but experimentally wrongly) rejects entanglement. – Ron Maimon Oct 21 '12 at 13:12

3 Answers

Accepted answer: Arriving at the same answer as quantum mechanics for one particular scenario by making a bunch of ad hoc assumptions (for example: the calculation didn't work, so we'll make the orbital planes perpendicular) isn't useful. QM allows you to calculate much more than the ground states of atoms. Any competing theory (and that paper doesn't contain anything which could be described as a theory) would have to have the same breadth of applicability as QM.

At the core of the problem, I think, lies the question of why the electrons that circulate around a nucleus don't behave as would be expected according to Maxwell. This is where, in quantum mechanics, wave-particle duality comes in, and by moving from the description of a moving particle to a standing wave (which is what the time-independent Schrödinger equation essentially does), this contradiction is resolved. But the author of the page seems to be simply ignorant of the "wave" part of the description, and instead puts up a lot of straw-man arguments ("electrons are not moving in QM", etc.). He really seems to be bending over backwards to avoid having to treat an electron as a wave, but on atomic scales you simply can't have one without the other. Then there are such things as his literal interpretation of the term "electron spin", which leads to a very firm attachment of my palm to my face. I've only looked at the page briefly, but that has already been enough for me, and there is so much wrong in it that for anyone to correct it would probably take hours, if not days, a lot of patience, and someone to calm you down from time to time.

There's nothing wrong with a literal interpretation of electron spin, so long as you don't think the electron has constituent parts. Regarding the waves, this is a mistake, but he is working in the Bohr/de Broglie model, where the waves follow the classical trajectories, and he wants to continue on this path without transitioning to full quantum mechanics.
This is wrong, but you need to explain why, not just from the fact that it contradicts everything you've ever learned (which it does). – Ron Maimon Oct 22 '12 at 0:39

Well, the page nicely shows what happens if one interprets the electron spin literally: things are just bound to go haywire. Also, the best explanation for why one cannot do without the wave character is, in my opinion, right there in the "circulating particle" problem. The author of the page does at some point mumble something about de Broglie waves and that they lead to stable states in the Bohr-Sommerfeld model, but he completely fails to follow this path consistently, and furthermore doesn't realize this is the exact thing for which he wants to discredit the quantum mechanical picture. – Antimon Oct 22 '12 at 8:01

You should interpret the spin literally; nothing goes haywire, that is not the problem with the page. The author wants de Broglie waves, he just wants them to live in real space, not in the space of all possible worlds. This is the main problem, and it's not as stupid as you are making it out to be; it's still wrong, though. – Ron Maimon Oct 22 '12 at 13:09

The problem with his claims is that they don't include entanglement, which was the major prediction of the new quantum theory, as opposed to the Bohr model. At least he is correctly attacking the source of the quantum weirdness: entanglement was experimentally demonstrated from the He atom ground state originally. The main point of this attack on QM is to replace QM with an entanglement-free scheme, which will then not have to have all the "many worlds" quantum superpositions, but just some physical de Broglie waves waving along with particles in real space. This is hopeless, because entanglement is by now measured directly. The best evidence is in the Bell test experiments of Aspect. It is there that you see that classical local models are definitely wrong, and while you might cook up some explanation for the energy of helium, you can't cook up a local explanation for violations of Bell's inequality. For the particular claims on this website: perpendicular orbits don't work for helium, because the electrons repel; they don't stay perpendicular. The classical orbits are a chaotic nightmare; you can't use them to semiclassically quantize the ground state of helium, it just isn't semiclassical. If you try, you will have to use orbits which are far apart from each other an abnormal fraction of the time, due to the entanglement. This is ad hoc and inconsistent with classical equations, unlike QM (this is a repeat of what twistor59 said).
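The appeal to the Bell test experiments above can be made concrete with a few lines of code. The sketch below (Python; a minimal brute-force calculation of my own, not tied to any particular experiment) evaluates the CHSH combination for the singlet state at the standard measurement angles, giving |S| = 2√2, beyond the bound |S| ≤ 2 obeyed by every local hidden-variable model:

import numpy as np

# Spin measurement along angle theta in the x-z plane
sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)

def meas(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def E(a, b):
    # Correlation <A(a) x B(b)>; for the singlet this equals -cos(a - b)
    return singlet @ np.kron(meas(a), meas(b)) @ singlet

a, a2, b, b2 = 0.0, np.pi/2, np.pi/4, 3*np.pi/4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), "> 2 (local bound); Tsirelson bound =", 2*np.sqrt(2))

Any entanglement-free scheme of the kind discussed above would have to stay at or below 2 here, which is exactly what the Aspect-type measurements rule out.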
I know some proofs require the existence of large infinite ordinals; they provide the fuel that drives induction principles. An example of this is the use of ε0 to give a consistency proof of Peano arithmetic. What I would like to find is proofs that require the existence of a large finite ordinal. Thank you!

In a set-theory-like system, arbitrarily large finite ordinals can be proven to exist from the axioms of set theory without the axiom of infinity. In an arithmetic-like system, you can prove the existence of arbitrarily large numbers with only the axioms related to the successor. If this is what you mean, then you won't get far if you attempt something weaker. – abcdxyz Apr 11 '10 at 20:05

Or maybe there is another meaning to your question? – abcdxyz Apr 11 '10 at 20:07

I think the question is just asking about proofs where you have some kind of gigantic finite upper bound like Graham's number. – Harry Gindi Apr 11 '10 at 20:53

If that is the case, I would cite the proof that there exist infinitely many primes. – abcdxyz Apr 11 '10 at 20:56

As others have said, the word "require" in the title of the question and the logic tag create the apparently misleading impression that the OP is interested in a foundational system so weak that sufficiently large finite numbers do not exist! (Note that D. Zeilberger sincerely subscribes to this, at least as a philosophy; I had a fun email exchange with him which appears off of his opinions page.) The question rather seems to be: "What are some proofs where you can give an explicit, but ridiculously large, bound for something?" To me this is not so fascinating, but to each his own... – Pete L. Clark Apr 12 '10 at 7:33

5 Answers

This isn't addressed to logicians, but it may be of interest. I happen to know of an example in PDE that was necessary in proving the well-posedness of radial solutions of the nonlinear Schrödinger equation $$i u_{t}+\Delta u=|u|^{4}u$$ which J. Bourgain was awarded his Fields Medal for treating (J. Bourgain, Global well-posedness of defocusing 3D critical NLS in the radial case, JAMS 12 (1999), 145-171). In one of the many, many critical steps required in this proof, a bound on the energy is required. A team (J. Colliander, M. Keel, G. Staffilani, H. Takaoka, and T. Tao) has now treated the non-radial case and makes explicit the large numbers used for bounding the energy. I quote from page 36 of their paper "Global well-posedness and scattering for the energy-critical nonlinear Schrödinger equation in R^3":

"If one then runs the induction on energy argument in a direct way (rather than arguing by contradiction as we do here), this leads to a very rapidly growing (but still finite) bound for M(E) for each E, which can only be expressed in terms of multiply iterated towers of exponentials (the Ackermann hierarchy). More precisely, if we use X ↑ Y to denote exponentiation X^Y, X↑↑Y := X↑(X↑...↑X) to denote the tower formed by exponentiating Y copies of X, X↑↑↑Y := X↑↑(X↑↑...↑↑X) to denote the double tower formed by tower-exponentiating Y copies of X, and so forth, then we have computed our final bound for M(E) for large E to essentially be M(E) ≤ C ↑↑↑↑↑↑↑↑ (CE^C). This rather Bunyanesque bound is mainly due to the large number of times we invoke the induction hypothesis Lemma 4.1, and is presumably not best possible."
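The up-arrow notation quoted in this answer is easy to transcribe into code, which gives a feel for how fast the Ackermann hierarchy grows. A small sketch (Python; the y = 0 base case returning 1 is a convention I chose to make the recursion total):

def up(x, y, n):
    """x arrow^n y with n arrows: n = 1 is exponentiation, n = 2 a tower, etc."""
    if n == 1:
        return x ** y
    if y == 0:
        return 1  # empty tower/iteration convention
    return up(x, up(x, y - 1, n), n - 1)

print(up(2, 4, 1))   # 2^4 = 16
print(up(2, 4, 2))   # tower of height 4: 2^2^2^2 = 65536
print(up(2, 3, 3))   # 2 triple-arrow 3 = 2 double-arrow 4 = 65536
# up(2, 4, 3) is already a tower of height 65536, far beyond anything
# physically storable, and the bound quoted above uses eight arrows.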
Large numbers (Ackermann of Ackermann of Ackermann of ... of something) tend to creep into modern additive combinatorics arguments due to a dark ergodic witchcraft tool which they call "PET induction" (PET = polynomial exhaustion technique), and some of its cousins. You can easily google up the terms and find references; sadly, understanding what they actually do is (at least for me) a different matter altogether.

The example I know is the 1933 Skewes' number. Looking at your question again, I have no idea whether this is what you wanted.

Large numbers are used in things like the busy beaver problem. However, since it has given me some good rep in the past, I once again recommend Harvey Friedman and his Enormous Numbers in Real Life. You can search MathOverflow for Harvey and see some of the posts which quote part of his article. Gerhard "Ask Me About System Design" Paseman, 2010.04.11
30 under 30: Doing Better Chemistry through Quantum Mechanics

Meet Robert Parrish, 23, one of the up-and-coming physicists attending this year's Lindau Nobel Laureate Meeting

[Photo: American physicist Robert Parrish. Courtesy Robert Parrish]

Name: Robert Parrish
Age: 23
Born: Miami
Nationality: U.S.
Current position: Graduate student, Georgia Institute of Technology
Education: B.S., mechanical engineering, Georgia Institute of Technology

What is your field of research?

I apply quantum mechanics to simulate the motions of electrons in molecules, using computers. Accurate simulations of this type provide in silico chemical predictions about whether a molecule might make a good drug candidate, reaction catalyst, etc.

I have always been fascinated that the beautiful complexities of phenomena ranging from weather patterns to the evolution of the universe each emerge from a simple governing equation which can be written in a page or less. In my undergraduate work in engineering, I learned that the hard bit is solving those equations, and discovered that I was a natural at finding new approximations to speed up those solutions. Of all the equations I studied, the electronic Schrödinger equation of quantum chemistry was easily the most difficult, and therefore the most fun to work on.

In 10 years, I hope to be a professor. Thinking about a really tough problem 24/7, working flexible hours (albeit 100 of them per week!), and having amazing friends as research collaborators is the kind of lifestyle I would cultivate even if I was not paid to do it. As far as research goals, I am extremely interested in compression algorithms to treat the correlated motions of electrons. If an efficient scheme could be devised, we could run fully quantum-mechanical simulations of chemical systems as large as proteins. This would move a lot of chemical discovery away from the lab bench and onto the computer, in the same way that computational fluid dynamics has revolutionized the design of aircraft.

Who are your scientific heroes?

Horst Störmer of Columbia University, and formerly Bell Labs, who discovered the fractional quantum Hall effect. I saw him speak about nanotechnology when I was in high school, and was struck by how much he obviously enjoyed going to work every day, and how that enthusiasm naturally led to an amazing discovery. Also, my father, Jack Parrish, who is a flight meteorologist studying hurricanes with NOAA. As a very practical scientist, he can fly into a hurricane to eyeball it, and gather just as much information as a supercomputer simulation. This reminds me that all the pretty mathematics I work on should eventually boil down to something useful.

The funny thing about my field is that we already know how to exactly solve for the motions of the electrons for any system, but we would need an exponential amount of computer time to do it. We often joke about writing an article titled "Exact solutions of the electronic Schrödinger equation with a time machine," where we send a workstation back in time about 150 million years, and then go pick the results of the simulation up yesterday.

What activities outside of physics do you most enjoy?

I really enjoy travel and am looking forward to seeing some of Germany and Austria after the Lindau conference. Also, being a Floridian, I am experiencing severe beach withdrawal here in Atlanta, and I look forward to finding someplace to reacquire my windsurfing skills during my postdoc.

What do you hope to gain from this year's Lindau meeting?
Popular culture often seems to think that science is done at 3:00 A.M. by a solo grad student in a white coat slaving over a lab bench. While I have certainly had my fair share of evenings spent in front of green-on-black windows of C++ source code, all of my best ideas have come from having a chat over a beer with a friend. Lindau is a great opportunity to make friends like this, who might eventually become colleagues. In particular, I hope to have the opportunity to talk with many young scientists and laureates who are working in areas orthogonal to my own. After finding success in chemistry following an undergraduate degree in engineering, I am a strong believer that ideas can often cross from one field to another, and Lindau is the perfect place for that to happen. I have spent a considerable amount of time over the last two years writing a code for a method called density functional theory, for which Walter Kohn won the Nobel Prize in 1998. His development of the method has caused a renaissance in electronic structure theory over the past 25 years, but we are becoming increasingly aware of spectacular failures for some chemical systems. I am very interested to hear his take on how we might fix these errors firsthand.
Physics Made Easy

Condensed Matter IV

'Nearly free' electron approximation: we now add in the effects of the positive ion cores as a perturbing potential:

V(x)=\sum_{G\neq 0}V_G\cos(Gx)

where $G=\frac{2n\pi}{a}$ are the reciprocal lattice vectors. The spikier the potential, the more high-frequency components are present (V_G large for large G). Physically, the magnitudes of the V_G coefficients are determined by the number of electron shells, the charges of the ion cores, the atomic spacing and so forth. For our perturbation theory, we'll just take the first term in the series:

H'=V=V_0\cos\frac{2\pi x}{a}

First-order shift: \Delta E_1=\langle\psi|H'|\psi\rangle=0, since \psi=Ae^{ikx}.

Second-order shift: \Delta E_2=\sum_{\psi'}\frac{|\langle\psi'|H'|\psi\rangle|^2}{E_{\psi}-E_{\psi'}}

The matrix element \langle\psi'|H'|\psi\rangle is non-zero only if k-k'=\pm\frac{2\pi}{a}, and the denominator vanishes when E_{\psi'}=E_{\psi}; both conditions are met at k=\pm\frac{n\pi}{a} -> a band gap opens up at k=\pm\frac{n\pi}{a}. The two energies at the band gap correspond to the two standing waves \cos(\pi x/a) and \sin(\pi x/a), whose probability densities pile up on the ion cores and between them respectively, hence the energy difference.

Bloch waves are another way of looking at the above. For any periodic potential, V(\bold{r})=V(\bold{r}+\bold{R}), the Schrödinger equation has solutions of the form

\psi(\bold{r})=u(\bold{r})e^{i\bold{k}\cdot\bold{r}}

Substituting into the Schrödinger equation gives an energy eigenvalue equation; this leads to the same result as above.

Reduced and extended zone schemes

Extended zone scheme; reduced zone scheme- Umklapp everything back into the 1st Brillouin Zone (translate everything back by \frac{2n\pi}{a}, a reciprocal lattice vector).

The Fermi surface is no longer spherical. [Figure: contours of constant energy in k-space (kx, ky plane); the edges of the k-square are at ±π/a.]

Effective mass: when an electron goes into a crystal, it is no longer a free particle. These quasi-particles appear to obey Newtonian mechanics, but with an effective mass m* that is not equal to the mass of a free electron. The effective mass contains the various lattice forces, which are not known explicitly- hence it can be infinite or negative.

Group velocity: v_g=\frac{\partial \omega}{\partial k}=\frac{1}{\hbar}\frac{\partial E}{\partial k}

Acceleration: \frac{\partial v_g}{\partial t}=\frac{1}{\hbar}\frac{\partial^2 E}{\partial k \partial t}=\frac{1}{\hbar}\frac{\partial^2 E}{\partial k^2}\frac{\partial k}{\partial t}=\frac{1}{\hbar^2}\frac{\partial^2 E}{\partial k^2}\left(\hbar\frac{\partial k}{\partial t}\right)

A quick moment's thought shows that since p=\hbar k, the bracketed term (\hbar\frac{\partial k}{\partial t}) must equal the force. With that in mind, our acceleration equation looks like Newton II (acceleration = force/mass), but with an effective mass given by

\frac{1}{m^*}=\frac{1}{\hbar^2}\frac{\partial^2 E}{\partial k^2}

Metals, insulators and semiconductors

Metal: a half-filled band means electrons can move into empty states.

Semiconductor: at T=0, the valence band is full and the conduction band is empty. Above T=0, we can excite electrons into the conduction band -> both electrons and empty states in the conduction and valence bands, therefore conduction can occur.

Insulator: the band gap is so large that electrons cannot be excited out of the valence band (by the time you had enough energy to do so, you would have vaporised your sample anyway). The valence band is completely filled, so conduction cannot occur.

Two caveats:
1. Sometimes bands overlap, so even if one band is filled, electrons may be able to move into the overlapping band (i.e. conduction can occur).
2. The conduction and valence bands don't have to be the first bands- they just refer to the lowest unfilled and highest filled bands respectively.
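Returning to the nearly free electron model, the band gap derived above can be checked numerically by diagonalizing the Hamiltonian in a truncated plane-wave basis. A minimal sketch (Python; the units ħ = m = a = 1 and the value V0 = 0.5 are illustrative choices of mine):

import numpy as np

V0 = 0.5                      # strength of V(x) = V0*cos(2*pi*x/a)
nG = 11                       # plane waves e^{i(k+G)x}, G = 2*pi*n, n = -5..5
ns = np.arange(nG) - nG // 2

def bands(k):
    # Kinetic energy on the diagonal
    H = np.diag(0.5 * (k + 2*np.pi*ns)**2)
    # cos(Gx) has Fourier components V0/2 at +/-G, so it couples plane
    # waves whose wavevectors differ by one reciprocal lattice vector
    for i in range(nG - 1):
        H[i, i+1] = H[i+1, i] = V0 / 2
    return np.linalg.eigvalsh(H)

E = bands(np.pi)              # zone boundary k = pi/a
print("gap at k = pi/a:", E[1] - E[0])

The two lowest levels at the zone boundary come out split by almost exactly 2|V_G| = V0, in agreement with the degenerate perturbation theory result.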
[Figure: at T=0 the conduction band (the lowest unfilled band) is empty, while the valence band (the highest filled band) is completely filled.]

At higher temperatures we start exciting electrons out of the valence band and into the conduction band. In the conduction band, we have a few electrons and mostly empty states, so conduction can occur here. The electrons have an effective mass m_e*. In the valence band, we have mostly electrons and a few empty states- conduction can occur here also, but now the relevant electrons have a negative effective mass. It turns out to be easier to think of conduction in the valence band in terms of the movements of the empty states rather than of the electrons; these empty states behave like positively charged particles with positive effective mass, which we call holes.

Semiconducting materials: typical semiconductors include the Group IV elements Si and Ge and III-V compounds such as GaAs and InSb. In the primitive basis there are two atoms with 4+4 or 3+5 valence electrons; this total of eight electrons fills the four bands of s and p orbitals.

Direct gap semiconductor: in a direct gap semiconductor such as GaAs, the minimum of the conduction band occurs directly above the maximum of the valence band in k-space. Optical absorption cannot occur until a photon has enough energy (\hbar\omega \geq E_G) to excite an electron from the valence band to the conduction band, thus creating an electron-hole pair.

Indirect gap semiconductor: in an indirect gap semiconductor such as Si or Ge, the minimum of the conduction band is not directly above the maximum of the valence band in k-space. Two absorption regimes result:
1. The photon has enough energy to excite an electron into the conduction band, but not enough momentum, so a phonon is also required to transfer momentum to the electron (phonon-assisted transition).
2. At higher photon energies there is sufficient energy for an electron to go from the valence band to the conduction band with δk=0 (no phonon required).

Experiment to determine the band gap from optical absorption- some general notes:
1. E_G depends on temperature; make sure the temperature is controlled during the experiment.
2. A radiation source of IR frequency is needed, and also a detector (obviously).
3. Plot the absorption coefficient vs. photon energy as in the graphs above to find E_G.

Properties of holes: the motion of the electrons in a band with one empty state looks like the motion of one positively charged particle (hole). Since the electron density is referred to as n, we'll be using p for the hole density.
Hole wavevector and momentum: k_h=-k_e
Energy: \epsilon_h=-\epsilon_e
Velocity: v_h=v_e=\frac{1}{\hbar}\frac{\partial \epsilon}{\partial k}

Intrinsic semiconductors: an intrinsic semiconductor has no impurities, so n=p. We'll now go ahead and work out these intrinsic carrier densities. Put the bottom of the conduction band at E_C and the top of the valence band at E_V; E_C−E_V=E_G.
Electron density: n=\int_{E_C}^{\infty}g(\epsilon)f(\epsilon)d\epsilon

f(\epsilon)=\frac{1}{e^{\frac{\epsilon-\mu}{kT}}+1}\simeq e^{\frac{\mu-\epsilon}{kT}} for \epsilon-\mu \gg kT

g(\epsilon)d\epsilon \propto \sqrt{\epsilon-E_C}\,d\epsilon

Substituting all this into our expression for n gives

n=A\int_{E_C}^{\infty}\sqrt{\epsilon-E_C}\, e^{\frac{\mu-\epsilon}{kT}} d\epsilon = A e^{\frac{\mu-E_C}{kT}}\int_{E_C}^{\infty}\sqrt{\epsilon-E_C}\, e^{\frac{E_C-\epsilon}{kT}} d\epsilon

Let x=\frac{\epsilon-E_C}{kT}:

n=A e^{\frac{\mu-E_C}{kT}}(kT)^{\frac{3}{2}}\int_0^{\infty}\sqrt{x}e^{-x}dx

The integral over x just comes out to a constant, which we can absorb into A along with the factor of k^{3/2}. Hence

n=AT^{\frac{3}{2}}e^{\frac{\mu-E_C}{kT}}

For holes, we start at

p=\int_{-\infty}^{E_V}(1-f(\epsilon))g(\epsilon)d\epsilon

For most purposes, it should be sufficient to simply quote that, by symmetry,

p=A' T^{\frac{3}{2}}e^{\frac{E_V-\mu}{kT}}

However, in an exam you might be expected to go through that derivation as well, in which case just proceed as above.

N_C=A T^{\frac{3}{2}}=(\frac{2\pi m_e^* kT}{\hbar^2})^{\frac{3}{2}}

N_V=A' T^{\frac{3}{2}}=(\frac{2\pi m_h^* kT}{\hbar^2})^{\frac{3}{2}}

np=N_C N_V e^{\frac{-E_G}{kT}}\simeq 10^{33}m^{-6}

This is the Law of Mass Action, and it always holds, independent of μ. For an intrinsic semiconductor,

n=p=\sqrt{N_C N_V}e^{\frac{-E_G}{2kT}}

Extrinsic semiconductors have had impurities introduced to produce an excess of electrons or holes. This process is known as doping. N-type semiconductors have been doped to produce an excess of electrons, for example by introducing Group V elements such as phosphorus into the crystal structure of a Group IV element such as Si or Ge. The Group V atoms take the place of Group IV atoms- four of the five valence electrons are used in bonds, with the extra fifth electron left over. These impurities are called donors because they donate extra electrons to the structure. P-type semiconductors have been doped with electron acceptors to produce an excess of holes. In this case, the impurity atoms are from Group III, e.g. Ga or Al. (One electron missing- equivalent to donating a hole.)

Acceptor/donor energy levels: model the ionised impurity atom and the bound electron/hole as a hydrogenic system, with mass m* and permittivity ε0εr. Ionisation energy:

E=\frac{m^* e^4}{2(4\pi \epsilon_0\epsilon_r \hbar)^2}=\frac{m^*}{m_e \epsilon_r^2}E_R

where E_R is the good old Rydberg energy (13.6 eV). Substituting in typical values of εr~10, m*~0.1m_e gives us E~13.6 meV. At room temperature kT~25 meV, so we would expect all impurities to be ionised. Now define an effective 'Bohr radius' for the impurity wavefunctions:

a^*=\epsilon_r\frac{m_e}{m^*}a_0

where a_0 is, of course, the Bohr radius (0.53×10^{-10} m). If the impurity wavefunctions start to overlap, we get an impurity band which allows conduction to occur, leading to metallic behaviour.

Density of states and carrier densities: at room temperature, assume all donors are ionised, but few electrons are excited across the band gap, so

n \simeq N_D^+ \simeq N_D

(actually n=p+N_D^+ for charge neutrality, but we're saying that p is negligible). From the law of mass action, p=\frac{N_C N_V e^{\frac{-E_G}{kT}}}{N_D}.

Temperature dependence of extrinsic properties:
1. Freeze-out range. The temperature is still low, so not all impurities have been ionised yet. As the temperature increases, more impurities ionise.
2. Saturation range. All impurities are now ionised- the carrier density is constant.
3.
There is now sufficient energy to excite large numbers of electrons across the band gap, so we see a transition to intrinsic behaviour.

Conductivity of a semiconductor: \sigma = ne\mu_e + pe\mu_h, where \mu_e and \mu_h are the electron and hole mobilities. To make things easier, it will often be assumed that one type of carrier dominates, or that \mu_e and \mu_h are constant.

Cyclotron resonance can be used to determine the effective mass and the electron dispersion relation (more detail to be included at a later date).
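As a numerical companion to the intrinsic-semiconductor formulas above, the sketch below (Python) evaluates N_C, N_V, the intrinsic carrier density and the law of mass action for silicon at room temperature. The material constants are standard textbook values supplied by me for illustration, and the factor of 2 in N_C and N_V is the spin degeneracy that the text absorbs into the constants A and A':

import numpy as np

k  = 1.381e-23            # J/K
h  = 6.626e-34            # J s
m0 = 9.109e-31            # kg
T  = 300.0
EG = 1.12 * 1.602e-19     # Si band gap in J (illustrative)
me, mh = 1.08 * m0, 0.81 * m0   # density-of-states effective masses (Si)

NC = 2 * (2*np.pi*me*k*T / h**2)**1.5
NV = 2 * (2*np.pi*mh*k*T / h**2)**1.5
ni = np.sqrt(NC * NV) * np.exp(-EG / (2*k*T))
print(f"n_i ~ {ni:.2e} m^-3,  np = n_i^2 ~ {ni**2:.2e} m^-6")

# Extrinsic (n-type) case in the saturation range: n ~ ND, p from mass action
ND = 1e21                 # donor density in m^-3 (assumed)
print(f"ND = {ND:.0e} m^-3:  n ~ {ND:.1e},  p ~ {ni**2/ND:.1e} m^-3")

For silicon this gives np of order 10^32 m^-6, the same ballpark as the ~10^33 m^-6 quoted above (the exact figure depends on the material and the constants used).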
Sunday, 30 August 2015

Quantum Information Can Be Lost

Stephen Hawking claimed in a lecture at KTH in Stockholm last week (watch the lecture here and check this announcement) that he had solved the "black hole information problem":

• "The information is not stored in the interior of the black hole as one might expect, but in its boundary — the event horizon," he said.

Working with Cambridge Professor Malcolm Perry (who spoke afterward) and Harvard Professor Andrew Strominger, Hawking formulated the idea that information is stored in the form of what are known as supertranslations.

The problem arises because quantum mechanics is viewed as reversible, since the mathematical equations supposedly describing atomic physics are formally time reversible: a solution proceeding forward in time from an initial to a final state can also be viewed as a solution backward in time from the final state to the initial state. The information encoded in the initial state can thus, according to this formal argument, be recovered, and so is never lost. On the other hand, a black hole is supposed to swallow and completely destroy anything it reaches, and thus it appears that a black hole violates the postulated time reversibility of quantum mechanics and the non-destruction of information.

Hawking's solution to this apparent paradox is to claim that, after all, a black hole does not destroy information completely but "stores it on the boundary of the event horizon". Hawking thus "solves" the paradox by maintaining the non-destruction of information and giving up the complete black hole destruction of information.

The question Hawking seeks to answer is the same as the fundamental problem of classical physics which triggered the development of modern physics in the late 19th century with Boltzmann's "proof" of the 2nd law of thermodynamics: Newton's equations describing thermodynamics are formally reversible, but the 2nd law of thermodynamics states that real physics is not always reversible: information can be irreversibly lost as a system evolves towards thermodynamic equilibrium, and it then cannot be recovered. Time has a direction forward and cannot be reversed.

Boltzmann's "proof" was based on the argument that the things that do happen, do so because they are "more probable" than the things which do not happen. This deep insight opened the new physics of statistical mechanics, from which quantum mechanics borrowed its statistical interpretation.

I have presented a different, new resolution of the apparent paradox of irreversible macrophysics based on reversible microphysics, by viewing physics as analog computation with finite precision, on both macro- and microscales. A spin-off of this idea is a new resolution of d'Alembert's paradox and a new theory of flight, to be published shortly. The basic idea here is thus to replace the formal infinite precision of both classical and quantum mechanics, which leads to paradoxes without satisfactory solution, with a realistic finite precision which allows the paradoxes to be resolved in a natural way without resort to unphysical statistics. See the listed categories for lots of information about this novel idea. The result is that reversible infinite-precision quantum mechanics is a fiction without physical realization, while irreversible finite-precision quantum mechanics can be real physics, and in this world of real physics information is irreversibly lost all the time, even in the atomic world. Hawking's resolution is not convincing.
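The post's central claim, that formally reversible equations combined with finite-precision computation produce effective irreversibility, can be illustrated with a toy model. The sketch below (Python; the Chirikov standard map and the parameter K = 2 are my choice of example, not anything from the lecture) iterates a map that has an exact algebraic inverse, forward and then backward, in floating-point arithmetic; chaos amplifies the rounding errors until the initial state is unrecoverable:

import numpy as np

K = 2.0  # chaotic regime

def forward(x, p):
    p = p + K * np.sin(x)
    x = x + p
    return x, p

def backward(x, p):   # exact algebraic inverse of forward()
    x = x - p
    p = p - K * np.sin(x)
    return x, p

x0, p0 = 0.3, 0.2
for n in [10, 30, 50, 70]:
    x, p = x0, p0
    for _ in range(n):
        x, p = forward(x, p)
    for _ in range(n):
        x, p = backward(x, p)
    print(f"{n:3d} steps there and back: error = {abs(x-x0) + abs(p-p0):.2e}")

In exact arithmetic the error would be zero for any n; in floats it grows roughly exponentially with n, so beyond a few dozen steps the information about the initial state is irreversibly lost, with no statistics invoked.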
Here is the key observation explaining the occurrence of irreversibility in formally reversible systems modeled by formally non-dissipative partial differential equations, such as the Euler equations for inviscid macroscopic fluid flow and the Schrödinger equation for atomic physics: smooth solutions are strong solutions in the sense of satisfying the equations pointwise with vanishing residual, and as such are non-dissipative and reversible. But smooth solutions may break down into weak turbulent solutions, which are solutions only in a weak approximate sense with pointwise large residuals, and these solutions are dissipative and thus irreversible. An atom can thus remain in a stable ground state over time, corresponding to a smooth reversible non-dissipative solution, while an atom in an excited state may return to the ground state as a non-smooth solution under dissipation of energy in an irreversible process.

2 comments:

1. Just a short thought 'experiment' on the passage above that reads: "Boltzmann's 'proof' was based on the argument that the things that do happen, do so because they are 'more probable' than the things which do not happen." The way I read this, although I know it does not say it out loud, is: "For it is statistically improbable to occur, thus it does not occur." Hence, one never wins the lottery, for it is highly improbable that one will win; yet sometimes someone wins. I would love to see a mathematical notation for how to interpret that sentence, just for the fun of it.

2. Boltzmann's key argument is that things are likely to evolve from less probable states to more probable states, thus giving time a direction from improbable to probable. But this is an empty tautology, like something being true by definition. It is self-evident that more probable states will tend to occur more frequently than less probable states.
Teleportation (from Wikipedia, the free encyclopedia)

For other uses, see Teleportation (disambiguation). "Teleporter" redirects here; for machines with telescopic booms to move loads, see Telescopic handler.

Since 1993, teleportation has become a hot topic in quantum mechanics, namely state, energy and particle teleportation.

The use of the term teleport to describe the hypothetical movement of material objects between one place and another without physically traversing the distance between them has been documented as early as 1878.[1][2] American writer Charles Fort is credited with having coined the word teleportation in 1931[3][4] to describe the strange disappearances and appearances of anomalies, which he suggested may be connected. As in the earlier usage, he joined the Greek prefix tele- (meaning "distant") to the root of the Latin verb portare (meaning "to carry").[5] Fort's first formal use of the word occurred in the second chapter of his 1931 book Lo!:[6]

"Mostly in this book I shall specialize upon indications that there exists a transportory force that I shall call Teleportation. I shall be accused of having assembled lies, yarns, hoaxes, and superstitions. To some degree I think so, myself. To some degree, I do not. I offer the data."

The earliest recorded story of a "matter transmitter" was Edward Page Mitchell's "The Man Without a Body" in 1877.[7] See also the movie The Fly (1958) and the 1957 story of the same name. In episode 20 of Gerry and Sylvia Anderson's children's programme Fireball XL5, produced in 1962 before the advent of Star Trek and its 'transporter', the Nutopians have a "matter transporter" used to dematerialise and rematerialise people between the planet and an alien ship, not unlike the later transporter of Star Trek fame.

In the Star Trek transporter, which brought the concept of teleportation into popular knowledge, the two essential stages of the process are dematerialization and rematerialization, conceived in an era before any CGI was possible. The visual effects communicating these processes to the spectators "were created by dropping tiny bits of aluminum foil and aluminum perchlorate powder against a black sheet of cardboard, and photographing them illuminated from the side by a bright light. [...] In the studio lab, after the film was developed, the actors were superimposed fading out and the fluttering aluminum fading in, or vice versa."[8] According to an informal survey carried out by Lawrence M. Krauss on his campus, "the number of people in the United States who would not recognize the phrase 'Beam me up, Scotty' is roughly comparable to the number of people who have never heard of ketchup."[9] In his book The Physics of Star Trek, after explaining the difference between transporting information and transporting the actual atoms, Krauss notes that "The Star Trek writers seem never to have got it exactly clear what they want the transporter to do. Does the transporter send the atoms and the bits, or just the bits?"
He notes that according to the canon definition of the transporter the former seems to be the case, but that that definition is inconsistent with a number of applications, particularly incidents involving the transporter which appear to involve only a transport of information, for example the way in which it splits Kirk into two versions in the episode "The Enemy Within" or the way in which Riker is similarly split in the episode "Second Chances".[10]

Krauss writes that in order to "dematerialize" something to achieve matter teleportation, the binding energy of the atoms and probably that of all their nuclei would have to be overcome. He notes that the binding energy of electrons around nuclei is minuscule relative to the binding energy that holds nuclei together. He notes that "if we were to heat up the nuclei to about 1000 billion degrees (about a million times hotter than the temperature at the core of the Sun), then not only would the quarks inside lose their binding energies but at around this temperature matter will suddenly lose almost all of its mass. Matter will turn into radiation—or, in the language of our transporter, matter will dematerialize. [...] In energy units, this implies providing about 10 percent of the rest mass of protons and neutrons in the form of heat. To heat up a sample the size of a human being to this level would require, therefore, about 10 percent of the energy needed to annihilate the material—or the energy equivalent of a hundred 1-megaton hydrogen bombs."[11]

In 1993, Bennett et al.[12] proposed that the quantum state of a particle could be teleported to another distant particle, while the two particles themselves do not move at all. This is called state teleportation. Many theoretical and experimental papers have followed. Researchers believe that quantum teleportation is a foundation of quantum computation and quantum communication. In 2008, M. Hotta[13] proposed that it may be possible to teleport energy by exploiting quantum energy fluctuations of an entangled vacuum state of a quantum field. Some papers have been published, but there is no experimental verification yet. In 2016, Y. Wei proposed that particles themselves could teleport from one place to another.[14] This is called particle teleportation. With this concept, superconductivity can be viewed as the teleportation of some of the electrons in the superconductor, and superfluidity as the teleportation of some of the atoms in the capillary tube. Physicists are trying to verify this concept experimentally.
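Krauss's closing numbers are easy to check. A back-of-the-envelope sketch (Python; the 70 kg body mass and the 4.184e15 J per megaton of TNT conversion are my assumptions, not figures from the book):

c = 2.998e8                  # m/s
mass = 70.0                  # kg, a person (assumed)
E_rest = mass * c**2         # rest energy, ~ 6.3e18 J
E_needed = 0.10 * E_rest     # "about 10 percent of the rest mass" as heat
megaton = 4.184e15           # J per megaton of TNT
print(f"0.1*m*c^2 = {E_needed:.2e} J = {E_needed/megaton:.0f} Mt TNT")

This gives roughly 150 megatons, indeed the ballpark of "a hundred 1-megaton hydrogen bombs".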
References

1. "The Hawaiian gazette. (Honolulu [Oahu, Hawaii]) 1865-1918, October 23, 1878, Image 4".
2. "29 Jun 1878 - THE LATEST WONDER."
3. "Lo!: Part I: 2". Retrieved 2014-03-20.
4. "Less well-known is the fact that Charles Fort coined the word in 1931" in Rickard, B. and Michell, J., Unexplained Phenomena: a Rough Guide special (Rough Guides, 2000, ISBN 1-85828-589-5), p. 3.
5. "Teleportation". Etymology online. Retrieved 7 October 2016.
6. Mr. X. "Lo!: A Hypertext Edition of Charles Hoy Fort's Book". Retrieved 2014-03-20.
7. "Teleportation in early science fiction". The Worlds of David Darling. Retrieved 2014-02-04.
8. David Darling (29 April 2005). Teleportation: The Impossible Leap. John Wiley & Sons. p. 10. ISBN 978-0-471-71545-0.
9. Mieke Schüller (2 October 2005). Star Trek - The Americanization of Space. GRIN Verlag. p. 5. ISBN 978-3-638-42309-0.
10. Lawrence M. Krauss (1995). The Physics of Star Trek. Basic Books. ISBN 978-0465002047. pp. 67-68.
11. Lawrence M. Krauss (1995). The Physics of Star Trek. Basic Books. ISBN 978-0465002047. pp. 71-73.
12. C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, W. K. Wootters (1993). "Teleporting an Unknown Quantum State via Dual Classical and Einstein–Podolsky–Rosen Channels". Phys. Rev. Lett. 70, 1895–1899.
13. Hotta, Masahiro (2008). "A protocol for quantum energy distribution". Phys. Lett. A 372, 5671.
14. Wei, Yuchuan (29 June 2016). "Comment on 'Fractional quantum mechanics' and 'Fractional Schrödinger equation'". APS Physics.
Authors and titles for Oct 2016, skipping first 50 Title: Periods of quaternionic Shimura varieties. I Comments: 173 pages Subjects: Number Theory (math.NT) [52]  arXiv:1610.00167 [pdf, other] Title: Structure of attractors for boundary maps associated to Fuchsian groups Subjects: Dynamical Systems (math.DS) Title: Regularity of Milne Problem with Geometric Correction in 3D Authors: Yan Guo, Lei Wu Subjects: Analysis of PDEs (math.AP) Title: The symbolic defect of an ideal Subjects: Commutative Algebra (math.AC) Title: Transitive closure and transitive reduction in bidirected graphs Subjects: Combinatorics (math.CO) Title: Effective Capacity in MIMO Channels with Arbitrary Inputs Comments: Accepted for publication at IEEE transaction on vehicular technology Subjects: Information Theory (cs.IT) Subjects: Analysis of PDEs (math.AP) Comments: arXiv admin note: text overlap with arXiv:1508.05646 Subjects: Probability (math.PR) Title: Large deviation for lasso diffusion process Subjects: Probability (math.PR) Title: Expected Depth of Random Walks on Groups Comments: 14 pages Subjects: Group Theory (math.GR); Probability (math.PR) Comments: arXiv admin note: text overlap with arXiv:1007.2915 Subjects: Analysis of PDEs (math.AP) Title: Weakly coupled mean-field game systems Subjects: Analysis of PDEs (math.AP) Title: Simpson's construction of varieties with many local systems Authors: Donu Arapura Comments: This will appear in the proceedings of Zucker's birthday conference Subjects: Algebraic Geometry (math.AG) Comments: 89 pages Subjects: Probability (math.PR) Authors: Quoc P.
Ho Subjects: Algebraic Geometry (math.AG) Title: A note on the reverse mathematics of the sorites Subjects: Logic (math.LO) Comments: 19 pages Subjects: Analysis of PDEs (math.AP) Title: Quantizing Weierstrass Authors: Jack Klys Comments: 30 pages Subjects: Number Theory (math.NT) [71]  arXiv:1610.00227 [pdf, other] [72]  arXiv:1610.00228 [pdf, other] Title: Positivity for convective semi-discretizations Subjects: Numerical Analysis (math.NA) Comments: 19 pages Subjects: Analysis of PDEs (math.AP) Authors: Han Wu Comments: arXiv admin note: text overlap with arXiv:1604.08551 Subjects: Number Theory (math.NT) Title: Regularity of the Eikonal equation with two vanishing entropies Subjects: Analysis of PDEs (math.AP) [76]  arXiv:1610.00238 [pdf, ps, other] Title: The IC-indices of Some Complete Multipartite Graphs Subjects: Combinatorics (math.CO) [77]  arXiv:1610.00239 [pdf, ps, other] Title: Optimal compression of approximate inner products and dimension reduction Comments: 29 pages Subjects: Metric Geometry (math.MG); Combinatorics (math.CO) [78]  arXiv:1610.00240 [pdf, ps, other] Title: Vanishing Viscosity Limit For the 3D Nonhomogeneous Incompressible Navier-Stokes Equations With a Slip Boundary Condition Comments: 14 pages Subjects: Analysis of PDEs (math.AP) [79]  arXiv:1610.00247 [pdf, ps, other] Title: The large $k$-term progression-free sets in $\mathbb{Z}_q^n$ Authors: Hongze Li Subjects: Number Theory (math.NT) [80]  arXiv:1610.00260 [pdf, ps, other] Title: Non-Koszul quadratic Gorenstein toric rings Authors: Kazunori Matsuda Comments: 11 pages, 3 figures, Proposition 1.3(4) is added Subjects: Commutative Algebra (math.AC) [81]  arXiv:1610.00267 [pdf, ps, other] Title: A sufficient condition for global existence of solutions to a generalized derivative nonlinear Schrödinger equation Comments: To appear in Analysis & PDE. We changed the title. Namely, this paper is a revised version of "Global Well-Posedness on a generalized derivative nonlinear Schr\"{o}dinger equation" Journal-ref: Analysis & PDE 10 (2017) 1149-1167 Subjects: Analysis of PDEs (math.AP) [82]  arXiv:1610.00268 [pdf, ps, other] Title: Balayage for Riesz kernels with application to potential theory for the associated Green kernels Comments: 29 pages [83]  arXiv:1610.00276 [pdf, other] Title: Universal measure for Poncelet-type theorems [84]  arXiv:1610.00278 [pdf, ps, other] Title: On the wellposedness of the KdV equation on the space of pseudomeasures Comments: 45 pages. arXiv admin note: text overlap with arXiv:1502.05857 Journal-ref: Sel. Math. New Ser., online, (2017) Subjects: Analysis of PDEs (math.AP) [85]  arXiv:1610.00280 [pdf] Title: The Quadrahelix: A Nearly Perfect Loop of Tetrahedra Comments: 15 pages, 17 figures, additional 7 pages in an Appendix of Mathematica code Revision changes the argument in section 6 using lattice reduction, and adds a reference Subjects: Metric Geometry (math.MG) [86]  arXiv:1610.00282 [pdf, ps, other] Title: The bullet problem with discrete speeds Comments: 12 pages, 3 figures. Streamlined introduction and proofs. Simplified theorem statements. Added applications to ballistic annihilation. Updated references Subjects: Probability (math.PR) [87]  arXiv:1610.00284 [pdf, ps, other] Title: Whittaker supports for representations of reductive groups Comments: v7: minor corrections. Version to appear in Annales de l'institut Fourier. 
33 pages Subjects: Representation Theory (math.RT) [88]  arXiv:1610.00286 [pdf, ps, other] Title: New methods for old spaces: synthetic differential geometry Authors: Anders Kock Comments: Invited contribution to the planned book: New Spaces in Mathematics and Physics - Formal and Philosophical Reflections (ed. M. Anel and G. Cartren), presented at the Workshop at IHP (Paris), September 28 - October 2 2015. Updated Sept. 2017, including a section on Huygens' principle of wave fronts Subjects: Differential Geometry (math.DG) [89]  arXiv:1610.00296 [pdf, other] Title: Comparing the Locking Threshold for Rings and Chains of Oscillators Comments: 9 pages, 4 figures Journal-ref: Phys. Rev. E 94, 062203 (2016) Subjects: Dynamical Systems (math.DS); Adaptation and Self-Organizing Systems (nlin.AO) [90]  arXiv:1610.00297 [pdf, ps, other] Title: Roman domination excellent graphs: trees Comments: 23 pages, 2 figures Subjects: Combinatorics (math.CO) [91]  arXiv:1610.00298 [pdf, other] Title: Khovanskii bases, higher rank valuations and tropical geometry Comments: Extensively revised and many typos and errors corrected. Section on Gr\"obner bases and higher rank tropical geometry moved to the appendix. To appear in SIAM Journal on Applied Algebra and Geometry (SIAGA). 43 pages [92]  arXiv:1610.00299 [pdf, ps, other] Title: Classical and strongly classical 2-absorbing second submodules Comments: This article was accepted for publication in European Journal of Pure and Applied Mathematics. arXiv admin note: substantial text overlap with arXiv:1609.08054 Subjects: Commutative Algebra (math.AC) [93]  arXiv:1610.00306 [pdf, other] Title: A two-phase strategy for control constrained elliptic optimal control problems Authors: Xiaoliang Song, Bo Yu Subjects: Optimization and Control (math.OC) [94]  arXiv:1610.00313 [pdf, ps, other] Title: $X$-torsion and universal groups Comments: 10 pages. This is the first version, comments are welcome Subjects: Group Theory (math.GR); Logic (math.LO) [95]  arXiv:1610.00316 [pdf, ps, other] Title: Uniformly most powerful unbiased test for conditional independence in Gaussian graphical model Comments: 11 pages Subjects: Statistics Theory (math.ST) [96]  arXiv:1610.00317 [pdf, other] Title: Suspension of the Billiard maps in the Lazutkin's coordinate Authors: Jianlu Zhang Comments: 16 pages, 4 figures Subjects: Dynamical Systems (math.DS) [97]  arXiv:1610.00319 [pdf, ps, other] Title: Namba forcing, weak approximation, and guessing Journal-ref: J. symb. log. 83 (2018) 1539-1565 Subjects: Logic (math.LO) [98]  arXiv:1610.00322 [pdf, ps, other] Title: The Pointillist principle for variation operators and jump functions Authors: Kevin Hughes Comments: 9 pages Subjects: Classical Analysis and ODEs (math.CA) [99]  arXiv:1610.00330 [pdf, ps, other] Title: On powers of the Euler class for flat circle bundles Authors: Sam Nariman Comments: Accepted for publication by Journal of Topology and Analysis [100]  arXiv:1610.00341 [pdf, other] Title: Improved bounds on the diameter of lattice polytopes Comments: 14 pages, 1 figure Journal-ref: Acta Math. Hung. 154(2), 457-469 (2018) Subjects: Metric Geometry (math.MG); Combinatorics (math.CO); Optimization and Control (math.OC)
Quantum Many-Body Adiabaticity, Topological Thouless Pump and Driven Impurity in a One-Dimensional Quantum Fluid

Oleg Lychkovskiy (1,2,3) (corresponding author: O.Lychkovskiy@skoltech.ru), Oleksandr Gamayun (4,5,6) and Vadim Cheianov (5)

(1) Skolkovo Institute of Science and Technology, Skolkovo Innovation Center 3, Moscow 143026, Russia.
(2) Steklov Mathematical Institute of Russian Academy of Sciences, Gubkina str. 8, Moscow 119991, Russia.
(3) Russian Quantum Center, Novaya St. 100A, Skolkovo, Moscow Region, 143025, Russia.
(4) Institute for Theoretical Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands.
(5) Instituut-Lorentz, Universiteit Leiden, P.O. Box 9506, 2300 RA Leiden, The Netherlands.
(6) Bogolyubov Institute for Theoretical Physics, 14-b Metrolohichna str., Kyiv 03680, Ukraine.

Abstract. The quantum adiabatic theorem states that a driven system can be kept arbitrarily close to the instantaneous eigenstate of its Hamiltonian if the latter varies in time slowly enough. When it comes to applying the adiabatic theorem in practice, the key question to be answered is how slow slowly enough is. This question can be an intricate one, especially for many-body systems, where the limits of slow driving and large system size may not commute. Recently we have shown how quantum adiabaticity in many-body systems is related to the generalized orthogonality catastrophe [Phys. Rev. Lett. 119, 200401 (2017)]. We have proven a rigorous inequality relating these two phenomena and applied it to establish conditions for the quantized transport in the topological Thouless pump. In the present contribution we (i) review these developments and (ii) apply the inequality to establish the conditions for adiabaticity in a one-dimensional system consisting of a quantum fluid and an impurity particle pulled through the fluid by an external force. The latter analysis is vital for the correct quantitative description of the phenomenon of quasi-Bloch oscillations in a one-dimensional translation-invariant impurity-fluid system.

1 Introduction

Consider a quantum system with a Hamiltonian $H_\lambda$, where $\lambda$ is a time-dependent parameter. For simplicity, we assume a linear dependence of $\lambda$ on time, $\lambda = v t$, where $t$ is time and $v$ is called the driving rate. For each $\lambda$ one defines an instantaneous ground state, $\Phi_\lambda$, which is the lowest-eigenvalue solution to the stationary Schrödinger equation,

$H_\lambda \Phi_\lambda = E_\lambda \Phi_\lambda$.   (1)

Here $E_\lambda$ is the instantaneous ground state energy. We assume that the ground state is non-degenerate for any $\lambda$. (Footnote 1: Our consideration and results are equally applicable to other non-degenerate eigenstates. We restrict the presentation to the most important case of the ground state to simplify notations.)

The dynamics of the system is governed by the Schrödinger equation, which can be written in a convenient rescaled form,

$i v\, \partial_\lambda \Psi_\lambda = H_\lambda \Psi_\lambda$.   (2)

Here $\Psi_\lambda$ is the state vector of the system, which depends on time through the time-dependent parameter $\lambda$. Initially the system is prepared in the instantaneous ground state:

$\Psi_0 = \Phi_0$.   (3)

The evolution is called adiabatic as long as the state of the system, $\Psi_\lambda$, stays close to the instantaneous ground state, $\Phi_\lambda$. The celebrated Quantum Adiabatic Theorem (QAT) [1, 2] states that for an arbitrarily small $\epsilon$ and an arbitrary point in the parameter space, $\lambda_{\rm f}$, there exists a small enough $v$ such that

$1 - \mathcal{F}(\lambda) < \epsilon$ for all $\lambda \in [0, \lambda_{\rm f}]$,   (4)

where the adiabatic fidelity,

$\mathcal{F}(\lambda) = |\langle \Phi_\lambda | \Psi_\lambda \rangle|^2$,   (5)

quantifies how close $\Psi_\lambda$ and $\Phi_\lambda$ are. The QAT as presented above is a typical existence theorem. To make this theorem practical one usually needs to understand how slow slowly enough is.
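Before turning to quantitative bounds, it is instructive to see the theorem at work numerically. The sketch below (Python; a toy two-level avoided crossing with illustrative parameters, not a model from this paper) integrates the rescaled Schrödinger equation (2) across the crossing and evaluates the final adiabatic fidelity for several driving rates:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Delta = 0.1   # half of the minimal gap (assumed)

def H(lam):   # H(lam) = Delta*sigma_x + (lam - 1/2)*sigma_z
    return Delta * sx + (lam - 0.5) * sz

def ground_state(lam):
    return np.linalg.eigh(H(lam))[1][:, 0].copy()  # eigh sorts ascending

def fidelity(v, steps=20000):
    # RK4 integration of i*v*dpsi/dlam = H(lam)*psi from lam = 0 to 1
    psi, lam, dlam = ground_state(0.0), 0.0, 1.0 / steps
    f = lambda l, y: -1j * (H(l) @ y) / v
    for _ in range(steps):
        k1 = f(lam, psi)
        k2 = f(lam + dlam/2, psi + dlam/2 * k1)
        k3 = f(lam + dlam/2, psi + dlam/2 * k2)
        k4 = f(lam + dlam, psi + dlam * k3)
        psi = psi + dlam/6 * (k1 + 2*k2 + 2*k3 + k4)
        lam += dlam
    return abs(np.vdot(ground_state(1.0), psi))**2

for v in [1.0, 0.1, 0.01]:
    print(f"v = {v:5.2f}   final fidelity = {fidelity(v):.4f}")

The fidelity approaches 1 as v decreases, as the QAT guarantees; the many-body question addressed below is how the admissible v scales with the system size.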
In other words, one would like to estimate the maximum allowed $v$ for given $\epsilon$ and $\lambda_{\rm f}$. Upper (lower) bounds on such a threshold value of $v$ are known as necessary (sufficient) adiabatic conditions. A variety of sufficient adiabatic conditions can be constructed as byproducts of the proof of the QAT (see, e.g., [3]). Unfortunately, these conditions typically contain operator norms of the time derivatives of the Hamiltonian. This fact limits their applicability for continuous systems (where these operator norms are often infinite) and for many-body systems (where these operator norms, even when finite, grow very rapidly with the system size). Alternatively, one can be interested in another meaningful question: for a given driving rate $v$, how far in the parameter space can the system evolve whilst maintaining adiabaticity with a given allowance $\epsilon$? The answer to this question, for a given $v$, can be encoded in the adiabatic mean free path, $\lambda_{\rm ad}$. To simplify the notations we will discuss the adiabatic mean free path with respect to the fixed $\epsilon$, in which case it is given, by definition, by the solution of the equation

$1 - \mathcal{F}(\lambda_{\rm ad}) = \epsilon$.   (6)

(Footnote 2: To be more exact, the mean free path is given by the smallest positive solution of Eq. (6).)

From general considerations one expects that for gapless many-body systems $\lambda_{\rm ad}$ vanishes in the thermodynamic limit (TL) with $N \to \infty$, where $N$ is the number of particles, $L$ is the linear size of the system and $d$ is the dimensionality of the system (see, e.g., [4]). (Footnote 3: In what follows we slightly abuse the notations and denote the thermodynamic limit by $N \to \infty$ without mentioning explicitly that $L \to \infty$ with $N/L^d$ kept constant.) The latter statement, however, has been explicitly verified only for a limited number of many-body systems [5, 6, 7]. Moreover, its validity for a driven one-dimensional impurity-fluid system has been recently questioned [8].

In the recent paper [9] we have shown how quantum adiabaticity in many-body systems can be quantitatively related to a phenomenon of a genuinely many-body origin – the orthogonality catastrophe. This relation has been used to establish a necessary condition for quantum adiabaticity and to express the mean free path through the orthogonality catastrophe exponent. These general results have been applied to establish conditions for quantum adiabaticity in the topological Thouless pump and to clarify the effect of the adiabaticity breakdown on the quantization of the charge transport. In the present contribution we (i) review these developments (the next two sections) and (ii) apply the developed general theory to a system consisting of a quantum fluid and an impurity particle pulled through the fluid by a constant external force (the fourth section). In particular, we resolve a dispute on whether quantum adiabaticity can be maintained in the impurity-fluid system in the thermodynamic limit [8, 10].

2 Adiabaticity and Orthogonality Catastrophe

Orthogonality catastrophe is a genuine multiparticle phenomenon by which the ground states of Hamiltonians $H_\lambda$ and $H_{\lambda'}$ can be nearly orthogonal for large system sizes even though $|\lambda - \lambda'|$ is small [11]. To quantify this phenomenon we introduce the orthogonality overlap

$\mathcal{C}(\lambda) = |\langle \Phi_0 | \Phi_\lambda \rangle|^2$   (7)

and say that the orthogonality catastrophe takes place whenever $\mathcal{C}(\lambda) \to 0$ in the thermodynamic limit. The decay of the overlap is characterized by the orthogonality exponent $\Theta_N(\lambda)$, defined through $\mathcal{C}(\lambda) \simeq e^{-\Theta_N(\lambda)}$.
Here $\Theta_N(\lambda)$ is understood as the leading term of $-\ln \mathcal{C}(\lambda)$ in the thermodynamic limit, in the sense that the remainder $-\ln \mathcal{C}(\lambda) - \Theta_N(\lambda)$ grows slower than $\Theta_N(\lambda)$ itself. Note that the scaling of $\Theta_N(\lambda)$ can be very different depending on the nature of the driving and on whether the system is gapless or gapped in the TL.

The central result of Ref. [9] is an inequality which binds the adiabatic fidelity, $\mathcal{F}(\lambda)$, and the orthogonality overlap, $\mathcal{C}(\lambda)$. Here we quote this inequality for an important special form of the Hamiltonian,

$H_\lambda = H_0 + \lambda V$.   (8)

In this case the inequality reads, schematically (up to factors of order one; see Ref. [9] for the precise statement),

$\mathcal{F}(\lambda) \leq \mathcal{C}(\lambda) + \frac{\lambda\, \delta V_N}{v}$,   (9)

where $\delta V_N$ is the quantum uncertainty of the driving term $V$ in the initial ground state,

$\delta V_N = \sqrt{\langle \Phi_0 | V^2 | \Phi_0 \rangle - \langle \Phi_0 | V | \Phi_0 \rangle^2}$.   (10)

The subscript $N$ in $\delta V_N$ emphasizes that this quantity can (and in certain cases does) diverge in the thermodynamic limit. However, for a broad class of systems the orthogonality exponent, $\Theta_N$, diverges faster:

$\delta V_N / \Theta_N(\lambda) \to 0$ for $N \to \infty$.   (11)

This fact has profound implications. Indeed, assume first that the driving rate, $v$, does not scale with the system size. Then there is a timescale on which the right hand side of the inequality (9) is still small and, at the same time, the orthogonality overlap has already vanished. As a consequence, the adiabaticity, which is tied to the orthogonality catastrophe by the inequality (9), inevitably breaks down on this timescale. This allows one to find the adiabatic mean free path, which is controlled by the collapse of the orthogonality overlap [9]:

$\mathcal{C}(\lambda_{\rm ad}) \simeq 1 - \epsilon$.   (12)

The physical scenario behind this adiabaticity breakdown has been qualitatively described in Ref. [12]: in a many-body system $\Psi_\lambda$ departs from the initial state $\Phi_0$ much more slowly than the instantaneous ground state $\Phi_\lambda$ does; as a result, $\mathcal{F}(\lambda) \approx \mathcal{C}(\lambda)$, with $\mathcal{C}(\lambda) \to 0$ in the TL. Ref. [9] establishes quantitative conditions under which this indeed happens.

The inequality (9) implies that whenever the scaling law (11) applies, the only way to avoid adiabaticity breakdown with increasing system size is to scale down the driving rate with the system size. Assume that $v \propto N^{-\eta}$ with some $\eta > 0$. In order for the adiabaticity to be maintained at $\lambda$ with the allowance $\epsilon$, one should require that the r.h.s. of the inequality (9) is greater than $1 - \epsilon$. This leads to the necessary adiabatic condition [9]

$v \lesssim \frac{\lambda\, \delta V_N}{1 - \epsilon - \mathcal{C}(\lambda)}$.   (13)

3 Topological Thouless Pump

The topological Thouless pump [13] is a quantum device which transfers charge in a quantized manner by performing a cycle in the parameter space of its Hamiltonian. In the original paper by Thouless, Ref. [13], the quantization was proved under the assumption of adiabaticity; however, the adiabatic conditions were not discussed. In Ref. [9] we have filled this gap by applying our general theory to the simplest theoretical realization of the Thouless pump – the Rice-Mele model [14]. Two key quantities, the uncertainty $\delta V_N$ and the orthogonality exponent $\Theta_N$, where $N$ is the number of particles in the body of the pump, have been calculated. We have found that the orthogonality exponent indeed diverges faster than $\delta V_N$, thus validating Eq. (11). We have concluded that for any given driving rate (or, equivalently, cycle duration), the adiabaticity breaks down for a sufficiently large pump. Alternatively, in order to have a chance to maintain adiabaticity for larger and larger systems, one needs to decrease the driving rate at least as fast as an appropriate negative power of $N$ [9].

Quite remarkably, the considered model of the pump has an energy gap between the many-body ground state and the first excited state which does not vanish in the thermodynamic limit at any point of the cycle. This fact illustrates a failure of the conventional wisdom which asserts that the maximal driving rate appropriate for maintaining adiabaticity scales with the system size in the same way as the gap. This conventional wisdom is widely used but, in general, wrong. This is highlighted by the present example, where a finite gap fails to protect adiabaticity in the thermodynamic limit.

While the true many-body adiabaticity defined according to Eq.
(4) is sufficient for the quantized transport [13], whether it is necessary had remained an open question. We have addressed this question in Ref. [9], with a surprising result. It appears that, in fact, two distinct modes of operation of the pump should be considered separately. The first mode is a continuous one, when the pump performs one cycle after another, approaching a stationary state. We have verified that the true many-body adiabaticity is mandatory for the quantization of the transferred charge per cycle in this mode. The second mode can be called a transient one: one measures the transferred charge immediately after a single cycle is completed, and then re-initializes the pump in its ground state (such initialization requires some sort of external cooling). In this mode the quantization is present even when the many-body adiabaticity has broken down completely. It seems plausible that in this case a less stringent notion of local adiabaticity [15] can provide an adequate condition for the quantization of the transferred charge.
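As an aside before turning to the impurity problem: the finite-gap property of the pump invoked above is easy to check numerically in a standard two-band Rice-Mele parametrization. The cycle and the parameter values below are illustrative assumptions of mine, not necessarily those used in Ref. [9]:

```python
import numpy as np

# Rice-Mele pump, standard two-band Bloch Hamiltonian (illustrative values):
#   h(k) = (J1 + J2*cos k) sx + (J2*sin k) sy + Delta*sz,
# with J1 = J + delta, J2 = J - delta and the pump cycle
#   delta(t) = d0*cos(2*pi*t),  Delta(t) = D0*sin(2*pi*t),  t in [0, 1).
# The band gap 2*min|E(k)| stays finite everywhere along the cycle.

J, d0, D0 = 1.0, 0.5, 0.5
ks = np.linspace(-np.pi, np.pi, 400, endpoint=False)

gap = np.inf
for t in np.linspace(0, 1, 200, endpoint=False):
    delta, Delta = d0*np.cos(2*np.pi*t), D0*np.sin(2*np.pi*t)
    J1, J2 = J + delta, J - delta
    E = np.sqrt((J1 + J2*np.cos(ks))**2 + (J2*np.sin(ks))**2 + Delta**2)
    gap = min(gap, 2*E.min())

print("minimal gap along the cycle:", gap)   # stays well away from zero
```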
4 Driven Impurity in a One-Dimensional Fluid

In the present section we apply the general theory developed in [9] and reviewed above to a one-dimensional impurity–fluid system. This system can feature a phenomenon of quasi-Bloch oscillations by which an impurity particle pulled with a constant force through a one-dimensional quantum fluid experiences periodic oscillations of its velocity. Note that, in contrast to ordinary Bloch oscillations, there is no external periodic potential here and the system is translation invariant. This intriguing phenomenon has been predicted in [16] and observed experimentally in [17]. While the existence of this phenomenon is beyond any doubt, the quantitative conditions for its emergence are a matter of controversy and debate [8, 10, 18, 19, 20, 21, 22, 23]. A key issue in this debate is whether the many-body adiabaticity can be maintained for a small but finite driving force in the thermodynamic limit [8, 10]. Here we address this issue for a particular impurity–fluid model. The model consists of N fermions and a single impurity particle with a mass equal to the mass of a fermion. The force F is applied to the impurity. The fermions do not interact with each other but couple to the impurity via a repulsive contact potential of strength g, see Eq. (14), in which Ft is the impulse of the force, X and x_i are the coordinates of the impurity and the i'th fermion, respectively, and g is the impurity–fluid coupling. The role of the time-dependent parameter is played by the dimensionless impulse, the impulse of the force measured in units of the Fermi momentum. For a fixed impulse the model (14) is integrable, as shown by McGuire [24]. In fact, this model is one of the simplest models solvable via the Bethe ansatz: its eigenfunctions can be expressed through Slater-like determinants [25, 26]. For this reason it has been possible to obtain a wealth of analytical results and to gain a number of deep insights into the physics of the model [25, 26, 27, 28, 29, 30, 31, 32]. Although this model is a special case of the Yang–Gaudin model [33, 34], it might deserve a separate name – the McGuire model – due to its conceptual importance. Thanks to the integrability, we are able to calculate the orthogonality overlap and the uncertainty δV_N for the model (14) explicitly and to evaluate Eq. (12) and Eq. (13) in the thermodynamic limit. The details of the calculations will be presented elsewhere [32]; here we report the results.

The adiabatic mean free path is given by Eq. (15). One can see that in the thermodynamic limit the adiabatic mean free path vanishes, and thus the adiabaticity breaks down for any finite value of the force. If one allows the force to depend on the system size, one obtains from Eq. (13) the necessary adiabatic condition (16), in which the quantum uncertainty of the impurity momentum enters.

5 Summary

To summarize, we have reviewed the formalism developed in Ref. [9], which relates the adiabaticity in many-body systems to the orthogonality catastrophe, and its application to topological quantized pumping. We have also applied this formalism to a particular one-dimensional impurity–fluid model in which a force applied to the impurity pulls the latter through the fluid. We have found the adiabatic mean free path and established a necessary adiabatic condition for this model. As a corollary, we have proven that the adiabaticity breaks down in the thermodynamic limit for any finite force. In this way we have resolved a controversy of key importance for establishing the conditions for quasi-Bloch oscillations [8, 10].

6 Acknowledgments

The authors are grateful to P. Ostrovsky, S. Kettemann, I. Lerner, G. Shlyapnikov, Y. Gefen, S. Nakajima, M. Schecter and M. Troyer for fruitful discussions and useful comments. OL acknowledges support from the Russian Foundation for Basic Research under Grant No. 16-32-00669. The work of OG was partially supported by Project 1/30-2015 "Dynamics and topological structures in Bose-Einstein condensates of ultracold gases" of the KNU Branch Target Training at the NAS of Ukraine.
I believe that all of Quantum Mechanics should be retrievable from QFT (in 3+1 dimensions) by taking some appropriate limits and/or integrating out degrees of freedom. David Tong shows in his Lecture Notes on QFT how the Schrödinger equation without a potential follows from taking a free field and looking at a superposition of one-particle states (pp. 43-45). How would one derive the Schrödinger equation for a general potential? What is the input on the QFT side that gives you a specific potential? (Either, or preferably both, in the Hamiltonian/canonical approach and/or in the Lagrangian/path-integral approach.)

• 1. More on reduction from QFT to QM: physics.stackexchange.com/q/26960/2451 , physics.stackexchange.com/q/4156/2451 , physics.stackexchange.com/q/208615/2451 and links therein. 2. For a connection between the Schrödinger eq. and the Klein-Gordon eq., see e.g. A. Zee, QFT in a Nutshell, Chap. III.5, and this Phys.SE post plus links therein. – Qmechanic Jun 9 '17 at 15:40

• This answer addresses the core of your question, I think: physics.stackexchange.com/a/142172/154997 – user154997 Jun 9 '17 at 20:19

• @Luc thanks. While this is certainly interesting and related, it does not answer the core of my question. It shows how we can compare quantum mechanical scattering to QFT scattering. This seems like a smart approach when you want to compare QM to QFT, as in QFT we generally focus only on scattering. But I'm more interested in where the rest of QM is hiding out. How do we derive the Schrödinger equation with a potential? Is there a simple procedure, or do we have to introduce a complicated system of fields that happens to give a certain potential, such as the 1/r potential in this case? – Kvothe Jun 12 '17 at 8:36

First I will try to address your specific question about the existence of a generic case. The potential in the Schrödinger equation for one particle models the interaction of that particle with "something else". Therefore, if it is to be the limit of some QFT, the latter must feature at least two fields and their interaction: a field for the particles of the type modelled by the Schrödinger equation, and a field for the "something else", i.e. another type of particle. That interaction is what will give rise to the potential in the non-relativistic Schrödinger equation, after integrating out all the degrees of freedom of the second field. So I think that a reasoning valid for an arbitrary potential does not exist. A big part of the reason is that one has only a limited choice for the QFT Lagrangian, and especially for the interaction term, as it has to be Lorentz-invariant, gauge-invariant, etc.

But now, the obvious question: what about the QED case? This is both the easiest and the most important variant of your question. Important because a lot of predictions in atomic physics rely on this limit. Precisely, we would like to prove that QED degenerates into the following Hamiltonian for $N$ particles of charge $q$, with positions $r_i$, momenta $p_i$ and spins $\sigma_i$:

$$H = \sum_{i=1}^N \frac{1}{2m}\left(p_i - q A_{\perp}(r_i)\right)^2 - \sum_{i=1}^N \frac{q\hbar}{2m}\sigma_i\cdot B(r_i) + \sum_{i\ne j} \frac{q^2}{8\pi\epsilon_0}\frac{1}{\|r_i - r_j\|} + H_\text{Photons}$$

where $A_\perp$ is the transverse vector potential and $B$ is the magnetic field.
There is a very thorough treatment of this question in [1, B$_\text{V}$], a complement to chapter V titled "Justification of the nonrelativistic theory in the Coulomb gauge starting from relativistic quantum electrodynamics". This book is a translation of the original version in French, which is the one I own (and pulled from the card boxes where I had stashed it away!). I may try to summarise the argument in a later edit of this answer but it would be hard to do justice to Cohen-Tannoudji and Dupont-Roc.

[1] Cohen-Tannoudji, Dupont-Roc, Photons and Atoms: Introduction to Quantum Electrodynamics, Wiley, 1989
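To complement the reference above with the simplest possible case: once the second field has been integrated out and leaves behind a static external potential $V(x)$, the one-particle Schrödinger equation follows in a few lines from the second-quantized Hamiltonian. This is my own minimal sketch (in the spirit of Tong's free-field derivation, not taken from [1]):

```latex
% Non-relativistic QFT with an external potential V(x):
H = \int d^3x\; \Psi^\dagger(x)\left(-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right)\Psi(x),
\qquad [\Psi(x),\Psi^\dagger(y)] = \delta^3(x-y).

% One-particle ansatz:
|\psi(t)\rangle = \int d^3x\; \psi(x,t)\,\Psi^\dagger(x)\,|0\rangle .

% Using the commutation relation and H|0> = 0:
H\,|\psi(t)\rangle
 = \int d^3x\,\left[\left(-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right)\psi(x,t)\right]
   \Psi^\dagger(x)\,|0\rangle ,

% so that  i\hbar\, d|\psi\rangle/dt = H|\psi\rangle  is equivalent to
i\hbar\,\partial_t\,\psi(x,t)
 = \left(-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right)\psi(x,t).
```

The "input on the QFT side that gives you a specific potential" is then whatever interaction term, after integrating out the second field, leaves this effective $V(x)$ behind; for QED in the Coulomb gauge it is the instantaneous Coulomb term in the Hamiltonian quoted above.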
Mathematical Aspects of Superspace / Edition 1
Publisher: Springer Netherlands

Over the past five years, through a continually increasing wave of activity in the physics community, supergravity has come to be regarded as one of the most promising ways of unifying gravity with the other particle interactions as a finite gauge theory to explain the spectrum of elementary particles. Concurrently, important mathematical work on the arena of supergravity has taken place, starting with Kostant's theory of graded manifolds and continuing with Batchelor's work linking this with the superspace formalism. There remains, however, a gap between the mathematical and physical approaches, expressed by such unanswered questions as: does there exist a superspace having all the properties that physicists require of it? Does it make sense to perform path integrals in such a space? It is hoped that these proceedings will begin a dialogue between mathematicians and physicists on such questions as the problem of renormalisation in supergravity. The contributors to the proceedings consist both of mathematicians and relativists, who bring their experience in differential geometry, classical gravitation and algebra, and of quantum field theorists specialized in supersymmetry and supergravity. One of the most important problems associated with supersymmetry is its relationship to the elementary particle spectrum.

Product Details ISBN-13: 9789027718051 Publisher: Springer Netherlands Publication date: 07/31/1984 Series: Nato Science Series C, #132 Edition description: 1984 Pages: 214

Table of Contents Non-linear Realization of Supersymmetry.- 1. Introduction.- 2. The Akulov-Volkov field.- 3. Superfields.- 4. Standard fields.- 5. N > 1/N = 1.- 6. N = 1 supergravity.- References.- Fields, Fibre Bundles and Gauge Groups.- 1. Manifolds.- 2. Fibre bundles.- 2.1 Fields.- 2.2 Coordinate bundles.- 2.3 Fibre bundles.- 2.4 Examples.- 2.5 Fields and geometry.- 2.6 Principal bundles.- 2.7 Cross-sections.- 2.8 Bundles with structure: sheaves.- 2.9 Associated bundles.- 2.10 Connections.- 2.11 Examples.- 3. Gauge Groups.- 3.1 Proposition: Gauge transformations.- 3.2 Gauge action on associate bundles.- 3.3 Quasi-gauge groups.- 3.4 Gauge algebras.- 3.5 Gauge-invariance.- 3.6 Gauge theory.- 4. Space-Time.- 4.1 Spinors.- 4.2 Soldering forms.- 4.3 Achtbeine.- 4.4 Example: Lie derivatives.- 4.5 Supersymmetries.- Path Integration on Manifolds.- 1. Introduction.- 2. Gaussian measures, cylinder set measures, and the Feynman-Kac formula.- 2.1 Basic difficulties; terminology.- 2.2 Gaussian measures.- 2.3 Cylinder set measures.- 2.4 Radonification.- 2.5 Feynman-Kac formula.- 2.6 Time slicing.- 3. Feynman path integrals.- 3.1 Oscillatory integrals and Fresnel integrals.- 3.2 Feynman maps.- 3.3 Feynman path integrals and the Schrödinger equation.- 4. Path integration on Riemannian manifolds.- 4.1 Wiener measure and rolling without slipping.- 4.2 The Pauli-Van-Vleck-De Witt propagator.- 5. Gauge invariant equations; diffusion and differential forms.- 5.1 Quantum particle in a classical magnetic field.- 5.2 Heat equation for differential forms.- Acknowledgements, References.- Graded Manifolds and Supermanifolds.- Preface and cautionary note.- 0. Standard notation.- 1. The category GM.- 1.1 Definitions and examples of graded manifolds.- 1.2 Bundles in GM.- 2. The geometric approach.- 2.1 The general idea.- 2.2 The graded commutative algebra B and supereuclidean space.- 2.3 Smooth maps on Er,s.- 2.4 Examples of supermanifolds.- 2.5 Bundles over supermanifolds.- 3. Comparisons.- 3.1 Comparing GM and SSM.- 3.2 Comparison of geometric manifolds.- 3.3 A direct method of comparing GM and G?.- 4. Lie supergroups.- 4.1 Lie supergroups in the geometric categories.- 4.2 Graded Lie groups.- Table: "All I know about supermanifolds".- References.- Aspects of the Geometrical Approach to Supermanifolds.- 1. Abstract.- 2. Building superspace over an arbitrary spacetime.- 3. Super Lie groups.- 4. Compact supermanifolds with non-Abelian fundamental group.- 5. Developments and applications.- References.- Integration on Supermanifolds.- 1. Introduction.- 2. Standard integration theory.- 3. Integration over odd variables.- 4. Superforms.- 5. Integration on Er,s.- 6. Integration on supermanifolds.- References.- Remarks on Batchelor's Theorem.- Classical Supergravity.- 1. Definition of classical supergravity.- 2. Dynamical analysis of classical field theories.- 3. Formal dynamical analysis of classical supergravity.- 4. The exterior algebra formulation of classical supergravity.- 5. Does classical supergravity make sense?.- Appendix: Notations and conventions.- References.- List of participants.
Johan Wärnegård: Numerical Methods for the Gross-Pitaevskii Equation

Time: Thursday, 16 May 2019, 14:15-15:00
Lecturer: Johan Wärnegård, KTH
Location: Room F11, Lindstedtsvägen 22, floor 2, F building, KTH Campus.

The Gross-Pitaevskii equation is a nonlinear Schrödinger equation with applications in several fields, such as optics, fluid dynamics (in particular deep water waves) and quantum physics. When selecting a suitable discretization for this problem, one needs to be aware of its rich geometric properties. In this talk we compare various mass-conservative time integrators for the Gross-Pitaevskii equation in physically relevant setups. The comparison contains methods that are purely mass-conservative, methods that are additionally symplectic, and methods that preserve the energy exactly. A walkthrough of these properties will be given. Quite notably, the differences between symplectic and energy-conservative discretizations turn out to be stronger than expected when the regularity of the solution is low.
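For readers who want to experiment with the conservation properties mentioned in the abstract, here is a minimal split-step (Strang splitting) integrator for the 1D Gross-Pitaevskii equation. It conserves mass to machine precision by construction; the trap, coupling and step sizes are illustrative assumptions of mine, and this is not necessarily one of the integrators compared in the talk:

```python
import numpy as np

# Strang splitting for  i u_t = -u_xx + V(x) u + g |u|^2 u  on a periodic grid.
# Kinetic half-steps are exact in Fourier space; the potential/nonlinear step
# is an exact phase rotation (|u| is pointwise constant there), so the mass
# (L2 norm) is conserved exactly at every step.

L, n, dt = 20.0, 512, 1e-3
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = L/n
k = 2*np.pi*np.fft.fftfreq(n, d=dx)
V = 0.5*x**2                  # harmonic trap (assumed)
g = 1.0                       # interaction strength (assumed)

u = np.exp(-x**2).astype(complex)
u /= np.sqrt(np.sum(abs(u)**2)*dx)

def step(u):
    u = np.fft.ifft(np.exp(-1j*k**2*dt/2)*np.fft.fft(u))   # kinetic half-step
    u = np.exp(-1j*(V + g*abs(u)**2)*dt)*u                 # potential + nonlinearity
    return np.fft.ifft(np.exp(-1j*k**2*dt/2)*np.fft.fft(u))

for _ in range(1000):
    u = step(u)
print("mass after 1000 steps:", np.sum(abs(u)**2)*dx)      # still 1.0000...
```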
Yet another attack of Motl against realist interpretations

Lubos Motl has again attacked realist interpretations in general and de Broglie-Bohm theory in particular. A large part is simply name-calling:

Quote: I find it impossible to consider these people intelligent. ... Anti-quantum zealots ... Bohms and similar stinky Bolsheviks ... And all the people persistently (in 2018) trying to negate the basic rules of the game as articulated in Copenhagen are morons – sadly, in most cases, pompous morons.

Fortunately, there are also some arguments. Let's take a look at them. The first one uses the uncertainty relations.

Quote: Just try to imagine that you want to explain the general uncertainty principle in a "realist" i.e. fundamentally classical theory. So an anti-quantum zealot will probably admit that the operators are "useful" in some way but they're just a caricature of some "deeper", classical theory. Now you must ask: Can such a hypothetical classical theory have a justification for the uncertainty principle? A reason that implies that if A is accurately measurable in the prepared state, B must be less accurate, and vice versa? And can it get the right bound for any choice of A,B? In a "realist" theory, if the outcome of the measurement of A and B that you're going to get is knowable in advance, there's just no reason why there should be some unavoidable uncertainty.

First of all, Motl confuses "realist" with "deterministic". Most realist interpretations are stochastic theories and thus have an inherent uncertainty too. Then, even in a deterministic theory probabilities can appear. I would suggest he study thermodynamics. Quite classical thermodynamics, with deterministic classical Hamiltonian mechanics as the starting point. He could easily learn that even if the fundamental theory is deterministic, what we can construct and measure will be uncertain. In fact, the situation in dBB theory is quite similar. In thermodynamics, Boltzmann derived his H-theorem and proved in this way that the states in thermodynamics move toward equilibrium. For dBB theory, Valentini has proven, using essentially the same techniques, his own subquantum H-theorem: a state which is more localised than a state in quantum equilibrium, and which evolves following the equations of dBB theory, approximates quantum equilibrium in quite a short time. Valentini and Westman have made numerical computations to see how fast this happens, and it happens sufficiently fast: Valentini, A., Westman, H. (2005). Dynamical Origin of Quantum Probabilities, Proc. Roy. Soc. Lond. A 461, 253-272, arxiv:quant-ph/0403034

Quote: In that world, graduate students don't have any fundamental obstacle that prevents them from approaching omniscient God increasingly closely

Wrong. In the dBB world, they are able to prepare states only in quantum equilibrium, because whatever they create approaches quantum equilibrium fast enough. In fact, the preparation procedures available to those graduate students fix only the wave function, not the configuration of the quantum system. The next argument is the requirement that realist interpretations should somehow explain the uncertainty relation:

Quote: To sensibly claim that you have an alternative, you would actually have to offer a quantitative scheme that predicts the right lower bound for any choice of A,B.
Even if the uncertainty is just an artifact of the apparatuses' imperfection, these apparatuses are still governed by the laws of physics and the laws of physics must have some explanation why their minimum uncertainty always seems to be what the uncertainty principle claims, right? This is an obvious yet huge task that none of the Bohms and similar stinky Bolsheviks has even attempted to solve.

Because the way to do this is well known and simple: combine the equivalence proof of dBB theory in quantum equilibrium with quantum theory, and then use the derivation of the uncertainty relations from any quantum theory textbook. In fact, Motl knows this, and argues in quite a strange way that this is somehow not good enough:

Quote: I think that all of them know that only the proper apparatus of quantum mechanics – in which the observables really are linear operators, and the calculable predictions really are subjective probabilities of outcomes – can achieve this triumph. The goal of the Bohmian, many-world, and similar theories is just to fake quantum mechanics – to "embed" quantum mechanics in some "realist" framework and claim that it's the better one.

So, Motl, as the defender of True Quantum Mechanics, starts to fight against some fake quantum mechanics. The fake quantum mechanics has, surprisingly, the same equations and makes the same experimental predictions, but it is a horrible fake!

Quote: But there's a problem with "faking". The things you're proposing are still "fake". If the comrades try to fake the capitalist economy but they impose all the Bolshevik constraints such as egalitarianism, they still get just the communist economy which totally sucks. The magic of capitalism and its prosperity strictly contradicts the communist axioms such as egalitarianism. You simply can't fake the capitalist economy within communism – and you can't fake quantum mechanics within a "realist" theory. Your "realist" theory doesn't fundamentally associate the observable quantities with linear operators.

Maybe Motl cannot, but Bohm did it in 1952: Bohm, D. (1952). A suggested interpretation of the quantum theory in terms of "hidden" variables, Phys. Rev. 85(2), 166-193. Hm, maybe this is not "fundamental" enough? He has proven the equivalence only mathematically, and this is not sufficient? I don't know. I'm quite satisfied with a mathematical proof of equivalence, and once I have it, I feel free to apply the whole mathematical apparatus of quantum theory as if it were part of dBB theory too. This is the nice point of mathematics: you don't have to care whether something is also "fundamental". Once you prove that the equations of dBB theory give, in quantum equilibrium, the equations of quantum theory together with the Born interpretation, everything is fine.

Quote: There's one cute, almost equivalent, way to kill the "realist" theories. And it's the universality of \(\hbar\).

Oops. \(\hbar\) is a universal constant in the equations of dBB theory as well as of quantum theory. What could be the problem here?

Quote: This universality of Planck's constant is also totally incompatible with any realist theory simply because realist theories don't have and can't have any universal constant whose units are those of \(\hbar\). There's just no room for such a constant in classical or "realist" physics! The classical Hamiltonian dynamics is fully given by the Hamiltonian H whose units are just joules, but H isn't a universal constant and the scaling of H doesn't affect the evolution equation at all, anyway.
All other universal constants in classical physics are various coefficients defining various terms in H etc. and those apply differently to different degrees of freedom – they are not universal. For example, if some students try to determine the energy-to-frequency ratio, \(\hbar\) from \(E=\hbar\omega\), "realist" theories predict that they must get different values of \(\hbar\) from different particle species etc.

Sorry, I don't get the point. OK, classical Hamiltonian mechanics does not have a constant \(\hbar\). So what? The equations of dBB theory are not the equations of classical Hamiltonian mechanics, but of dBB theory. And they do contain a constant \(\hbar\). As in quantum theory, \(\hbar\) is a constant fixed from the start in the equations of the theory.

Quote: So if many groups of graduate students try to extract a constant with units of \(\hbar\) from their experiment, it's basically guaranteed that each group will have a different answer for \(\hbar\): "realist" theories predict that nonzero quantities with the units of \(\hbar\) simply cannot be universal constants of Nature! This is perfectly falsified by Nature where \(\hbar\) may be extracted from infinitely many different experiments (with particles or fields or strings or branes of any kinds, or any combinations of those) and it always has the same value, despite the high precision of the modern experiments.

Is there anybody able to explain to me what the difference is, in this question, between dBB theory and quantum theory? In both theories, \(\hbar\) is the constant used in the Schrödinger equation, which is, BTW, used in both theories in the same way.

Quote: "Realist" failure to get quantized measured values. The uncertainty principle is just one famous, and almost defining, consequence of the basic rules of quantum mechanics. But there are many others. Such as the quantized spectrum of the energy. ... Can a "realist" theory actually predict the discrete energy spectrum of atoms?

Of course it can, as Bohm showed in 1952, by proving that the realist dBB theory gives in quantum equilibrium the same predictions as quantum theory. After this, it remains to apply the proofs used in quantum theory. Just to inform Motl: physical theories are not bound by copyright restrictions; if a proof has been used in quantum theory, and a proponent of dBB theory likes the result, he is free to apply the proof in dBB theory too.

Quote: You may embed the mathematics of the wave functions in your "realist" theory. But the interpretation of the wave function will be wrong – the wave function will be misinterpreted as a classical wave – and this misinterpretation has far-reaching consequences.

Wow. I have to admit that this sounds interesting. I'm used to hearing that considering other interpretations than the established ones is evil, because interpretations are anyway only metaphysics, without any consequences. Now it appears that this is wrong, and using a bad interpretation even has far-reaching consequences.

Quote: ... every observable that you can measure will have a continuous spectrum. The reason is utterly simple. If you interpret the wave function as a classical wave, your phase space S is a connected, infinite-dimensional continuum. It's as continuous as you can get. If your apparatus ends up measuring the energy of a photon, Eγ, you know that a priori, all positive values of the energy of a general photon must be allowed.
If the transformation mapping the initial state to the final state is continuous in any way, it's obvious that you may perturb the desired final value of Eγ, run the evolution backwards, and find an appropriately perturbed initial state that leads to this non-quantized value of the photon's energy.

I look at some source of light through a spectrometer. I see spectral lines. Of course, the spectrometer has a continuous spectrum of possible results. Somehow only a few lines are used in this particular source. There is some finite accuracy of the device, so the spectral line I see has some thickness. But it is, for all practical purposes, nonetheless a discrete spectrum. There are large parts of the spectrum where it shows nothing. What does dBB theory predict for this? The measurement process is described in dBB theory as an interaction with a macroscopic measurement device. What is measured is the position of the measurement device. Once the measurement device is big enough, one can easily see that the quantum potential of this device is negligible, so that it follows an essentially classical trajectory, which is easily observable. Now, the wave function of this measurement device will nonetheless be nonzero almost everywhere. So Motl is, of course, free to use the quantum method of catching a lion: put a cage into the desert; the probability that the lion, via quantum tunneling, appears in the cage is non-zero, and the quantum and dBB predictions agree about this.

Wait. So, assume Motl's waiting was successful and he has observed a pointer position telling him that the energy was some value which is completely off, in conflict with the quantum prediction. What do we find if we follow the Bohmian trajectory backwards? The first problem is that what has happened was an interaction between the measurement device and the quantum system. What we have observed is, instead, only the position of the measurement device. The final position of the quantum system is unknown. All we know is the wave function. But let's ignore this and assume that we know also the position of the quantum system. What does this give? It gives the two initial positions. And we will see that the initial wave function for the same measurement process (which is the same as for quantum theory) also gives these initial positions a nonzero probability. What was Motl's error here? The whole process leads to a non-quantized value of the position of the measurement device, but not of the photon's energy. These are different things.

Quote: Bohm's theory can't be constructed for relativistic particles such as photons (Quantum Electrodynamics) but a Bohmist would surely say that they explain the measured quantized energy because the pilot wave gets reduced to several beams and the real Bohmian particle is in one of them. Great but this sleight-of-hand won't work if you measure other observables that aren't reduced to positions – such as the voltages in our brain which is how we actually perceive things at the end.
The claim that Bohm's theory is unable to handle QED is simply wrong; the first proposal for handling QED was already given in Bohm's original paper. A relativistic field theory in itself is not a problem at all: for a scalar field, the basic standard field-theoretic formulas are already sufficient. Gauge fields and fermions are more problematic, but those problems have nothing to do with relativity. The key difference between the only approach known to Motl, with the particle positions as the configuration, and the approach of dBB field theory is that in field theory the configuration space is the space of field configurations. And this answers also the other objection, the idea that not all measurements can be reduced to measurements of the configuration (which only in non-relativistic many-particle theory is defined by particle positions). The voltages in our brain are classical field configurations of the EM field, thus described by EM fields \(F_{\mu\nu}(x)\). In a field-theoretic version of dBB electrodynamics, the configuration may be defined either by the potentials \(A_{\mu}(x)\) or by the fields \(F_{\mu\nu}(x)\), depending on how the gauge freedom is handled. But in both cases, the fields \(F_{\mu\nu}(x)\) are uniquely defined by the trajectory of the configuration, which is the trajectory of the field configuration. Photons are irrelevant: they have the same status as phonons in condensed matter theory. They are pseudo-particles with no fundamental relevance, and no measurement has to be reduced to a measurement of phonon positions. So, photon positions become similarly irrelevant in dBB field theory.

So much for the arguments of the first part. There is yet another argument, which I will handle separately, because it is based on Bayesian reasoning, and I favor a Bayesian interpretation of the wave function too. So, at least in this part, I'm in some sense on Motl's side. But only in some weak, general sense. Unfortunately, that argument is wrong too. But about this later. To end this posting, some fun provided for free by Motl:

Quote: Now, this is an extremely general empirical fact – i.e. a fact that you may experimentally check in millions of different situations involving thousands of different physical systems, particles of all types, with or without spins, fields, strings, branes, whatever you like.

Emphasis mine. ;)

Let's consider now the "final hit" against realist interpretations:

Quote: If you make a measurement of the observable \(L\) and the wave function collapses to an eigenstate \(\ket\psi\) of that observable, all the parts of the wave function that "existed" (outcomes that were possible) before the measurement completely disappear and they have exactly zero impact on anything after the measurement of \(L\). The erased parts of the collapsed wave functions are totally eradicated, totally forgotten. You may do anything to your experiment, try ingenious methods to persuade your clever apparatus to "remember" or "recall" the erased parts of the wave function. But your clever apparatuses will not be able to say anything about the number a that defined the state before the collapse. If you think about it for a second, this trivial fact totally contradicts any natural (not fine-tuned) "realist" theory. Take Bohmian mechanics as an example. In that "realist" theory, there's the objective pilot wave, a classical field/wave whose numerical values are chosen to fake the quantum mechanical wave function, and then there are the objective "real" positions of the particles. Now, the pilot wave guides the "real" particle somewhere, and you may measure the real particle and say something about the spin – Bohmian mechanics doesn't allow the spin directly so the spin measurement has to be reduced to some measurement of the position. So the Bohmian particle is known to be at the place corresponding to "up".
However, the pilot wave still exists in the region that would correspond to "down", too. The point is that this "wrong part of the pilot wave" hasn't been cleaned or forgotten. This pilot wave is coupled to other degrees of freedom in the physical system so in principle, it should be observable. However, experiments clearly say that whatever you do, you just can't observe this "wrong part of the wave function". To avoid the contradiction with the basic empirical facts, the Bohmian mechanics really needs some "janitors" that remove the zombie parts of the pilot wave at places where the particle wasn't seen.

Let's start with the clarification that this argument is not directed against realist interpretations in general, but only against a particular variant of realist interpretations, namely those which give the wave function an ontological status. There may be realist interpretations of quantum theory which give the wave function an epistemic interpretation. What makes them realist interpretations is that the wave function describes incomplete knowledge about some well-defined reality. So, \(|\psi(q)|^2\) describes the probability that the configuration of the system is \(q\). For realist interpretations which interpret the wave function epistemically, as knowledge about some reality, the objection does not work at all. I think there are good arguments in favor of such an epistemic interpretation of the wave function, which is what I use in my "minimal realist interpretation" (http://ilja-schmelzer.de/quantum/). But the argument proposed here by Motl is not of this type. Because in the dBB interpretation one has to distinguish two variants of the wave function: on the one hand, the wave function of the universe, which, according to the dBB interpretation, somehow objectively exists, but remains unknown to us; and on the other hand, the effective wave function of a quantum subsystem of the whole universe. And there is a simple formula which connects them: \[ \psi_{effective}(q_{system},t) = \psi_{universe}(q_{system}, q_{rest-of-the-universe}(t),t)\] Or similarly for the relevant part of the universe, where the rest of the universe is the macroscopic measurement device, which has a known, visible trajectory \(q_{rest-of-the-universe}(t)\). And this effective wave function collapses.

Now, the dBB wave function of the universe does not collapse. So, it is, indeed, a superposition of a dead cat and a living cat in Schrödinger's experiment. But so what? What we see is the configuration \(q(t)\), not the wave function \(\psi(q,t)\). This trajectory \(q(t)\) describes a particular configuration of the cat, which is either alive or dead, but not in some superposition. Once the cat lives, does it matter that the wave function is still nonzero in imaginable worlds where the cat is dead? No. Once the measurement result is visible in a macroscopic measurement device, decoherence tells us that the other parts of the wave function, which correspond to other, non-observed measurement results, can no longer influence our state.
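To make this last point about decoherence concrete, here is a toy estimate (my own illustration; the overlap value is an arbitrary assumption): couple the two branches of the wave function to n environment degrees of freedom, and the interference term gets multiplied by the overlap of the environment states, which dies exponentially in n.

```python
# Decoherence toy estimate: each of n environment qubits ends up in |e0> or
# |e1> depending on the branch, with single-qubit overlap c = <e0|e1>.
# The off-diagonal (interference) term is suppressed by c**n.
c = 0.9                      # assumed single-qubit overlap
for n in [1, 10, 100, 1000]:
    print(n, c**n)           # 0.9, 0.35, 2.7e-5, ~2e-46: interference is gone
```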
Path Integrals in Quantum Mechanics

Michael Fowler, UVa

Huygens' Picture of Wave Propagation

If a point source of light is switched on, the wavefront is an expanding sphere centered at the source. Huygens suggested that this could be understood if at any instant in time each point on the wavefront was regarded as a source of secondary wavelets, and the new wavefront a moment later was to be regarded as built up from the sum of these wavelets. For a light shining continuously, this process just keeps repeating.

What use is this idea? For one thing, it explains refraction, the change in direction of a wavefront on entering a different medium, such as a ray of light going from air into glass. If the light moves more slowly in the glass, velocity $v$ instead of $c$, with $v<c$, then Huygens' picture explains Snell's Law, that the ratio of the sines of the angles to the normal of incident and transmitted beams is constant, and in fact is the ratio $c/v$. This is evident from the diagram below: in the time the wavelet centered at A has propagated to C, that from B has reached D, the ratio of lengths $AC/BD$ being $c/v$. But the angles in Snell's Law are in fact the angles ABC, BCD, and those right-angled triangles have a common hypotenuse BC, from which the Law follows.

Fermat's Principle of Least Time

We will now temporarily forget about the wave nature of light, and consider a narrow ray or beam of light shining from point A to point B, where we suppose A to be in air, B in glass. Fermat showed that the path of such a beam is given by the Principle of Least Time: a ray of light going from A to B by any other path would take longer. How can we see that? It's obvious that any deviation from a straight line path in air or in the glass is going to add to the time taken, but what about moving slightly the point at which the beam enters the glass? Where the air meets the glass, the two rays, separated by a small distance $CD = d$ along that interface, will look parallel. (Feynman gives a nice illustration: a lifeguard on a beach spots a swimmer in trouble some distance away, in a diagonal direction. He can run three times faster than he can swim. What is the quickest path to the swimmer?) Moving the point of entry up a small distance $d$, the light has to travel an extra $d\sin\theta_1$ in air, but a distance less by $d\sin\theta_2$ in the glass, giving an extra travel time $\Delta t = d\sin\theta_1/c - d\sin\theta_2/v$. For the classical path, Snell's Law gives $\sin\theta_1/\sin\theta_2 = n = c/v$, so $\Delta t = 0$ to first order. But if we look at a series of possible paths, each a small distance $d$ away from the next at the point of crossing from air into glass, $\Delta t$ becomes of order $d/c$ away from the classical path. Suppose now we imagine that the light actually travels along all these paths with about equal amplitude. What will be the total contribution of all the paths at B? Since the times along the paths are different, the signals along the different paths will arrive at B with different phases, and to get the total wave amplitude we must add a series of unit 2D vectors, one from each path. (Representing the amplitude and phase of the wave by a complex number for convenience; for a real wave, we can take the real part at the end.)
When we map out these unit 2D vectors, we find that in the neighborhood of the classical path the phase varies little, but as we go away from it the phase spirals more and more rapidly, so those paths interfere amongst themselves destructively. To formulate this a little more precisely, let us assume that some close-by path has a phase difference $\varphi$ from the least-time path, and goes from air to glass a distance $x$ away from the least-time path: then for these close-by paths, $\varphi = ax^2$, where $a$ depends on the geometric arrangement and the wavelength. From this, the sum over the close-by paths is an integral of the form $\int e^{iax^2}\,dx$. (We are assuming the wavelength of light is far less than the size of the equipment.) This is a standard integral: its value is $\sqrt{\pi/ia}$, and all its weight is concentrated in a central area of width $1/\sqrt{a}$, exactly as for the real function $e^{-ax^2}$. This is the explanation of Fermat's Principle: only near the path of least time do paths stay approximately in phase with each other and add constructively. So this classical path rule has an underlying wave-phase explanation. In fact, the central role of phase in this analysis is sometimes emphasized by saying the light beam follows the path of stationary phase. Of course, we're not summing over all paths here: we assume that the path in air from the source to the point of entry into the glass is a straight line, clearly the subpath of stationary phase.

Classical Mechanics: The Principle of Least Action

Confining our attention for the moment to the mechanics of a single nonrelativistic particle in a potential, with Lagrangian $L = T - V$, the action $S$ is defined by $$S = \int_{t_1}^{t_2} L(x,\dot{x})\,dt.$$ Newton's Laws of Motion can be shown to be equivalent to the statement that a particle moving in the potential from A at $t_1$ to B at $t_2$ travels along the path that minimizes the action. This is called the Principle of Least Action: for example, the parabolic path followed by a ball thrown through the air minimizes the integral along the path of the action $T - V$, where $T$ is the ball's kinetic energy, $V$ its gravitational potential energy (neglecting air resistance, of course). Note here that the initial and final times are fixed, so since we'll be summing over paths with different lengths, necessarily the particle's speed will be different along the different paths. In other words, it will have different energies along the different paths.

With the advent of quantum mechanics, and the realization that any particle, including a thrown ball, has wavelike properties, the rather mysterious Principle of Least Action looks a lot like Fermat's Principle of Least Time. Recall that Fermat's Principle works because the total phase along a path is the integrated time elapsed along the path, and for a path where that integral is stationary for small path variations, neighboring paths add constructively, and no other sets of paths do. If the Principle of Least Action has a similar explanation, then the wave amplitude for a particle going along a path from A to B must have a phase equal to some constant times the action along that path. If this is the case, then the observed path followed will be just that of least action, or, more generally, of stationary action, for only near that path will the amplitudes add constructively, just as in Fermat's analysis of light rays.
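The stationary-phase claim just made is easy to verify numerically: the full oscillatory integral agrees with $\sqrt{\pi/ia}$, and almost all of it is already collected within a few widths $1/\sqrt{a}$ of the stationary point. A quick sketch (the value of $a$, the grid and the cutoffs are arbitrary choices of mine):

```python
import numpy as np

# Check: integral of exp(i a x^2) dx equals sqrt(pi/(i a)), and the weight
# is concentrated in the central region |x| < ~1/sqrt(a).
a = 50.0
x = np.linspace(-20, 20, 400001)          # fine grid for the fast oscillations
dx = x[1] - x[0]
f = np.exp(1j*a*x**2)

print(np.sum(f)*dx)                        # full integral (numerical)
print(np.sqrt(np.pi/(1j*a)))               # closed form: sqrt(pi/(i a))

central = np.abs(x) < 3/np.sqrt(a)         # a few central widths only
print(np.sum(f[central])*dx)               # already close to the full value
```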
Going from Classical Mechanics to Quantum Mechanics

Of course, if we write a phase factor for a path $e^{icS}$, where $S$ is the action for the path and $c$ is some constant, $c$ must necessarily have the dimensions of inverse action. Fortunately, there is a natural candidate for the constant $c$. The wave nature of matter arises from quantum mechanics, and the fundamental constant of quantum mechanics, Planck's constant, is in fact a unit of action. (Recall action has the same dimensions as $Et$, and therefore the same as $px$, manifestly the same as angular momentum.) It turns out that the appropriate path phase factor is $e^{iS/\hbar}$.

That the phase factor is $e^{iS/\hbar}$, rather than $e^{iS/h}$, say, can be established by considering the double slit experiment for electrons (Peskin page 277). This is analogous to the light waves going from a source in air to a point in glass, except now we have vacuum throughout (electrons don't get far in glass), and we close down all but two of the paths. Suppose electrons from the top slit, Path I, go a distance $D$ to the detector; those from the bottom slit, Path II, go $D+d$, with $d\ll D$. Then if the electrons have wavelength $\lambda$ we know the phase difference at the detector is $2\pi d/\lambda$. To see this from our formula for summing over paths, on Path I the action $S = Et = \tfrac{1}{2}mv_1^2 t$, and $v_1 = D/t$, so $S_1 = \tfrac{1}{2}mD^2/t$. For Path II, we must take $v_2 = (D+d)/t$. Keeping only terms of leading order in $d/D$, the action difference between the two paths is $S_2 - S_1 = mDd/t$, so the phase difference is $$\frac{S_2 - S_1}{\hbar} = \frac{mvd}{\hbar} = \frac{2\pi pd}{h} = \frac{2\pi d}{\lambda}.$$ This is the known correct result, and it fixes the constant multiplying the action in the path phase to be $1/\hbar$.

In quantum mechanics, such as the motion of an electron in an atom, we know that the particle does not follow a well-defined path, in contrast to classical mechanics. Where does the crossover to a well-defined path take place? Taking the simplest possible case of a free particle (no potential) of mass $m$ moving at speed $v$, the action along a straight line path taking time $t$ from A to B is $\tfrac{1}{2}mv^2 t$. If this action is of order Planck's constant $h$, then the phase factor will not oscillate violently on moving to different paths, and a range of paths will contribute. In other words, quantum rather than classical behavior dominates when $\tfrac{1}{2}mv^2 t$ is of order $h$. But $vt$ is the path length $L$, and $h/mv$ is the wavelength $\lambda$, so we conclude that we must use quantum mechanics when the wavelength $h/p$ is significant compared with the path length. Interference sets in when the difference in path actions is of order $h$, so in the atomic regime many paths must be included.

Feynman (in Feynman and Hibbs) gives a nice picture to help think about summing over paths. He begins with the double slit experiment for an electron. We suppose the electron is emitted from some source A on the left, and we look for it at a point B on a screen to the right. In the middle is a thin opaque barrier with the familiar two slits. Evidently, to find the amplitude for the electron to reach B we sum over two paths. Now suppose we add another two-slit barrier. We have to sum over four paths. Now add another. Next, replace the two slits in each barrier by several slits. We must sum over a multitude of paths! Finally, increase the number of barriers to some large number N, and at the same time increase the number of slits to the point that there are no barriers left.
We are left with a sum over all possible paths through space from A to B, multiplying each path by the appropriate action phase factor. This is reminiscent of the original wave propagation picture of Huygens: if one pictures it at successive time intervals of a picosecond, say, from each point on the wavefront waves go out 0.3 mm in all directions, then in the next time interval each of those sprouts more waves in all directions. One could write this as a sum over all zigzag paths with random 0.3 mm steps. In fact, the sum over paths is even more daunting than Feynman's picture suggests. All the paths going through these many-slitted barriers are progressing in a forward direction, from A towards B. Actually, if we're summing over all paths, we should be including the possibility of paths zigzagging backwards and forwards as well, eventually arriving at B. We shall soon see how to deal systematically with all possible paths.

Review: Standard Definition of the Free Electron Propagator

As a warm-up exercise, consider an electron confined to one dimension, with no potential present, moving from $x'$ at time 0 to $x$ at time $T$. We'll follow Feynman in using $T$ for the final time, so we can keep $t$ for the continuous (albeit sometimes discretized) time variable over the interval 0 to $T$. (As explained previously, when we write that the electron is initially at $x'$, we mean its wave function is a normalizable state, such as a very narrow Gaussian, centered at $x'$. The propagator then represents the probability amplitude, that is, the wave function, at point $x$ after the given time $T$.) The propagator is given by $$|\psi(t=T)\rangle = U(T)\,|\psi(t=0)\rangle,$$ or, in Schrödinger wave function notation, $$\psi(x,T) = \int U(x,T;x',0)\,\psi(x',0)\,dx'.$$ It is clear that for this to make sense, as $T\to 0$, $U(x,T;x',0)\to\delta(x-x')$. In the lecture on propagators, we found $$\langle x|U(T,0)|x'\rangle = \int \frac{dk}{2\pi}\, e^{-i\hbar k^2 T/2m}\,\langle x|k\rangle\langle k|x'\rangle = \int \frac{dk}{2\pi}\, e^{-i\hbar k^2 T/2m}\, e^{ik(x-x')} = \sqrt{\frac{m}{2\pi i\hbar T}}\; e^{im(x-x')^2/2\hbar T}.$$

Summing over Paths

Let us formulate the sum over paths for this simplest one-dimensional case, the free electron, more precisely. Each path is a continuous function of time $x(t)$ in the time interval $0\le t\le T$, with boundary conditions $x(0)=x'$, $x(T)=x$. Each path contributes a term $e^{iS/\hbar}$, where $$S[x(t)] = \int_0^T L(x(t),\dot{x}(t))\,dt = \int_0^T \tfrac{1}{2}m\dot{x}^2(t)\,dt$$ (for the free electron case), evaluated along that path. The integral over all paths is written: $$\langle x|U(T,0)|x'\rangle = \int D[x(t)]\; e^{iS[x(t)]/\hbar}.$$ This rather formal statement begs the question of how, exactly, we perform the sum over paths: what is the appropriate measure in the space of paths? A natural approach is to measure the paths in terms of their deviation from the classical path, since we know that path dominates in the classical limit. The classical path for the free electron is just the straight line from $x'$ to $x$, traversed at constant velocity, since there are no forces acting on the electron. We write $$x(t) = x_{\rm cl}(t) + y(t),\qquad x_{\rm cl}(0)=x',\quad x_{\rm cl}(T)=x,$$ and therefore $y(0)=0$, $y(T)=0$. Then $$\langle x|U(T,0)|x'\rangle = \int D[y(t)]\; e^{iS[x_{\rm cl}(t)+y(t)]/\hbar},$$ with $$S[x_{\rm cl}(t)+y(t)] = \int_0^T \tfrac{1}{2}m\left(\dot{x}_{\rm cl}(t)+\dot{y}(t)\right)^2 dt = S[x_{\rm cl}(t)] + \int_0^T m\,\dot{x}_{\rm cl}(t)\,\dot{y}(t)\,dt + \int_0^T \tfrac{1}{2}m\,\dot{y}^2(t)\,dt.$$ The middle term on the bottom line is zero, as it has to be, since it is a linear term in the deviation from the minimum path.
To see this explicitly, one can integrate by parts: the end terms are zero, from the boundary condition on $y$, and the other term is the acceleration of the particle along the classical path, which is zero. So $$\langle x|U(T,0)|x'\rangle = e^{iS[x_{\rm cl}(t)]/\hbar} \int D[y(t)]\; e^{iS[y(t)]/\hbar}.$$ The $y$-paths, being the deviation from the classical path from $x'$ to $x$, necessarily begin and end at the $y$-origin, since all paths summed over go from $x'$ to $x$. The classical path, motion from $x'$ to $x$ at a constant speed $v=(x-x')/T$, has action $Et$, with $E$ the classical energy $\tfrac{1}{2}mv^2$, so $$U(x,T;x',0) = A(T)\, e^{im(x-x')^2/2\hbar T}.$$ This gives the correct exponential term. The prefactor $A$, representing the sum over the deviation paths $y(t)$, cannot depend on $x$ or $x'$, and is fixed by the requirement that as $t$ goes to zero, $U$ must approach a $\delta$-function, giving the prefactor found previously.

Proving that the Sum-Over-Paths Definition of the Propagator is Equivalent to the Sum-Over-Eigenfunctions Definition

The first step is to construct a practical method of summing over paths. Let us begin with a particle in one dimension going from $x'$ at time 0 to $x$ at time $T$. The paths can be enumerated in a crude way, reminiscent of Riemann integration: divide the time interval 0 to $T$ into N equal intervals each of duration $\varepsilon$, so $t_0=0$, $t_1=t_0+\varepsilon$, $t_2=t_0+2\varepsilon$, ..., $t_N=T$. Next, define a particular path from $x'$ to $x$ by specifying the position of the particle at each of the intermediate times, that is to say, it is at $x_1$ at time $t_1$, $x_2$ at time $t_2$ and so on. Then, simplify the path by putting in straight line bits connecting $x_0$ to $x_1$, $x_1$ to $x_2$, etc. The justification is that in the limit of $\varepsilon$ going to zero, taken at the end, this becomes a true representation of the path. The next step is to sum over all possible paths with a factor $e^{iS/\hbar}$ for each one. The sum is accomplished by integrating over all possible values of the intermediate positions $x_1, x_2, \dots, x_{N-1}$, and then taking N to infinity. The action on the zigzag path is $$S = \int_0^T dt\left(\tfrac{1}{2}m\dot{x}^2 - V(x)\right) \approx \sum_i \left[\frac{m(x_{i+1}-x_i)^2}{2\varepsilon} - \varepsilon V\!\left(\frac{x_{i+1}+x_i}{2}\right)\right].$$ We define the "integral over paths", written $\int D[x(t)]$, by $$\lim_{\varepsilon\to 0}\; \frac{1}{B(\varepsilon)} \int \frac{dx_1}{B(\varepsilon)}\cdots\int \frac{dx_{N-1}}{B(\varepsilon)},$$ where we haven't yet figured out what the overall weighting factor $B(\varepsilon)$ is going to be. (It is standard convention to have that extra $B(\varepsilon)$ outside.)

To summarize: the propagator $U(x,T;x',0)$ is the contribution to the wave function at $x$ at time $t=T$ from that at $x'$ at the earlier time $t=0$. Consequently, $U(x,T;x',0)$ regarded as a function of $x$, $T$ is, in fact, nothing but the Schrödinger wave function $\psi(x,T)$, and therefore must satisfy Schrödinger's equation $$i\hbar\frac{\partial}{\partial T} U(x,T;x',0) = \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)\right) U(x,T;x',0).$$ We shall now show that defining $U(x,T;x',0)$ as a sum over paths, $$U(x,T;x',0) = \int D[x(t)]\, e^{iS[x(t)]/\hbar} = \lim_{\varepsilon\to 0}\; \frac{1}{B(\varepsilon)} \int \frac{dx_1}{B(\varepsilon)}\cdots\int \frac{dx_{N-1}}{B(\varepsilon)}\; e^{iS(x_1,\dots,x_{N-1})/\hbar},$$ it does in fact satisfy Schrödinger's equation, and furthermore goes to a $\delta$-function as time goes to zero. We shall establish this equivalence by proving that it satisfies the same differential equation. It clearly has the same initial value: as $t$ and $t'$ coincide, it goes to $\delta(x-x')$ in both representations.
To differentiate $U(x,T;x',0)$ with respect to $t$, we isolate the integral over the last path variable, $x_{N-1}$: $$U(x,T;x',0) = \int \frac{dx_{N-1}}{B(\varepsilon)}\; \exp\!\left[\frac{im(x-x_{N-1})^2}{2\hbar\varepsilon} - \frac{i\varepsilon}{\hbar} V\!\left(\frac{x+x_{N-1}}{2}\right)\right] U(x_{N-1},T-\varepsilon;x',0).$$ Now in the limit of $\varepsilon$ going to zero, almost all the contribution to this integral must come from close to the point of stationary phase, that is, $x_{N-1}=x$. In that limit, we can take $U(x_{N-1},T-\varepsilon;x',0)$ to be a slowly varying function of $x_{N-1}$, and replace it by the leading terms in a Taylor expansion about $x$, so $$U(x,T;x',0) = \int \frac{dx_{N-1}}{B(\varepsilon)}\; e^{\frac{im(x-x_{N-1})^2}{2\hbar\varepsilon}} \left(1 - \frac{i\varepsilon}{\hbar}V\!\left(\frac{x+x_{N-1}}{2}\right)\right) \left(U(x,T-\varepsilon) + (x_{N-1}-x)\frac{\partial U}{\partial x} + \frac{(x_{N-1}-x)^2}{2}\frac{\partial^2 U}{\partial x^2}\right).$$ The $x_{N-1}$ dependence in the potential $V$ can be neglected in leading order; that leaves standard Gaussian integrals, and $$U(x,T;x',0) = \frac{1}{B(\varepsilon)}\sqrt{\frac{2\pi i\hbar\varepsilon}{m}} \left(1 - \frac{i\varepsilon}{\hbar}V(x) + \frac{i\hbar\varepsilon}{2m}\frac{\partial^2}{\partial x^2}\right) U(x,T-\varepsilon;x',0).$$ Taking the limit of $\varepsilon$ going to zero fixes our unknown normalizing factor, $$B(\varepsilon) = \sqrt{\frac{2\pi i\hbar\varepsilon}{m}},$$ thus establishing that the propagator derived from the sum over paths obeys Schrödinger's equation, and consequently gives the same physics as the conventional approach.

Explicit Evaluation of the Path Integral for the Free Particle Case

The required correspondence to the Schrödinger equation result fixes the unknown normalizing factor, as we've just established. This means we are now in a position to evaluate the sum over paths explicitly, at least in the free particle case, and confirm the somewhat hand-waving result given above. The sum over paths is $$U(x,T;x',0) = \int D[x(t)]\, e^{iS[x(t)]/\hbar} = \lim_{\varepsilon\to 0}\; \frac{1}{B(\varepsilon)} \int \frac{dx_1}{B(\varepsilon)}\cdots\int \frac{dx_{N-1}}{B(\varepsilon)}\; \exp\!\left[\frac{i}{\hbar}\sum_i \frac{m(x_{i+1}-x_i)^2}{2\varepsilon}\right].$$ Let us consider the sum for small but finite $\varepsilon$. In particular, we'll divide up the interval first into halves, then quarters, and so on, into $2^n$ small intervals. The reason for this choice will become clear. Now, we'll integrate over half the paths: those for $i$ odd, leaving the even $x_i$ values fixed for the moment. The integrals are of the form $$\int dy\, e^{(ia/2)\left[(x-y)^2 + (y-z)^2\right]} = e^{(ia/2)(x^2+z^2)} \int dy\, e^{iay^2 - iay(x+z)} = e^{(ia/2)(x^2+z^2)} \sqrt{\frac{i\pi}{a}}\, e^{-(ia/4)(x+z)^2} = \sqrt{\frac{i\pi}{a}}\, e^{(ia/4)(x-z)^2},$$ using the standard result $$\int dx\, e^{-ax^2+bx} = \sqrt{\frac{\pi}{a}}\, e^{b^2/4a}.$$ Now put in the value $a = m/\hbar\varepsilon$: the factor $\sqrt{i\pi/a} = \sqrt{i\pi\hbar\varepsilon/m}$ cancels the normalization factor $B(\varepsilon) = \sqrt{2\pi i\hbar\varepsilon/m}$ except for the factor of 2 inside the square root. But we need that factor of 2, because we're left with an integral over the remaining even-numbered paths exactly like the one before, except that the time interval has doubled, both in the normalization factor and in the exponent, $\varepsilon\to 2\varepsilon$. So we're back where we started. We can now repeat the process, halving the number of paths again, then again, until finally we have the same expression but with only the fixed endpoints appearing.
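The halving step used repeatedly above, two $\varepsilon$-kernels composing into one $2\varepsilon$-kernel, can be checked directly by numerical integration. Here is a quick sketch with $\hbar = m = 1$; the grid limits and the sample endpoints are arbitrary choices of mine:

```python
import numpy as np

# Verify one halving step of the sliced path integral:
#   integral dy K(x,y;eps) K(y,z;eps) = K(x,z;2*eps),   hbar = m = 1,
# where K is the free-particle kernel used in the text.

def K(x, y, eps):
    return np.sqrt(1/(2j*np.pi*eps))*np.exp(1j*(x - y)**2/(2*eps))

eps, x, z = 0.1, 0.3, -0.2
y = np.linspace(-50, 50, 2_000_001)        # wide, fine grid for the oscillations
dy = y[1] - y[0]
lhs = np.sum(K(x, y, eps)*K(y, z, eps))*dy

print(lhs)                                  # numerical composition
print(K(x, z, 2*eps))                       # agrees up to truncation error
```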
How Particles Pass Through Potential Barriers That Have Less Energy

By Steven Holzner

When you are working with a potential barrier of height V₀ and width a where E > V₀, the particle has enough energy to pass through the potential barrier and end up in the x > a region. This is what the Schrödinger equation looks like in this case:

d²ψ/dx² + k₁²ψ = 0 for x < 0 and x > a, where k₁² = 2mE/ℏ²

d²ψ/dx² + k₂²ψ = 0 for 0 ≤ x ≤ a, where k₂² = 2m(E − V₀)/ℏ²

The solutions for ψ(x) are the following:

ψ(x) = Ae^(ik₁x) + Be^(−ik₁x) for x < 0

ψ(x) = Ce^(ik₂x) + De^(−ik₂x) for 0 ≤ x ≤ a

ψ(x) = Fe^(ik₁x) + Ge^(−ik₁x) for x > a

In fact, because there's no leftward traveling wave in the x > a region, G = 0, so

ψ(x) = Fe^(ik₁x) for x > a

So how do you determine A, B, C, D, and F? You use the continuity conditions, which work out here to be the following:

• A + B = C + D
• ik₁(A − B) = ik₂(C − D)
• Ce^(ik₂a) + De^(−ik₂a) = Fe^(ik₁a)
• ik₂Ce^(ik₂a) − ik₂De^(−ik₂a) = ik₁Fe^(ik₁a)

So putting all of these equations together, you get this for the coefficient F in terms of A:

F = 4k₁k₂e^(−ik₁a)A / [(k₁ + k₂)²e^(−ik₂a) − (k₁ − k₂)²e^(ik₂a)]

Wow. So what's the transmission coefficient, T? Well, T is

T = |F|²/|A|²

And this works out to be

T = 1 / [1 + ((k₁² − k₂²)²/(4k₁²k₂²)) sin²(k₂a)]

Whew! Note that as k₁ goes to k₂, T goes to 1, which is what you'd expect. So how about R, the reflection coefficient? Without going into the algebra, here's what R equals:

R = (k₁² − k₂²)² sin²(k₂a) / [4k₁²k₂² + (k₁² − k₂²)² sin²(k₂a)]

You can see what the E > V₀ probability density, |ψ(x)|², looks like for the potential barrier in the figure. That completes the potential barrier when E > V₀.
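If you want to play with these formulas, here's a short numerical sketch (units with ℏ = 1; the mass, barrier height and width are made-up example values):

```python
import numpy as np

# Transmission and reflection for a rectangular barrier with E > V0, using
# T = 1/(1 + (k1^2 - k2^2)^2 sin^2(k2 a)/(4 k1^2 k2^2)) and R = 1 - T.
hbar, m, V0, a = 1.0, 1.0, 1.0, 2.0

def transmission(E):
    k1 = np.sqrt(2*m*E)/hbar
    k2 = np.sqrt(2*m*(E - V0))/hbar
    s = (k1**2 - k2**2)**2*np.sin(k2*a)**2/(4*k1**2*k2**2)
    return 1/(1 + s)

for E in [1.2, 1.5, 2.0, 5.0]:
    T = transmission(E)
    print(f"E = {E:3.1f}   T = {T:.4f}   R = {1-T:.4f}")
# T hits exactly 1 whenever sin(k2*a) = 0, i.e. k2*a = n*pi: the familiar
# transmission resonances above the barrier.
```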
Ensemble interpretation

The ensemble interpretation of quantum mechanics considers the quantum state description to apply only to an ensemble of similarly prepared systems, rather than supposing that it exhaustively represents an individual physical system.[1] The advocates of the ensemble interpretation of quantum mechanics claim that it is minimalist, making the fewest physical assumptions about the meaning of the standard mathematical formalism. It proposes to take to the fullest extent the statistical interpretation of Max Born, for which he won the Nobel Prize in Physics.[2] For example, a new version of the ensemble interpretation that relies on a new formulation of probability theory was introduced by Raed Shaiia, which showed that the laws of quantum mechanics are the inevitable result of this new formulation.[3][4][5]

On the face of it, the ensemble interpretation might appear to contradict the doctrine proposed by Niels Bohr, that the wave function describes an individual system or particle, not an ensemble, though he accepted Born's statistical interpretation of quantum mechanics. It is not quite clear exactly what kind of ensemble Bohr intended to exclude, since he did not describe probability in terms of ensembles. The ensemble interpretation is sometimes, especially by its proponents, called "the statistical interpretation",[1] but it seems perhaps different from Born's statistical interpretation. As is the case for "the" Copenhagen interpretation, "the" ensemble interpretation might not be uniquely defined. In one view, the ensemble interpretation may be defined as that advocated by Leslie E. Ballentine, Professor at Simon Fraser University.[6] His interpretation does not attempt to justify, or otherwise derive, or explain quantum mechanics from any deterministic process, or make any other statement about the real nature of quantum phenomena; it intends simply to interpret the wave function. It does not propose to lead to actual results that differ from orthodox interpretations. It makes the statistical operator primary in reading the wave function, deriving the notion of a pure state from that. In the opinion of Ballentine, perhaps the most notable supporter of such an interpretation was Albert Einstein:

The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.

— Albert Einstein[7]

Nevertheless, one may doubt as to whether Einstein, over the years, had in mind one definite kind of ensemble.[8]

Meaning of "ensemble" and "system"

Perhaps the first expression of an ensemble interpretation was that of Max Born.[9] In a 1968 article, he used the German words 'Haufen gleicher', which are often translated into English, in this context, as 'ensemble' or 'assembly'. The atoms in his assembly were uncoupled, meaning that they were an imaginary set of independent atoms that defines its observable statistical properties. Born did not mean an ensemble of instances of a certain kind of wave function, nor one composed of instances of a certain kind of state vector. There may be room here for confusion or miscommunication.[citation needed]

An example of an ensemble is composed by preparing and observing many copies of one and the same kind of quantum system. This is referred to as an ensemble of systems. It is not, for example, a single preparation and observation of one simultaneous set ("ensemble") of particles.
A single body of many particles, as in a gas, is not an "ensemble" of particles in the sense of the "ensemble interpretation", although a repeated preparation and observation of many copies of one and the same kind of body of particles may constitute an "ensemble" of systems, each system being a body of many particles. The ensemble is not in principle confined to such a laboratory paradigm, but may be a natural system conceived of as occurring repeatedly in nature; it is not quite clear whether or how this might be realized.

The members of the ensemble are said to be in the same state, and this defines the term 'state'. The state is mathematically denoted by a mathematical object called a statistical operator. Such an operator is a map from a certain corresponding Hilbert space to itself, and may be written as a density matrix. It is characteristic of the ensemble interpretation to define the state by the statistical operator. Other interpretations may instead define the state by the corresponding Hilbert space. Such a difference between the modes of definition of state seems to make no difference to the physical meaning. Indeed, according to Ballentine, one can define the state by an ensemble of identically prepared systems, denoted by a point in the Hilbert space, as is perhaps more customary. The link is established by making the observing procedure a copy of the preparative procedure; mathematically the corresponding Hilbert spaces are mutually dual. Since Bohr's concern was that the specimen phenomena are joint preparation-observation occasions, it is not evident that the Copenhagen and ensemble interpretations differ substantially in this respect.

According to Ballentine, the distinguishing difference between the Copenhagen interpretation (CI) and the ensemble interpretation (EI) is the following:

CI: A pure state |ψ⟩ provides a "complete" description of an individual system, in the sense that a dynamical variable represented by the operator R has a definite value (r, say) if and only if R|ψ⟩ = r|ψ⟩.

EI: A pure state describes the statistical properties of an ensemble of identically prepared systems, of which the statistical operator is idempotent.

Ballentine emphasizes that the meaning of the "Quantum State" or "State Vector" may be described, essentially, by a one-to-one correspondence to the probability distributions of measurement results, not the individual measurement results themselves.[10] A mixed state is a description only of the probabilities of positions, not a description of actual individual positions. A mixed state is a mixture of probabilities of physical states, not a coherent superposition of physical states.

Ensemble interpretation applied to single systems

The statement that the quantum mechanical wave function itself does not apply to a single system in one sense does not imply that the ensemble interpretation itself does not apply to single systems in the sense meant by the ensemble interpretation. The condition is that there is not a direct one-to-one correspondence of the wave function with an individual system that might imply, for example, that an object might physically exist in two states simultaneously. The ensemble interpretation may well be applied to a single system or particle, and predict the probability that that single system will yield a given value for one of its properties, on repeated measurements.

Consider the throwing of two dice simultaneously on a craps table. The system in this case would consist of only the two dice. There are probabilities of various results, e.g. two fives, two twos, a one and a six, etc. Throwing the pair of dice 100 times would result in an ensemble of 100 trials. Classical statistics would then be able to predict what typically would be the number of times that certain results would occur. However, classical statistics would not be able to predict what definite single result would occur with a single throw of the pair of dice. That is, probabilities applied to single one-off events are, essentially, meaningless, except in the case of a probability equal to 0 or 1. It is in this way that the ensemble interpretation states that the wave function does not apply to an individual system. That is, by individual system, it is meant a single experiment or single throw of the dice of that system. The craps throws could equally well have been of only one die, that is, a single system or particle. Classical statistics would also equally account for repeated throws of this single die. It is in this manner that the ensemble interpretation is quite able to deal with "single" or individual systems on a probabilistic basis.
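The dice example is easy to make concrete in a few lines of Python. This is purely an illustration (the 100-throw ensemble size comes from the text; everything else is an arbitrary choice for the sketch):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# An ensemble of 100 trials, each trial being one throw of a pair of dice.
throws = rng.integers(1, 7, size=(100, 2))
totals = throws.sum(axis=1)

# Classical statistics predicts the typical frequencies of the totals:
# the number of ways to roll a total t with two dice is 6 - |t - 7|.
for t in range(2, 13):
    predicted = (6 - abs(t - 7)) / 36
    observed = np.mean(totals == t)
    print(t, round(predicted, 3), round(observed, 3))

# ...but it says nothing definite about any single trial:
print("first throw:", throws[0])   # unpredictable, unless p = 0 or 1
```

The predicted column is fixed by the theory; the observed column fluctuates from ensemble to ensemble and converges to the prediction only as the number of trials grows.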
The standard Copenhagen Interpretation (CI) is no different in this respect. A fundamental principle of QM is that only probabilistic statements may be made, whether for individual systems/particles, a simultaneous group of systems/particles, or a collection (ensemble) of systems/particles. An identification that the wave function applies to an individual system in standard CI QM does not defeat the inherent probabilistic nature of any statement that can be made within standard QM. To verify the probabilities of quantum mechanical predictions, however interpreted, inherently requires the repetition of experiments, i.e. an ensemble of systems in the sense meant by the ensemble interpretation. QM cannot state that a single particle will definitely be in a certain position, with a certain momentum, at a later time, irrespective of whether or not the wave function is taken to apply to that single particle. In this way, the standard CI also "fails" to completely describe "single" systems. However, it should be stressed that, in contrast to classical systems and older ensemble interpretations, the modern ensemble interpretation, as discussed here, does not assume, nor require, that there exist specific values for the properties of the objects of the ensemble prior to measurement.

Preparative and observing devices as origins of quantum randomness

An isolated quantum mechanical system, specified by a wave function, evolves in time in a deterministic way according to the Schrödinger equation that is characteristic of the system. Though the wave function can generate probabilities, no randomness or probability is involved in the temporal evolution of the wave function itself. This is agreed, for example, by Born,[11] Dirac,[12] von Neumann,[13] London & Bauer,[14] Messiah,[15] and Feynman & Hibbs.[16]

An isolated system is not subject to observation; in quantum theory, this is because observation is an intervention that violates isolation. The system's initial state is defined by the preparative procedure; this is recognized in the ensemble interpretation, as well as in the Copenhagen approach.[17][18][19][20] The system's state as prepared, however, does not entirely fix all properties of the system.
The fixing of properties goes only as far as is physically possible, and is not physically exhaustive; it is, however, physically complete in the sense that no physical procedure can make it more detailed. This is stated clearly by Heisenberg in his 1927 paper.[21] It leaves room for further unspecified properties.[22] For example, if the system is prepared with a definite energy, then the quantum mechanical phase of the wave function is left undetermined by the mode of preparation. The ensemble of prepared systems, in a definite pure state, then consists of a set of individual systems, all having one and the same definite energy, but each having a different quantum mechanical phase, regarded as probabilistically random.[23] The wave function, however, does have a definite phase, and thus specification by a wave function is more detailed than specification by state as prepared. The members of the ensemble are logically distinguishable by their distinct phases, though the phases are not defined by the preparative procedure. The wave function can be multiplied by a complex number of unit magnitude without changing the state as defined by the preparative procedure.

The preparative state, with unspecified phase, leaves room for the several members of the ensemble to interact in various respective ways with other systems. An example is when an individual system is passed to an observing device so as to interact with it. Individual systems with various phases are scattered in various respective directions in the analyzing part of the observing device, in a probabilistic way. In each such direction, a detector is placed, in order to complete the observation. When the system hits the analyzing part of the observing device, which scatters it, it ceases to be adequately described by its own wave function in isolation. Instead it interacts with the observing device in ways partly determined by the properties of the observing device. In particular, there is in general no phase coherence between system and observing device. This lack of coherence introduces an element of probabilistic randomness to the system–device interaction. It is this randomness that is described by the probability calculated by the Born rule. There are two independent originative random processes, one that of the preparative phase, the other that of the phase of the observing device. The random process that is actually observed, however, is neither of those originative ones. It is the phase difference between them, a single derived random process.

The Born rule describes that derived random process, the observation of a single member of the preparative ensemble. In the ordinary language of classical or Aristotelian scholarship, the preparative ensemble consists of many specimens of a species. The quantum mechanical technical term 'system' refers to a single specimen, a particular object that may be prepared or observed. Such an object, as is generally so for objects, is in a sense a conceptual abstraction, because, according to the Copenhagen approach, it is defined, not in its own right as an actual entity, but by the two macroscopic devices that should prepare and observe it. The random variability of the prepared specimens does not exhaust the randomness of a detected specimen. Further randomness is injected by the quantum randomness of the observing device.
It is this further randomness that makes Bohr emphasize that there is randomness in the observation that is not fully described by the randomness of the preparation. This is what Bohr means when he says that the wave function describes "a single system". He is focusing on the phenomenon as a whole, recognizing that the preparative state leaves the phase unfixed, and therefore does not exhaust the properties of the individual system. The phase of the wave function encodes further detail of the properties of the individual system. The interaction with the observing device reveals that further encoded detail. It seems that this point, emphasized by Bohr, is not explicitly recognized by the ensemble interpretation, and this may be what distinguishes the two interpretations. It seems, however, that this point is not explicitly denied by the ensemble interpretation.

Einstein perhaps sometimes seemed to interpret the probabilistic "ensemble" as a preparative ensemble, recognizing that the preparative procedure does not exhaustively fix the properties of the system; therefore he said that the theory is "incomplete". Bohr, however, insisted that the physically important probabilistic "ensemble" was the combined prepared-and-observed one. Bohr expressed this by demanding that an actually observed single fact should be a complete "phenomenon", not a system alone, but always with reference to both the preparing and the observing devices. The Einstein–Podolsky–Rosen criterion of "completeness" is clearly and importantly different from Bohr's. Bohr regarded his concept of "phenomenon" as a major contribution that he offered for quantum theoretical understanding.[24][25] The decisive randomness comes from both preparation and observation, and may be summarized in a single randomness, that of the phase difference between preparative and observing devices. The distinction between these two devices is an important point of agreement between Copenhagen and ensemble interpretations. Though Ballentine claims that Einstein advocated "the ensemble approach", a detached scholar would not necessarily be convinced by that claim of Ballentine. There is room for confusion about how "the ensemble" might be defined.

"Each photon interferes only with itself"

Perhaps here may be found a reason for differences of opinion, as between Niels Bohr's and other interpretations. Niels Bohr famously insisted that the wave function refers to a single individual quantum system. What did he mean by this? He was expressing the idea that Dirac expressed when he famously wrote: "Each photon then interferes only with itself. Interference between different photons never occurs."[26] Dirac clarified this by writing: "This, of course, is true only provided the two states that are superposed refer to the same beam of light, i.e. all that is known about the position and momentum of a photon in either of these states must be the same for each."[27]

Bohr wanted to emphasize that a superposition is different from a mixture. He seemed to think that those who spoke of a "statistical interpretation" were not taking that into account. To create, by a superposition experiment, a new and different pure state, from an original pure beam, one can put absorbers and phase-shifters into some of the sub-beams, so as to alter the composition of the re-constituted superposition. But one cannot do so by mixing a fragment of the original unsplit beam with component split sub-beams.
That is because one photon cannot both go into the unsplit fragment and go into the split component sub-beams. Bohr felt that talk in statistical terms might hide this fact. The physics here is that the effect of the randomness contributed by the observing apparatus depends on whether the detector is in the path of a component sub-beam, or in the path of the single superposed beam. This is not explained by the randomness contributed by the preparative device.

Measurement and collapse

Bras and kets

The ensemble interpretation is notable for its relative de-emphasis on the duality and theoretical symmetry between bras and kets. The approach emphasizes the ket as signifying a physical preparation procedure.[28] There is little or no expression of the dual role of the bra as signifying a physical observational procedure. The bra is mostly regarded as a mere mathematical object, without very much physical significance. It is the absence of the physical interpretation of the bra that allows the ensemble approach to by-pass the notion of "collapse". Instead, the density operator expresses the observational side of the ensemble interpretation. It hardly needs saying that this account could be expressed in a dual way, with bras and kets interchanged, mutatis mutandis.

In the ensemble approach, the notion of the pure state is conceptually derived by analysis of the density operator, rather than the density operator being conceived as conceptually synthesized from the notion of the pure state.

An attraction of the ensemble interpretation is that it appears to dispense with the metaphysical issues associated with reduction of the state vector, Schrödinger cat states, and other issues related to the concepts of multiple simultaneous states. The ensemble interpretation postulates that the wave function only applies to an ensemble of systems as prepared, but not observed. There is no recognition of the notion that a single specimen system could manifest more than one state at a time, as assumed, for example, by Dirac.[29] Hence, the wave function is not envisaged as being physically required to be "reduced". This can be illustrated by an example:

Consider a quantum die. If this is expressed in Dirac notation, the "state" of the die can be represented by a "wave" function describing the probability of an outcome given by:

|ψ⟩ = (1/√6)(|1⟩ + |2⟩ + |3⟩ + |4⟩ + |5⟩ + |6⟩),

where it should be noted that the "+" sign of a probabilistic equation is not an addition operator; it is a standard probabilistic or Boolean logical OR operator. The state vector is inherently defined as a probabilistic mathematical object such that the result of a measurement is one outcome OR another outcome.

It is clear that on each throw, only one of the states will be observed, but this is not expressed by a bra. Consequently, there appears to be no requirement for a notion of collapse of the wave function/reduction of the state vector, or for the die to physically exist in the summed state. In the ensemble interpretation, wave function collapse would make as much sense as saying that the number of children a couple produced collapsed to 3 from its average value of 2.4.

The state function is not taken to be physically real, or to be a literal summation of states. The wave function is taken to be an abstract statistical function, only applicable to the statistics of repeated preparation procedures. The ket does not directly apply to a single particle detection, but only to the statistical results of many. This is why the account does not refer to bras, and mentions only kets.
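The quantum die just described can be sketched in a few lines of Python, under the reading given above that the ket fixes only the statistics of repeated preparations; the Born-rule sampling below stands in for repeated preparation and measurement:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# State vector of the quantum die: equal amplitudes on the six faces.
psi = np.ones(6) / np.sqrt(6)
p = np.abs(psi) ** 2              # Born rule: P(face i) = |<i|psi>|^2

# Each individual throw yields exactly one face (outcome 1 OR 2 OR ...),
# never a physical mixture of faces; no "collapse" needs to be invoked.
print("one throw:", rng.choice(6, p=p) + 1)

ensemble = rng.choice(6, size=10_000, p=p) + 1
print("frequencies:", np.bincount(ensemble)[1:] / ensemble.size)  # ~1/6 each
```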
The ensemble approach differs significantly from the Copenhagen approach in its view of diffraction. The Copenhagen interpretation of diffraction, especially in the viewpoint of Niels Bohr, puts weight on the doctrine of wave–particle duality. In this view, a particle that is diffracted by a diffractive object, such as for example a crystal, is regarded as really and physically behaving like a wave, split into components, more or less corresponding to the peaks of intensity in the diffraction pattern. Though Dirac does not speak of wave–particle duality, he does speak of "conflict" between wave and particle conceptions.[30] He indeed does describe a particle, before it is detected, as being somehow simultaneously and jointly or partly present in the several beams into which the original beam is diffracted. So does Feynman, who speaks of this as "mysterious".[31]

The ensemble approach points out that this seems perhaps reasonable for a wave function that describes a single particle, but hardly makes sense for a wave function that describes a system of several particles. The ensemble approach demystifies this situation along the lines advocated by Alfred Landé, accepting Duane's hypothesis. In this view, the particle really and definitely goes into one or other of the beams, according to a probability given by the wave function appropriately interpreted. There is definite quantal transfer of translative momentum between particle and diffractive object.[32] This is recognized also in Heisenberg's 1930 textbook,[33] though usually not recognized as part of the doctrine of the so-called "Copenhagen interpretation". This gives a clear and utterly non-mysterious physical or direct explanation instead of the debated concept of wave function "collapse". It is presented in terms of quantum mechanics by other present day writers also, for example, Van Vliet.[34][35] For those who prefer physical clarity rather than mysterianism, this is an advantage of the ensemble approach, though it is not the sole property of the ensemble approach. With a few exceptions,[33][36][37][38][39][40][41] this demystification is not recognized or emphasized in many textbooks and journal articles.

David Mermin sees the ensemble interpretation as being motivated by an adherence ("not always acknowledged") to classical principles. "[...] the notion that probabilistic theories must be about ensembles implicitly assumes that probability is about ignorance. (The 'hidden variables' are whatever it is that we are ignorant of.) But in a non-deterministic world probability has nothing to do with incomplete knowledge, and ought not to require an ensemble of systems for its interpretation". However, according to Einstein and others, a key motivation for the ensemble interpretation is not about any alleged, implicitly assumed probabilistic ignorance, but the removal of "…unnatural theoretical interpretations…". A specific example is the Schrödinger cat problem discussed below, but this concept applies to any system where there is an interpretation that postulates, for example, that an object might exist in two positions at once.

Mermin also emphasises the importance of describing single systems, rather than ensembles. "The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles.
Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are."[42]

Single particles

According to proponents of this interpretation, no single system is ever required to be postulated to exist in a physical mixed state, so the state vector does not need to collapse. It can also be argued that this notion is consistent with the standard interpretation in that, in the Copenhagen interpretation, statements about the exact system state prior to measurement cannot be made. That is, if it were possible to absolutely, physically measure, say, a particle in two positions at once, then quantum mechanics would be falsified, as quantum mechanics explicitly postulates that the result of any measurement must be a single eigenvalue of a single eigenstate.

Arnold Neumaier finds limitations with the applicability of the ensemble interpretation to small systems. "Among the traditional interpretations, the statistical interpretation discussed by Ballentine in Rev. Mod. Phys. 42, 358-381 (1970) is the least demanding (assumes less than the Copenhagen interpretation and the Many Worlds interpretation) and the most consistent one. It explains almost everything, and only has the disadvantage that it explicitly excludes the applicability of QM to single systems or very small ensembles (such as the few solar neutrinos or top quarks actually detected so far), and does not bridge the gulf between the classical domain (for the description of detectors) and the quantum domain (for the description of the microscopic system)". (spelling amended)[43]

However, the "ensemble" of the ensemble interpretation is not directly related to a real, existing collection of actual particles, such as a few solar neutrinos; it is concerned with the ensemble collection of a virtual set of experimental preparations repeated many times. This ensemble of experiments may include just one particle/one system or many particles/many systems. In this light, it is arguably difficult to understand Neumaier's criticism, other than that Neumaier possibly misunderstands the basic premise of the ensemble interpretation itself.

Schrödinger's cat

The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. That being the case, the state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial non-issue. However, the application of state vectors to individual systems, rather than ensembles, is claimed to have explanatory benefits, in areas like single-particle twin-slit experiments and quantum computing (see Schrödinger's cat applications). As an avowedly minimalist approach, the ensemble interpretation does not offer any specific alternative explanation for these phenomena.

The frequentist probability variation

The claim that the wave-function approach fails to apply to single particle experiments cannot be taken as a claim that quantum mechanics fails in describing single-particle phenomena. In fact, it gives correct results within the limits of a probabilistic or stochastic theory.
Probability always requires a set of multiple data, and thus single-particle experiments are really part of an ensemble — an ensemble of individual experiments that are performed one after the other over time. In particular, the interference fringes seen in the double-slit experiment require repeated trials to be observed.

The quantum Zeno effect

Leslie Ballentine promoted the ensemble interpretation in his book Quantum Mechanics, A Modern Development. In it,[44] he described what he called the "Watched Pot Experiment". His argument was that, under certain circumstances, a repeatedly measured system, such as an unstable nucleus, would be prevented from decaying by the act of measurement itself. He initially presented this as a kind of reductio ad absurdum of wave function collapse.[45] The effect has been shown to be real. Ballentine later wrote papers claiming that it could be explained without wave function collapse.[46]
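The "watched pot" arithmetic itself is easy to reproduce. The sketch below uses the textbook idealization that Ballentine criticizes, namely instantaneous projective measurements that reset the state, for a two-level system that would flip out of |u⟩ with certainty in time T if left alone; it shows the numbers, not the interpretation:

```python
import numpy as np

def survival(n):
    """Probability of still finding |u> after n ideal projective
    measurements equally spaced over a total time T with Omega*T = pi
    (i.e., an unobserved system would have flipped with certainty)."""
    theta = np.pi / (2 * n)            # rotation half-angle per interval
    return np.cos(theta) ** (2 * n)

for n in [1, 2, 10, 100, 1000]:
    print(n, survival(n))   # 0.0, 0.25, ... -> approaches 1 as n grows
```

In the collapse picture the survival probability tends to 1 as the measurements become continuous; the experimental situation, as the papers quoted in the references discuss, is attributed by Ballentine to the strong perturbation from the measuring interaction rather than to wave function collapse.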
Classical ensemble ideas

These views regard the randomness of the ensemble as fully defined by the preparation, neglecting the subsequent random contribution of the observing process. This neglect was particularly criticized by Bohr. Early proponents of statistical approaches, for example Einstein, regarded quantum mechanics as an approximation to a classical theory. John Gribbin writes: "The basic idea is that each quantum entity (such as an electron or a photon) has precise quantum properties (such as position or momentum) and the quantum wavefunction is related to the probability of getting a particular experimental result when one member (or many members) of the ensemble is selected by an experiment." But hopes for turning quantum mechanics back into a classical theory were dashed. Gribbin continues: "There are many difficulties with the idea, but the killer blow was struck when individual quantum entities such as photons were observed behaving in experiments in line with the quantum wave function description. The Ensemble interpretation is now only of historical interest."[47]

In 1936 Einstein wrote a paper, in German, in which, amongst other matters, he considered quantum mechanics in general conspectus.[48] He asked "How far does the ψ-function describe a real state of a mechanical system?" Following this, Einstein offers some argument that leads him to infer that "It seems to be clear, therefore, that the Born statistical interpretation of the quantum theory is the only possible one." At this point a neutral student may ask whether Heisenberg and Bohr, considered respectively in their own rights, agree with that result. Born in 1971 wrote about the situation in 1936: "All theoretical physicists were in fact working with the statistical concept by then; this was particularly true of Niels Bohr and his school, who also made a vital contribution to the clarification of the concept."[49]

Where, then, is to be found disagreement between Bohr and Einstein on the statistical interpretation? Not in the basic link between theory and experiment; they agree on the Born "statistical" interpretation. They disagree on the metaphysical question of the determinism or indeterminism of evolution of the natural world. Einstein believed in determinism while Bohr (and it seems many physicists) believed in indeterminism; the context is atomic and sub-atomic physics. It seems that this is a fine question. Physicists generally believe that the Schrödinger equation describes deterministic evolution for atomic and sub-atomic physics. Exactly how that might relate to the evolution of the natural world may be a fine question.

Objective-realist version

Willem de Muynck describes an "objective-realist" version of the ensemble interpretation featuring counterfactual definiteness and the "possessed values principle", in which values of the quantum mechanical observables may be attributed to the object as objective properties the object possesses independent of observation. He states that there are "strong indications, if not proofs" that neither is a possible assumption.[50]

References

1. Ballentine, L.E. (1970). 'The statistical interpretation of quantum mechanics', Rev. Mod. Phys., 42(4): 358–381.
2. "The statistical interpretation of quantum mechanics" (PDF). Nobel Lecture. December 11, 1954.
3. Shaiia, Raed M. "On the Measurement Problem". International Journal of Theoretical and Mathematical Physics. 4 (5).
4. Shaiia, Raed M. (1 October 2014). "On the Measurement Problem". International Journal of Theoretical and Mathematical Physics. 4 (5): 202–219. doi:10.5923/j.ijtmp.20140405.04.
5. "Publications - Raed Shaiia". sites.google.com.
6. Ballentine, Leslie E. (1998). Quantum Mechanics: A Modern Development. World Scientific. Chapter 9. ISBN 981-02-4105-4.
7. Einstein: Philosopher-Scientist, edited by Paul Arthur Schilpp (Tudor Publishing Company, 1957), p. 672.
8. Home, D. (1997). Conceptual Foundations of Quantum Physics: An Overview from Modern Perspectives, Springer, New York, ISBN 978-1-4757-9810-4, p. 362: "Einstein's references to the ensemble interpretation remained in general rather sketchy."
9. Born, M. (1926). 'Zur Quantenmechanik der Stoßvorgänge', Zeitschrift für Physik, 37(11–12): 803–827 (German); English translation by Gunter Ludwig, pp. 206–225, 'On the quantum mechanics of collisions', in Wave Mechanics (1968), Pergamon, Oxford UK.
10. Ballentine, L.E., Quantum Mechanics, A Modern Development, p. 48.
11. Born, M. (1951). 'Physics in the last fifty years', Nature, 168: 625–630; p. 630: "We have accustomed ourselves to abandon deterministic causality for atomic events; but we have still retained the belief that probability spreads in space (multi-dimensional) and time according to deterministic laws in the form of differential equations."
12. Dirac, P.A.M. (1927). 'On the physical interpretation of the quantum dynamics', Proc. Roy. Soc. Series A, 113(1): 621–641, p. 641: "One can suppose that the initial state of a system determines definitely the state of the system at any subsequent time. ... The notion of probabilities does not enter into the ultimate description of mechanical processes."
13. von Neumann, J. (1932). Mathematische Grundlagen der Quantenmechanik (in German). Berlin: Springer. Translated as von Neumann, J. (1955). Mathematical Foundations of Quantum Mechanics. Princeton NJ: Princeton University Press. p. 349: "... the time dependent Schrödinger differential equation ... describes how the system changes continuously and causally."
14. London, F., Bauer, E. (1939). La Théorie de l'Observation dans la Mécanique Quantique, issue 775 of Actualités Scientifiques et Industrielles, section Exposés de Physique Générale, directed by Paul Langevin, Hermann & Cie, Paris, translated by Shimony, A., Wheeler, J.A., Zurek, W.H., McGrath, J., McGrath, S.M. (1983), at pp. 217–259 in Wheeler, J.A., Zurek, W.H. editors (1983). Quantum Theory and Measurement, Princeton University Press, Princeton NJ; p. 232: "... the Schrödinger equation has all the features of a causal connection."
15. Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam, p. 61: "... specifying Ψ at a given initial instant uniquely defines its entire later evolution, in accord with the hypothesis that the dynamical state of the system is entirely determined once Ψ is given."
16. Feynman, R.P., Hibbs, A. (1965). Quantum Mechanics and Path Integrals, McGraw–Hill, New York, p. 22: "the amplitudes φ are solutions of a completely deterministic equation (the Schrödinger equation)."
17. Dirac, P.A.M. (1940). The Principles of Quantum Mechanics, fourth edition, Oxford University Press, Oxford UK, pp. 11–12: "A state of a system may be defined as an undisturbed motion that is restricted by as many conditions or data as are theoretically possible without mutual interference or contradiction. In practice, the conditions could be imposed by a suitable preparation of the system, consisting perhaps of passing it through various kinds of sorting apparatus, such as slits and polarimeters, the system being undisturbed after preparation."
18. Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam, pp. 204–205: "When the preparation is complete, and consequently the dynamical state of the system is completely known, one says that one is dealing with a pure state, in contrast to the statistical mixtures which characterize incomplete preparations."
19. Ballentine, L.E. (1998). Quantum Mechanics: A Modern Development. Singapore: World Scientific. Chapter 9. ISBN 981-02-4105-4. p. 46: "Any repeatable process that yields well-defined probabilities for all observables may be termed a state preparation procedure."
20. Jauch, J.M. (1968). Foundations of Quantum Mechanics, Addison–Wesley, Reading MA; p. 92: "Two states are identical if the relevant conditions in the preparation of the state are identical"; p. 93: "Thus, a state of a quantum system can only be measured if the system can be prepared an unlimited number of times in the same state."
21. Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Z. Phys. 43: 172–198. Translation as 'The actual content of quantum theoretical kinematics and mechanics'. Also translated as 'The physical content of quantum kinematics and mechanics' at pp. 62–84 by editors John Wheeler and Wojciech Zurek, in Quantum Theory and Measurement (1983), Princeton University Press, Princeton NJ: "Even in principle we cannot know the present [state] in all detail."
23. Dirac, P.A.M. (1926). 'On the theory of quantum mechanics', Proc. Roy. Soc. Series A, 112(10): 661–677, p. 677: "The following argument shows, however, that the initial phases are of real physical importance, and that in consequence the Einstein coefficients are inadequate to describe the phenomena except in special cases."
24. Bohr, N. (1948). 'On the notions of complementarity and causality', Dialectica 2: 312–319: "As a more appropriate way of expression, one may advocate limitation of the use of the word phenomenon to refer to observations obtained under specified circumstances, including an account of the whole experiment."
25. Rosenfeld, L. (1967). 'Niels Bohr in the thirties: Consolidation and extension of the conception of complementarity', pp. 114–136 in Niels Bohr: His life and work as seen by his friends and colleagues, edited by S. Rozental, North Holland, Amsterdam;
124: "As a direct consequence of this situation it is now highly necessary, in the definition of any phenomenon, to specify the conditions of its observation, the kind of apparatus determining the particular aspect of the phenomenon we wish to observe; and we have to face the fact that different conditions of observation may well be incompatible with each other to the extent indicated by indeterminacy relations of the Heisenberg type." 26. ^ Dirac, P.A.M., The Principles of Quantum Mechanics, (1930), 1st edition, p. 15; (1935), 2nd edition, p. 9; (1947), 3rd edition, p. 9; (1958), 4th edition, p. 9. 27. ^ Dirac, P.A.M., The Principles of Quantum Mechanics, (1930), 1st edition, p. 8. 28. ^ Ballentine, L.E. (1998). Quantum Mechanics: a Modern Development, World Scientific, Singapore, p. 47: "The quantum state description may be taken to refer to an ensemble of similarly prepared systems." 29. ^ Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford UK, p. 12: "The general principle of superposition of quantum mechanics applies to the states, with either of the above meanings, of any one dynamical system. It requires us to assume that between these states there exist peculiar relationships such that whenever the system is definitely in one state we can consider it as being partly in each of two or more other states." 31. ^ Feynman, R.P., Leighton, R.B., Sands, M. (1965). The Feynman Lectures on Physics, volume 3, Addison-Wesley, Reading, MA, p. 1–1. 32. ^ Ballentine, L.E. (1998). Quantum Mechanics: a Modern Development, World Scientific, Singapore, ISBN 981-02-2707-8, p. 136. 33. ^ a b Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press, Chicago, pp. 77–78. 34. ^ Van Vliet, K. (1967). Linear momentum quantization in periodic structures, Physica, 35: 97–106, doi:10.1016/0031-8914(67)90138-3. 35. ^ Van Vliet, K. (2010). Linear momentum quantization in periodic structures ii, Physica A, 389: 1585–1593, doi:10.1016/j.physa.2009.12.026. 36. ^ Pauling, L.C., Wilson, E.B. (1935). Introduction to Quantum Mechanics: with Applications to Chemistry, McGraw-Hill, New York, pp. 34–36. 37. ^ Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman and Sons, London, pp. 19–22. 38. ^ Bohm, D. (1951). Quantum Theory, Prentice Hall, New York, pp. 71–73. 39. ^ Thankappan, V.K. (1985/2012). Quantum Mechanics, third edition, New Age International, New Delhi, ISBN 978-81-224-3357-9, pp. 6–7. 40. ^ Schmidt, L.P.H., Lower, J., Jahnke, T., Schößler, S., Schöffler, M.S., Menssen, A., Lévêque, C., Sisourat, N., Taïeb, R., Schmidt-Böcking, H., Dörner, R. (2013). Momentum transfer to a free floating double slit: realization of a thought experiment from the Einstein-Bohr debates, Physical Review Letters 111: 103201, 1–5. 41. ^ Wennerstrom, H. (2014). Scattering and diffraction described using the momentum representation, Advances in Colloid and Interface Science, 205: 105–112. 42. ^ Mermin, N.D. The Ithaca interpretation 43. ^ "A theoretical physics FAQ". www.mat.univie.ac.at. 44. ^ Leslie E. Ballentine. Quantum Mechanics: A Modern Development. p. 342. ISBN 981-02-4105-4. 45. ^ "Like the old saying "A watched pot never boils", we have been led to the conclusion that a continuously observed system never changes its state! This conclusion is, of course false. The fallacy clearly results from the assertion that if an observation indicates no decay, then the state vector must be |y_u>. 
Each successive observation in the sequence would then "reduce" the state back to its initial value |ψ_u⟩, and in the limit of continuous observation there could be no change at all. Here we see that it is disproven by the simple empirical fact that [..] continuous observation does not prevent motion. It is sometimes claimed that the rival interpretations of quantum mechanics differ only in philosophy, and can not be experimentally distinguished. That claim is not always true, as this example proves." Ballentine, L., Quantum Mechanics, A Modern Development (p. 342).
46. "The quantum Zeno effect is not a general characteristic of continuous measurements. In a recently reported experiment [Itano et al., Phys. Rev. A 41, 2295 (1990)], the inhibition of atomic excitation and deexcitation is not due to any collapse of the wave function, but instead is caused by a very strong perturbation due to the optical pulses and the coupling to the radiation field. The experiment should not be cited as providing empirical evidence in favor of the notion of wave-function collapse." Physical Review.
47. Gribbin, John. Q is for Quantum. ISBN 978-0684863153.
48. Einstein, A. (1936). 'Physik und Realität', Journal of the Franklin Institute, 221(3): 313–347. English translation by J. Picard, 349–382.
49. Born, M.; Born, M. E. H. & Einstein, A. (1971). The Born–Einstein Letters: Correspondence between Albert Einstein and Max and Hedwig Born from 1916 to 1955, with commentaries by Max Born. I. Born, trans. London, UK: Macmillan. ISBN 978-0-8027-0326-2.
50. "Quantum mechanics the way I see it". www.phys.tue.nl.
Green's function (many-body theory)

In many-body theory, the term Green's function (or Green function) is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators.

The name comes from the Green's functions used to solve inhomogeneous differential equations, to which they are loosely related. (Specifically, only two-point 'Green's functions' in the case of a non-interacting system are Green's functions in the mathematical sense; the linear operator that they invert is the Hamiltonian operator, which in the non-interacting case is quadratic in the fields.)

Spatially uniform case

Basic definitions

We consider a many-body theory with field operator (annihilation operator written in the position basis) ψ(x). The Heisenberg operators can be written in terms of Schrödinger operators as

ψ(x, t) = e^{iKt} ψ(x) e^{−iKt},
and the creation operator is ψ̄(x, t) = [ψ(x, t)]† = e^{iKt} ψ†(x) e^{−iKt}, where K = H − μN is the grand-canonical Hamiltonian.

Similarly, for the imaginary-time operators,

ψ(x, τ) = e^{Kτ} ψ(x) e^{−Kτ}
ψ̄(x, τ) = e^{Kτ} ψ†(x) e^{−Kτ}.

[Note that the imaginary-time creation operator ψ̄(x, τ) is not the Hermitian conjugate of the annihilation operator ψ(x, τ).]

In real time, the 2n-point Green function is defined by

G^{(n)}(1 … n | 1′ … n′) = i^n ⟨T ψ(1) ⋯ ψ(n) ψ̄(n′) ⋯ ψ̄(1′)⟩,

where we have used a condensed notation in which j signifies (x_j, t_j) and j′ signifies (x_{j′}, t_{j′}). The operator T denotes time ordering, and indicates that the field operators that follow it are to be ordered so that their time arguments increase from right to left.

In imaginary time, the corresponding definition is

𝒢^{(n)}(1 … n | 1′ … n′) = ⟨T ψ(1) ⋯ ψ(n) ψ̄(n′) ⋯ ψ̄(1′)⟩,

where j signifies (x_j, τ_j). (The imaginary-time variables are restricted to the range from 0 to the inverse temperature β.)

Note regarding signs and normalization used in these definitions: The signs of the Green functions have been chosen so that the Fourier transform of the two-point (n = 1) thermal Green function for a free particle is

𝒢(k, ω_n) = 1/(−iω_n + ξ_k),

and the retarded Green function is

G^R(k, ω) = 1/(−(ω + iη) + ξ_k),

where ω_n is the Matsubara frequency and ξ_k = ε_k − μ. Throughout, ζ is +1 for bosons and −1 for fermions, and [·, ·] denotes either a commutator or anticommutator as appropriate. (See below for details.)

Two-point functions

The Green function with a single pair of arguments (n = 1) is referred to as the two-point function, or propagator. In the presence of both spatial and temporal translational symmetry, it depends only on the difference of its arguments. Taking the Fourier transform with respect to both space and time gives

𝒢(x, τ) = (1/β) Σ_{ω_n} ∫ dk 𝒢(k, ω_n) e^{i(k·x − ω_n τ)},

where the sum is over the appropriate Matsubara frequencies (and the integral involves an implicit factor of (2π)^{−d}, as usual). In real time, we will explicitly indicate the time-ordered function with a superscript T: G^T(k, ω).

The real-time two-point Green function can be written in terms of 'retarded' and 'advanced' Green functions, which will turn out to have simpler analyticity properties. The retarded and advanced Green functions are defined by

G^R(x t, x′ t′) = i θ(t − t′) ⟨[ψ(x, t), ψ̄(x′, t′)]⟩

and

G^A(x t, x′ t′) = −i θ(t′ − t) ⟨[ψ(x, t), ψ̄(x′, t′)]⟩,

respectively. They are related to the time-ordered Green function by

G^T(k, ω) = [1 + ζ n(ω)] G^R(k, ω) − ζ n(ω) G^A(k, ω),

where n(ω) = 1/(e^{βω} − ζ) is the Bose–Einstein or Fermi–Dirac distribution function.

Imaginary-time ordering and β-periodicity

The thermal Green functions are defined only when both imaginary-time arguments are within the range 0 to β. The two-point Green function has the following properties. (The position or momentum arguments are suppressed in this section.)

Firstly, it depends only on the difference of the imaginary times:

𝒢(τ, τ′) = 𝒢(τ − τ′).

The argument τ − τ′ is allowed to run from −β to β.

Secondly, 𝒢(τ) is (anti)periodic under shifts of β. Because of the small domain within which the function is defined, this means just

𝒢(τ + β) = ζ 𝒢(τ), for −β < τ < 0.

Time ordering is crucial for this property, which can be proved straightforwardly, using the cyclicity of the trace operation.

These two properties allow for the Fourier transform representation and its inverse,

𝒢(ω_n) = ∫₀^β dτ 𝒢(τ) e^{iω_n τ}  and  𝒢(τ) = (1/β) Σ_{ω_n} e^{−iω_n τ} 𝒢(ω_n).

Finally, note that 𝒢(τ) has a discontinuity at τ = 0; this is consistent with a long-distance behaviour of 𝒢(ω_n) ∼ 1/|ω_n|.
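The Fourier-transform pair above is easy to verify numerically for a free particle. The sketch below uses the widely used textbook convention G(iω_n) = 1/(iω_n − ξ), which differs from this article's sign choice by an overall sign; β and ξ are arbitrary illustrative numbers:

```python
import numpy as np
from scipy.integrate import quad

beta, xi = 2.0, 0.7                    # illustrative values (units arbitrary)
nF = 1.0 / (np.exp(beta * xi) + 1.0)   # Fermi-Dirac occupation

def G_tau(tau):
    # Free-fermion thermal propagator for 0 < tau < beta in this convention.
    return -np.exp(-xi * tau) * (1.0 - nF)

for n in [0, 1, 5]:
    wn = (2 * n + 1) * np.pi / beta    # fermionic Matsubara frequency
    re = quad(lambda t: np.cos(wn * t) * G_tau(t), 0, beta)[0]
    im = quad(lambda t: np.sin(wn * t) * G_tau(t), 0, beta)[0]
    print(n, re + 1j * im, 1.0 / (1j * wn - xi))   # the two columns agree
```

The same check with bosonic frequencies ω_n = 2nπ/β and the Bose factor verifies the periodic (rather than antiperiodic) case.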
Spectral representation

The propagators in real and imaginary time can both be related to the spectral density (or spectral weight), given by

ρ(k, ω) = (1/Z) Σ_{α,α′} 2π δ(E_{α′} − E_α − ω) |⟨α′| ψ̄_k |α⟩|² (e^{−βE_α} − ζ e^{−βE_{α′}}),

where |α⟩ refers to a (many-body) eigenstate of the grand-canonical Hamiltonian H − μN, with eigenvalue E_α, and Z is the grand partition function.

The imaginary-time propagator is then given by

𝒢(k, ω_n) = ∫ (dω′/2π) ρ(k, ω′) / (−iω_n + ω′),

and the retarded propagator by

G^R(k, ω) = ∫ (dω′/2π) ρ(k, ω′) / (−(ω + iη) + ω′),

where the limit as η → 0⁺ is implied. The advanced propagator is given by the same expression, but with −(ω − iη) + ω′ in the denominator. The time-ordered function can be found in terms of G^R and G^A. As claimed above, G^R and G^A have simple analyticity properties: the former (latter) has all its poles and discontinuities in the lower (upper) half-plane. The thermal propagator 𝒢(k, ω_n) has all its poles and discontinuities on the imaginary ω_n axis.

The spectral density can be found very straightforwardly from G^R, using the Sokhatsky–Weierstrass theorem

1/(x ± iη) = P(1/x) ∓ iπδ(x),

where P denotes the Cauchy principal part. This gives

ρ(k, ω) = 2 Im G^R(k, ω).

This furthermore implies that G^R(k, ω) obeys the following relationship between its real and imaginary parts:

Re G^R(k, ω) = (1/π) P ∫ dω′ Im G^R(k, ω′) / (ω′ − ω),

where P denotes the principal value of the integral.

The spectral density obeys a sum rule,

∫ (dω/2π) ρ(k, ω) = 1,

which gives G^R(k, ω) ∼ −1/ω as |ω| → ∞.

Hilbert transform

The similarity of the spectral representations of the imaginary- and real-time Green functions allows us to define the function

G(k, z) = ∫ (dω/2π) ρ(k, ω) / (−z + ω),

which is related to 𝒢 and G^R by

𝒢(k, ω_n) = G(k, iω_n)  and  G^R(k, ω) = G(k, ω + iη).

A similar expression obviously holds for G^A. The relation between G(k, z) and ρ(k, ω) is referred to as a Hilbert transform.

Proof of spectral representation

We demonstrate the proof of the spectral representation of the propagator in the case of the thermal Green function, defined as

𝒢(k, τ) = ⟨T ψ(k, τ) ψ̄(k, 0)⟩.

Due to translational symmetry, it is only necessary to consider 𝒢(k, τ) for τ > 0, given by

𝒢(k, τ) = (1/Z) Σ_α e^{−βE_α} ⟨α| ψ(k, τ) ψ̄(k, 0) |α⟩.

Inserting a complete set of eigenstates gives

𝒢(k, τ) = (1/Z) Σ_{α,α′} e^{−βE_α} ⟨α| ψ(k, τ) |α′⟩ ⟨α′| ψ̄(k, 0) |α⟩.

Since |α⟩ and |α′⟩ are eigenstates of H − μN, the Heisenberg operators can be rewritten in terms of Schrödinger operators, giving

𝒢(k, τ) = (1/Z) Σ_{α,α′} e^{−βE_α} e^{(E_α − E_{α′})τ} |⟨α′| ψ̄_k |α⟩|².

Performing the Fourier transform then gives

𝒢(k, ω_n) = (1/Z) Σ_{α,α′} |⟨α′| ψ̄_k |α⟩|² (e^{−βE_α} − ζ e^{−βE_{α′}}) / (−iω_n + E_{α′} − E_α).

Momentum conservation allows the final matrix element to be written as (up to possible factors of the volume) |⟨α′| ψ̄(0) |α⟩|², which confirms the expressions for the Green functions in the spectral representation.

The sum rule can be proved by considering the expectation value of the commutator,

1 = (1/Z) Σ_α e^{−βE_α} ⟨α| [ψ_k, ψ̄_k]_{−ζ} |α⟩,

and then inserting a complete set of eigenstates into both terms of the commutator. Swapping the labels α and α′ in the first term then gives

1 = ∫ (dω/2π) ρ(k, ω),

which is exactly the result of the integration of ρ.

Non-interacting case

In the non-interacting case, ψ̄_k |α⟩ is an eigenstate with (grand-canonical) energy E_α + ξ_k, where ξ_k = ε_k − μ is the single-particle dispersion relation measured with respect to the chemical potential. The spectral density therefore becomes

ρ(k, ω) = 2π δ(ω − ξ_k) (1/Z) Σ_α e^{−βE_α} ⟨α| ψ_k ψ̄_k − ζ ψ̄_k ψ_k |α⟩.

From the commutation relations,

ψ_k ψ̄_k − ζ ψ̄_k ψ_k = 1,

with possible factors of the volume again. The sum, which involves the thermal average of the number operator, then gives simply [1 + ζ n(ξ_k)] − ζ n(ξ_k) = 1, leaving

ρ(k, ω) = 2π δ(ω − ξ_k).

The imaginary-time propagator is thus

𝒢(k, ω_n) = 1/(−iω_n + ξ_k),

and the retarded propagator is

G^R(k, ω) = 1/(−(ω + iη) + ξ_k).

Zero-temperature limit

As β → ∞, the spectral density becomes

ρ(k, ω) = 2π Σ_α [ |⟨α| ψ̄_k |0⟩|² δ(ω − E_α + E_0) − ζ |⟨α| ψ_k |0⟩|² δ(ω + E_α − E_0) ],

where α = 0 corresponds to the ground state. Note that only the first (second) term contributes when ω is positive (negative).

General case

Basic definitions

We can use 'field operators' as above, or creation and annihilation operators associated with other single-particle states, perhaps eigenstates of the (noninteracting) kinetic energy. We then use

ψ(x) = Σ_i φ_i(x) a_i,

where a_i is the annihilation operator for the single-particle state i and φ_i(x) is that state's wavefunction in the position basis. This gives

𝒢^{(n)}_{i_1…i_n | j_1…j_n}(τ_1 … τ_n | τ_1′ … τ_n′) = ⟨T a_{i_1}(τ_1) ⋯ a_{i_n}(τ_n) ā_{j_n}(τ_n′) ⋯ ā_{j_1}(τ_1′)⟩,

with a similar expression for G^{(n)}.
Two-point functions

These depend only on the difference of their time arguments, so that

𝒢_{ij}(τ | τ′) = 𝒢_{ij}(τ − τ′).

We can again define retarded and advanced functions in the obvious way; these are related to the time-ordered function in the same way as above. The same periodicity properties as described above apply to 𝒢_{ij}(τ). Specifically,

𝒢_{ij}(τ + β) = ζ 𝒢_{ij}(τ), for −β < τ < 0.

Spectral representation

In this case,

ρ_{ij}(ω) = (1/Z) Σ_{α,α′} 2π δ(E_{α′} − E_α − ω) ⟨α| a_i |α′⟩ ⟨α′| ā_j |α⟩ (e^{−βE_α} − ζ e^{−βE_{α′}}),

where α and α′ are many-body states. The expressions for the Green functions are modified in the obvious ways:

𝒢_{ij}(ω_n) = ∫ (dω′/2π) ρ_{ij}(ω′) / (−iω_n + ω′)  and  G^R_{ij}(ω) = ∫ (dω′/2π) ρ_{ij}(ω′) / (−(ω + iη) + ω′).

Their analyticity properties are identical. The proof follows exactly the same steps, except that the two matrix elements are no longer complex conjugates.

Noninteracting case

If the particular single-particle states that are chosen are 'single-particle energy eigenstates', i.e.

[H − μN, ā_i] = ξ_i ā_i,

then if |α⟩ is an eigenstate, so is ā_i |α⟩, with energy E_α + ξ_i, and so is a_i |α⟩, with energy E_α − ξ_i. We therefore have matrix elements that vanish unless i = j and E_{α′} = E_α + ξ_i, so that

ρ_{ij}(ω) = δ_{ij} 2π δ(ω − ξ_i) (1/Z) Σ_α e^{−βE_α} ⟨α| a_i ā_i − ζ ā_i a_i |α⟩.

We then rewrite a_i ā_i − ζ ā_i a_i = 1 and use the fact that the thermal average of the number operator gives the Bose–Einstein or Fermi–Dirac distribution function, n(ξ_i), so that [1 + ζ n(ξ_i)] − ζ n(ξ_i) = 1. Finally, the spectral density simplifies to give

ρ_{ij}(ω) = 2π δ_{ij} δ(ω − ξ_i),

so that the thermal Green function is

𝒢_{ij}(ω_n) = δ_{ij} / (−iω_n + ξ_i),

and the retarded Green function is

G^R_{ij}(ω) = δ_{ij} / (−(ω + iη) + ξ_i).

Note that the noninteracting Green function is diagonal, but this will not be true in the interacting case.
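The sum rule and the relation ρ = 2 Im G^R from the spectral-representation section are simple to check with an η-broadened free mode. In this sketch ξ and η are arbitrary illustrative numbers and the sign conventions follow this article:

```python
import numpy as np

xi, eta = 0.5, 0.05                  # illustrative values (units arbitrary)
w = np.linspace(-50.0, 50.0, 400_001)

# Retarded propagator of a free mode in this article's convention:
GR = 1.0 / (-(w + 1j * eta) + xi)

# rho = 2 Im G^R is the eta-broadened delta function 2*pi*delta(w - xi):
rho = 2.0 * GR.imag
print(np.trapz(rho, w) / (2 * np.pi))   # sum rule: ~1.0 (up to the tail cutoff)
```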
The theoretical and experimental justification for the Schrödinger equation motivates the discovery of the Schrödinger equation, the equation that describes the dynamics of nonrelativistic particles. The motivation uses photons, which are relativistic particles with dynamics described by Maxwell's equations, as an analogue for all types of particles. In a field of mathematics known as differential geometry, a Courant geometry was originally introduced by Zhang-Ju Liu, Alan Weinstein and Ping Xu in their investigation of doubles of Lie bialgebroids in 1997. Liu, Weinstein and Xu named it after Courant, who had implicitly devised earlier in 1990 the standard prototype of Courant algebroid through his discovery of a skew symmetric bracket on , called Courant bracket today, which fails to satisfy the Jacobi identity. Both this standard example and the double of a Lie bialgebra are special instances of Courant algebroids. In mathematics — specifically, in stochastic analysis — an Itô diffusion is a solution to a specific type of stochastic differential equation. That equation is similar to the Langevin equation used in physics to describe the Brownian motion of a particle subjected to a potential in a viscous fluid. Itô diffusions are named after the Japanese mathematician Kiyosi Itô. A symmetric, informationally complete, positive operator-valued measure (SIC-POVM) is a special case of a generalized measurement on a Hilbert space, used in the field of quantum mechanics. A measurement of the prescribed form satisfies certain defining qualities that makes it an interesting candidate for a "standard quantum measurement", utilized in the study of foundational quantum mechanics, most notably in QBism. Furthermore, it has been shown that applications exist in quantum state tomography and quantum cryptography, and a possible connection has been discovered with Hilbert's twelfth problem. In thermal quantum field theory, the Matsubara frequency summation is the summation over discrete imaginary frequencies. It takes the following form This article summarizes important identities in exterior calculus.
b3d124776a2b7b89
Physical Electronics Department Seminar: Zhiwei Fan
June 27, 2019, 11:00, Faculty of Engineering, Kitot Building, Room 011
Student seminar

You are invited to attend a lecture:

Dynamics of two-component solitons in optics and matter waves

Zhiwei Fan, M.Sc. student under the supervision of Prof. Boris A. Malomed

Solitons, also known as solitary waves, exist in a wide range of fields featuring essential nonlinear effects. They play a significant role in many subjects, such as optics, astronomy, condensed-matter physics, and biology. A soliton was first observed in 1834 by John Scott Russell in a canal in Scotland. In the years since, such nonlinear waves have been studied in many integrable nonlinear equations, such as the nonlinear Schrödinger equation (NLSE). In this research, we mainly focus on the dynamics and stability of solitons in two-component systems. This talk consists of two parts. In the first, we consider a dual-core nonlinear waveguide with PT symmetry, realized in the form of equal gain and loss terms carried by the coupled cores. The second part focuses on solitons in matter waves supported by different nonlinearities, such as dipole-dipole interactions (DDI) and microwave interactions.

On Thursday, June 27, 2019, 11:00
Room 011, Kitot building
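As a small companion to the abstract, here is a split-step Fourier sketch of the single-component focusing NLSE, i u_t + (1/2) u_xx + |u|^2 u = 0, the integrable baseline mentioned above rather than the two-component PT-symmetric or dipolar systems of the talk; the grid and step sizes are illustrative choices.

```python
import numpy as np

# Split-step Fourier integration of the focusing 1D NLSE
#   i u_t + (1/2) u_xx + |u|^2 u = 0,
# whose fundamental soliton is u(x, t) = sech(x) * exp(i t / 2).
# A soliton should propagate without changing its profile.

L, N = 40.0, 1024                       # domain width and grid size (assumed)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
dt, steps = 0.001, 5000                 # time step and number of steps (assumed)

u = 1 / np.cosh(x)                      # exact soliton initial condition
half_kinetic = np.exp(-0.5j * k**2 * dt / 2)    # half-step of the dispersive part

for _ in range(steps):
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))   # dispersion, half step
    u = u * np.exp(1j * np.abs(u)**2 * dt)          # nonlinearity, full step
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))   # dispersion, half step

print("peak amplitude after t = 5:", np.abs(u).max())   # should stay close to 1
print("norm drift:", np.trapz(np.abs(u)**2, x) - 2.0)   # integral of sech^2 = 2
```

The symmetric splitting (half dispersion, full nonlinearity, half dispersion) is second-order accurate in dt, which is why it is the standard workhorse for NLSE-type propagation problems.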
The quantum chemical physics of few-particle atoms and molecules

Baskerville, Adam (2018) The quantum chemical physics of few-particle atoms and molecules. Doctoral thesis (PhD), University of Sussex.

The many-electron Schrödinger equation for atoms and molecules still remains analytically insoluble after over 90 years of investigation. This has not deterred scientists from developing a large variety of elegant techniques and approximations to work around this issue and make many-particle quantum calculations computationally tractable. This thesis presents an all-particle treatment of three-particle systems, which are the simplest many-particle systems that still include electron correlation and nuclear-motion effects; they therefore provide a close-up view of fundamental particle interaction. Fully-Correlated (FC) energies and wavefunctions are calculated to high accuracy (mJ mol−1 or better for energies), and the central theme of this work is to use the wavefunctions to study fundamental quantum chemical physics. Nuclear motion has not received the same attention as electronic structure theory. This complicated coupling of electron and nuclear motions is studied in this work with the use of intracule and centre-of-mass particle densities, and it is found that nuclear motion exhibits strong correlation. A highly accurate Hartree-Fock implementation is presented which uses a Laguerre polynomial basis set. This method is used to accurately calculate electron correlation energies, using the Löwdin definition, and Coulomb holes, by comparison with our FC data. Additionally, the critical nuclear charge to bind two electrons within the HF methodology is calculated. A modification of Pekeris' series solution method is implemented to accurately model excited states of three-particle systems, and adapted to include the effects of nuclear motion along with three Non-Linear variational Parameters (NLPs) to aid convergence. This implementation is shown to produce high-accuracy results for singlet and triplet atomic excited S states, and the critical nuclear charge to bind two electrons in both spin states is investigated. Geometrical properties of three-particle systems are studied using a variety of particle densities and by determining the bound-state stability at the lowest continuum threshold as a function of mass. This enables us to better ascertain what is meant when we define a system as an atom or a molecule.
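For contrast with the fully-correlated calculations described above, here is the classic one-parameter variational estimate for helium, the crudest member of the same family of methods; the product trial function and the clamped, infinite-mass nucleus are my simplifications, not the thesis's.

```python
from scipy.optimize import minimize_scalar

# Variational estimate for helium-like ions with the trial wavefunction
#   psi(r1, r2) = exp(-Z_eff * (r1 + r2))   (clamped nucleus, atomic units).
# Standard expectation value of the Hamiltonian:
#   E(Z_eff) = Z_eff^2 - 2 * Z_nuc * Z_eff + (5/8) * Z_eff
# (kinetic, nuclear-attraction, and electron-electron repulsion terms).

Z_nuc = 2.0  # helium

def energy(z):
    return z**2 - 2 * Z_nuc * z + 5.0 * z / 8.0

res = minimize_scalar(energy, bounds=(0.5, 3.0), method="bounded")
print(f"optimal Z_eff = {res.x:.4f}  (analytic: 27/16 = {27/16:.4f})")
print(f"E_min = {res.fun:.4f} hartree  (vs. about -2.9037 from correlated methods)")
```

The roughly 0.056 hartree gap between this estimate and the correlated value is, in essence, the electron correlation energy that fully-correlated wavefunctions of the kind studied in the thesis are designed to capture.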
I've just uploaded to the arXiv my paper "Almost all Collatz orbits attain almost bounded values", submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the {3x+1} conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let {{\bf N}+1 = \{1,2,\dots\}} denote the positive integers (with {{\bf N} =\{0,1,2,\dots\}} the natural numbers), and let {\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1} be the map defined by setting {\mathrm{Col}(N)} equal to {3N+1} when {N} is odd and {N/2} when {N} is even. Let {\mathrm{Col}_{\min}(N) := \inf_{n \in {\bf N}} \mathrm{Col}^n(N)} be the minimal element of the Collatz orbit {N, \mathrm{Col}(N), \mathrm{Col}^2(N),\dots}. Then we have

Conjecture 1 (Collatz conjecture) One has {\mathrm{Col}_{\min}(N)=1} for all {N \in {\bf N}+1}.

Establishing the conjecture for all {N} remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker's theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for "most" {N} in some sense. For instance, it is a result of Krasikov and Lagarias that

\displaystyle \{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}

for all sufficiently large {x}. In another direction, it was shown by Terras that for almost all {N} (in the sense of natural density), one has {\mathrm{Col}_{\min}(N) < N}. This was then improved by Allouche to {\mathrm{Col}_{\min}(N) < N^\theta} for almost all {N} and any fixed {\theta > 0.869}, and extended later by Korec to cover all {\theta > \frac{\log 3}{\log 4} \approx 0.7924}. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):

Theorem 2 Let {f: {\bf N}+1 \rightarrow {\bf R}} be any function with {\lim_{N \rightarrow \infty} f(N) = +\infty}. Then we have {\mathrm{Col}_{\min}(N) < f(N)} for almost all {N} (in the sense of logarithmic density).

Thus for instance one has {\mathrm{Col}_{\min}(N) < \log\log\log\log N} for almost all {N} (in the sense of logarithmic density). The difficulty here is that one usually only expects to establish "local-in-time" results that control the evolution {\mathrm{Col}^n(N)} for times {n} that only get as large as a small multiple {c \log N} of {\log N}; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this type. However, to get {\mathrm{Col}^n(N)} all the way down to {f(N)} one needs something more like an "(almost) global-in-time" result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state {N=O(1)}. However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate "almost sure local wellposedness" type results (which give local control for almost all initial data from a given distribution) into "almost sure (almost) global wellposedness" type results if one is fortunate enough to draw one's data from an invariant measure for the dynamics. To illustrate the idea, let us take Korec's aforementioned result that if {\theta > \frac{\log 3}{\log 4}} one picks at random an integer {N} from a large interval {[1,x]}, then in most cases, the orbit of {N} will eventually move into the interval {[1,x^{\theta}]}.
Similarly, if one picks an integer {M} at random from {[1,x^\theta]}, then in most cases, the orbit of {M} will eventually move into {[1,x^{\theta^2}]}. It is then tempting to concatenate the two statements and conclude that for most {N} in {[1,x]}, the orbit will eventually move into {[1,x^{\theta^2}]}. Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn {N \in [1,x]} reaches {[1,x^\theta]}, the distribution of the final value is unlikely to be close to being uniformly distributed on {[1,x^\theta]}, and in particular could potentially concentrate almost entirely in the exceptional set of {M \in [1,x^\theta]} that do not make it into {[1,x^{\theta^2}]}. The point here is that the uniform measure on {[1,x]} is not transported by Collatz dynamics to anything resembling the uniform measure on {[1,x^\theta]}. So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the Syracuse map {\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1}, defined on the odd numbers {2{\bf N}+1 = \{1,3,5,\dots\}} by setting {\mathrm{Syr}(N) = (3N+1)/2^a}, where {2^a} is the largest power of {2} that divides {3N+1}. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication of {3} at each iteration step, which makes the map better behaved when performing "{3}-adic" analysis.) When viewed {3}-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, {\mathrm{Syr}(N)} is never divisible by {3}. A little less obviously, {\mathrm{Syr}(N)} is twice as likely to equal {2} mod {3} as it is to equal {1} mod {3}. This is because for a randomly chosen odd {\mathbf{N}}, the number of times {\mathbf{a}} that {2} divides {3\mathbf{N}+1} can be seen to have a geometric distribution of mean {2} – it equals any given value {a \in{\bf N}+1} with probability {2^{-a}}. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of {3}. For instance, one can compute that for large random odd {\mathbf{N}}, {\mathrm{Syr}^2(\mathbf{N}) \hbox{ mod } 9} will take the residue classes {0,1,2,3,4,5,6,7,8 \hbox{ mod } 9} with probabilities \displaystyle 0, \frac{8}{63}, \frac{16}{63}, 0, \frac{11}{63}, \frac{4}{63}, 0, \frac{2}{63}, \frac{22}{63} respectively. More generally, for any {n}, {\mathrm{Syr}^n(N) \hbox{ mod } 3^n} will be distributed according to the law of a random variable {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} on {{\bf Z}/3^n{\bf Z}} that we call a Syracuse random variable, and can be described explicitly as \displaystyle \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = 2^{-\mathbf{a}_1} + 3^1 2^{-\mathbf{a}_1-\mathbf{a}_2} + \dots + 3^{n-1} 2^{-\mathbf{a}_1-\dots-\mathbf{a}_n} \hbox{ mod } 3^n, \ \ \ \ \ (1) where {\mathbf{a}_1,\dots,\mathbf{a}_n} are iid copies of a geometric random variable of mean {2}. In view of this, any proposed "invariant" (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this {3}-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} to construct such a measure, but only if these random variables stabilise in the limit {n \rightarrow \infty} in a certain total variation sense.
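These distributional claims are easy to test numerically. A minimal sketch (the sample sizes and the range of N are arbitrary choices of mine): draw the n = 2 Syracuse random variable from formula (1) using iid geometric variables, compare with the empirical distribution of Syr^2(N) mod 9 over random large odd N, and check both against the stated vector (0, 8/63, 16/63, 0, 11/63, 4/63, 0, 2/63, 22/63).

```python
import random
from collections import Counter

random.seed(0)

def syracuse(n):
    """One step of the Syracuse map on odd n: divide 3n+1 by its largest power of 2."""
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

def geometric():
    """Geometric random variable with P(a = k) = 2^{-k}, k = 1, 2, ... (mean 2)."""
    a = 1
    while random.random() < 0.5:
        a += 1
    return a

SAMPLES = 200_000
inv2 = pow(2, -1, 9)   # 2^{-1} mod 9

# (a) The Syracuse random variable mod 9, sampled from formula (1) with n = 2.
model = Counter()
for _ in range(SAMPLES):
    a1, a2 = geometric(), geometric()
    model[(pow(inv2, a1, 9) + 3 * pow(inv2, a1 + a2, 9)) % 9] += 1

# (b) The actual Syracuse dynamics: Syr^2(N) mod 9 for random large odd N.
orbit = Counter()
for _ in range(SAMPLES):
    N = 2 * random.randrange(10**8, 10**9) + 1
    orbit[syracuse(syracuse(N)) % 9] += 1

predicted = [0, 8, 16, 0, 11, 4, 0, 2, 22]   # numerators over 63
for r in range(9):
    print(r, round(predicted[r] / 63, 4), model[r] / SAMPLES, orbit[r] / SAMPLES)
```

Both empirical columns should match the predicted probabilities to within a few parts in a thousand at this sample size.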
More precisely, in the paper we establish the estimate \displaystyle \sum_{Y \in {\bf Z}/3^n{\bf Z}} | \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z})=Y) - 3^{m-n} \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^m{\bf Z})=Y \hbox{ mod } 3^m)| \ \ \ \ \ (2) \displaystyle \ll_A m^{-A} for any {1 \leq m \leq n} and any {A > 0}. This type of stabilisation is plausible from entropy heuristics – the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} of geometric random variables that generates {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} has Shannon entropy {n \log 4}, which is significantly larger than the total entropy {n \log 3} of the uniform distribution on {{\bf Z}/3^n{\bf Z}}, so we expect a lot of “mixing” and “collision” to occur when converting the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} to {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}; these heuristics can be supported by numerics (which I was able to work out up to about {n=10} before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise. A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers \displaystyle 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n} are all distinct as {(a_1,\dots,a_n)} vary over tuples in {({\bf N}+1)^n}. Unfortunately, the process of reducing mod {3^n} creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple “Lefschetz principle” type argument one can at least show that the reductions \displaystyle 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^n \ \ \ \ \ (3) are mostly distinct for “typical” {a_1,\dots,a_m} (as drawn using the geometric distribution) as long as {m} is a bit smaller than {\frac{\log 3}{\log 4} n} (basically because the rational number appearing in (3) then typically takes a form like {M/2^{2m}} with {M} an integer between {0} and {3^n}). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on { \mathbf{Syrac}({\bf Z}/3^n{\bf Z})} (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of {{\bf Z}/3^n{\bf Z}} of density less than {n^{-C}} for some large absolute constant {C>0}). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}, and more precisely in showing that \displaystyle \mathbb{E} e^{-2\pi i \xi \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) / 3^n} \ll_A n^{-A} \ \ \ \ \ (4) for any {A > 0} and any {\xi \in {\bf Z}/3^n{\bf Z}} that is not divisible by {3}. If the random variable (1) was the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic factor does not immediately factor into a Riesz product. 
However, if one groups adjacent terms in (1) together, one can rewrite it (assuming {n} is even for sake of discussion) as \displaystyle (2^{\mathbf{a}_2} + 3) 2^{-\mathbf{b}_1} + (2^{\mathbf{a}_4}+3) 3^2 2^{-\mathbf{b}_1-\mathbf{b}_2} + \dots \displaystyle + (2^{\mathbf{a}_n}+3) 3^{n-2} 2^{-\mathbf{b}_1-\dots-\mathbf{b}_{n/2}} \hbox{ mod } 3^n where {\mathbf{b}_j := \mathbf{a}_{2j-1} + \mathbf{a}_{2j}}. The point here is that after conditioning on the {\mathbf{b}_1,\dots,\mathbf{b}_{n/2}} to be fixed, the random variables {\mathbf{a}_2, \mathbf{a}_4,\dots,\mathbf{a}_n} remain independent (though the distribution of each {\mathbf{a}_{2j}} depends on the value that we conditioned {\mathbf{b}_j} to), and so the above expression is a conditional sum of independent random variables. This lets one express the characteristic function of (1) as an averaged Riesz product. One can use this to establish the bound (4) as long as one can show that the expression \displaystyle \frac{\xi 3^{2j-2} (2^{-\mathbf{b}_1-\dots-\mathbf{b}_j+1} \mod 3^n)}{3^n} is not close to an integer for a moderately large number ({\gg A \log n}, to be precise) of indices {j = 1,\dots,n/2}. (Actually, for technical reasons we have to also restrict to those {j} for which {\mathbf{b}_j=3}, but let us ignore this detail here.) To put it another way, if we let {B} denote the set of pairs {(j,l)} for which \displaystyle \frac{\xi 3^{2j-2} (2^{-l+1} \mod 3^n)}{3^n} is close to an integer, we have to show that (with overwhelming probability) the random walk \displaystyle (1,\mathbf{b}_1), (2, \mathbf{b}_1 + \mathbf{b}_2), \dots, (n/2, \mathbf{b}_1+\dots+\mathbf{b}_{n/2}) (which we view as a two-dimensional renewal process) contains at least a few points lying outside of {B}. A little bit of elementary number theory and combinatorics allows one to describe the set {B} as the union of "triangles" with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of {B} after passing through any given such triangle, and it then becomes relatively easy to show that the renewal process usually has the required number of points outside of {B}. The most difficult case is when the renewal process passes through a particularly large triangle in {B}. However, it turns out that large triangles enjoy particularly good separation properties, and in particular after passing through a large triangle one is likely to encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of {B} that one can finish the proof of (4), and thus Theorem 2.
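One can also probe the decay (4) directly by Monte Carlo, at least for small n. A minimal sketch (the value of n and the sample size are arbitrary; the statistical noise floor is roughly 1/sqrt(samples)): sample Syrac(Z/3^n Z) from formula (1) and average the exponential. For ξ = 0 the average is exactly 1, while for ξ not divisible by 3 it should sit down near the noise floor.

```python
import cmath
import random

random.seed(1)

def geometric():
    """Geometric random variable with P(a = k) = 2^{-k}, k >= 1 (mean 2)."""
    a = 1
    while random.random() < 0.5:
        a += 1
    return a

n = 8
mod = 3**n
inv2 = pow(2, -1, mod)

def sample_syrac():
    """One draw of Syrac(Z/3^n Z) via formula (1)."""
    x, s = 0, 0
    for j in range(n):
        s += geometric()                      # s = a_1 + ... + a_{j+1}
        x += pow(3, j, mod) * pow(inv2, s, mod)
    return x % mod

SAMPLES = 50_000
draws = [sample_syrac() for _ in range(SAMPLES)]

for xi in [0, 1, 2, 5, 100, mod - 1]:         # xi = 0, then a few xi not divisible by 3
    phi = sum(cmath.exp(-2j * cmath.pi * xi * x / mod) for x in draws) / SAMPLES
    print(f"xi = {xi:5d}   |char fn| ~ {abs(phi):.4f}")
```

The contrast between the ξ = 0 line (exactly 1) and the others (a few parts in a thousand) is the numerical shadow of the estimate (4).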
Electrons do not "jump" or teleport from one energy level to the other

Neil deGrasse Tyson, you gotta fix this!

I am very happy that Neil Tyson made the series "Cosmos", which is another way to communicate science to people, something necessary in this era. I, personally, haven't watched it, because I'm a physicist and the guy usually talks about things I learned academically. However, my wife watched it… and she told me once: "Neil Tyson said that electrons disappear from one orbit and appear in another"… and she continued talking, while I interrupted and asked… what?! How could a physicist say that? That destroys the simplest rule in relativity! And yes, he did say that, which is crazy actually, and I'm pretty shocked that this kind of mistake would come out of such a famous scientist. Look for yourself:

Why is that wrong?

Simply, because there is no reason to believe that this is the case. Back then, when Bohr provided his semi-classical solution of the hydrogen atom, those transitions were not understood very well, and they would have led to such conflicts. But do we still deal with Bohr's model? Definitely not! We now know Quantum Mechanics. Before delving into Quantum Mechanics, let me pose this question: Is there any experimental evidence that electrons "teleport" from one orbit to the other as Neil Tyson said? The answer is: NO! And if there is, please let me know about it in the comments. So, even if we assume that Bohr made a successful model that explains the hydrogen energy levels in steady state, does that mean it can be blindly extended to explain the dynamics of electronic transitions? Definitely not! That's not scientific at all. Why is this not scientific? Because in science, we create models of natural phenomena, and then test them and try to disprove them. What we see in the case of Bohr's model is that it successfully explained atomic energy levels to a good accuracy, but there is no part in Bohr's model that talks about transitions. Therefore, blindly inferring that electrons are only ever in those levels is… crazy! On the other hand, this easily breaks special relativity's main result: particles do not exceed the speed of light. So, what does this mean? It means that if what Neil Tyson said is true, then Quantum Field Theory, which is a superset of Quantum Mechanics, agrees that nothing exceeds the speed of light, but the very simple hydrogen atom in Quantum Mechanics… does not. How crazy is that?

Quantum Mechanics and the hydrogen atom

The QM model comes from solving the (time-independent) Schrödinger equation, and the result of solving it is a wave-function, where this wave-function is directly related to the probability of finding an electron somewhere in space. In the case of a hydrogen atom, the Schrödinger equation is solved for simply a negative electron and a positive proton.
The result of this problem is presented in a wave-function that uses complicated mathematical functions, called Legendre Polynomials and Spherical Harmonics. The result is presented in a nice picture that I found on Wikipedia.

[Figure: hydrogen wave-function density plots]

Notice that the solution is not "black and white" like Neil Tyson described it. There's a key on the right, where a $+$ and $-$ can be seen. The $+$ represents regions of higher probability than the $-$ regions. The first row shows the typical spherical orbits that we understand from classical mechanics (the Bohr model), while the other rows show more complicated solutions that involve angular momentum. Notice that in those solutions, the wave-function is never zero anywhere but at infinity and at specific points (lines, or nodes) in space that are infinitely small (thanks to Lance for making me notice that more nodes exist in the wave-function)! So, according to our current knowledge of the hydrogen atom, why should we believe that electrons disappear from one level to another? I think there's no reason whatsoever.

A little more detail on transitions

Many atomic physics books have treated the problem of atomic transitions in a model called "Dipole Transitions". The model is usually accurate, with relative accuracy of around $10^{-6}$. In that model, the problem of transitions is very well understood. For example, in the book Optically Polarized Atoms: Understanding light-atom interactions, there is a section called "Visualization of atomic transitions". In it, the author shows that a transition from one state to another can be well modeled with a simple time evolution operator that incorporates the two involved states. A transition from the state $\left|2P\right\rangle$ to the state $\left|1S\right\rangle$ can be modeled with the simple wave-function $$\psi=a_{1}\left|1S\right\rangle +e^{-i\frac{E_{2}-E_{1}}{\hbar}t}a_{2}\left|2P\right\rangle$$ where $a_1,a_2$ are normalization factors, and $E_1,E_2$ are the energies of the states. We see that an oscillation of frequency (in units of energy) $E_2-E_1$ would happen, leading to the production of a photon. Then, again, why should we ever believe that electrons teleport from one atomic state to the other?

Is it just simplicity?

Probably some people will argue that Neil Tyson was simplifying the atomic model for lay people, but then I would ask the question: Since when do simplifications justify communicating false information? I think simplifying does not justify giving people wrong information at all.

Another simpler mistake

One more simple mistake Neil Tyson made in that video is that he claimed that spontaneous decays are not understood (that is, why they happen). This is actually not true. In the same book I mentioned above, it is discussed that spontaneous decays happen due to spontaneous quantum fluctuations, which act as a stimulus for atoms and hit them. Therefore, technically, spontaneous decays do not exist; they're just another form of stimulated emission. This is not a big deal, though. I think this is an advanced issue, and claiming that "we don't know" is better than giving wrong information.

Conclusion and discussion

I didn't write this article to blame Neil Tyson, and actually he's done a very good job with Cosmos. But I wrote this article because I found it common on social networks that people repeat this wrong information, and it has to be cleared up. I actually would be very grateful to him if he would fix this mistake and replace the episode.
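The oscillation is easy to see numerically. A minimal sketch in atomic units (the equal-weight superposition and the time grid are my choices): compute the 1s–2p0 dipole matrix element by quadrature, then evolve ⟨z⟩(t), which sweeps smoothly at the Bohr frequency ω = E₂ − E₁ = 3/8 rather than jumping.

```python
import numpy as np
from scipy.integrate import quad

# Hydrogen radial functions (atomic units):
R10 = lambda r: 2.0 * np.exp(-r)
R21 = lambda r: r * np.exp(-r / 2) / (2 * np.sqrt(6))

# <1s| z |2p0> = (angular factor 1/sqrt(3)) * integral of R10 * R21 * r^3 dr
radial, _ = quad(lambda r: R10(r) * R21(r) * r**3, 0, np.inf)
z12 = radial / np.sqrt(3)
print(f"dipole matrix element z12 = {z12:.4f}  "
      f"(exact: 128*sqrt(2)/243 = {128 * np.sqrt(2) / 243:.4f})")

# Equal superposition psi = (|1s> + e^{-i(E2-E1)t/hbar} |2p0>) / sqrt(2):
# <z>(t) = 2 * (1/2) * z12 * cos(omega * t), with omega = E2 - E1 = 3/8 a.u.
omega = -1.0 / 8 - (-1.0 / 2)
for t in np.linspace(0, 2 * np.pi / omega, 9):
    print(f"t = {t:6.2f}   <z> = {z12 * np.cos(omega * t):+.4f}")
```

The printed table is the "continuous oscillation" of the text made explicit: the charge cloud's center sweeps back and forth smoothly, which is exactly what radiates the photon.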
The conclusion of this article is that electrons do not teleport from one energy level to the other. There's no evidence for that whatsoever! Electrons are, also, not strictly bound to those energy levels. According to our understanding of the quantum world, electrons have a probability cloud, and an electronic transition (dipole transition) will just make this cloud oscillate continuously from one energy level to the other.
14 thoughts on "Electrons do not "jump" or teleport from one energy level to the other"

1. There are 2 things that seem to be in error. You stated "… the wave function is never zero anywhere but at infinity!" about atomic orbitals. Actually, that is only true for 1S; all the others have nodes where the function passes through zero. Second, you assumed that a hypothesis was a fact when you stated "… that spontaneous decays happen due to spontaneous quantum fluctuations …" There are no indications of spontaneous quantum fluctuations or whether they can cause spontaneous decays (although it is likely).

1. About the wave-function, you're right. I will fix it. But about the spontaneous fluctuations, the book stated that this is something proven; the following is a quote from the book, page 128:

Spontaneous vs. stimulated transitions

When an atom in the lower state absorbs radiation and is transferred to the upper state, it is clear that the radiation is the agent which initiates the transition. But what if the atom is initially in the upper state and there is no light shining on it? We know that, eventually, the atom will decay via spontaneous emission. The question is: what causes the atom in a stationary state to spontaneously radiate an electromagnetic wave and undergo a transition to the lower final state? It turns out that the answer to this question is actually beyond nonrelativistic quantum mechanics, and, to be answered properly, requires one to go into the realm of quantum electrodynamics, and, more generally, the quantum field theory. The gist of the story is that, even in the absence of applied light there is a certain density of electromagnetic-field fluctuations with all possible frequencies and polarizations that exists in what would otherwise be considered the vacuum. This means that if a free atom is in the upper state, there is always a vacuum fluctuation with a frequency and polarization that is needed to connect resonantly the upper and the lower states of the atom, which causes a spontaneous decay to occur. Thus, we can say that, in some sense, spontaneous emission is, in fact, stimulated emission induced by vacuum fluctuations. A thorough introduction to quantum field theory and quantum vacuum can be found in the book by Milonni (1994).

Is it a hypothesis? I don't think so. I think it's a successful model.

2. I have no idea about more than half the stuff in this post, but you made it pretty clear I shouldn't have taken everything stated in Cosmos as fact. The "electron jump" thing sounded so cool. :'( However, don't go so strongly against Tyson with your proofs. Understandings of these mechanics can change entirely, maybe to the extent of destroying our understanding of relativity. Be a bit more open-minded and more calm about the mistakes people make. Don't shout at your poor old wife.

1. I have just finished watching Cosmos for the first time, and really enjoyed it; most of it was already known to myself, but it's a great way to get the kiddies excited about science. Regarding the spontaneous decay of the hydrogen electron from one energy level to another, which in theory is impossible (or is it?): perhaps, and just perhaps, it is not the same electron hopping between orbits but a different electron from another hydrogen atom.
A sort of social networking between the atoms, passing information to each other via a dimension unknown to us on a quantum level. Just a thought.

1. Oh, sure. No problem. I just hope you don't attack "science" per se with some religious agenda. Otherwise, feel free to quote whatever you like from here.

3. Wow. I can't thank you enough for posting this. Last semester, in physics class we were taught that the electron teleported from one energy level to the next. I'm glad you cleared that up. You burst my wide-eyed, magic, wonder bubble about teleportation as a possibility, but I'm glad you did.

4. You're a physicist? I'm assuming that somehow you've managed to avoid ever teaching, however. And you've never really thought about the philosophy of science at all. Yeah, what Tyson said was an approximation. What you've said is a better approximation. But it's still an approximation – remember, in QFT, particles like electrons are not what's "really" going on. The fields behave like particles in certain circumstances. And QFT itself is an approximation, an effective field theory for what's "really" happening. A physicist knows that what's important is to use an approximation that is good enough for what you're calculating. A teacher knows that it's also important to use an approximation your audience can understand. And the wording you've quoted for Tyson is accurate even at the level of intro non-relativistic* quantum mechanics (upper-level undergrad or intro grad school). Electrons do "disappear" from, say, 1s, and appear in 2p. Notice we said nothing about distance there. The orbitals don't correspond to distances, as you yourself pointed out. If you want a better interpretation of measurement than this (default Copenhagen), you need to look at more sophisticated formulations like Everettian QM.

*Let's stress again that basic QM is non-relativistic. Your constraint from relativity literally doesn't make sense within this theory. If you want to properly obey SR, you need QFT.

1. Mr. "Actual physicist", apart from your hostile attitude: while parts of what you said are correct, and nothing in what I said is incorrect (which led you to wrestling with me on an opinion-based matter, with ZERO evidence or references provided, unlike me), you made a few mistakes yourself that show you're not as brilliant a physicist as you think you are. I'm not here to judge you as you want to judge me, but take two steps back and think about why you're doing this. You're not proving me to be less in any way. You wanna know about approximations? Go ahead and take a look at my doctoral dissertation, and just search for the word "approximation". Anyone reading your comment will sense your hostility, plus not a single reference or piece of evidence given for anything you said! And a reader will just wonder why you're doing this. So why are you doing this? Do you just want to prove you're a "better" physicist than me? OK, dude, you get a star! Congrats! Here are your mistakes: 1- Orbitals ARE really distances. Where the hell did I say they're not real distances? When we solve the Schrödinger equation, we use a RADIUS in the potential, and we derive the RADIUS of the hydrogen atom, $a_0$, which is called the Bohr RADIUS. What judges whether it's a real radius? It's basically the RADIUS that you see in your equations. It's not your opinion. It's not an opinion-based matter. Here is a link where you can learn about atomic orbitals and how they're related to RADIUS.
2- Transitions from 1s to 2p do involve a change in radius (from orbital n=1 to orbital n=2), and do involve a change in angular momentum. Ever seen the hydrogen wave-functions? 3- Even if orbitals are not real in a physical-radius sense (which they are, but let's just say they aren't), that doesn't necessarily mean that transitions between atomic states have to be instantaneous, unless you prove that; i.e., disappearing at one state and appearing in the other state. While constraints on physical radius transitions are bound to relativity, since change in distance over time is speed, other instantaneous transitions will have their problems too. It's more reasonable to consider a superposition between the initial and final states, where the wave function decays for one and grows for the other using an Evolution Operator, which I used to describe atomic transitions in my article (ever heard of it? Ever learned how wave-functions and operators evolve over time?). At least that's a superset of what you said! Why would any physicist in the world accept your crude, unrealistic model and take it as given as a general solution to every possible atomic system? Just because they can't disprove it (which is still not true)? How about you believe in unicorns because you can't disprove them too? Look, I'm more than happy to discuss things with you in detail like civilized people, but I'm not really willing to have a battle with hostile behavior. Your next comment will only be accepted if you stop your hostility.

1. Actually, to be clear, I was being condescending, not hostile. You are now being hostile. I'll try not to be condescending anymore, but I am going to correct you where you're wrong. Looking at your publications, you are a legit physicist. Good for you. This makes me a lot more puzzled about why you don't appear to know basic QM. 1. Orbitals are not distances. They really aren't. Orbitals are defined by the n,l,m quantum numbers (reference: any basic quantum book; e.g., Shankar, Griffiths, etc.). The expectation value of the r operator does depend on this (well, kinda – it depends on n and l, not m, so it is the same for all electrons within a subshell). But r, as you very well know, is an operator, not a number that's defined for a given |nlm>. (This is why we use quantum numbers like n, l, and m – because position and momentum, which define our particle state in classical phase space, are generally no longer well-defined things in Hilbert space. Source: any basic QM book. Seriously.) 2. No, they don't. They involve a change in the expectation value of the radius. Why do you keep conflating operators with quantities? They do change the angular momentum, which is well defined because |nlm> is an eigenstate of L^2. It's not an eigenstate of the operator r. 3. You seriously appear to be using a Bohr model of the atom. Of course this will make you worried about "instantaneously" transitioning states. Sure, the wavefunction evolves smoothly, according to Schrödinger. (Unless you measure it, in your Copenhagen interpretation, then it doesn't! Hence my recommendation for Everettian QM.) 4. Can you address the fact that non-relativistic QM is non-relativistic? Don't get all worked up about things the model can't accommodate. You are getting incredibly worked up over a really minor simplification that Tyson made.

1. 1 and 2- Orbitals are defined by $n,l,m$, right. Did I say that's not true? I never have. But what I said was that these numbers ARE RELATED TO RADIUS.
If you look at the Schrödinger equation solution of the hydrogen atom, you'll see that the wave function is a function $\psi_{nlm} (r,\theta,\phi)$. What is $r$? Isn't that a radius? Doesn't a specific $n$ give a higher likelihood for a specific radius? You call it an operator? Some quantum mechanics books call it an OBSERVABLE. It's something that is MEASURABLE. It's the closest thing to a real measurable quantity in Quantum Mechanics. What's the part that doesn't make it real for you? I honestly have no idea. 3- No, it's not just the Bohr model I'm using. It's also quantum mechanics, which is a superset of Bohr's model. There are tons of reasons not to accept instantaneous transitions, and while I mentioned one here and more than one in the article, I still have more. All Tyson is doing there is a very crude zeroth-order interpolation. Sorry! Science doesn't work that way, as far as I know. At least don't tell people that this is the way it works. You can always avoid talking about such confusing things, hence this article. 4- Yes, I agree with you completely. Non-relativistic QM is non-relativistic. But again, even if we ignore relativity, it doesn't justify accepting the crude zeroth-order interpolation Tyson did. You may find it a little slip as an approximation. I find it a big deal, because people started asking me about it, and then it's a problem for me to justify the very unrealistic and odd behavior Tyson gave as a fact to people. All I'm saying is that there are things we know with very high certainty that you can teach to the public, while this whole "instantaneous" jump between levels isn't anywhere close to anything true, especially since the public doesn't understand that physics is just modeling nature, and not the absolute truth about nature.

5. I just noticed in the original post, when you show the spherical harmonics plot: "The first row shows the typical spherical orbits that we understand from classical mechanics, while the other rows show more complicated solutions that involve angular momentum." I'm puzzled by this statement. Yes, the l=0 states are spherically symmetric, but this is not a classical thing. The radial portion comes from solving the Schrödinger equation, just like the rest of it. It's related to probability density, and has no classical analog. (In classical mechanics, the 1/r potential can have any energy you want, for example. Hence the "quantum" in quantum mechanics.) Classical orbits of course don't have to be spherical, either. (As in the QM case, that's just the lowest angular momentum state for a given energy.) I'd suggest you correct it, but you went a bit wild last time I made such a suggestion. Just a tip for continuing in your field (I hadn't realized how young you were until I looked at your dissertation): you need to substantively address criticism with good arguments if the criticism is wrong, and corrections if the criticism is right. You did that on a couple of minor issues above, but on mine… And yes, citations are good, but don't expect people to cite things found in standard intro-level texts in courses we all had to take to get PhDs.

1. The classical thing I'm talking about is the Bohr model. The Bohr model was able to predict the principal energy levels quite accurately, which are spherically symmetric, by definition, as they're defined to have constant radius. Check the Bohr model to remember this. The condition of atom stability in Bohr's model is $n\lambda=2\pi r$.
If you think of classical orbits as Earth's orbit around the sun, then you went too far, as I never said that; not even close. A reasonable improvement to the article is simply putting in parentheses that it's the (Bohr model). Nothing I said there is wrong, is it? I'm happy to address criticism, and I did it before. But when you become hostile or condescending, plus unclear and opinion-driven, then you make it hard for me. We all make mistakes, which is something I understand very well, but here you took your misunderstanding as my mistakes, with a bad attitude.

6. It has to teleport. If they didn't teleport, then the black spaces in between the orange wouldn't be black, DUH. And if it only traveled rarely, then only one side of the atom would be bright orange, but both sides are bright, check mate!
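A numerical footnote to the thread above, since the disputed point is checkable: for hydrogen, the expectation value ⟨r⟩ depends only on n and l and matches the textbook closed form ⟨r⟩ = (a₀/2)[3n² − l(l+1)]. A minimal sketch (the choice of quantum numbers is arbitrary; atomic units assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import genlaguerre
from math import factorial, sqrt

def R(n, l):
    """Hydrogen radial wavefunction R_nl(r) in atomic units (a0 = 1)."""
    norm = sqrt((2.0 / n)**3 * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    lag = genlaguerre(n - l - 1, 2 * l + 1)
    return lambda r: norm * np.exp(-r / n) * (2 * r / n)**l * lag(2 * r / n)

for n, l in [(1, 0), (2, 0), (2, 1), (3, 1)]:
    f = R(n, l)
    mean_r, _ = quad(lambda r: f(r)**2 * r**3, 0, np.inf)   # <r> = int r^3 R^2 dr
    closed = 0.5 * (3 * n**2 - l * (l + 1))
    print(f"n={n} l={l}:  numeric <r> = {mean_r:.4f},  (1/2)(3n^2 - l(l+1)) = {closed:.4f}")
```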
Sunday, September 06, 2009

Schellnhuber: West has exceeded quotas

In his previous life, Hans Joachim Schellnhuber used to be a fairly good theoretical physicist. For example, he would solve the Schrödinger equation with an almost periodic potential in 1983. He has spent a year or so as a postdoc at KITP in Santa Barbara (1981-82). But the times have changed. For a couple of years, he has been the director of the Potsdam Institute for Climate Impact Research and the German government's main climate protection adviser. What he has just said for Spiegel, in Industrialized nations are facing CO2 insolvency (click), is just breathtaking, and it helps me to understand how crazy political movements such as the Nazis or communists could have so easily taken over a nation that is as sensible as Germany. A few rotten steps in the hierarchy are enough for a loon to get to the very top. He is proposing the creation of a CO2 budget for every person on the planet, regardless of whether they live in Berlin or Beijing. Let us allow him to speak:

Humankind has to limit itself to emit only a fixed amount of carbon into the atmosphere until 2050. [...] Because the industrialized nations have already exceeded their quotas if you take into account past emissions. [...] With the current output you see that Germany, the US and other industrialized nations have either already used up their permissible quota, or will do so within the next few years. [...]

Question: So industrialized nations would have to pay massive sums of money?

Yes. Up to €100 billion ($142 billion) annually. If the richest sixth of the world's population were to pay this amount, each person would have to pay €100 per year. The West would give back part of the wealth it has taken from the South in the past centuries and be indebted to countries that are now amongst the poorest in the world. It would, however, have to be ensured that the poorer nations use the money for the purposes for which it is intended -- namely to help them develop a greener economy.

Of course, Schellnhuber is not the first hardcore nutcase of this kind who has been saying such things, pretending that he is oh so smart. Many of you may remember Richard Feynman's popular book, Surely You're Joking, Mr. Feynman, where he also described a crazy "interdisciplinary" conference where a similar "thinker" proposed the same "reparations" paid to the poor countries, based on the same assumptions that Mr. Schellnhuber has used. In order for me to save some time, let me just copy Feynman's entertaining description of the crazy conference he attended in the 1950s. The number and basic types of pompous fools haven't changed: they have just taken over many institutions that apparently include the German government:

There was a special dinner at some point, and the head of the theology place, a very nice, very Jewish man, gave a speech. It was a good speech, and he was a very good speaker, so while it sounds crazy now, when I'm telling about it, at that time his main idea sounded completely obvious and true. He talked about the big differences in the welfare of various countries, which cause jealousy, which leads to conflict, and now that we have atomic weapons, any war and we're doomed, so therefore the right way out is to strive for peace by making sure there are no great differences from place to place, and since we have so much in the United States, we should give up nearly everything to the other countries until we're all even.
Everybody was listening to this, and we were all full of sacrificial feeling, and all thinking we ought to do this. But I came back to my senses on the way home. The next day one of the guys in our group said, "I think that speech last night was so good that we should all endorse it, and it should be the summary of our conference." I started to say that the idea of distributing everything evenly is based on a theory that there's only X amount of stuff in the world, that somehow we took it away from the poorer countries in the first place, and therefore we should give it back to them. But this theory doesn't take into account the real reason for the differences between countries -- that is, the development of new techniques for growing food, the development of machinery to grow food and to do other things, and the fact that all this machinery requires the concentration of capital. It isn't the stuff, but the power to make the stuff, that is important. But I realize now that these people were not in science; they didn't understand it. They didn't understand technology; they didn't understand their time. The conference made me so nervous that a girl I knew in New York had to calm me down. "Look," she said, "you're shaking! You've gone absolutely nuts! Just take it easy, and don't take it so seriously. Back away a minute and look at what it is." So I thought about the conference, how crazy it was, and it wasn't so bad. But if someone were to ask me to participate in something like that again, I'd shy away from it like mad -- I mean zero! No! Absolutely not! And I still get invitations for this kind of thing today. Even worse, at the end of the conference they were going to have another meeting, but this time the public would come, and the guy in charge of our group has the nerve to say that since we've worked out so much, there won't be any time for public discussion, so we'll just tell the public all the things we've worked out. My eyes bugged out: I didn't think we had worked out a damn thing! Finally, when we were discussing the question of whether we had developed a way of having a dialogue among people of different disciplines -- our second basic "problem" -- I said that I noticed something interesting. Each of us talked about what we thought the "ethics of equality" was, from our own point of view, without paying any attention to the other guy's point of view. For example, the historian proposed that the way to understand ethical problems is to look historically at how they evolved and how they developed; the international lawyer suggested that the way to do it is to see how in fact people actually act in different situations and make their arrangements; the Jesuit priest was always referring to "the fragmentation of knowledge"; and I, as a scientist, proposed that we should isolate the problem in a way analogous to Galileo's techniques for experiments; and so on. "So, in my opinion," I said, "we had no dialogue at all. Instead, we had nothing but chaos!" Of course I was attacked, from all around. "Don't you think that order can come from chaos?" "Uh, well, as a general principle, or..." I didn't understand what to do with a question like "Can order come from chaos?" Yes, no, what of it? There were a lot of fools at that conference -- pompous fools -- and pompous fools drive me up the wall. Ordinary fools are all right; you can talk to them, and try to help them out. 
But pompous fools -- guys who are fools and are covering it all over and impressing people as to how wonderful they are with all this hocus pocus -- THAT, I CANNOT STAND! An ordinary fool isn't a faker; an honest fool is all right. But a dishonest fool is terrible! And that's what I got at the conference, a bunch of pompous fools, and I got very upset. I'm not going to get upset like that again, so I won't participate in interdisciplinary conferences any more.

Feynman's book continues with a story involving the young rabbis whose main concern was whether electricity was fire.

Hat tip: Marc Morano

A British religious bonus

The German readers could feel that they're superior because their government science adviser is the greatest nutcase among the world's government science advisers. ;-) In order for me to fight against such a new wave of German supremacy, let us look into the United Kingdom. Lord Robert May, Baron May of Oxford, has been the chief adviser to the British government for some time. Now he is the head of the British Science Association. According to The Guardian, The Telegraph, and others, God is obliged to fight against climate change, and so are the religious leaders. A deity can serve as a "punisher". Lord May will kindly accept the role of the commander-in-chief who will instruct God and all churches in the world what they should demand from the people. More precisely, faith groups "could lead policing of social behavior". And if God fails to make a soul green, the soul may always be burned at the stake. Welcome to the Postmodern Dark Ages, at least until Lord May is admitted to a mental asylum! ;-)

snail feedback (4):

reader Neil' said...

(I think your fast-comment section is having problems.) OK ... I can understand worrying about extreme measures as such, etc. But if CO2 is a warming stimulus (from spectroscopic data), there is still a risk factor, right? Most around here wouldn't take the multiplying factor to be around three, but maybe around one, etc. But if they aren't sure, then neither is anyone else AFAICT. So what is the proper "risk" posture then? How much is worth doing? People want critics to come up with alternative plans and not just critiques.

reader papertiger said...

Damn, Lubos - I'm gone a couple weeks and your fan club doubles. How do you keep them all fed? Neil', you're missing the main point. Warm is good. Period. How do I underline with this thing? Alright, I know. We used to want to save the rainforest. CO2 augmentation saved the rainforest. We have more rainforest than we know what to do with. It's creeping into Mexico, we have so much. We used to want to plant trees. We don't have to plant trees anymore. Trees are springing up all over the Boreal forest of their own accord. We used to want rain. Now we get plenty of rain, more than we need. We all like to see the girls in bikinis. Warm is good. Good doesn't need a risk factor.

reader Brad Tittle said...

Before we run around with our heads cut off, we should verify our data, verify our models. Let's do something silly and make sure the models predict reality before we make decisions based off of them. There are a lot of really intelligent people out there who think Global Climate Change is a big problem. I am pretty sure they are all going to wake up some day and wonder what the hell they were doing. Lots of hand waving going on. Every once in a while an actual bit of science shows up.
Just about everything else appears to be someone hyping something beyond the boundaries of rationality. Acidifying the oceans means going from a basic solution to a slightly less basic solution (pH from 8.1 to maybe 7.9). To say this is acidification requires someone to be purposely misleading. The real cause of the problem.... SUPERMAN. Everyone wants to save the world just like Superman does.

reader Ken said...

Thought you'd find this interesting - Electronic comparison of all Darwin Editions to highlight shifts in ideas
The Unquantum Quantum

Quantum theorists often speak of the world as being pointillist at the smallest scales. Yet a closer look at the laws of nature suggests that the physical world is actually continuous—more analog than digital

In Brief

• Quantum mechanics is usually thought of as inherently discrete, yet its equations are formulated in terms of continuous quantities. Discrete values emerge depending on how a system is set up.

• Digital partisans insist that the continuous quantities are, on closer inspection, discrete: they lie on a tightly spaced grid that gives the illusion of a continuum, like the pixels on a computer screen.

• This idea of pixilated, discrete space contradicts at least one feature of nature, however: the asymmetry between left- and right-handed versions of elementary particles of matter.

In the late 1800s the famous German mathematician Leopold Kronecker proclaimed, “God made the integers, all else is the work of man.” He believed that whole numbers play a fundamental role in mathematics. For today’s physicists, the quote has a different resonance. It ties in with a belief that has become increasingly common over the past several decades: that nature is, at heart, discrete—that the building blocks of matter and of spacetime can be counted out, one by one. This idea goes back to the ancient Greek atomists but has extra potency in the digital age. Many physicists have come to think of the natural world as a vast computer described by discrete bits of information, with the laws of physics an algorithm, like the green digital rain seen by Neo at the end of the 1999 film The Matrix. Yet is that really the way the laws of physics work? Although it might seem to contradict the spirit of the times, I, among many others, think that reality is ultimately analog rather than digital. In this view, the world is a true continuum. No matter how closely you zoom in, you will not find irreducible building blocks. Physical quantities are not integers but real numbers—continuous numbers, with an infinite number of digits after the decimal point. The known laws of physics, Matrix fans will be disappointed to learn, have features that no one knows how to simulate on a computer, no matter how many bytes its memory has. Appreciating this aspect of these laws is essential to developing a fully unified theory of physics.

An Ancient Enigma

The debate between digital and analog is one of the oldest in physics. Whereas the atomists conceived of reality as discrete, other Greek philosophers such as Aristotle thought of it as a continuum. In Isaac Newton’s day, which spanned the 17th and 18th centuries, natural philosophers were torn between particle (discrete) theories and wave (continuous) theories. By Kronecker’s time, advocates of atomism, such as John Dalton, James Clerk Maxwell and Ludwig Boltzmann, were able to derive the laws of chemistry, thermodynamics and gases. But many scientists remained unconvinced. Wilhelm Ostwald, winner of the 1909 Nobel Prize in Chemistry, pointed out that the laws of thermodynamics refer only to continuous quantities such as energy. Similarly, Maxwell’s theory of electromagnetism describes electric and magnetic fields as continuous.
Max Planck, who would later pioneer quantum mechanics, finished an influential paper in 1882 with the words: “Despite the great success that the atomic theory has so far enjoyed, ultimately it will have to be abandoned in favor of the assumption of continuous matter.” One of the most powerful arguments of the continuous camp was the seeming arbitrariness of discreteness. As an example: How many planets are there in the solar system? I was told at school that there are nine. In 2006 astronomers officially demoted Pluto from the planetary A-list, leaving just eight. At the same time, they introduced a B-list of dwarf planets. If you include these, the number increases to 13. In short, the only honest answer to the question of the number of planets is that it depends on how you count. The Kuiper belt beyond Neptune contains objects ranging in size from mere microns to a few thousand kilometers. You can count the number of planets only if you make a fairly arbitrary distinction between what is a planet, what is a dwarf planet, and what is just a lump of rock or ice. Quantum mechanics ultimately transformed the digital-analog debate. Whereas the definition of a planet may be arbitrary, the definition of an atom or an elementary particle is not. The integers labeling chemical elements—which, we now know, count the number of protons in their constituent atoms—are objective. Regardless of what developments occur in physics, I will happily take bets that we will never observe an element with √500 protons that sits between titanium and vanadium. The integers in atomic physics are here to stay. Another example occurs in spectroscopy, the study of light emitted and absorbed by matter. An atom of a particular type can emit only very specific colors of light, resulting in a distinctive fingerprint for each atom. Unlike human fingerprints, the spectra of atoms obey fixed mathematical rules. And these rules are governed by integers. The early attempts to understand quantum theory, most notably by Danish physicist Niels Bohr, placed discreteness at its heart.

Emergent Integers

But Bohr’s was not the final word. In 1925 Erwin Schrödinger developed an alternative approach to quantum theory based on the idea of waves. The equation that he formulated to describe how these waves evolve contains only continuous quantities—no integers. Yet when you solve the Schrödinger equation for a specific system, a little bit of mathematical magic happens. Take the hydrogen atom: the electron orbits the proton at very specific distances. These fixed orbits translate into the spectrum of the atom. The atom is analogous to an organ pipe, which produces a discrete series of notes even though the air movement is continuous. At least as far as the atom is concerned, the lesson is clear: God did not make the integers. He made continuous numbers, and the rest is the work of the Schrödinger equation. Perhaps more surprisingly, the existence of atoms, or indeed of any elementary particle, is also not an input of our theories. Physicists routinely teach that the building blocks of nature are discrete particles such as the electron or quark. That is a lie. The building blocks of our theories are not particles but fields: continuous, fluidlike objects spread throughout space. The electric and magnetic fields are familiar examples, but there are also an electron field, a quark field, a Higgs field, and several more. The objects that we call fundamental particles are not fundamental. Instead they are ripples of continuous fields.
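The organ-pipe point, integers emerging from a continuous equation, can be made concrete in a few lines. A minimal sketch (the harmonic well is my choice, since its exact levels are known; grid and box sizes are numerical parameters): discretize the continuous Schrödinger equation on a fine grid and the eigenvalues come out at E_n ≈ n + 1/2, evenly spaced, with nobody having put an integer in by hand.

```python
import numpy as np

# Finite-difference Schrodinger equation, H = -1/2 d^2/dx^2 + x^2/2
# (harmonic oscillator, hbar = m = omega = 1). The grid and box size are
# numerical choices; the discreteness of the spectrum is not.

N, L = 2000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

main = 1.0 / dx**2 + 0.5 * x**2            # diagonal of H
off = -0.5 / dx**2 * np.ones(N - 1)        # off-diagonals (kinetic term)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:6]
print("lowest eigenvalues:", np.round(E, 4))   # ~ 0.5, 1.5, 2.5, 3.5, 4.5, 5.5
```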
A skeptic might say that the laws of physics do contain some integers. For example, these laws describe three kinds of neutrinos, six kinds of quarks (each of which comes in three varieties called colors), and so on. Integers, integers everywhere. Or are there? All these examples are really counting the number of particle species in the Standard Model, a quantity that is famously difficult to make mathematically precise when particles interact with one another. Particles can mutate: a neutron can split into a proton, an electron and a neutrino. Should we count it as one particle or three particles or four particles? The claim that there are three kinds of neutrinos, six kinds of quarks, and so on is an artifact of neglecting the interactions between particles.

Here is another example of an integer in the laws of physics: the number of observed spatial dimensions is three. Or is it? The late mathematician Benoît Mandelbrot famously pointed out that the number of spatial dimensions does not have to be an integer. The coastline of Great Britain, for example, has a dimension of around 1.3. Moreover, in many proposed unified theories of physics, such as string theory, the dimension of space is ambiguous. Spatial dimensions can emerge or dissolve. I venture to say that only one true integer may occur in all of physics. The laws of physics refer to one dimension of time. Without precisely one dimension of time, physics appears to become inconsistent.

Indiscrete Ideas

Even if our current theories assume reality is continuous, many of my fellow physicists think that a discrete reality still underlies the continuity. They point to examples of how continuity can emerge from discreteness. On the macroscopic scales of everyday experience, the water in a glass appears to be smooth and continuous. It is only when you look much, much closer that you see the atomic constituents. Could a mechanism of this type perhaps sit at the root of physics? Maybe if we looked at a deeper level, the smooth quantum fields of the Standard Model, or even spacetime itself, would also reveal an underlying discrete structure.

We do not know the answer to this question, but we can glean a clue from 40 years of efforts to simulate the Standard Model on a computer. To perform such a simulation, one must first take equations expressed in terms of continuous quantities and find a discrete formulation that is compatible with the bits of information in which computers trade. Despite decades of effort, no one has succeeded in doing that. It remains one of the most important, yet rarely mentioned, open problems in theoretical physics.

Physicists have developed a discretized version of quantum fields called lattice field theory. It replaces spacetime with a set of points. Computers evaluate quantities at these points to approximate a continuous field. The technique has limitations, however. The difficulty lies with electrons, quarks and other particles of matter, called fermions. Strangely, if you rotate a fermion by 360 degrees, you do not find the same object that you started with. Instead you have to turn a fermion by 720 degrees to get back to the same object. Fermions resist being put on a lattice. In the 1980s Holger Bech Nielsen of the Niels Bohr Institute in Copenhagen and Masao Ninomiya, now at the Okayama Institute for Quantum Physics in Japan, proved a celebrated theorem that it is impossible to discretize the simplest kind of fermion.
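A short calculation conveys the obstruction behind that theorem. In the continuum a massless fermion in one dimension has energy E(k) = k; the naive symmetric lattice derivative replaces this with E(k) = sin(ka)/a, which vanishes both at k = 0 and at the zone edge k = π/a, so a spurious second ("doubler") species appears. The sketch below is mine rather than the article's.

```python
# Lattice fermion "doubling": the naive discretization of E(k) = k.
import numpy as np

a = 1.0                                  # lattice spacing (arbitrary units)
for ki in np.linspace(0.0, np.pi / a, 8):
    print(f"k = {ki:.3f}   continuum E = {ki:.3f}   "
          f"lattice E = {np.sin(ki * a) / a:.3f}")
# The lattice dispersion returns to zero at k = pi/a: an unwanted extra
# fermion species that no local, chirality-preserving tweak can remove.
```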
Such theorems are only as strong as their assumptions, and in the 1990s theorists, most notably David Kaplan, now at the University of Washington, and Herbert Neuberger of Rutgers University, introduced various creative methods to place fermions on the lattice. Quantum field theories come in many conceivable varieties, each with different possible types of fermions, and people can now formulate nearly every one on a lattice. There is just a single class of quantum field theory that people do not know how to put on a lattice. Unfortunately, that class includes the Standard Model. We can handle all kinds of hypothetical fermions but not the ones that actually exist.

Fermions in the Standard Model have a very special property. Those that spin in a counterclockwise direction feel the weak nuclear force, and those that spin in a clockwise direction do not. The theory is said to be chiral. A chiral theory is delicate. Subtle effects known as anomalies are always threatening to render it inconsistent. Such theories have so far resisted attempts to model them on a computer. Yet chirality is not a bug of the Standard Model that might go away in a deeper theory; it is a core feature. At first glance, the Standard Model, based on three interlinking forces, seems to be an arbitrary construction. It is only when thinking about the chiral fermions that its true beauty emerges. It is a perfect jigsaw puzzle, with the three pieces locked together in the only manner possible. The chiral nature of fermions in the Standard Model makes everything fit together.

Scientists are not entirely sure what to make of our inability to simulate the Standard Model on a computer. It is difficult to draw strong conclusions from a failure to solve a problem; quite possibly the puzzle is just a very difficult one waiting to be solved with conventional techniques. But aspects of the problem smell deeper than that. The obstacles involved are intimately tied to the mathematics of topology and geometry. The difficulty in placing chiral fermions on the lattice may be telling us something important: that the laws of physics are not, at heart, discrete. We are not living inside a computer simulation.

David Tong is professor of theoretical physics at the University of Cambridge. He previously held research positions in Boston, New York City, London and Mumbai. His interests have centered on quantum field theory, string theory, solitons and cosmology.

Scientific American 307, 46–49 (December 2012). Published online: 13 November 2012. doi:10.1038/scientificamerican1212-46
MSE Seminar: Chen Huang, Los Alamos National Laboratory
3:00–4:00 pm on Friday, January 17, 2014
8 St. Mary's Street, Room 205

Potential-Functional Embedding Theory: An Effective and Rigorous Way to Perform Multiphysics Quantum Mechanics Simulations for Materials and Molecules

Abstract: Accurate and detailed electronic structures are prerequisites for our understanding and prediction of the properties of molecules and materials. Ideally, we would simply solve the Schrödinger equation of quantum mechanics, which was introduced more than 80 years ago. Unfortunately, the many-body nature of the Schrödinger equation makes it extremely difficult to solve. Theories of varying levels of accuracy exist in the literature to solve the Schrödinger equation approximately. Very accurate methods, such as the configuration interaction method, often have a computational cost that scales exponentially with system size. Efficient methods, such as Kohn-Sham density functional theory, often have large errors that are difficult to predict. All these difficulties severely limit the predictive power of computer simulations. A novel way to obtain accurate electronic properties in large-scale materials is quantum mechanics embedding theory, in which the key regions of a material are solved using highly accurate methods, while the less important regions are solved with less accurate methods. In this talk, I will present our recent breakthrough in quantum mechanics embedding theory: the potential-functional embedding theory [1,2], which provides a unified framework to perform multiphysics simulations of materials and molecules in a seamless and first-principles manner. I will also present the application of our embedding theory to two long-standing puzzles related to surface catalysis and corrosion: (a) the true bonding nature between carbon monoxide and a copper surface [2], and (b) the counterintuitive process of the oxidation of an aluminum surface [3].

References:
[1] C. Huang and E.A. Carter, J. Chem. Phys. 135, 194104 (2011).
[2] C. Huang, M. Pavone, and E.A. Carter, J. Chem. Phys. 134, 154110 (2011).
[3] F. Libisch, C. Huang, P. Liao, M. Pavone, and E.A. Carter, Phys. Rev. Lett. 109, 198303 (2012).

Biography: Dr. Chen Huang is currently a postdoctoral research associate in the Theoretical Division at Los Alamos National Laboratory. He received his Ph.D. in physics from Princeton University in 2011, and his B.Sc. from Tsinghua University, Beijing, China. His research focuses on developing novel theoretical methods to solve challenging electronic-structure and kinetics problems in materials and molecules. He is the main inventor of the potential-functional embedding theory, which provides an effective and rigorous way to perform multiphysics quantum mechanics simulations for complex materials and molecules. With this embedding theory, he and coworkers have unveiled the complicated process of the oxidation of a metal surface, which had puzzled the scientific and industrial communities for decades. Another research area of Dr. Huang is long-timescale simulation methodology. At Los Alamos, he is actively developing accelerated molecular dynamics (AMD), which promises to bridge the timescale gap between experiments and simulations. His work on AMD provides an effective way to predict kinetic processes in materials and would greatly advance the computer-aided rational design of materials.

Faculty Host: David Bishop
Student Host: Yang Yu
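The abstract above describes carving a material into a chemically active region treated accurately and an environment treated cheaply. As a rough, generic illustration of that division of labor, here is my own toy sketch of a subtractive, ONIOM-style scheme; it is not the potential-functional theory itself, and all names in it are hypothetical stand-ins (for example, for a DFT code and a correlated wavefunction code).

```python
# Toy sketch of the generic "embedding" idea (NOT potential-functional
# embedding theory): combine a cheap method applied everywhere with an
# accurate method applied only to the region of interest,
#   E = E_low(total) - E_low(region) + E_high(region).

def embedded_energy(total_system, active_region, energy_low, energy_high):
    """Estimate the total energy, correcting the cheap description of
    the active region (e.g. an adsorption or corrosion site) with an
    expensive one. `energy_low` / `energy_high` are hypothetical callables."""
    return (energy_low(total_system)
            - energy_low(active_region)
            + energy_high(active_region))
```

Potential-functional embedding goes beyond such simple subtraction: as the abstract's references describe it, the subsystems share a common embedding potential that is optimized self-consistently, which is what makes the combination seamless and first-principles.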
Riding the Rogue Quantum Waves

Could the formation of giant sea swells help explain how the macroscopic world emerges from the quantum microworld?

by Steven Ashley
November 6, 2016

[Photo: Thomas Durt, École Centrale de Marseille]

In February 1933 the U.S. Navy oiler USS Ramapo was making good time on its run across the South Pacific when an officer spied a monster directly astern on the horizon. A huge rogue wave—a solitary sea swell that is much larger and more powerful than the surrounding waves—rapidly overtook the ship. Later, the Ramapo's crew, having somehow survived the freak encounter, triangulated the wave's height at an astounding 34 meters (112 feet)—the tallest rogue wave ever recorded.

Now, a trio of physicists is taking inspiration from such rogue waves—and the model commonly used to describe how they grow to such immense heights—to see if they can help solve one of the biggest mysteries in physics. Supported by a research grant of over $50,000 from FQXi, Thomas Durt of the École Centrale de Marseille, in France, Ralph Willox at the University of Tokyo, in Japan, and Samuel Colin of the Brazilian Center for Physics Research, in Rio de Janeiro, are investigating an alternative to quantum theory that can explain how the definite everyday world we see around us emerges from the uncertain microscopic realm, where objects can be in multiple places at the same time.

In the decade before the Ramapo's momentous meeting in the South Pacific, leading European theorists had begun laying the foundations of quantum theory. They were grappling with the notion that on small scales, particles can behave as waves, and waves as particles, depending on how they are measured. Stranger still was that a quantum particle-cum-wave has no location until it is observed; only when it is measured does it settle in one spot. In 1926, Austrian physicist Erwin Schrödinger encapsulated this uncertainty by describing quantum objects mathematically as "wavefunctions." Schrödinger's equation enables physicists to predict the probability of finding the quantum object in a particular place, or indeed with other fixed properties, when they carry out their experiment to measure the object's features.

According to standard quantum theory, the observer carrying out the experiment in some way causes the collapse of the quantum wavefunction, forcing the quantum object to take on definite properties. But nobody can explain how or why that should happen. So Durt, Willox and Colin have turned to rogue ocean waves—which scientists today actually describe using a more complicated version of the Schrödinger equation—for an answer.
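The "more complicated version of the Schrödinger equation" referred to here is the focusing non-linear Schrödinger equation. As a rough illustration, the following is a minimal split-step integration of that equation (my own sketch with arbitrary grid and step-size choices, not the researchers' code): a uniform wave background carrying a tiny ripple is unstable, and the ripple grows into a single dominant peak at the expense of its surroundings.

```python
# Minimal split-step integration of the focusing non-linear Schrodinger
# equation  i dpsi/dt = -(1/2) d^2psi/dx^2 - |psi|^2 psi  on a periodic
# domain (dimensionless units; grid and step sizes are toy choices).
import numpy as np

n, length, dt, steps = 256, 50.0, 0.005, 10000
x = np.linspace(0.0, length, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers

# Uniform background plus a small ripple: the seed of the "rogue" peak.
psi = (1.0 + 0.01 * np.cos(2.0 * np.pi * x / length)).astype(complex)

for s in range(steps):
    psi *= np.exp(1j * np.abs(psi) ** 2 * dt / 2)    # half nonlinear step
    psi = np.fft.ifft(np.exp(-1j * k ** 2 * dt / 2) * np.fft.fft(psi))
    psi *= np.exp(1j * np.abs(psi) ** 2 * dt / 2)    # half nonlinear step
    if s % 2000 == 0:
        print(f"t = {s * dt:5.1f}   max |psi| = {np.abs(psi).max():.3f}")
# The peak grows from ~1.01 toward roughly 3x the background: the
# modulational instability by which a wave "soaks" energy from neighbours.
```

The energy-"soaking" mechanism the growth illustrates is described in the next section.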
Soaking Energy

Although rogue waves have many causes, scientists believe they sometimes develop spontaneously from natural processes that occur amid a random background of smaller waves. Researchers hypothesize that an unusual wave type can form that somehow 'sucks' energy from surrounding waves to grow to enormous heights. The version of the Schrödinger equation that is used to describe rogue wave formation is described as a "non-linear" equation because—unlike the linear Schrödinger equation that is commonly used in quantum theory—it allows for the possibility that the waves in the system interact with themselves, amplifying effects. One of the simplest models says that through such non-linear processes, a normal ocean wave 'soaks' energy from the adjacent waves, reducing them to mere ripples as it rises in turn.

[Image: Understanding rogue waves could help unravel a quantum mystery. Credit: MIT News]

Could a similar effect be happening in quantum systems, enabling one type of quantum wave—corresponding to the quantum system being in one place, rather than spread over multiple locations, say—to grow at the expense of others? If so, this could explain how the quantum wavefunction collapses, as this single wave dominates over the others. "We aim to explain this spontaneous localization of the wave based on a process similar to the formation of rogue waves, whose birth is best described by a non-linear wave equation that describes extreme event amplification arising from small perturbations," Durt explains (see Classical and Quantum Gravity 31 (2014)).

Some years back, the mainstream view would have been that this approach stretches an analogy too far, because subatomic systems and ocean waves are simply too different in character to be treated with the same math. But that's changing: "Three or four years ago, I would have told you 'no, you will not find rogue wave-like phenomena in quantum mechanics'," notes Majid Taki, a physicist at the Lille University of Science and Technology, in France, who is an expert on non-linear waves in macroscopic environments. "That's because at the time we believed that rogue waves come only from highly non-linear conditions," Taki continues. Now, however, new research on rogue waves shows that they can be built in nearly linear systems that have only a small degree of non-linearity, a situation that is much closer to the quantum case. "I think now is the moment to try to find such effects in near-linear systems," says Taki, who is so convinced by the similarities that he advised Durt to pursue this approach.

Durt, Willox and Colin hope to develop new ways to test their model by carrying out experiments in an optical trap—a focused laser beam that generates small forces that physically hold tiny objects in empty space like 'optical tweezers.' The plan is to drop a quantum object, such as a nano-sized sphere, in a gravity-free environment. Ideally, this test would be performed in space because, far from Earth's gravity, it will be possible to see whether gravitational effects induced between the components of the nanosphere itself (the self-interaction required by the model) cause the object's wavefunction to collapse (Physical Review A 93, 062102 (2016)). "This is an ambitious proposal," says FQXi member Catalina Curceanu, a quantum physicist and expert on collapse models at the National Institute of Nuclear Physics in Frascati, Italy.
"Such experiments are very difficult because of the extreme precision that’s required." Curceanu says. "It means pushing existing technology to extremes, which is a good thing." Comment on this Article • You may also include LateX equations into your post. Insert LaTeX Equation [hide] LaTeX Equation Preview preview equation clear equation insert equation into post at cursor Your name: (optional) Recent Comments Emulator 3ds android is basically the compact flash emulator and FAT emulator, which lets 3DS games to play on different open source. This analysis couldn't solely save lives, however additionally offer insight into a good vary of phenomena of an identical nature. Waves seem in nearly every space of physics; in this case most notably within the quantum field. Steinmeyer’s analysis team investigated the philosophical doctrine and foregone conclusion of scoundrel wave occurrences in 3 completely different scoundrel systems, of that one was oceanic however 2 were optical. Best Essay Writing Service UK read all article comments Please enter your e-mail address:
The department of Astrophysics, Cosmology and Fundamental Interactions (COSMO) is a newly formed department in CBPF. COSMO is an active department of 9 faculty members: it holds weekly seminars and weekly journal clubs in various areas, organizes regular conferences and workshops, supervises many postgraduate students and postdoctoral researchers, attracts a dynamic flow of national and international visitors, establishes collaborations with international research groups and institutions, and teaches advanced postgraduate courses.

The research carried out in COSMO comprises (but is not limited to) the following areas:

* Quantum Cosmology: bouncing models; primordial cosmology; theory of cosmological perturbations of quantum-mechanical origin and their imprints on the Cosmic Microwave Background radiation and large-scale structure. [Felipe Tovar, Nelson Pinto-Neto]
* Quantum Foundations: Bohm-de Broglie interpretation of quantum mechanics; applications to quantum cosmology. [Felipe Tovar, Nelson Pinto-Neto]
* Out-of-equilibrium Relativistic Thermodynamics and Non-linear Electrodynamics: applications to cosmology; self-gravitating systems without singularity; gravitational collapse.
* Gravitational Waves: black hole inspirals in the extreme mass-ratio limit; gravitational self-force; ringdown. [Marc Casals]
* Spectroscopy of Black Holes: quasinormal modes; classical stability in four and higher dimensions; Green functions. [Marc Casals]
* Quantum Black Holes: evaporation via Hawking radiation; black hole thermodynamics; the information paradox; 'gauge-gravity' duality. [Marc Casals]
* Gravitational Lensing: simulations and inverse modelling of gravitational arcs; arc identification and characterisation; weak lensing and galaxy clusters. [Martin Makler]
* Astronomical Image Processing and Wide-field Surveys: object detection; modelling of the point-spread function; galaxy structural parameters; quality assessment. [Martin Makler]
* Dark Matter and Dark Energy: observational constraints from lensing, large-scale structure, the Cosmic Microwave Background, galaxy clusters and supernovae; models and observational tests of dark matter and dark energy unification; inhomogeneous cosmological models as alternatives to dark energy.
* Analogue Models of Gravity and Effective Geometry: non-linear electrodynamics; non-linear scalar field theories; analogue black holes.
* Solutions of Heun Equations and their Applications: the generalized spheroidal wave equation; the Schrödinger equation and linear perturbations of space-times. [Bartolomeu Figueiredo]
Atomic orbital

An atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom.[1] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. These functions may serve as a three-dimensional graph of an electron's likely location. The term may thus refer directly to the physical region defined by the function where the electron is likely to be.[2] Specifically, atomic orbitals are the possible quantum states of an individual electron in the collection of electrons around a single atom, as described by the orbital function.

Despite the obvious analogy to planets revolving around the Sun, electrons cannot be described as solid particles, and so atomic orbitals rarely, if ever, resemble a planet's elliptical path. A more accurate analogy might be that of a large and often oddly shaped atmosphere (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this atmosphere only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus, so that the resulting collection (sometimes termed the atom's "electron cloud"[3]) tends toward a generally spherical zone of probability describing where the atom's electrons will be found.

[Figure: Electron atomic and molecular orbitals. The chart of orbitals is arranged by increasing energy (see Madelung rule). Atomic orbitals are functions of three variables (two angles, and the distance r from the nucleus); the images are faithful to the angular component of each orbital, but not entirely representative of the orbital as a whole.]

The idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued in 1913 by Niels Bohr,[4] and the Japanese physicist Hantaro Nagaoka had published an orbit-based hypothesis for electronic behavior as early as 1904.[5] However, it was not until 1926 that the solution of the Schrödinger equation for electron waves in atoms provided the functions for the modern orbitals.[6] Because of the difference from classical mechanical orbits, the term "orbit" for electrons in atoms has been replaced with the term orbital—a term first coined by chemist Robert Mulliken in 1932.[7]

Atomic orbitals are typically described as "hydrogen-like" (meaning one-electron) wave functions over space, categorized by the quantum numbers n, l, and m, which correspond to the electron's energy, angular momentum, and angular momentum direction, respectively. Each orbital (defined by a different set of quantum numbers) contains a maximum of two electrons and is also known by the classical names used in electron configurations. These classical orbital names (s, p, d, f) are derived from the characteristics of their spectroscopic lines: sharp, principal, diffuse, and fundamental, the rest being named in alphabetical order.[8][9]
From about 1920, even before the advent of modern quantum mechanics, the aufbau principle (construction principle)—that atoms are built up of pairs of electrons arranged in simple repeating patterns of increasing odd numbers (1, 3, 5, 7, ...)—had been used by Niels Bohr and others to infer the presence of something like atomic orbitals within the total electron configuration of complex atoms.

In the mathematics of atomic physics, it is also often convenient to reduce the electron functions of complex systems into combinations of the simpler atomic orbitals. Although each electron in a multi-electron atom is not confined to one of the "one-or-two-electron atomic orbitals" in the idealized picture above, the electron wave-function may still be broken down into combinations that bear the imprint of atomic orbitals; as though, in some sense, the electron cloud of a many-electron atom is still partly "composed" of atomic orbitals, each containing only one or two electrons. The physicality of this view is best illustrated in the repetitive nature of the chemical and physical behavior of elements, which results in the natural ordering known from the 19th century as the periodic table of the elements. In this ordering, the repeating periodicity of 2, 6, 10, and 14 elements in the periodic table corresponds to the total number of electrons that occupy a complete set of s, p, d and f atomic orbitals, respectively.

Orbital names

Orbitals are given names of the form X type^y (for example, 1s² or 2p⁴), where X is the energy level corresponding to the principal quantum number n, "type" is a lower-case letter denoting the shape or subshell of the orbital, and y is the number of electrons in that orbital.

Connection to uncertainty relation

Although Heisenberg used infinite sets of positions for the electron in his matrices, this does not mean that the electron could be anywhere in the universe. Rather, there are several laws that show the electron must be in one localized probability distribution. An electron is described by its energy in Bohr's atom, which was carried over to matrix mechanics. Therefore, an electron in a certain n-sphere had to be within a certain range from the nucleus depending upon its energy. This restricts its location.

Limitations on the quantum numbers

The azimuthal quantum number ℓ is a non-negative integer. Within a shell where n is some integer n₀, ℓ ranges across all integer values satisfying 0 ≤ ℓ ≤ n₀ − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ is sometimes collectively called a subshell.

The magnetic quantum number mℓ is also always an integer. Within a subshell where ℓ is some integer ℓ₀, mℓ ranges thus: −ℓ₀ ≤ mℓ ≤ ℓ₀. For example:

          ℓ = 0     ℓ = 1
  n = 1   mℓ = 0
  n = 2   mℓ = 0    mℓ = −1, 0, 1
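These counting rules are easy to turn into code. A short sketch of my own (not part of the article), which recovers the familiar shell capacities 2, 8, 18 and 32:

```python
# Enumerate allowed quantum numbers: for each shell n, l runs over
# 0..n-1 and m_l over -l..+l; each (n, l, m_l) orbital holds at most
# two electrons (opposite spins).
for n in range(1, 5):
    orbitals = [(l, m) for l in range(n) for m in range(-l, l + 1)]
    print(f"n = {n}: {len(orbitals):2d} orbitals, "
          f"{2 * len(orbitals):2d} electrons")
# n = 1:  1 orbitals,  2 electrons
# n = 2:  4 orbitals,  8 electrons
# n = 3:  9 orbitals, 18 electrons
# n = 4: 16 orbitals, 32 electrons
```

Each ℓ-subshell alone contributes 2(2ℓ + 1) electrons, which is where the periodicities of 2, 6, 10 and 14 mentioned earlier come from.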
The shapes of orbitals

[Figure: The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wavefunction phase.]

Generally speaking, the number n determines the size and energy of the orbital for a given nucleus: as n increases, the size of the orbital increases. However, in comparing different elements, the higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the overall size of the whole atom remains very roughly constant, even as the number of electrons in heavier elements (higher Z) increases.

For each s, p, d, f and g set of orbitals, the set of orbitals that composes it forms a spherically symmetrical set of shapes. For non-s orbitals, which have lobes, the lobes point in directions so as to fill space as symmetrically as possible for the number of lobes that exist for a set of orientations. For example, the three p orbitals have six lobes, which are oriented to each of the six primary directions of 3-D space; for the 5 d orbitals, there are a total of 18 lobes, of which again six point in primary directions, and the 12 additional lobes fill the 12 gaps between each pair of these 6 primary axes.

[Orbitals table: images of the s, p, d and f orbitals from n = 1 through n = 6, omitted here.]

Orbital energy

The order in which subshells fill with electrons, counting only the s, p, d, f and g sublevels:

           s    p    d    f    g
  n = 1    1
  n = 2    2    3
  n = 3    4    5    7
  n = 4    6    8   10   13
  n = 5    9   11   14   17   21
  n = 6   12   15   18   22   26
  n = 7   16   19   23   27   31
  n = 8   20   24   28   32   36

Note: empty cells indicate non-existent sublevels, and the highest-numbered entries (those beyond 7p, at index 19) mark sublevels that could exist but do not hold electrons in any element currently known; indices missing from the table (such as 25) belong to n ≥ 9 rows not shown.

Electron placement and the periodic table

Relativistic effects

Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium (which results from the narrowing of the 6s to 5d transition energy to the point that visible light begins to be absorbed).

In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wavefunction of the electron for atoms with Z > 137 is oscillatory and unbound. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, resulting in a potential seen by inner electrons that is effectively less than Z. The critical Z value that makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. See Extension of the periodic table beyond the seventh period.
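The Bohr-model formula just quoted takes one line to evaluate. A quick arithmetic sketch of my own (not from the article):

```python
# v = Z * alpha * c for the innermost (n = 1) Bohr electron.
alpha = 1 / 137.035999          # fine-structure constant

for Z, name in [(1, "hydrogen"), (79, "gold"), (80, "mercury"), (137, "Z = 137")]:
    beta = Z * alpha            # v / c in the Bohr model
    print(f"{name:10s} Z = {Z:3d}   v/c = {beta:.3f}")
# Gold and mercury come out near v ~ 0.58c, large enough for the
# relativistic 6s effects described above; at Z = 137, v reaches c
# and the non-relativistic model breaks down.
```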
References

1. Milton Orchin, Roger S. Macomber, Allan Pinhas, and R. Marshall Wilson (2005). "Atomic Orbital Theory".
3. Feynman, Richard; Leighton; Sands (2006). The Feynman Lectures on Physics, The Definitive Edition, Vol. 1, lect. 6, p. 11. Addison Wesley. ISBN 0-8053-9046-4.
4. Bohr, Niels (1913). "On the Constitution of Atoms and Molecules". Philosophical Magazine 26 (1): 476.
5. Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine 7: 445–455.
6. Bryson, Bill (2003). A Short History of Nearly Everything. Broadway Books. pp. 141–143. ISBN 0-7679-0818-X.
7. Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Phys. Rev. 41 (1): 49–71. doi:10.1103/PhysRev.41.49.
8. Griffiths, David (1995). Introduction to Quantum Mechanics. Prentice Hall. pp. 190–191. ISBN 0-13-124405-1.
9. Levine, Ira (2000). Quantum Chemistry (5th ed.). Prentice Hall. pp. 144–145. ISBN 0-13-685512-1.

Further reading

• Tipler, Paul; Llewellyn, Ralph (2003). Modern Physics (4th ed.). New York: W. H. Freeman and Company. ISBN 0-7167-4345-0.
• Scerri, Eric (2007). The Periodic Table, Its Story and Its Significance. New York: Oxford University Press. ISBN 978-0-19-530573-9.
Wednesday, February 10, 2010

"It is easy to explain something to a layman. It is easier to explain the same thing to an expert. But even the most knowledgeable person cannot explain something to one who has limited, half-baked knowledge." — Hitopadesha

"To my mind there must be, at the bottom of it all, not an equation, but an utterly simple idea. And to me that idea, when we finally discover it, will be so compelling, so inevitable, that we will say to one another: 'Oh, how wonderful! How could it have been otherwise?'" — John Wheeler

"All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken." — Einstein, 1954

The twentieth century was a marvel in technological advancement. But except for the first quarter, the advancement of theoretical physics has little to show for itself. The principle of mass-energy equivalence, which is treated as the corner-stone principle of all nuclear interactions, binding energies of atoms and nucleons, etc., enters physics only as a corollary of the transformation equations between frames of reference in relative motion. Quantum Mechanics (QM) cannot justify this equivalence principle on its own, even though it is the theory concerned with the energy exchanges and interactions of fundamental particles. Quantum Field Theory (QFT) is the extension of QM (dealing with particles) over to fields. In spite of the reported advancements in QFT, there is very little experimental proof to back up many of its postulates, including the Higgs mechanism, bare mass/charge, infinite charge, etc. It seems almost impossible to think of QFT without thinking of particles which are accelerated and scattered in colliders. But interestingly, the particle interpretation has the best arguments against QFT.

Till recently, the Big Bang hypothesis held center stage in cosmology. Now Loop Quantum Cosmology (LQC), with its postulate of the "Big Bounce", is taking over. Yet there are two distinctly divergent streams of thought on this subject also. The confusion surrounding the interpretation of quantum physics is further compounded by modern proponents, who often search historical documents of discarded theories and come up with new meanings to back up their own theories. For example, the cosmological constant, first proposed and subsequently rejected by Einstein as the greatest blunder of his life, has made a comeback in cosmology. Bohr's complementarity principle, originally central to his vision of quantum particles, has been reduced to a corollary and is often identified with the frameworks in Consistent Histories.

There are a large number of different approaches or formulations to the foundations of Quantum Mechanics. There is Heisenberg's Matrix Formulation, Schrödinger's Wave-function Formulation, Feynman's Path Integral Formulation, the Second Quantization Formulation, Wigner's Phase Space Formulation, the Density Matrix Formulation, Schwinger's Variational Formulation, de Broglie-Bohm's Pilot Wave Formulation, the Hamilton-Jacobi Formulation, etc. There are several quantum mechanical pictures based on the placement of time-dependence: the Schrödinger Picture (time-dependent wave-functions), the Heisenberg Picture (time-dependent operators) and the Interaction Picture (time-dependence split). The different approaches are, in fact, modifications of the theory.
Each one introduces some prominent new theoretical aspect with new equations, which needs to be interpreted or explained. Thus, there are many different interpretations of Quantum Mechanics, which are very difficult to characterize. Prominent among them are the Realistic Interpretation (the wave-function describes reality), the Positivistic Interpretation (the wave-function contains only information about reality), and the famous Copenhagen Interpretation, which is the orthodox interpretation. Then there is Bohm's Causal Interpretation, Everett's Many-Worlds Interpretation, Mermin's Ithaca Interpretation, etc. With so many contradictory views, quantum physics is not a coherent theory, but truly weird.

General relativity breaks down when gravity is very strong: for example, when describing the big bang or the heart of a black hole. And the standard model has to be stretched to the breaking point to account for the masses of the universe's fundamental particles. The two main theories, quantum theory and relativity, are also incompatible, resting on entirely different notions of, for example, time. The incompatibility of quantum theory and relativity has made it difficult to unite the two in a single "Theory of Everything". There is an almost infinite number of proposed "Theories of Everything" or "Grand Unified Theories", but none of them is free from contradictions. There is a vertical split between those pursuing the superstrings route and others who follow the little-Higgs route.

String theory, which was developed with a view to harmonizing General Relativity with Quantum theory, is said to be a higher-order theory in which other models, such as supergravity and quantum gravity, appear as approximations. Unlike supergravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of these features three large spatial dimensions, broken supersymmetry, and a small cosmological constant. The features of string theory which are at least potentially testable - such as the existence of supersymmetry and cosmic strings - are not specific to string theory. In addition, the features that are specific to string theory - the existence of strings themselves - either do not lead to precise predictions or lead to predictions that are impossible to test with current levels of technology.

There are many unexplained questions relating to strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string? Or does it apply only to some section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to the momentum of the string? Does the momentum become undefined, as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (The string is said to be a one-dimensional object extended in space. Hence its position cannot be described by a finite set of numbers and thus cannot be determined by a finite set of measurements.)
How do Bell's inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the ground that some day we will find the answers. That someday has been a very long period indeed!

The energy "uncertainty" introduced in quantum theory combines with the mass-energy equivalence of special relativity to allow the creation of particle/anti-particle pairs by quantum fluctuations when the theories are merged. As a result, there is no self-consistent theory which generalizes the simple, one-particle Schrödinger equation into a relativistic quantum wave equation. Quantum Electrodynamics began not with a single relativistic particle, but with a relativistic classical field theory, such as Maxwell's theory of electromagnetism. This classical field theory was then "quantized" in the usual way, and the resulting quantum field theory is claimed to be a combination of quantum mechanics and relativity. However, this theory is inherently a many-body theory, with the quanta of the normal modes of the classical field having all the properties of physical particles. The resulting many-particle theory can be relatively easily handled if the particles are heavy on the energy scale of interest or if the underlying field theory is essentially linear. Such is the case for atomic physics, where the electron-volt energy scale for atomic binding is about a million times smaller than the energy required to create an electron-positron pair, and where the Maxwell theory of the photon field is essentially linear.

However, the situation is completely reversed for the theory of the quarks and gluons that compose the strongly interacting particles in the atomic nucleus. While the natural energy scale of these particles - the proton, the ρ meson, etc. - is on the order of hundreds of millions of electron-volts, the quark masses are about one hundred times smaller. Likewise, the gluons are quanta of a Yang-Mills field which obeys highly non-linear field equations. As a result, strong interaction physics has no known analytical approach, and numerical methods are said to be the only possibility for making predictions from first principles and developing a fundamental understanding of the theory. This theory of the strongly interacting particles is called quantum chromodynamics, or QCD, and its non-linearities have dramatic physical effects. One coherent, non-linear effect of the gluons is to "confine" both the quarks and gluons, so that none of these particles can be found directly as excitations of the vacuum. Likewise, a continuous "chiral symmetry", normally exhibited by a theory of light quarks, is broken by the condensation of chirally oriented quark/anti-quark pairs in the vacuum. The resulting physics of QCD is thus entirely different from what one would expect from the underlying theory, with the interaction effects having a dominant influence.
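The scale separations invoked in the last two paragraphs are easy to check. A back-of-envelope sketch of my own (round numbers assumed, order-of-magnitude only):

```python
# Atomic physics: binding energies (~eV) vs pair-creation threshold.
valence_binding_ev = 1.0          # typical valence-electron scale, ~1 eV
pair_threshold_ev = 2 * 0.511e6   # e+ e- pair threshold, eV
print(pair_threshold_ev / valence_binding_ev)   # ~ 1e6: "about a million"

# QCD: hadron scale vs light-quark masses (rough current-quark values).
proton_mev = 938.0
light_quark_mev = 3.5             # rough u/d average
print(proton_mev / light_quark_mev)  # a couple hundred; the text's "about
                                     # one hundred" is order-of-magnitude
```

The second ratio is the point of the paragraph: the proton's scale is not set by its quark masses, so the interactions must dominate.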
It is known that the much celebrated Standard Model of particle physics is incomplete, as it relies on certain arbitrarily determined constants as inputs - as "givens". Newer formulations of physics, such as Superstring Theory and M-theory, do allow mechanisms by which these constants can arise from the underlying model. However, the problem with these theories is that they postulate the existence of extra dimensions that are said to be either "extra-large" or "compactified" down to the Planck length, where they have no impact on the visible world we live in. In other words, we are told to believe blindly that extra dimensions must exist, but on a scale that we cannot observe. The existence of these extra dimensions has not been proved. However, they are postulated not to be fixed in size. Thus, the ratio between the compactified dimensions and our normal four space-time dimensions could cause some of the fundamental constants to change! If this could happen, then it might lead to physics that contradicts the universe we observe.

The concept of "absolute simultaneity", an off-shoot of quantum entanglement and non-locality, poses the gravest challenge to Special Relativity. But here also, a different interpretation is possible for the double-slit experiment, Bell's inequality, entanglement and decoherence, which can strip them of their mystic character.

The Ives-Stilwell experiment, conducted by Herbert E. Ives and G. R. Stilwell in 1938, is considered to be one of the fundamental tests of the special theory of relativity. The experiment was intended to use a primarily longitudinal test of light-wave propagation to detect and quantify the effect of time dilation on the relativistic Doppler effect of light waves received from a moving source. It was also intended to indirectly verify and quantify the more-difficult-to-detect transverse Doppler effect associated with detection at a substantial angle to the path of motion of the source - specifically, the effect associated with detection at a 90° angle to the path of motion of the source. In both respects it is believed that a longitudinal test can be used to indirectly verify an effect that actually occurs at a 90° transverse angle to the path of motion of the source. Based on recent theoretical findings on the relativistic transverse Doppler effect, some scientists have argued that such a comparison between longitudinal and transverse effects is fundamentally flawed and thus invalid, because it assumes compatibility between two different mathematical treatments. The experiment was designed to detect the predicted time-dilation-related redshift effect (an increase in wavelength with a corresponding decrease in frequency) of special relativity at the fundamentally longitudinal angles at or near 0° and 180°, even though the time-dilation effect is based on the transverse angle of 90°. Thus, the results of the said experiment do not prove anything. More specifically, it is argued that the mathematical treatment given by special relativity to the transverse Doppler effect is invalid and thus incompatible with the longitudinal mathematical treatment at distances close to the moving source. Any direct comparisons between the longitudinal and transverse mathematical predictions under the specified conditions of the experiment are invalid.
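For reference, the standard formulas under dispute here are compact. A sketch of my own (using the common convention in which theta is the angle seen in the observer's frame; the beta value is a rough stand-in for the canal-ray speeds Ives and Stilwell used):

```python
# Relativistic Doppler factors: f_obs / f_src = 1 / (gamma * (1 - beta*cos(theta))).
import math

beta = 0.005                       # roughly the ion speeds in the 1938 experiment
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)

def doppler(theta_deg):
    th = math.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(th)))

print(doppler(0.0))                # approaching: first-order blueshift
print(doppler(180.0))              # receding: first-order redshift
print(doppler(90.0))               # transverse: 1/gamma, pure time dilation
print((doppler(0.0) + doppler(180.0)) / 2.0, gamma)
# The mean of the two longitudinal factors equals gamma exactly: the
# second-order (time-dilation) shift of the line centroid is what the
# longitudinal setup was designed to extract.
```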
Cosmic rays are particles - mostly protons, but sometimes heavy atomic nuclei - that travel through the universe at close to the speed of light. Some cosmic rays detected on Earth are produced in violent events such as supernovae, but physicists still don't know the origins of the highest-energy particles, which are the most energetic particles ever seen in nature. As cosmic-ray particles travel through space, they lose energy in collisions with the low-energy photons that pervade the universe, such as those of the cosmic microwave background radiation. The special theory of relativity dictates that any cosmic rays reaching Earth from a source outside our galaxy will have suffered so many energy-shedding collisions that their maximum possible energy cannot exceed 5 × 10^19 electron-volts. This is known as the Greisen-Zatsepin-Kuzmin (GZK) limit. Over the past decade, the University of Tokyo's Akeno Giant Air Shower Array, with its 111 particle detectors, has detected several cosmic rays above the GZK limit. In theory, they could only have come from within our galaxy, avoiding an energy-sapping journey across the cosmos. However, astronomers cannot find any source for these cosmic rays in our galaxy. One possibility is that there is something wrong with the observed results. Another possibility is that Einstein was wrong. His special theory of relativity says that space is the same in all directions, but what if particles found it easier to move in certain directions? Then the cosmic rays could retain more of their energy, allowing them to beat the GZK limit. A recent report (Physics Letters B 668, 253) suggests that the fabric of space-time is not as smooth as Einstein and others have predicted.

In 1919, Eddington started his much-publicised eclipse expedition to observe the bending of light by a massive object (here the Sun) to verify the correctness of General Relativity. The experiment in question concerned the problem of whether light rays are deflected by gravitational forces, and took the form of astrometric observations of the positions of stars near the Sun during a total solar eclipse. The consequence of Eddington's theory-led attitude to the experiment, along with alleged data fudging, was claimed to favor Einstein's theory over Newton's, when in fact the data supported no such strong construction. In reality, both predictions were based on Einstein's own calculations, made in 1908 and again in 1911 using Newton's theory of gravitation. In 1911, Einstein wrote: "A ray of light going past the Sun would accordingly undergo deflection to an amount of 4×10⁻⁶ = 0.83 seconds of arc". He never clearly explained which fundamental principle of physics used in that paper - the one giving the value of 0.83 seconds of arc (dubbed the half deflection) - was wrong. He revised his calculation in 1916 to hold that light coming from a star far away from the Earth and passing near the Sun will be deflected by the Sun's gravitational field by an amount that is inversely proportional to the star's radial distance from the Sun (1.745" at the Sun's limb - dubbed the full deflection). Einstein never explained why he revised his earlier figures. Eddington was testing which of the two values calculated by Einstein was correct. Specifically, it has been alleged that a sort of data fudging took place when Eddington decided to reject the plates taken by one instrument (the Greenwich Observatory's Astrographic lens, used at Sobral), whose results tended to support the alternative "Newtonian" prediction of light bending (as calculated by Einstein). Instead, the data from the inferior (because of cloud cover) plates taken by Eddington himself at Principe and from the inferior (because of a reduced field of view) 4-inch lens used at Sobral were promoted as confirming the theory. While Eddington claimed that the result proved Einstein right and Newton wrong, an objective analysis of the actual photographs shows no such clear-cut result. Both theories are consistent with the data obtained.
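The two predictions Eddington set out to discriminate follow from a single formula: for a ray grazing the solar limb, the "half" (1911, Newtonian-style) deflection is 2GM/(c²R) and the full general-relativistic value is 4GM/(c²R). A quick arithmetic check of my own:

```python
# Light deflection at the solar limb, in arcseconds.
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
M = 1.989e30        # solar mass, kg
R = 6.957e8         # solar radius, m
c = 2.998e8         # speed of light, m/s

rad_to_arcsec = 180.0 / math.pi * 3600.0
print(2 * G * M / (c ** 2 * R) * rad_to_arcsec)   # ~ 0.88" (half deflection)
print(4 * G * M / (c ** 2 * R) * rad_to_arcsec)   # ~ 1.75" (full deflection)
# Einstein's 1911 paper quoted 0.83", presumably reflecting the values
# of the constants available at the time.
```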
It may be recalled that when someone said that there were only two persons in the world besides Einstein who understood relativity, Eddington had replied that he did not know who the other person was. This arrogance clouded his scientific acumen, as was confirmed by his distaste for the theories of Dr. S. Chandrasekhar, whose work subsequently won the Nobel Prize.

Heisenberg's uncertainty relation is still a postulate, though many of its predictions have been verified and found to be correct. Heisenberg never called it a principle; Eddington was the first to call it a principle, and others followed him. But as Karl Popper pointed out, uncertainty relations cannot be granted the status of a principle, because theories are derived from principles, whereas the uncertainty relation, being an inequality, does not lead to any theory. We can never derive an equation like the Schrödinger equation or the commutation relation from the uncertainty relation. Einstein's distinction between "constructive theories" and "principle theories" does not help, because this classification is not a scientific one. Serious attempts to build up quantum theory as a full-fledged Theory of Principle on the basis of the uncertainty relation have never been carried out. At best it can be said that Heisenberg created "room" or "freedom" for the introduction of some non-classical mode of description of experimental data. But these do not uniquely lead to the formalism of quantum mechanics.

There is a plethora of other postulates in Quantum Mechanics, such as the Operator postulate, the Hermitian property postulate, the Basis set postulate, the Expectation value postulate, the Time evolution postulate, etc. The list goes on and on, and includes such undiscovered entities as strings and such exotic particles as the Higgs particle (dubbed the "God particle") and the graviton, not to speak of squarks et al. Yet, till now it is not clear what quantum mechanics is about. What does it describe? It is said that a quantum mechanical system is completely described by its wave-function. From this it would appear that quantum mechanics is fundamentally about the behavior of wave-functions. But do scientists really believe that wave-functions describe reality? Even Schrödinger, the founder of the wave-function, found this impossible to believe! He writes (Schrödinger 1935): "That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message". Rather, he was worried about the "blurring" suggested by the spread-out character of the wave-function, which, he says, "affects macroscopically tangible and visible things, for which the term 'blurring' seems simply wrong". Schrödinger goes on to note that it may happen in radioactive decay that "the emerging particle is described … as a spherical wave … that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however, does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot …". He observed further that one can easily arrange, for example by including a cat in the system, "quite ridiculous cases", with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Resorting to epistemology cannot save such doctrines.
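On the point above that the uncertainty relation is an inequality rather than a generating principle: in the modern formalism it is itself a derived theorem, Robertson's inequality σ(A)·σ(B) ≥ |⟨[A, B]⟩|/2, obtained from the commutator, not the other way round. A small numerical check of my own (using spin-1/2 operators, since position and momentum cannot be represented exactly in finite dimensions):

```python
# Verify Robertson's inequality for Sx, Sy (with [Sx, Sy] = i Sz, hbar = 1)
# in a randomly chosen pure state.
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
print(np.allclose(sx @ sy - sy @ sx, 1j * sz))   # the commutation relation

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def stdev(op):
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(mean_sq - mean ** 2)

lhs = stdev(sx) * stdev(sy)
rhs = 0.5 * abs(np.vdot(psi, (sx @ sy - sy @ sx) @ psi))
print(lhs >= rhs - 1e-12, lhs, rhs)              # inequality holds
```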
The situation was further complicated by Bohr's interpretation of quantum mechanics. But how many scientists truly believe in his interpretation? Apart from the issues relating to the observer and observation, it is usually believed to address the measurement problem. Quantum mechanics is fundamentally about micro-particles such as quarks and strings, and not about the macroscopic regularities associated with measurement of their various properties. But if these entities are somehow not to be identified with the wave-function itself, and if the description is not about measurements, then where is their place in the quantum description? Where is the quantum description of the objects that quantum mechanics should be describing? This question has led to the issues raised in the EPR argument. As we will see, this question has not been settled satisfactorily.

The formulations of quantum mechanics describe the deterministic unitary evolution of a wave-function. This wave-function is never observed experimentally. The wave-function allows computation of the probability of certain macroscopic events being observed. However, there are no events, and no mechanism for creating events, in the mathematical model. It is this dichotomy between the wave-function model and observed macroscopic events that is the source of the various interpretations of quantum mechanics. In classical physics, the mathematical model relates to the objects we observe. In quantum mechanics, the mathematical model by itself never produces an observation. We must interpret the wave-function in order to relate it to experimental observation. Often these interpretations are related to the personal and socio-cultural biases of the scientist, which are given weight based on his standing in the community. Thus, the arguments of Einstein against Bohr's position have roots in Lockean notions of perception, which oppose the Kantian metaphor of the "veil of perception" that pictures the apparatus of observation as a pair of spectacles through which a highly mediated sight of the world can be glimpsed. According to Kant, "appearances" simply do not reflect an independently existing reality. They are constituted through the act of perception in such a way as to conform to the fundamental categories of sensible intuition. Bohr maintained that "measurement has an essential influence on the conditions on which the very definition of physical quantities in question rests" (Bohr 1935, 1025).

In modern science, there is no unambiguous and precise definition of the words time, space, dimension, numbers, zero, infinity, charge, quantum particle, wave-function, etc. The operational definitions have been changed from time to time to take into account newer facts that facilitate justification of the new "theory". For example, the fundamental concept of quantum mechanical theory is the concept of "state", which is supposed to be completely characterized by the wave-function. However, till now it is not certain what a wave-function is. Is the wave-function real - a concrete physical object - or is it something like a law of motion, or an internal property of particles, or a relation among spatial points? Or is it merely our current information about the particles?
Or is it merely our current information about the particles? Quantum mechanical wave-functions cannot be represented mathematically in anything smaller than a 10- or 11-dimensional space called configuration space. This is contrary to experience, and the existence of higher dimensions is still in the realm of speculation. If we accept the views of modern physicists, then we have to accept that the universe's history plays itself out not in the three-dimensional space of our everyday experience, or the four-dimensional space-time of Special Relativity, but rather in this gigantic configuration space, out of which the illusion of three-dimensionality somehow emerges. Thus, what we see and experience is illusory! Maya? The measurement problem in quantum mechanics is the unresolved problem of how (or whether) wave-function collapse occurs. The inability to observe this process directly has given rise to different interpretations of quantum mechanics, and poses a key set of questions that each interpretation must answer. If it is postulated that a particle does not have a value before measurement, there has to be conclusive evidence to support this view. The wave-function in quantum mechanics evolves according to the Schrödinger equation into a linear superposition of different states, but actual measurements always find the physical system in a definite state. Any future evolution is based on the state the system was "discovered" to be in when the measurement was made, implying that the measurement "did something" to the process under examination. Whatever that "something" may be does not appear to be explained by the basic theory. Further, quantum systems described by linear wave-functions should be incapable of non-linear behavior. But chaotic quantum systems have been observed. Though chaos appears to be probabilistic, it is actually deterministic. Further, if the collapse causes the quantum state to jump from a superposition of states to a fixed state, it must be either an illusion or an approximation to the reality at the quantum level. We can rule out illusion, as it is contrary to experience. In that case, there is nothing to suggest that events at the quantum level are not deterministic. We may very well be able to determine the outcome of a quantum measurement, provided we set up an appropriate measuring device! The operational definitions and the treatment of the term wave-function used by researchers in quantum theory progressed through intermediate stages. Schrödinger viewed the wave-function associated with the electron as the charge density of an object smeared out over an extended (possibly infinite) volume of space. He did not regard the waveform as real, nor did he make any comment on waveform collapse. Max Born interpreted it as the probability distribution in the space of the electron's position. He differed from Bohr in describing quantum systems as being in a state described by a wave-function which lives longer than any specific experiment. He considered the waveform an element of reality. According to this view, also known as the State Vector Interpretation, measurement implies the collapse of the wave-function. Once a measurement is made, the wave-function ceases to be smeared out over an extended volume of space, and the range of possibilities collapses to the known value. However, the nature of the waveform collapse is problematic, and the equations of quantum mechanics do not cover the collapse itself.
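In conventional notation (quoted here in its standard textbook form), the dichotomy looks like this:

$$ \psi \;=\; \sum_n c_n\,\phi_n \;\xrightarrow{\ \text{measurement}\ }\; \phi_k \quad\text{with probability}\quad P(k) = |c_k|^2 $$

The Schrödinger equation governs only the left-hand side, the smooth linear evolution of the superposition; the jump to a single outcome φ_k and the probability rule are appended by hand, which is exactly the gap in the formalism that the text describes.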
The view known as "Consciousness Causes Collapse" regards measuring devices also as quantum systems, for consistency. The measuring device changes state when a measurement is made, but its wave-function does not collapse. The collapse of the wave-function can be traced back to its interaction with a conscious observer. Let us take the example of measurement of the position of an electron. The waveform does not collapse when the measuring device initially measures the position of the electron. The human eye can also be considered a quantum system; thus, the waveform does not collapse when the photon from the electron interacts with the eye. The resulting chemical signals to the brain can also be treated as a quantum system, hence they are not responsible for the collapse of the waveform. However, a conscious observer always sees a particular outcome. The waveform collapse can be traced back to its first interaction with the consciousness of the observer. This begs the question: what is consciousness? At which stage in the above sequence of events did the waveform collapse? Did the universe behave differently before life evolved? If so, how, and what is the proof for that assumption? No answers. The Many-worlds Interpretation tries to overcome the measurement problem in a different way. It regards all possible outcomes of measurement as "really happening", but holds that somehow we select only one of those realities (or, in their words, universes). But this view clashes with the second law of thermodynamics. The direction of the thermodynamic arrow of time is defined by the special initial conditions of the universe, which provides a natural solution to the question of why entropy increases in the forward direction of time. But what is the cause of the time asymmetry in the Many-worlds Interpretation? Why do universes split in the forward time direction? It is said that entropy increases after each universe-branching operation – the resultant universes are slightly more disordered. But some findings connected with decoherence contradict this view, through what is called macroscopic quantum coherence. If particles can be isolated from the environment, we can view multiple interference superposition terms as a physical reality in this universe. For example, let us consider the case of an electric current being made to flow in opposite directions simultaneously. If the interference terms had really escaped to a parallel universe, then we should never be able to observe them both as physical reality in this universe. Thus, this view is questionable. The Transactional Interpretation accepts the statistical nature of the waveform, but breaks it into an "offer" wave and an "acceptance" wave, both of which are treated as real. Probabilities are assigned to the likelihood of interaction of the offer waves with other particles. If a particle interacts with the offer wave, then it "returns" a confirmation wave to complete the transaction. Once the transaction is complete, energy, momentum, etc., are transferred in quanta as per the normal probabilistic quantum mechanics. Since Nature always takes the shortest and simplest path, the transaction is expected to be completed at the first opportunity. But once that happens, classical probability, and not quantum probability, will apply. Further, it cannot explain how virtual particles interact. Thus, some people defer the waveform collapse to some unknown time.
Since the confirmation wave in this theory is smeared all over space, it cannot explain when the transaction begins or is completed, nor how the confirmation wave determines which offer wave it matches up to. Quantum decoherence, which was proposed in the context of the many-worlds interpretation but has also become an important part of some modern updates of the Copenhagen interpretation based on consistent histories, allows physicists to identify the fuzzy boundary between the quantum micro-world and the world where classical intuition is applicable. But it does not describe the actual process of wave-function collapse. It only explains the conversion of the quantum probabilities (which are able to interfere) into ordinary classical probabilities. Some people have tried to reformulate quantum mechanics as probability or logic theories. In some theories, the requirement for probability values to be real numbers has been relaxed. The resulting non-real probabilities correspond to the quantum waveform. But till now a fully developed theory is missing. Hidden Variables Theories treat quantum mechanics as incomplete: until a more sophisticated theory underlying quantum mechanics is discovered, it is not possible to make any definitive statement. This view regards quantum objects as having properties with well-defined values that exist separately from any measuring devices. According to this view, chance plays no role at all and everything is fully deterministic. Every material object invariably does occupy some particular region of space. This theory takes the form of a single set of basic physical laws that apply in exactly the same way to every physical object that exists. The waveform may be a purely statistical creation, or it may have some physical role. The Causal Interpretation of Bohm and its later development, the Ontological Interpretation, emphasize "beables" rather than "observables", in contradistinction to the predominantly epistemological approach of the standard model. This interpretation is causal, but non-local and non-relativistic, while being capable of being extended beyond the domain of the current quantum theory in several ways. There are divergent views on the nature of reality and the role of science in dealing with reality. Measuring a quantum object was supposed to force it to collapse from a waveform into one position. According to quantum mechanical dogma, this collapse makes objects "real". But new verifications of "collapse reversal" suggest that we can no longer assume that measurements alone create reality. It is possible to take a "weak" measurement of a quantum particle, partially collapsing the quantum state, then "unmeasure" it by altering certain properties of the particle, and perform the same weak measurement again. In one such experiment reported in Nature News, the particle was found to have returned to its original quantum state, just as if no measurement had ever been taken. This implies that we cannot assume that measurements create reality, because it is possible to erase the effects of a measurement and start again. Newton gave his laws of motion in the second chapter, entitled "Axioms, or Laws of Motion", of his book The Mathematical Principles of Natural Philosophy, published in Latin in 1687. The second law says that the change of motion is proportional to the motive force impressed. Newton relates the force to the change of momentum (not to the acceleration, as most textbooks do).
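Written out, the distinction the author draws is the standard one:

$$ \vec{F} \;=\; \frac{d\vec{p}}{dt} \;=\; \frac{d(m\vec{v})}{dt} \;=\; m\,\frac{d\vec{v}}{dt} \;+\; \vec{v}\,\frac{dm}{dt} $$

Only when the mass is constant does the second term vanish and the law reduce to the familiar textbook form F = ma; Newton's own momentum formulation is the more general one (it survives, for instance, for a rocket losing mass as it burns fuel).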
Momentum is accepted as one of two quantities that, taken together, yield complete information about a dynamic system at any instant. The other quantity is position, which is said to determine the strength and direction of the force. Since then the earlier ideas have changed considerably. The pairing of momentum and position is no longer viewed in the Euclidean space of three dimensions. Instead, it is viewed in phase space, which is said to have six dimensions: three for position and three for momentum. But here the term dimension has actually been used for direction, which is not a scientific description. In fact, most of the terms used by modern scientists have not been precisely defined – they have only an operational definition, which is not only incomplete, but also does not stand scientific scrutiny, though it is often declared "reasonable". This has been done not by chance, but by design, as modern science is replete with such instances. For example, we quote from the paper of Einstein and his colleagues Boris Podolsky and Nathan Rosen, which is known as the EPR argument (Phys. Rev. 47, 777 (1935)): "A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur. Regarded not as necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality." Prima facie, what Einstein and his colleagues argued was that under ideal conditions, observation (which includes measurement) functions like a mirror reflecting an independently existing, external reality. The specific criterion for describing reality characterizes it in terms of objectivity, understood as independence from any direct measurement. This implies that, when a direct measurement of physical reality occurs, it merely passively reflects, rather than actively constitutes, the object under observation. It further implies that ideal observations reflect not only the state of the object during observation, but also its state before and after observation – just like a photograph. A photograph has an identity separate from, and fixed relative to, the object whose photograph has been taken: while the object may be evolving in time, the photograph depicts a time-invariant state. Bohr and Heisenberg opposed this notion on the basis of the Kantian view, by describing acts of observation and measurement more generally as constitutive of phenomena. More on this will be discussed later. The fact that our raw sense impressions and experiences are compatible with widely differing concepts of the world has led some philosophers to suggest that we should dispense with the idea of an "objective world" altogether and base our physical theories on nothing but direct sense impressions. Berkeley expressed the positivist identification of sense impressions with objective existence by the famous phrase "esse est percipi" (to be is to be perceived). This has led to the changing idea of "objective reality".
However, if we can predict with certainty "the value of a physical quantity", it only means that we have partial, not complete, "knowledge" – the "total" result of "all" measurements – of the system. It has not been shown that knowledge is synonymous with reality. We may have "knowledge" of a mirage, but it is not real. Based on the result of our measurement, we may have knowledge that something is not real, but only apparent. The partial definition of reality is not correct, as it talks about "the value of a physical quantity" and not "the values of all physical quantities". We can predict with certainty "the value of a physical quantity" such as position or momentum, which are classical concepts, without in any way disturbing the system. This has been accepted for past events by Heisenberg himself, as discussed in later pages. Further, measurement is a process of comparison between similars, and not of bouncing light off something to disturb it. This has been discussed in detail while discussing the measurement problem. We cannot classify an object being measured (observed) separately from the apparatus performing the measurement (though there is a lot of confusion in this area). They must belong to the same class. This is clearly shown in the quantum world, where it is accepted that we cannot divorce the property we are trying to measure from the type of observation we make: the property is dependent on the type of measurement, and the measuring instrument must be designed to use that particular property. However, this interpretation can be misleading and may not have anything to do with reality, as described below. Such limited treatment of the definition of "reality" has given the authors the freedom to manipulate the facts to suit their convenience. Needless to say, the conclusions arrived at in that paper have been successively proved wrong by John S. Bell, Alain Aspect and others, though for a different reason. In the double-slit experiment, it is often said that whether the electron has gone through hole No. 1 or hole No. 2 is meaningless: the electron, till we observe which hole it goes through, exists in a superposition state of equal measures of the probability wave for going through hole 1 and through hole 2. This is a highly misleading notion, as after it went through, we can always see its imprint on the photographic plate at a particular position, and that is real. Before such observation we do not know which hole it went through, but there is no reason to presume that it went through a mixed state of both holes. Our inability to measure or know cannot change physical reality. It can only limit our knowledge of such physical reality. This aspect and the interference phenomenon have been discussed elaborately in later pages. If we accept the modern view of superposition of states, we land in many complex situations. Suppose Schrödinger's cat is somewhere in deep space, and a team of astronauts is sent to measure its state. According to the Copenhagen interpretation, the astronauts, by opening the box and performing the observation, have now put the cat into a definite quantum state; say they find it alive. For them, the cat is no longer in a superposition state of equal measures of the probabilities of being alive or dead. But for their Earth-bound colleagues, the cat and the astronauts on board the space shuttle who know the state of the cat (did they change to a quantum state?) are still in a probability-wave superposition state of live cat and dead cat.
Finally, when the astronauts communicate with a computer down on Earth, they pass on the information, which is stored in the magnetic memory of the computer. After the computer receives the information, but before its memory is read by the Earth-bound scientists, the computer is part of the superposition state for the Earth-bound scientists. Finally, in reading the computer output, the Earth-bound scientists reduce the superposition state to a definite one. Reality, on this account, springs into being – or rather passes from being to becoming – only after we observe it. Is the above description sensible? What really happens is that the cat interacts with the particles around it – protons, electrons, air molecules, dust particles, radiation, etc. – which has the effect of "observing" it. The state is accessed by each of the conscious observers (as well as by the other particles) by intercepting on the retina a small fraction of the light that has interacted with the cat. Thus, in reality, the field set up at the observer's retina is perturbed, and the impulse is carried to the brain, where it is compared with previous similar impressions. If the impression matches any previous impression, we cognize it as such. Only thereafter do we cognize the result of the measurement: the cat is alive or dead at the moment of observation. Thus, the process of measurement is carried out constantly without disturbing the system, and the evolution of the observed has nothing to do with the observation. This has been elaborated while discussing the measurement problem. Further, someone put the cat and the deadly apparatus in the box in the first place. Thus, according to the generally accepted theory, the wave-function had collapsed for him at that time, and the information is available to us. Only afterwards is the evolutionary state of the cat – whether living or dead – unknown to us, including to the person who put the cat in the box in the first place. But according to the above description, the cat, whose wave-function has collapsed for the person who put it in the box, again goes into a "superposition of states of both alive and dead" and needs another observation – directly, or indirectly through a set of apparatus – to describe its proper state at any subsequent time. This implies that after the second observation the cat again goes into a "superposition of states of both alive and dead" till it is again observed, and so on ad infinitum till it is found dead. But then the same story repeats for the dead cat – this time about its state of decomposition! The cat example shows three distinct aspects: the state of the cat, i.e., dead or alive at the moment of observation (which information is time-invariant, as it is fixed); the state of the cat prior to and after the moment of observation (which information is time-variant, as the cat will die at some unspecified time due to unspecified causes); and the cognition of this information by a conscious observer, which is time-invariant but concerns the time evolution of the states of the cat. In his book "Popular Astronomy", Prof. Bigelow says that Force, Mass, Surface, Electricity, Magnetism, etc., "are apprehended only during instantaneous transfer of energy". He further adds: "Energy is the great unknown quantity, and its existence is recognized only during its state of change". This is an eternal truth, and we endorse the above view. It is well known that the Universe is so called because everything in it is ever moving.
Thus the view that observation describes not only the state of the object during observation, but also its state before and after it, is misleading. The result of measurement is the description of a state frozen in time, thus a fixed quantity. Its time evolution is not self-evident in the result of measurement. It acquires meaning only after it is cognized by a conscious agent, as consciousness is time-invariant. Thus, the observable, the observation and the observer depict three aspects – confined mass, displacing energy and revealing radiation – of a single phenomenon depicting reality. Quantum physics has to explain these phenomena scientifically. We will discuss this later. When one talks about what an electron is "doing", one implies what sort of a wave function is associated with it. But the wave function is not a physical object in the sense that a proton or an electron or a billiard ball is. In fact, the rules of quantum theory do not even allot a unique wave function to a given state of motion, since multiplying the wave function by a factor of modulus unity does not change any physical consequence (replacing ψ by a phase factor times ψ leaves |ψ|², and hence every prediction, unchanged). Thus, Heisenberg opined that "the atoms or elementary particles are not as real; they form a world of potentialities or possibilities rather than one of things or facts". This shows the helplessness of the physicists to explain quantum phenomena in terms of the macro world. The activities of the elementary particles appear essential as long as we believe in the independent existence of fundamental laws that we can hope to understand better. Reality cannot differ from person to person or from measurement to measurement, because it has existence independent of these factors. The elements of our "knowledge" are actually derived from our raw sense impressions, by automatically interpreting them in conventional terms based on our earlier impressions. Since these impressions vary, our responses to the same data also vary. Yet, unless an event is observed, it has no meaning by itself. Thus, it can be said that while observables have a time evolution independent of observation, they depend upon observation for any meaningful description in relation to others. For this reason individual responses/readings to the same object may differ, based on earlier (at a different time, and maybe a different place) experience and environment. As the earlier example of the cat shows, this requires a definite link between the observer and the observed – a split (from time evolution) and a link (between the measurement representing its state and the consciousness of the observer, for describing such a state in communicable language). This link varies from person to person. At every interaction, reality is not "created"; rather, the "presently evolved state" of the same reality gets described and communicated. Based on our earlier experiences or experimental set-up, it may return different responses or readings. There is no proof to show that a particle does not have a value before measurement. The static attributes of a proton or an electron, such as its charge or its mass, have well-defined values and will remain so before and after observation, even though the particle may change its position or composition due to the effect of the forces acting on it – spatial translation. The dynamical attributes will continue to evolve – temporal translation. The life cycles of stars and galaxies will continue till we notice their extinction in a supernova explosion. The moon will exist even when we are not observing it.
The proof for this is that their observed positions after a given time match our theoretical calculations. Before measurement, we do not know the "present" state. Since the present is a dynamical entity describing the time evolution of the particle, it evolves continuously from past to future. This does not mean that the observer creates reality – after observation at a given instant, he only discovers the spatial and temporal state of its static and dynamical aspects. The prevailing notion of superposition (an unobserved proposition) only means that we do not know how the actual fixed value after measurement has been arrived at (described elaborately in later pages), as the same value could be arrived at in an infinite number of ways. We superimpose our ignorance on the particle and claim that the value of that particular aspect is undetermined, whereas in reality the value might already have been fixed (the cat might have died). The observer cannot influence the state of the observed (the moment of death of the cat) before or after observation. He can only report the "present state". Quantum mechanics has failed to describe the collapse mechanism satisfactorily. In fact, many models (such as the Copenhagen interpretation) treat the concept of collapse as nonsense. The few models that accept collapse as real are incomplete and fail to come up with a satisfactory mechanism to explain it. In 1932, John von Neumann argued that if electrons are ordinary objects with inherent properties (which would include hidden variables), then the behavior of those objects must contradict the predictions of quantum theory. Because of his stature in those days, no one contradicted him. But in 1952, David Bohm showed that hidden variables theories are plausible if super-luminal velocities are possible. Bohm's mechanics has returned predictions equivalent to those of other interpretations of quantum mechanics; thus, it cannot be discarded lightly. If Bohm is right, then the Copenhagen interpretation and its extensions are wrong. There is no proof to show that the characteristics of particle states are randomly chosen instantaneously at the time of observation or measurement. Since the value remains fixed after measurement, it is reasonable to assume that it remained so before measurement also. For example, if we measure the temperature of a particle by a thermometer, it is generally assumed that a little heat is transferred from the particle to the thermometer, thereby changing the state of the particle. This is an absolutely wrong assumption. No particle in the Universe is perfectly isolated. A particle inevitably interacts with its environment. The environment might very well be a man-made measuring device. Introduction of the thermometer does not change the environment, as all objects in the environment are either in thermal equilibrium, or heat is flowing from higher temperature to lower temperature. In the former case there is no effect. In the latter case also it does not change anything, as the thermometer is in thermal equilibrium with the environment. Thus the rate of heat flow from the particle to the thermometer remains constant – the same as that from the particle to its environment. When exposed to heat, the expansion of mercury shows a uniform gradient in proportion to the temperature of its environment. This expansion is sub-divided over a chosen range and taken as the unit.
The expansion of mercury when exposed to the heat flow from a particle, till both reach the same temperature, is compared with this unit, and we get a scalar quantity, which we call the result of measurement at that instant. Similarly, the heat flow to the thermometer does not affect the object, as it was in any case losing heat at a steady rate and continued to do so even after measurement. This is proved by the fact that the thermometer reading does not change after some time (all other conditions remaining unchanged). This is common to all measurements. Since the scalar quantity returned as the result of measurement is a number, it is sometimes said that numbers are everything. While there is no proof that measurement determines reality, there is proof to the contrary. Suppose we have a random group of people and we measure three of their properties: sex, height and skin-color. They can be male or female, tall or short, and their skin-color can be fair or brown. If we take 30 people at random and measure sex and height first (male and tall), and then skin-color (fair) for the same sample, we get one result (how many tall men are fair). If we measure sex and skin-color first (male and fair), and then height (tall), we get a different result (how many fair males are tall). If we measure skin-color and height first (fair and tall), and then sex (male), we get yet another result (how many fair and tall persons are male). The order of measurement apparently changes the result of measurement. But the result of measurement really does not change anything. The tall will continue to be tall and the fair will continue to be fair. The male and female will not change sex either. This proves that measurement does not determine reality, but only exposes selected aspects of reality in a desired manner – depending upon the nature of the measurement (see the sketch below). It is also wrong to say that whenever any property of a microscopic object affects a macroscopic object, that property is observed and becomes physical reality. We have experienced situations where an insect bite is not really felt (a measure of pain) immediately, even though it affects us. A viral infection does not affect us immediately. We measure position, which is the distance from a fixed reference point in different coordinates, by a tape of unit distance from one end point to the other, or by its sub-divisions. We measure mass by comparing it with another unit mass. We measure time, which is the interval between events, by a clock, whose ticks are repetitive events of equal duration (interval) which we take as the unit, and so on. There is no proof to show that this principle is not applicable to the quantum world. These measurements are possible when both the observer with the measuring instrument and the object to be measured are in the same frame of reference (state of motion) – thus without disturbing anything. For this reason results of measurement are always scalar quantities – multiples of the unit. Light is only an accessory for knowing the result of measurement, not a pre-condition for measurement. Simultaneous measurement of both position and momentum is not possible, which is correct, though for different reasons, explained in later pages. Incidentally, both position and momentum are regarded as classical concepts.
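The people-counting argument above can be played out in a few lines of code. This is a minimal, illustrative sketch (the sample, the property names and the counting helper are my own additions, not from the original text). It shows that for fixed, pre-existing classical properties, the final conjunction count is the same in whatever order the "measurements" are applied – the sense in which measurement merely exposes, rather than creates, the values:

```python
import random

# A hypothetical sample of 30 people, each with fixed, pre-existing properties.
random.seed(1)
people = [{"sex": random.choice(["male", "female"]),
           "height": random.choice(["tall", "short"]),
           "skin": random.choice(["fair", "brown"])}
          for _ in range(30)]

def count_after_filters(sample, order):
    """Apply the 'measurements' (filters) one at a time, in the given order,
    and return how many people survive all of them."""
    for key, value in order:
        sample = [p for p in sample if p[key] == value]
    return len(sample)

sex, height, skin = ("sex", "male"), ("height", "tall"), ("skin", "fair")
print(count_after_filters(people, [sex, height, skin]))  # sex, height, then skin
print(count_after_filters(people, [sex, skin, height]))  # sex, skin, then height
print(count_after_filters(people, [skin, height, sex]))  # skin, height, then sex
# All three counts come out identical: classical filters commute, because
# the values existed before we looked.
```

The contrast with quantum mechanics, where successive measurements of non-commuting quantities need not behave this way, is exactly the point at issue in the text.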
In classical mechanics and electromagnetism, the properties of a point mass or of a field are described by real numbers, or by functions defined on two- or three-dimensional sets. These have direct, spatial meaning, and in these theories there seems to be less need to provide a special interpretation for those numbers or functions. The accepted mathematical structure of quantum mechanics, on the other hand, is based on fairly abstract mathematics (?), such as Hilbert spaces (the quantum mechanical counterpart of the classical phase-space) and operators on those Hilbert spaces. Here again, there is no precise definition of space. The proof for the existence of, and the justification for, the different classifications of "space" and "vacuum" are left unexplained. When developing new theories, physicists tend to assume that quantities such as the strength of gravity, the speed of light in vacuum or the charge on the electron are all constant. The so-called universal constants are neither self-evident in Nature nor have they been derived from fundamental principles (though there are some claims to the contrary, each with some problem). They have been deduced mathematically, and their values have been determined by actual measurement. For example, the fine-structure constant has been postulated in QED, but its value has been derived only experimentally (we have derived the measured value from fundamental principles). Yet the regularity with which such constants of Nature have been discovered points to some important principle underlying them. But are these quantities really constant? The velocity of light varies according to the density of the medium. The acceleration due to gravity "g" varies from place to place. We have measured the value of "G" from Earth, but we do not know whether the value is the same beyond the solar system. The current value of the distance between the Sun and the Earth has been pegged at 149,597,870.696 kilometers. A recent (2004) study shows that the Earth is moving away from the Sun at 15 cm per annum. Since this value is 100 times greater than the measurement error, something must really be pushing the Earth outwards. While one possible explanation for this phenomenon is that the Sun is losing mass via fusion and the solar wind, alternative explanations include the influence of dark matter and a changing value of G. We will explain this later. Einstein proposed the Cosmological Constant to allow static homogeneous solutions to his equations of General Relativity in the presence of matter. When the expansion of the Universe was discovered, it was thought to be unnecessary, forcing Einstein to declare that it was his greatest blunder. There have been a number of subsequent episodes in which a non-zero cosmological constant was put forward as an explanation for a set of observations and later withdrawn when the observational case evaporated. Meanwhile, particle theorists postulate that the cosmological constant can be interpreted as a measure of the energy density of the vacuum. This energy density is the sum of a number of apparently unrelated contributions: potential energies from scalar fields and zero-point fluctuations of each field-theory degree of freedom, as well as a bare cosmological constant λ₀, each of magnitude much larger than the upper limits on the cosmological constant as measured now.
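A back-of-envelope comparison, using standard textbook values (a Planck-scale cutoff for the zero-point energy, and the measured dark-energy density), shows the scale of the mismatch quoted below:

$$ \rho_{\text{theory}} \;\sim\; \frac{c^5}{\hbar G^2} \;\approx\; 5\times10^{96}\ \text{kg/m}^3, \qquad \rho_{\text{observed}} \;\approx\; 10^{-26}\ \text{kg/m}^3 $$

The ratio is roughly 10¹²², which is the origin of the famous "120 orders of magnitude" figure; the exact number depends on where the cutoff is placed.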
Indeed, the observed vacuum energy is vanishingly small in comparison to this theoretical prediction: a discrepancy of 120 orders of magnitude between the theoretical and observational values of the cosmological constant. This has led some people to postulate an unknown mechanism that would set it precisely to zero. Others postulate a mechanism to suppress the cosmological constant by just the right amount to yield an observationally accessible quantity. However, all agree that this elusive quantity does play an important dynamical role in the Universe. The confusion can be settled if we accept the changing value of G, which can be related to the energy density of the vacuum. Thus, the so-called constants of Nature could also be thought of as equilibrium points, where different forces acting on a system in different proportions balance each other. For example, let us consider the Libration points called L4 and L5, which are said to be places that gravity forgot. They are vast regions of space, sometimes millions of kilometers across, in which celestial forces cancel out gravity and trap anything that falls into them. The Libration points, known by Sanskrit names in earlier times, were rediscovered in 1772 by the mathematician Joseph-Louis Lagrange. He calculated that the Earth's gravitational field neutralizes the gravitational pull of the Sun at five regions in space, making them the only places near our planet where an object is truly weightless. Astronomers call them Libration points, also Lagrangian points, or L1, L2, L3, L4 and L5 for short. Of the five Libration points, L4 and L5 are the most intriguing. Two such Libration points sit in the Earth's orbit also, one marching ahead of our planet, the other trailing along behind. Lying 150 million kilometers away along the line of Earth's orbit, L4 circles the Sun about 60 degrees (slightly more, according to our calculation) in front of the planet, while L5 lies at the same angle behind. They are the only ones that are stable. While a satellite parked at L1 or L2 will wander off after a few months unless it is nudged back into place (like the SOHO satellite), any object at L4 or L5 will stay put due to a complex web of forces (like the asteroids). Evidence for such gravitational potholes appears around other planets too. In 1906, Max Wolf discovered an asteroid outside the main belt between Mars and Jupiter, and recognized that it was sitting at Jupiter's L4 point. (The mathematics for L4 uses the "brute force approach", making it approximate.) Wolf named the asteroid Achilles, leading to the tradition of naming these asteroids after characters from the Trojan wars. The realization that Achilles would be trapped in its place and forced to orbit with Jupiter, never getting much closer or further away, started a flurry of telescopic searches for more examples. There are now more than 1000 asteroids known to reside at each of Jupiter's L4 and L5 points. Of these, about two-thirds reside at L4, while the remaining one-third are at L5. Perturbations by the other planets (primarily Saturn) cause these asteroids to oscillate around L4 and L5 by about 15–20°, and at inclinations of up to 40° to the orbital plane. These oscillations generally take between 150 and 200 years to complete. Such planetary perturbations may also be the reason why so few Trojans have been found around other planets. Searches for "Trojan" asteroids around other planets have met with mixed results: Mars has 5 of them, at L5 only; Saturn seemingly has none; Neptune has two.
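The equilateral-triangle geometry of L4 and L5 can be made concrete in a few lines. This is a minimal sketch assuming a circular orbit (the function name and setup are illustrative only):

```python
import math

AU_KM = 149_597_870.7  # mean Sun-Earth distance in km

def triangular_point(r_km, ahead=True):
    """Coordinates of L4 (60 deg ahead) or L5 (60 deg behind) for a body
    on a circular orbit of radius r_km; Sun at origin, body on the x-axis."""
    theta = math.radians(60.0 if ahead else -60.0)
    return (r_km * math.cos(theta), r_km * math.sin(theta))

x, y = triangular_point(AU_KM)  # Earth's L4
print(f"L4: x = {x:,.0f} km, y = {y:,.0f} km")
print(f"distance from Earth: {math.dist((AU_KM, 0.0), (x, y)):,.0f} km")
# The Sun, the Earth and L4 form an equilateral triangle, so the distance
# from Earth is itself ~1 AU (about 150 million km), as stated in the text.
```

The 60° angle is the classical result for the triangular points; the "slightly more" of the author's own calculation is his claim, not reproduced here.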
The asteroid belt surrounds the inner Solar system like a rocky, ring-shaped moat, extending out from the orbit of Mars to that of Jupiter. But there are voids in that moat at distinct locations, called Kirkwood gaps, that are associated with orbital resonances with the giant planets – places where the orbital influence of Jupiter is especially potent. Any asteroid unlucky enough to venture into one of these locations will follow a chaotic orbit and will be perturbed and ejected from the cozy confines of the belt, often winding up on a collision course with one of the inner, rocky planets (such as Earth) or the Moon. But Jupiter's pull cannot account for the extent of the belt's depletion seen at present, or for the spotty distribution of asteroids across the belt – unless there was a migration of planets early in the history of the solar system. According to a report (Nature 457, 1109–1111, 26 February 2009), the observed distribution of main-belt asteroids does not uniformly fill even those regions that are dynamically stable over the age of the Solar System. There is a pattern of excess depletion of asteroids, particularly just outward of the Kirkwood gaps associated with the 5:2, the 7:3 and the 2:1 Jovian resonances (see the sketch below). These features are not accounted for by planetary perturbations in the current structure of the Solar System, but are consistent with dynamical ejection of asteroids by the sweeping of gravitational resonances during the migration of Jupiter and Saturn. Some researchers designed a computer model of the asteroid belt under the influence of the outer "gas giant" planets, allowing them to test the distribution that would result from changes in the planets' orbits over time. A simulation wherein the orbits remained static did not agree with observational evidence: there were places where there should have been a lot more asteroids than were seen. On the other hand, a simulation with an early migration of Jupiter inward and Saturn outward – the result of interactions with lingering planetesimals (small bodies) from the creation of the solar system – fit the observed layout of the belt much better. The uneven spacing of asteroids is readily explained by this planet-migration process, which others have also worked on. In particular, if Jupiter had started somewhat farther from the Sun and then migrated inward toward its current location, the gaps it carved into the belt would also have inched inward, leaving the belt looking much like it does now. The good agreement between the simulated and observed asteroid distributions is quite remarkable. One significant question not addressed in this paper is the pattern of migration – whether the asteroid belt can be used to rule out one of the presently competing theories of migratory patterns. The new study deals with the speed at which the planets' orbits have changed. The simulation presumes a rather rapid migration of a million or two million years, but other models of Neptune's early orbital evolution tend to show that migration proceeds much more slowly, over many millions of years. We hold this period to be 4.32 million years for the Solar system. This example shows that the orbits of planets, which are stabilized by the balancing of the centripetal force and gravity, might be changing from time to time. This implies that either the masses of the Sun and the planets, or their distances from each other, or both, are changing over long periods of time (which is true). It can also mean that G is changing.
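Where those resonances fall can be checked with Kepler's third law alone. The sketch below (the function name and the quoted semi-major axis of Jupiter are my own additions) computes the gap locations for the three resonances named in the Nature report:

```python
A_JUPITER = 5.204  # semi-major axis of Jupiter, in AU (assumed value)

def resonance_axis(p, q, a_planet=A_JUPITER):
    """Semi-major axis (in AU) at which an asteroid completes p orbits
    for every q orbits of the planet.  From Kepler's third law,
    T_a / T_J = q / p  implies  a_a = a_J * (q / p) ** (2 / 3)."""
    return a_planet * (q / p) ** (2 / 3)

for p, q in [(5, 2), (7, 3), (2, 1)]:
    print(f"{p}:{q} resonance lies near {resonance_axis(p, q):.2f} AU")
# 5:2 -> 2.82 AU, 7:3 -> 2.96 AU, 2:1 -> 3.28 AU, matching the
# observed locations of the corresponding Kirkwood gaps.
```

The computed values of roughly 2.82, 2.96 and 3.28 AU agree with the observed positions of the gaps, which is why a shift of Jupiter's orbit would drag the gaps along with it.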
To return to the main thread: the so-called constants of Nature may not be so constant after all. A cosmology with a changing physical value for the gravitational constant G was proposed by P.A.M. Dirac in 1937. Field theories applying this principle were proposed by P. Jordan and D.W. Sciama, and in 1961 by C. Brans and R.H. Dicke. According to these theories the value of G is diminishing. Brans and Dicke suggested a change of about 2 × 10⁻¹¹ per year. This theory has not been accepted, on the ground that it would have profound effects on phenomena ranging from the evolution of the Universe to the evolution of the Earth. For instance, stars evolve faster if G is greater. Thus, the stellar evolutionary ages computed with G held constant at its present value would be too great. The Earth, compressed by gravitation, would expand, having a profound effect on surface features. The Sun would have been hotter than it is now, and the Earth's orbit would have been smaller. No one bothered to check whether such a scenario existed or is possible. Our studies in this regard show that the above scenario did happen; we have data to prove the point. Precise measurements in 1999 gave values of G so divergent from the currently accepted value that the results had to be pushed under the carpet, as otherwise most theories of physics would have tumbled. Presently, physicists are measuring gravity by bouncing atoms up and down off a laser beam (arXiv:0902.0109). The experiments have been modified to perform atom interferometry, whereby quantum interference between atoms can be used to measure tiny accelerations. Those still using the earlier value of G in their calculations land in trajectories much different from their theoretical predictions. Thus, modern science is based on a value of G that has been proved to be wrong. The Pioneer and fly-by anomalies, and the change of direction of Voyager 2 after it passed the orbit of Saturn, have cast a shadow on the authenticity of the theory of gravitation. Till now these have not been satisfactorily explained. We have discussed these problems and explained a different theory of gravitation in later pages. According to reports published in several scientific journals, precise measurements of the light from distant quasars, and from the only known natural nuclear reactor, which was active nearly 2 billion years ago at what is now Oklo in Gabon, suggest that the value of the fine-structure constant may have changed over the history of the universe (Physical Review D, vol 69, p 121701). If confirmed, the results will be of enormous significance for the foundations of physics. Alpha is an extremely important constant that determines how light interacts with matter – and it should not be able to change. Its value depends on, among other things, the charge on the electron, the speed of light and Planck's constant. Could one of these really have changed? If the fine-structure constant changes over time, one may postulate that the velocity of light is not constant either. This would explain the flatness, horizon and monopole problems in cosmology. Recent work has shown that the universe appears to be expanding at an ever faster rate, and there may well be a non-zero cosmological constant. There is a class of theories in which the speed of light is determined by a scalar field (the force making the cosmos expand – the cosmological constant) that couples to the gravitational effect of pressure. Changes in the speed of light convert the energy density of this field into energy.
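For reference, the standard definition of the fine-structure constant makes explicit which quantities are implicated, and a one-line estimate shows what the Brans–Dicke drift rate quoted above would amount to over the age of the solar system:

$$ \alpha \;=\; \frac{e^2}{4\pi\varepsilon_0\,\hbar c} \;\approx\; \frac{1}{137.036} $$

$$ \frac{\Delta G}{G} \;\approx\; \left(2\times10^{-11}\ \text{yr}^{-1}\right)\times\left(4.5\times10^{9}\ \text{yr}\right) \;\approx\; 9\% $$

So even a drift far too small to detect in any single year would be cosmologically significant, which is why the objections listed above (faster stellar evolution, a hotter Sun, a smaller orbit) carry weight.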
One off-shoot of the varying-c view is that in a young and hot universe, during the radiation epoch, the scalar field is prevented from dominating the universe. As the universe expands, pressure-less matter dominates and the variations in c decrease, making α (alpha) fixed and stable. The scalar field then begins to dominate, driving a faster expansion of the universe. Whether or not the claimed variation of the fine-structure constant exists, putting bounds on its rate of change puts tight constraints on new theories of physics. One of the most mysterious objects in the universe is what is known as the black hole – a derivative of the general theory of relativity. It is said to be the ultimate fate of a super-massive star that has exhausted the fuel that sustained it for millions of years. In such a star, gravity overwhelms all other forces, and the star collapses under its own gravity to the size of a pinprick. It is called a black hole because nothing – not even light – can escape it. A black hole has two parts. At its core is a singularity, the infinitesimal point into which all the matter of the star gets crushed. Surrounding the singularity is the region of space from which escape is impossible, the perimeter of which is called the event horizon. Once something enters the event horizon, it loses all hope of exiting. It is generally believed that a large star eventually collapses to a black hole. Roger Penrose conjectured that the formation of a singularity during stellar collapse necessarily entails the formation of an event horizon. According to him, Nature forbids us from ever seeing a singularity, because a horizon always cloaks it. Penrose's conjecture is termed the cosmic censorship hypothesis. It is only a conjecture. But some theoretical models suggest that instead of a black hole, a collapsing star might become a naked singularity. Most physicists operate under the assumption that a horizon must indeed form around a black hole. What exactly happens at a singularity – what becomes of the matter after it is infinitely crushed into oblivion – is not known. By hiding the singularity, the event horizon isolates this gap in our knowledge. General relativity does not account for the quantum effects that become important for microscopic objects, and those effects presumably intervene to prevent the strength of gravity from becoming truly infinite. Whatever happens in a black hole stays in a black hole. Yet researchers have found a wide variety of stellar-collapse scenarios in which an event horizon does not form, so that the singularity remains exposed to our view. Physicists call it a naked singularity. In such a case, matter and radiation can both fall in and come out, whereas matter falling into the singularity inside a black hole is on a one-way trip. In principle, we can come as close as we like to a naked singularity and return. Naked singularities might account for unexplained high-energy phenomena that astronomers have seen, and they might offer a laboratory to explore the fabric of so-called space-time on its finest scales. The results of simulations by different scientists show that most naked singularities are stable to small variations of the initial setup. Thus, these situations appear to be generic and not contrived. These counterexamples to Penrose's conjecture suggest that cosmic censorship is not a general rule. The discovery of naked singularities would transform the search for a unified theory of physics, not least by providing direct observational tests of such a theory.
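For scale, the size of the event horizon in the simplest (non-rotating, uncharged) case is given by the standard Schwarzschild radius; the numerical example for one solar mass is my own illustration:

$$ r_s \;=\; \frac{2GM}{c^2}, \qquad r_s(M_\odot) \;=\; \frac{2\times(6.67\times10^{-11})\times(2\times10^{30})}{(3\times10^{8})^2}\ \text{m} \;\approx\; 3\ \text{km} $$

Matter crushed inside this radius is causally cut off from the outside; the naked-singularity scenarios above are precisely those in which collapse produces the central singularity without ever forming this surface.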
It has taken so long for physicists to accept the possibility of naked singularities because they raise a number of conceptual puzzles. A commonly cited concern is that such singularities would make nature inherently unpredictable. But unpredictability is actually common in general relativity, and not always directly related to the violation of cosmic censorship described above. The theory permits time travel, which could produce causal loops with unforeseeable outcomes, and even ordinary black holes can become unpredictable. For example, if we drop an electric charge into an uncharged black hole, the shape of space-time around the hole changes radically and is no longer predictable. A similar situation holds when the black hole is rotating. Specifically, what happens is that space-time no longer neatly separates into space and time, so that physicists cannot consider how the black hole evolves from some initial time into the future. Only the purest of pure black holes, with no charge or rotation at all, is fully predictable. The loss of predictability and the other problems with black holes actually stem from the occurrence of singularities; it does not matter whether they are hidden or not. Cosmologists dread the singularity because at this point gravity becomes infinite, along with the temperature and density of the universe. As its equations cannot cope with such infinities, general relativity fails to describe what happens at the big bang. In the mid-1980s, Abhay Ashtekar rewrote the equations of general relativity in a quantum-mechanical framework to show that the fabric of space-time is woven from loops of gravitational field lines. The theory is called loop quantum gravity. If we zoom out far enough, space appears smooth and unbroken, but a closer look reveals that it comes in indivisible chunks, or quanta, about 10⁻³⁵ square meters in size. In 2000, some scientists used loop quantum gravity to create a simple model of the universe; this is known as loop quantum cosmology (LQC). Unlike general relativity, the physics of LQC did not break down at the big bang. Others developed computer simulations of the universe according to LQC. Early versions of the theory described the evolution of the universe in terms of quanta of area, but a closer look revealed a subtle error. After this mistake was corrected, it was found that the calculations now involved tiny volumes of space. That made a crucial difference: now the universe according to LQC agreed brilliantly with general relativity when expansion was well advanced, while still eliminating the singularity at the big bang. When they ran time backwards, instead of becoming infinitely dense at the big bang, the universe stopped collapsing and reversed direction. The big bang singularity had disappeared (Physical Review Letters, vol. 96, p. 141301). The era of the Big Bounce had arrived. But the scientists are far from explaining all the conundrums. Often it is said that the language of physics is mathematics. In a famous essay, Wigner wrote about the "unreasonable effectiveness of mathematics". Most physicists resonate with the perplexity expressed by Wigner, and with Einstein's dictum that "the most incomprehensible thing about the universe is that it is comprehensible". They marvel at the fact that the universe is not anarchic – that atoms obey the same laws in distant galaxies as in the lab. Yet Gödel's Theorem implies that we can never be certain that mathematics is consistent: it leaves open the possibility that a proof exists demonstrating that 0 = 1.
The quantum theory tells us that, on the atomic scale, nature is intrinsically fuzzy. Nonetheless, atoms behave in precise mathematical ways when they emit and absorb light, or link together to make molecules. Yet, is Nature mathematical? Language is a means of communication. Mathematics cannot communicate in the same manner as a language. Mathematics on its own does not lead to a sensible universe: a mathematical formula has to be interpreted in communicable language to acquire meaning. Thus, mathematics is only a tool for describing some, and not all, ideas. For example, the "observer" has an important place in quantum physics. Everett addressed the measurement problem by making the observer an integral part of the system observed, introducing a universal wave function that links observers and objects as parts of a single quantum system. But there is no equation for the "observer". We have not come across any precise and scientific definition of mathematics. The Concise Oxford Dictionary defines mathematics as "the abstract science of numbers, quantity, and space studied in its own right", or "as applied to other disciplines such as physics, engineering, etc". This is not a scientific description, as the definition of number itself leads to circular reasoning. Even mathematicians do not hold a common opinion on the content of mathematics. There are at least four views among mathematicians on what mathematics is. John D. Barrow describes these views as follows:

Platonism: the view that concepts like groups, sets, points, infinities, etc., are "out there", independent of us – "the pie is in the sky". Mathematicians discover them and use them to explain Nature in mathematical terms. There is an offshoot of this view called "neo-Platonism", which likens mathematics to the composition of a cosmic symphony by independent contributors, each moving it towards some grand final synthesis. The proof offered: completely independent mathematical discoveries by different mathematicians working in different cultures so often turn out to be identical.

Conceptualism: the antithesis of Platonism. According to this view, scientists create an array of mathematical structures, symmetries and patterns and force the world into this mould, because they find it so compelling. The so-called constants of Nature, which arise as theoretically undetermined constants of proportionality in the mathematical equations, are solely artifacts of the peculiar mathematical representation chosen for different purposes.

Formalism: this view was developed during the last century, when a number of embarrassing logical paradoxes were discovered. There were proofs which established the existence of particular objects, but offered no way of constructing them explicitly in a finite number of steps. Hilbert's formalism belongs to this category, which defines mathematics as nothing more than the manipulation of symbols according to specified rules (not natural, but sometimes un-physical, man-made rules). The resultant paper edifice has no special meaning at all. If the manipulations are done correctly, the result is a vast collection of tautological statements: an embroidery of logical connections.

Intuitionism: prior to Cantor's work on infinite sets, mathematicians had not made use of actual infinities, but had only exploited the existence of quantities that could be made arbitrarily large or small – the concept of limit.
To avoid founding whole areas of mathematics upon the assumption that infinite sets share the "obvious" properties possessed by finite ones, it was proposed that only quantities that can be constructed from the natural numbers 1, 2, 3, …, in a finite number of logical steps, should be regarded as proven true. None of the above views is complete, because none is a description derived from fundamental principles, and none conforms to a proper definition of mathematics, whose foundation is built upon logical consistency. The Platonic view arose from the fact that mathematical quantities transcend human minds and manifest the intrinsic character of reality. A number, say three or five, is coded differently in various languages, but conveys the same concept in all civilizations. Numbers are abstract entities, and mathematical truth means correspondence between the properties of these abstract objects and our system of symbols. We associate transitory physical objects, such as the three worlds or the five sense organs, with these immutable abstract quantities as a secondary realization. These ideas are somewhat misplaced. Numbers are a property of all objects by which we distinguish between similars. If there is nothing similar to an object, it is one. If there are similars, the number is decided by the number of times we perceive such similars (we may call it a set). Since perception is universal, the concept of numbers is also universal. Believers in eternal truth often point to mathematics as a model of a realm with timeless truths. Mathematicians explore this realm with their minds and discover truths that exist outside of time, in the same way that we discover the laws of physics by experiment. But mathematics is not only self-consistent; it also plays a central role in formulating the fundamental laws of physics – what the physics Nobel laureate Eugene Wigner once referred to as the "unreasonable success of mathematics in physics". One way to explain this "success" within the dominant metaphysical paradigm of the timeless multiverse is to suppose that physical reality is mathematical, i.e. that we are creatures within the timeless Platonic realm. The cosmologist Max Tegmark calls this the mathematical universe hypothesis. A slightly less provocative approach is to posit that, since the laws of physics can be represented mathematically, not only is their essential truth outside of time, but there is in the Platonic realm a mathematical object – a solution to the equations of the final theory – that is "isomorphic" in every respect to the history of the universe. That is, any truth about the universe can be mapped into a theorem about the corresponding mathematical object. If nothing exists or is true outside of time, then this description is void. However, if mathematics is not the description of a different, timeless realm of reality, what is it? What are the theorems of mathematics about, if numbers, formulas and curves do not exist outside of our world? Let us consider the game of chess. It was invented at a particular time, before which there is no reason to speak of any truths of chess. But once the game was invented, a long list of facts became demonstrable. These are provable from the rules and can be called the theorems of chess. These facts are objective, in that any two minds that reason logically from the same rules will reach the same conclusions about whether a conjectured theorem is true or not.
Platonists would say that chess always existed timelessly in an infinite space of mathematically describable games. By such an assertion, we do not achieve anything except a feeling of doing something elevated. Further, we would have to explain how we finite beings embedded in time can gain knowledge about this timeless realm. It is much simpler to think that at the moment the game was invented, a large set of facts became objectively demonstrable, as a consequence of the invention of the game. There is no need to think of these facts as eternally existing truths that are suddenly discoverable. Instead we can say they are objective facts that are evoked into existence by the invention of the game of chess. The bulk of mathematics can be treated the same way, even if the subjects of mathematics, such as numbers and geometry, are inspired by our most fundamental observations of nature. Mathematics is no less objective, useful or true for being evoked by, and dependent on, discoveries of living minds in the process of exploring the time-bound universe. The Mandelbrot set is often cited as a mathematical object with an independent existence of its own. It is produced by a remarkably simple formula – f(z) = z² + c, a recursive feedback loop expressible in a few lines of code (sketched below) – yet it can be used to produce beautiful colored computer plots. It is possible to zoom endlessly into the set, revealing ever more beautiful structures which never seem to repeat themselves. Penrose called it "not an invention of the human mind: it was a discovery". It was just out there. Similarly, fractals – geometrical shapes found throughout Nature – are self-similar: however far you zoom into them, they still resemble the original structure. Some people use these facts to plead that mathematics, and not evolution, is the sole factor in designing Nature. They miss the deeper meaning of these objects, which will be described later along with the structure of the Universe. The opposing view reflects the ideas of Kant regarding the innate categories of thought whereby all our experience is ordered by our minds. Kant pointed out the difference between the internal mental models we build of the external world and the real objects that we know through our sense organs. The views of Kant have many similarities with those of Bohr. What Kant calls consciousness, Bohr describes as intelligence; Kant's sense organs are Bohr's measuring devices; Kant's mental models are Bohr's quantum mechanical models. This view of mathematics stresses "mathematical modeling" more than mathematical rules or axioms. In this view, the so-called constants of Nature that arise as theoretically undetermined constants of proportionality in our mathematical equations are solely artifacts of the particular mathematical representation we have chosen to use for explaining different natural phenomena. For example, we use G as the gravitational constant because of our inclination to express the gravitational interaction in a particular way. This view is misleading, as the large number of the so-called constants of Nature points to some underlying reality behind them. We will discuss this point later. The debate over the definition of "physical reality" led to the notion that it should be external to the observer – an observer-independent objective reality. The statistical formulation of the laws of atomic and sub-atomic physics has added a new dimension to the problem.
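As an aside, the Mandelbrot iteration mentioned above really is only a few lines. A minimal sketch (membership test only; the plotting and coloring are left out):

# The recursive feedback loop f(z) = z^2 + c behind the Mandelbrot set.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Return True if c appears to belong to the set (|z| stays bounded)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:       # escaped: c is outside the set
            return False
    return True

print(in_mandelbrot(0j))       # True: the origin is in the set
print(in_mandelbrot(1 + 0j))   # False: z grows without bound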
In quantum mechanics, the experimental arrangements are treated in classical terms, whereas the observed objects are treated in probabilistic terms. In this way, the measuring apparatus and the observer are effectively joined into one complex system which has no distinct, well-defined parts, and the measuring apparatus does not have to be described as an isolated physical entity. As Max Tegmark puts it in his External Reality Hypothesis: if we assume that reality exists independently of humans, then for a description to be complete, it must also be well-defined according to non-human entities that lack any understanding of human concepts like "particle", "observation", etc. A description of objects in this external reality, and of the relations between them, would have to be completely abstract, forcing any words or symbols to be mere labels with no preconceived meanings whatsoever. To understand the concept, one has to distinguish between two ways of viewing reality. The first is from outside, like the overview of a physicist studying its mathematical structure – a bird's eye view. The second is the inside view of an observer living in the structure – the view of a frog in a well. Though Tegmark's view is nearer the truth (it will be discussed later), it has been contested by others on the ground that it contradicts logical consistency. Tegmark relies on a quote of David Hilbert: "Mathematical existence is merely freedom from contradiction". This implies that mathematical structures simply do not exist unless they are logically consistent. The critics cite Russell's paradox (discussed in detail in later pages), and the axiom systems, such as Zermelo-Fraenkel set theory, devised to avoid it, to point out that mathematics on its own does not lead to a sensible universe. We seem to need to apply constraints in order to obtain a consistent physical reality from mathematics: unrestricted axioms lead to Russell's paradox. Conventional bivalent logic is assumed to rest on the principle that every proposition takes exactly one of two truth values: "true" or "false". This is a wrong conclusion based on the European tradition; in ancient times students were advised to observe, listen (to the teachings of others), analyze and test with practical experiments before accepting anything as true. Until it was conclusively proved or disproved, a proposition remained "undecided". The so-called discovery of multi-valued logic is therefore nothing new. And if we extend modern logic in this way, why stop at ternary truth values: it could be four-valued or more. But then what are these values? We will discuss this later. Though Euclid, with his Axioms, appears to be a Formalist, his Axioms were abstracted from the real physical world. The focus of attention of modern Formalists, however, is upon the relations between entities and the rules governing them, rather than the question of whether the objects being manipulated have any intrinsic meaning. The connection between the Natural world and the structure of mathematics is totally irrelevant to them. Thus, when they concluded that Euclidean geometry is not applicable to curved surfaces, they had no hesitation in accepting the view that the sum of the three angles of a triangle need not be equal to 180°: it could be more or less depending upon the curvature. This is a wholly misguided view. The lines or sides drawn on a curved surface are not straight lines. Hence the Axioms of Euclid are not violated, but are wrongly applied.
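The point about curved surfaces can be made quantitative with a standard result, Girard's theorem, for a geodesic triangle on a sphere of radius R:

$$\alpha + \beta + \gamma = \pi + \frac{A}{R^{2}}$$

where A is the area of the triangle. The excess over 180° is exactly the effect of the curvature, and it vanishes as the triangle shrinks relative to R – consistent with the observation that the sides drawn on a curved surface are not straight lines, so Euclid's axioms are not violated but simply do not apply.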
Riemannian geometry, which led to the chain of non-Euclidean geometries, was developed out of Riemann's interest in trying to solve the problems of distortion of metal sheets when they were heated. Einstein used this idea to suggest curvature of space-time without precisely defining space, time or space-time. But such curvature is a temporary phenomenon due to the application of heat energy. The moment the external heat energy is removed, the metal plate is restored to its original state and Euclidean geometry is applicable. If gravity changes the curvature of space, then it should be like the external energy that distorts the metal plate. Then who applies gravity to mass, or what is the mechanism by which gravity is applied to mass? If no external agency is needed and it acts perpetually, then all mass should be changing perpetually, which is contrary to observation. This has been discussed elaborately in later pages. Once the notion of a minimum distance scale was firmly established, questions were raised about infinity and irrational numbers. Feynman raised doubts about the relevance of infinitely small scales as follows: "It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space?" Paul Davies asserted: "the use of differential equations assumes the continuity of space-time on arbitrarily small scales. The frequent appearance of π implies that their numerical values may be computed to arbitrary precision by an infinite sequence of operations. Many physicists tacitly accept these mathematical idealizations and treat the laws of physics as implementable in some abstract and perfect Platonic realm. Another school of thought, represented most notably by Wheeler and Landauer, stresses that real calculations involve physical objects, such as computers, and take place in the real physical universe, with its specific available resources. In short, information is physical. That being so, it follows that there will be fundamental physical limitations to what may be calculated in the real world". Thus, Intuitionism or Constructivism divides mathematical structures into the "physically relevant" and the "physically irrelevant". It says that mathematics should only include statements which can be deduced by a finite sequence of step-by-step constructions starting from the natural numbers. According to this view, infinity and irrational numbers cannot be part of mathematics. Infinity is qualitatively different from even the largest number. Finite numbers, however large, obey the laws of arithmetic: we can add, multiply and divide them, and put different numbers unambiguously in order of size. But infinity is the same as a part of itself, and the mathematics of other numbers is not applicable to it. Often "Hilbert's hotel" is used as a metaphor for infinity. Suppose a hotel is full and each guest wants to bring a colleague who would need another room. This would be a nightmare for the management, who could not double the size of the hotel instantly. In an infinite hotel, though, there is no problem. The guest from room 1 goes into room 2, the guest in room 2 into room 4, and so on. All the odd-numbered rooms are then free for new guests. This is a wrong analogy.
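Before examining why, the shifting rule of the metaphor can be made concrete; a minimal sketch:

# Hilbert's hotel: each guest in room n moves to room 2n, freeing every
# odd-numbered room - a proper part accommodating the whole.
def reassign(room: int) -> int:
    """New room for the guest previously in `room`."""
    return 2 * room

occupied = [reassign(n) for n in range(1, 11)]    # the first ten guests
print(occupied)                                   # [2, 4, 6, ..., 20]: all even
print(sorted(set(range(1, 21)) - set(occupied)))  # rooms 1, 3, ..., 19 now free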
Numbers are divided into two categories based on whether there is similar perception or not. If, after the perception of one object, there is further similar perception, they are many, which can range from 2, 3, 4, … n depending upon the sequence of perceptions. If there is no similar perception after the perception of one object, then it is one. In the case of infinity, neither of the above conditions applies. Infinity is more like the number one – without a similar – except for one characteristic: while one object has a finite dimension, infinity has infinite dimensions. The perception of higher numbers is generated by repeating "one" that many times, but the perception of infinity is ever incomplete. Since interaction requires a perceptible change somewhere in the system under examination or measurement, normal interactions are not applicable in the case of infinity. For example, space and time in their absolute terms are infinite. Space and time cannot be measured, as they are not directly perceptible through our sense organs, but are deemed to be perceived. Actually, what we measure as space is the interval between objects or points on objects. These intervals are mental constructs and have no physical existence other than the objects, which are used to describe space through alternative symbolism. Similarly, what we measure as time is the interval between events. Space and time do not and cannot interact with each other or with other objects or events, as no mathematics is possible between infinities. Our measurements of an arbitrary segment of space or time (which are really the intervals) do not affect space or time in any way. We have explained the quantum phenomena with real numbers derived from fundamental principles and correlated them to the macro world. Quantities like π and φ have other significances, which will be discussed later. The fundamental "stuff" of the Universe is the same, and the differences arise only from the manner of its accumulation and reduction – magnitude and sequential arrangement. Since number is a property of all particles, physical phenomena have some associated mathematical basis. However, the perceptible structures and processes of the physical world are not the same as their mathematical formulations, many of which are neither perceptible nor feasible. Thus the relationship between physics and mathematics is that of the map and the territory. The map facilitates study of the territory, but it does not tell all about the territory: knowing all about the territory from the map is impossible. This creates the difficulty. Science is becoming increasingly less objective. Scientists present data as if it were absolute truth merely liberated by their able hands for the benefit of lesser mortals. Thus, it has to be presented to the lesser mortals in a language that they do not understand – and therefore do not question. This leads to misinterpretations, to the extent that some classic experiments become dogma even when they are fatally flawed. One example is Olbers' paradox. In order to understand our environment and interact effectively with it, we engage in the activity of counting the total effect of each of the systems around us. Such counting is called mathematics. It covers all aspects of life. We are central to everything in a mathematical way. As Barrow points out: "While Copernicus's idea that our position in the universe should not be special in every sense is sound, it is not true that it cannot be special in any sense".
If we consider our positioning, as opposed to our position, in the Universe, we will find our special place. For example, if we plot a graph of the mass of a star relative to the Sun (with the Sun at 1) against the radius of orbit relative to Earth's (with Earth at 1), and consider the scale of the planets, their distances from the Sun, their surface conditions, the positioning of the neighboring planets, etc., treating these variables in a mathematical space, we will find that the Earth's positioning is very special indeed. It lies in a narrow band called the habitable zone (for details, see the Wikipedia article on planetary habitability). If we imagine the complex structure of the Mandelbrot set as representative of the Universe (since it is self-similar), then we could say that we are right in the border region of the fractal structure. If we consider the relationship between the different dimensions of space (or of a bubble), we find their exponential nature. If we take the center of the bubble as 0 and the edge as 1 and map it on a logarithmic scale, we find an interesting zone at 0.5. From the Galaxy, to the Sun, to the Earth, to the atoms, everything falls in this zone. For example, if we consider the galactic core as the equivalent of the S orbital of the atom, the bars as the equivalent of the P orbital and the spiral arms as the equivalent of the D orbital, and apply the logarithmic scale, we find the Sun at the 0.5 position. The same is true for Earth. It is known that both fusion and fission push atoms towards iron; that element finds itself in the middle group of the middle period of the periodic table: again 0.5. Thus, there can be no doubt that Nature is mathematical. But the structures and processes of the world are not the same as their mathematical formulations. The map is not the territory. Hence there are various ways of representing Nature; mathematics is one of them. However, mathematics alone cannot describe Nature in any meaningful way. Even modern mathematicians and physicists do not agree on many concepts. Mathematicians insist that zero has existence but no dimension, whereas physicists insist that since the minimum possible length is the Planck scale, the concept of zero has vanished! The Lie algebra corresponding to SU(n) is a real, not a complex, Lie algebra; physicists introduce the imaginary unit i to make it complex, which differs from the convention of the mathematicians. Mathematicians treat any operation involving infinity as void, since infinity does not change by the addition or subtraction of, or multiplication or division by, any number. The history of science shows that whenever infinity appears in an equation, it points to some novel phenomenon or some missing parameter. Yet physicists use renormalization, manipulating the equation to generate another infinity on the other side and then cancelling both! Certainly that is not mathematics! Often physicists apply the "brute force approach", in which many parameters are arbitrarily reduced to zero or unity to get the desired result. One example is the mathematics for solving the equations for the libration points. But such arbitrary reduction changes the nature of the system under examination (the modern values are slightly different from our computation). This aspect is overlooked by the physicists. We can cite many such instances where the conventions of mathematicians differ from those of physicists.
The famous Cambridge coconut puzzle is a clear illustration of the differences between physics and mathematics. Yet physicists insist that unless a theory is presented in mathematical form, they will not even look at it. We do not accept that the laws of physics break down at the singularity: at the singularity, only the rules of the game change, and the mathematics of infinities takes over. Modern scientists claim to depend solely on mathematics. But much of what is called "mathematics" in modern science fails the test of logical consistency that is a cornerstone for judging the truth content of a mathematical statement. For example, the mathematics for a multi-body system like a lithium or higher atom is done by treating the atom as a number of two-body systems. Similarly, the Schrödinger equation in so-called one dimension (it is a second-order equation, as it contains a term in x², which is in two dimensions and mathematically implies an area) is converted to three dimensions by the addition of two similar factors for the y and z axes. Three dimensions mathematically imply volume, and the addition of three areas does not generate volume: x² + y² + z² ≠ x·y·z. Similarly, mathematically, all operations involving infinity are void; hence renormalization is not mathematical. Thus, the so-called mathematics of modern physicists is not mathematical at all! In fact, some recent studies appear to hint that perception itself is mathematically impossible. Imagine a black-and-white line drawing of a cube on a sheet of paper. Although this drawing looks to us like a picture of a cube, there is actually an infinite number of other three-dimensional objects that could have produced the same set of lines when collapsed on the page. Yet we do not notice any of these alternatives. The reason is that our visual systems have more to go on than just the bare perceptual input. They are said to use heuristics and shortcuts, based on the physics and statistics of the natural world, to make "best guesses" about the nature of reality. Just as we interpret a two-dimensional drawing as representing a three-dimensional object, we interpret the two-dimensional visual input of a real scene as indicating a three-dimensional world. Our perceptual system makes this inference automatically, using educated guesses to fill in the gaps and make perception possible. Our brains use the same intelligent guessing process to reconstruct the past and help in perceiving the world. Memory functions differently from a video recording with a moment-by-moment sensory image. In fact, it is more like a puzzle: we piece together our memories, based on both what we actually remember and what seems most likely given our knowledge of the world. Just as we make educated guesses – inferences – in perception, our minds' best inferences help fill in the gaps of memory, reconstructing the most plausible picture of what happened in our past. The most striking demonstration of the mind's guessing game occurs when we find ways to fool the system into guessing wrong. When we trick the visual system, we see a "visual illusion": a static image might appear as if it is moving, or a concave surface will look convex. When we fool the memory system, we form a false memory – a phenomenon made famous by the researcher Elizabeth Loftus, who showed that it is relatively easy to make people remember events that never occurred.
As long as the falsely remembered event could plausibly have occurred, all it takes is a bit of suggestion, or even exposure to a related idea, to create a false memory. Earlier, visual illusions and false memories were studied separately. After all, they seem qualitatively different: visual illusions are immediate, whereas false memories were thought to develop over an extended period of time. A recent study blurs the line between these two phenomena. The study reveals an example of false memory occurring within 42 milliseconds – about half the time it takes to blink your eye. It relied upon a phenomenon known as "boundary extension", an example of false memory found when recalling pictures. When we see a picture of a location – say, a yard with a garbage can in front of a fence – we tend to remember the scene as though more of the fence were visible surrounding the garbage can. In other words, we extend the boundaries of the image, believing that we saw more fence than was actually present. This phenomenon is usually interpreted as a constructive memory error: our memory system extrapolates the view of the scene to a wider angle than was actually present. The new study, published in the November 2008 issue of the journal Psychological Science, asked how quickly this boundary extension happens. The researchers showed subjects a picture, erased it for a very short period by overlaying a new image, and then showed a second picture that was either the same as the first image or a slightly zoomed-out view of the same place. They found that when people saw the exact same picture again, they thought the second picture was more zoomed-in than the first one they had seen. When they saw a slightly zoomed-out version of the picture they had seen before, however, they thought this picture matched the first one. This is the classic boundary extension effect. However, the gap between the first and second picture was less than one-twentieth of a second: in less than the blink of an eye, people remembered a systematically modified version of the pictures they had seen. This is, by far, the fastest false memory ever found. Although it is still possible that boundary extension is purely a result of our memory system, the incredible speed of this phenomenon suggests a more parsimonious explanation: that boundary extension may in part be caused by the guesses of our visual system itself. The new data thus blur the boundary between the initial representation of a picture (via the visual system) and the storage of that picture in memory. This raises the question: is boundary extension a visual illusion or a false memory? Perhaps these two phenomena are not as different as previously thought. False memories and visual illusions both occur quickly and easily, and both seem to rely on the same cognitive mechanism: the fundamental propensity of perception and memory to fill in gaps with educated guesses – information that seems most plausible given the context. The work adds to a growing movement suggesting that memory and perception may simply be two sides of the same coin. This, in turn, implies that mathematics, which is based on the perception of numbers and other visual imagery, could be misleading for developing theories of physics. The essence of creation is the accumulation and reduction of the number of particles in each system, in various combinations. Thus, Nature has to be mathematical.
But then physics should obey the laws of mathematics, just as mathematics should comply with the laws of physics. We have shown elsewhere that not all of mathematics can be physics: we may have a mathematical equation without a corresponding physical explanation. Accumulation or reduction can be linear or non-linear. If they are linear, the mathematics is addition and subtraction; if they are non-linear, the mathematics is multiplication and division. Yet this principle is violated in a large number of equations. One example, the Schrödinger equation in one dimension, has been discussed earlier. Then there are unphysical combinations: certain combinations of protons and neutrons are prohibited physically, though there is no restriction on devising a mathematical formula for them. And there is no equation for the observer. Thus, sole dependence on mathematics for discussing physics is neither desirable nor warranted. We accept "proof" – mathematical or otherwise – to validate the reality of any physical phenomenon. We depend on proof to validate a theory as long as it corresponds to reality. The modern system of proof takes five stages: observation or experiment; developing a hypothesis; testing the hypothesis; acceptance, rejection or modification of the hypothesis based on the additional information; and lastly, reconstruction of the hypothesis if it was not accepted. We also adopt a five-stage approach to proof. First we observe or experiment and hypothesize. Then we look for corroborative evidence. In the third stage we try to prove that the opposite of the hypothesis is wrong. In the fourth stage we try to prove whether the hypothesis is universally valid or has limitations. In the last stage we try to prove that any theory other than this one is wrong. Mathematics is one of the tools of proof because of its logical consistency. It is a universal law that tools are selected based on the nature of the operations, and not vice versa; the tools can only restrict the choice of operations. Hence mathematics by itself does not provide proof, but a proof may use mathematics as a tool. We also depend on symmetry, as it is a fundamental property of Nature. In our theory, different infinities co-exist and do not interact with each other. Thus, we agree that the evolutionary process of the Universe can be explained mathematically, as basically it is a process of non-linear accumulation and corresponding reduction of particles and energies in different combinations. But we differ on the interpretation of the equation. For us, the left-hand side of an equation represents the cause and the right-hand side the effect, which is reversible only in the same order. If the magnitudes of the parameters on one side are changed, the other side changes correspondingly. But such changes must be according to natural laws, not arbitrary. For example, we agree that e/m = c², or m/e = 1/c², which we derive from fundamental principles. But we do not agree that e = mc². This is because we treat mass and energy as inseparable conjugates with variable magnitude, not as interchangeable quantities, since each has characteristics not found in the other. Thus, they are not fit to be used in an equation as cause and effect. At the same time, we accept the factor c², since energy flow is perceived in fields, which are represented by second-order quantities. If we accept the equation e = mc², then by modern principles it leads to m = e/c², and we land in many self-contradictory situations.
For example, if the photon has zero rest mass, then m₀ = 0/c² (at rest, the external energy that moves a particle has to be zero; internal energy is not relevant, as a stable system has zero net energy). This implies that m₀c² = 0, or e = 0, which makes c² = 0/0, which is meaningless. But if we accept e/m = c², with the two sides of the equation read as cause and effect, then there is no such contradiction. As we have proved in our book "Vaidic Theory of Numbers", all operations involving zero except multiplication are meaningless. Hence if either e or m becomes zero, the equation becomes meaningless, and in all other cases it matches the modern values. Here we may point out that the statement that the rest mass of matter is determined by its total energy content is not susceptible of a simple test, since there is no independent measure of the latter quantity. This supports our view that mass and energy are inseparable conjugates. The domain that astronomers call "the universe" – the space, extending more than 10 billion light years around us, containing billions of galaxies, each with billions of stars, billions of planets (and maybe billions of biospheres) – could be an infinitesimal part of the totality. There is a definite horizon to direct observations: a spherical shell around us such that no light from beyond it has had time to reach us since the big bang. However, there is nothing physical about this horizon. If we were in the middle of an ocean, it is conceivable that the water ends just beyond our horizon – except that we know it doesn't. Likewise, there are reasons to suspect that our universe – the aftermath of our big bang – extends hugely further than we can see. An idea called eternal inflation, suggested by some cosmologists, envisages big bangs popping off endlessly in an ever-expanding substratum. Or there could be other space-times alongside ours, all embedded in a higher-dimensional space; ours could be but one universe in a multiverse. Other branches of mathematics may then become relevant. This has encouraged the use of exotic mathematics such as the transfinite numbers. It may require a rigorous language to describe the number of possible states that a universe could possess and to compare the probabilities of different configurations. It may just be too hard for human brains to grasp. A fish may be barely aware of the medium in which it lives and swims; certainly it has no intellectual powers to comprehend that water consists of interlinked atoms of hydrogen and oxygen. The microstructure of empty space could, likewise, be far too complex for unaided human brains to grasp. Can we guarantee that with the present mathematics we can overcome all obstacles and explain all the complexities of Nature? Should we then resort to the so-called exotic mathematics? Let us see where it lands us. The manipulative mathematical nature of the descriptions of quantum physics has created difficulties in its interpretation. For example, the mathematical formalism used to describe the time evolution of a non-relativistic system proposes two somewhat different kinds of transformations (a numerical sketch follows the list):

· Reversible transformations described by unitary operators on the state space. These transformations are determined by solutions to the Schrödinger equation.

· Non-reversible and unpredictable transformations described by mathematically more complicated transformations. Examples are the transformations undergone by a system as a result of measurement.
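The two kinds of transformation can be seen in a small numerical sketch (assumptions: plain numpy/scipy, and an arbitrary Hermitian matrix standing in for the Hamiltonian):

# Reversible (unitary) evolution versus non-reversible "measurement".
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])            # an arbitrary Hermitian "Hamiltonian"
U = expm(-1j * H * 0.3)                # unitary time evolution: reversible
psi = np.array([0.6, 0.8j])            # a normalized state
print(np.linalg.norm(U @ psi))         # 1.0: norm preserved, U is invertible

P0 = np.array([[1.0, 0.0],
               [0.0, 0.0]])            # projection onto |0>: a "measurement"
print(np.linalg.norm(P0 @ psi))        # 0.6: not norm-preserving, not reversible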
The truth content of a mathematical statement is judged by its logical consistency. We agree that mathematics is a way of representing and explaining the Universe in a symbolic way, because evolution is logically consistent. This is because everything is made up of the same "stuff": only the quantities (number or magnitude) and their ordered placement or configuration create the variation. Since numbers are a property by which we differentiate between similar objects, and all natural phenomena are essentially accumulation and reduction of the fundamental "stuff" in different permissible combinations, physics has to be mathematical. But then mathematics must conform to natural laws: not unphysical manipulations, nor the brute-force approach of arbitrarily reducing some parameters to zero to get a result that then goes in the name of mathematics. We suspect that the over-dependence on mathematics is due not to the fact that it is unexceptionable, but to another reason, described below. In his book "The Myth of the Framework", Karl R. Popper, acknowledged as a major influence in modern philosophy and political thought, said: "Many years ago, I used to warn my students against the wide-spread idea that one goes to college in order to learn how to talk and write 'impressively' and incomprehensibly. At that time many students came to college with this ridiculous aim in mind, especially in Germany ... They unconsciously learn that highly obscure and difficult language is the intellectual value par excellence ... Thus arose the cult of incomprehensibility, of 'impressive' and high-sounding language. This was intensified by the impenetrable and impressive formalism of mathematics ..." It is unfortunate that even now many professors, not to speak of their students, are still devotees of this cult. Modern scientists justify the cult of incomprehensibility in the garb of research methodology – how "big science" is really done. "Big science" presents a big opportunity for methodologists. With their constant meetings and exchanges of e-mail, collaboration scientists routinely put their reasoning on public display (not for the general public, but only for those who subscribe to similar views), long before they write up their results for publication in a journal. In reality, this is done to test the reactions of others, as such ideas often provoke bitter debate. Further, when particle physicists try to find a particular set of events among the trillions of collisions that occur in a particle accelerator, they focus their search by ignoring data outside a certain range. Clearly, there is a danger in admitting a non-conformist to such raw material, since a lack of acceptance of their reasoning and conventions can easily lead to very different conclusions, which may contradict their theories. Thus, they offer their own theory of "error-statistical evidence", as in the statement: "The distinction between the epistemic and causal relevance of epistemic states of experimenters may also help to clarify the debate over the meaning of the likelihood principle". Frequently they refer to ceteris paribus (other things being equal) without specifying which other things are equal (and then face a challenge to justify their statement). The cult of incomprehensibility has been used even by the most famous scientists, with devastating effect. Even obvious mistakes in their papers have been blindly accepted by the scientific community and have remained unnoticed for hundreds of years.
Here we quote from an article written by W. H. Furry of the Department of Physics, Harvard University, published in the March 1, 1936 issue of Physical Review, Volume 49. The paper, "Note on the Quantum-Mechanical Theory of Measurement", was written in response to the famous EPR argument and its counter by Bohr. The quote relates to the differentiation between a "pure state" and a "mixture": "Our statistical information about a system may always be expressed by giving the expectation values of all observables. Now the expectation value of an arbitrary observable F, for a state whose wave function is φ, is

⟨F⟩ = (φ, Fφ).   (1)

If we do not know the state of the system, but know that wᵢ are the respective probabilities of its being in states whose wave functions are φᵢ, then we must assign as the expectation value of F the weighted average of its expectation values for the states φᵢ. Thus,

⟨F⟩ = Σᵢ wᵢ (φᵢ, Fφᵢ).   (2)

This formula for ⟨F⟩ is the appropriate one when our system is one of an ensemble of systems of which numbers proportional to wᵢ are in the states φᵢ. It must not be confused with any such formula as

⟨F⟩ = (Σᵢ aᵢφᵢ, F Σⱼ aⱼφⱼ),

which corresponds to the system's having a wave function which is a linear combination of the φᵢ. This last formula is of the type of (1), while (2) is of an altogether different type. An alternative way of expressing our statistical information is to give the probability that measurement of an arbitrary observable F will give as result an arbitrary one of its eigenvalues, say δ. When the system is in the state φ, this probability is

P(δ) = |(χδ, φ)|²,   (1')

where χδ is the eigenfunction of F corresponding to the eigenvalue δ. When we know only that wᵢ are the probabilities of the system's being in the states φᵢ, the probability in question is

P(δ) = Σᵢ wᵢ |(χδ, φᵢ)|².   (2')

Formula (2') is not the same as any special case of (1') such as

P(δ) = |(χδ, Σᵢ aᵢφᵢ)|².

It differs generically from (1') as (2) does from (1). When such equations as (1), (1') hold, we say that the system is in the "pure state" whose wave function is φ. The situation represented by Eqs. (2), (2') is called a "mixture" of the states φᵢ with the weights wᵢ. It can be shown that the most general type of statistical information about a system is represented by a mixture. A pure state is a special case, with only one non-vanishing wᵢ. The term mixture is usually reserved for cases in which there is more than one non-vanishing wᵢ. It must again be emphasized that a mixture in this sense is essentially different from any pure state whatever." Now we quote, from a recent Quantum Reality website, the same description of "pure state" and "mixture": "The statistical properties of both systems before measurement, however, could be described by a density matrix. So for an ensemble system such as this the density matrix is a better representation of the state of the system than the vector. So how do we calculate the density matrix? The density matrix is defined as the weighted sum of the tensor products over all the different states:

ρ = p│ψ⟩⟨ψ│ + q│φ⟩⟨φ│,

where p and q refer to the relative probability of each state. For the example of particles in a box, p would represent the number of particles in state │ψ⟩, and q would represent the number of particles in state │φ⟩. Let's imagine we have a number of qubits in a box (these can take the value │0⟩ or │1⟩). Let's say all the qubits are in the following superposition state: 0.6│0⟩ + 0.8i│1⟩. In other words, the ensemble system is in a pure state, with all of the particles in an identical quantum superposition of states │0⟩ and │1⟩.
As we are dealing with a single, pure state, the construction of the density matrix is particularly simple: we have a single probability p, which is equal to 1.0 (certainty), while q (and all the other probabilities) are equal to zero. The density matrix then simplifies to │ψ⟩⟨ψ│. This state can be written as a column ("ket") vector; note the imaginary component (the expansion coefficients are in general complex numbers):

│ψ⟩ = (0.6, 0.8i)ᵀ.

In order to generate the density matrix we need the Hermitian conjugate (or adjoint) of this column vector, the transpose of the complex conjugate of │ψ⟩. So in this case the adjoint is the following row ("bra") vector:

⟨ψ│ = (0.6, −0.8i),

and the density matrix is

ρ = │ψ⟩⟨ψ│ =
[ 0.36     −0.48i ]
[ 0.48i     0.64  ]

What does this density matrix tell us about the statistical properties of our pure-state ensemble quantum system? For a start, the diagonal elements tell us the probabilities of finding a particle in the │0⟩ or │1⟩ eigenstate. For example, the 0.36 component informs us that there will be a 36% probability of the particle being found in the │0⟩ state after measurement. Of course, that leaves a 64% chance that the particle will be found in the │1⟩ state (the 0.64 component). The way the density matrix is calculated, the diagonal elements can never have imaginary components (this is similar to the way the eigenvalues are always real). However, the off-diagonal terms can have imaginary components (as shown in the above example). These imaginary components have an associated phase (complex numbers can be written in polar form). It is the phase differences of these off-diagonal elements which produce interference (for more details, see the book Quantum Mechanics Demystified). The off-diagonal elements are characteristic of a pure state. A mixed state is a classical statistical mixture and therefore has no off-diagonal terms and no interference. So how do the off-diagonal elements (and the related interference effects) vanish during decoherence? The off-diagonal (imaginary) terms have a completely unknown relative phase factor which must be averaged over during any calculation, since it is different for each separate measurement (each particle in the ensemble). As the phases of these terms are not correlated (not coherent), the sums cancel out to zero. The matrix becomes diagonalized (all off-diagonal terms become zero), and interference effects vanish. The quantum state of the ensemble system is then apparently "forced" into one of the diagonal eigenstates (the overall state of the system becomes a mixture state), with the probability of a particular eigenstate being selected predicted by the value of the corresponding diagonal element of the density matrix. Consider the following density matrix for a pure-state ensemble in which the off-diagonal terms have a phase factor of θ:

ρ =
[ 0.36            0.48e^(−iθ) ]
[ 0.48e^(iθ)      0.64        ]

The above statement can be written in a simplified manner as follows: selection of a particular eigenstate is governed by a purely probabilistic process. This requires a large number of readings. For this purpose, we must consider an ensemble – a large number of quantum particles in a similar state – and treat them as a single quantum system. Then we measure each particle to ascertain a particular value, say color. We tabulate the results in a statement called the density matrix. Before measurement, each of the particles is in the same state, with the same state vector; in other words, they are all in the same superposition state. Hence this is called a pure state. After measurement, all the particles are in different classical states – the state (color) of each particle is known. Hence it is called a mixed state.
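The density-matrix arithmetic quoted above can be checked directly; a minimal numpy sketch using the 0.6│0⟩ + 0.8i│1⟩ example:

# Pure-state density matrix and its decoherence into a diagonal (mixed) form.
import numpy as np

ket = np.array([[0.6], [0.8j]])        # column ("ket") vector |psi>
bra = ket.conj().T                     # row ("bra") vector <psi|, the adjoint
rho = ket @ bra                        # |psi><psi|
print(rho)                             # diag: 0.36, 0.64; off-diag: -0.48i, +0.48i

# Averaging the uncorrelated phase of the off-diagonal terms over many
# ensemble members cancels them, leaving only the diagonal elements.
thetas = np.random.uniform(0, 2 * np.pi, 20000)
rho_mix = np.mean([np.array([[0.36, -0.48j * np.exp(-1j * t)],
                             [0.48j * np.exp(1j * t), 0.64]]) for t in thetas], axis=0)
print(np.round(rho_mix, 2))            # ~[[0.36, 0], [0, 0.64]]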
In common-sense language, what this means is the following: if we take a box of, say, 100 billiard balls of random colors – blue and green – then before counting the balls of each color we cannot say what percentage of the balls are blue and what percentage green. But after we count the balls of each color and tabulate the results, we know that (in the above example) 36% of the balls are of one color and 64% of the other. If we have to describe the balls after counting, we will give the above percentages, or say that 36 balls are blue and 64 balls are green. That would be a pure statement. But before such measurement, we can only describe the balls as 100 balls of blue and green color. That would be a mixed state. As can be seen, our common-sense description is the opposite of the quantum-mechanical classification – in two accounts written about 75 years apart, and accepted by all scientists unquestioningly. It is no wonder, then, that one scientist jokingly said: "A good working definition of quantum mechanics is that things are the exact opposite of what you thought they were. Empty space is full, particles are waves, and cats can be both alive and dead at the same time." We quote another example, from the famous EPR argument of Einstein and others (Phys. Rev. 47, 777 (1935)): "To illustrate the ideas involved, let us consider the quantum-mechanical description of the behavior of a particle having a single degree of freedom. The fundamental concept of the theory is the concept of state, which is supposed to be completely characterized by the wave function ψ, which is a function of the variables chosen to describe the particle's behavior. Corresponding to each physically observable quantity A there is an operator, which may be designated by the same letter. If ψ is an eigenfunction of the operator A, that is, if

ψ' ≡ Aψ = aψ,   (1)

where a is a number, then the physical quantity A has with certainty the value a whenever the particle is in the state given by ψ. In accordance with our criterion of reality, for a particle in the state given by ψ for which Eq. (1) holds, there is an element of physical reality corresponding to the physical quantity A". We can write the above statement, and the concept behind it, in various ways that would be far easier for the common man to understand. We could also give various examples to demonstrate the physical content of the above statement. However, such statements and examples would be difficult to twist and interpret differently when necessary. Putting the concept in an ambiguous format helps in its subsequent manipulation, as is explained below, citing from the same example: "In accordance with quantum mechanics we can only say that the relative probability that a measurement of the coordinate will give a result lying between a and b is

P(a, b) = ∫ₐᵇ ψ̄ψ dx = b − a.

Since this probability is independent of a, but depends only upon the difference b − a, we see that all values of the coordinate are equally probable". The above conclusion has been arrived at based on the following logic: "More generally, it is shown in quantum mechanics that, if the operators corresponding to two physical quantities, say A and B, do not commute, that is, if AB ≠ BA, then the precise knowledge of one of them precludes such a knowledge of the other. Furthermore, any attempt to determine the latter experimentally will alter the state of the system in such a way as to destroy the knowledge of the first". The above statement is highly misleading.
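For reference, the non-commutation the quoted passage invokes is easy to exhibit in matrix form; a minimal sketch, with the standard Pauli matrices standing in for A and B:

# Two observables for which AB != BA.
import numpy as np

X = np.array([[0, 1], [1, 0]])     # Pauli-X
Z = np.array([[1, 0], [0, -1]])    # Pauli-Z
print(X @ Z)                       # [[0, -1], [1, 0]]
print(Z @ X)                       # [[0, 1], [-1, 0]]  ->  XZ != ZX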
The law of commutation is a special case of non-linear accumulation, as explained below. All interactions involve the application of force, which leads to accumulation and corresponding reduction. Where such accumulation is between similars, it is linear accumulation, and its mathematics is called addition. If such accumulation is not fully between similars, but between partial similars (partially similar and partially dissimilar), it is non-linear accumulation, and its mathematics is called multiplication. For example, 10 cars and another 10 cars are twenty cars, through addition. But if there are 10 cars in a row and there are two rows of cars, then "rows of cars" is common to both statements, while one statement gives the number of cars in a row and the other gives the number of rows. Because of this partial dissimilarity, the mathematics has to be multiplication: 10 × 2 or 2 × 10. We are free to use either of the two sequences, and the result will be the same. This is the law of commutation. However, no multiplication is possible if the two factors are not partially similar. In such cases, the two factors are said to be non-commutable. If the two terms are mutually exclusive, i.e., one of the terms will always be zero, the result of their multiplication will always be zero. Hence they may be said to be non-commutable, though in reality they are commutable; it is just that the result of their multiplication is always zero. This implies that the knowledge of one precludes the knowledge of the other. Commutability or otherwise depends on the nature of the quantities: whether or not they are partially related to each other. Position is a fixed co-ordinate in a specific frame of reference; momentum is a mobile co-ordinate in the same frame of reference. Fixity and mobility are mutually exclusive. If a particle has a fixed position, its momentum is zero; if it has momentum, it does not have a fixed position. Since the "particle" is common to both statements, i.e., since both relate to the particle, the quantities can be multiplied, hence they are commutable. But since one or the other factor is always zero, the result will always be zero, and the equation AB ≠ BA does not hold. In other words, while uncertainty is established for other reasons, the relation Δx·Δp ≥ h is a mathematically wrong statement, as mathematically the answer will always be zero. The validity of a physical statement is judged by its correspondence to reality or, as Einstein and others put it, "by the degree of agreement between the conclusions of the theory and human experience". Since in this case the degree of agreement between the conclusions of the theory and human experience is zero, it cannot be a valid physical statement either. Hence it is no wonder that Heisenberg's uncertainty relation is still a hypothesis, and not proven. In later pages we discuss this issue elaborately. In modern science there is a tendency to generalize, or extend one principle to others. For example, the Schrödinger equation in so-called one dimension (actually it contains a second-order term, hence it cannot be an equation in one dimension) is "generalized" to three dimensions by adding two more terms for the y and z dimensions – mathematically and physically a wrong procedure, which we discuss in later pages. And while position and momentum are specific quantities, the generalizations are done by replacing these quantities with A and B.
When a particular statement is changed to a general statement by following algebraic principles, the relationship between the quantities of the particular statement must not change. However, physicists often bypass or overlook this mathematical rule. A and B could be any two quantities. Since they are not specified, it is easy to use them in any way one wants. Even if two quantities are commutable, since they are not precisely described, one has the freedom to manipulate, by claiming that they are not commutable, and vice-versa. Modern science is full of such manipulations. Here we give another example to show that physics and modern mathematics are not always compatible. Bell's inequality is one of the important equations used by all quantum physicists, and we will discuss it repeatedly for different purposes. Briefly, the theorem holds that if a system consists of an ensemble of particles having three Boolean properties A, B and C, and there is a reciprocal relationship between the values of measurement of A on two particles, and the same type of relationship exists between the particles with respect to the quantity B, the value of one particle being measured and found to be a, and the value of another particle being measured and found to be b, then the first particle must have started with the state (A = a, B = b). In that event, the theorem says that P(A, C) ≤ P(A, B) + P(B, C). In the case of classical particles, the theorem appears to be correct. Quantum mechanically, P(A, C) = ½ sin²(θ), where θ is the angle between the analyzers. Let an apparatus emit entangled photons that pass through separate polarization analyzers, and let A, B and C be the events that a single photon will pass through analyzers with axes set at 0°, 22.5° and 45° to the vertical respectively. It can be proved that C → C. Thus, according to Bell's theorem: P(A, C) ≤ P(A, B) + P(B, C), or ½ sin²(45°) ≤ ½ sin²(22.5°) + ½ sin²(22.5°), or 0.25 ≤ 0.1464, which is clearly absurd. This inequality has been used by quantum physicists to prove entanglement and to distinguish quantum phenomena from classical phenomena. We will discuss it in detail to show that the above interpretation is wrong and that the same mathematics is applicable to both the macro and the micro world. The real reason for such deviation from common sense is that, because of the nature of measurement, measuring one quantity affects the measurement of another. The order of measurement becomes important in such cases; even in the macro world, the order of measurement leads to different results. However, the real implication of Bell's original mathematics is much deeper, and points to one underlying truth that will be discussed later.
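The numbers in the inequality above can be computed directly; a minimal sketch (angles in degrees):

# The quantum prediction P = (1/2) sin^2(theta) against Bell's inequality.
import math

def P(theta_deg: float) -> float:
    """Joint probability for analyzers separated by theta degrees."""
    return 0.5 * math.sin(math.radians(theta_deg)) ** 2

lhs = P(45.0)              # P(A, C): analyzers 45 degrees apart
rhs = P(22.5) + P(22.5)    # P(A, B) + P(B, C)
print(lhs, rhs)            # 0.25 vs ~0.1464
print(lhs <= rhs)          # False: the inequality fails for these predictions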
A wave function is said to describe all possible states in which a particle may be found. To describe probability, some people give the example of a large, irregular thundercloud that fills up the sky: the darker the thundercloud, the greater the concentration of water vapor and dust at that point. Thus, by simply looking at a thundercloud, we can rapidly estimate the probability of finding large concentrations of water and dust in certain parts of the sky. The thundercloud may be compared to a single electron's wave function. Like a thundercloud, it fills up all space; and the greater its value at a point, the greater the probability of finding the electron there. Similarly, wave functions can be associated with large objects, like people. As a man sits in his chair, he has a Schrödinger probability wave function. If we could somehow see his wave function, it would resemble a cloud very much in the shape of his body. However, some of the cloud would spread out all over space, out to Mars and even beyond the solar system, although it would be vanishingly small there. This means that there is a very large likelihood that he is, in fact, sitting in his chair and not on the planet Mars. Although part of his wave function has spread even beyond the Milky Way galaxy, there is only an infinitesimal chance that he is sitting in another galaxy. This description is highly misleading, and the mathematics behind the assumption is peculiar. Suppose we choose a fixed point A and walk in the north-eastern direction by 5 steps (along the diagonal of a 3-4-5 triangle). We mark that point as B. There is an infinite number of ways of reaching the point B from A. For example, we can walk 4 steps to the north of A and then walk 3 steps to the east, and we will reach B. Similarly, we can walk 6 steps in the northern direction, 3 steps in the eastern direction and 2 steps in the southern direction, and we will reach B. Alternatively, we can walk 8 steps in the northern direction, 6 steps in the eastern direction and 5 steps back along the south-western diagonal, and we will reach B. It is presumed that since the vector addition or "superposition" of these paths, which are of different sorts from the straight path, leads to the same point, the point B could be thought of as a superposition of paths of different sorts from A; and since we are free to choose any of these paths, at any instant we could be "here" or "there". As noted, this description is misleading. To put the above statement mathematically, we take a vector V which can be resolved into two vectors V₁ and V₂ along the directions 1 and 2, so that we can write V = V₁ + V₂. If a unit of displacement along direction 1 is represented by the unit vector ê₁, then V₁ = V₁ê₁, wherein V₁ denotes the magnitude of the displacement V₁. Similarly, V₂ = V₂ê₂. Therefore V = V₁ + V₂ = V₁ê₁ + V₂ê₂ [ê₁ and ê₂ are also denoted as (1,0) and (0,1) respectively]. This equation is also written as V = λ₁ê₁ + λ₂ê₂, where the λ's are the magnitudes of the displacements. Here V is treated as a superposition of the standard vectors (1,0) and (0,1) with coefficients given by the ordered pair (V₁, V₂). This is the concept of a vector space. Here the vector has been represented in two dimensions; for three dimensions, the equation is written as V = λ₁ê₁ + λ₂ê₂ + λ₃ê₃, and for an n-tuple in n dimensions, V = λ₁ê₁ + λ₂ê₂ + … + λₙêₙ.
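A sketch of the walks described above: each sequence of steps sums to the same displacement, which is all the vector-space formalism records.

# Different step sequences from A that all close on the same point B = (4, 3),
# in (north, east) components.
import numpy as np

paths = [
    [(4, 0), (0, 3)],                  # 4 north, then 3 east
    [(6, 0), (0, 3), (-2, 0)],         # 6 north, 3 east, 2 south
    [(8, 0), (0, 6), (-4, -3)],        # 8 north, 6 east, 5 along the 3-4-5 diagonal
]
for steps in paths:
    print(np.sum(np.array(steps), axis=0))   # each prints [4 3]: the same point B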
It is said that the choice of dimensions appropriate to a quantum-mechanical problem depends on the number of independent possibilities the system possesses. In the case of the polarization of light, there are only two possibilities. The same is true for electrons, but in the case of electrons it is not dimensions but spin. If we choose a direction and look at the electron's spin in relation to that direction, then either its axis of rotation points along that direction or it points wholly in the reverse direction. Thus, electron spin is described as "up" or "down". Scientists describe the spin of the electron as something like that of a top, yet different from it. In reality, it is something like the nodes of the Moon: at one node the Moon always appears to be moving in the northern direction, and at the other node it always appears to be moving in the southern direction. It is said that the value of "up" and "down" for an electron's spin remains valid irrespective of the direction we may choose. There is no contradiction here, as direction is not important in the case of nodes; it is only the layout of the two intersecting planes that is relevant. In many problems, the number of possibilities is said to be unbounded, and scientists use infinite-dimensional spaces to represent them. For this they use something called Hilbert space, which we will discuss later. Any intelligent reader would have seen through the fallacy of this account of vector spaces; still, we describe it again. Firstly, as described in the treatment of wave phenomena in later pages, superposition is a merger of two waves, which lose their own identity to create something different. What we see is the net effect, which is different from the individual effects. There are many ways in which this can occur at one point. But not all waves stay in superposition; the superposition is momentary, as the waves submit themselves to the local dynamics. Thus, merely because there is a probability of two waves joining to cancel each other's effect and merging to give a different picture, we cannot formulate a general principle such as the equation V = λ₁ê₁ + λ₂ê₂ to cover all cases, because the resultant wave or flat surface is also transitory. Secondly, the generalization of the equation V = λ₁ê₁ + λ₂ê₂ to V = λ₁ê₁ + λ₂ê₂ + … + λₙêₙ is mathematically wrong, as explained below. Even though initially we mentioned 1 and 2 as directions, they are essentially dimensions, because they are perpendicular to each other. Direction is the information contained in the relative position of one point with respect to another point, without the distance information. Directions may be either relative to some indicated reference (the violins in a full orchestra are typically seated to the left of the conductor) or absolute, according to some previously agreed-upon frame of reference (Kolkata lies due north-east of Puri). Direction is often indicated manually by an extended index finger or written as an arrow. On a vertically oriented sign representing a horizontal plane, such as a road sign, "forward" is usually indicated by an upward arrow. Mathematically, direction may be uniquely specified by a unit vector in a given basis, or equivalently by the angles made by the most direct path with respect to a specified set of axes. These angles can have any value, and their inter-relationships can take an infinite number of values. But in the case of dimensions, they have to be at right angles to each other, and this remains invariant under mutual transformation. According to Vishwakarma, the perception that arises from length is of the same class as that which arises from the perception of breadth and height, so that the shape of the particle remains invariant under directional transformations. There is no fixed rule as to which of the three spreads constitutes length, breadth or height; they are exchangeable under re-arrangement. Hence they are treated as belonging to one class. These three directions have to be mutually perpendicular on considerations of the equilibrium of forces (for example, an electric field and the corresponding magnetic field) and of symmetry. Thus, these three directions are equated with "forward-backward", "right-left" and "up-down", which remain invariant under mutual exchange of position. Dimension is therefore defined as the spread of an object in mutually perpendicular directions, which remains invariant under directional transformations.
This definition of dimension leads to only three spatial dimensions, with ten variants. For this reason, the general equation in three dimensions uses x, y, and z co-ordinates (and/or a constant c), or at least third-order terms (such as a³ + 3a²b + 3ab² + b³), which implies that, with regard to any frame of reference, they are not arbitrary directions but fixed frames at right angles to one another, making them dimensions. A one-dimensional geometric shape is impossible. A point has imperceptible dimension, but not zero dimensions. The modern definition of a one-dimensional sphere or "one-sphere" is not in conformity with this view; it cannot be exhibited physically, as anything other than a point or a straight line has a minimum of two dimensions. While mathematicians insist that a point has existence but no dimensions, theoretical physicists insist that the minimum perceptible dimension is the Planck length; thus they differ from the mathematicians over the dimension of a point. For a straight line, the modern mathematician uses the first-order equation ax + by + c = 0, which uses two co-ordinates besides a constant. A second-order equation always implies area in two dimensions. A three-dimensional structure has volume, which can be expressed only by an equation of the third order. This is the reason why Born had to use the term d³r to describe the differential volume element in his equations.

The Schrödinger equation was devised to find the probability of finding the particle in the narrow region between x and x + dx, which is denoted by P(x)dx. The function P(x) is the probability distribution function or probability density, which is found from the wave function ψ(x) through the relation P(x) = |ψ(x)|². The wave function is determined by solving Schrödinger's differential equation:

  d²ψ/dx² + (8π²m/h²)[E − V(x)]ψ = 0,

where E is the total energy of the system and V(x) is the potential energy of the system. By using a suitable energy operator term, the equation is written as Hψ = Eψ. The equation is also written as

  iħ ∂/∂t |ψ⟩ = H |ψ⟩,

where the left-hand side represents iħ times the rate of change with time of a state vector, and the right-hand side equates this with the effect of an operator, the Hamiltonian, which is the observable corresponding to the energy of the system under consideration. The symbol ψ indicates that it is a generalization of Schrödinger's wave function. The equation appears to be an equation in one dimension, but in reality it is a second-order equation signifying a two-dimensional field, as the original equation and the energy operator contain a term in x². A third-order equation implies volume. Thus, the Schrödinger equation described above is an equation not in one but in two dimensions. The method of generalizing the said Schrödinger equation to the three spatial dimensions does not stand mathematical scrutiny: three areas cannot be added to create volume, as any simple mathematical model will prove.
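As an aside, the relation P(x) = |ψ(x)|² in the one-dimensional equation can be checked numerically. The sketch below (ours; a particle in a box, with units chosen so that ħ = m = 1) diagonalizes a finite-difference Hamiltonian H = −½ d²/dx² + V(x) and confirms that the resulting probability density integrates to one.

    import numpy as np

    # One-dimensional time-independent Schrödinger equation, Hψ = Eψ,
    # solved by finite differences (units with ħ = m = 1; illustrative only).
    N, L = 500, 1.0
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    V = np.zeros(N)                      # infinite square well: V = 0 inside

    # Discretized Hamiltonian: H = -(1/2) d²/dx² + V(x).
    main = 1.0 / dx**2 + V
    off  = -0.5 / dx**2 * np.ones(N - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E, psi = np.linalg.eigh(H)           # energies and wave functions

    ground = psi[:, 0] / np.sqrt(dx)     # normalize so that ∫|ψ|² dx = 1
    P = np.abs(ground) ** 2              # probability density P(x) = |ψ(x)|²
    print("E0 ≈", E[0], "(exact: π²/2 ≈", np.pi**2 / 2, ")")
    print("∫ P(x) dx ≈", np.sum(P) * dx)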
Hence, the Schrödinger equation could not be solved for atoms other than hydrogen. For many-electron atoms, the so-called solutions simply treat them as many one-electron atoms, ignoring the electrostatic energy of repulsion between the electrons and treating them as point charges frozen at some instantaneous position. Even then, the problem remains unsolved: the first ionization potential of helium is theorized to be 20.42 eV, against the experimental value of 24.58 eV. Further, the atomic spectra show that for every series of lines (Lyman, Balmer, etc.) found for hydrogen, there is a corresponding series found at shorter wavelengths for helium, as predicted by theory. But in the spectrum of helium there are two series of lines observed for every single series observed for hydrogen: not only does helium possess the normal Balmer series, it also has a second "Balmer" series starting at λ = 3889 Å. This shows that, for the helium atom, the whole series repeats at shorter wavelengths. For the lithium atom it is even worse, as the total energy of repulsion between the electrons is more complex. Here it is assumed that, as in the case of hydrogen and helium, the most stable energy of the lithium atom will be obtained when all three electrons are placed in the 1s atomic orbital, giving the electronic configuration 1s³, even though this is contradicted by experimental observation. Following the same basis as for helium, the first ionization potential of lithium is theorized to be 20.4 eV; experimentally, however, it takes 202.5 eV to remove all three electrons and only 5.4 eV to remove one electron from lithium. Experimentally, it requires less energy to ionize lithium than it does to ionize hydrogen, but the theory predicts an ionization energy one and a half times larger. More serious than this is the fact that the theory should never predict the system to be more stable than it actually is; the method should always predict an energy less negative than is actually observed. If this is not found to be the case, then it means that an incorrect assumption has been made or that some physical principle has been ignored. Further, it contradicts the principle of periodicity, as the calculation places each succeeding electron in the 1s orbital as the nuclear charge increases by unity. It must be remembered that, with every increase in n, all the preceding values of l are repeated and a new l value is introduced. The reason why more than two electrons cannot be placed in the 1s orbital has not been explained. Thus, the mathematical formulations are contrary to the physical conditions based on observation.

To overcome this problem, scientists take the help of operators. An operator is something which turns one vector into another. Scientists often describe robbery as an operator that transforms a state of wealth into a state of penury for the robbed, and vice versa for the robber. Another example of an operator often given is the operation that rotates a frame clockwise or anticlockwise, changing motion in the northern direction into motion in the eastern or western directions. The act of passing light through a polarizer is also called an operator, as it changes the physical state of the photon's polarization. Thus, the use of a polarizer is described as a measurement of polarization, since the transmitted beam has to have its polarization along a definite direction fixed by the polarizer. We will come back to operators later.
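The examples of operators just given can be written out explicitly. In this sketch of ours (using the standard Jones-vector convention for polarization, which the text itself does not introduce), a rotation matrix turns motion toward the north into motion toward the east, and a polarizer acts as a projection operator on a polarization state.

    import numpy as np

    # An operator turns one vector into another. Two familiar examples:

    # 1. Rotation operator: rotate "north" clockwise by 90° to get "east".
    theta = -np.pi / 2                       # clockwise quarter turn
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    north = np.array([0.0, 1.0])
    print(R @ north)                         # -> [1, 0], i.e. east

    # 2. Polarizer as a projection operator (Jones-calculus convention,
    #    assumed here): P projects any state onto the transmission axis x.
    P = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
    diagonal = np.array([1.0, 1.0]) / np.sqrt(2)   # light polarized at 45°
    transmitted = P @ diagonal
    print(transmitted)                       # polarization now along x
    print(np.linalg.norm(transmitted) ** 2)  # transmitted fraction = 1/2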
The probability does not refer (as is commonly believed) to whether or not the particle will be observed at any specific position at a specific time. Similarly, the description of different probabilities of finding the particle at different points of space is misleading. A particle will be observed only at a particular position at a particular time and nowhere else. Since a mobile particle does not have a fixed position, the probability actually refers to the state in which the particle is likely to be observed. This is because all the forces acting on it, and their dynamics, which influence the state of the particle, may not be known to us. Hence we cannot predict with certainty whether the particle will be found here or elsewhere. After measurement, the particle is said to acquire a time-invariant "fixed state" through "wave-function collapse". This is referred to as the result of measurement, and it is an arbitrarily frozen, time-invariant, non-real state (since in reality the state continues to change). The actual state, with all influences on the particle, has been measured at "here-now", which is a perpetually changing state. Since all mechanical devices are subject to time variance in their operational capacities, they have to be "operated" by a "conscious agent", directly or indirectly, because, as will be shown later, only consciousness is time-invariant. This transition from a time-variant initial state to a time-invariant hypothetical "fixed state" through "now" or "here-now" is the dividing line between quantum physics and classical physics, as well as between conscious actions and mechanical actions. To prove the above statement, we examine what "information" is in later pages, because only conscious agents can cognize information and use it to achieve desired objects. Before that, however, we will briefly discuss the chaos prevailing in this area among scientists.

Modern science fails to answer the question "why" on many occasions; in fact, it avoids such inconvenient questions. Here we may quote an interesting anecdote from the lives of two prominent persons. Once, Arthur Eddington was explaining the theory of the expanding universe to Bertrand Russell. Eddington told Russell that the expansion was so rapid and powerful that even the most powerful dictator would not be able to control the entire universe: even if his orders were sent at the speed of light, they would not reach the farthest parts of the universe. Bertrand Russell asked, "If that is so, how does God supervise what is going on in those parts?" Eddington looked keenly at Russell and replied, "That, dear Bertrand, does not lie in the province of the physicists."

This begs the question: what is physics? We cannot take the stand that the role of physics is not to explain but to describe reality. Description is also an explanation; otherwise, why, and to whom, do you describe? If the validity of a physical statement is judged by its correspondence to reality, we cannot hide behind the veil of reductionism, but must explain scientifically the theory behind the seeming "acts of God".

There is a general belief that we can understand all physical phenomena if we can relate them to the interactions of atoms and molecules. After all, the Universe is made up of these particles only; their interactions, in different combinations, create everything in the Universe. This is called a reductionist approach, because it is claimed that everything else can be reduced to this supposedly more fundamental level. But this approach runs into problems with thermodynamics and its arrow of time. In the microscopic world, no such arrow of time is apparent, irrespective of whether it is being described by Newtonian mechanics, relativistic mechanics, or quantum mechanics. One consequence of this description is that there can be no state of microscopic equilibrium. Time-symmetric laws do not single out a special end-state where all potential for change is reduced to zero, since all instants in time are treated as equivalent.
The apparent time reversibility of motion within the atomic and molecular regimes, in direct contradiction to the irreversibility of thermodynamic processes, constitutes the celebrated irreversibility paradox put forward in 1876 by Loschmidt, among others (L. Boltzmann: Lectures on Gas Theory, University of California Press, 1964, page 9). The paradox suggests that the two great edifices, thermodynamics and mechanics, are at best incomplete. It represents a very clear problem in need of an explanation, one which should not be swept under the carpet. As Lord Kelvin says: "If the motion of every particle of matter in the Universe were precisely reversed at any instant, the course of Nature would be simply reversed for ever after. The bursting bubble of foam at the foot of a waterfall would reunite and descend into the water. The thermal motions would reconcentrate energy and throw the mass up the fall in drops reforming into a close column of ascending water. Living creatures would grow backwards, from old age to infancy, till they are unborn again, with conscious knowledge of the future but no memory of the past." We will solve this paradox in later pages; a numerical illustration of this reversibility appears at the end of this passage.

The modern view of reductionism is faulty. Reductionism is based on the concept of differentiation. When an object is perceived as a composite that can be reduced to different components having perceptibly different properties, which can be differentiated from one another and from the composite as a whole, the process of such differentiation is called reductionism. Some objects may generate a similar perception of some properties, or the opposite of some properties, within a group of substances. In such cases the objects with similar properties are grouped together, and the objects with opposite properties are grouped together. The only universally perceived aspect common to all objects is physical existence in space and time, as the radiation emitted by, or the field set up by, all objects creates a perturbation in our sense organs in identical ways. Since intermediate particles exhibit some properties similar to those of other particles, and are perceived like other such objects and not differentiated from them, reductionism applies only to the fundamental particles. This principle is violated in most modern classifications. To give one example, x-rays and γ-rays exhibit exclusive characteristics that are not shared by other rays of the electromagnetic spectrum, or between themselves, such as the place of their origin. Yet they are clubbed under one category. If wave nature of propagation is the criterion for such categorisation, then sound waves, which travel through a medium such as air or other gases in addition to liquids and solids of all kinds, should also have been added to the classification. Then there are mechanical waves, such as the waves that travel through a vibrating string or other mechanical object or surface, and waves that travel through a fluid or along the surface of a fluid, such as water waves. If electromagnetic properties are the criteria for such categorisation, then it is not scientific, as these rays do not interact with electromagnetic fields. If they have been clubbed together on the ground that theoretically they do not require any medium for their propagation, then, firstly, there is no true vacuum, and secondly, they are known to travel through various media such as glass. There are many such examples of wrong classification due to reductionism and developmental history.
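Kelvin's reversal scenario quoted above can be checked in miniature. In this sketch of ours (three particles bound by a spring force toward the origin; all initial conditions are arbitrary), we integrate forward in time, reverse every velocity, integrate again, and recover the initial state, which is exactly the time symmetry of mechanics that thermodynamic irreversibility seems to contradict.

    import numpy as np

    def verlet(pos, vel, accel, dt, steps):
        """Velocity-Verlet integration (a time-reversible scheme)."""
        for _ in range(steps):
            a0 = accel(pos)
            pos = pos + vel * dt + 0.5 * a0 * dt**2
            a1 = accel(pos)
            vel = vel + 0.5 * (a0 + a1) * dt
        return pos, vel

    # A tiny "gas": three particles attracted to the origin by springs.
    accel = lambda r: -r                    # F = -kr with k = m = 1 (assumed)
    r0 = np.array([[1.0, 0.0], [-0.5, 0.8], [0.2, -1.1]])   # arbitrary
    v0 = np.array([[0.0, 0.6], [0.3, 0.0], [-0.4, 0.2]])    # arbitrary

    # Run forward, reverse every velocity, run forward again.
    r1, v1 = verlet(r0, v0, accel, dt=1e-3, steps=5000)
    r2, v2 = verlet(r1, -v1, accel, dt=1e-3, steps=5000)

    # The system retraces its history: the initial positions return
    # (with velocities reversed), just as Kelvin describes.
    print(np.allclose(r2, r0), np.allclose(-v2, v0))   # -> True True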
The cults of incomprehensibility and reductionism have led to another deficiency. Both cosmology and elementary particle physics share the same theory of plasma and radiation. These have an independent existence that is seemingly eternal and may be cyclic. Their combinations lead to the sub-atomic particles that belong to the micro world of quantum physics. Atoms are a class by themselves, whose different combinations lead to the perceivable particles and bodies that belong to the macro world of so-called classical physics. The two worlds merge in the stars, which contain the plasma of the micro world and the planetary systems of the macro world. Thus, the study of the evolution of stars can reveal the transition from the micro world to the macro world. For example, the internal structures of the planet Jupiter and of the proton are identical and, like protons, Jupiter-like bodies are abundant among the stars. Yet, instead of a unification of all branches of science, cosmology and nuclear physics have been fragmented into several "specialized" branches.

Here we are reminded of an anecdote related to Lord Chaitanya. During his southern sojourn, a debate was arranged between him and a great scholar of yore. The scholar went on explaining many complex doctrines, while Lord Chaitanya sat quietly and listened with rapt attention, without any response. Finally the scholar told Lord Chaitanya that he was not responding at all to his discourse: was it too complex for him? The scholar was sure from the look on Lord Chaitanya's face that he had not understood anything. To this, Lord Chaitanya replied, "I fully understand what you are talking about. But I was wondering why you are making simple things look so complicated." Then he explained the same theories in plain language, after which the scholar fell at his feet.

There have been very few attempts to distill the essence of all branches and develop "one" science. Each branch has its huge data bank, with specialized technical terms that glorify some person at the cost of a scientific nomenclature, thereby enhancing incomprehensibility. Even if we read the descriptions of all six proverbial blind men repeatedly, one who has not seen an elephant cannot visualize it. This leaves students with little opportunity to get a macro view of all theories and evaluate their inter-relationships. The educational system, with its examination method emphasizing "memorization and reproduction at a specific instant", compounds the problem. Thus, students have to accept many statements and theories as "given", without questioning them even in the face of ambiguities. Further, we have never come across any book on science which does not glorify the discoveries in superlative terms while leaving out the uncomfortable and ambiguous aspects, often with an assurance that they are correct and should be accepted as such. This creates an impression on the minds of young students that theories are to be accepted unquestioningly, which makes them superstitious. Thus, whenever deficiencies have been noticed in any theory, there has been an attempt at patchwork within the broad parameters of the same theory; there have been few attempts to review the theories ab initio. Thus, scientists cannot relate the tempest in a distant land to the flapping of the wings of a butterfly elsewhere. Till now, scientists do not know "what" electrons, photons, and the other subatomic objects that have made the amazing technological revolution possible actually are.
Even the modern description of the nucleus and the nucleons leaves many aspects unexplained. The photoelectric effect, for which Einstein got his Nobel Prize, deals with electrons and photons, but it does not clarify "what" these particles are. The scientists who framed the current theories were not gifted with the benefit of the presently available data. Thus, without undermining their efforts, it is necessary to reformulate the theories ab initio, based on the presently available data. Only in this way can we develop a theory that corresponds to reality. Here is an attempt in this regard from a different perspective. Like the child revealing the secret of the Emperor's clothes, we, a novice in this field, are attempting to point the lamp in the direction of the Sun.

Thousands of papers are read every year in various forums on as-yet-undiscovered particles. This reminds us of the saying: after bathing in the water of the mirage, wearing the flower of the sky on his head, and holding a bow made of the horns of a rabbit, here goes the son of the barren woman! Modern scientists are making precisely similar statements. This is a sheer waste not only of valuable time but also of public money worth trillions, for the pleasure of a few. In addition, it amounts to misguiding the general public for generations. This is unacceptable, because a scientific theory must stand up to experimental scrutiny within a reasonable time period. Till it is proved or disproved, it cannot be accepted, though it need not be rejected either. We cannot continue, for three quarters of a century and more, to develop "theories" based on such unproven postulates in the hope that we may succeed someday, maybe after a couple of centuries! We cannot continue research on the properties of the "flowers of the sky" on the ground that someday they may be discovered.

Experiments with subatomic phenomena show effects that have not been reconciled with our normal view of an objective world. Yet the two realms cannot be treated separately. This implies the existence of two different states, classical and quantum, with different dynamics but linked to each other in some fundamentally similar manner. Since the validity of a physical statement is judged by its correspondence to reality, there is a big question mark over the direction in which theoretical physics is moving. Technology has acquired a pre-eminent position in the global epistemic order. However, engineers and technologists, who progress by trial-and-error methods, have projected themselves as experimental scientists. Their search for new technology has been touted as the progress of science, and questioning its legitimacy is treated as sacrilege. Thus, everything that exposes the hollowness or deficiencies of science is consigned to defenestration. The time has come to seriously consider the role, the ends, and the methods of scientific research. If we are to believe that the sole objective of scientists is to make their impressions mutually consistent, then we lose all motivation for theoretical physics. These impressions are not of the kind that occur in our daily life; they are extremely special, and are produced at great cost in time and effort. Hence it is doubtful whether the mere pleasure their harmony gives to a selected few can justify the huge public spending on such "scientific research".
A report published in the Notices of the American Mathematical Society (October 2005) shows that the theory of dynamical systems, which is used for calculating the trajectories of space flights, and the theory of transition states for chemical reactions share the same mathematics. This is proof of a universally true statement: the microcosm and the macrocosm replicate each other. The only problem is to find the exact correlations. For example, as we have repeatedly pointed out, the internal structure of a proton and that of the planet Jupiter are identical. We will frequently use this and other similarities between the microcosm and the macrocosm (from astrophysics) in this presentation to prove the above statement. We will also frequently refer to the definitions of technical terms as defined precisely in our book "Vaidic Theory of Numbers".
Quantum Field Theory

In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics[1]:xi and is used to construct physical models of subatomic particles (in particle physics) and quasiparticles (in condensed matter physics). QFT treats particles as excited states (also called quanta) of their underlying fields, which are, in a sense, more fundamental than the particles themselves. Interactions between particles are described by interaction terms in the Lagrangian involving their corresponding fields. Each interaction can be visually represented by Feynman diagrams, which are formal computational tools of relativistic perturbation theory.

As a successful theoretical framework today, quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory: quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure. A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field-theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory.

Theoretical background

[Figure: magnetic field lines visualized using iron filings. When a piece of paper is sprinkled with iron filings and placed above a bar magnet, the filings align according to the direction of the magnetic field, forming arcs.]

Quantum field theory is the result of the combination of classical field theory, quantum mechanics, and special relativity.[1]:xi A brief overview of these theoretical precursors is in order.

The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Newton is an "action at a distance": its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact."[2]:4 It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields: a numerical quantity (a vector) assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick.[3]:18

Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845.
He introduced fields as properties of space (even when it is devoid of matter) having physical effects. He argued against "action at a distance" and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day.[2][4]:301[5]:2

The theory of classical electromagnetism was completed in 1862 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current, and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light. Action-at-a-distance was thus conclusively refuted.[2]:19

Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation across different wavelengths.[6] Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators. This process of restricting energies to discrete values is called quantization.[7]:Ch.2 Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect: that light is composed of individual packets of energy called photons (the quanta of light). This implied that electromagnetic radiation, while consisting of waves in the classical electromagnetic field, also exists in the form of particles.[6]

In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies. This is another example of quantization. The Bohr model successfully explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave-particle duality: that microscopic particles exhibit both wave-like and particle-like properties under different circumstances.[6] Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac, and Wolfgang Pauli.[3]

In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way the time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred.[3] It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations.

Two difficulties remained. Observationally, the Schrödinger equation underlying quantum mechanics could explain the stimulated emission of radiation from atoms, where an electron emits a new photon under the action of an external electromagnetic field, but it was unable to explain spontaneous emission, where an electron spontaneously decreases in energy and emits a photon even without the action of an external electromagnetic field.
Theoretically, the Schrödinger equation could not describe photons and was inconsistent with the principles of special relativity: it treats time as an ordinary number while promoting spatial coordinates to linear operators.[6]

Quantum electrodynamics

Quantum field theory naturally began with the study of electromagnetic interactions, as the electromagnetic field was the only known classical field as of the 1920s.[8] Through the works of Born, Heisenberg, and Pascual Jordan in 1925-1926, a quantum theory of the free electromagnetic field (one with no interactions with matter) was developed via canonical quantization by treating the electromagnetic field as a set of quantum harmonic oscillators.[8] With the exclusion of interactions, however, such a theory was as yet incapable of making quantitative predictions about the real world.[3]

In his seminal 1927 paper "The quantum theory of the emission and absorption of radiation", Dirac coined the term quantum electrodynamics (QED), a theory that adds, on top of the terms describing the free electromagnetic field, an interaction term between the electric current density and the electromagnetic vector potential. Using first-order perturbation theory, he successfully explained the phenomenon of spontaneous emission. According to the uncertainty principle in quantum mechanics, quantum harmonic oscillators cannot remain stationary; they have a non-zero minimum energy and must always be oscillating, even in the lowest energy state (the ground state). Therefore, even in a perfect vacuum, there remains an oscillating electromagnetic field having zero-point energy. It is this quantum fluctuation of electromagnetic fields in the vacuum that "stimulates" the spontaneous emission of radiation by electrons in atoms. Dirac's theory was hugely successful in explaining both the emission and absorption of radiation by atoms; by applying second-order perturbation theory, it was able to account for the scattering of photons, resonance fluorescence, as well as non-relativistic Compton scattering. Nonetheless, the application of higher-order perturbation theory was plagued by problematic infinities in calculations.[6]:71

In 1928, Dirac wrote down a wave equation that described relativistic electrons: the Dirac equation. It had the following important consequences: the spin of an electron is 1/2; the electron g-factor is 2; it led to the correct Sommerfeld formula for the fine structure of the hydrogen atom; and it could be used to derive the Klein-Nishina formula for relativistic Compton scattering. Although the results were fruitful, the theory also apparently implied the existence of negative energy states, which would cause atoms to be unstable, since they could always decay to lower energy states by the emission of radiation.[6]:71-72

The prevailing view at the time was that the world was composed of two very different ingredients: material particles (such as electrons) and quantum fields (such as photons). Material particles were considered to be eternal, with their physical state described by the probabilities of finding each particle in any given region of space or range of velocities. On the other hand, photons were considered merely the excited states of the underlying quantized electromagnetic field, and could be freely created or destroyed. It was between 1928 and 1930 that Jordan, Eugene Wigner, Heisenberg, Pauli, and Enrico Fermi discovered that material particles could also be seen as excited states of quantum fields.
Just as photons are excited states of the quantized electromagnetic field, so each type of particle had its corresponding quantum field: an electron field, a proton field, etc. Given enough energy, it would now be possible to create material particles. Building on this idea, Fermi proposed in 1932 an explanation for β decay known as Fermi's interaction. Atomic nuclei do not contain electrons per se, but in the process of decay an electron is created out of the surrounding electron field, analogous to the photon created from the surrounding electromagnetic field in the radiative decay of an excited atom.[3]

It was realized in 1929 by Dirac and others that the negative energy states implied by the Dirac equation could be removed by assuming the existence of particles with the same mass as electrons but opposite electric charge. This not only ensured the stability of atoms, but was also the first proposal of the existence of antimatter. Indeed, the evidence for positrons was discovered in 1932 by Carl David Anderson in cosmic rays. With enough energy, such as by absorbing a photon, an electron-positron pair could be created, a process called pair production; the reverse process, annihilation, could also occur, with the emission of a photon. This showed that particle numbers need not be fixed during an interaction. Historically, however, positrons were at first thought of as "holes" in an infinite electron sea, rather than as a new kind of particle, and this theory was referred to as the Dirac hole theory.[6]:72[3] QFT naturally incorporated antiparticles in its formalism.[3]

Infinities and renormalization

Robert Oppenheimer showed in 1930 that higher-order perturbative calculations in QED always resulted in infinite quantities, such as the electron self-energy and the vacuum zero-point energy of the electron and photon fields,[6] suggesting that the computational methods at the time could not properly deal with interactions involving photons with extremely high momenta.[3] It was not until 20 years later that a systematic approach for removing such infinities was developed.

A series of papers was published between 1934 and 1938 by Ernst Stueckelberg that established a relativistically invariant formulation of QFT. In 1947, Stueckelberg also independently developed a complete renormalization procedure. Unfortunately, these achievements were not understood and recognized by the theoretical community.[6] Faced with these infinities, John Archibald Wheeler and Heisenberg proposed, in 1937 and 1943 respectively, to supplant the problematic QFT with the so-called S-matrix theory. Since the specific details of microscopic interactions are inaccessible to observations, the theory should only attempt to describe the relationships between a small number of observables (e.g. the energy of an atom) in an interaction, rather than be concerned with the microscopic minutiae of the interaction. In 1945, Richard Feynman and Wheeler daringly suggested abandoning QFT altogether and proposed action-at-a-distance as the mechanism of particle interactions.[3]

In 1947, Willis Lamb and Robert Retherford measured the minute difference in the 2S₁/₂ and 2P₁/₂ energy levels of the hydrogen atom, also called the Lamb shift.
By ignoring the contribution of photons whose energy exceeds the electron mass, Hans Bethe successfully estimated the numerical value of the Lamb shift.[6][3] Subsequently, Norman Myles Kroll, Lamb, James Bruce French, and Victor Weisskopf again confirmed this value, using an approach in which infinities cancelled other infinities to leave finite quantities. However, this method was clumsy and unreliable and could not be generalized to other calculations.[6]

The breakthrough eventually came around 1950, when a more robust method for eliminating infinities was developed by Julian Schwinger, Feynman, Freeman Dyson, and Shinichiro Tomonaga. The main idea is to replace the initial, so-called "bare", parameters (mass, electric charge, etc.), which have no physical meaning, by their finite measured values. To cancel the apparently infinite parameters, one has to introduce additional, infinite "counterterms" into the Lagrangian. This systematic computational procedure is known as renormalization and can be applied to arbitrary order in perturbation theory.[6]

By applying the renormalization procedure, calculations were finally made that explained the electron's anomalous magnetic moment (the deviation of the electron g-factor from 2) and vacuum polarisation. These results agreed with experimental measurements to a remarkable degree, thus marking the end of the "war against infinities".[6] At the same time, Feynman introduced the path integral formulation of quantum mechanics and Feynman diagrams.[8] The latter can be used to visually and intuitively organise, and to help compute, the terms in the perturbative expansion. Each diagram can be interpreted as paths of particles in an interaction, with each vertex and line having a corresponding mathematical expression; the product of these expressions gives the scattering amplitude of the interaction represented by the diagram.[1] It was with the invention of the renormalization procedure and Feynman diagrams that QFT finally arose as a complete theoretical framework.[8]

Given the tremendous success of QED, many theorists believed, in the few years after 1949, that QFT could soon provide an understanding of all microscopic phenomena, not only the interactions between photons, electrons, and positrons. Contrary to this optimism, QFT entered yet another period of depression that lasted for almost two decades.[3]

The first obstacle was the limited applicability of the renormalization procedure. In perturbative calculations in QED, all infinite quantities could be eliminated by redefining a small (finite) number of physical quantities (namely the mass and charge of the electron). Dyson proved in 1949 that this is only possible for a small class of theories called "renormalizable theories", of which QED is an example. However, most theories, including the Fermi theory of the weak interaction, are "non-renormalizable": any perturbative calculation in these theories beyond the first order results in infinities that cannot be removed by redefining a finite number of physical quantities.[3]

The second major problem stemmed from the limited validity of the Feynman diagram method, which is based on a series expansion in perturbation theory. In order for the series to converge and for low-order calculations to be a good approximation, the coupling constant, in which the series is expanded, must be a sufficiently small number. The coupling constant in QED is the fine-structure constant α ≈
1/137, which is small enough that only the simplest, lowest-order Feynman diagrams need to be considered in realistic calculations. In contrast, the coupling constant in the strong interaction is roughly of the order of one, making complicated, higher-order Feynman diagrams just as important as simple ones. There was thus no way of deriving reliable quantitative predictions for the strong interaction using perturbative QFT methods.[3]

With these difficulties looming, many theorists began to turn away from QFT. Some focused on symmetry principles and conservation laws, while others picked up the old S-matrix theory of Wheeler and Heisenberg. QFT was used heuristically as a guiding principle, but not as a basis for quantitative calculations.[3]

Standard Model

[Figure: elementary particles of the Standard Model: six types of quarks, six types of leptons, four types of gauge bosons that carry the fundamental interactions, and the Higgs boson, which endows elementary particles with mass.]

In 1954, Yang Chen-Ning and Robert Mills generalised the local symmetry of QED, leading to non-Abelian gauge theories (also known as Yang-Mills theories), which are based on more complicated local symmetry groups.[9]:5 In QED, (electrically) charged particles interact via the exchange of photons, while in non-Abelian gauge theory, particles carrying a new type of "charge" interact via the exchange of massless gauge bosons. Unlike photons, these gauge bosons themselves carry charge.[3][10]

Sheldon Glashow developed a non-Abelian gauge theory that unified the electromagnetic and weak interactions in 1960. In 1964, Abdus Salam and John Clive Ward arrived at the same theory through a different path. This theory, nevertheless, was non-renormalizable.[11] Peter Higgs, Robert Brout, and François Englert proposed in 1964 that the gauge symmetry in Yang-Mills theories could be broken by a mechanism called spontaneous symmetry breaking, through which originally massless gauge bosons could acquire mass.[9]

By combining the earlier theory of Glashow, Salam, and Ward with the idea of spontaneous symmetry breaking, Steven Weinberg wrote down in 1967 a theory describing electroweak interactions between all leptons and the effects of the Higgs boson. His theory was at first mostly ignored,[11][9] until it was brought back to light in 1971 by Gerard 't Hooft's proof that non-Abelian gauge theories are renormalizable. The electroweak theory of Weinberg and Salam was extended from leptons to quarks in 1970 by Glashow, John Iliopoulos, and Luciano Maiani, marking its completion.[11]

Harald Fritzsch, Murray Gell-Mann, and Heinrich Leutwyler discovered in 1971 that certain phenomena involving the strong interaction could also be explained by non-Abelian gauge theory, and quantum chromodynamics (QCD) was born. In 1973, David Gross, Frank Wilczek, and Hugh David Politzer showed that non-Abelian gauge theories are "asymptotically free", meaning that, under renormalization, the coupling constant of the strong interaction decreases as the interaction energy increases. (Similar discoveries had been made numerous times before, but they had been largely ignored.)[9] Therefore, at least in high-energy interactions, the coupling constant in QCD becomes sufficiently small to warrant a perturbative series expansion, making quantitative predictions for the strong interaction possible.[3] These theoretical breakthroughs brought about a renaissance in QFT.
The full theory, which includes the electroweak theory and quantum chromodynamics, is referred to today as the Standard Model of elementary particles.[12] The Standard Model successfully describes all fundamental interactions except gravity, and its many predictions have met with remarkable experimental confirmation in subsequent decades.[8] The Higgs boson, central to the mechanism of spontaneous symmetry breaking, was finally detected in 2012 at CERN, marking the complete verification of the existence of all constituents of the Standard Model.[13]

Other developments

The 1970s saw the development of non-perturbative methods in non-Abelian gauge theories. The 't Hooft-Polyakov monopole was discovered by 't Hooft and Alexander Polyakov, flux tubes by Holger Bech Nielsen and Poul Olesen, and instantons by Polyakov and others. These objects are inaccessible through perturbation theory.[8]

Supersymmetry also appeared in the same period. The first supersymmetric QFT in four dimensions was built by Yuri Golfand and Evgeny Likhtman in 1970, but their result failed to garner widespread interest due to the Iron Curtain. Supersymmetry only took off in the theoretical community after the work of Julius Wess and Bruno Zumino in 1973.[8]

Among the four fundamental interactions, gravity remains the only one that lacks a consistent QFT description. Various attempts at a theory of quantum gravity led to the development of string theory,[8] itself a type of two-dimensional QFT with conformal symmetry.[14] Joël Scherk and John Schwarz first proposed in 1974 that string theory could be the quantum theory of gravity.[15]

Condensed matter physics

Although quantum field theory arose from the study of interactions between elementary particles, it has been successfully applied to other physical systems, particularly to many-body systems in condensed matter physics. Historically, the Higgs mechanism of spontaneous symmetry breaking was a result of Yoichiro Nambu's application of superconductor theory to elementary particles, while the concept of renormalization came out of the study of second-order phase transitions in matter.[16]

Soon after the introduction of photons, Einstein performed the quantization procedure on vibrations in a crystal, leading to the first quasiparticle: the phonon. Lev Landau claimed that low-energy excitations in many condensed matter systems could be described in terms of interactions between a set of quasiparticles. The Feynman diagram method of QFT was naturally well suited to the analysis of various phenomena in condensed matter systems.[17] Gauge theory is used to describe the quantization of magnetic flux in superconductors, the resistivity in the quantum Hall effect, as well as the relation between frequency and voltage in the AC Josephson effect.[17]

For simplicity, natural units are used in the following sections, in which the reduced Planck constant ħ and the speed of light c are both set to one.

Classical fields

A classical field is a function of spatial and time coordinates.[18] Examples include the gravitational field in Newtonian gravity g(x, t) and the electric field E(x, t) and magnetic field B(x, t) in classical electromagnetism. A classical field can be thought of as a numerical quantity assigned to every point in space that changes in time. Hence, it has infinite degrees of freedom.[18]

Many phenomena exhibiting quantum mechanical properties cannot be explained by classical fields alone.
Phenomena such as the photoelectric effect are best explained by discrete particles (photons), rather than by a spatially continuous field. The goal of quantum field theory is to describe various quantum mechanical phenomena using a modified concept of fields. Canonical quantisation and path integrals are two common formulations of QFT.[19]:61 To motivate the fundamentals of QFT, an overview of classical field theory is in order.

The simplest classical field is a real scalar field: a real number at every point in space that changes in time. It is denoted as φ(x, t), where x is the position vector and t is the time. Suppose the Lagrangian of the field is

  L = ∫ d³x [½φ̇² − ½(∇φ)² − ½m²φ²],

where φ̇ is the time derivative of the field, ∇ is the gradient operator, and m is a real parameter (the "mass" of the field). Applying the Euler-Lagrange equation to the Lagrangian:[1]

  ∂/∂t [∂L/∂(∂φ/∂t)] + Σᵢ ∂/∂xⁱ [∂L/∂(∂φ/∂xⁱ)] − ∂L/∂φ = 0,

we obtain the equations of motion for the field, which describe the way it varies in time and space:

  ∂²φ/∂t² − ∇²φ + m²φ = 0.

This is known as the Klein-Gordon equation.[1] The Klein-Gordon equation is a wave equation, so its solutions can be expressed as a sum of normal modes (obtained via Fourier transform) as follows:

  φ(x, t) = ∫ d³p/(2π)³ · 1/√(2ω_p) · (a_p e^{−iω_p t + ip·x} + a_p* e^{iω_p t − ip·x}),

where a is a complex number (normalised by convention), * denotes complex conjugation, and ω_p is the frequency of the normal mode:

  ω_p = √(|p|² + m²).

Thus each normal mode corresponding to a single p can be seen as a classical harmonic oscillator with frequency ω_p.[1]

Canonical quantisation

The quantisation procedure for the above classical field is analogous to the promotion of a classical harmonic oscillator to a quantum harmonic oscillator. The displacement of a classical harmonic oscillator is described by

  x(t) = 1/√(2ω) · (a e^{−iωt} + a* e^{iωt}),

where a is a complex number (normalised by convention) and ω is the oscillator's frequency. Note that x is here the displacement of a particle in simple harmonic motion from the equilibrium position, which should not be confused with the spatial label x of a field. For a quantum harmonic oscillator, x(t) is promoted to a linear operator x̂(t):

  x̂(t) = 1/√(2ω) · (â e^{−iωt} + â† e^{iωt}).

The complex numbers a and a* are replaced by the annihilation operator â and the creation operator â†, respectively, where † denotes Hermitian conjugation. The commutation relation between the two is

  [â, â†] = 1.

The vacuum state |0⟩, which is the lowest energy state, is defined by

  â|0⟩ = 0.

Any quantum state of a single harmonic oscillator can be obtained from |0⟩ by successively applying the creation operator â†:[1]

  |n⟩ = 1/√(n!) · (â†)ⁿ |0⟩.

By the same token, the aforementioned real scalar field φ, which corresponds to x in the single harmonic oscillator, is also promoted to a quantum field operator φ̂, while the annihilation operator â_p and the creation operator â_p† replace a_p and a_p* for each particular p:

  φ̂(x, t) = ∫ d³p/(2π)³ · 1/√(2ω_p) · (â_p e^{−iω_p t + ip·x} + â_p† e^{iω_p t − ip·x}).

Their commutation relations are:[1]

  [â_p, â_p′†] = (2π)³ δ³(p − p′),  [â_p, â_p′] = [â_p†, â_p′†] = 0,

where δ is the Dirac delta function. The vacuum state |0⟩ is defined by â_p|0⟩ = 0 for all p. Any quantum state of the field can be obtained from |0⟩ by successively applying creation operators â_p†, e.g.[1]

  â_p₃† â_p₂† â_p₁† |0⟩.

Although the field appearing in the Lagrangian is spatially continuous, the quantum states of the field are discrete. While the state space of a single quantum harmonic oscillator contains all the discrete energy states of one oscillating particle, the state space of a quantum field contains the discrete energy levels of an arbitrary number of particles.
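The ladder-operator algebra just described can be checked concretely in a truncated basis. The following sketch of ours keeps only the lowest N Fock states (the truncation is our simplification; a genuine Fock space is infinite-dimensional, so the commutation relation fails only in the last, truncated state):

    import numpy as np

    # Creation/annihilation operators for a single quantum harmonic
    # oscillator, truncated to the lowest N Fock states |0>, ..., |N-1>.
    N = 8
    n = np.arange(1, N)
    a = np.diag(np.sqrt(n), k=1)        # annihilation: a|n> = sqrt(n)|n-1>
    adag = a.conj().T                   # creation operator (Hermitian conjugate)

    # Commutator [a, a†] = 1, exact except in the last truncated state.
    comm = a @ adag - adag @ a
    print(np.round(np.diag(comm), 3))   # -> [1. 1. 1. 1. 1. 1. 1. -7.]

    # Vacuum state |0>, and a two-quantum state built by applying a† twice.
    vac = np.zeros(N); vac[0] = 1.0
    state = adag @ adag @ vac / np.sqrt(2.0)   # the normalized state |2>
    print(np.allclose(a @ vac, 0))      # a|0> = 0  -> True

    # Hamiltonian H = ω(a†a + 1/2) has the discrete spectrum (n + 1/2)ω.
    omega = 1.0
    H = omega * (adag @ a + 0.5 * np.eye(N))
    print(np.round(np.diag(H), 2))      # -> [0.5 1.5 2.5 ...]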
This state space of the quantum field is known as a Fock space, which can account for the fact that particle numbers are not fixed in relativistic quantum systems.[20] The process of quantising an arbitrary number of particles instead of a single particle is often also called second quantisation.[1]

The preceding procedure is a direct application of non-relativistic quantum mechanics and can be used to quantise (complex) scalar fields, Dirac fields,[1] vector fields (e.g. the electromagnetic field), and even strings.[21] However, creation and annihilation operators are only well defined in the simplest theories, those that contain no interactions (so-called free theories). In the case of the real scalar field, the existence of these operators was a consequence of the decomposition of solutions of the classical equations of motion into a sum of normal modes. To perform calculations on any realistic interacting theory, perturbation theory is necessary.

The Lagrangian of any quantum field in nature would contain interaction terms in addition to the free-theory terms. For example, a quartic interaction term could be introduced to the Lagrangian of the real scalar field:[1]

  L = ½ ∂_μφ ∂^μφ − ½m²φ² − (λ/4!) φ⁴,

where μ is a spacetime index, ∂₀ = ∂/∂t, ∂₁ = ∂/∂x¹, etc. The summation sign over the index μ has been omitted, following the Einstein notation. If the parameter λ is sufficiently small, then the interacting theory described by the above Lagrangian can be considered as a small perturbation from the free theory.

Path integrals

The path integral formulation of QFT is concerned with the direct computation of the scattering amplitude of a certain interaction process, rather than with the establishment of operators and state spaces. To calculate the probability amplitude for a system to evolve from some initial state |φ_I⟩ at time t = 0 to some final state |φ_F⟩ at t = T, the total time T is divided into N small intervals. The overall amplitude is the product of the amplitudes of evolution within each interval, integrated over all intermediate states. Let H be the Hamiltonian (i.e. the generator of time evolution); then[19]

  ⟨φ_F| e^{−iHT} |φ_I⟩ = ∫ dφ₁ ⋯ dφ_{N−1} ⟨φ_F| e^{−iHT/N} |φ_{N−1}⟩ ⋯ ⟨φ₁| e^{−iHT/N} |φ_I⟩.

Taking the limit N → ∞, the above product of integrals becomes the Feynman path integral:[1][19]

  ⟨φ_F| e^{−iHT} |φ_I⟩ = ∫ Dφ(t) exp( i ∫₀ᵀ dt L ),

where L is the Lagrangian involving φ and its derivatives with respect to the spatial and time coordinates, obtained from the Hamiltonian H via Legendre transform. The initial and final conditions of the path integral are respectively

  φ(0) = φ_I,  φ(T) = φ_F.

In other words, the overall amplitude is the sum over the amplitudes of every possible path between the initial and final states, where the amplitude of a path is given by the exponential in the integrand.

Two-point correlation function

Now we assume that the theory contains interactions whose Lagrangian terms are a small perturbation from the free theory. In calculations, one often encounters expressions such as

  ⟨Ω| T{φ(x) φ(y)} |Ω⟩,

where x and y are position four-vectors, T is the time-ordering operator (namely, it orders x and y according to their time component, later time on the left and earlier time on the right), and |Ω⟩ is the ground state (vacuum state) of the interacting theory. This expression, known as the two-point correlation function or the two-point Green's function, represents the probability amplitude for the field to propagate from y to x.[1]

In canonical quantisation, the two-point correlation function can be written as:[1]

  ⟨Ω| T{φ(x)φ(y)} |Ω⟩ = lim_{T′→∞(1−iε)} ⟨0| T{φ_I(x) φ_I(y) exp[−i ∫_{−T′}^{T′} dt H_I(t)]} |0⟩ / ⟨0| T{exp[−i ∫_{−T′}^{T′} dt H_I(t)]} |0⟩,

where ε is an infinitesimal number, φ_I is the field operator under the free theory, and H_I is the interaction Hamiltonian term. For the φ⁴ theory, it is[1]

  H_I = ∫ d³x (λ/4!) φ_I(x)⁴.

Since λ
is a small parameter, the exponential function can be expanded into a Taylor series in λ and computed term by term. This equation is useful in that it expresses the field operator and ground state of the interacting theory, which are difficult to define, in terms of their counterparts in the free theory, which are well defined. In the path integral formulation, the two-point correlation function can be written as:[1]

  ⟨Ω| T{φ(x)φ(y)} |Ω⟩ = lim_{T′→∞(1−iε)} ∫ Dφ φ(x)φ(y) exp[i ∫_{−T′}^{T′} d⁴x′ L] / ∫ Dφ exp[i ∫_{−T′}^{T′} d⁴x′ L],

where L is the Lagrangian density. As in the previous paragraph, the exponential factor involving the interaction term can also be expanded as a series in λ.

According to Wick's theorem, any n-point correlation function in the free theory can be written as a sum of products of two-point correlation functions. For example,

  ⟨0| T{φ(x₁)φ(x₂)φ(x₃)φ(x₄)} |0⟩ = ⟨0|T{φ(x₁)φ(x₂)}|0⟩ ⟨0|T{φ(x₃)φ(x₄)}|0⟩ + ⟨0|T{φ(x₁)φ(x₃)}|0⟩ ⟨0|T{φ(x₂)φ(x₄)}|0⟩ + ⟨0|T{φ(x₁)φ(x₄)}|0⟩ ⟨0|T{φ(x₂)φ(x₃)}|0⟩.

Since correlation functions in the interacting theory can be expressed in terms of those in the free theory, only the latter need to be evaluated in order to calculate all physical quantities in the (perturbative) interacting theory.[1] Either through canonical quantisation or through path integrals, one can obtain:

  D_F(x − y) = lim_{ε→0} ∫ d⁴p/(2π)⁴ · i e^{−ip·(x−y)} / (p² − m² + iε).

This is known as the Feynman propagator for the real scalar field.[1][19]

Feynman diagram

Correlation functions in the interacting theory can be written as a perturbation series. Each term in the series is a product of Feynman propagators in the free theory and can be represented visually by a Feynman diagram. For example, the λ¹ term in the two-point correlation function in the φ⁴ theory is

  (−iλ/4!) ∫ d⁴z ⟨0| T{φ(x) φ(y) φ(z)⁴} |0⟩.

After applying Wick's theorem, one of the terms is

  (−iλ/2) ∫ d⁴z D_F(x − z) D_F(y − z) D_F(z − z),

whose corresponding Feynman diagram is the φ⁴ one-loop diagram [figure: Phi-4 one-loop]. Every point corresponds to a single φ field factor. Points labelled with x and y are called external points, while those in the interior are called internal points or vertices (there is one in this diagram). The value of the corresponding term can be obtained from the diagram by following "Feynman rules": assign the factor −iλ ∫ d⁴z to every vertex, and the Feynman propagator D_F(x₁ − x₂) to every line with end points x₁ and x₂. The product of the factors corresponding to every element in the diagram, divided by the "symmetry factor" (2 for this diagram), gives the expression for the term in the perturbation series.[1]

In order to compute the n-point correlation function to the k-th order, list all valid Feynman diagrams with n external points and k or fewer vertices, and then use the Feynman rules to obtain the expression for each term. To be precise, the n-point correlation function is equal to the sum of the expressions corresponding to all connected diagrams with n external points. (Connected diagrams are those in which every vertex is connected to an external point through lines. Components that are totally disconnected from external lines are sometimes called "vacuum bubbles".) In the φ⁴ interaction theory discussed above, every vertex must have four legs.[1]

In realistic applications, the scattering amplitude of a certain interaction, or the decay rate of a particle, can be computed from the S-matrix, which itself can be found using the Feynman diagram method.[1] Feynman diagrams devoid of "loops" are called tree-level diagrams, which describe the lowest-order interaction processes; those containing n loops are referred to as n-loop diagrams, which describe higher-order contributions, or radiative corrections, to the interaction.[19] Lines whose end points are vertices can be thought of as the propagation of virtual particles.[1] Feynman rules can be used to directly evaluate tree-level diagrams.
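Wick's theorem above reduces any free-theory n-point function to a sum over complete pairings of the points. A small sketch of ours enumerates these pairings and confirms the counting: 3 pairings for four points (the three products written above) and 15 = 5!! for six points.

    def pairings(points):
        """Yield all complete pairings (Wick contractions) of an even-length list."""
        if not points:
            yield []
            return
        first, rest = points[0], points[1:]
        for i, partner in enumerate(rest):
            remaining = rest[:i] + rest[i + 1:]
            for tail in pairings(remaining):
                yield [(first, partner)] + tail

    four = list(pairings(["x1", "x2", "x3", "x4"]))
    print(len(four))                    # -> 3, matching the expansion above
    for p in four:
        print(p)                        # the three pairings of x1..x4

    six = list(pairings([f"x{i}" for i in range(1, 7)]))
    print(len(six))                     # -> 15 = 5!!, as Wick's theorem predicts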
However, naïve computation of loop diagrams such as the one shown above results in divergent momentum integrals, which seems to imply that almost all terms in the perturbative expansion are infinite. The renormalisation procedure is a systematic process for removing such infinities.

Parameters appearing in the Lagrangian, such as the mass m and the coupling constant λ, have no physical meaning: m, λ, and the field strength φ are not experimentally measurable quantities and are referred to here as the bare mass, bare coupling constant, and bare field, respectively. The physical mass and coupling constant are measured in some interaction process and are generally different from the bare quantities. While computing physical quantities from this interaction process, one may limit the domain of the divergent momentum integrals to lie below some momentum cut-off Λ, obtain expressions for the physical quantities, and then take the limit Λ → ∞. This is an example of regularisation, a class of methods for treating divergences in QFT, with Λ being the regulator.

The approach illustrated above is called bare perturbation theory, as the calculations involve only the bare quantities such as the mass and coupling constant. A different approach, called renormalised perturbation theory, is to use physically meaningful quantities from the very beginning. In the case of φ⁴ theory, the field strength is first redefined:

  φ = Z^(1/2) φ_r,

where φ is the bare field, φ_r is the renormalised field, and Z is a constant to be determined. The Lagrangian density becomes:

  L = ½ ∂_μφ_r ∂^μφ_r − ½m_r²φ_r² − (λ_r/4!)φ_r⁴ + ½δ_Z ∂_μφ_r ∂^μφ_r − ½δ_m φ_r² − (δ_λ/4!)φ_r⁴,

where m_r and λ_r are the experimentally measurable, renormalised mass and coupling constant, respectively, and δ_Z, δ_m, and δ_λ are constants to be determined. The first three terms are the φ⁴ Lagrangian density written in terms of the renormalised quantities, while the latter three terms are referred to as "counterterms". As the Lagrangian now contains more terms, the Feynman diagrams must include additional elements, each with their own Feynman rules. The procedure is outlined as follows. First select a regularisation scheme (such as the cut-off regularisation introduced above, or dimensional regularization); call the regulator Λ. Compute the Feynman diagrams, in which the divergent terms will depend on Λ. Then define δ_Z, δ_m, and δ_λ such that the Feynman diagrams for the counterterms exactly cancel the divergent terms in the normal Feynman diagrams when the limit Λ → ∞ is taken. In this way, meaningful finite quantities are obtained.[1]

It is only possible to eliminate all infinities and obtain a finite result in renormalisable theories; in non-renormalisable theories, infinities cannot be removed by the redefinition of a small number of parameters. The Standard Model of elementary particles is a renormalisable QFT,[1] while quantum gravity is non-renormalisable.[1][19]

Renormalisation group

The renormalisation group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.[1] The way in which each parameter changes with scale is described by its β function.[1] Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan-Symanzik equation.[1] As an example, the coupling constant in QED, namely the elementary charge e, has the following β function:

  β(e) = μ de/dμ = e³/(12π²) + ⋯,

where μ is the energy scale under which the measurement of e is performed. This differential equation implies that the observed elementary charge increases as the scale increases.[22]
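The running of the coupling described by this β function can be integrated numerically. The sketch below (ours; one-loop only, and with the electron as the only charged particle, so it understates the full Standard Model running) evolves α = e²/4π upward from its measured low-energy value.

    import numpy as np

    # One-loop QED running: β(e) = e³/(12π²), equivalently, for α = e²/4π,
    # dα/dlnμ = 2α²/(3π). Electron loop only (an assumption of this sketch).
    def run_alpha(alpha0, mu0, mu, steps=100_000):
        """Integrate dα/dlnμ = 2α²/(3π) from scale mu0 up to mu (Euler steps)."""
        alpha = alpha0
        dt = (np.log(mu) - np.log(mu0)) / steps
        for _ in range(steps):
            alpha += dt * 2.0 * alpha**2 / (3.0 * np.pi)
        return alpha

    alpha_me = 1.0 / 137.036                 # α measured at the electron mass
    m_e = 0.000511                           # electron mass in GeV
    for mu in (1.0, 91.19, 1.0e6):           # GeV; 91.19 GeV ≈ the Z mass
        a = run_alpha(alpha_me, m_e, mu)
        print(f"μ = {mu:>10.4g} GeV   1/α(μ) ≈ {1.0 / a:7.2f}")

    # The observed charge grows (1/α falls) as the scale increases, as the
    # β function implies; the full Standard Model running is faster because
    # every charged particle contributes.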
Renormalisation group

The renormalisation group, developed by Kenneth Wilson, is a mathematical apparatus used to study the changes in physical parameters (coefficients in the Lagrangian) as the system is viewed at different scales.[1] The way in which each parameter changes with scale is described by its β function.[1] Correlation functions, which underlie quantitative physical predictions, change with scale according to the Callan-Symanzik equation.[1]

As an example, the coupling constant in QED, namely the elementary charge e, has the following β function:

β(e) = de/d(ln μ) = e³/(12π²) + (higher-order terms),

where μ is the energy scale under which the measurement of e is performed. This differential equation implies that the observed elementary charge increases as the scale increases.[22] The renormalised coupling constant, which changes with the energy scale, is also called the running coupling constant.[1]

The coupling constant g in quantum chromodynamics, a non-Abelian gauge theory based on the symmetry group SU(3), has the following β function:

β(g) = dg/d(ln μ) = −(11 − 2N_f/3) g³/(16π²) + (higher-order terms),

where N_f is the number of quark flavours. In the case where N_f ≤ 16 (the Standard Model has N_f = 6), the coupling constant g decreases as the energy scale increases. Hence, while the strong interaction is strong at low energies, it becomes very weak in high-energy interactions, a phenomenon known as asymptotic freedom.[1]

Conformal field theories (CFTs) are special QFTs that admit conformal symmetry. They are insensitive to changes in the scale, as all their coupling constants have a vanishing β function. (The converse is not true, however -- the vanishing of all β functions does not imply conformal symmetry of the theory.)[23] Examples include string theory[14] and N = 4 supersymmetric Yang-Mills theory.[24]

According to Wilson's picture, every QFT is fundamentally accompanied by its energy cut-off Λ, i.e. the theory is no longer valid at energies higher than Λ, and all degrees of freedom above the scale Λ are to be omitted. For example, the cut-off could be the inverse of the atomic spacing in a condensed matter system, and in elementary particle physics it could be associated with the fundamental "graininess" of spacetime caused by quantum fluctuations in gravity. The cut-off scale of theories of particle interactions lies far beyond current experiments. Even if the theory were very complicated at that scale, as long as its couplings are sufficiently weak, it must be described at low energies by a renormalisable effective field theory.[1] The difference between renormalisable and non-renormalisable theories is that the former are insensitive to details at high energies, whereas the latter do depend on them.[8] According to this view, non-renormalisable theories are to be seen as low-energy effective theories of a more fundamental theory. The failure to remove the cut-off Λ from calculations in such a theory merely indicates that new physical phenomena appear at scales above Λ, where a new theory is necessary.[19]
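The running implied by these β functions can be made concrete by integrating them numerically. A minimal sketch (one-loop expressions only, with illustrative initial couplings rather than measured values):

    import numpy as np
    from scipy.integrate import solve_ivp

    def beta_qed(t, e):
        # One-loop QED: de/d(ln mu) = e^3 / (12 pi^2); e grows with energy.
        return e**3 / (12 * np.pi**2)

    def beta_qcd(t, g, nf=6):
        # One-loop QCD: dg/d(ln mu) = -(11 - 2 nf / 3) g^3 / (16 pi^2); g shrinks.
        return -(11 - 2 * nf / 3) * g**3 / (16 * np.pi**2)

    t_eval = np.linspace(0.0, 10.0, 6)  # t = ln(mu / mu0)
    print(solve_ivp(beta_qed, (0, 10), [0.30], t_eval=t_eval).y[0])  # increasing
    print(solve_ivp(beta_qcd, (0, 10), [1.50], t_eval=t_eval).y[0])  # decreasing

The opposite signs of the two β functions carry the physics: QED's coupling creeps upward with energy, while QCD's falls, which is asymptotic freedom in miniature.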
Other theories

The quantisation and renormalisation procedures outlined in the preceding sections are performed for the free theory and φ⁴ theory of the real scalar field. A similar process can be done for other types of fields, including the complex scalar field, the vector field, and the Dirac field, as well as other types of interaction terms, including the electromagnetic interaction and the Yukawa interaction.

As an example, quantum electrodynamics contains a Dirac field ψ representing the electron field and a vector field A_μ representing the electromagnetic field (photon field). (Despite its name, the quantum electromagnetic "field" actually corresponds to the classical electromagnetic four-potential, rather than the classical electric and magnetic fields.) The full QED Lagrangian density is:

ℒ = ψ̄(iγ^μ D_μ − m)ψ − ¼ F_{μν} F^{μν},

where γ^μ are Dirac matrices, D_μ = ∂_μ + ieA_μ is the gauge covariant derivative, and F_{μν} = ∂_μA_ν − ∂_νA_μ is the electromagnetic field strength. The parameters in this theory are the (bare) electron mass m and the (bare) elementary charge e. The first and second terms in the Lagrangian density correspond to the free Dirac field and free vector fields, respectively. The last term describes the interaction between the electron and photon fields, which is treated as a perturbation from the free theories.[1]

A typical tree-level Feynman diagram in QED describes an electron and a positron annihilating into an off-shell photon, which then decays into a new electron-positron pair. Time runs from left to right. Arrows pointing forward in time represent the propagation of electrons, while those pointing backward in time represent the propagation of positrons. A wavy line represents the propagation of a photon. Each vertex in QED Feynman diagrams must have an incoming and an outgoing fermion (positron/electron) leg as well as a photon leg.

Gauge symmetry

If the following transformation to the fields is performed at every spacetime point x (a local transformation), then the QED Lagrangian remains unchanged, or invariant:

ψ(x) → e^{iα(x)} ψ(x),   A_μ(x) → A_μ(x) − (1/e) ∂_μ α(x),

where α(x) is any function of spacetime coordinates. If a theory's Lagrangian (or more precisely the action) is invariant under a certain local transformation, then the transformation is referred to as a gauge symmetry of the theory.[1] Gauge symmetries form a group at every spacetime point. In the case of QED, the successive application of two different local symmetry transformations e^{iα(x)} and e^{iα′(x)} is yet another symmetry transformation e^{i[α(x)+α′(x)]}. For any α(x), e^{iα(x)} is an element of the U(1) group, thus QED is said to have U(1) gauge symmetry.[1] The photon field A_μ may be referred to as the U(1) gauge boson.

U(1) is an Abelian group, meaning that the result is the same regardless of the order in which its elements are applied. QFTs can also be built on non-Abelian groups, giving rise to non-Abelian gauge theories (also known as Yang-Mills theories).[1] Quantum chromodynamics, which describes the strong interaction, is a non-Abelian gauge theory with an SU(3) gauge symmetry. It contains three Dirac fields ψ_i, i = 1,2,3 representing quark fields as well as eight vector fields A^a_μ, a = 1,...,8 representing gluon fields, which are the SU(3) gauge bosons.[1] The QCD Lagrangian density is:[1]

ℒ = ψ̄_i (iγ^μ (D_μ)_{ij} − m δ_{ij}) ψ_j − ¼ F^a_{μν} F^{a,μν},

where D_μ is the gauge covariant derivative:

D_μ = ∂_μ − i g t^a A^a_μ,

where g is the coupling constant, t^a are the eight generators of SU(3) in the fundamental representation (3×3 matrices), F^a_{μν} = ∂_μA^a_ν − ∂_νA^a_μ + g f^{abc} A^b_μ A^c_ν, and f^{abc} are the structure constants of SU(3). Repeated indices i, j, a are implicitly summed over following Einstein notation. This Lagrangian is invariant under the transformation:

ψ_i(x) → U_{ij}(x) ψ_j(x),

where U(x) is an element of SU(3) at every spacetime point x:

U(x) = e^{i α^a(x) t^a}.

The preceding discussion of symmetries is on the level of the Lagrangian. In other words, these are "classical" symmetries. After quantisation, some theories will no longer exhibit their classical symmetries, a phenomenon called anomaly. For instance, in the path integral formulation, despite the invariance of the Lagrangian density under a certain local transformation of the fields, the measure of the path integral may change.[19] For a theory describing nature to be consistent, it must not contain any anomaly in its gauge symmetry. The Standard Model of elementary particles is a gauge theory based on the group SU(3) × SU(2) × U(1), in which all anomalies exactly cancel.[1] The theoretical foundation of general relativity, the equivalence principle, can also be understood as a form of gauge symmetry, making general relativity a gauge theory based on the Lorentz group.[25]
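The group-theoretic statements above can be checked numerically. A minimal sketch (the Gell-Mann matrices below are the standard explicit basis for the SU(3) generators; the gauge-function values alpha are arbitrary illustrative numbers) verifying that exp(i α^a t^a) is unitary with unit determinant, i.e. an element of SU(3):

    import numpy as np
    from scipy.linalg import expm

    # Gell-Mann matrices; the generators are t^a = lambda^a / 2.
    l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
    l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
    l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
    l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
    l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
    l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
    l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
    l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)
    t = [m / 2 for m in (l1, l2, l3, l4, l5, l6, l7, l8)]

    alpha = np.random.default_rng(0).normal(size=8)  # gauge function at one point x
    U = expm(1j * sum(a * g for a, g in zip(alpha, t)))

    print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary
    print(np.isclose(np.linalg.det(U), 1.0))       # True: det U = 1, so U is in SU(3)

Because the generators are traceless and Hermitian, this holds for any choice of α^a, which is the content of the statement that the transformations form the group SU(3) at every spacetime point.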
Noether's theorem states that every continuous symmetry, i.e. one whose parameter in the symmetry transformation is continuous rather than discrete, leads to a corresponding conservation law.[1][19] For example, the U(1) symmetry of QED implies charge conservation.[26]

Gauge transformations do not relate distinct quantum states. Rather, they relate two equivalent mathematical descriptions of the same quantum state. As an example, the photon field A_μ, being a four-vector, has four apparent degrees of freedom, but the actual state of a photon is described by its two degrees of freedom corresponding to the polarisation. The remaining two degrees of freedom are said to be "redundant" -- apparently different ways of writing A_μ can be related to each other by a gauge transformation and in fact describe the same state of the photon field. In this sense, gauge invariance is not a "real" symmetry, but a reflection of the "redundancy" of the chosen mathematical description.[19]

To account for the gauge redundancy in the path integral formulation, one must perform the so-called Faddeev-Popov gauge fixing procedure. In non-Abelian gauge theories, such a procedure introduces new fields called "ghosts". Particles corresponding to the ghost fields are called ghost particles, which cannot be detected externally.[1] A more rigorous generalisation of the Faddeev-Popov procedure is given by BRST quantisation.[1]

Spontaneous symmetry breaking

Spontaneous symmetry breaking is a mechanism whereby the symmetry of the Lagrangian is violated by the system described by it.[1]

To illustrate the mechanism, consider a linear sigma model containing N real scalar fields, described by the Lagrangian density:

ℒ = ½(∂_μ φ^i)(∂^μ φ^i) + ½μ² φ^i φ^i − (λ/4)(φ^i φ^i)²,

where μ and λ are real parameters. The theory admits an O(N) global symmetry:

φ^i → O^{ij} φ^j,   with O an orthogonal N×N matrix.

The lowest energy state (ground state or vacuum state) of the classical theory is any uniform field φ₀ satisfying

φ₀^i φ₀^i = μ²/λ.

Without loss of generality, let the ground state be in the N-th direction:

φ₀^i = (0, ..., 0, v),   where v = μ/√λ.

The original N fields can be rewritten as:

φ^i(x) = (π¹(x), ..., π^{N−1}(x), v + σ(x)),

and the original Lagrangian density as:

ℒ = ½(∂_μ π^k)(∂^μ π^k) + ½(∂_μ σ)(∂^μ σ) − ½(2μ²)σ² + (interaction terms cubic and quartic in π^k and σ),

where k = 1, ..., N−1. The original O(N) global symmetry is no longer manifest, leaving only the subgroup O(N−1). The larger symmetry before spontaneous symmetry breaking is said to be "hidden" or spontaneously broken.[1]

In the QFT of ferromagnetism, spontaneous symmetry breaking can explain the alignment of magnetic dipoles at low temperatures.[19] In the Standard Model of elementary particles, the W and Z bosons, which would otherwise be massless as a result of gauge symmetry, acquire mass through the spontaneous breaking of the electroweak symmetry by the Higgs field, a process called the Higgs mechanism.[1]

Goldstone's theorem states that under spontaneous symmetry breaking, every broken continuous symmetry corresponds to a massless field called the Goldstone boson. In the above example, O(N) has N(N−1)/2 continuous symmetries (the dimension of its Lie algebra), while O(N−1) has (N−1)(N−2)/2. The number of broken symmetries is their difference, N−1, which corresponds to the N−1 massless fields π^k.[1]
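Goldstone's counting can be verified directly: the mass-squared matrix of the fields is the Hessian of the potential V(φ) = −½μ²(φ·φ) + (λ/4)(φ·φ)² at the minimum, and it should have exactly N − 1 vanishing eigenvalues. A minimal sketch for N = 4 with illustrative parameter values:

    import numpy as np

    N, mu, lam = 4, 1.0, 0.5
    v = mu / np.sqrt(lam)
    phi0 = np.zeros(N)
    phi0[-1] = v  # ground state pointing in the N-th direction

    # Analytic Hessian of V at phi0:
    # d2V/dphi_i dphi_j = (-mu^2 + lam * phi.phi) * delta_ij + 2 lam phi_i phi_j
    hess = (-mu**2 + lam * phi0 @ phi0) * np.eye(N) + 2 * lam * np.outer(phi0, phi0)

    print(np.round(np.linalg.eigvalsh(hess), 10))
    # [0. 0. 0. 2.] -- N-1 = 3 massless Goldstone modes, one sigma mode of mass^2 = 2 mu^2

The first bracketed term vanishes at the minimum (where φ·φ = μ²/λ), so the only non-zero eigenvalue, 2μ², belongs to the radial σ direction.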
Supersymmetry

All experimentally known symmetries in nature relate bosons to bosons and fermions to fermions. Theorists have hypothesised the existence of a type of symmetry, called supersymmetry, that relates bosons and fermions.[1][19]

The Standard Model obeys Poincaré symmetry, whose generators are the spacetime translations P^μ and the Lorentz transformations J^{μν}.[27]:58-60 In addition to these generators, supersymmetry in (3+1)-dimensions includes additional generators Q_α, called supercharges, which themselves transform as Weyl fermions.[1][19] The symmetry group generated by all these generators is known as the super-Poincaré group. In general there can be more than one set of supersymmetry generators, Q_α^I, I = 1, ..., N, which generate the corresponding N = 1 supersymmetry, N = 2 supersymmetry, and so on.[1][19] Supersymmetry can also be constructed in other dimensions,[28] most notably in (1+1) dimensions for its application in superstring theory.[29]

The Lagrangian of a supersymmetric theory must be invariant under the action of the super-Poincaré group.[19] Examples of such theories include: the Minimal Supersymmetric Standard Model (MSSM), N = 4 supersymmetric Yang-Mills theory,[19] and superstring theory. In a supersymmetric theory, every fermion has a bosonic superpartner and vice versa.[19]

If supersymmetry is promoted to a local symmetry, then the resultant gauge theory is an extension of general relativity called supergravity.[30]

Supersymmetry is a potential solution to many current problems in physics. For example, the hierarchy problem of the Standard Model -- why the mass of the Higgs boson is not radiatively corrected (under renormalisation) to a very high scale such as the grand unified scale or the Planck scale -- can be resolved by relating the Higgs field and its superpartner, the Higgsino. Radiative corrections due to Higgs boson loops in Feynman diagrams are cancelled by corresponding Higgsino loops. Supersymmetry also offers answers to the grand unification of all gauge coupling constants in the Standard Model as well as the nature of dark matter.[1][31]

Nevertheless, as of 2018, experiments have yet to provide evidence for the existence of supersymmetric particles. If supersymmetry were a true symmetry of nature, then it must be a broken symmetry, and the energy of symmetry breaking must be higher than that achievable by present-day experiments.[1][19]

Other spacetimes

The φ⁴ theory, QED, QCD, as well as the whole Standard Model, all assume a (3+1)-dimensional Minkowski space (3 spatial and 1 time dimensions) as the background on which the quantum fields are defined. However, QFT a priori imposes no restriction on the number of dimensions nor the geometry of spacetime.

In condensed matter physics, QFT is used to describe (2+1)-dimensional electron gases.[32] In high energy physics, string theory is a type of (1+1)-dimensional QFT,[19][14] while Kaluza-Klein theory uses gravity in extra dimensions to produce gauge theories in lower dimensions.[19]

In Minkowski space, the flat metric η_{μν} is used to raise and lower spacetime indices in the Lagrangian, e.g.

A_μ A^μ = η_{μν} A^μ A^ν,   ∂_μ φ ∂^μ φ = η^{μν} ∂_μ φ ∂_ν φ,

where η^{μν} is the inverse of η_{μν} satisfying η^{μρ} η_{ρν} = δ^μ_ν. For QFTs in curved spacetime on the other hand, a general metric (such as the Schwarzschild metric describing a black hole) is used:

A_μ A^μ = g_{μν} A^μ A^ν,   ∂_μ φ ∂^μ φ = g^{μν} ∂_μ φ ∂_ν φ,

where g^{μν} is the inverse of g_{μν}. For a real scalar field, the Lagrangian density in a general spacetime background is

ℒ = √(|g|) (½ g^{μν} ∇_μ φ ∇_ν φ − ½ m² φ²),

where g = det(g_{μν}) and ∇_μ denotes the covariant derivative.[33] The Lagrangian of a QFT, hence its calculational results and physical predictions, depends on the geometry of the spacetime background.
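Raising and lowering indices with the flat metric is mechanical enough to demonstrate directly. A minimal sketch (the four-vector components are arbitrary illustrative values):

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+, -, -, -)
    eta_inv = np.linalg.inv(eta)            # numerically equal to eta itself

    p_up = np.array([2.0, 0.3, 0.4, 0.5])   # components p^mu
    p_down = eta @ p_up                     # lowered index: p_mu = eta_{mu nu} p^nu

    print(np.allclose(eta_inv @ eta, np.eye(4)))  # inverse relation gives delta^mu_nu
    print(p_down @ p_up)                          # invariant p_mu p^mu = 4 - 0.5 = 3.5

For a curved background the only change is replacing eta by the position-dependent matrix g_{μν}(x), which is why the Lagrangian, and with it every prediction, inherits a dependence on the spacetime geometry.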
Topological quantum field theory

The correlation functions and physical predictions of a QFT depend on the spacetime metric g_{μν}. For a special class of QFTs called topological quantum field theories (TQFTs), all correlation functions are independent of continuous changes in the spacetime metric.[34]:36 QFTs in curved spacetime generally change according to the geometry (local structure) of the spacetime background, while TQFTs are invariant under spacetime diffeomorphisms but are sensitive to the topology (global structure) of spacetime. This means that all calculational results of TQFTs are topological invariants of the underlying spacetime. Chern-Simons theory is an example of TQFT. Applications of TQFT include the fractional quantum Hall effect and topological quantum computers.[35]:1-5

Perturbative and non-perturbative methods

Using perturbation theory, the total effect of a small interaction term can be approximated order by order by a series expansion in the number of virtual particles participating in the interaction. Every term in the expansion may be understood as one possible way for (physical) particles to interact with each other via virtual particles, expressed visually using a Feynman diagram. The electromagnetic force between two electrons in QED is represented (to first order in perturbation theory) by the propagation of a virtual photon. In a similar manner, the W and Z bosons carry the weak interaction, while gluons carry the strong interaction. The interpretation of an interaction as a sum of intermediate states involving the exchange of various virtual particles only makes sense in the framework of perturbation theory. In contrast, non-perturbative methods in QFT treat the interacting Lagrangian as a whole without any series expansion. Instead of particles that carry interactions, these methods have spawned such concepts as the 't Hooft-Polyakov monopole, the domain wall, the flux tube, and the instanton.[8]

Mathematical rigour

In spite of its overwhelming success in particle physics and condensed matter physics, QFT itself lacks a formal mathematical foundation. For example, according to Haag's theorem, there does not exist a well-defined interaction picture for QFT, which implies that perturbation theory of QFT, which underlies the entire Feynman diagram method, is fundamentally not rigorous.[36]

Since the 1950s,[37] theoretical physicists and mathematicians have attempted to organise all QFTs into a set of axioms, in order to establish the existence of concrete models of relativistic QFT in a mathematically rigorous way and to study their properties. This line of study is called constructive quantum field theory, a subfield of mathematical physics,[38]:2 which has led to such results as the CPT theorem, the spin-statistics theorem, and Goldstone's theorem.[37]

Compared to ordinary QFT, topological quantum field theory and conformal field theory are better supported mathematically -- both can be classified in the framework of representations of cobordisms.[39]

Algebraic quantum field theory is another approach to the axiomatisation of QFT, in which the fundamental objects are local operators and the algebraic relations between them.
Axiomatic systems following this approach include the Wightman axioms and the Haag-Kastler axioms.[38] One way to construct theories satisfying the Wightman axioms is to use the Osterwalder-Schrader axioms, which give the necessary and sufficient conditions for a real time theory to be obtained from an imaginary time theory by analytic continuation (Wick rotation).[38]

Yang-Mills existence and mass gap, one of the Millennium Prize Problems, concerns the well-defined existence of Yang-Mills theories as set out by the above axioms. The full problem statement is as follows:[40] "Prove that for any compact simple gauge group G, a non-trivial quantum Yang-Mills theory exists on R⁴ and has a mass gap Δ > 0."

References

1. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay Peskin, M.; Schroeder, D. (1995). An Introduction to Quantum Field Theory. Westview Press. ISBN 978-0-201-50397-5. 2. ^ a b c Hobson, Art (2013). "There are no particles, there are only fields". American Journal of Physics. 81 (211): 211-223. arXiv:1204.4616. Bibcode:2013AmJPh..81..211H. doi:10.1119/1.4789885. 3. ^ a b c d e f g h i j k l m n o p Weinberg, Steven (1977). "The Search for Unity: Notes for a History of Quantum Field Theory". Daedalus. 106 (4): 17-35. JSTOR 20024506. 4. ^ John L. Heilbron (14 February 2003). The Oxford Companion to the History of Modern Science. Oxford University Press. ISBN 978-0-19-974376-6. 5. ^ Joseph John Thomson (1893). Notes on Recent Researches in Electricity and Magnetism: Intended as a Sequel to Professor Clerk-Maxwell's 'Treatise on Electricity and Magnetism'. Dawsons. 6. ^ a b c d e f g h i j k l m Weisskopf, Victor (November 1981). "The development of field theory in the last 50 years". Physics Today. 34 (11): 69-85. Bibcode:1981PhT....34k..69W. doi:10.1063/1.2914365. 7. ^ Werner Heisenberg (1999). Physics and Philosophy: The Revolution in Modern Science. Prometheus Books. ISBN 978-1-57392-694-2. 8. ^ a b c d e f g h i j Shifman, M. (2012). Advanced Topics in Quantum Field Theory. Cambridge University Press. ISBN 978-0-521-19084-8. 9. ^ a b c d 't Hooft, Gerard (2015-03-17). "The Evolution of Quantum Field Theory". The Standard Theory of Particle Physics. Advanced Series on Directions in High Energy Physics. 26. pp. 1-27. arXiv:1503.05007. Bibcode:2016stpp.conf....1T. doi:10.1142/9789814733519_0001. ISBN 978-981-4733-50-2. 10. ^ Yang, C. N.; Mills, R. L. (1954-10-01). "Conservation of Isotopic Spin and Isotopic Gauge Invariance". Physical Review. 96 (1): 191-195. Bibcode:1954PhRv...96..191Y. doi:10.1103/PhysRev.96.191. 11. ^ a b c Coleman, Sidney (1979-12-14). "The 1979 Nobel Prize in Physics". Science. 206 (4424): 1290-1292. Bibcode:1979Sci...206.1290C. doi:10.1126/science.206.4424.1290. JSTOR 1749117. PMID 17799637. 12. ^ Sutton, Christine. "Standard model". britannica.com. Encyclopædia Britannica. 13. ^ Kibble, Tom W. B. (2014-12-12). "The Standard Model of Particle Physics". arXiv:1412.4094 [physics.hist-ph]. 14. ^ a b c Polchinski, Joseph (2005). String Theory. 1. Cambridge University Press. ISBN 978-0-521-67227-6. 15. ^ Schwarz, John H. (2012-01-04). "The Early History of String Theory and Supersymmetry". arXiv:1201.0981 [physics.hist-ph]. 16. ^ "Common Problems in Condensed Matter and High Energy Physics" (PDF). science.energy.gov. Office of Science, U.S. Department of Energy. 2015-02-02. 17. ^ a b Wilczek, Frank (2016-04-19). "Particle Physics and Condensed Matter: The Saga Continues". Physica Scripta. 2016 (T168): 014003. arXiv:1604.05669. Bibcode:2016PhST..168a4003W. doi:10.1088/0031-8949/T168/1/014003. 18. ^ a b Tong 2015, Chapter 1 19. ^ a b c d e f g h i j k l m n o p q r s t Zee, A. (2010). Quantum Field Theory in a Nutshell. Princeton University Press. ISBN 978-0-691-01019-9. 20. ^ Fock, V. (1932-03-10). "Konfigurationsraum und zweite Quantelung". Zeitschrift für Physik (in German). 75 (9-10): 622-647. Bibcode:1932ZPhy...75..622F. doi:10.1007/BF01344458. 21. ^ Becker, Katrin; Becker, Melanie; Schwarz, John H. (2007). String Theory and M-Theory. Cambridge University Press. p. 36. ISBN 978-0-521-86069-7. 22. ^ Fujita, Takehisa (2008-02-01). "Physics of Renormalization Group Equation in QED". arXiv:hep-th/0606101. 23. ^ Aharony, Ofer; Gur-Ari, Guy; Klinghoffer, Nizan (2015-05-19). "The Holographic Dictionary for Beta Functions of Multi-trace Coupling Constants". Journal of High Energy Physics. 2015 (5): 31. arXiv:1501.06664v3. Bibcode:2015JHEP...05..031A. doi:10.1007/JHEP05(2015)031. 24. ^ Kovacs, Stefano (1999-08-26). "N = 4 supersymmetric Yang-Mills theory and the AdS/SCFT correspondence". arXiv:hep-th/9908171. 25. ^ Veltman, M. J. G. (1976). Methods in Field Theory, Proceedings of the Les Houches Summer School, Les Houches, France, 1975. 26. ^ Brading, Katherine A. (March 2002). "Which symmetry? Noether, Weyl, and conservation of electric charge". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 33 (1): 3-22. Bibcode:2002SHPMP..33....3B. doi:10.1016/S1355-2198(01)00033-8. 27. ^ Weinberg, Steven (1995). The Quantum Theory of Fields. Cambridge University Press. ISBN 978-0-521-55001-7. 28. ^ de Wit, Bernard; Louis, Jan (1998-02-18). "Supersymmetry and Dualities in various dimensions". arXiv:hep-th/9801132. 29. ^ Polchinski, Joseph (2005). String Theory. 2. Cambridge University Press. ISBN 978-0-521-67228-3. 30. ^ Nath, P.; Arnowitt, R. (1975). "Generalized Super-Gauge Symmetry as a New Framework for Unified Gauge Theories". Physics Letters B. 56 (2): 177. Bibcode:1975PhLB...56..177N. doi:10.1016/0370-2693(75)90297-x. 31. ^ Munoz, Carlos (2017-01-18). "Models of Supersymmetry for Dark Matter". European Physical Journal Web of Conferences. 136: 01002. arXiv:1701.05259. Bibcode:2017EPJWC.13601002M. doi:10.1051/epjconf/201713601002. 32. ^ Morandi, G.; Sodano, P.; Tagliacozzo, A.; Tognetti, V. (2000). Field Theories for Low-Dimensional Condensed Matter Systems. Springer. ISBN 978-3-662-04273-1. 33. ^ Parker, Leonard E.; Toms, David J. (2009). Quantum Field Theory in Curved Spacetime. Cambridge University Press. p. 43. ISBN 978-0-521-87787-9. 34. ^ Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008-12-11). "Undergraduate Lecture Notes in Topological Quantum Field Theory". arXiv:0810.0344v5 [math-ph]. 35. ^ Carqueville, Nils; Runkel, Ingo (2017-05-16). "Introductory Lectures on Topological Quantum Field Theory". arXiv:1705.05734 [math.QA]. 36. ^ Haag, Rudolf (1955). "On Quantum Field Theories" (PDF). Dan Mat Fys Medd. 29 (12). 37. ^ a b Buchholz, Detlev (2000). "Current Trends in Axiomatic Quantum Field Theory". Quantum Field Theory. 558: 43-64. arXiv:hep-th/9811233v2. Bibcode:2000LNP...558...43B. 38. ^ a b c Summers, Stephen J. (2016-03-31). "A Perspective on Constructive Quantum Field Theory". arXiv:1203.3991v2 [math-ph]. 39. ^ Sati, Hisham; Schreiber, Urs (2012-01-06). "Survey of mathematical foundations of QFT and perturbative string theory". arXiv:1109.0955v2 [math-ph]. 40. ^ Jaffe, Arthur; Witten, Edward. "Quantum Yang-Mills Theory" (PDF). Clay Mathematics Institute.
Atomic orbital

An atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom.[1] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. These functions may serve as a three-dimensional graph of an electron's likely location. The term may thus refer directly to the physical region defined by the function where the electron is likely to be.[2] Specifically, atomic orbitals are the possible quantum states of an individual electron in the collection of electrons around a single atom, as described by the orbital function.

Despite the obvious analogy to planets revolving around the Sun, electrons cannot be described as solid particles, and so atomic orbitals rarely, if ever, resemble a planet's elliptical path. A more accurate analogy might be that of a large and often oddly-shaped atmosphere (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this atmosphere only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus, so that the resulting collection (sometimes termed the atom's "electron cloud"[3]) tends toward a generally spherical zone of probability describing where the atom's electrons will be found.

Electron atomic and molecular orbitals. The chart of orbitals (left) is arranged by increasing energy (see Madelung rule). Note that atomic orbitals are functions of three variables (two angles, and the distance r from the nucleus). These images are faithful to the angular component of the orbital, but not entirely representative of the orbital as a whole.

The idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued in 1913 by Niels Bohr,[4] and the Japanese physicist Hantaro Nagaoka had published an orbit-based hypothesis for electronic behavior as early as 1904.[5] However, it was not until 1926 that the solution of the Schrödinger equation for electron waves in atoms provided the functions for the modern orbitals.[6] Because of the difference from classical mechanical orbits, the term "orbit" for electrons in atoms has been replaced with the term orbital, first coined by chemist Robert Mulliken in 1932.[7]

Atomic orbitals are typically described as "hydrogen-like" (meaning one-electron) wave functions over space, categorized by the quantum numbers n, ℓ, and mℓ, which correspond to the electron's energy, angular momentum, and an angular momentum direction, respectively. Each orbital, defined by a different set of quantum numbers and containing a maximum of two electrons, is also known by the classical names used in written electron configurations. These classical orbital names (s, p, d, f) are derived from the characteristics of their spectroscopic lines: sharp, principal, diffuse, and fundamental, the rest being named in alphabetical order.
[8][9]

From about 1920, even before the advent of modern quantum mechanics, the aufbau principle (construction principle), according to which atoms are built up of pairs of electrons arranged in simple repeating patterns of increasing odd numbers (1, 3, 5, 7, ...), had been used by Niels Bohr and others to infer the presence of something like atomic orbitals within the total electron configuration of complex atoms.

In the mathematics of atomic physics, it is also often convenient to reduce the electron functions of complex systems into combinations of the simpler atomic orbitals. Although each electron in a multi-electron atom is not confined to one of the "one-or-two-electron atomic orbitals" in the idealized picture above, the electron wave function may still be broken down into combinations which bear the imprint of atomic orbitals; as though, in some sense, the electron cloud of a many-electron atom is still partly "composed" of atomic orbitals, each containing only one or two electrons. The physicality of this view is best illustrated in the repetitive nature of the chemical and physical behavior of elements, which results in the natural ordering known from the 19th century as the periodic table of the elements. In this ordering, the repeating periodicity of 2, 6, 10, and 14 elements in the periodic table corresponds with the total number of electrons which occupy a complete set of s, p, d and f atomic orbitals, respectively.

Orbital names

Orbitals are given names of the form n(type)^y, for example 1s² or 2p⁴, where n is the energy level (principal quantum number), "type" is the lower-case letter denoting the subshell, and y is the number of electrons in that subshell.

Formal quantum mechanical definition

Connection to uncertainty relation

In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the mean energy of the probability cloud of the electron's wave packet which surrounded the atom. Although Heisenberg used infinite sets of positions for the electron in his matrices, this does not mean that the electron could be anywhere in the universe. Rather, there are several laws that show the electron must be in one localized probability distribution. An electron is described by its energy in Bohr's atom, which was carried over to matrix mechanics. Therefore, an electron in a certain n-sphere had to be within a certain range from the nucleus depending upon its energy. This restricts its location.

Hydrogen-like atoms

Qualitative characterization

Limitations on the quantum numbers

The azimuthal quantum number ℓ is a non-negative integer. Within a shell where n is some integer n₀, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n₀ − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell.

The magnetic quantum number mℓ is also always an integer. Within a subshell where ℓ is some integer ℓ₀, mℓ ranges thus: −ℓ₀ ≤ mℓ ≤ ℓ₀.

The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of mℓ available in that subshell. Empty cells represent subshells that do not exist.

            ℓ = 0     ℓ = 1
    n = 1   mℓ = 0
    n = 2   mℓ = 0    mℓ = −1, 0, 1

Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'.
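The rules above are easy to enumerate programmatically. A minimal sketch listing each shell's subshells and confirming the n² orbitals (and 2n² electron capacity) per shell:

    LETTERS = "spdfg"

    def subshells(n):
        # All (l, m_l) pairs allowed in shell n: 0 <= l <= n - 1 and -l <= m_l <= l.
        return [(l, m) for l in range(n) for m in range(-l, l + 1)]

    for n in range(1, 5):
        orbitals = subshells(n)
        names = [f"{n}{LETTERS[l]}" for l in range(n)]
        print(n, names, len(orbitals), 2 * len(orbitals))
    # n=1 ['1s'] 1 2
    # n=2 ['2s', '2p'] 4 8
    # n=3 ['3s', '3p', '3d'] 9 18
    # n=4 ['4s', '4p', '4d', '4f'] 16 32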
The shapes of orbitals

The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wavefunction phase.

Generally speaking, the number n determines the size and energy of the orbital for a given nucleus: as n increases, the size of the orbital increases. However, in comparing different elements, the higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the overall size of the whole atom remains very roughly constant, even as the number of electrons in heavier elements (higher Z) increases.

Also in general terms, ℓ determines an orbital's shape, and mℓ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on mℓ also. The three p-orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (sometimes referred to as a dumbbell). The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective values of mℓ.

For each s, p, d, f and g set of orbitals, the set of orbitals which composes it forms a spherically symmetrical set of shapes. For non-s orbitals, which have lobes, the lobes point in directions that fill space as symmetrically as possible given the number of lobes present. For example, the three p orbitals have six lobes, which are oriented to each of the six primary directions of 3-D space; for the 5 d orbitals, there are a total of 18 lobes, of which again six point in primary directions, and the 12 additional lobes fill the 12 gaps which exist between each pair of these 6 primary axes.

Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p) with an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital.

Orbitals table

This table shows all orbital configurations for the real hydrogen-like wave functions up to 7s, and therefore covers the simple electronic configuration for all elements in the periodic table up to radium. (In the original, each cell of this table is an image of the corresponding real orbital; the columns are s; pz, px, py; dz², dxz, dyz, dxy, dx²−y²; and fz³, fxz², fyz², fxyz, fz(x²−y²), fx(x²−3y²), fy(3x²−y²), grouped by m = 0, ±1, ±2, ±3, with rows n = 1 through 7.)
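The radial node count mentioned above (n − ℓ − 1 nodes for each orbital) can be checked from the hydrogen-like radial functions, which are standard combinations of an exponential, a power of r, and a generalized Laguerre polynomial. A minimal sketch in atomic units (unnormalized, since normalization does not affect where the function crosses zero):

    import numpy as np
    from scipy.special import genlaguerre

    def radial(n, l, r):
        # Unnormalized hydrogen radial function R_{nl}(r) in atomic units (Z = 1).
        rho = 2.0 * r / n
        return np.exp(-rho / 2) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

    r = np.linspace(1e-6, 60.0, 20000)
    for n, l in [(1, 0), (2, 0), (2, 1), (3, 1), (4, 2)]:
        R = radial(n, l, r)
        nodes = np.count_nonzero(np.diff(np.sign(R)))  # sign changes = radial nodes
        print(f"{n}{'spdf'[l]}: {nodes} radial nodes (expected {n - l - 1})")

Thus 2p has no radial node, 3p has one, and so on, which is the "additional node in each lobe" described above.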
Orbital energy

In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on ℓ. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s-orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher.

The energy sequence of the first 24 subshells is given in the following table. Each cell represents a subshell with n and ℓ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. (Empty cells indicate non-existent sublevels; positions 20 and higher correspond to sublevels that could exist, but which do not hold electrons in any element currently known.)

             s     p     d     f     g
    n = 1    1
    n = 2    2     3
    n = 3    4     5     7
    n = 4    6     8    10    13
    n = 5    9    11    14    17    21
    n = 6   12    15    18    22    26
    n = 7   16    19    23    27    31
    n = 8   20    24    28    32    36

Electron placement and the periodic table

The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell.

Relativistic effects

Examples of significant physical outcomes of relativistic effects include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium (which results from narrowing of the 6s to 5d transition energy to the point that visible light begins to be absorbed).

In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wavefunction of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman; element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z, due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, which result in a potential seen by inner electrons that is effectively less than Z. The critical Z value which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. See Extension of the periodic table beyond the seventh period.
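The energy sequence tabulated above is exactly the Madelung rule: subshells fill in order of increasing n + ℓ, with ties broken in favor of smaller n. A minimal sketch regenerating the ordering (restricted, as in the table, to the s through g columns):

    # Madelung rule: sort by n + l, then by n. Only l <= 4 (s..g) appears in the table.
    subshells = [(n, l) for n in range(1, 13) for l in range(min(n, 5))]
    order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

    for pos, (n, l) in enumerate(order[:36], start=1):
        print(pos, f"{n}{'spdfg'[l]}")
    # 1 1s, 2 2s, 3 2p, 4 3s, 5 3p, 6 4s, 7 3d, ..., 31 7g, 32 8f, 36 8g

The printed positions for the subshells with n ≤ 8 match the table entry by entry (the remaining positions belong to shells beyond n = 8, which the table omits).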
References

1. ^ Milton Orchin, Roger S. Macomber, Allan Pinhas, and R. Marshall Wilson (2005). "Atomic Orbital Theory" 2. ^ Daintith, J. (2004). Oxford Dictionary of Chemistry. New York: Oxford University Press. ISBN 0-19-860918-3. 3. ^ The Feynman Lectures on Physics - The Definitive Edition, Vol 1 lect 6 pg 11. Feynman, Richard; Leighton; Sands. (2006) Addison Wesley ISBN 0-8053-9046-4 4. ^ Bohr, Niels (1913). "On the Constitution of Atoms and Molecules". Philosophical Magazine 26 (1): 476. 5. ^ Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine 7: 445–455. http://www.chemteam.info/Chem-History/Nagaoka-1904.html. 6. ^ Bryson, Bill (2003). A Short History of Nearly Everything. Broadway Books. pp. 141–143. ISBN 0-7679-0818-X. 7. ^ Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Phys. Rev. 41 (1): 49–71. doi:10.1103/PhysRev.41.49. http://prola.aps.org/abstract/PR/v41/i1/p49_1. 8. ^ Griffiths, David (1995). Introduction to Quantum Mechanics. Prentice Hall. pp. 190–191. ISBN 0-13-124405-1. 9. ^ Levine, Ira (2000). Quantum Chemistry (5 ed.). Prentice Hall. pp. 144–145. ISBN 0-13-685512-1.

Further reading

• Tipler, Paul; Ralph Llewellyn (2003). Modern Physics (4 ed.). New York: W. H. Freeman and Company. ISBN 0-7167-4345-0.
• Scerri, Eric (2007). The Periodic Table, Its Story and Its Significance. New York: Oxford University Press. ISBN 978-0-19-530573-9.

Electron cloud (Simple English)

In chemistry and nuclear physics, the electron cloud is a way to describe where electrons are when they go around the nucleus of an atom. The electron cloud model is different from the older model by Niels Bohr. Bohr talked about electrons going around the nucleus in a fixed circle, the same way that planets go around the Sun. The electron cloud model says that we cannot know exactly where an electron is, but the electrons are more likely to be in specific areas of an atom. It is the most modern and accepted model of the atom.
Identical particles

Introduction

Identical particles cannot be distinguished by means of any intrinsic properties. This can lead to effects that have no classical analog. Two particles are identical if there are no interactions that can distinguish them. Therefore, a physical observable must be symmetrical with respect to the interchange of any pair of two particles. The time-dependent Schrödinger equation for two identical particles is

iħ ∂Ψ(r₁, r₂, t)/∂t = H(r₁, r₂) Ψ(r₁, r₂, t).

As H(r₁, r₂) = H(r₂, r₁), there are two fundamentally different kinds of solutions, namely Ψ(r₁, r₂) = Ψ(r₂, r₁) and Ψ(r₁, r₂) = −Ψ(r₂, r₁). The symmetric solution Ψ(r₁, r₂) = Ψ(r₂, r₁) describes particles that are called bosons. Particles that are described by the antisymmetric solution Ψ(r₁, r₂) = −Ψ(r₂, r₁) are called fermions. Electrons, protons and neutrons are fermions, and photons and pions are bosons. Atoms, being aggregates of tightly bound particles, are either fermions or bosons. Particles with integer spin are always bosons, while particles with half-integer spin are always fermions.
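The two solution classes can be demonstrated with discrete single-particle states, representing the two-particle wavefunction as a matrix Ψ[i, j] over the positions of particle 1 and particle 2. A minimal sketch (the five-site single-particle states are random illustrative vectors):

    import numpy as np

    rng = np.random.default_rng(1)
    phi_a, phi_b = rng.normal(size=5), rng.normal(size=5)  # two states on 5 sites

    boson = np.outer(phi_a, phi_b) + np.outer(phi_b, phi_a)    # symmetric
    fermion = np.outer(phi_a, phi_b) - np.outer(phi_b, phi_a)  # antisymmetric

    print(np.allclose(boson, boson.T))       # True:  Psi(r1, r2) =  Psi(r2, r1)
    print(np.allclose(fermion, -fermion.T))  # True:  Psi(r1, r2) = -Psi(r2, r1)

    # The antisymmetric combination of two identical states vanishes.
    print(np.allclose(np.outer(phi_a, phi_a) - np.outer(phi_a, phi_a).T, 0))  # True

The last line is the Pauli exclusion principle in miniature: two fermions cannot occupy the same single-particle state.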
Atmospheric entry

Mars Exploration Rover (MER) aeroshell, artistic rendition

Atmospheric entry is the movement of an object into and through the gases of a planet's atmosphere from outer space. There are two main types of atmospheric entry: uncontrolled entry, such as the entry of celestial objects, space debris or bolides, and controlled entry, such as the entry (or reentry) of technology capable of being navigated or following a predetermined course. Atmospheric drag and aerodynamic heating can cause atmospheric breakup capable of completely disintegrating smaller objects. These forces may cause objects with lower compressive strength to explode.

For Earth, atmospheric entry occurs above the Kármán line at an altitude of more than 100 km above the surface, while Venus atmospheric entry occurs at 250 km and Mars atmospheric entry at about 80 km. In uncontrolled entry, objects accelerate through the atmosphere at extreme velocities under the influence of Earth's gravity. Most controlled objects enter at hypersonic speeds due to their suborbital (e.g., ICBM reentry vehicles), orbital (e.g., the Space Shuttle), or unbounded (e.g., meteors) trajectories. Various advanced technologies have been developed to enable atmospheric reentry and flight at extreme velocities. An alternative low-velocity method of controlled atmospheric entry is buoyancy,[1] which is suitable for planetary entry where thick atmospheres, strong gravity or both factors complicate high-velocity hyperbolic entry, such as the atmospheres of Venus, Titan and the gas giants.[2]

Apollo Command Module flying at a high angle of attack for lifting entry, artistic rendition

The concept of the ablative heat shield was described as early as 1920 by Robert Goddard: "In the case of meteors, which enter the atmosphere with speeds as high as 30 miles per second (48 km/s), the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor."[3]

Practical development of reentry systems began as the range and reentry velocity of ballistic missiles increased. For early short-range missiles, like the V-2, stabilization and aerodynamic stress were important issues (many V-2s broke apart during reentry), but heating was not a serious problem. Medium-range missiles like the Soviet R-5, with a 1200 km range, required ceramic composite heat shielding on separable reentry vehicles (it was no longer possible for the entire rocket structure to survive reentry). The first ICBMs, with ranges of 8000 to 12,000 km, were only possible with the development of modern ablative heat shields and blunt-shaped vehicles. In the USA, this technology was pioneered by H. Julian Allen at Ames Research Center.[4]

Terminology, definitions and jargon

Over the decades since the 1950s, a rich technical jargon has grown around the engineering of vehicles designed to enter planetary atmospheres.
It is recommended that the reader review the jargon glossary before continuing with this article on atmospheric reentry. When atmospheric entry is part of a spacecraft landing or recovery, particularly on a planetary body other than Earth, entry is part of a phase referred to as "entry, descent and landing", or EDL.[5]

Blunt body entry vehicles

Various reentry shapes (NASA) using shadowgraphs to show high-velocity flow

These four shadowgraph images represent early reentry-vehicle concepts. A shadowgraph is a process that makes visible the disturbances that occur in a fluid flow at high velocity, in which light passing through a flowing fluid is refracted by the density gradients in the fluid, resulting in bright and dark areas on a screen placed behind the fluid.

In the United States, H. Julian Allen and A. J. Eggers, Jr. of the National Advisory Committee for Aeronautics (NACA) made the counterintuitive discovery in 1951[6] that a blunt shape (high drag) made the most effective heat shield. From simple engineering principles, Allen and Eggers showed that the heat load experienced by an entry vehicle was inversely proportional to the drag coefficient, i.e. the greater the drag, the less the heat load. If the reentry vehicle is made blunt, air cannot "get out of the way" quickly enough, and acts as an air cushion to push the shock wave and heated shock layer forward (away from the vehicle). Since most of the hot gases are no longer in direct contact with the vehicle, the heat energy stays in the shocked gas and simply moves around the vehicle to later dissipate into the atmosphere.

Entry vehicle shapes

Main article: Nose cone design

There are several basic shapes used in designing entry vehicles:

Sphere or spherical section

The simplest axisymmetric shape is the sphere or spherical section.[8] This can either be a complete sphere or a spherical section forebody with a converging conical afterbody. The aerodynamics of a sphere or spherical section are easy to model analytically using Newtonian impact theory. Likewise, the spherical section's heat flux can be accurately modeled with the Fay-Riddell equation.[9] The static stability of a spherical section is assured if the vehicle's center of mass is upstream from the center of curvature (dynamic stability is more problematic). Pure spheres have no lift. However, by flying at an angle of attack, a spherical section has modest aerodynamic lift, thus providing some cross-range capability and widening its entry corridor. In the late 1950s and early 1960s, high-speed computers were not yet available and computational fluid dynamics was still embryonic. Because the spherical section was amenable to closed-form analysis, that geometry became the default for conservative design. Consequently, manned capsules of that era were based upon the spherical section.

Pure spherical entry vehicles were used in the early Soviet Vostok and Voskhod and in Soviet Mars and Venera descent vehicles. The Apollo Command/Service Module used a spherical section forebody heatshield with a converging conical afterbody. It flew a lifting entry with a hypersonic trim angle of attack of −27° (0° is blunt-end first) to yield an average L/D (lift-to-drag ratio) of 0.368.[10] This angle of attack was achieved by precisely offsetting the vehicle's center of mass from its axis of symmetry. Other examples of the spherical section geometry in manned capsules are Soyuz/Zond, Gemini and Mercury. Even these small amounts of lift allow trajectories that have very significant effects on peak g-force (reducing g-force from 8-9 g for a purely ballistic (slowed only by drag) trajectory to 4-5 g) as well as greatly reducing the peak reentry heat.[11]
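Newtonian impact theory, mentioned above, is what makes the closed-form analysis possible: the local pressure coefficient is Cp = 2 sin²θ, with θ the angle between the free stream and the local surface. A minimal sketch integrating this over a hemispherical forebody (sketch only; practical work uses modified Newtonian theory, with the maximum pressure coefficient taken from normal-shock relations):

    import numpy as np
    from scipy.integrate import quad

    # Hemisphere, flow along the axis. At polar angle phi measured from the
    # stagnation point, Cp = 2 cos^2(phi). Referencing the axial pressure force
    # to the frontal area pi R^2 gives:
    #   CD = Int_0^{pi/2} 2 cos^2(phi) * 2 sin(phi) cos(phi) dphi
    cd, _ = quad(lambda phi: 4 * np.cos(phi)**3 * np.sin(phi), 0.0, np.pi / 2)
    print(cd)  # 1.0: the classic Newtonian drag coefficient of a hemispherical nose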
Galileo Probe during final assembly

Sphere-cone

The sphere-cone is a spherical section with a frustum or blunted cone attached. The sphere-cone's dynamic stability is typically better than that of a spherical section. With a sufficiently small half-angle and properly placed center of mass, a sphere-cone can provide aerodynamic stability from Keplerian entry to surface impact. (The "half-angle" is the angle between the cone's axis of rotational symmetry and its outer surface, and thus half the angle made by the cone's surface edges.)

The original American sphere-cone aeroshell was the Mk-2 RV (reentry vehicle), which was developed in 1955 by the General Electric Corp. The Mk-2's design was derived from blunt-body theory and used a radiatively cooled thermal protection system (TPS) based upon a metallic heat shield (the different TPS types are described later in this article). The Mk-2 had significant defects as a weapon delivery system, i.e., it loitered too long in the upper atmosphere due to its lower ballistic coefficient and also trailed a stream of vaporized metal, making it very visible to radar. These defects made the Mk-2 overly susceptible to anti-ballistic missile (ABM) systems. Consequently, an alternative sphere-cone RV to the Mk-2 was developed by General Electric.

Mk-6 RV, Cold War weapon and ancestor to most of NASA's entry vehicles

This new RV was the Mk-6, which used a non-metallic ablative TPS (nylon phenolic). This new TPS was so effective as a reentry heat shield that significantly reduced bluntness was possible. However, the Mk-6 was a huge RV with an entry mass of 3360 kg, a length of 3.1 meters and a half-angle of 12.5°. Subsequent advances in nuclear weapon and ablative TPS design allowed RVs to become significantly smaller with a further reduced bluntness ratio compared to the Mk-6. Since the 1960s, the sphere-cone has become the preferred geometry for modern ICBM RVs, with typical half-angles being between 10° and 11°.

"Discoverer" type reconnaissance satellite film Recovery Vehicle (RV)

Reconnaissance satellite RVs (recovery vehicles) also used a sphere-cone shape and were the first American example of a non-munition entry vehicle (Discoverer-I, launched on 28 February 1959). The sphere-cone was later used for space exploration missions to other celestial bodies or for return from open space; e.g., the Stardust probe. Unlike with military RVs, the advantage of the blunt body's lower TPS mass remained with space exploration entry vehicles like the Galileo Probe with a half-angle of 45° or the Viking aeroshell with a half-angle of 70°. Space exploration sphere-cone entry vehicles have landed on the surface or entered the atmospheres of Mars, Venus, Jupiter and Titan.

Biconic

The biconic is a sphere-cone with an additional frustum attached. The biconic offers a significantly improved L/D ratio. A biconic designed for Mars aerocapture typically has an L/D of approximately 1.0, compared to an L/D of 0.368 for the Apollo-CM. The higher L/D makes a biconic shape better suited for transporting people to Mars due to the lower peak deceleration. Arguably, the most significant biconic ever flown was the Advanced Maneuverable Reentry Vehicle (AMaRV). Four AMaRVs were made by the McDonnell-Douglas Corp.
and represented a significant leap in RV sophistication. Three of the AMaRVs were launched by Minuteman-1 ICBMs on 20 December 1979, 8 October 1980 and 4 October 1981. AMaRV had an entry mass of approximately 470 kg, a nose radius of 2.34 cm, a forward-frustum half-angle of 10.4°, an inter-frustum radius of 14.6 cm, an aft-frustum half-angle of 6°, and an axial length of 2.079 meters. No accurate diagram or picture of AMaRV has ever appeared in the open literature. However, a schematic sketch of an AMaRV-like vehicle along with trajectory plots showing hairpin turns has been published.[12]

The DC-X, shown during its first flight, was a prototype single-stage-to-orbit vehicle, and used a biconic shape similar to AMaRV.

Opportunity rover's heat shield lying inverted on the surface of Mars

AMaRV's attitude was controlled through a split body flap (also called a "split-windward flap") along with two yaw flaps mounted on the vehicle's sides. Hydraulic actuation was used for controlling the flaps. AMaRV was guided by a fully autonomous navigation system designed for evading anti-ballistic missile (ABM) interception. The McDonnell Douglas DC-X (also a biconic) was essentially a scaled-up version of AMaRV. AMaRV and the DC-X also served as the basis for an unsuccessful proposal for what eventually became the Lockheed Martin X-33.

Non-axisymmetric shapes

Non-axisymmetric shapes have been used for manned entry vehicles. One example is the winged orbit vehicle that uses a delta wing for maneuvering during descent, much like a conventional glider. This approach has been used by the American Space Shuttle and the Soviet Buran. The lifting body is another entry vehicle geometry and was used with the X-23 PRIME (Precision Recovery Including Maneuvering Entry) vehicle.

The FIRST (Fabrication of Inflatable Re-entry Structures for Test) system was an Aerojet proposal for an inflated-spar Rogallo wing made up from Inconel wire cloth impregnated with silicone rubber and silicon carbide dust. FIRST was proposed in both one-man and six-man versions, used for emergency escape and reentry of stranded space station crews, and was based on an earlier unmanned test program that resulted in a partially successful reentry flight from space (the launcher nose cone fairing hung up on the material, dragging it too low and fast for the thermal protection system (TPS), but otherwise it appears the concept would have worked; even with the fairing dragging it, the test article flew stably on reentry until burn-through).

The proposed MOOSE system would have used a one-man inflatable ballistic capsule as an emergency astronaut entry vehicle. This concept was carried further by the Douglas Paracone project. While these concepts were unusual, the inflated shape on reentry was in fact axisymmetric.

Shock layer gas physics

An approximate rule of thumb used by heat shield designers for estimating peak shock layer temperature is to assume that the air temperature in kelvins is equal to the entry speed in meters per second, a mathematical coincidence. For example, a spacecraft entering the atmosphere at 7.8 km/s would experience a peak shock layer temperature of 7,800 K. This is unexpected, since the kinetic energy increases with the square of the velocity; it can only work out because the specific heat of the gas increases greatly with temperature (unlike the nearly constant specific heat assumed for solids under ordinary conditions).
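The rule of thumb can be sanity-checked against the energy budget. A minimal sketch (the constant cp is the room-temperature value for air, used only to show why a naive perfect-gas estimate overshoots):

    v = 7800.0        # entry speed, m/s
    ke = 0.5 * v**2   # specific kinetic energy, J/kg
    print(ke)         # ~3.0e7 J/kg, i.e. about 30 MJ/kg

    cp_room = 1005.0  # J/(kg K), calorically perfect air
    print(ke / cp_room)  # ~30,000 K from the naive estimate

    # The observed peak shock-layer temperature is instead ~v in kelvins (~7,800 K),
    # because cp rises steeply once vibration, dissociation and ionization absorb energy.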
At typical reentry temperatures, the air in the shock layer is both ionized and dissociated. This chemical dissociation necessitates various physical models to describe the shock layer's thermal and chemical properties. There are four basic physical models of a gas that are important to aeronautical engineers who design heat shields:

Perfect gas model

Almost all aeronautical engineers are taught the perfect (ideal) gas model during their undergraduate education. Most of the important perfect gas equations along with their corresponding tables and graphs are shown in NACA Report 1135.[13] Excerpts from NACA Report 1135 often appear in the appendices of thermodynamics textbooks and are familiar to most aeronautical engineers who design supersonic aircraft. The perfect gas theory is elegant and extremely useful for designing aircraft, but assumes that the gas is chemically inert. From the standpoint of aircraft design, air can be assumed to be inert for temperatures less than 550 K at one atmosphere pressure. The perfect gas theory begins to break down at 550 K and is not usable at temperatures greater than 2,000 K. For temperatures greater than 2,000 K, a heat shield designer must use a real gas model.

Real (equilibrium) gas model

An entry vehicle's pitching moment can be significantly influenced by real-gas effects. Both the Apollo-CM and the Space Shuttle were designed using incorrect pitching moments determined through inaccurate real-gas modelling. The Apollo-CM's trim angle of attack was higher than originally estimated, resulting in a narrower lunar return entry corridor. The actual aerodynamic centre of the Columbia was upstream from the calculated value due to real-gas effects. On Columbia’s maiden flight (STS-1), astronauts John W. Young and Robert Crippen had some anxious moments during reentry when there was concern about losing control of the vehicle.[14]

An equilibrium real-gas model assumes that a gas is chemically reactive, but also assumes all chemical reactions have had time to complete and all components of the gas have the same temperature (this is called thermodynamic equilibrium). When air is processed by a shock wave, it is superheated by compression and chemically dissociates through many different reactions. Direct friction upon the reentry object is not the main cause of shock-layer heating. It is caused mainly by isentropic heating of the air molecules within the compression wave. Friction-based entropy increases of the molecules within the wave also account for some heating.

The distance from the shock wave to the stagnation point on the entry vehicle's leading edge is called the shock wave standoff distance. An approximate rule of thumb for the standoff distance is 0.14 times the nose radius. One can estimate the time of travel for a gas molecule from the shock wave to the stagnation point by assuming a free-stream velocity of 7.8 km/s and a nose radius of 1 meter: the time of travel is about 18 microseconds. This is roughly the time required for shock-wave-initiated chemical dissociation to approach chemical equilibrium in a shock layer for a 7.8 km/s entry into air during peak heat flux. Consequently, as air approaches the entry vehicle's stagnation point, the air effectively reaches chemical equilibrium, thus enabling an equilibrium model to be usable. Away from the stagnation region, however, most of the shock layer between the shock wave and the leading edge of an entry vehicle is chemically reacting and not in a state of equilibrium.
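The standoff numbers quoted above follow from one line of arithmetic; a minimal check with the stated values:

    nose_radius = 1.0              # m
    v_free = 7800.0                # free-stream velocity, m/s
    standoff = 0.14 * nose_radius  # rule of thumb: ~0.14 nose radii
    print(standoff / v_free)       # ~1.8e-5 s, i.e. roughly 18 microseconds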
The Fay-Riddell equation,[9] which is of central importance to modeling heat flux, owes its validity to the stagnation point being in chemical equilibrium. The time required for the shock layer gas to reach equilibrium is strongly dependent upon the shock layer's pressure. For example, in the case of the Galileo Probe's entry into Jupiter's atmosphere, the shock layer was mostly in equilibrium during peak heat flux due to the very high pressures experienced (this is counterintuitive given that the free stream velocity was 39 km/s during peak heat flux).
Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than a perfect gas model. Under a perfect gas model, the ratio of specific heats (also called the "isentropic exponent", adiabatic index, "gamma" or "kappa") is assumed to be constant, along with the gas constant. For a real gas, the ratio of specific heats can vary wildly as a function of temperature. Under a perfect gas model there is an elegant set of equations, called the isentropic chain, for determining the thermodynamic state along a constant-entropy streamline. For a real gas, the isentropic chain is unusable, and a Mollier diagram would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete, with modern heat shield designers using computer programs based upon a digital lookup table (another form of Mollier diagram) or a chemistry-based thermodynamics program.
The chemical composition of a gas in equilibrium at fixed pressure and temperature can be determined through the Gibbs free energy method. Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is found (a Newton-Raphson method is the usual numerical scheme). The database for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program Chemical Equilibrium with Applications (CEA), which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code". CEA is quite accurate up to 10,000 K for planetary atmospheric gases, but unusable beyond 20,000 K (double ionization is not modelled). CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler.

Real (non-equilibrium) gas model[edit]
A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics, but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the Lighthill-Freeman model.[15][16] The Lighthill-Freeman model initially assumes a gas made up of a single diatomic species susceptible to only one chemical formula and its reverse, e.g., N2 → N + N and N + N → N2 (dissociation and recombination). Because of its simplicity, the Lighthill-Freeman model is a useful pedagogical tool, but it is unfortunately too simple for modelling non-equilibrium air.
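To show only the structure of such a model, the sketch below integrates a single relaxation equation of the Lighthill-Freeman form for N2 ⇌ N + N. The characteristic temperature and density are the standard nitrogen values, but the rate constant C and the exponent ETA are placeholders chosen so the example relaxes on a convenient time scale; they are not fitted kinetics data:

    import numpy as np
    from scipy.integrate import solve_ivp

    THETA_D = 113200.0     # characteristic dissociation temperature of N2, K
    RHO_D = 1.3e5          # characteristic density for N2, kg/m^3
    C, ETA = 3.0e19, -1.5  # placeholder rate constant and temperature exponent

    def alpha_rate(t, alpha, rho, T):
        """d(alpha)/dt for the dissociation fraction alpha: a forward
        (dissociation) term minus a reverse (recombination) term."""
        forward = (1.0 - alpha) * np.exp(-THETA_D / T)
        reverse = (rho / RHO_D) * alpha ** 2
        return C * rho * T ** ETA * (forward - reverse)

    # Relax a cold, undissociated parcel in a hot bath at fixed rho and T
    # (a real shock-layer code would couple these to the flow field).
    sol = solve_ivp(alpha_rate, (0.0, 1e-4), [0.0], args=(3e-2, 7000.0))
    print(f"dissociation fraction alpha = {sol.y[0, -1]:.2f}")  # ~0.47 for these values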
Air is typically assumed to have a mole-fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the five-species model, which is based upon N2, O2, NO, N and O. The five-species model assumes no ionization and ignores trace species like carbon dioxide.
When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five-species model has 17 chemical formulas (34 when counting reverse formulas). The Lighthill-Freeman model is based upon a single ordinary differential equation and one algebraic equation; the five-species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the reaction rates in those 5 ordinary differential equations span widely different time scales, the system is numerically "stiff" and difficult to solve. The five-species model is only usable for entry from low Earth orbit, where entry velocity is approximately 7.8 km/s. For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen; the five-species model is no longer accurate, and a twelve-species model must be used instead. High-speed Mars entry, which involves a carbon dioxide, nitrogen and argon atmosphere, is even more complex, requiring a 19-species model.
An important aspect of modelling non-equilibrium real gas effects is radiative heat flux. If a vehicle is entering an atmosphere at very high speed (hyperbolic trajectory, lunar return) and has a large nose radius, then radiative heat flux can dominate TPS heating. Radiative heat flux during entry into an air or carbon dioxide atmosphere typically comes from asymmetric diatomic molecules, e.g., cyanogen (CN), carbon monoxide, nitric oxide (NO), singly ionized molecular nitrogen, etc. These molecules are formed by the shock wave dissociating ambient atmospheric gas followed by recombination within the shock layer into new molecular species. The newly formed diatomic molecules initially have a very high vibrational temperature that efficiently transforms the vibrational energy into radiant energy, i.e., radiative heat flux. The whole process takes place in less than a millisecond, which makes modelling a challenge. The experimental measurement of radiative heat flux (typically done with shock tubes), along with theoretical calculation through the unsteady Schrödinger equation, are among the more esoteric aspects of aerospace engineering. Most of the aerospace research work related to understanding radiative heat flux was done in the 1960s, but was largely discontinued after conclusion of the Apollo Program. Radiative heat flux in air was understood just well enough to ensure Apollo's success; radiative heat flux in carbon dioxide (Mars entry), however, is still barely understood and will require major research.

Frozen gas model[edit]
The frozen gas model describes a special case of a gas that is not in equilibrium. The name "frozen gas" can be misleading: a frozen gas is not "frozen" the way ice is frozen water. Rather, a frozen gas is "frozen" in time (all chemical reactions are assumed to have stopped). Chemical reactions are normally driven by collisions between molecules.
If gas pressure is reduced slowly enough that chemical reactions can keep pace, the gas can remain in equilibrium. However, it is also possible for gas pressure to be reduced so suddenly that almost all chemical reactions stop; for that situation the gas is considered frozen. The distinction between equilibrium and frozen is important because it is possible for a gas such as air to have significantly different properties (speed of sound, viscosity, etc.) at the same thermodynamic state, e.g., the same pressure and temperature. Frozen gas can be a significant issue in the wake behind an entry vehicle. During reentry, free stream air is compressed to high temperature and pressure by the entry vehicle's shock wave. Non-equilibrium air in the shock layer is then transported past the entry vehicle's leading side into a region of rapidly expanding flow that causes freezing. The frozen air can then be entrained into a trailing vortex behind the entry vehicle. Correctly modelling the flow in the wake of an entry vehicle is very difficult. Thermal protection system (TPS) heating in the vehicle's afterbody is usually not very high, but the geometry and unsteadiness of the vehicle's wake can significantly influence aerodynamics (pitching moment) and particularly dynamic stability.

Thermal protection systems[edit]
A thermal protection system, or TPS, is the barrier that protects a spacecraft during the searing heat of atmospheric reentry. A secondary goal may be to protect the spacecraft from the heat and cold of space while in orbit. Multiple approaches for the thermal protection of spacecraft are in use, among them ablative heat shields, passive cooling and active cooling of spacecraft surfaces.

Ablative heat shield (after use) on the Apollo 12 capsule

The ablative heat shield functions by lifting the hot shock layer gas away from the heat shield's outer wall (creating a cooler boundary layer). The boundary layer comes from blowing of gaseous reaction products from the heat shield material and provides protection against all forms of heat flux. The overall process of reducing the heat flux experienced by the heat shield's outer wall by way of a boundary layer is called blockage. Ablation occurs at two levels in an ablative TPS: the outer surface of the TPS material chars, melts, and sublimes, while the bulk of the TPS material undergoes pyrolysis and expels product gases. The gas produced by pyrolysis is what drives blowing and causes blockage of convective and catalytic heat flux. Pyrolysis can be measured in real time using thermogravimetric analysis, so that the ablative performance can be evaluated.[17] Ablation can also provide blockage against radiative heat flux by introducing carbon into the shock layer, thus making it optically opaque. Radiative heat flux blockage was the primary thermal protection mechanism of the Galileo Probe TPS material (carbon phenolic). Carbon phenolic was originally developed as a rocket nozzle throat material (used in the Space Shuttle Solid Rocket Booster) and for re-entry vehicle nose tips.
Early research on ablation technology in the USA was centered at NASA's Ames Research Center located at Moffett Field, California. Ames Research Center was ideal, since it had numerous wind tunnels capable of generating varying wind velocities.
Initial experiments typically mounted a mock-up of the ablative material to be analyzed within a hypersonic wind tunnel.[18]

Mars Pathfinder during final assembly, showing the aeroshell, cruise ring and solid rocket motor

The thermal conductivity of a particular TPS material is usually proportional to the material's density.[19] Carbon phenolic is a very effective ablative material, but it also has high density, which is undesirable. If the heat flux experienced by an entry vehicle is insufficient to cause pyrolysis, then the TPS material's conductivity could allow heat-flux conduction into the TPS bondline material, leading to TPS failure. Consequently, for entry trajectories causing lower heat flux, carbon phenolic is sometimes inappropriate, and lower-density TPS materials such as the following examples can be better design choices:
SLA in SLA-561V stands for super light-weight ablator. SLA-561V is a proprietary ablative made by Lockheed Martin that has been used as the primary TPS material on all of the 70° sphere-cone entry vehicles sent by NASA to Mars other than the Mars Science Laboratory (MSL). SLA-561V begins significant ablation at a heat flux of approximately 110 W/cm², but will fail for heat fluxes greater than 300 W/cm². The MSL aeroshell TPS is currently designed to withstand a peak heat flux of 234 W/cm². The peak heat flux experienced by the Viking-1 aeroshell, which landed on Mars, was 21 W/cm²; for Viking-1, the TPS acted as a charred thermal insulator and never experienced significant ablation. Viking-1 was the first Mars lander and was based upon a very conservative design. The Viking aeroshell had a base diameter of 3.54 meters (the largest used on Mars until Mars Science Laboratory). SLA-561V is applied by packing the ablative material into a honeycomb core that is pre-bonded to the aeroshell's structure, thus enabling construction of a large heat shield.[20]

NASA's Stardust sample return capsule successfully landed at the USAF Utah Range.

Phenolic-impregnated carbon ablator (PICA), a carbon-fiber preform impregnated in phenolic resin,[21] is a modern TPS material with the advantages of low density (much lighter than carbon phenolic) coupled with efficient ablative capability at high heat flux. It is a good choice for ablative applications such as the high-peak-heating conditions found on sample-return or lunar-return missions. PICA's thermal conductivity is lower than that of other high-heat-flux ablative materials, such as conventional carbon phenolics.[citation needed] PICA was patented by NASA Ames Research Center in the 1990s and was the primary TPS material for the Stardust aeroshell.[22] The Stardust sample-return capsule was the fastest man-made object ever to reenter Earth's atmosphere (12.4 km/s, or 28,000 mph, at 135 km altitude). This was faster than the Apollo mission capsules and 70% faster than the Shuttle.[23] PICA was critical for the viability of the Stardust mission, which returned to Earth in 2006. Stardust's heat shield (0.81 m base diameter) was manufactured from a single monolithic piece sized to withstand a nominal peak heating rate of 1.2 kW/cm².
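The heat-flux figures quoted above already imply a crude selection rule. The toy function below encodes just those numbers; real TPS selection also weighs heat load, wall pressure, spallation and flight heritage, as discussed later in this article:

    def pick_tps_material(peak_heat_flux_w_cm2: float) -> str:
        """Toy selection logic using the SLA-561V thresholds quoted above:
        significant ablation starts near 110 W/cm^2, failure above ~300 W/cm^2."""
        if peak_heat_flux_w_cm2 <= 110.0:
            return "SLA-561V, acting as a charred insulator (little ablation)"
        if peak_heat_flux_w_cm2 <= 300.0:
            return "SLA-561V in its ablative regime"
        return "a higher-capability ablator such as PICA or carbon phenolic"

    print(pick_tps_material(21.0))    # Viking-1 peak heat flux
    print(pick_tps_material(234.0))   # MSL design value
    print(pick_tps_material(1200.0))  # Stardust-class heating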
A PICA heat shield was also used for the Mars Science Laboratory entry into the Martian atmosphere.[24] An improved and easier-to-manufacture version called PICA-X was developed by SpaceX in 2006-2010[24] for the Dragon space capsule.[25] The first reentry test of a PICA-X heat shield was on the Dragon C1 mission on 8 December 2010.[26] The PICA-X heat shield was designed, developed and fully qualified by a small team of only a dozen engineers and technicians in less than four years.[24] PICA-X is ten times less expensive to manufacture than the NASA PICA heat shield material.[27] The Dragon 1 spacecraft initially used PICA-X version 1 and was later equipped with version 2; the Dragon V2 spacecraft uses PICA-X version 3. SpaceX has indicated that each new version of PICA-X primarily improves heat-shielding capacity rather than reducing manufacturing cost.[citation needed]

Deep Space 2 impactor aeroshell, a classic 45° sphere-cone with spherical-section afterbody enabling aerodynamic stability from atmospheric entry to surface impact

Silicone-impregnated reusable ceramic ablator (SIRCA) was also developed at NASA Ames Research Center and was used on the Backshell Interface Plate (BIP) of the Mars Pathfinder and Mars Exploration Rover (MER) aeroshells. The BIP was at the attachment points between the aeroshell's backshell (also called the afterbody or aft cover) and the cruise ring (also called the cruise stage). SIRCA was also the primary TPS material for the unsuccessful Deep Space 2 (DS/2) Mars impactor probes with their 0.35 m base-diameter aeroshells. SIRCA is a monolithic, insulating material that can provide thermal protection through ablation. It is the only TPS material that can be machined to custom shapes and then applied directly to the spacecraft. There is no post-processing, heat treating, or additional coating required (unlike Space Shuttle tiles). Since SIRCA can be machined to precise shapes, it can be applied as tiles, leading-edge sections, full nose caps, or in any number of custom shapes or sizes. As of 1996, SIRCA had been demonstrated in backshell interface applications, but not yet as a forebody TPS material.[28]
AVCOAT is a NASA-specified ablative heat shield, a glass-filled epoxy-novolac system.[29] NASA originally used it for the Apollo capsule and then utilized the material for its next-generation, beyond-low-Earth-orbit Orion spacecraft.[30] The Avcoat to be used on Orion has been reformulated to meet environmental legislation passed since the end of Apollo.[31][32]

Thermal soak[edit]

Astronaut Andrew S. W. Thomas takes a close look at TPS tiles underneath Space Shuttle Atlantis. Rigid black LI-900 tiles were used on the Space Shuttle.

Thermal soak is a part of almost all TPS schemes. For example, an ablative heat shield loses most of its thermal protection effectiveness when the outer wall temperature drops below the minimum necessary for pyrolysis. From that time to the end of the heat pulse, heat from the shock layer convects into the heat shield's outer wall and would eventually conduct to the payload.[citation needed] This outcome is prevented by ejecting the heat shield (with its soaked heat) prior to the heat conducting to the inner wall. Typical Space Shuttle TPS tiles (LI-900) have remarkable thermal protection properties: an LI-900 tile exposed to a temperature of 1,000 K on one side will remain merely warm to the touch on the other side. However, the tiles are relatively brittle, break easily, and cannot survive in-flight rain.
Passively cooled[edit]
In some early ballistic missile RVs (e.g., the Mk-2) and the sub-orbital Mercury spacecraft, radiatively cooled TPS were used to initially absorb heat flux during the heat pulse and then, after the heat pulse, radiate and convect the stored heat back into the atmosphere. However, the earlier version of this technique required a considerable quantity of metal TPS (e.g., titanium, beryllium, copper, etc.). Modern designers prefer to avoid this added mass by using ablative and thermal-soak TPS instead.

The Mercury capsule design (shown with its escape tower) originally used a radiatively cooled TPS, but was later converted to an ablative TPS.

Radiatively cooled TPS can still be found on modern entry vehicles, but reinforced carbon-carbon (RCC) (also called carbon-carbon) is normally used instead of metal. RCC was the TPS material on the Space Shuttle's nose cone and wing leading edges, and was also proposed as the leading-edge material for the X-33. Carbon is the most refractory material known, with a one-atmosphere sublimation temperature of 3,825 °C for graphite. This high temperature made carbon an obvious choice as a radiatively cooled TPS material. Disadvantages of RCC are that it is currently very expensive to manufacture and lacks impact resistance.[33]
Some high-velocity aircraft, such as the SR-71 Blackbird and Concorde, deal with heating similar to that experienced by spacecraft, but at much lower intensity and for hours at a time. Studies of the SR-71's titanium skin revealed that the metal structure was restored to its original strength through annealing due to aerodynamic heating. In the case of Concorde, the aluminium nose was permitted to reach a maximum operating temperature of 127 °C (typically 180 °C warmer than the sub-zero ambient air); the metallurgical implications (loss of temper) that would be associated with a higher peak temperature were the most significant factor limiting the top speed of the aircraft.
A radiatively cooled TPS for an entry vehicle is often called a hot-metal TPS. Early TPS designs for the Space Shuttle called for a hot-metal TPS based upon a nickel superalloy (René 41) and titanium shingles.[34] This earlier Shuttle TPS concept was rejected because it was believed that a silica-tile-based TPS would offer lower development and manufacturing costs.[citation needed] A nickel superalloy shingle TPS was again proposed for the unsuccessful X-33 single-stage-to-orbit (SSTO) prototype.[35]
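The sizing logic behind any radiatively cooled surface is the equilibrium balance q = εσT⁴ between absorbed and re-radiated flux. A minimal sketch (the emissivity of 0.85 and the 50 W/cm² example are illustrative assumptions, not material data):

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def radiative_equilibrium_temp_K(q_w_m2: float, emissivity: float = 0.85) -> float:
        """Surface temperature at which a hot-structure skin re-radiates
        the incoming heat flux: q = emissivity * sigma * T^4."""
        return (q_w_m2 / (emissivity * SIGMA)) ** 0.25

    # 50 W/cm^2 = 5e5 W/m^2 of incoming flux:
    print(radiative_equilibrium_temp_K(5.0e5))  # ~1795 K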
Recently, newer radiatively cooled TPS materials have been developed that could be superior to RCC. Referred to by their prototype vehicle, the Slender Hypervelocity Aerothermodynamic Research Probe (SHARP), these TPS materials have been based upon substances such as zirconium diboride and hafnium diboride. SHARP TPS materials promise performance improvements allowing for sustained Mach 7 flight at sea level, Mach 11 flight at 100,000 ft (30,000 m) altitude, and significant improvements for vehicles designed for continuous hypersonic flight. SHARP TPS materials enable sharp leading edges and nose cones that greatly reduce drag for air-breathing, combined-cycle-propelled spaceplanes and lifting bodies. SHARP materials have exhibited effective TPS characteristics from zero to more than 2,000 °C, with melting points over 3,500 °C. They are structurally stronger than RCC and thus do not require structural reinforcement with materials such as Inconel. SHARP materials are extremely efficient at re-radiating absorbed heat, thus eliminating the need for additional TPS behind and between the SHARP materials and conventional vehicle structure. NASA initially funded (and later discontinued) a multi-phase R&D program through the University of Montana in 2001 to test SHARP materials on test vehicles.[36][37]

Actively cooled[edit]
Various advanced reusable spacecraft and hypersonic aircraft designs have been proposed to employ heat shields made from temperature-resistant metal alloys that incorporate a refrigerant or cryogenic fuel circulating through them. Such a TPS concept was proposed for the X-30 National Aerospace Plane (NASP), which was to have been a scramjet-powered hypersonic aircraft but failed in development. In the early 1960s, various TPS systems were proposed that would use water or another cooling liquid sprayed into the shock layer, or passed through channels in the heat shield. Advantages included the possibility of more all-metal designs, which would be cheaper to develop, more rugged, and free of the need for classified technology. The disadvantages are increased weight and complexity, and lower reliability. The concept has never been flown, but a similar technology (the plug nozzle[38]) did undergo extensive ground testing.

Feathered reentry[edit]
In 2004, aircraft designer Burt Rutan demonstrated the feasibility of a shape-changing airfoil for reentry with the suborbital SpaceShipOne. The wings on this craft rotate upward into a feathered configuration that provides a shuttlecock effect. SpaceShipOne thus achieves much more aerodynamic drag on reentry while not experiencing significant thermal loads. The feathered configuration increases drag because the craft is less streamlined, and it results in more atmospheric gas particles hitting the spacecraft at higher altitudes than would otherwise be the case. The aircraft thus slows down more in the higher atmospheric layers, which is the key to efficient reentry. Secondly, the aircraft automatically orients itself in this state to a high-drag attitude.[39] However, the velocity attained by SpaceShipOne prior to reentry is much lower than that of an orbital spacecraft, and engineers, including Rutan, recognize that a feathered reentry technique is not suitable for return from orbit. On 4 May 2011, the feathering mechanism was first tested on SpaceShipTwo, during a glide flight after release from White Knight Two.
Feathered reentry was first described by Dean Chapman of NACA in 1958.[40] In the section of his report on composite entry, Chapman described a solution to the problem using a high-drag device:
It may be desirable to combine lifting and nonlifting entry in order to achieve some advantages... For landing maneuverability it obviously is advantageous to employ a lifting vehicle. The total heat absorbed by a lifting vehicle, however, is much higher than for a nonlifting vehicle... Nonlifting vehicles can more easily be constructed... by employing, for example, a large, light drag device... The larger the device, the smaller is the heating rate. Nonlifting vehicles with shuttlecock stability are advantageous also from the viewpoint of minimum control requirements during entry. ... an evident composite type of entry, which combines some of the desirable features of lifting and nonlifting trajectories, would be to enter first without lift but with a... drag device; then, when the velocity is reduced to a certain value... the device is jettisoned or retracted, leaving a lifting vehicle...
for the remainder of the descent.

Inflatable heat shield reentry[edit]

NASA engineers check the IRVE.

Deceleration for atmospheric reentry, especially for higher-speed Mars-return missions, benefits from maximizing "the drag area of the entry system. The larger the diameter of the aeroshell, the bigger the payload can be."[41] An inflatable aeroshell provides one alternative for enlarging the drag area with a low-mass design.
Such an inflatable shield/aerobrake was designed for the penetrators of the Mars 96 mission. Since that mission failed due to a launcher malfunction, NPO Lavochkin and DASA/ESA went on to design a mission for Earth orbit. The Inflatable Reentry and Descent Technology (IRDT) demonstrator was launched on Soyuz-Fregat on 8 February 2000. The inflatable shield was designed as a cone with two stages of inflation. Although the second stage of the shield failed to inflate, the demonstrator survived the orbital reentry and was recovered.[42][43] The subsequent missions flown on the Volna rocket were not successful, due to launcher failures.[44]
NASA launched an inflatable heat shield experimental spacecraft on 17 August 2009 with the successful first test flight of the Inflatable Re-entry Vehicle Experiment (IRVE). The heat shield had been vacuum-packed into a 15-inch (380 mm) diameter payload shroud and launched on a Black Brant 9 sounding rocket from NASA's Wallops Flight Facility on Wallops Island, Virginia. "Nitrogen inflated the 10-foot (3.0 m) diameter heat shield, made of several layers of silicone-coated [Kevlar] fabric, to a mushroom shape in space several minutes after liftoff."[41] The rocket's apogee was at an altitude of 131 miles (211 km), where it began its descent to supersonic speed. Less than a minute later the shield was released from its cover to inflate, at an altitude of 124 miles (200 km); the inflation of the shield took less than 90 seconds.[41]

Entry vehicle design considerations[edit]
There are four critical parameters considered when designing a vehicle for atmospheric entry:
1. Peak heat flux
2. Heat load
3. Peak deceleration
4. Peak dynamic pressure
Peak heat flux and dynamic pressure select the TPS material. Heat load selects the thickness of the TPS material stack. Peak deceleration is of major importance for manned missions: the upper limit for manned return to Earth from low Earth orbit (LEO) or lunar return is 10 g,[45] while for Martian atmospheric entry after long exposure to zero gravity the upper limit is 4 g[45] (a rough closed-form estimate is sketched below). Peak dynamic pressure can also influence the selection of the outermost TPS material if spallation is an issue.
Starting from the principle of conservative design, the engineer typically considers two worst-case trajectories, the undershoot and overshoot trajectories. The overshoot trajectory is typically defined as the shallowest allowable entry angle prior to atmospheric skip-off; it has the highest heat load and sets the TPS thickness. The undershoot trajectory is defined by the steepest allowable trajectory; for manned missions the steepest entry angle is limited by the peak deceleration. The undershoot trajectory also has the highest peak heat flux and dynamic pressure, and consequently the undershoot trajectory is the basis for selecting the TPS material.
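For the peak-deceleration parameter, the classical Allen and Eggers result for ballistic entry gives a closed-form estimate that is notably independent of the ballistic coefficient. A sketch (the 7.1 km scale height is an assumed round number for Earth, not a design value):

    import math

    H_SCALE_M = 7100.0  # assumed Earth atmospheric scale height

    def peak_deceleration_g(entry_speed_m_s: float, entry_angle_deg: float) -> float:
        """Allen-Eggers peak deceleration for a ballistic entry:
        a_max = V^2 * sin(gamma) / (2 * e * H), expressed in g."""
        gamma = math.radians(entry_angle_deg)
        a_max = entry_speed_m_s ** 2 * math.sin(gamma) / (2.0 * math.e * H_SCALE_M)
        return a_max / 9.81

    print(peak_deceleration_g(7800.0, 45.0))  # ~114 g: a steep, RV-like trajectory
    print(peak_deceleration_g(7800.0, 2.0))   # ~5.6 g: a shallow manned entry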
There is no "one size fits all" TPS material. A TPS material that is ideal for high heat flux may be too conductive (too dense) for a long-duration heat load. A low-density TPS material might lack the tensile strength to resist spallation if the dynamic pressure is too high. A TPS material can perform well for a specific peak heat flux, but fail catastrophically for the same peak heat flux if the wall pressure is significantly increased (this happened with NASA's R-4 test spacecraft).[45] Older TPS materials tend to be more labor-intensive and expensive to manufacture compared to modern materials. However, modern TPS materials often lack the flight history of the older materials, an important consideration for a risk-averse designer.
Based upon Allen and Eggers' discovery, maximum aeroshell bluntness (maximum drag) yields minimum TPS mass. Maximum bluntness (minimum ballistic coefficient) also yields a minimal terminal velocity at maximum altitude (very important for Mars EDL, but detrimental for military RVs). However, there is an upper limit to bluntness imposed by aerodynamic-stability considerations based upon shock wave detachment. A shock wave will remain attached to the tip of a sharp cone if the cone's half-angle is below a critical value. This critical half-angle can be estimated using perfect gas theory (this specific aerodynamic instability occurs below hypersonic speeds). For a nitrogen atmosphere (Earth or Titan), the maximum allowed half-angle is approximately 60°. For a carbon dioxide atmosphere (Mars or Venus), the maximum allowed half-angle is approximately 70°. After shock wave detachment, an entry vehicle must carry significantly more shock-layer gas around the leading-edge stagnation point (the subsonic cap); consequently, the aerodynamic center moves upstream, causing aerodynamic instability. It is therefore incorrect to reapply an aeroshell design intended for Titan entry (the Huygens probe, in a nitrogen atmosphere) to Mars entry (Beagle 2, in a carbon dioxide atmosphere). Prior to being abandoned, the Soviet Mars lander program achieved one successful landing (Mars 3) on the second of three entry attempts (the others were Mars 2 and Mars 6). The Soviet Mars landers were based upon a 60° half-angle aeroshell design.
A 45° half-angle sphere-cone is typically used for atmospheric probes (surface landing not intended), even though TPS mass is not minimized. The rationale for a 45° half-angle is to have either aerodynamic stability from entry to impact (the heat shield is not jettisoned) or a short, sharp heat pulse followed by prompt heat shield jettison. A 45° sphere-cone design was used with the DS/2 Mars impactor and the Pioneer Venus probes.
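The critical half-angle figures above can be wrapped in a simple check. A sketch (the two thresholds are exactly those quoted in the text; everything else is schematic):

    def shock_remains_attached(cone_half_angle_deg: float, atmosphere: str) -> bool:
        """True if a sharp cone of the given half-angle keeps its shock
        attached, using the approximate critical angles quoted above."""
        critical_deg = {"nitrogen": 60.0, "carbon_dioxide": 70.0}[atmosphere]
        return cone_half_angle_deg <= critical_deg

    # Why a 70 deg sphere-cone works at Mars but not at Earth or Titan:
    print(shock_remains_attached(70.0, "carbon_dioxide"))  # True
    print(shock_remains_attached(70.0, "nitrogen"))        # False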
Notable atmospheric entry accidents[edit]

Re-entry window: A - friction with air; B - in-air flight; C - expulsion, lower angle; D - perpendicular to the entry point; E - excess friction, 6.9° to 90°; F - repulsion of 5.5° or less; G - explosion friction; H - plane tangential to the entry point

Not all atmospheric re-entries have been successful, and some have resulted in significant disasters.
• Friendship 7 — Instrument readings showed that the heat shield and landing bag were not locked. The decision was made to leave the retrorocket pack in position during reentry. Lone astronaut John Glenn survived; the instrument readings were later found to have been erroneous.
• Voskhod 2 — The service module failed to detach for some time, but the crew survived.
• Soyuz 1 — The attitude control system failed while still in orbit, and the parachutes later became entangled during the emergency landing sequence (an entry, descent and landing (EDL) failure). Lone cosmonaut Vladimir Mikhailovich Komarov died.
• Soyuz 5 — The service module failed to detach, but the crew survived.
• Soyuz 11 — Early depressurization led to the death of all three crew.
• Mars Polar Lander — Failed during EDL. The failure was believed to be the consequence of a software error. The precise cause is unknown for lack of real-time telemetry.
• Space Shuttle Columbia — The failure of an RCC panel on a wing leading edge led to breakup of the orbiter at hypersonic speed, resulting in the death of all seven crew members.

Genesis entry vehicle after crash

• Genesis — The parachute failed to deploy due to a G-switch having been installed backwards (a similar error delayed parachute deployment for the Galileo Probe). Consequently, the Genesis entry vehicle crashed into the desert floor. The payload was damaged, but most scientific data were recoverable.
• Soyuz TMA-11 (April 19, 2008) — The Soyuz propulsion module failed to separate properly; a fallback ballistic reentry was executed that subjected the crew to forces about eight times that of gravity.[46] The crew survived.

Uncontrolled and unprotected reentries[edit]
Of satellites that reenter, approximately 10-40% of the mass of the object is likely to reach the surface of the Earth.[47] On average, about one catalogued object reenters per day.[48]
In 1979, Skylab reentered uncontrolled, spreading debris across the Australian Outback, damaging several buildings and killing a cow.[51][52] The re-entry was a major media event, largely due to the earlier Cosmos 954 incident, but was not viewed as much of a potential disaster since Skylab did not carry nuclear fuel. The city of Esperance, Western Australia, issued a fine for littering to the United States, which was finally paid 30 years later (not by NASA, but by privately collected funds from radio listeners).[53] NASA had originally hoped to use a Space Shuttle mission to either extend Skylab's life or enable a controlled reentry, but delays in the Shuttle program combined with unexpectedly high solar activity made this impossible.[54][55]
On February 7, 1991, the Salyut 7 space station, together with Kosmos 1686, underwent uncontrolled reentry over Argentina, scattering much of its debris over the town of Capitán Bermúdez.[56][57][58]

Deorbit disposal[edit]
In 1971, the world's first space station, Salyut 1, was deliberately de-orbited into the Pacific Ocean following the Soyuz 11 accident. Its successor, Salyut 6, was de-orbited in a controlled manner as well.
On June 4, 2000, the Compton Gamma Ray Observatory was deliberately de-orbited after one of its gyroscopes failed. The debris that did not burn up fell harmlessly into the Pacific Ocean. The observatory was still operational, but the failure of another gyroscope would have made de-orbiting much more difficult and dangerous. With some controversy, NASA decided in the interest of public safety that a controlled crash was preferable to letting the craft come down at random.
In 2001, the Russian Mir space station was deliberately de-orbited, and broke apart in the fashion expected by the command center during atmospheric re-entry. Mir entered the Earth's atmosphere on March 23, 2001, near Nadi, Fiji, and fell into the South Pacific Ocean.
On February 21, 2008, a disabled US spy satellite, USA 193, was successfully hit at an altitude of approximately 246 kilometers (153 mi) by an SM-3 missile fired from the U.S. Navy cruiser Lake Erie off the coast of Hawaii. The satellite was inoperative, having failed to reach its intended orbit when it was launched in 2006.
Due to its rapidly deteriorating orbit, it was destined for uncontrolled reentry within a month. The United States Department of Defense expressed concern that the 1,000-pound (450 kg) fuel tank containing highly toxic hydrazine might survive reentry and reach the Earth's surface intact. Several governments, including those of Russia, China, and Belarus, protested the action as a thinly veiled demonstration of US anti-satellite capabilities.[59] China had previously caused an international incident when it tested an anti-satellite missile in 2007.
On September 7, 2011, NASA announced the impending uncontrolled re-entry of the Upper Atmosphere Research Satellite (UARS) and noted that there was a small risk to the public.[60] The decommissioned satellite reentered the atmosphere on September 24, 2011, and some pieces are presumed to have crashed into the South Pacific Ocean over a debris field 500 miles (800 km) long.[61]

Successful atmospheric re-entries from orbital velocities[edit]
Manned orbital re-entry, by country/governmental entity
• Soviet Union/Russia
• United States
• China
Manned orbital re-entry, by commercial entity
• None to date
Unmanned orbital re-entry, by country/governmental entity
• Soviet Union/Russia
• United States
• China
• European Space Agency
• Japan
• India
Unmanned orbital re-entry, by commercial entity
• SpaceX

Selected atmospheric re-entries[edit]
• Phobos-Grunt: 2012
• ROSAT: 2011
• UARS: 2011
• Mir: 2001
• Skylab: 1979

Further reading[edit]
• Launius, Roger D.; Jenkins, Dennis R. (October 10, 2012). Coming Home: Reentry and Recovery from Space. NASA. ISBN 9780160910647. OCLC 802182873. Retrieved August 21, 2014.
• Martin, John J. (1966). Atmospheric Entry: An Introduction to Its Science and Engineering. Old Tappan, NJ: Prentice-Hall.
• Regan, Frank J. (1984). Re-Entry Vehicle Dynamics (AIAA Education Series). New York: American Institute of Aeronautics and Astronautics. ISBN 0-915928-78-7.
• Etkin, Bernard (1972). Dynamics of Atmospheric Flight. New York: John Wiley & Sons. ISBN 0-471-24620-4.
• Vincenti, Walter G.; Kruger, Charles H., Jr. (1986). Introduction to Physical Gas Dynamics. Malabar, Florida: Robert E. Krieger Publishing Co. ISBN 0-88275-309-6.
• Hansen, C. Frederick (1976). Molecular Physics of Equilibrium Gases: A Handbook for Engineers. NASA. NASA SP-3096.
• Hayes, Wallace D.; Probstein, Ronald F. (1959). Hypersonic Flow Theory. New York and London: Academic Press. A revised version of this classic text has been reissued as an inexpensive paperback: Hayes, Wallace D. (1966, reissued 2004). Hypersonic Inviscid Flow. Mineola, New York: Dover Publications. ISBN 0-486-43281-5.
• Anderson, John D., Jr. (1989). Hypersonic and High Temperature Gas Dynamics. New York: McGraw-Hill. ISBN 0-07-001671-2.

Notes and references[edit]
1. ^
2. ^
3. ^ Goddard, Robert H. (March 1920). "Report Concerning Further Developments". The Smithsonian Institution Archives. Archived from the original on 26 June 2009. Retrieved 2009-06-29. "In the case of meteors, which enter the atmosphere with speeds as high as 30 miles per second, the interior of the meteors remains cold, and the erosion is due, to a large extent, to chipping or cracking of the suddenly heated surface. For this reason, if the outer surface of the apparatus were to consist of layers of a very infusible hard substance with layers of a poor heat conductor between, the surface would not be eroded to any considerable extent, especially as the velocity of the apparatus would not be nearly so great as that of the average meteor."
4. ^ Boris Chertok, Rockets and People, NASA History Series, 2006.
5. ^
6. ^ Hansen, James R.
(Jun 1987). "Chapter 12: Hypersonics and the Transition to Space". Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917-1958. The NASA History Series. sp-4305. United States Government Printing. ISBN 978-0-318-23455-7.  7. ^ Allen, H. Julian; Eggers, Jr., A. J. (1958). "A Study of the Motion and Aerodynamic Heating of Ballistic Missiles Entering the Earth's Atmosphere at High Supersonic Speeds". NACA Annual Report (NASA Technical Reports) 44.2 (NACA-TR-1381): 1125–1140. [dead link] 8. ^ Przadka, W.; Miedzik, J.; Goujon-Durand, S.; Wesfreid, J.E. "The wake behind the sphere; analysis of vortices during transition from steadiness to unsteadiness." (PDF). Polish french cooperation in fluid research. Archive of Mechanics., 60, 6, pp. 467–474, Warszawa 2008. Received May 29, 2008; revised version November 13, 2008. Retrieved 3 April 2015.  9. ^ a b Fay, J. A.; Riddell, F. R. (February 1958). "Theory of Stagnation Point Heat Transfer in Dissociated Air" (PDF REPRINT). Journal of the Aeronautical Sciences 25 (2): 73–85. doi:10.2514/8.7517. Retrieved 2009-06-29.  11. ^ Whittington, Kurt Thomas. "A Tool to Extrapolate Thermal Reentry Atmosphere Parameters Along a Body in Trajectory Space" (PDF). NCSU Libraries Technical Reports Repository. A thesis submitted to the Graduate Faculty of North Carolina State University in partial fulfillment of the requirements for the degree of Master of Science Aerospace Engineering Raleigh, North Carolina 2011, pp.5. Retrieved 5 April 2015.  12. ^ Regan, Frank J. and Anadakrishnan, Satya M., "Dynamics of Atmospheric Re-Entry," AIAA Education Series, American Institute of Aeronautics and Astronautics, Inc., New York, ISBN 1-56347-048-9, (1993). 13. ^ "Equations, tables, and charts for compressible flow". NACA Annual Report (NASA Technical Reports) 39 (NACA-TR-1135): 611–681. 1953.  14. ^ Kenneth Iliff and Mary Shafer, Space Shuttle Hypersonic Aerodynamic and Aerothermodynamic Flight Research and the Comparison to Ground Test Results, Page 5-6 15. ^ Lighthill, M.J. (Jan 1957). "Dynamics of a Dissociating Gas. Part I. Equilibrium Flow". Journal of Fluid Mechanics 2 (1): 1–32. Bibcode:1957JFM.....2....1L. doi:10.1017/S0022112057000713.  16. ^ Freeman, N.C. (Aug 1958). "Non-equilibrium Flow of an Ideal Dissociating Gas". Journal of Fluid Mechanics 4 (4): 407–425. Bibcode:1958JFM.....4..407F. doi:10.1017/S0022112058000549.  18. ^ Hogan, C. Michael, Parker, John and Winkler, Ernest, of NASA Ames Research Center, "An Analytical Method for Obtaining the Thermogravimetric Kinetics of Char-forming Ablative Materials from Thermogravimetric Measurements", AIAA/ASME Seventh Structures and Materials Conference, April, 1966 19. ^ Di Benedetto, A.T.; Nicolais, L.; Watanabe, R. (1992). Composite materials : proceedings of Symposium A4 on Composite Materials of the International Conference on Advanced Materials--ICAM 91, Strasbourg, France, 27-29 May, 1991. Amsterdam: North-Holland. p. 111. ISBN 0444893563.  20. ^ Tran, Huy; Michael Tauber; William Henline; Duoc Tran; Alan Cartledge; Frank Hui; Norm Zimmerman (1996). Ames Research Center Shear Tests of SLA-561V Heat Shield Material for Mars-Pathfinder (PDF) (Technical report). NASA Ames Research Center. NASA Technical Memorandum 110402.  21. ^ Lachaud, Jean; N. Mansour, Nagi (June 2010). A pyrolysis and ablation toolbox based on OpenFOAM (PDF). 5th OpenFOAM Workshop. Gothenburg, Sweden. p. 1.  22. 
^ Tran, Huy K., et al., "Qualification of the forebody heatshield of the Stardust's Sample Return Capsule," AIAA Thermophysics Conference, 32nd, Atlanta, GA, 23-25 June 1997.
23. ^ Stardust - Cool Facts
24. ^ a b c Chambers, Andrew; Rasky, Dan (2010-11-14). "NASA + SpaceX Work Together". NASA. Retrieved 2011-02-16. "SpaceX undertook the design and manufacture of the reentry heat shield; it brought speed and efficiency that allowed the heat shield to be designed, developed, and qualified in less than four years."
25. ^ SpaceX Manufactured Heat Shield Material - February 23, 2009
26. ^ Dragon could visit space station next, 2010-12-08, accessed 2010-12-09.
27. ^ Chaikin, Andrew (January 2012). "1 visionary + 3 launchers + 1,500 employees = ? : Is SpaceX changing the rocket equation?". Air & Space Smithsonian. Retrieved 2012-11-13. "SpaceX's material, called PICA-X, is 1/10th as expensive than the original [NASA PICA material and is better], ... a single PICA-X heat shield could withstand hundreds of returns from low Earth orbit; it can also handle the much higher energy reentries from the Moon or Mars."
28. ^ Tran, Huy K., et al., "Silicone impregnated reusable ceramic ablators for Mars follow-on missions," AIAA-1996-1819, Thermophysics Conference, 31st, New Orleans, LA, June 17-20, 1996.
29. ^ Flight-Test Analysis of Apollo Heat-Shield Material Using the Pacemaker Vehicle System, NASA Technical Note D-4713, p. 8, 1968-08, accessed 2010-12-26. "Avcoat 5026-39/HC-G is an epoxy novolac resin with special additives in a fiberglass honeycomb matrix. In fabrication, the empty honeycomb is bonded to the primary structure and the resin is gunned into each cell individually. ... The overall density of the material is 32 lb/ft3 (512 kg/m3). The char of the material is composed mainly of silica and carbon. It is necessary to know the amounts of each in the char because in the ablation analysis the silica is considered to be inert, but the carbon is considered to enter into exothermic reactions with oxygen. ... At 2160 °R (1200 K), 54 percent by weight of the virgin material has volatilized and 46 percent has remained as char. ... In the virgin material, 25 percent by weight is silica, and since the silica is considered to be inert the char-layer composition becomes 6.7 lb/ft3 (107.4 kg/m3) of carbon and 8 lb/ft3 (128.1 kg/m3) of silica."
30. ^ NASA Selects Material for Orion Spacecraft Heat Shield, 2009-04-07, accessed 2011-01-02.
31. ^ NASA's Orion heat shield decision expected this month, 2009-10-03, accessed 2011-01-02.
32. ^ Company Watch (Apr 12, 2009)
33. ^ Columbia Accident Investigation Board report.
34. ^ Shuttle Evolutionary History.
35. ^ X-33 Heat Shield Development report.
36. ^
37. ^ SHARP structure homepage
38. ^ - J2T-200K & J2T-250K
39. ^ SpaceShipOne
40. ^ Chapman, Dean R. (May 1958). "An approximate analytical method for studying reentry into planetary atmospheres" (PDF). NACA Technical Note 4276: 38. Archived from the original (PDF) on 2011-04-07.
41. ^ a b c NASA Launches New Technology: An Inflatable Heat Shield, NASA Mission News, 2009-08-17, accessed 2011-01-02.
42. ^ Inflatable Re-Entry Technologies: Flight Demonstration and Future Prospects
43. ^ Inflatable Reentry and Descent Technology (IRDT) Factsheet, ESA, September 2005
44. ^ IRDT demonstration missions
45. ^ a b c Pavlosky, James E.; St. Leger, Leslie G. (1974). "Apollo Experience Report - Thermal Protection Subsystem". NASA TN D-7564.
46. ^ William Harwood (2008).
"Whitson describes rough Soyuz entry and landing". Spaceflight Now. Retrieved July 12, 2008.  47. ^ Spacecraft Reentry FAQ: How much material from a satellite will survive reentry? 48. ^ NASA - Frequently Asked Questions: Orbital Debris 49. ^ Center for Orbital and Reentry Debris Studies - Spacecraft Reentry 50. ^ Settlement of Claim between Canada and the Union of Soviet Socialist Republics for Damage Caused by "Cosmos 954" (Released on April 2, 1981) 51. ^ Hanslmeier, Arnold (2002). The sun and space weather. Dordrecht ; Boston: Kluwer Academic Publishers. p. 269. ISBN 9781402056048.  52. ^ Mitnik, Donald (2009). Death of a Trillion Dreams. (October 19, 2009). p. 113. ISBN 978-0557156016.  53. ^ Littering fine paid[dead link] 54. ^ Lamprecht, Jan (1998). Hollow planets : a feasibility study of possible hollow worlds. Austin, TX: World Wide Pub. p. 326. ISBN 9780620219631.  55. ^ Elkins-Tanton, Linda (2006). The Sun, Mercury, and Venus. New York: Chelsea House. p. 56. ISBN 9780816051939.  56. ^, Spacecraft Reentry FAQ:[dead link] 57. ^ Astronautix, Salyut 7. 58. ^ NYT, Salyut 7, Soviet Station in Space, Falls to Earth After 9-Year Orbit 59. ^ Gray, Andrew (2008-02-21). "U.S. has high confidence it hit satellite fuel tank". Reuters. Archived from the original on 25 February 2008. Retrieved 2008-02-23.  60. ^ David, Leonard (7 September 2011). "Huge Defunct Satellite to Plunge to Earth Soon, NASA Says". Retrieved 10 September 2011.  61. ^ "Final Update: NASA's UARS Re-enters Earth's Atmosphere". Retrieved 2011-09-27.  External links[edit]
Quantum chemistry

Quantum chemistry is a branch of chemistry whose primary focus is the application of quantum mechanics in physical models and experiments of chemical systems. It involves heavy interplay of experimental and theoretical methods:
• Experimental quantum chemists rely heavily on spectroscopy, through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy and nuclear magnetic resonance (NMR) spectroscopy.
• Theoretical quantum chemistry, the workings of which also tend to fall under the category of computational chemistry, seeks to calculate the predictions of quantum theory; as this task, when applied to polyatomic species, invokes the many-body problem, these calculations are performed using computers rather than by back-of-the-envelope methods.
In these ways, quantum chemists investigate chemical phenomena.
• In reactions, quantum chemistry studies the ground state of individual atoms and molecules, the excited states, and the transition states that occur during chemical reactions.
• On the calculations: quantum chemical studies also use semi-empirical and other methods based on quantum mechanical principles, and deal with time-dependent problems. Many quantum chemical studies assume the nuclei are at rest (the Born-Oppenheimer approximation). Many calculations involve iterative methods that include self-consistent field methods. Major goals of quantum chemistry include increasing the accuracy of the results for small molecular systems, and increasing the size of large molecules that can be processed, which is limited by scaling considerations: the computation time increases as a power of the number of atoms.

History
The history of quantum chemistry essentially began with the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis of
Max Planck that any energy radiating atomic system can theoretically be divided into a number of discrete energy elements ε, such that each of these energy elements is proportional to the frequency ν with which they each individually radiate energy, as defined by the following formula:

ε = hν

where h is a numerical value called Planck's constant. Then, in 1905, to explain the photoelectric effect (1839), i.e., that shining light on certain materials can function to eject electrons from the material, Albert Einstein postulated, based on Planck's quantum hypothesis, that light itself consists of individual quantum particles, which later came to be called photons.

Electronic structure
The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or the Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian. This is called determining the electronic structure of the molecule. It can be said that the electronic structure of a molecule or crystal implies essentially its chemical properties. An exact solution for the Schrödinger equation can only be obtained for the hydrogen atom. Since all other atomic or molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved exactly, and so approximate solutions must be sought.

Wave model
The foundation of quantum mechanics and quantum chemistry is the wave model, in which the atom is a small, dense, positively charged atomic nucleus surrounded by electrons. Unlike the earlier Bohr model, the wave model describes electrons by probability amplitudes: wave functions whose squared modulus gives the probability of finding an electron in a given region of space, rather than definite orbits. The strength of this model lies in its predictive power.
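The hydrogen atom's closed-form solution mentioned above is easy to state concretely: its bound-state energies reduce to a one-line formula. A short sketch (values in electronvolts):

    RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

    def hydrogen_level_ev(n: int) -> float:
        """Exact nonrelativistic hydrogen energy levels, E_n = -R/n^2:
        one of the few systems where the Schrodinger equation is
        solvable in closed form."""
        return -RYDBERG_EV / n ** 2

    # Lyman-alpha transition (n = 2 -> n = 1):
    print(hydrogen_level_ev(2) - hydrogen_level_ev(1))  # ~10.2 eV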
Valence bond
Although the mathematical basis of quantum chemistry had been laid by Schrödinger in 1926, it is generally accepted that the first true calculation in quantum chemistry was that of the German physicists Walter Heitler and Fritz London on the hydrogen (H2) molecule in 1927. Heitler and London's method was extended by the American theoretical physicist John C. Slater and the American theoretical chemist Linus Pauling to become the valence-bond (VB) method. In this method, attention is primarily devoted to the pairwise interactions between atoms, so it correlates closely with classical chemists' drawings of chemical bonds.

Molecular orbital
An alternative approach was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund-Mulliken approach, or molecular orbital (MO) method, is less intuitive to chemists, but has turned out to be better suited to predicting spectroscopic properties than the VB method. This approach is the conceptual basis of the Hartree-Fock method (an approximate method for determining the ground-state wave function and ground-state energy of a quantum many-body system) and of further post-Hartree-Fock methods.

Density functional theory
The Thomas-Fermi model was developed independently by Thomas and Fermi in 1927. It was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory.
Though this method is less developed than post-Hartree-Fock methods, its significantly lower computational requirements (scaling typically no worse than n³ with respect to n basis functions) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability, and accuracy often comparable to MP2 and CCSD(T) (post-Hartree-Fock methods), has made it one of the most popular methods in computational chemistry at present.

Chemical dynamics
A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum molecular dynamics; within the semiclassical approximation it is called semiclassical molecular dynamics, and within the classical mechanics framework, molecular dynamics (MD). Statistical approaches, using for example Monte Carlo methods, are also possible.
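At the classical end of this spectrum, most molecular dynamics codes are built around the velocity Verlet integrator. A minimal sketch with a single particle on a harmonic "bond" (all parameter values are illustrative):

    import numpy as np

    def velocity_verlet(x, v, force, mass, dt, n_steps):
        """Velocity Verlet integration: update positions, recompute forces,
        then update velocities with the average of old and new forces."""
        f = force(x)
        for _ in range(n_steps):
            x = x + v * dt + 0.5 * (f / mass) * dt ** 2
            f_new = force(x)
            v = v + 0.5 * (f + f_new) / mass * dt
            f = f_new
        return x, v

    k, m = 1.0, 1.0  # spring constant and mass (period T = 2*pi)
    x, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                           lambda pos: -k * pos, m, dt=0.01, n_steps=628)
    print(x, v)  # back near the start after ~one full oscillation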
Rudolph A. Marcus, who received the 1992 Nobel Prize in Chemistry for his theory of electron transfer, took into account the transition state theory developed by Henry Eyring. The transition state of a chemical reaction is the configuration of highest energy along the reaction coordinate; at this point, assuming a perfectly irreversible reaction, colliding reactant molecules always proceed to products.

Non-adiabatic chemical dynamics

Non-adiabatic dynamics involves several coupled potential energy surfaces rather than a single adiabatic one. Pioneering work in this field was done by Ernst Stueckelberg, Landau, and Clarence Zener, whose formula allows the transition probability between two diabatic potential curves in the neighborhood of an avoided crossing to be calculated.
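The Landau–Zener result itself is compact enough to quote; the following block is an added illustration, with the notation defined in the comments rather than taken from the article:

```latex
% Two diabatic levels cross linearly in time and are coupled with matrix
% element Delta. The probability of jumping diabatically through the
% avoided crossing is
P_{\mathrm{diabatic}} \;=\; e^{-2\pi\Gamma},
\qquad
\Gamma \;=\; \frac{\Delta^{2}/\hbar}{\bigl|\tfrac{d}{dt}\,(E_{2} - E_{1})\bigr|},
% so a large coupling or a slow sweep keeps the system on the adiabatic
% surface (P -> 0), while a fast sweep makes it hop (P -> 1).
```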
See also

• Atomic physics
• Computational chemistry
• Condensed matter physics
• International Academy of Quantum Molecular Science
• Molecular modelling
• Physical chemistry
• Quantum chemistry computer programs
• Quantum electrochemistry
• QMC@Home
• Theoretical physics
• Electron localization function

Further reading

• Karplus M., Porter R.N. (1971). Atoms and Molecules: An Introduction for Students of Physical Chemistry, Benjamin-Cummings Publishing Company, ISBN 978-0-8053-5218-4
• Attila Szabo, Neil S. Ostlund (1996). Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, Dover, ISBN 0-486-69186-1

External links

Nobel lectures by quantum chemists
In philosophy, emergentism is the belief in emergence, particularly as it involves consciousness and the philosophy of mind, and as it contrasts (or not) with reductionism. A property of a system is said to be emergent if it is in some sense more than the "sum" of the properties of the system's parts. An emergent property is said to be dependent on some more basic properties (and their relationships and configuration), so that it can have no separate existence. However, a degree of independence is also asserted of emergent properties, so that they are not identical to, or reducible to, or predictable from, or deducible from their bases. The different ways in which the independence requirement can be satisfied lead to variant types of emergence.

Forms of emergentism

All varieties of emergentism strive to be compatible with physicalism, the theory that the universe is composed exclusively of physical entities, and in particular with the evidence relating changes in the brain with changes in mental functioning. (Many forms of emergentism, including proponents of complex adaptive systems, hold not a material but a relational or processual view of the universe; furthermore, they view mind–body dualism as a conceptual error insofar as mind and body are merely different types of relationships.) As a theory of mind (which it is not always), emergentism differs from idealism, eliminative materialism, identity theories, neutral monism, panpsychism, and substance dualism, whilst being closely associated with property dualism. It is generally not obvious whether an emergent theory of mind embraces mental causation or must be considered epiphenomenal.

Some varieties of emergentism are not specifically concerned with the mind-body problem and instead suggest a hierarchical or layered view of the whole of nature, with the layers arranged in terms of increasing complexity, each requiring its own special science. Typically physics is basic, with chemistry built on top of it, then biology, psychology, and the social sciences. Reductionists respond that the arrangement of the sciences is a matter of convenience, and that chemistry is derivable from physics (and so forth) in principle, an argument which gained force after the establishment of a quantum-mechanical basis for chemistry.[1] Other varieties see mind or consciousness as specifically and anomalously requiring emergentist explanation, and therefore constitute a family of positions in the philosophy of mind. Douglas Hofstadter summarises this view as "the soul is more than the sum of its parts". A number of philosophers have offered the argument that qualia constitute the hard problem of consciousness, and resist reductive explanation in a way that all other phenomena do not. In contrast, reductionists generally see the task of accounting for the possibly atypical properties of mind and of living things as a matter of showing that, contrary to appearances, such properties are indeed fully accountable in terms of the properties of the basic constituents of nature and therefore in no way genuinely atypical.

Intermediate positions are possible: for instance, some emergentists hold that emergence is neither universal nor restricted to consciousness, but applies to (for instance) living creatures, or self-organising systems, or complex systems. Some philosophers hold that emergent properties causally interact with more fundamental levels, an idea known as downward causation.
Others maintain that higher-order properties simply supervene on lower levels without direct causal interaction. All the cases so far discussed have been synchronic, i.e. the emergent property exists simultaneously with its basis. Yet another variation operates diachronically. Emergentists of this type believe that genuinely novel properties can come into being without being accountable in terms of the preceding history of the universe. (Contrast with indeterminism, where it is only the arrangement or configuration of matter that is unaccountable.) These evolution-inspired theories often have a theological aspect, as in the process philosophy of Alfred North Whitehead and Charles Hartshorne.

The concept of emergence has been applied to the theory of literature and art, history, linguistics, the cognitive sciences, etc. by the teachings of Jean-Marie Grassin at the University of Limoges (see esp.: J. Fontanille, B. Westphal, J. Vion-Dury, éds., L'Émergence: Poétique de l'Émergence, en réponse aux travaux de Jean-Marie Grassin, Bern, Berlin, etc., 2011; and the article "Emergence" in the International Dictionary of Literary Terms (DITL)).

Relationship to vitalism

A refinement of vitalism may be recognized in contemporary molecular histology in the proposal that some key organising and structuring features of organisms, perhaps including even life itself, are examples of emergent processes: those in which a complexity arises, out of interacting chemical processes forming interconnected feedback cycles, that cannot fully be described in terms of those processes, since the system as a whole has properties that the constituent reactions lack.[2][3] Whether emergent system properties should be grouped with traditional vitalist concepts is a matter of semantic controversy.[4] In a light-hearted millennial vein, Kirschner and Mitchison call research into integrated cell and organismal physiology "molecular vitalism."[5]

According to Emmeche et al. (1997): The first emergentist theorists used the example of water having a new property when hydrogen, H, and oxygen, O, combine to form H2O (water). In this example there emerge such new properties as liquidity under standard conditions. (Analogous hydrides of the oxygen family, such as hydrogen sulfide, are gases.) However, a better and more recent example of an emergent phenomenon, one provided by the physicist Erwin Schrödinger, is found in the families of molecules known as isomers, which are made up of precisely the same atoms, differently arranged, and which nevertheless have different physical properties. Similarly, enantiomers are molecules made up of precisely the same atoms but in mirror-image arrangement: they exist in "right-handed" and "left-handed" forms which have different properties when interacting with other molecules.

Biologists Ursula Goodenough and Terrence Deacon, in their 2006 essay The Sacred Emergence of Nature, have assembled a range of examples of physical and biological emergent properties that provide the evidential basis for emergentism as a philosophy that comports with a modern scientific understanding of how complexity arises in the natural world, and as a philosophy that supports religious naturalism. A longer compilation of emergent forms in nature is the 2004 book by biologist Harold J. Morowitz, The Emergence of Everything.

In the game of Go, the rules stipulate various constraints on the placement and removal of playing pieces. As a consequence of this, an "emergent" pattern is that groups of pieces with two eyes are "alive" and can never be removed. This is a vital part of the game, without which it cannot be played or understood, but it is not part of the rules. Similarly, in John Conway's Game of Life, some patterns of cells have striking properties, such as the ability to move or reproduce, which are not explicitly coded into the rules; the sketch below makes this concrete.
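The Life example is easy to play with directly. The following sketch is an added illustration, not part of the article: it steps Conway's rules with NumPy and watches a glider translate itself across the grid, even though nothing in the update rule mentions gliders:

```python
import numpy as np

def step(grid):
    """One generation of Conway's Game of Life on a toroidal grid."""
    # Count the eight neighbours by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((12, 12), dtype=int)
# A glider: the canonical "emergent" moving pattern.
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

for t in range(8):
    grid = step(grid)
print(grid)   # after 8 steps the glider has shifted by (2, 2) cells
```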
Although examples of higher-level properties which are not identical to lower-order properties are easy to find, examples where they are not reducible to or predictable from their bases are more controversial.

John Stuart Mill

John Stuart Mill outlined his version of emergentism in A System of Logic (1843). Mill argued that the properties of some physical systems, such as those in which dynamic forces combine to produce simple motions, are subject to a law of nature he called the "Composition of Causes". According to Mill, emergent properties are not subject to this law, but instead amount to more than the sums of the properties of their parts. Mill believed that various chemical reactions (poorly understood in his time) could provide examples of emergent properties, although some critics believe that modern physical chemistry has shown that these reactions can be given satisfactory reductionist explanations. For instance, it has been claimed that the whole of chemistry is, in principle, contained in the Schrödinger equation.

C. D. Broad

The British philosopher C. D. Broad defended a realistic epistemology in The Mind and its Place in Nature (1925), arguing that emergent materialism is the most likely solution to the mind-body problem. Broad's definition of emergence amounted to the claim that mental properties would count as emergent if and only if philosophical zombies were metaphysically possible. Many philosophers take this position to be inconsistent with some formulations of psychophysical supervenience.

C. Lloyd Morgan and Samuel Alexander

Samuel Alexander's views on emergentism, argued in Space, Time, and Deity (1920), were inspired in part by the ideas in psychologist C. Lloyd Morgan's Emergent Evolution. Alexander believed that emergence was fundamentally inexplicable, and that emergentism was simply a "brute empirical fact":

"The higher quality emerges from the lower level of existence and has its roots therein, but it emerges therefrom, and it does not belong to that level, but constitutes its possessor a new order of existent with its special laws of behaviour. The existence of emergent qualities thus described is something to be noted, as some would say, under the compulsion of brute empirical fact, or, as I should prefer to say in less harsh terms, to be accepted with the "natural piety" of the investigator. It admits no explanation." (Space, Time, and Deity)

Despite the causal and explanatory gap between the phenomena on different levels, Alexander held that emergent qualities were not epiphenomenal. His view can perhaps best be described as a form of nonreductive physicalism (NRP)[8] or supervenience theory.

Ludwig von Bertalanffy

Ludwig von Bertalanffy founded General System Theory (GST), a more contemporary approach to emergentism. A popularization of many of the elements of GST may be found in The Web of Life by Fritjof Capra.

Jaegwon Kim

[Figure: a diagram illustrating how the mental events M1 and M2 are not reduced to the physical events P1 and P2.]
Addressing emergentism (under the guise of non-reductive physicalism) as a solution to the mind-body problem, Jaegwon Kim has raised an objection based on causal closure and overdetermination. Emergentism strives to be compatible with physicalism, and physicalism, according to Kim, has a principle of causal closure according to which every physical event is fully accountable in terms of physical causes. This seems to leave no "room" for mental causation to operate. If our bodily movements were caused by the preceding state of our bodies and by our decisions and intentions, they would be overdetermined. Mental causation in this sense is not the same as free will, but is only the claim that mental states are causally relevant. If emergentists respond by abandoning the idea of mental causation, their position becomes a form of epiphenomenalism.

In detail, he proposes (using the chart in the figure above) that M1 causes M2 (these are mental events) and P1 causes P2 (these are physical events). P1 realises M1 and P2 realises M2. However, M1 does not cause P1 (i.e., M1 is a consequent event of P1). If P1 causes P2, and M1 is a result of P1, then M2 is a result of P2. He says that the only alternatives to this problem are to accept dualism (where the mental events are independent of the physical events) or eliminativism (where the mental events do not exist).

References

1. ^ Crane, T., The Significance of Emergence.
5. ^ Kirschner, M.; Gerhart, J.; Mitchison, T. (2000). "Molecular 'vitalism'". Cell 100 (1): 79–88. doi:10.1016/S0092-8674(00)81685-2. PMID 10647933.
6. ^ Emmeche, C. (1997). "Explaining Emergence: Towards an Ontology of Levels". Journal for General Philosophy of Science. Available online.
7. ^ Dictionary of the History of Ideas.
8. ^ Stanford Encyclopedia of Philosophy.

Further reading

• Jones, Richard H. Analysis & the Fullness of Reality: An Introduction to Reductionism and Emergence (2013)
• Laughlin, Robert B. A Different Universe (2005)
• Ansgar Beckermann, Hans Flohr, Jaegwon Kim: Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism (1992)

External links

• Emergentism in the Stanford Encyclopedia of Philosophy, 2007.
• Emergentism in the Dictionary of Philosophy of Mind, 2007.
Nonlinear Sciences > Exactly Solvable and Integrable Systems

Title: The derivative nonlinear Schrödinger equation on the half-line

Abstract: We analyze the derivative nonlinear Schr\"odinger equation $iq_t + q_{xx} = i(|q|^2q)_x$ on the half-line using the Fokas method. Assuming that the solution $q(x,t)$ exists, we show that it can be represented in terms of the solution of a matrix Riemann-Hilbert problem formulated in the plane of the complex spectral parameter $\zeta$. The jump matrix has explicit $x,t$ dependence and is given in terms of the spectral functions $a(\zeta)$, $b(\zeta)$ (obtained from the initial data $q_0(x) = q(x,0)$) as well as $A(\zeta)$, $B(\zeta)$ (obtained from the boundary values $g_0(t) = q(0,t)$ and $g_1(t) = q_x(0,t)$). The spectral functions are not independent, but related by a compatibility condition, the so-called global relation. Given initial and boundary values $\{q_0(x), g_0(t), g_1(t)\}$ such that there exist spectral functions satisfying the global relation, we show that the function $q(x,t)$ defined by the above Riemann-Hilbert problem exists globally and solves the derivative nonlinear Schr\"odinger equation with the prescribed initial and boundary values.

Comments: 27 pages, 4 figures
Subjects: Exactly Solvable and Integrable Systems (nlin.SI)
Cite as: arXiv:0808.1534 [nlin.SI] (or arXiv:0808.1534v1 [nlin.SI] for this version)

Submission history
From: Jonatan Lenells [view email]
[v1] Mon, 11 Aug 2008 16:08:48 GMT (33kb,D)
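The equation itself is easy to experiment with numerically. The sketch below is an added illustration only: it has nothing to do with the paper's Fokas/Riemann-Hilbert machinery, and it evolves the equation on a periodic box rather than the half-line with boundary data; the initial pulse and all parameter values are arbitrary choices:

```python
import numpy as np

# Minimal pseudospectral split-step integrator for the DNLS equation
#   i q_t + q_xx = i (|q|^2 q)_x   <=>   q_t = i q_xx + (|q|^2 q)_x
# on a periodic box (the half-line problem of the paper is ignored here).
N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

q = 0.5 * np.exp(-x**2) * np.exp(1j * x)   # arbitrary smooth initial pulse
dt, steps = 1e-3, 2000

def nonlinear(q):
    # (|q|^2 q)_x computed spectrally
    return np.fft.ifft(1j * k * np.fft.fft(np.abs(q)**2 * q))

for _ in range(steps):
    # Strang splitting: half linear step (exact in Fourier space),
    # full nonlinear step (explicit midpoint rule), half linear step.
    q = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(q))
    q_mid = q + 0.5 * dt * nonlinear(q)
    q = q + dt * nonlinear(q_mid)
    q = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(q))

print("L2 norm (conserved up to scheme error):", np.trapz(np.abs(q)**2, x))
```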
Why is the solution of the Schrödinger equation always symmetric or antisymmetric?

Apr 18, 2007 #1
I read that in a 1D potential, the solution of the Schrödinger equation is always either symmetric or antisymmetric if the potential is a symmetric function: V(x) = V(-x). How can I prove this? Thanks for your answers!

Apr 18, 2007 #2
The proof for this is typically done using parity operators. If your Hamiltonian is given by [tex]\mathcal{H} = \hat{p}^2/2m + V(\hat{x})[/tex], write down the time-independent Schrödinger equation, then flip the signs on all the x-coordinates and see what this imposes on the wave function. If you want a better discussion of this, check out Chapter 4 of Sakurai.

Apr 19, 2007 #3
Great! Thanks for this answer!

Apr 19, 2007 #4
I have another question about the same problem: in which situation can the eigenstates be degenerated?

Apr 19, 2007 #5
I'm not familiar with the word "degenerated". If you mean "having degenerate eigenstates", degeneracies usually arise when you have another observable [tex]\mathcal{O}[/tex] such that [tex]\left[ \mathcal{H}, \mathcal{O} \right] = 0[/tex]. This implies that an eigenstate of the Hamiltonian is also an eigenstate of your new observable (I leave it to you to figure out why). What frequently happens in this case is that there are multiple values of [tex]\mathcal{O}[/tex] for a given energy eigenvalue, and so you end up with degeneracies in the energy spectrum. Of course, sometimes things end up being more degenerate than they should be. For example, in hydrogen, the energy levels don't depend on the [tex]\ell[/tex] quantum number, although in general a spherically symmetric potential leads to an [tex]\ell[/tex]-dependent energy spectrum. This is called an "accidental degeneracy". In the case of the hydrogen atom, the degeneracy arises because the angular momentum operators aren't the only ones that commute with the Hamiltonian, and the underlying group symmetry of the hydrogen Hamiltonian is SO(4).

Apr 20, 2007 #6 (Science Advisor)
A superposition of solutions is also a solution. A superposition of a symmetric and an antisymmetric solution is neither symmetric nor antisymmetric. Therefore, the solution does NOT need to be either symmetric or antisymmetric.

Apr 20, 2007 #7
The solution is not always symmetric or antisymmetric!!! It depends on the symmetries of the forces in the system. If the potential is symmetric (V(-x) = V(x)), then the Hamiltonian commutes with the inversion operator P, and the operators H and P share a common basis of eigenvectors. Read about the consequences of [A,B] = 0 and apply that to your case here: [H,P] = 0.

Apr 20, 2007 #8 (Science Advisor)
Are we talking about solutions of the Schrödinger equation, or about eigenstates of the Hamiltonian operator? A superposition of solutions is a solution, whereas a superposition of eigenstates is NOT an eigenstate.
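Spelling out the parity argument sketched in posts #2 and #7 (an added summary of the standard textbook proof, not a quote from any poster):

```latex
% If V(-x) = V(x), let (P\psi)(x) = \psi(-x) be the parity operator.
% Replacing x by -x in the time-independent Schroedinger equation shows
% that [H, P] = 0, so if \psi is an eigenfunction with energy E, then
% P\psi is too:
H\psi = E\psi \quad\Longrightarrow\quad H(P\psi) = P(H\psi) = E\,(P\psi).
% If the level E is non-degenerate, P\psi must be proportional to \psi,
%   P\psi = c\,\psi, and P^2 = 1 forces c = \pm 1, so
\psi(-x) = \pm\,\psi(x).
% For a degenerate level, only suitable linear combinations need be
% symmetric or antisymmetric, which is exactly the caveat raised in
% posts #6 and #8.
```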
Tae-Chang Jo

• High-volume, multistage continuous production flow through a re-entrant factory is modeled through a conservation law for a continuous density variable on a continuous production line, augmented by a state equation for the speed of the production along the production line. The resulting nonlinear, nonlocal hyperbolic conservation law allows fast and accurate…

• The spread of the H5N1 virus to Europe and continued human infection in Southeast Asia have heightened pandemic concern. Although, fortunately, sustained human-to-human transmission has not been reported yet, it is feared that a pandemic virus that can be transmitted easily among humans will emerge in the future. In this study, we extended the previous…

• We present a simple iterative scheme to numerically solve a regularized internal wave model describing the large-amplitude motion of the interface between two layers of different densities. Compared with the original strongly nonlinear internal wave model of Miyata [10] and Choi and Camassa [2], the regularized model adopted here suppresses shear…

• A strongly nonlinear asymptotic model describing the evolution of large-amplitude internal waves in a two-layer system is studied numerically. While the steady model has been demonstrated to capture correctly the characteristics of large-amplitude internal solitary waves, a local stability analysis shows that the time-dependent inviscid model suffers from…

• The number of applications enabling keyword queries over graph-structured data is increasing enormously in various application domains such as the Web, databases, chemical compounds, and bioinformatics. Existing search systems reveal serious performance problems due to their failure to integrate information from multiple connected resources. In this paper, we…

• The Mathieu partial differential equation (PDE) is analyzed as a prototypical model for pattern formation due to parametric resonance. After averaging and scaling, it is shown to be a perturbed nonlinear Schrödinger equation (NLS). Adiabatic perturbation theory for solitons is applied to determine which solitons of the NLS survive the perturbation due to…
Saturday, January 31, 2009

Davos: Klaus met Gore for 2 hours

In Davos, Czech president Václav Klaus met Al Gore for two hours (National Review, AFP). It was a friendlier encounter than the differences would suggest. Klaus summarized it by saying that there is no global warming, that he (Klaus) is the most important denier in the world (alarmists may call me Dr Victor Frankenstein for the win :-)), and that Al Gore is a key figure of a movement trying to suppress freedom. Environmentalists don't listen to the other side while Klaus does. Also, Klaus is more afraid of the regulation that will be justified by the current crisis than of the crisis itself. President Klaus also met with Shimon Peres (Klaus' favorite Israeli politician) today, discussing the Middle East issues. The Gore effect is working in Davos, too: the current temperature, -12 °C, matches the record cold reading from January 31st, 2003.

A windstorm

By the way, there is one more entertaining story about Klaus. You know that it's politically incorrect to exclusively use female names for hurricanes and windstorms. It's politically incorrect despite the fact that a woman is like a hurricane: when it arrives, it is a pleasant and warm humid wind; when it leaves, it takes cars and houses with it. One week ago, the European meteorologists had a great idea what to call the windstorm that was underway in France and Spain: Klaus. It killed about 20 people and will cost the insurance industry half a billion euro or so.

Secondary forest growth beats human consumption 50:1

Tome has pointed out a remarkably balanced story in The New York Times, New Jungles Prompt a Debate on Rain Forests. Secondary forests, i.e. new jungles, are growing in previously agricultural (or logging or natural-disaster) areas as much as 50 times faster than people are able and allowed to cut the primeval rain forests. The area of secondary forests is doubling every 18 years, and people are quoted in the article as saying that there are many more forests than they could see 30 years ago. In the good old times, rain forests were one of the main symbols of environmentalism. They're so pretty and diverse. (You know, I am an old environmentalist who has participated - together with Greenpeace guys - in weekly voluntary events to help the trees in the Bohemian Forest and elsewhere!) That old environmental problem was arguably captivating but it has never gained the political power of the contemporary greenhouse religion, especially because of its local (and distant) character. People may be just revealing that even the old problem was based on a deep misunderstanding of the internal mechanisms of Nature and Her inherent strength. I guess that the higher concentration of CO2, the gas we call life, is contributing to the fast expansion of the new forests, too.

Rudolf Mössbauer: 80th birthday

Rudolf Mössbauer was born in Munich on January 31st, 1929. Congratulations! He has been an eager teacher who thought it was important to explain anything and everything to everyone else, including your cat. Now he is a Professor Emeritus at Caltech. He has taught physics of neutrinos, neutrons, the electroweak theory, and other things. Of course, he is most famous for his 1957 discovery of the Mössbauer effect, or "Recoilless Nuclear Resonance Absorption" if you happen to be himself and you still want to look excessively modest.
:-) See his 1961 lecture about it and the paper in German. He received the 1961 physics Nobel prize for that. He was promoted to a professor in advance so that Caltech wouldn't become a place where Nobel prize winners are treated as postdocs. :-) Well, he actually shared the award with Robert Hofstadter, who studied electron scattering in atomic nuclei. Right now, the most culturally important fact about Robert Hofstadter is that Leonard Hofstadter from The Big Bang Theory (CBS) was named after him. ;-)

Emergence of thermodynamics from statistical physics

Thursday, January 29, 2009

LHC: black holes living for seconds?

Yesterday, I had to spend hours in a debate about global warming under my article at The Invisible Dog, a famous Czech personal internet daily of Mr Ondřej Neff, a well-known science-fiction writer. The article is called Rationally About Weather And Climate - a modified Czech version of Weather and Climate: Noise and Timescales. Yes, it seems that the skeptics have won once again. ;-)

The IPCC's proxy, Dr Ladislav Metelka, is an OK chap and he's not even terribly radical. But he has shown his remarkable ignorance in many ways. For example, a reader asked him why the IPCC seems to predict that the temperature change per CO2 concentration change is speeding up as the concentration goes up, in violation of the logarithmic law. Metelka answered with some incoherent nonsense that the IPCC result includes all feedbacks and can therefore be accelerating. Of course, the real explanation was that the reader had calculated "temperature change OVER concentration" instead of "temperature change OVER concentration change". He forgot to subtract 280 ppm from the concentration, and when he did it right, it worked as expected: the influence is slowing down. The reader understood the error (and the correct answer) completely. I am sure he learned what it feels like to know that his knowledge is more robust than the knowledge of the self-declared best Czech mainstream climatologist.

Black holes at the LHC

But there is one more type of alarmism that began to spread in the media: the LHC alarmism. You know, the LHC will create a black hole that will destroy the Earth. A few days ago, a new wave of this stuff began to penetrate the media. See e.g. MSNBC: Study revisits black-hole controversy; FoxNews: Scientists not so sure 'doomsday machine' won't destroy world; and others (Google News). The story is based on a new preprint by Roberto Casadio, Sergio Fabi, and Benjamin Harms.

Tuesday, January 27, 2009

Balling, Michaels: Climate of Extremes

I received a book by Patrick Michaels and Robert Balling Jr, "Climate of Extremes". It is a very nice book that is crowded with graphs and information. At the beginning, Michaels announces that he will have to leave his school in June 2009 because the current conditions don't allow him to keep both his scientific integrity and the funding. You will find some embarrassing quotes by leading IPCC scientists and Al Gore. But then the real book begins.
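A quick numerical aside on the logarithmic law from the Invisible Dog exchange above. This is an added sketch: the 5.35 ln(C/C0) expression is the standard simplified CO2 forcing formula, and the sensitivity parameter lam is purely illustrative, not a claim about the real climate.

```python
import numpy as np

# The no-feedback forcing of CO2 grows logarithmically with concentration:
#   F = 5.35 W/m^2 * ln(C / C0), with C0 = 280 ppm (pre-industrial).
# Dividing the temperature change by (C - 280) rather than by C shows the
# per-ppm effect slowing down, exactly as the corrected reader found.
lam = 0.8          # illustrative sensitivity in K per (W/m^2); an assumption
C0 = 280.0
for C in (300.0, 400.0, 560.0, 800.0):
    dT = lam * 5.35 * np.log(C / C0)
    print(f"C = {C:5.0f} ppm:  dT = {dT:4.2f} K,  "
          f"dT per added ppm = {dT / (C - C0):.4f} K/ppm")
# The K/ppm column decreases monotonically: the response decelerates.
```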
Hořava, Lifshitz, Cotton, and UV general relativity

Let me start with some fun: click the picture of April Motl, my very distant relative who is "getting to the heart of the matter", too. ;-) Amusingly enough, in 1998, I was using the pen name April Lumo for a while.

Petr Hořava wrote an interesting preprint: Quantum gravity at a Lifshitz point (see also: November 2007 talk in Santa Barbara). He wants to find a "smaller" theory of quantum gravity than string theory, so he looks at the hypothetical UV fixed point (a theory without a preferred scale) that could flow to Einstein's equations at long distances. Fixed points are an intellectual value that the CMT and HEP cultures share. See also NYU about Hořava-Lifshitz gravity for more comments about the paper and the sociology surrounding it... This research program has been tried unsuccessfully many times in the past. The new twist is that his proposed fixed point is non-relativistic. Normal scale-invariant relativistic theories have a scaling symmetry that affects space and time equally. Dispersion relations tell us that "E=p" and we say that the exponent "z=1". Ordinary non-relativistic mechanics scales them differently and "E=p^2/2m", giving "z=2". His starting point is even more non-relativistic, with "z=3". But he wants to get to "z=1" at long distances.
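To unpack the z exponents, here is the scaling argument in compact form (an added restatement, not text from the post):

```latex
% Anisotropic (Lifshitz) scaling with dynamical exponent z:
%   x -> b\,x,   t -> b^{z}\,t,
% so the dispersion relation at the fixed point behaves as
E \;\sim\; p^{z}.
% z = 1: relativistic scaling, E ~ p (space and time scale together);
% z = 2: ordinary non-relativistic mechanics, E = p^2/2m;
% z = 3: Horava's proposed UV fixed point, E ~ p^3, which is supposed
%        to flow to the relativistic z = 1 behaviour at long distances.
```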
Weather in the year 3000

Gene Day has sent me a cute article at MSNBC. Do you want to invite your friends for a barbecue party in the year 3000, a few years after the collapse of the Thousand Year Reich? And do you need plans for different weather scenarios? Expect 1,000-year climate impacts, experts say. Although science can only remove the noise and predict something specific about the atmosphere for a month in advance, while the behavior in any further future seems to be an intractable problem, the average experts got used to "predictions" of the weather for the year 2100 or 2200. No one is going to check these predictions during the people's careers - which is great - and because other people want to listen to them anyway, these forecasts became widespread. So if you want to be ahead of your climatological colleagues these days, 2100 or 2200 is not enough. So Susan Solomon is telling us that the catastrophe is going to be lasting. Even if we stop all production of CO2, she says, the Earth will be "dying" at least until the year 3000 because the "murder" we are committing against Gaia is "irreversible". One of the greatest catastrophes is that the sea level will jump by 1 meter by the year 3000 just because of the thermal expansion from CO2-related greenhouse heating. What a cataclysm: it's almost a millimeter per year, a roughly 10 times slower rate than when we were coming out of the last ice age. Moreover, it is extremely logical (for them) to talk about the year 3000 and use these speculations as a justification of an immediate "action" in the year 2009. There must exist a time machine or a wormhole between the years 2009 and 3000 if they can be linked in this way. Believe it or not, the fate of the people in the year 3000 depends on your decision on January 27th, 2009! :-)

She can't possibly be serious when she says that a 1 meter change of the sea level in 1000 years is bad. During the last 20 millennia, the sea level naturally jumped by 120 meters, which is 6 meters per millennium. If you focus on the interval from 15,000 years ago to 7,000 years ago, the rate doubles: the rise was more than 10 meters per millennium. Relative to these rates, the sea level rise nearly stopped 7,000 years ago or so. Why did it stop so quickly? No, there was no discontinuity in the laws of physics. The reason was that the glaciers had disappeared from the bulk of the Earth's surface, so there was no ice left to be melted - except for Greenland and Antarctica, which might naturally melt (even without any human intervention) in a few thousand years, too. You know, ice can sometimes naturally melt, lady! It is no coincidence that 6,000 years ago or so, the ancient civilizations started to be born and flourish. The reason is that ice is pretty bad for life while the warming was damn good for them. Snow and ice are clean and cute but that's exactly why there's almost no life in them.

Sunday, January 25, 2009

Anti-quantum zeal

Evidence vs prejudices

Saturday, January 24, 2009

Microcanonical leftists & cap-and-trade system

Do you prefer the microcanonical ensembles over the canonical ones? Then you're a leftist, a TRF theory claims. ;-) In thermodynamics, a microcanonical ensemble chooses microstates according to the value of their energy (or another extensive quantity), which must belong to an interval. Each microstate in the ensemble has the same weight. A canonical (or grand-canonical) ensemble chooses a temperature or a chemical potential (or another intensive quantity) and allows different microstates to contribute differently, with exponentially decreasing weights. The average energy is calculated from the temperature. I have always found the canonical ensemble to be the more natural one, for many reasons. The microcanonical weights are discontinuous and depend on an arbitrary choice of an interval (and its length). The canonical weights are smooth and admit a natural calculation in terms of a periodic Euclidean time. Now I believe that this preference of mine is a by-product of my being a rightwinger. ;-) Conservative people like to define fixed laws in the society that apply to everyone, and it is up to everyone what he or she does with them. The outcomes are not known in advance and they depend on the detailed dynamics - the Hamiltonian, if you wish. On the other hand, leftists prefer plans (like in a planned economy). So they decide what the energy should be in advance and eliminate every microstate that doesn't agree with their plans. Those (microstates or people) who make it through the microcanonical filter must be treated in an egalitarian way while the rest is sent to a Gulag.
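For readers who want the two ensembles side by side, here they are as formulas; this block is an added summary using standard notation:

```latex
% Microcanonical: equal weight for every microstate s in an energy window,
\rho_{\mathrm{micro}}(s) \;\propto\;
  \begin{cases} 1, & E(s) \in [E_0,\, E_0 + \Delta E], \\[2pt]
                0, & \text{otherwise}, \end{cases}
% Canonical: fix the temperature T instead and weight smoothly,
\rho_{\mathrm{canon}}(s) \;=\; \frac{e^{-E(s)/k_B T}}{Z}, \qquad
Z \;=\; \sum_{s} e^{-E(s)/k_B T}.
% The sharp window versus the smooth exponential is exactly the
% "plan versus tax" contrast drawn in the post.
```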
Cap and trade system

We recently encountered this funny analogy during a discussion of the carbon cap-and-trade system. Is that a better idea than ordinary taxes, or fees paid for every ton of CO2? In the cap-and-trade system, one first defines the "emission goals" for everyone (the cap part) and companies are then allowed to buy or sell the indulgences or "credits" (the trade part). This costs us about half a trillion dollars a year right now. Is that a market-economy solution? Well, if the "caps" were natural, the "trade" part would surely be more market-friendly than strict pre-determined plans required from everyone. Such trade could increase the efficiency of the system. On the other hand, there's still the microcanonical "cap" part here, which is completely artificial and dirigistic in character. There would be no indulgence market if there were no caps. The strictness of the regulator will always be the main force that determines the price of the indulgences.

That's why we have seen the price of carbon credits oscillate wildly, by many orders of magnitude. Price oscillations surely exist in a genuine market economy, too. However, these particular price oscillations - in the carbon markets - were clearly driven by government policies. And I find them almost completely unnatural and counterproductive. If carbon dioxide emissions are viewed as "finite damages", someone should have at least a vague idea of the damages caused by one ton of CO2. I believe that they're zero, if not negative, but those people who believe that CO2 hurts should have a positive number in mind and an argument that justifies it. This number should simply be added as an extra tax. You can still emit whatever you want but it will cost you some money. The difference between the cap-and-trade system and a tax is the same as the difference between the microcanonical and canonical ensembles in physics. The main difference is that the cap-and-trade system admits a variable price of one ton, but the variations are primarily determined by the regulators anyway. To put it differently, it is not true that the cap-and-trade system is more market-friendly than the extra taxes. It's because the whole nonzero price of the whole market is effectively dictated by the government - in the same way as if the government determines a new tax rate. In a genuine market, the price to emit 1 ton of CO2 would clearly be zero. The indulgence price swings look like genuine price swings in free markets but they are not: they just reflect the hawkish or dovish mood of the regulators and the precise details of their choice of the caps (which moreover increases the risk of insider trading and corruption). Moreover, if it turned out that the carbon indulgences must be extremely expensive to achieve a sizable reduction of CO2 emissions (which would be likely if we saw no new technological breakthroughs), the damage to the economy could become much higher than even what the alarmists believe to be the damage caused by the CO2, which is simply wrong.

So if any policy of this form ever has to be adopted, a new tax is the cleanest solution. The best formula for the CO2 tax rate was promoted by Ross McKitrick: the T3 tax. The tax rate would be determined by the measured warming of the tropical troposphere, where the greenhouse effect should leave the cleanest and strongest fingerprints. Needless to say, during the recent years the T3 tax would have been negative, because we have seen cooling, especially in this particular part of the atmosphere, so I guess that according to this policy, CO2 emitters should have been paid extra money! The T3 tax should satisfy everyone. Those people who don't believe that CO2 production causes significant temperature change have pretty much equal chances to expect a loss as to expect a profit, and those who believe that the warming will escalate because of CO2 production should be looking forward to a high carbon tax rate. ;-)

Hippie non-solutions to the black hole information problem

Sabine Hossenfelder and Lee Smolin wrote a down-to-earth manifesto called Conservative solutions to the black hole information problem (PDF) about the qualitative approaches to describe the survival or death of the information inside black holes. The paper uses the adjective "conservative" 18 times. That's quite a high frequency for hippie and feminist authors who clearly have no idea what the word "conservative" means - either in politics or in science.
Why don't they omit these political adjectives that, as they must know, are not apt for their ideas? Would it be too much to ask? They seem to abandon the last traces of rational thinking - an attitude that I don't consider a "conservative" virtue. The fact that it is the horizon below which the information is guaranteed to be lost (semiclassically), simply because causality prevents it from getting outside, and not the singularity (which is irrelevant for the information loss puzzle), has been repeatedly explained in detail, for example in Black hole singularities in AdS/CFT (and by Moshe Rozali and others), so I am pretty sure that a significant fraction of the lay readers start to understand this elementary point, too. Smolin and Hossenfelder are clearly not among them.

The authors reasonably sketch four or five possible macroscopic fates of the Schwarzschild black hole, including:

1. the correct Penrose diagram of an evaporating black hole
2. possible evolutions that involve naked singularities
3. a crazy star-like degeneration of the black hole that avoids both horizon and singularity
4. a future with a baby universe born inside (A) or a massive remnant (B)

Clearly, only option (1) is an acceptable macroscopic description of the spacetime with a black hole in it, and every acceptable - or "conservative" - solution must be compatible with the general shape of spacetime sketched in (1), as we will show again momentarily. This choice doesn't quite solve all the puzzles yet, but it is inevitable.

Friday, January 23, 2009

Steven Weinberg on condensed matter physics culture

We have written a lot of stuff on the emergent phenomena and the various cultures in physics, including the difference between the relativistic and particle-physics cultures. But when I looked at Clifford Johnson's blog an hour ago or so, I saw his comments about an interesting article by Steven Weinberg in the CERN Courier: From BCS to the LHC. Clifford Johnson seems to disagree with Weinberg, but I think that his reasons to disagree are based on a misunderstanding. Well, it shouldn't be shocking that my general philosophical opinions about physics are probably indistinguishable from Weinberg's opinions. Let me try to defend his viewpoint. There are many ways in which high-energy and condensed-matter theorists use similar methods and tools that are often helpful in the other discipline. The two cultures overlap and flows of ideas are sometimes helpful. But as Weinberg correctly says, there is a huge gap between the goals, aims, values, motivation, and sources of satisfaction of the two cultures. I always knew this to be the case, but a few years ago many friendly discussions with those roughly five condensed-matter theorists in the Society of Fellows - like Yaroslav Tserkovnyak (now UCLA: Privet!) - convinced me that the differences are much deeper than I had previously thought.

U.S.: global warming is the least concern

As Benjamin (and Marc Morano) has pointed out, global warming is the smallest concern for U.S. citizens among the 20 topics they were offered. It is clear that most people usually return to some common sense after some time, and because of the weak economy (the #1 topic) and a cool winter, it is clear that possible & imaginary threats and expensive proposals to avoid them simply can't be important for the people, regardless of the actual merit of this fearful science (which happens to be non-existent).
Cosmic-climate link supported by muons that see the stratosphere

Also, we follow Anthony Watts, the web's best science blogger of 2008 according to a large poll, and bring you a weekly dose of the peer-reviewed denialist literature. In a press release, a new paper in Geophysical Research Letters is promoted: S. Osprey and hundreds of authors, Sudden stratospheric warmings seen in MINOS deep underground muon data. This scientific work actually comes from high-energy physics. Deep underground, in an iron mine in Minnesota (the same state where the Minnesotans for global warming live) that is controlled by the Fermilab's MINOS collaboration, one can use a detector to measure the intensity of cosmic rays (well, the flux of muons, the electrons' 206.8 times heavier siblings), and these measurements display an unexpectedly strong correlation with the weather (temperature) in the upper atmosphere, called the stratosphere. The link was especially strong and surprising during sudden, multi-day-long stratospheric warming episodes in the Northern Hemispheric winter. In other words, underground muons can now be used to reconstruct the stratospheric temperature! The correlation between the cosmic rays and the climate is pretty much proven by now.

The direction of the causation

Still, don't judge too quickly: you must be careful before you declare this to be a proof of the Svensmark-Shaviv cosmoclimatological theories, because the MINOS correlation is at least partially (and possibly mostly) caused by the influence of the temperature on the production of muons from mesons - the opposite direction of the causal influence than the one climatologists would care about. To be sure, the causal relationship between the underground muon flux and the stratospheric temperatures can go both ways - both directions can contribute to the correlation. The cosmic mesons may speed up the creation of low-lying clouds, which usually cool down the surface, but because they reflect more of the solar radiation back to space, they give the stratosphere more opportunity to heat up: more cosmic rays mean a warmer stratosphere. The opposite relationship exists, too. A warmer stratosphere is "expanded", and the fraction (and the typical position) of the mesons destroyed by the air is influenced, too. That's why the fraction of mesons that live long enough to decay to muons is also affected. But let me admit that the sign of the relationship in this paragraph isn't quite clear to me at this point. At any rate, most forcings predict opposite changes of the trend for the stratosphere and for the troposphere. For example, the greenhouse effect cools down the stratosphere much like it heats up the troposphere.

Sociology & other MINOS stuff

If you care about the sociology, the MINOS authors are almost as numerous as the IPCC and their average IQ exceeds the IPCC's IQ roughly by 7 points. ;-) The list of authors includes my Harvard ex-colleague, Prof Gary Feldman, who is clearly even higher on this scale. :-) This is the second article on this blog that is largely dedicated to MINOS. The first one was not related to the climate: it was about the neutrino oscillations: Bush lost a few neutrinos in Minnesota.

There's no one as Irish as Barack O'Bama. Via Gene Day. ;-)

Bohmists & segregation of primitive and contextual observables

A student named Maaneli decided to defend his favorite theory, the Bohmian model of quantum phenomena. Update: See also Anti-quantum zeal for a sociological discussion of these issues.
This picture, originally pioneered by Louis de Broglie in the late 1920s under the name "pilot wave theory", was promoted and extended by David Bohm in the 1950s. Because Bohm was a holy Marxist while de Broglie was just a sinful aristocrat, the term "Bohmian mechanics" for de Broglie's theory has been used for decades, and I will mostly follow this convention, too. ;-) At any rate, there is no real reason to fight for priority because the theory is worthless nonsense. The framework tries to describe the quantum phenomena in a deterministic way.

What is the pilot wave theory?

In this approach, the wave function is an actual wave, a "pilot wave", analogous to the electromagnetic waves. Besides these classical degrees of freedom, there are additional classical degrees of freedom: the positions and velocities of the particles. These positions are influenced by the "pilot wave" in such a way that the pilot wave drives the particles away from the interference minima. More precisely, if the probability distribution "rho(x)" for the particle's position "X(t)" (effectively unknown to us, but known to Nature in principle) agrees with "|psi(x,t)|^2" at "t=0", it will agree with it at later times "t", too. Such a law for "X(t)" can be written down - as a first-order equation - while the classical "psi(t)" obeys the classically interpreted Schrödinger equation.
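For concreteness, here is the first-order law in question for a single non-relativistic particle; this block is an added sketch of the standard textbook formulation rather than a quote from the post:

```latex
% de Broglie's guidance equation for the particle position X(t):
\frac{dX}{dt} \;=\; \frac{\hbar}{m}\,
  \mathrm{Im}\!\left(\frac{\partial_x \psi(x,t)}{\psi(x,t)}\right)\Bigg|_{x = X(t)},
% while psi itself obeys the usual Schroedinger equation. Because |psi|^2
% satisfies the continuity equation with exactly this velocity field, an
% ensemble distributed as rho = |psi|^2 at t = 0 stays distributed as
% |psi|^2 at all later times ("equivariance"), which is the property
% asserted in the paragraph above.
```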
Thursday, January 22, 2009

Cell phone battery and the fridge

A month ago, during the 0 °F cold snap, it looked like my cell phone, whose (18 months old) battery used to last for 10 days at the very beginning, got discharged after 2 days or so. Even with a new battery (not from the same company but arguably having the same capacity), the energy disappeared quickly. What would you think was the reason? I decided that the cold weather could have something to do with it. When it's cold outside, the energy from the Li-ion battery is not released efficiently and some circuits might think that the battery is already getting empty. However, the battery emptied itself in 2-3 days even when the cell phone was kept at home, around 20 °C. This looked hopeless. Maybe the cell phone is not being charged completely, because of some memory effects or a wrong idea about the voltage needed to have a full battery. But if there is a mistake based on a wrong calibration, the argument above must be revertible: if you recharge your cell phone in the fridge, the circuit must think that it's not yet full, and it will try to charge it more fully than if the recharging occurs outside the fridge. I tried to charge the cell phone in the fridge and indeed, it seems that the lifetime has been extended, at least to 4 days (and counting). Do you think it's possible, reproducible, and that my explanation is correct? Do you need to be a chemical engineer who studies lithium batteries to give me a sensible answer? ;-) If you agree with my method, is the battery fixed for another year or do I have to charge it in the fridge all the time? Will it work? Can I recommend it to others to fight the aging of their batteries?

Dramatic update

Wednesday, January 21, 2009

Klebanov and Maldacena about hyperbolic cows

In the new issue of Physics Today, Juan Maldacena and Igor Klebanov write a semi-popular "feature article" about the 11th anniversary of the AdS/CFT correspondence: Solving quantum field theories via curved spacetimes (PDF). One of the ways to describe a negatively-curved space (in this case the Euclidean AdS2, or the Poincaré disk) is in terms of hyperbolic cows. Physicists have been using spherical cows as an idealization of the real ones for centuries, but only at the end of 1997 did they finally discover another comparably important idealization, the hyperbolic cow. Well, if these comments sound less technical than expected, you should prepare for the rest of their article, which could be more technical than expected.

HadCRUT3: autocorrelation and records

Eduardo Zorita was kind enough to look at my previous calculations of autocorrelation and frequency of clustered records that used the GISS data. Because I claim that the probability of their clustered records is above 6% while they claim it to be below 0.1%, and because both of us know that my higher result is caused by a higher autocorrelation of my random data relative to theirs, Dr Zorita told me that he thought that my autocorrelation was much higher than the observed one. However, that is not the case. The only correct statement would be that my autocorrelation is higher than theirs. But my autocorrelation matches the observations while theirs is much lower than the observed one. One of the technical twists of our discussion has been that I switched to the HadCRUT3 monthly data. We have much more resolution here: 159 * 12 = 1908 months of global (and other) temperature anomalies. In a new Mathematica notebook (PDF preview), I did several things:

Tuesday, January 20, 2009

The Friendship Algorithm

Episode 2x13 of The Big Bang Theory: watch at YouKu (click the right lower corner of the video rectangle for full screen). Sheldon develops a scientific procedure for making friends. ;-) Well, it's a bizarre episode but it's also fun. Reid has pointed out that Natalie Angier of the New York Times wrote another ideological article about women in science, pretending that it is "baffling" why women's percentage in maths and physics doesn't seem to increase. She also quotes some other zealous feminists who said that "diversity is a form of excellence". Oh, really? I thought that diversity is, by its very definition, a form of mediocrity and averageness, because "diversity" is meant to reproduce the distributions of the average society. Finally, she expects Obama to promote her feminist values. Well, I might be an incurable optimist but I don't see any evidence that Obama agrees with those obnoxious frigid women more than he agrees with me.

Sitcoms and "stereotypes"

It makes sense to mention this particular feminist article because she also blames The Big Bang Theory for the "stereotype" of having geeky chaps and a blonde attractive young woman. Well, realism is a major reason why I like TBBT so much.

Monday, January 19, 2009

Weather and climate: noise and timescales

To see that our nation is not quite innocent, click the frog and review a shameful "climate change" event sponsored by the Czech government.
A few days ago, an alarmist nicknamed Tamino (Grant Foster) wrote a shallow posting about the extrapolation of trends: What if? Foster argues that one can't blindly extrapolate trends, especially not the cooling ones. Well, I agree with the first part of the sentence, but unlike Foster, I think that one should blindly extrapolate neither cooling nor warming trends. I agree that the absence of a warming trend since 1998 (and the fact that 2008 was the coolest year since 2000; and it was also cooler than 1998, of course) doesn't mean that we know that there won't be any warming in the next 50 years. But in the same way, the existence of some warming in the last 100 years doesn't mean that there will be the same - or even much larger - warming in the 21st century.

Related: Rasmussen: Only 41% of Americans believe that climate change is man-made.

And the absence of a warming trend in the last 10 years indicates that it is somewhat likely that there won't be any warming trend in the following 10 years, either. In other words, even if "man-made climate change" exists, it is almost certainly not going to be an urgent problem on the timescale of a decade or shorter. Every sensible person should agree that short-term observations of either sign cannot be extrapolated to an arbitrarily distant future. But every sensible person should also agree that the observed temperatures do matter, and as their total volume grows bigger, their importance for our conclusions should increase, too. If someone talks about an "underlying trend" but his "underlying trend line" is allowed to deviate from the observed temperatures for arbitrarily long times and by arbitrarily large amounts, the "underlying trend line" is clearly scientifically meaningless and should have no impact on rational decision-making.
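The noise-and-timescales point is easy to illustrate numerically. The sketch below is a toy example, not Foster's analysis and not the Mathematica notebook mentioned earlier: it generates an autocorrelated AR(1) series with no underlying trend at all and then fits decade-long trends to it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) "temperature anomaly" with zero trend by construction
# and measure the decade-scale trends one would naively extrapolate.
n_months, phi = 1908, 0.9           # length matches HadCRUT3's 159 years
x = np.zeros(n_months)
for t in range(1, n_months):
    x[t] = phi * x[t - 1] + rng.normal(0, 0.1)

window = 120                         # ten-year (120-month) windows
trends = [np.polyfit(np.arange(window), x[i:i + window], 1)[0] * 120
          for i in range(0, n_months - window, window)]
print("decadal trends fitted to pure noise (units per decade):")
print(np.round(trends, 3))
# Individual decades show sizable "warming" and "cooling" trends even
# though the process has no trend at all: short windows mostly see noise.
```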
That allows them to view the early inflation and the present acceleration by the dark energy as having mathematically isomorphic roots - roots that they also try to trace to a dilaton-axion stringy origin.

3D toy model of N=8 SUGRA as a TOE

The third paper where the N=8 SUGRA is important was written by Jean Alexandre, John Ellis, and Nikolaos Mavromatos. They study the emergence of various composite fields in three-dimensional coset field theories - with holons and spinons being the elementary building blocks - and argue that these mechanisms are also important for qualitatively new physics of the N=8 SUGRA in four dimensions that may be relevant for its application as a "TOE", a statement that is surely both exaggerated and somewhat obsolete.

Predicting a wrong, negative C.C.

George Ellis and Lee Smolin have arguably submitted the first paper co-authored by the second author that I could agree with, even though all the content can be summarized in the following sentence: if there are infinitely many semi-realistic vacua with the cosmological constant clustered around minus epsilon - and no counterparts with a positive epsilon, as suggested by some recent papers, e.g. Shelton-Taylor-Wecht - then it is fair to say that the weak anthropic principle (apparently incorrectly) predicts that the cosmological constant is negative. Well, there are at least three facts that make this trivial conclusion (probably obvious to many experts, because whether or not string theory predicted or predicts a negative C.C. has been discussed for years: of course not) weaker than tea. The stability and the physical character of the Shelton-Taylor-Wecht vacua has not really been established; it has not really been established that there are no corresponding positive-C.C. vacua; and the weak anthropic assumption is wrong, which means that even an infinite multiplicative underrepresentation of a class of vacua doesn't kill it. ;-)

AdS/QCD: quark-gluon plasma

Johanna Erdmenger, Matthias Kaminski, and Felix Rust study an N=4 gauge theory coupled to N=2 matter, looking for mesons etc., and they claim that their results about the meson spectrum (and widths) are relevant for the quark-gluon plasma regime of ordinary QCD.

Dimensional reduction of monopoles

Brian Dolan and Richard Szabo consider the dimensional reduction of compactifications with spheres and focus on the effect of the reduction on the magnetic flux through the sphere, especially on the magnetic monopoles. They look at the Kaluza-Klein tower and its Yukawa interactions and make some tools more controllable by switching to a fuzzy sphere instead of the ordinary one.

No fixed points in Yukawa systems

Holger Gies and Michael Scherer study the UV properties of some toy models of the Higgs mechanism with various fermions and Yukawa couplings. They use the term "asymptotic safety" even though IMHO it should be reserved for the (unlikely) existence of UV fixed points in gravity. They show that some models admit no non-trivial UV fixed points.

QFT on quantum graphs

E. Ragoucy computes properties of a "quantum field theory" on graphs - which should probably be called "quantum mechanics" only, using the standard jargon. The author calculates physical properties including scattering amplitudes and conductance.

Orientable objects?

D. Gitman and A. Shelepin discuss "orientable objects" associated with some fields on the Poincaré group.
I don't understand their point or the meaning of their "objects", but frankly speaking, I tend to doubt that dozens of pages that seem to be struggling with some elementary facts about the Poincaré group and spinors contain something really new. If I am wrong, some of their unexpected conclusions sound rather sharp - for example, you need ten quantum numbers to describe an orientable object. ;-)

Saturday, January 17, 2009

Arctic global warming reaches the CSA

A Santa Claus in Indiana. The American South is usually thought to be a hot place. However, Alabama is now colder than Alaska. Big Chill (AP); record cold temperatures (Google News). In Central Europe, we've had temperatures around -15 °C or 0 °F for weeks and it is fair that those folks in the U.S. South taste it, too. Meanwhile, we expect a dramatically warmer end of January - around the freezing point. Time to find the swimming suits again. ;-) OK, not quite swimming suits, but the Minnesotan global warming anthem is pretty apt now again. See also Imagine there's no global warming by John Lennon. ;-) These days, when you say Minnesota, I imagine these jolly guys whose smile always survives the freezing weather in similar songs. But a decade ago, I had completely different things in mind. In 1999, my friend, mathematician Petr Čížek, invited me on a journey across the Midwest, which I didn't attend because of looming qualifiers at Rutgers. In Minnesota, his Russian friend, a lively girl, took over the steering wheel and tried to drive on a road under construction - which seemed fine, because you could go in the left lane where there was no traffic. Well, except for a truck that instantly killed the two of them (two other students from the car spent months in the hospital).

For an electric car, the Cadillac Converj Concept looks pretty hot and aggressive.

Google Chrome 5.0

I believe that only since the arrival of the 2.0 version has Google Chrome been the best Internet browser on the market. Download Google Chrome 5.0.396.0 or later (dev channel, May 2010 or later). If you can't upgrade your Chrome to the "hot" dev channel (with the most frequent, weekly updates), download and run the Chrome Channel Changer. Generally, Chrome is extremely fast. What I love are the configurable search engines. For example, when I press ALT+D and type one of my generalized URLs, I automatically obtain the corresponding searches at SPIRES, the Pilsner public transportation (line 30), and dozens of other engines. It's very convenient. Right-click the address bar with the URL to modify the search engines. The tabs that can be consolidated, split, or permuted easily are also pleasant. Click to magnify the screenshot. The new pre-beta version 2.0 seems to be working perfectly and it has several new features.

Friday, January 16, 2009

Fighter pilot to lead NASA

The tension between Michael Griffin and the new Obama administration has been sufficiently strong to expect Griffin's resignation. It will take place almost exactly during Obama's inauguration. Space.com announced that Obama plans to name Jonathan Scott Gration as the new boss of NASA. His military experience is impressive - he might be classified as a hero and a natural authority - and I am sure that many conservative readers will be happy about such a choice.
On the other hand, I am not that sure that the choice is good for NASA as a scientific institution.

Jan Kaplický (1937-2009)

Today, the wife of Czech architect Jan Kaplický gave birth to a child. That would be good news for Kaplický. Unfortunately, he also died on the street today. My condolences to his family. His most famous building that has been realized is the one at the Bull Ring in Birmingham, followed by the Lord's Media Centre in London. He became famous as the author of a controversial project for a new building of the Czech National Library near the Prague Castle. It's been nicknamed the octopus, the blob, or the jellyfish. It's been argued that extraterrestrial aliens have landed and melted the Prague Castle, and the building above was what was left. According to some YouTube dudes, the library was attacking people in some Marshmallow commercials. :-)

Smartkit: Cryptogram

Full screen... Try to solve this cryptogram. Play. Click an empty box and choose your letter - or drag the green letters to the blue ones with a mouse. Hint: the cryptogram hides a quotation by a late author who is considered wise even though he would probably find the black holes politically incorrect. The proposition both praises and criticizes mankind's scientific skills.

Obama okays coal industry

The Wall Street Journal argues that the coal industry, responsible for about one half of the U.S. electricity, is going to do just fine under Barack Obama. The anti-fossil-fuel environmentalists who want coal and oil to be replaced by solar energy don't realize one important thing. Coal is solar energy, too. It is solar energy that has been conveniently packaged by the Earth's geological processes, much like meat is nothing else than nice plants that have been packaged by metabolism (Rajendra Pachauri should listen to both parts of the sentence!). Coal comes from the same beautiful Sun that has been buried, much like in the famous song by Rammstein: the Sonne gets buried, bringing a new ice age. But if you watch the video clip to the very end, you will see that the Sun may be revived again. Drill, baby, drill.

Killing the information softly

The information has been complaining, until 1996 or so, in this way: yes, Strominger plays with his fingers, stringing her life with his words. I wanted to make sure that almost everyone finds something enjoyable in this posting. ;-) Moshe Rozali has written something about the black hole information paradox. He praised Juan Maldacena's 2001 paper about the information inside eternal AdS black holes, the paper that was essential for Stephen Hawking to convince himself and admit that the information was preserved. And he discussed the preservation of the information.

Information is not lost, in principle

In the AdS/CFT context, a black hole may also be described as a generic thermal "gas" of gluons and quarks. A new particle that enters this bath will eventually distribute its energy among all the other particles.

Thursday, January 15, 2009

The Killer Robot Instability

Episode 2x12 of The Big Bang Theory: watch at ZSHARE.NET (right-click the video for full screen). With 11.8 million viewers - a new record - this episode was #1 in the ratings for the second time.

Record-breaking years in autocorrelated series

As Rafa has pointed out, E. Zorita, T. Stocker, and H. von Storch have a paper in Geophysical Research Letters, How unusual is the recent series of warm years?
(full text, PDF; see also the abstract), in which they claim that even if we consider temperature to be an autocorrelated function of time with sensible parameters, there is only a 0.1% probability that the 13 hottest years in a list of 127 years (since 1880) appear in the most recent 17 years, much like they do in reality according to the HadCRUT3/GISS stations. If we add non-autocorrelated noise, typical for local temperature data, the temperature readings become more random and a similar clustering of records becomes even less likely, because the autocorrelation - the very thing that keeps the probability of clustered records from becoming insanely low - is suppressed. This matches reality, too, because local weather records usually don't have that many record-breakers in the recent two decades.

What percentage of civilized planets shoot An Inconvenient Truth?

But after detailed simulations, I am confident that the main statement of their paper about the probability in the global context - the 0.1% figure (which would strongly indicate that the recent warm years are unlikely to be due to chance) - is completely wrong. The correct figure for the global case is between 5-10% (depending on the damping of the long-term memory; we will argue at the end that the 10% figure is realistic), if you allow record cold years as well as record hot years, which you should, because both possibilities could feed alarmism. If you ask about strict record hot years only, pretending that the alarmists wouldn't exist if we were breaking record cold years :-), you should divide my probability by two.

The last alarmist planet I generated: temperature anomaly in °C in the last 127 years. About 10% of randomly generated realistic temperature data look like this and satisfy the 127/13/17 record-breaking condition - by chance. Click to zoom in.

At any rate, the probability is rather high and it is completely sensible to think that the clustering of the hottest years into the recent decades occurred by chance. In roughly one decade per century, we get the opportunity to see this "miracle" (the 13 hottest years occurring in the last 17 years).

Wednesday, January 14, 2009

Final digit and the possibility of a cheating GISS

David Stockwell has analyzed the frequency of the final digits in the temperature data by NASA's GISS, led by James Hansen, and he claims that the unequal distribution of the individual digits strongly suggests that the data have been modified by a human hand. With Mathematica 7, such hypotheses take a few minutes to be tested. And remarkably enough, I must confirm Stockwell's bold assertion, although - obviously - this kind of statistical evidence is never quite perfect and the surprising results may always be due to "bad luck" or other explanations mentioned at the end of this article.

Update: Steve McIntyre disagrees with David and myself and thinks that there's nothing remarkable in the statistics. I confirm that if the absolute values are included, if their central value is carefully normalized, and if the anomalies are distributed over just a couple of multiples of 0.1 °C, there's roughly a 3% variation in the frequency of different digits, which is enough to explain the non-uniformities below. However, one then simply obtains a monotonically decreasing concentration of different digits, and I feel that they have a different fingerprint than the NASA data below. But this might be too fine an analysis for such a relatively small statistical ensemble.
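If you want to reproduce the digit counting without Mathematica, here is a minimal Python sketch. The file name and the assumption that the anomalies are stored as integers in units of 0.01 °C are mine, so adapt them to whatever GISS format you download; the last column also answers the question about the typical fluctuation asked below:

```python
# Count the final digits of N monthly anomalies (assumed to be integers in
# units of 0.01 °C) and compare the scatter with the binomial expectation:
# each digit should appear N/10 times, plus or minus sqrt(N * 0.1 * 0.9).
import numpy as np

anomalies = np.loadtxt("giss_monthly.txt", dtype=int)   # hypothetical file name
digits = np.abs(anomalies) % 10

N = len(digits)
expected = N / 10.0
sigma = np.sqrt(N * 0.1 * 0.9)        # about 11.8 for N = 1548
for d in range(10):
    count = int(np.sum(digits == d))
    print(f"digit {d}: {count:4d}   deviation {(count - expected) / sigma:+.1f} sigma")
```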
This page shows the global temperature anomalies as collected by GISS. It indicates that the year 2008 (J-D) was the coldest year in the 21st century so far, even according to James Hansen et al., a fact you won't hear from them. But we will look at some numerology instead.

Looking at those 1,548 figures

Among the 129*12 = 1,548 monthly readings, you would expect each final digit (0..9) to appear 154.8 times or so. That's just the average; obviously, no digit will appear exactly 154.8 times, and the actual frequencies will be slightly different. How big is the usual fluctuation from the central value?

Cosmology of F-theory GUTs

Dmitry Podolsky has brought to my attention a semi-popular explanation of cosmology in the F-theoretical grand unified models, Cosmology of F-theory GUTs, by Jonathan Heckman, who is one of the young big shots working on this bottom-up phenomenology with Cumrun Vafa. The model is very predictive and quite a lot of these predictions seem to make sense.

Tuesday, January 13, 2009

Best European Blog: a victory

Well, a winner (click the logo)... Thank you very much for your votes in the poll that was choosing the best European blog. Out of 6,200 votes, we received almost 34%, defeating nine competitors. Additional thanks to Eduardo, who nominated TRF in this category. Our continental neighbors in the Asian category saw some adjustments of the votes which actually changed the winner, but Europe is more honest and our lead was more substantial. So there were no changes made to our score, and No Pasaran picked the silver medal for its 21% of the votes. This is primarily a success of the TRF community and readers like you. Thank you again. I don't like awards but it is somewhat pleasant not to feel hunted all the time. :-)

Stalagmites support cosmoclimatology

In this weekly dose of the peer-reviewed skeptical literature about the climate, we look at some new evidence for cosmoclimatology. In a news story called "The Earth's magnetic field impacts climate: Danish study", AFP informs about a new article in the U.S. journal Geology by Mads Faurschou Knudsen and Peter Riisager (Denmark): Is there a link between Earth's magnetic field and low-latitude precipitation? (full text paper, PDF; the page with the abstract...) In the last 5,000 years of data, they found a strong correlation between

• the Earth's magnetic dipole moment, as extracted from lava flows and burned archeological materials, on one side,
• and the amount of precipitation in the tropics, as extracted from Oxygen-18 inside the stalactites and stalagmites in Oman and China, on the other side.

The only plausible explanation of this correlation is Svensmark's mechanism of cosmoclimatology: the oscillating geomagnetic field regulates the amount of galactic cosmic rays that reach the lower layers of the atmosphere, which subsequently influences the amount of cloudiness and precipitation (and temperature).

Stereograms and dinograms
Reconstructing the secret know-how
Entropa: celebrating the European entropy

Sunday, January 11, 2009
Pravda: Earth on the brink of an ice age

Well, there is surely a potential for a 5-8 °C cooling in a few thousand years (or tens of thousands of years) - because the human civilizations that we know from the history textbooks already started in an unusually warm period (see the scary left side of the last, black graph on the image below) - but should we expect the cooling soon?

Update: David Archibald has reminded me of this picture (with an imminent icy prediction) from the 2000 article in Science by Berger and Loutre. Archibald included it in his cute book, Solar Cycle 24, as Figure 3.

Well, an imminent cooling is surely possible, but the article doesn't look like the most convincing piece of science to me - both because of technical reasons and missing references, as well as entertaining, otherwise unimportant mistakes (for example, they claim that the Serbian astronomer Milutin Milanković was Czech). But I would like to ask you what you know and think about the reconstruction of the climate record from the Milankovitch cycles. How good a fit can we actually obtain by combining the known astronomical cycles with well-chosen coefficients? There are indications that the purely astronomical theory fails to describe very low-frequency signals - with approximately 100,000-year or 400,000-year periodicity - whose observed amplitude seems to be much larger than the theoretically predicted one: natural climate change at very long timescales seems to be much more intense than our theories say.

Reincarnation of an infalling observer

In this essay, I would like to talk about physics and perceptions inside black holes. The picture of reincarnation above was sketched by Prof Krishna :-). Note that death and birth are two faces of the same object, namely the infinity, through which they are connected. While our treatment will try to be more serious than what you have seen so far, similar spiritual considerations will actually be unexpectedly important, especially when we get closer to the singularity. Why? Well, we should start with the question:

Saturday, January 10, 2009

Russia-Ukraine gas disputes

The delivery of the Russian gas to Europe has been stopped because Russia has accused Ukraine of stealing the gas on the Ukrainian territory, a claim that Yushchenko vehemently denies. The first thing to say is that none of us can be certain whether the accusation is true or not. The dispute is probably going to end soon and the gas delivery will be restarted - because Ukraine has agreed with the Russian proposal to put independent monitors on its territory, largely thanks to the EU Council boss, Mr Mirek Topolánek: see the official info from the Czech EU presidency. But let us look at the dispute, anyway. Many people tend to decide according to their prejudices. And the prejudices in the West usually say that the Ukrainians are the good guys while the Russians are the bad guys.

One of the proposals for a Gazprom building in St Petersburg. Click for more.

Well, give me a break with this stupidity. Ukraine and Russia are two parts of the same cultural territory. In fact, Ukraine is the real historical "cradle" of Russia; it is more Russian than Russia itself.
People in Central Europe who actually have some experience with both nations know that both of them are poor, essentially Russian-speaking nations from the East. Members of both nations tend to be employed in low-paid occupations and they are occasionally connected with the Russian-speaking mafias.

Friday, January 09, 2009

Vanishing entropy of extremal black holes?

Sean Carroll, Matthew Johnson, and Lisa Randall have submitted a provocative paper that tries to defend an old unpopular idea that the entropy of extremal black holes is exactly zero. Their new argument is that the nonzero entropy calculated in hundreds of careful stringy papers, agreeing with the nonzero Bekenstein-Hawking entropy, actually refers to a different spacetime than the "pure" extremal black hole, namely a spacetime that has an extra "AdS2 x S2" patch in it. Here is the main picture: the object in the middle is the Penrose diagram of a non-extremal charged black hole. They slowly adjust the mass/charge relationship to approach the extremal limit. The extremal black hole is the object on the right. Instead of saying that the pink regions are the only ones that survive in the limit, they say that in the limit, the non-extremal black hole becomes a union of the extremal one (pink) and some additional "AdS2 x S2" space (a brown wiggly strip in the middle of the diagram), and it is the latter space (also redrawn as a straight strip on the left) that is supposed to carry the nonzero entropy.

Thursday, January 08, 2009

Eurosocialists insulted by common sense

By the way, some good news in the journalistic world: The Wall Street Journal becomes the first important newspaper that praises Czechia, for A Prague Spring for Political Honesty. The European socialists have read the refreshing if not brilliant essay by the Czech president in the Financial Times, Do not tie the markets: free them. Václav Klaus explains that all moments could be called "exceptional" but this adjective is usually used in order to manipulate people. And he argues that Europe should weaken if not repeal various environmental, social, health, and other regulations and "standards". How did the socialists react? Well, you can guess! ;-) They went ballistic: Eurosocialists angrily rejected Klaus' calls. The article above is pretty entertaining, so let me respond to individual paragraphs:

They urged the Czech Premier Mirek Topolánek to issue an immediate statement declaring that the president speaks for himself and not the government and that his views do not reflect the priorities of the EU Czech presidency.

Very nice, but before 1989, I saw a lot of virtually identical stuff. For example, in 1977, everyone was supposed to sign Anti-Charter 77. The government should also denounce the witches, right? Why exactly do they think that Klaus' opinions do not reflect the priorities of the Czech EU presidency?

UAH MSU: month-on-month cooling

The UAH MSU satellite record has been updated with the December 2008 data. The anomaly shows 0.074 °C of cooling since November, which is pretty substantial (roughly 90 °C per century, haha). Unfortunately, the newest numbers are not yet on the website I linked, so we have to rely on the private channels of Anthony Watts, which are surely reliable because they are coming from the very center of UAH MSU. ;-) Of course, the data show 2008 as the coldest year of the century that began 8 years ago.
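For those who want to reproduce the decadal trends quoted in the next paragraphs from the monthly UAH files, the calculation is just an ordinary least-squares fit. A minimal Python sketch - the file name and the one-anomaly-per-line format are my assumptions, not UAH's actual layout:

```python
# Least-squares linear trend of a monthly anomaly series, in °C per decade.
import numpy as np

anoms = np.loadtxt("uah_monthly.txt")      # hypothetical: one anomaly per month
months = np.arange(len(anoms))
slope_per_month = np.polyfit(months, anoms, 1)[0]
print(f"trend: {slope_per_month * 120:+.3f} °C per decade")
```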
So far, that is - we may see colder years in the future. However, I just wanted to draw a few UAH graphs anyway, so they don't contain December 2008 yet. First, here are the graphs that you rarely see: the temperature of the world's oceans and land. Click any graph to zoom in. You can see that the land was warming at about a 50% faster rate than the ocean - 0.16 vs 0.11 °C per decade - and you may hypothesize that some effects of the civilization have influenced this observation. It would probably be incorrect to talk about the "urban heat island" effects, because the satellites are unlikely to be affected by the popular barbecue parties at the weather stations.

Here are the two hemispheres. Shift/click the picture to open a bigger one in a new window. (That's an even more important keyboard shortcut than tab/enter, which changes the color of a ball.) You see that the Southern Hemisphere is warming at less than one third of the rate of the Northern Hemisphere, which is another reason to think that the observed changes are not really global in character. If you care, the Southern polar regions are cooling by 0.08 °C per decade. I am not going to post these graphs because that would be a more serious blasphemy than the pictures of Mohammed! :-)

Genes and memes, ideas and empty words

Because a large part of the Spanish online community seems to be infected with a meme of a ball that changes its color upon clicking (almost 1 visit to TRF per second is arriving from the site that propagates this meme, or "nonsense of the day", if you wish to follow their terminology), let me write something about the memes. A few weeks ago, I had an e-mail exchange about memes with a reader of TRF whom I have also met in real life - unlike many of you. Greetings, Tom. He argued that the concept of a "meme" is an amazing discovery because it allows us to understand the fascinating phenomenon of a "Mexican wave" that moves around the Earth every 24 hours and that affects a field that is defined as the density of the vibrations of five-inch sticks referred to as toothbrushes. How is it possible that these toothbrushes move in unison? It is surely a divine phenomenon proving that memes are jumping between the brains of different people. And the extraterrestrial aliens would surely be talking about "memes" all the time when their attention would focus on the Mexican wave of toothbrushes on the Earth, Tom argued. ;-)

As you may expect, I was skeptical about these big assertions about the importance of "memes", because the aliens would probably be thrilled by very different things than "memes" or "toothbrushing waves", and they could even use the toothbrushes themselves in ways that we couldn't have predicted. So let me defend my viewpoint.

Memes: a few positive words

I am personally using the word "meme", at least sometimes. What does it mean? It is a small idea, an elementary building block of an ideology, a partial method to look at a particular or general problem, or a myth, a joke, or a viral video or another computer file that people send to each other to have some fun; and what is important for every meme is that it can spread just like an infection. It is very stupid to click the ball in the previous posting. But people are doing it nevertheless. And they lead other people to do the same thing. There exists a clear analogy between this behavior and the concept of the genes. Much like genes, memes are "selfish", if you allow me to use Dawkins' colorful adjective.
They have their own identity - or at least it's the point of "memetics" to imagine that they do - and they want to become more powerful and to control a larger portion of the world. So they are using and abusing the environment in order to spread. Each of them may choose a different strategy.

Wednesday, January 07, 2009

Nonsense of the day: click the ball to change its color

Tontería del día: Pincha en la bola para cambiarla de color. Full screen here... A special bienvenido for the Spanish visitors!

Tuesday, January 06, 2009

Record cold temperatures in 2009

• go to an NCDC page

Update: For the summary of the average temperatures in 2009 and its ranking, as written at the end of December 2009, click the link in this sentence.

Current U.S. temperatures in °F. See Anthony Watts' blog for more comments.

Record cold temperatures have arrived in the United Kingdom and Canada (24 consecutive days below -24 °C in a city). Cold Siberian air has also hit Central Europe, France, and Italy. London is colder than Antarctica. Literally. The cold snap is costly. The temperature in Pilsen, and in Czechia in general, keeps on oscillating around -10 °C, too. The snow around is clean and pretty. The coldest official Czech weather station, Stráž pod Ralskem, has seen -25.1 °C. Journalists are also freezing in Colorado and Wyoming, among other places, while North Dakota continues to see record snow. Poor people in chilly India solve the situation by burning books; at least 55 people have died. I hope that they have enough copies of An Inconvenient Truth, like in Belgium (I invented the joke before them!). Sorry, the picture above shows commies in warm weather, not poor people in chilly weather.
What if Time Really Exists?

By Sean Carroll | November 24, 2008 12:01 pm

The Foundational Questions Institute is sponsoring an essay competition on "The Nature of Time." Needless to say, I'm in. It's as if they said: "Here, you keep talking about this stuff you are always talking about anyway, except that we will hold out the possibility of substantial cash prizes for doing so." Hard to resist. The deadline for submitting an entry is December 1, so there's still plenty of time (if you will) for anyone out there who is interested and looking for something to do over Thanksgiving. They are asking for essays under 5000 words, on any of various aspects of the nature of time, pitched "between the level of Scientific American and a review article in Science or Nature." That last part turns out to be the difficult one - you're allowed to invoke some technical concepts, and in fact the essay might seem a little thin if you kept it strictly popular, but hopefully it should be accessible to a large range of non-experts. Most entries seem to include a few judicious equations while doing their best to tell a story in words.

All of the entries are put online here, and each comes with its own discussion forum where readers can leave comments. A departure from the usual protocols of scientific communication, but that's a good thing. (Inevitably there is a great deal of chaff along with the wheat among the submitted essays, but that's the price you pay.) What is more, in addition to judging by a jury of experts, there is also a community vote, which comes with its own prizes. So feel free to drop by and vote for mine if you like - or vote for someone else's if you think it's better. There's some good stuff there.

My essay is called "What if Time Really Exists?" A lot of people who think about time tend to emerge from their contemplations and declare that time is just an illusion, or (in modern guise) some sort of semi-classical approximation. And that might very well be true. But it also might not be true; from our experiences with duality in string theory, we have explicit examples of models of quantum gravity which are equivalent to conventional quantum-mechanical systems obeying the time-dependent Schrödinger equation, with the time parameter right there where Schrödinger put it. And from that humble beginning - maybe ordinary quantum mechanics is right, and there exists a formulation of the theory of everything that takes the form of a time-independent Hamiltonian acting on a time-dependent quantum state defined in some Hilbert space - you can actually reach some sweeping conclusions.

The fulcrum, of course, is the observed arrow of time in our local universe. When thinking about the low-entropy conditions near the Big Bang, we tend to get caught up in the fact that the Bang is a singularity, forming a boundary to spacetime in classical general relativity. But classical general relativity is not right, and it's perfectly plausible (although far from inevitable) that there was something before the Bang. If the universe really did come into existence out of nothing 14 billion years ago, we can at least imagine that there was something special about that event, and there is some deep reason for the entropy to have been so low. But if the ordinary rules of quantum mechanics are obeyed, there is no such thing as the "beginning of time"; the Big Bang would just be a transitional stage, for which our current theories don't provide an adequate spacetime interpretation.
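In symbols - this is just standard quantum mechanics, nothing specific to the essay - the assumption is that some fixed, time-independent Hamiltonian H generates the evolution of the state:

```latex
i\hbar \,\frac{d}{dt}\,|\psi(t)\rangle \;=\; H\,|\psi(t)\rangle ,
\qquad
|\psi(t)\rangle \;=\; \sum_n c_n\, e^{-iE_n t/\hbar}\,|n\rangle .
```

If the sum over energy eigenstates runs over finitely many terms, the phases e^{-iE_n t/hbar} just wind around a finite-dimensional torus, and the state keeps returning arbitrarily close to its initial value - which is exactly the recurrence trouble invoked in the next paragraph.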
In that case, the observed arrow of time in our local universe has to arise dynamically according to the laws of physics governing the evolution of a wave function for all eternity. Interestingly, that has important implications. If the quantum state evolves in a finite-dimensional Hilbert space, it evolves ergodically through a torus of phases, and will exhibit all of the usual problems of Boltzmann brains and the like (as Dyson, Kleban, and Susskind have emphasized). So, at the very least, the Hilbert space (under these assumptions) must be infinite-dimensional. In fact you can go a bit farther than that, and argue that the spectrum of energy eigenvalues must be arbitrarily closely spaced - there must be at least one accumulation point. Sexy, I know.

The remarkable thing is that you can say anything at all about the Hilbert space of the universe just by making a few simple assumptions and observing that eggs always turn into omelets, never the other way around. Turning it into a respectable cosmological model with an explicit spacetime interpretation is, admittedly, more work, and all we have at the moment are some very speculative ideas. But in the course of the essay I got to name-check Parmenides, Heraclitus, Lucretius, Augustine, and Nietzsche, so overall it was well worth the effort.

• Jessica: Good luck!

• Big Vlad: Come again?

• Elliot Tarabour: I found your essay this morning on the fqxi site and commented (and voted ;) ). As I said... to paraphrase... "thank god somebody is standing up for time".

• Sean (http://blogs.discovermagazine.com/cosmicvariance/sean/): Thanks for the vote, Elliot! I feel that time has been maligned for too long, and it's about time (just can't stop doing that) that someone stood up for it.

• Peter Morgan (http://pantheon.yale.edu/~pwm22/): Hi, Sean, nice to see another big name in the FQXi contest. Good luck. I see that you're already a member of FQXi, but for little fish the big call of this contest is that people who place in the first three juried prizes (which means up to 8 essays, if they think there are that many that are good enough) will be invited to FQXi membership. I misunderstood the Community Prizes at first, as it also appears you may have, when you say: "So feel free to drop by and vote for mine if you like - or vote for someone else's if you think it's better". Community prizes are the result only of "restricted votes", of which we, the authors, can cast three each. People who are already members of FQXi can also cast three restricted votes each. To get the coveted restricted votes, most people have to write. This is what I missed: "Community Prizes: The top recipients of Restricted Votes will be awarded 'Community Prizes.' Prizes will not be awarded directly on the basis of Public Votes, but it is anticipated that Public Voting may influence either Restricted Voting or Expert Judging." That is, Public Votes seem close to irrelevant. Also, there are up to 18 juried prizes available in total, judged, I suppose, by physicists, philosophers, and presumably serious academics in other fields, all appointed by FQXi, but only up to 3 community prizes, judged half by members of FQXi and half by the writers of the papers you see there, the result of which can hardly be guessed. I'm curious whether this kind of competition, run with more editorial control and tighter rules, could form a new publishing model. Being able to say "2nd place in 2008 FQXi contest" would seem comparable to "published in minor journal X" for one's CV.
The pre-publication discussion is potentially useful, though some form of access control might improve the condition of the FQXi comment threads. If FQXi provide an archive of winning essays that is as robust over the long term as is provided by journals, it would not be necessary for the essays to be conventionally published in a journal. The Gravity Research Foundation has been running this kind of competition for years, of course, in partnership with GRG, apparently with good success.

• Sean (http://blogs.discovermagazine.com/cosmicvariance/sean/): Peter, thanks. You're right, I had misunderstood the nature of the community prizes; probably the procedure they are actually using is much more sensible than the one I thought they were.

• Anthony A. (http://www.fqxi.org): Peter & Sean, with my FQXi hat on: the biggest reason for basing prizes on restricted votes is that it would be too much of a headache to prevent stuffing of the ballot box in the unrestricted voting. In addition, we were worried that people with 'celebrity' status (or big blog followings) would be extra-unfairly advantaged (i.e., nice try, Sean! You'll have to fall back on the merits of your essay.)

• Sean: What's the point in selling out to become a celebrity blogger if you can't stuff the occasional ballot box?

• Bruce Kellett: Sean, a most interesting essay. Would I be correct in assuming that one consequence of an infinite-dimensional Hilbert space and the existence of an accumulation point for energy eigenvalues would be that time (and hence also space) must be continuous? I can think of only messy arguments for this implication at the moment, but I feel that an elegant demonstration of such a connection should be possible.

• Neil B (http://tyrannogenius.blogspot.com): If you believe that the wave function is real, then nature is not really time-reversible after all. Consider that at an "emission" point the WF expands out and gets bigger and bigger, but then (maybe) "collapses" at the absorption point. Run that backwards in time and it will not go the same way. Well, some people say something like many-worlds takes care of that. I suppose if the wave keeps evolving then there's no time-asymmetry problem? However, with unobservable worlds/branches, it looks "not even wrong" from here. Also, modified from a point I made at http://scienceblogs.com/principles/2008/11/manyworlds_and_decoherence.php#commentsArea, here's an issue relating to expansion of the WF and the issue of time: regardless of whether you call it a literal "other world", I in this one observe a specific outcome. If you think the other outcome/s must be actualized, then it has to be "somewhere" in some sense of the term. (It's gross because we are making more of the total integral of the WF over all worlds combined, to have the whole particle "here" as well as the whole particle "there" - but let that go for a minute.) We still have to justify "my" chance of getting various chances of the outcomes, even granting the bastardization of conventional statistics (one person confronting multiple cases in sequence) into the idea of how likely a random "version of you" will run into a given outcome in multiplications of a given trial. Well, suppose there are two possible outcomes, but the chance is not 50/50. I ask: OK, so how many worlds are created in the split? No, it seems that the WF must actually behave inherently differently during absorption/detection than during emission/creation, which is asymmetry in time.
I think our problem with "collapse" goes beyond just the "metaphysical" issue of what happens during collapse, but even what sort of "interaction" should cause it to happen. If we leave humans out of it, some might say "detectors" are inherently special, maybe from decoherence. But consider a photon entering a Mach-Zehnder interferometer. Why doesn't the first beam splitter "collapse" the photon so we don't even get interference from recombining? That silvered surface has atoms which are excited by the photon for "re-emission". (Or indeed, why not consider the "split" photon to be one in each world for each possible direction, already?) But if a phototube gets a click, many assume that really happens (unless "awareness" makes it so). Hence I don't think we can avoid the odd challenge to "time" in physics presented by wave function issues. Wave functions tend to get bigger as time progresses - is that perhaps even more fundamental than thermodynamic issues?

• Sean: Bruce, the assumption that time is continuous is put in by hand - I'm assuming conventional evolution according to the time-dependent Schrodinger equation, in which the time parameter is certainly continuous. So we can't really count that as a conclusion.

• Interested: I read Sean's 9-page article, referred to in his post above, and New Scientist's time article "What makes the universe tick?" by Michael Brooks, http://www.newscientist.com/article/mg20026831.500-what-makes-the-universe-tick.html. I read the latter first and was thrilled by a large consensus among scientists that it might be better to do away with the common-sense notion of time. And Roger Penrose's middle ground that 'time pops in and out of existence as the universe matures'. Reading Sean's, I found it difficult to grasp everything (most of it went over my head) but carried away at least 3 small nuggets for myself to store for winter.

(1) One is that "by taking time seriously we can conclude a great deal about the deep architecture of reality" (page 2, 4th para, last two lines). By that I infer that Sean disagrees with the stance of those who would rather do away with the notion of time (which, to Michael Brooks, makes no sense except in terms of human experience): "...Last month, Smolin and other theorists, along with mathematicians and philosophers, got together at the Perimeter Institute to thrash out time's problems. So complex is the issue that everyone involved seems to have a different idea. It turns out that if you want to understand time, you might need to grab some measurements from the future, watch a big bang explode at the edge of the universe, or delve into the anomalies presented by the most unruly of the subatomic particles. For some, the only solution is to scrap the notion of time altogether..." While I do not grasp Sean's "deep architecture of reality", I would think and assume that there is a lot there for the picking, and it would be a worthwhile journey to see such an architecture, standing the ground for not doing away with the notion of time, even when the notion of time is problematic and may not exist.

(2) The Big Bang is outdated. Sean: "The modern idea that time does have a beginning arises from the existence of a Big Bang singularity in cosmological models based on general relativity.
But from our current perspective, that is an outmoded relic of our stubborn insistence to think in terms of spacetime, rather than directly in terms of the quantum state. Classical general relativity, after all, is not correct; at some point it must be subsumed into a quantum description of gravity. We therefore imagine that the classical Big Bang corresponds to some particular kind of quantum state, which may be obscure from the perspective of our current knowledge, but will ultimately be resolved. It follows, under our ssumptions (sic), that there was something before the Big Bang, and time stretches back into the infinite past." (page 4, 2nd para, last 8 lines)

Over the past few years I gathered from my scanty readings the dawning realization that the scientists' frontier is not the Big Bang but something beyond it, and I guess this passage says it so clearly that the Big Bang beginning is a relic. If one constructs one's philosophy based partly on the Big Bang, when that shifts, one needs to adjust. It makes me think that those who construct their philosophy on the Crucifixion and salvation theme do not usually have to make adjustments in their lifetime. Man has about four score or five score years (80-100 years), and dealing with life and its exigencies often does not permit many adjustments to philosophy; thus, there is a certain efficacy to a model that changes less (and to Kevembuangga, this is not about whether that is the better or best model, in case it crosses his mind).

(3) Sean says: "As our universe expands, it is increasingly dominated by vacuum energy. Currently, structures are still forming and complex life forms are riding the wave of entropy generated by hot suns shining in cold skies. But ultimately those stars will grow dim, galaxies will collapse into black holes, black holes will evaporate, and all we will be left with is an increasingly thin gruel of elementary particles in a background of vacuum energy. That, then, is a high-entropy state: a nearly-empty universe suffused with a tiny amount of vacuum energy." (page 8, 3rd para, last 6 lines) This seems like we should make hay while the sun shines! (smile) And one day, in the very far distant future, our known universe could experience the big crunch and be no more, at least as it once was. I wonder what scientists think and feel about this sort of death of the universe - if they do not speculate about personal mortality and the life after, then at least about the universe's mortality and after.

These are not exactly scientific contributions to the scientific post and article, but to a lay person like me, when I read them, these are some of my thoughts. Thank you for your 'time' blog post.

• John Faughnan (http://notes.kateva.org): I'd love to see a list of the essays you liked. Otherwise I have to dig through them; that would create too much entropy.

• Eunoia (http://www.savory.de/blog.htm): Surely Time is Cantor dust, with all that that implies?
Such as dark matter being on a neighbouring point, forever unreachable ;-)

• Don Bronkema: panta rhei, ouden menei: a century hence some/most/all of you will be risible

• Don Bronkema: Panta rhei, ouden menei... a century hence most of your views will be risible

• Don Bronkema: think Boltzmann, think Bostrom, think mind of Brahma

• Lawrence Crowell: I have only read the start of this paper, but I think I get the "gist" of it. I do have a sort of question concerning this. One possible local example of an accumulation point might be positronium atoms in the distant future, ~ 10^{40} years from now. At that time the universe might likely consist of black holes, and proton decay will have dissolved away baryonic matter. So there might be a thin soup of electrons and positrons in space. These could then form "atoms" with radii that measure in light years. The Rydberg levels of such an atom, which the atom shifts between by absorbing weak photons or by being pulled by the cosmic expansion, are tiny. So the frequencies associated with these "atomic clocks" are very small and the time intervals they "compute" very large. As time progresses, some of these atoms approach E = 0 through transitions involving Rydberg states that pile up near that limit.

I am not sure about there being no upper bound to computational complexity. In my idea of there being a branching of geodesics in the AdS spacetime with a black hole, there is a renormalization group associated with this. It sets the values of gauge parameters, and I think does set some extremization of possible local complexity in the corresponding dS spacetime for the physical universe. However, the universe appears poised to expand into a pure de Sitter spacetime as the density of mass-energy approaches zero. Eventually the cosmological event horizon at r = sqrt{3/Lambda} will begin to decay as it emits Gibbons-Hawking radiation of enormous wavelengths (billions of light years!), and as Lambda --> 0 the horizon radius approaches infinity, leaving the universe as a complete void or Minkowski spacetime M^{3,1} as the classical attractor point, or the endpoint of the cosmological Feynman path integral. I am not sure there is much complexity of computation going on in M^{3,1}! So if we think of the Minkowski spacetime, the conformal boundary of AdS, as some sort of conformal time mapped to a I^{oo}, then this boundary might represent an upper bound on time computations.

It is worth pointing out that the time a clock computes is something which is measured, or the time computed by the system is decoherently reduced to some classical-like value. So in this incredibly distant future the universe might compute its time with various systems with very long time intervals. In the case of the cosmological event horizon receding away, the emitted quanta are decoherent, and so the universe is computing ever larger time intervals (decoherently, as a sort of measurement) as the cosmological event horizon retreats away. So the universe computes longer time intervals as time --> infinity. There is some sort of ratio here at work, say Delta t/t, where the time frame the universe computes for itself is given by the number of these Delta t's it computes in the future.
If the intervals Delta t increase in magnitude slowly enough, there would then be an infinite number of these "time computations" in the future; yet if they increase faster than some criterion, say the value of Delta(Delta t/t) --> 0 as t --> infinity, then there might be some upper bound to the computational complexity (time computations etc.) which can exist in the universe.

Lawrence B. Crowell

• matthew kolasinski: Hello Sean, "...and observing that eggs always turn into omelets, never the other way around." noting that an egg left to itself has been known to turn into a chicken, and re-emerge an egg. sort of a work-around for entropy - a carrying on of the same cyclical energy system which is the point of the form. as long as the cycle isn't interrupted with an omelet for the egg cycle or a pot for the chicken cycle, the energy system which was the essence of the first egg still exists. but these are events in time and not time itself, subject to processes which occur in time and are not time themselves. that is, unless you're religious about some physics model for which time is exclusively the metric of a duration compared to some cyclical event, eh.. in time. noting that this concept of metric time as Time has its origin primarily in western cultural convention, which physics has largely inherited. it has an appeal; physics likes to measure things; casting around for something to measure what people said was time and everybody believed existed and was measurable, they just naturally grabbed a clock - it was handy. if you wish to rescue an existence for time, it will require stepping outside of metrics. but what happens in so doing? physics tends to lose interest quickly in things which can't be quantified in some way - whether they exist or no. in stepping outside of metrics, you also step outside of all flavors of what is presently known as physics. it's not too bad out here. we won't argue with you if you want to call time real. if physicists want to call that an illusion, that's fine too. judging from the entries in the contest, it's reasonably clear that physics doesn't have any better handle on it, highly speculative ("So there might be a thin soup of electrons and positrons in space. These could then form 'atoms' with radii that measure in light years...") and trying to promote their case appears to occasionally have strong potential for adverse impacts on their blood pressure - that is a quantifiable. nobody argues about it out here. which leads us to a choice here. invoking the issue of free will - another topic within the parameters of the Time essay... myself, i'd prefer to call it 'event' rather than 'contest' (not terribly interested in the competition so much as the opportunity to participate in an exchange of ideas - a sort of mega-brainstorming session. and having quite a bit of fun in the process. i doubt there's any participant who's taken the time to read some of the other posts who hasn't learned something themselves in the process). so, there's a choice between time as a reality, or the metrics of physics and no "real" existence for time. or to not choose. that's a choice too. what will you do here? some wiggle room. you can't 'choose' very well to walk to the moon - some limitations to free will. there appears to be an option when it comes to what we think. a momentous fork in the road for Sean, folks. let's watch and see what he does. :-) welcome to Time, Sean; i'm looking forward to reading your post there.
i'm afraid i'm falling behind in my reading a little; it's coming in more quickly than i can keep up with. but i'll get to it, in due time... maybe you'll get a chance to stop by and read 'some thoughts on time'. you wouldn't mind my borrowing your blog to promote my entry, would you? thanks, Sean, for promoting interest in the contest. and thanks to the folks at FQXi for hosting it. i don't know what, if anything, you're going to do with all this stuff you're collecting there, but it's a delightful venue.

matt kolasinski

• Loki: Sean and other knowledgeable people, I've got a question about the Boltzmann Brain problem (and this question doesn't look stupid to me): why should an isolated B.Brain be a more plausible fluctuation than the observable Universe we happen to be in? To have a fluctuation in the form of a B.Brain, lots of otherwise non-existent particles should form into an immensely complex form kind of "out of nothing". The Universe is apparently much bigger than a B.Brain and more complex, but it is not a fluctuation per se - rather the result of 13 billion years of evolution of an initial, probably much simpler fluctuation. You get just one very energetic Mega-Fluctuation popping out of the vacuum and voila - Big Bang, inflation, reheating, reionisation etc. etc. ... mathematical physicists on Cosmic Variance! I guess the good analogy is Darwinian evolution here. Obviously, a sudden appearance of a fully-fledged math physicist on an otherwise abiotic Earth is numerically more probable than the whole big biosphere with gazillions of different animals. But if you take evolution into account, the biosphere looks almost inevitable while a single m.ph. keeps being an impossible example for discussion.

• Lawrence Crowell: The Boltzmann brain is something which comes about from a large number of possible configurations in an equilibrium system. The universe in contrast appears to permit a vast array of complex configurations - lots of stars, different planets, complicated chemistry and so forth - from which life appears to be just one example. The universe has a Goldilocks condition where small inhomogeneities exist which act similarly to open thermodynamic systems. Lawrence B. Crowell

• Steve Esser (http://guidetoreality.blogspot.com): That's a great essay, Sean. We've gotten used to science proving common sense dead wrong, so it has been tempting to say time is an illusion. But time is different. It is not some postulated feature of reality. It is the dimension of experience. There is no world to talk about without experience.

• Henrik Jonsson: Great, just in time when I've started getting the hang of thinking of time as illusory you come and attempt to tell us it might be real all along..! But seriously, fantastic essay, your skill at making esoteric physics understandable is unmatched. I did find some mistakes you might want to fix if it's possible to send in another draft before the deadline: Typo on last line of page 6: "..the Poincaré recurrence theorem (and brining to life Friedrich Nietzsche's image of eternal return)." Broken citation towards the bottom of page 8: "..eternal cosmologies that feature a low-entropy "bounce" that replaces the Big Bang [?]"

• Jonathan Vos Post (http://magicdragon.com): Albert Einstein, in a letter to the widow of his best friend, Michele Besso: "People like us, who believe in physics, know that the distinction between past, present and future is only a stubbornly persistent illusion." My coauthors Professor Philip V. Fellman, Prof. Christine M. Carmichael, Andrew C. Post, and others have a series of refereed papers that compare the theories of time of Dr.
Sean Carroll (Senior Research Associate in Physics at Caltech), Hawking, Penrose, and the un-institutionally supported New Zealand iconoclast Peter Lynds, who first drew sudden attention in 2003 with the publication of a physics paper about time, mechanics and Zeno's paradoxes. Lynds attended university for only 6 months; his career as a physicist began in 2001 with his submission of an article entitled "Time and Classical and Quantum Mechanics: Indeterminacy vs. Discontinuity" to the journal Foundations of Physics Letters. The papers that I coauthored can easily be found via Google Scholar. The next will be at Complexity'09, February 2009, in Shanghai, China. I am a big fan of Sean Carroll. But I have to keep a very open mind to possibilities such as that the cosmos as a whole is trapped in a closed time-like curve, with the "origin" and "end" of time being the same point. It is hard to analytically continue such curves, which is why I suspect ambiguities in quantum cosmology.

• Loki: Lawrence, thanks for the response. Still, what is wrong with my biological analogy? Rephrasing the B.Brain problem: "It is vastly more probable to be a single human on Earth than to see other people around".

• Lawrence Crowell: This all tends to point to the matter of the cosmological constant and the value of gauge coupling terms. The cosmological constant is probably not constant. It depends upon a Higgs-like field which inflated the spacetime rapidly into a flat (or nearly flat) geometry. The cosmological constant settled into the value we infer today, which is involved with the latent accelerated expansion of the universe. We might imagine a universe with a large cosmological constant which inflates early on much more rapidly and enters into a latent inflationary phase (the accelerated expansion we observe) that is far more rapid. In such a spacetime, mass-energy would rapidly expand away and little local structure (stars, galaxies, planets etc.) would occur. Conversely, suppose that the cosmological constant is very small. Such a universe might expand more slowly and have matter far more clumped together, with far more black holes. In that case too much mass-energy would be tied up in black holes, and local complex structure would not be as possible.

Sean's thesis here, which BTW assumes time exists, has quantum systems entering states which accumulate or have some asymptotic bound. These ever more closely packed states have a lot of detailed selection rules for transitions between them. In order for all of these states to be occupied, it probably requires that the universe becomes colder and that the cosmological arrow of time marches forwards as entropy increases. Robert M. Wald, in The Arrow of Time and the Initial Conditions of the Universe (Enrico Fermi Institute and Department of Physics, University of Chicago, arXiv:gr-qc/0507094 v1, 21 Jul 2005), writes: "There is no question that our present universe displays a thermodynamic arrow of time: We all have observed phenomena in our everyday lives where entropy is seen to increase significantly, but no one has ever reliably reported an observation of entropy decrease in a macroscopic system." So a universe which expands too fast, or one which buries quantum bits into black holes by expanding too slowly, might not satisfy Sean's thesis either. So the running parameters in a renormalization group appear to adjust so that the time scales for the universe are appropriate for the "Goldilocks" condition we observe around us.
We might be seeing Leibniz’s thesis about this being the best of all possible worlds at work! Voltaire’s Pangloss aside :-) In fact, the renormalization group equations (Wilson, Polchinski etc.) are remarkably similar to the Navier-Stokes equation! So there is some underlying sense of a flow. The forward march of time, whether that be an emergent macroscopic thing or something absolutely fundamental, appears wrapped up in the structure of the universe, the “fine tuning” of gauge coupling terms and the cosmological constant. So this Leibnizian or Goldilocks universe we observe appears to permit local inhomogeneities that give rise to structure on a wide range of scales: galaxies, stars, planets, complex chemistry, life and so forth. Some of these local clumps behave approximately as open thermodynamic systems which can give rise to very complex systems. It appears this is how the complex structure arises. The Boltzmann brain thesis is still outstanding in some sense. In a sufficiently large sample space of states Einstein could emerge spontaneously. Yet the universe we observe appears not to make this possible. An ever-expanding universe is one which prevents incredibly long Poincaré recurrences and keeps any equilibrium system from entering into all possible states. Lawrence B. Crowell • Loki I think i understand now, thanks. So, St. Augustine was right when he called the idea that history goes in circles totally preposterous? He used different arguments though :-) • CarlN Sean seems to think that infinite time is possible. Let’s make ourselves immortal and check. From now on we are immortal (regardless of what happens to the universe) and we keep track of time. We will realize that after being around for 10^100000 years we can still be around a little longer. We see that there will never come a time when we can say that we have lived for an infinite amount of time. Even though we live forever we will never reach infinite age. So time cannot be infinite. By reflecting the argument, a physical reality cannot be eternal either: there is not enough time that can pass from minus infinity to reach the present time. One cannot escape creation from nothing. :-) • Lawrence Crowell I think it likely that time is infinite into the future, at least as measured by some standard clock which could measure that long. Time going back to the past appears to be bounded by the big bang, though there is room for argument there over whether time pushes further back to some pre-tunnelling state or other universe and so forth. The “time at infinity” is the attractor point in the (mini)superspace, which might be a Minkowski spacetime that is completely void or empty. We humans of course will not be there to see much of it. In fact there is room to question whether we will survive this century. The demise of the sun strikes me as a drop-dead end time limit. Paleontology indicates that most large mammalian species are on the Darwinian game table for only a few million years. That is far short of any cosmological time scale, and our historical time frame is far shorter still. Lawrence B. Crowell • CarlN LC, you must realize that there will never come a time when the clock says: “An infinite amount of time has passed!” That time will simply never come, even for a clock that runs forever. Jeez. Time is finite no matter how long. • Lawrence Crowell In part I indicated this with the problem of time intervals.
If there is an accumulation point of energy eigenvalues then the energy spacings between these levels become very small, ΔE_{i,i+1} < ΔE_{i-1,i} for all i, so in the limit as i goes to infinity these spacings converge as a Cauchy-type sequence. The corresponding time intervals associated with these transitions, which this quantum system computes, become larger and larger. This system, in order to be a physical clock, must exhibit decoherence similar to a measurement. So as the “pure time” goes to infinity, what quantum clocks there are which compute this time do so with ever larger time intervals. So there appears to be an issue of detailed balance or ratios here. So depending on how these energy eigenvalues converge, it could be that what quantum clocks the universe provides for itself may in the end compute a finite number (albeit enormously large) of time intervals. I’d need to think more about this to be certain of this conjecture. As for this so-called pure time, this might be something considered by the Compton wavelength of an electron, L = ħ/mc. As the universe expands in this accelerated manner there might in the distant future be some sort of “quantum solipsism,” where every quantum which exists does so in isolation within a cosmological event horizon, which in the very distant future will retreat off to infinity. The electron appears absolutely stable, and for argument I will assume it is, even through vast times of 10^{10^{10^{…}}} years. So the quantum oscillations of an electron in its rest frame will beat with a period T = ħ/mc^2 ≈ 1.3×10^{-21} sec. So this might be seen as some parameter for the forward direction of time. If the universe expands endlessly and the electron is stable then there will be an infinite number of Compton oscillations. Of course this might not count as a quantum clock, for there is no decoherence involved with state transitions. Maybe these lonely quanta become entangled across vast distances across cosmic event horizons, or there is some sort of physics involving transitions, but I will defer opinion on that for now. Lawrence B. Crowell • CarlN LC, you don’t see. There will never be an infinite number of Compton oscillations. The number will keep increasing, from a finite n to n+1 to n+2 etc. forever, but the number will NEVER become infinite. A transition from a finite value to an infinite “value” cannot take place by finite increments. Such a transition can only happen for an infinite increment. • Doug I agree with you that this “departure from the usual protocols of scientific communication” is a “good thing.” When I asked Peter Woit if he would be inclined to participate, he didn’t think it would be a good idea, given that he objects to the Templeton funding. Did that give you any pause, and do you think it might be a more widespread deterrent? • Lawrence Crowell CarlN: your argument would then mean the real number line can’t be infinite because nobody could ever count all of the numbers. You might, in a sense, pop out of the picture and, as Einstein told us to, think of the whole spacetime. The temporal part then “goes to infinity” in much the same way that the real number line goes to infinity. Lawrence B. Crowell • Fermi-Walker Public Transport Hi Sean, Very interesting essay. I have a question, namely that if we are talking about the time evolution of a wave function, then the equations which describe the time evolution of such a wave function in quantum mechanics are invariant under Galilean transformations, not Lorentz.
Galilean invariance assumes a universal time; does this matter in your argument? • CarlN LC, the correct wording is that the real number line is unbounded. For any number you write down I can write a larger one. Then you write one larger still, etc. We could go on for all “eternity” without ever reaching the “infinite number”. I guess you are starting to see what is wrong with Einstein’s spacetime geometry. • http://cosmicvariance.com/sean/ Sean FWPT– You can certainly have a Lorentz-invariant version of quantum time evolution (the Schrödinger equation). The only trick is that the fundamental variables are not the positions of particles, but the amplitudes of quantum fields. (You can look up “functional Schrödinger equation” in some textbooks.) The notion of a Hamiltonian demands that you choose a frame in which to define it, but the result is independent of the frame you choose, if the underlying theory is Lorentz invariant. I discussed this more in the original version of the essay, but space constraints did not permit me to keep it. In a Lorentz-invariant theory, the time parameter is certainly not unique, but any choice is fine. • Lawrence Crowell CarlN, indeed the real number line is unbounded, and in a de Sitter-type cosmology the same is the case. I am not sure I see anything particularly wrong with this. We might ponder whether there are really indeed physical systems which demarcate time intervals endlessly. Sean says they do, with the accumulation of eigenvalues. I think the conjecture is fairly reasonable. I suppose I see nothing wrong with the prospect of a quantum system that oscillates endlessly and demarcates an infinite number of time intervals. That the system will never count some “final” time does not bother me particularly. Lawrence B. Crowell • CarlN LC, I’ll try one last time. What unbounded means: You can never reach the end of the line, but you can’t reach infinity either with finite increments (you could try by division by zero, though). You could fix your favorite spacetime by a supplementary condition that says only a finite spacetime interval is physical. You will still have other problems though… The fact that “time” never can be infinite into the future from the present has very important implications. It means that an infinite amount of time can’t already have passed, even for a physical reality “outside” the BB. In fact all kinds of eternities are eliminated. You can work out the implications… Keep in mind that apart from what exists, there is nothing. If you think that entropy is an important concept: it requires less effort to specify low-entropy initial conditions than high-entropy conditions. In fact zero-entropy initial conditions are easiest to arrange. • http://tyrannogenius.blogspot.com Neil B CarlN, I don’t see how you get the idea that time can’t be “infinite into the future.” That was never supposed to mean there’d be “a time” at which the clock actually read “infinity” (the same as a “highest integer”, or treating “infinity” like another uber-integer somehow beyond each and every other integer…). What’s to keep things from simply continuing to behave in various ways in an ever-expanding universe? BTW, it can’t be both the case that time was indefinite into the past and yet constants could change with each iteration of a collapsing universe: however tiny the chance of becoming an open universe, it would always have already happened. (Contradiction: I can’t even be in such an open universe, contemplating “the infinity” of cycles before now, since however tiny the chance, the open cycle must have already happened earlier.) One thing throwing a monkey wrench into consideration of infinite time, and space, is how we can mathematically compress infinite extents into a finite length. We can for example (“normalize” as is convenient and use x or t) use t’ = arctan t, or x’ = x/sqrt(x^2 + 1), and remap an entire infinite extent onto a finite line or space. That possibility challenges a realist, common-sense notion of what can coexist “in the same space.” Why? Because I can imagine that my entire “infinite universe” is mapped onto the space defined with arctan of the given value of r distance from me (such as it is). It then has a pseudoboundary. (Not a true boundary, since the limit is not an actual defined value: it’s like the open circle at unity for x < 1.) Mappings are perfectly valid transformations, right, that cannot affect or “be observed” by the inhabitants. And yet, if I do so, I can now imagine another entire universe “beyond” the infinity, mapped onto another finite segment in the new x’ or t’ container space. Even weirder, we can then compress an entire aleph-null of such compressed infinities via a second iteration onto x’’ or t’’, etc. One can even imagine apparent absurdities like moving a point along x’ such that it will actually blow past “infinity” of x in a finite time, or outwaiting the entire infinite time extent in t by slowing down appropriately, etc. (Compare “supertask.”)
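To make the remapping concrete, a throwaway numerical check (just the arithmetic of t’ = arctan t, nothing deeper):

```python
import math

# Compress the whole infinite t-axis into (-pi/2, pi/2) via t' = arctan(t).
# However huge t gets, t' only creeps up on the pseudoboundary at pi/2.
for t in (1.0, 1e3, 1e9, 1e18):
    print(f"t = {t:8.0e}  ->  t' = {math.atan(t):.15f}   (pi/2 = {math.pi/2:.15f})")
```

Every finite t lands strictly below pi/2; the “boundary” itself is never attained, which is exactly the open-circle point above.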
Maybe that mathematical trick is not physically realizable. OTOH, compare to the old saw about outsiders never being able to see an object actually fall into a black hole. Yet the faller finds himself reaching not only the event horizon but, a little later, the singularity, in a finite time. The latter disjunction was the framing of a poignant SF story, “Kyrie” by Poul Anderson. The heroic energy-being type alien falls into the black hole. His telepathic human companion can “hear” his laments forever since, for her, the fall never ends. See e.g. http://www.nvcc.edu/home/ataormina/scifi/works/stories/kyrie.htm. I admit to getting a little sniffly, reading the story as a boy (always a sucker for sentimental stuff like “Lassie Come Home”). • CarlN Neil B, “What’s to keep things from simply continuing to behave in various ways in an ever-expanding universe?” Exactly. What remains invariant is that the future will always be at a finite (but increasing) “distance” from a given point in the past. Mathematical transformations won’t help you. You can’t “remap” the infinite future before it has happened. That time will never come. • Lawrence Crowell Neil B’s mathematical example is spot on. He did the work and keyboard tapping I was trying to avoid. The Penrose conformal diagram for the universe can put the t = infinity boundary, maybe a Minkowski spacetime, at a finite distance. The existence of an accumulation point in Hilbert space permits the continued existence of systems which mark time intervals. Whether they reach the endpoint as a summation of their iterations is beside the point. Lawrence B. Crowell • CarlN That is the other problem with spacetime geometries. People are led to believe that the future actually exists in some 4D (or more) geometry. As a calculational tool it is OK, but don’t get confused. This is why we never will see a visitor from the future.
The future does not exist :-) • CarlN LC, the point is that the future will ALWAYS remain at a finite “distance” from, for example, today. This is an invariant fact of reality. Spacetime geometries go in and out of fashion, on the other hand :-) Come on, is anybody willing to step up and say that there will come a time that is infinite into the future from Dec 1, 2008? I guess at least Sean should do so, since he suggests time going from – to + infinity :-) • Lawrence Crowell I am reminded of an account of George Bernard Shaw, who apparently said he did not think the planets exist at the distances demonstrated by astronomers because nobody could ever walk to them. Lawrence B. Crowell • Roman An invariant fact of reality is that we will have some great new theories like CN’s popping out now and then. I don’t have one yet, but I have this question that has been bugging me for some time now: why does physics have problems with infinities while mathematics doesn’t? • http://tyrannogenius.blogspot.com Neil B Roman: of course math has problems with infinities, everything from dividing by zero to the status of infinite sets. BTW, I have yet to find a good rundown of “complex number infinity” descriptions; anyone know? I expect it would be “bifurcated” into two incommensurable limit approaches. One would be along a parallel line (such as 5 + infinity·i) as one approached infinity along either the real or imaginary axes. The other would use r, theta guidelines. It would be in the form r = infinity, theta = whatever. Anyone see this? CarlN, your intuitive notions and “common sense” reasoning (reminding me of how “analytical philosophers” think) may not really constrain the real world, and they certainly don’t constrain the conceptual world. • CarlN My only problem with infinities is getting there :-) So if you become eternal, you believe that there comes a time when you can say: Gee, look at the calendar. Infinitely many years have passed since 2008. Either you believe that or you don’t. Or you are not able to make up your mind. • http://tyrannogenius.blogspot.com Neil B CarlN, I don’t think there is an actual limit moment you can ID as “infinity”. Just think again of relations like the x → x’ transformation and the weird time issues about black holes. Maybe *I* can’t ever notice a calendar moment marking “infinitely many years” since whenever, but *someone else* operating through mappings might be able to go past all of my infinite time or distance (with my “infinity” like the open circle on “one” at x < 1). • Roman Neil B, I agree I put it too broadly – what I meant is that infinity has its place in math, is part of the system. And from my layman perspective it looks like in physics infinity needs to be avoided at all costs. I know that there are good reasons for this – it just seems (maybe better: feels) strange. • Lawrence Crowell Physics has problems with infinities associated with an observable. These crop up with electric potentials V(r) = qq’/r, where as r → 0 at a point charge you get an infinite potential. There are of course ways around this, but singularities regularly crop up as troublesome. However, infinite time is not the same problem. A two-body problem in Newtonian mechanics is ideally eternal. Of course we accept that for two stars in a mutual orbit other external factors will probably at some time end it, but there is nothing which causes a physicist’s hair to stand up over the prospect of a two-body system ticking away forever. The issue Neil B raises with the black hole is one example. If you fall into a black hole you will cross the event horizon in a short finite period of time. An external observer will see you slow down and never reach the horizon. This is related to the tortoise coordinates, in which you never can witness anything cross the horizon.
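A quick way to put numbers on that “never crosses” statement (a throwaway sketch, units G = c = 1, black hole mass M = 1, horizon at r = 2M):

```python
import math

M = 1.0  # black hole mass in units G = c = 1; the horizon sits at r = 2M

def tortoise(r):
    """Tortoise coordinate r* = r + 2M*ln(r/(2M) - 1).
    As r -> 2M from outside, r* -> -infinity."""
    return r + 2*M*math.log(r/(2*M) - 1)

for eps in (1e-1, 1e-3, 1e-6, 1e-12):
    print(f"r = 2M + {eps:5.0e}:  r* = {tortoise(2*M + eps):8.1f}")
```

The faller’s own proper time to the horizon is finite, but the coordinate time a distant observer assigns to outgoing light (which tracks −r*) grows without bound, albeit only logarithmically, which is all the “frozen at the horizon” picture amounts to.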
Of course it becomes incredibly redshifted, so that you can’t observe much except Planck-scale modes. This gets into the whole matter of the stretched horizon, black hole holography and AdS/CFT. The curious thing which happens, of course, is that the external observer in principle can witness you, or Planck modes (strings etc.) associated with you, right up to the moment the black hole quantum evaporates. In the case of cosmology there is no evaporation point of the same type, but one can still work with respect to some conformally mapped coordinates, so the “infinity” is reduced to something finite. In the anti-de Sitter cosmology, a sort of strange twin of the more physical de Sitter cosmology, the boundary of the space is in Fermat coordinates a finite boundary. Yet particles enter and exit this boundary at +/- infinity along great arcs, similar to a Poincaré disk. So something which is infinite in one description (time) is mapped to something finite in Fermat metric coordinates. Of course you can put a black hole in this space and have all sorts of fun with dilated times and coordinates for particles on paths between the boundary and horizon. Lawrence B. Crowell • Interested I am not able to follow the discussion, but it comes to my mind that the issue, at least to lay people, is whether time is an illusion. It seems the scientific discussion focuses on the view that it is not (at least that is what I gather). Wanting to get a better grip on whether time is an illusion, I decided to search readable sources — and gathered this from New Scientist. http://www.newscientist.com/article/mg19726391.500-is-time-an-illusion.html “… Physicists have long struggled to understand what time really is. In fact, they are not even sure it exists at all. In their quest for deeper theories of the universe, some researchers increasingly suspect that time is not a fundamental feature of nature, but rather an artefact (sic) of our perception. One group has recently found a way to do quantum physics without invoking time, which could help pave a path to a time-free “theory of everything”. If correct, the approach suggests that time really is an illusion, and that we may need to rethink how the universe at large works.” I thought this is cute, that the holy grail quest for the “theory of everything” will or could soon be a “time-free” theory of everything (smile) (smile) (smile) So, with a time-free theory of everything, it seems we get back, or return, or come home to square one with the suggestion that time really is an illusion. So is time an illusion? I am a bit rushed for time so did not read the entire 7-page article (with the fonts expanded to size-12 Times Roman; saved for a later day), so I cannot tease out the writer’s suggestion (if any) that time is an illusion. • CarlN Ok. So Neil, as an immortal local observer, will not ever turn the calendar page “Year Infinite”. But LC, via some “conformally mapped co-ordinates” probably, will actually see that Neil wraps around to the page “Year Infinite”. LC, via a strong telescope, can actually read “Year Infinite” on Neil’s calendar. Guys, this is not general relativity, this is general inconsistency.
You seriously need to get your spacetime straight. • Interested My apologies for being insensitive to many great, serious, & genuine attempts at the theory of everything, something dear to the heart, soul & mind of those professionally & scientifically engaged in that worthy pursuit. Such efforts are to be commended whether they bear immediate fruit or not. It may be there are several or more baskets of paradigms, and to reconcile the different baskets seems herculean now, with our current perception of separate and distinct baskets with their own self-contained logic. This comment in New Scientist’s “Is time an illusion?” echoes in my being … “It is not reality that has a time flow, it is our very approximate knowledge of reality that has a time flow,” says Rovelli. “Time is the effect of our ignorance.”… because it comes close to home with ancient Buddhist teachings that ignorance is the greatest taint: the perception of the world as real when it is not. Science’s view of time gives a small window to introduce concepts of perception versus the fundamental or non-fundamental (time being said to be non-fundamental) that seem to defy human understanding based on perception & experience, a rare & unique occasion of possible interface or criss-crossing of views or worldviews. But the explanation given by the author is distinctly and admittedly different: “Imagine gas in a box. In principle we could keep track of the position and momentum of each molecule at every instant and have total knowledge of the microscopic state of our surroundings. In this scenario, no such thing as temperature exists; instead we have an ever-changing arrangement of molecules. Keeping track of all that information is not feasible in practice, but we can average the microscopic behaviour to derive a macroscopic description. We condense all the information about the momenta of the molecules into a single measure, an average that we call temperature.” And THUS “According to Connes and Rovelli, the same applies to the universe at large. There are many more constituents to keep track of: not only do we have particles of matter to deal with, we also have space itself and therefore gravity. When we average over this vast microscopic arrangement, the macroscopic feature that emerges is not temperature, but time. ..” This brings to my mind somewhere where it is said that time is connected or related to gravity, and that if we go up a high-rise building and live there, we age faster because of weaker gravity as compared to another who lives on ground level. It also brings to mind something about travelling at the speed of light and not aging? Or something odd (forgotten), and so it is not the usual perception of the clock ticking time as we ordinarily experience. • CarlN What comes to my mind is the similarity between the thermodynamic partition function and the QM path integral. By a “Wick rotation” of the time you get a temperature: rotate t → −iτ, make τ periodic with period β, and the path integral becomes the partition function Z = Tr e^{−βH}, with temperature T = 1/(k_B β). I guess Rovelli is right in saying that time does not really exist. Classically the present is encoded in the particles’ positions. The past exists only as encoded in the particles’ present momenta. And the future does not exist at all. Only the present really exists. Any spacetime geometry that allows some kind of time travel is fundamentally wrong. • Lawrence Crowell Interested: The problem is somewhat beyond whether time exists; it is how time in general relativity and time in quantum mechanics can ever be made to agree with each other.
In general relativity time is a symmetry of the theory, or in the su(1,1) part of the sl(2,C) group. One can in ADM relativity consider spatial surfaces as foliating a spacetime. How these spatial surfaces link together is chosen by the analyst in a manner similar to a gauge choice. This is of course curious, for we can think of time as some one-dimensional space with a fibration given by spatial surfaces. So “choosing” how time acts is equivalent to choosing a section in a fibration of spatial surfaces over a real line which parameterizes time. Quantum mechanics on the other hand treats time as more concrete. The Schrödinger equation i ∂ψ/∂t = Hψ (ħ = 1) holds time as an evolutionary parameter which is not a symmetry of the Hamiltonian. In relativistic quantum theory one must assign fields on a spatial surface of simultaneity (equal-time commutators of fields etc.), and from there time acts as a parameter which fixes a Hamiltonian, but is not a symmetry of the Hamiltonian. Things get somewhat odd when the space or spacetime is curved. In particular the distinction between a vacuum and a particle state breaks down. General relativity does not derive a Schrödinger equation per se, but in the “space plus time” ADM approach the canonical quantization of variables gives the above equation with the left-hand side zero. This Wheeler-DeWitt equation is then not a wave equation of evolution, but a constraint type of equation which specifies a wave functional on metric configuration variables. This dichotomy between how general relativity and quantum mechanics treat time still obtains. The big theory of unification is string theory, which is more particle-based. There general relativity is treated with a background, which adulterates some aspect of gravitation. The other theory, called Loop Quantum Gravity, is more general relativity oriented, but this theory has difficulty in deriving particle or quantum physics in a workable manner. At the core of this problem is that GR and QM simply regard time in basically different ways. In Euclidean gravity time is related to a temperature, which indicates that on a microscopic level time is the phase description of spinor fields (or whatever substructure there is) relative to gravitation. There may then be a time, or equivalently a temperature, where the physics is scale invariant, which will occur for some very low temperature (T goes to zero) or equivalently a long time parameter. This is I think a quantum critical point, similar to the “breakdown” of electrons or quasi-particle fermions in a Landau fluid. In this setting time appears to be emergent, but at the critical temperature T = 0 time does “go to infinity,” if you will. Lawrence B. Crowell • http://tyrannogenius.blogspot.com Neil B CarlN, there will not be a way to read “Year Infinite” on my calendar since, as I have repeatedly said, remapping makes the limit to infinity an unattained limit point, like the “one” in x < 1. But you may well be right that there is no way for any entity to physically relate to another one in that remapped way which seems to “include” the limit to infinity within its own finite coordinates. It could be just a math notion that has no real physical enactment. This is all part of the question, what is really real: wave functions, time itself, “curved space” (and “so then, what does it curve into?”) etc. I think the universe isn’t fundamentally real or self-sufficient in itself anyway. It’s just a “Matrix”-like scheme for generating phenomenal existence, i.e.
for "a purpose" as hinted through anthropic fine tuning. • Lawrence Crowell It is hard to know if the universe expands to its attractor point in an infinite time period for sure. The cosmological horizon, just as with a black hole horizon, might produce exceedingly long wavelength radiation and decay. The following article arxiv.org/abs/0803.1987 discusses this, though I am not sure about part of this. However if this occurs the cosmological horizon at r = sqrt{3/Lambda} will have some temperature T ~ Lambda, and over an immense period of time the cosmological constant (parameter) will approach zero and the horizon will retreat off to “infinity.” The final state of this is an empty Minkowski spacetime. Whether this takes a literal “infinity time” or not is somewhat academic. It might be that as with BEC’s there is some tiny temperature T > 0 where the process stops. So the final state of the universe might then be at some finite time, though enormously large, in the future. Again we are in a domain of great uncertainty, and because of the nature of this subject we obviously will never do an experiment or make any observations! It is also curious that the final state might be a Minkowski spacetime, which would be an eternal void with no clock or anything else in it. Lawrence B. Crowell • CarlN Neil, I guess you took the point. You can’t remap infinite time, since it will not ever exist. On what is “real” you can only rely on what does not introduce inconsistency. A “thing” that is in conflict with itself cannot exist. It cannot exist more than 2=3. Certain spacetime geometries will in fact give rise to inconsistency or “incomputability” like GR regarding the BH singularity or time travel. However, the most important point is that “time” is invariably finite. The impossibility of future infinite time means that there is no infinite time in the past. The infinite simply takes too long for time to pass. So the beginning of time is finite in the past. This is of course the BB. And this is creation from nothing. Of course one can think about something “before” the BB, that has caused the BB. But the same logic applies. That needs a beginning too. As we have discussed before, you cannot explain anything using eternal (outside of time if necessary) concepts, since there is no way of explaining the properties of eternal “things”. There is no way of explaining something eternal has this set of properties instead of that set of properties. If you go that way all you have is wishful thinking and no explanation. You already know that creation “from” nothing is logical. That is simply logically possible. And also, it is of course impossible for something not self-consistent to start to exist. This is why our physics books is full of math! But still there are things to be said. On what it means to exist, for example. • CarlN LC, time (or the size of the universe) will always be finite, but increasing. This is just a mathematical fact. You can’t reach infinity by finite increments applied one after the other. You just move from one finite value to another for ever. That is just the way it is. And how it must be. Physically it is impossible to measure anything infinite. • Lawrence Crowell Whether or not an alarm clock will ever ring once time reaches infinity is besides the point. Of course that can’t happen. That still does not mean that time can’t continue endlessly. 
In effect what you are saying is that because there is not the infinite register space for information required to enumerate that infinite time, it then must not exist. Again one can consider a Zeno type of argument. Suppose that in one second I, or some oracle machine, count 1, and then in the next 1/2 second it counts 2, and then in the next 1/4 second it counts 3, and so forth. Then this is an infinite sequence of counts performed in 1 + 1/2 + 1/4 + 1/8 + … = 2 seconds. Now I have invoked a bit of “magic” here, but this does indicate that in a finite time an infinite number of counting steps is possible, at least mathematically.
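(If anyone wants to watch that sum converge, a two-line check:

```python
# Partial sums of 1 + 1/2 + 1/4 + ... creep up on 2 but never reach it.
total = 0.0
for n in range(50):
    total += 0.5 ** n
    if n in (0, 1, 3, 10, 49):
        print(f"after {n + 1:2d} counts: {total:.12f} seconds")
```

Every partial sum sits strictly below 2 seconds, yet the full supertask fits inside 2 seconds.)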
So for the universe being infinite in spatial extent, that is of course an interesting question. It indeed could be infinite! Inflation expanded a region the size of an atom to about 1 meter in the first 10^{-20} seconds or so of the universe. Suppose the universe tunnelled out of the vacuum from a wormhole with the Reissner-Nordström-type function F(r) = 1 − 2M/r − Λr^2/3, for ds^2 = −F dt^2 + F^{-1} dr^2 + r^2 dΩ^2. Suppose the cosmological constant Λ = Λ(f, f-dot), for f a Higgsian field; for small scales, such as with a virtual wormhole near the Planck scale, this Λ might be very large. This will then act to inflate the two 3-balls in the interior of the wormhole boundaries, which compose a three-sphere S^3. Now suppose that this Higgsian field is dynamical, say it is composed of gauge particles or quark-like particles. Then there is a theorem by Clifford Taubes which says that such fields can be concentrated at a pole on the sphere, associated with a Chern class. This could not only inflate the sphere, but puncture it and push the boundary “off to infinity!” So inflation might not have just blown up a three-sphere with a small radius of curvature to one with 10^{-20} that curvature; it might have literally “popped” it and stretched it out to infinity. I am not saying this happened with any certainty, but who knows. The universe looks awfully spatially flat, so it could indeed be infinite! Lawrence B. Crowell • http://mccabism.blogspot.com/2008/11/cosmogenic-drift.html Gordon McCabe Interesting stuff. Any attempt to deny the real existence of time breaks the physicalist notion of the correspondence between brain states and mental states. We subjectively experience the passage of time as a succession of mental states; physicalism assumes, amongst other things, that you cannot have a change of mental state without a change of brain state; hence there is an objective succession of brain states. • daemon Sean, I think your classification of Parmenides as belonging to ‘presentism’ is incorrect. He would actually belong to the school of thought called ‘eternalism’. • Lawrence Crowell Parmenides might be the first person to advance something similar to block time, which is the viewpoint in general relativity from a purist perspective. L. C. • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean The classification was only a loose one, as noted in the essay — given the word limit, I couldn’t get into the various specific schools of thought. • Interested CarlN Says: “Only the present really exists. Any spacetime geometry that allows some kind of time travel is fundamentally wrong” I do not personally know, having little maths, but if there is something to this below (same comment as above about this funny script), then time travel might be fundamentally right, albeit progress in this area is slow. “Prof RICHARD GOTT: The greatest time traveller so far is an astronaut, a cosmonaut named Sergei Avdeyev who spent 748 days aboard the Mir space station travelling over 17,000 miles per hour. And when he came back, because of this he had aged a 50th of a second less than he would have if he had stayed home. In other words, when he got back to the earth he found it to be a 50th of a second to the future of where he expected it to be; he has time travelled a 50th of a second into the future. NARRATOR (DILLY BARLOW): Admittedly a 50th of a second may not sound dramatic, but the cosmonauts of Mir could have travelled much further into the future; all they’d have needed to do was travel even faster. Prof PAUL DAVIES: Imagine if I go off in a rocket ship at very close to the speed of light and perhaps I’m gone for about a year whizzing around our part of the galaxy, and I come back to earth and I find that you’re ten years older. I’ve been away one year but ten years have elapsed here on earth, and so in effect I’ve jumped nine years into your future. Prof DAVID DEUTSCH: This sounds like an extremely weird and unbelievable property of nature. It’s actually one of the best corroborated physical effects that we know of. People who build satellites have to routinely take into account the fact that time travels at a different speed in different states of motion.” http://www.bbc.co.uk/science/horizon/2003/timetriptrans.shtml Lawrence Crowell, you attempted to explain to me, but I think it had more science than I could manage or chew off. I read this http://www.bbc.co.uk/science/horizon/2003/timetriptrans.shtml It seems like a pretend script, a made-up story or dialogue, but it also seemed to have some content. Yes? / No? The ending of the script was surprising, for it echoes what I felt at the onset of knowing the big question about time, if it exists. “Prof FRANK TIPLER: Inside the simulation you can’t tell any difference between the simulated environment, the virtual reality and the real environment. In fact this environment we find ourselves in could be just a simulation.
NARRATOR (DILLY BARLOW): Three hundred years ago science set out on a quest to master time, to control it. People didn’t like time being controlled by a super-intelligent superior being; we do it for ourselves instead. But every time we made a breakthrough there was a downside. Now we’re told we may not even be real. Instead we may merely be part of a computer program; our free will, as Newton suggested, is probably an illusion. And just to rub it in, we are being controlled by a super-intelligent superior being, who is after all the master of time. Prof DAVID DEUTSCH: From the point of view of science it’s a catastrophic idea; the purpose of science is to understand reality. If we’re living in a virtual reality we are forever barred from understanding nature. Prof PAUL DAVIES: Our investigation of the nature of time has led inevitably to questioning the nature of reality, and it would be a true irony if the culmination of this great scientific story was to undermine the very existence of the whole enterprise and indeed the existence of the rational universe.” Back to me – If it were to turn out that we walk down the road where we realize we are not real, and that we are some virtual reality, then it will/could/might open the door to other ancient knowledge of what type of virtual reality our rational being and rational universe is. It would/could/might be knowledge converging at some point in the future, through the scientific pursuit of the scientific understanding of time, and thus of the real nature of reality. • CarlN Interested, you refer to relativistic time dilation. This is not time travel in the usual sense. You can slow your time (and aging) with respect to earth (for example) by taking a rocket trip. Time travel is stepping into a stationary machine and finding that you have moved back in time, or forward in time more than the time you spent in the machine. I tend to take the absence of time travelers as empirical “proof” of the impossibility of this. LC, Sean is not using the accumulation point(s) of eigenvalues to “generate” time by quantum transitions. Instead he says nothing about time and uses the superposition of eigenstates to make sure the universe never repeats itself (given time from the outset). However, he does not explain in this setup how an omelet turning into eggs is less probable than eggs turning into an omelet. • Lawrence Crowell The idea is to provide a large enough Hilbert space, which has energy gaps small enough for any future cold condition, so that a quantum Poincaré recurrence does not set in. An elementary example of a recurrence is a two-state atom and photon in a cavity described by the Jaynes-Cummings Hamiltonian. Lawrence B. Crowell • CarlN LC, what in this setup makes sure of the future cold condition? • Lawrence Crowell As the universe expands things get colder. The 3K CMB temperature is a remnant of the period about 380,000 years after the big bang when radiation and matter existed in a plasma. The end of that age released the radiation, which is now redshifted and “cooled” to the feeble microwave background we see today. This will continue as the universe expands into a sort of void that approaches absolute zero temperature. Lawrence B. Crowell • CarlN I know, but what about my question? • Lawrence Crowell It is the expansion of the universe which keeps the temperature dropping. Consider a box where the boundary points keep expanding outward with the comoving coordinates. The wavelength of light in that volume increases as that expansion occurs, if we think of the box as being like an EM cavity. The energy of the photons is E = nhc/L, with L the wavelength and n the number of photons, which is constant. So if these photons are thought of as particles in the box with radiation pressure p = E/3V, both decreasing as the box grows, then an analogy with the ideal gas law pV = NkT tells you that the temperature must be declining.
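In numbers, a throwaway check of that cooling (just Wien’s displacement law plus wavelengths stretching with the scale factor; the 1.06 mm peak is roughly today’s CMB):

```python
WIEN = 2.898e-3            # Wien displacement constant, m*K
PEAK_TODAY = 1.06e-3       # CMB blackbody peak wavelength today, ~1.06 mm

def cmb_temperature(a):
    """Photon wavelengths stretch with the scale factor a, so the
    blackbody peak grows ~ a and Wien's law gives T ~ 1/a."""
    return WIEN / (PEAK_TODAY * a)

for a in (1, 2, 4, 8):     # each doubling of the "box"
    print(f"scale factor {a}: T ~ {cmb_temperature(a):.3f} K")
```

Every doubling of the box halves the temperature: about 2.7 K today, and arbitrarily close to absolute zero in the far future.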
Lawrence B. Crowell • Interested I feel like I am a parrot. We have a 30-year-old parrot at home, and I let him or her out of the cage so he or she is free (the sex cannot be easily determined without a DNA or similar test, as it is not visible). I am parroting the answers I read. CarlN, answers to your question. First, what is the source? BBC Space. Second, what is your question? You (CarlN): Time travel is stepping into a stationary machine and finding that you have moved back in time, or forward in time more than the time you spent in the machine. Paraphrase of your question: Is a stationary time machine for travel to the future theoretically possible according to science? Parroting http://www.bbc.co.uk/science/horizon/2003/timetripqa.shtml : “The Future. According to Professor Paul Davies, “Scientists have no doubt whatever that it is possible to build a time machine to visit the future”. Since the publication of Einstein’s Special Theory of Relativity in 1905, few, if any, scientists would dispute that time travel to the future is perfectly possible.” What are the different possible time machines we could build? “There are now a number of different proposals for time machines that have been put forward by well-regarded physicists, for example:…” Paraphrase of your question: Is travel to the past possible, as opposed to the future? Parroting: “The Past. Time travel to the past is more problematic, but there is nothing in the known laws of physics to prevent it. It is accepted that if you could travel faster than light, you could travel to the past. However, it is impossible to accelerate anything to a speed faster than light because you would need an infinite amount of energy.” You: I tend to take the absence of time travelers as empirical “proof” of the impossibility of this. Parroting: “If time machines are possible, why haven’t we built one? Although the time machines suggested by physicists are theoretically possible, all of them would require massive amounts of energy and a level of engineering technology that we don’t have at the moment, and which we are unlikely to have for quite some time. One of the most famous arguments against time travel is that if time travel is possible, why haven’t we been visited by lots of time travellers from the future? Again, people have come up with ways round this objection: we may be inundated with time travellers and not be aware of it. Maybe that’s what UFOs are. Perhaps civilisations don’t last long enough to develop the knowledge and technology required to build a time machine. And most convincing of all, general relativity says that you can only go back to the time a time machine was created. Since no one has built a time machine yet, no one can come back to this time.” (*I do not know why general relativity says that or has to say that???*) • CarlN Hi LC, again… I know. To be more precise: what in Sean’s setup ensures the expansion of space? And what in the setup makes sure that an omelet turning into eggs is less probable than eggs turning into an omelet? I can’t find any of that.
Interested, people who believe in time travel are people who think that spacetime is a “structure” where the past and future actually exist (the “block time” view) instead of viewing it as a calculating tool. In some of these proposed geometries time travel is indeed possible. Which only proves that the geometry is badly “constructed”, since there is no way of preventing future time travelers from visiting us. Remember that the future already exists in Einstein’s 4D spacetime (if you view it the “wrong” way), which of course is nonsense. Please, no conspiracy theories about secret visitors from the future. • Interested CarlN, would it be apt to paraphrase or capture your expressed and implied point of view as: time is a calculating tool (expressed), but does not exist as a real dimension (implied)? That ordinary people perceive time as a real dimension, the past, present and future, 13.7 billion years ago, and the future time of LC’s “This will continue as the universe expands into a sort of void that approaches absolute zero temperature,” when the distant future approaches absolute zero coldness, is then a common defect of perception (implied), a prevailing shared defect of humans (implied). That it takes skilful science or maths understanding to see or accept time merely as a calculating tool, and not a real entity by itself. • CarlN Interested, I do not believe that time (as something fundamental) really exists at all. Time can only be measured by comparing one motion (or change) to another. So motion seems to be more fundamental than time. It seems you believe that the universe not only exists as we see it today, but that it also exists in the cold, expanded “future” state, and also in all stages in between. In this view the universe is also still in the Big Bang state, probably. I agree that it is possible to view the 4D spacetime this way. I only say that the absence of time travelers shows that this view is probably wrong. So yes, the 4D spacetime geometry is only a calculating tool. Be careful when you use it :-) • Lawrence Crowell Time machines? Why yes of course, or well … uhmm, maybe, but on second thought probably no. Kip Thorne showed that a wormhole can act as a time machine. A wormhole is like a black hole, but if you enter it you pop out elsewhere from a connected identical opening. So there are two connected openings with a boundary which have points identified with each other. Now suppose you hang a clock near each of these openings. If one opening is Lorentz-boosted to near the speed of light and then brought back on another boost, the so-called twin paradox leaves the clock near that opening far behind that of the first. As a result a person can loop through the wormhole and travel back in time. There is a null congruence of rays, like a light cone, which connects the openings at equal times on their clocks, where the time machine is “turned on.” This sounds simple, right? What is the problem? It requires that some exotic quantum field of matter exist right around where the event horizon would otherwise exist. This field acts to defocus geodesics and connect them to this other region. This exotic field violates some energy conditions established by Hawking and Penrose; in particular T^{00} > 0 is violated. The momentum-energy tensor terms are ultimately determined by a quantum field, and if they violate this energy condition it leads to a big problem.
In particular the quantum states are then not bounded below, unlike the elementary case of the minimum S-wave state for the electron in a hydrogen atom. This means that quanta can endlessly transition to lower energy states and produce an infinite amount of energy. The result for spacetime is that it would lead to enormous fluctuations which would destroy the wormhole. Ford and Roman have, in connection with this, demonstrated a quantum interest conjecture which indicates that attempting to accumulate negative energy, T^{00} < 0, always results in more positive energy which overwhelms your attempt. The Dirac sea of the electron suggests this as well. The negative energy states are “occupied,” so no real quanta can fill them. If you try to excite a state there you generate an electron with opposite quantum numbers, but with positive mass, which is called the positron. Quantum mechanics has other hints as well. For space and time translations are determined by the momentum and energy operators, as with Noether’s theorem. Yet wormholes, multiply connected spacetimes and time machines indicate there exist nonunique maps, or operator-determined translations, which connect these points. This leads to problems with the uniqueness of operators on Hilbert space. There are other exotic spacetime solutions. The Alcubierre warp drive and the Krasnikov tube are examples. These all rely upon T^{00} < 0, and there appears to be some sort of protection that prevents this. This connects with wormholes as well, for one could imagine generating an opening into a black hole, pulling out hidden information in the interior and thus violating the laws of thermodynamics which apply to black holes. On the matter of time as something fundamental, say linked with quantum gravity and field theory, it is best to keep an open mind. Lawrence B. Crowell • CarlN LC, good job. Yes, the entropy keeps increasing… Note that it by definition can’t be negative. That is interesting as we rewind time (the evolution of the universe). We can’t rewind any more when we reach S = 0. We reach the beginning of time. The big bang, the creation from nothing. The beginning of time follows from the second law. Note that you logically (and thermodynamically) can’t have periods (finite or infinite) where dS/dt = 0 for the universe as a whole somewhere in the past. From our earlier discussion we noted that time will always be finite in the future, hence it must be finite in the past. Nice to note that the second law complies with this purely mathematical result. Well, it would indeed need to comply! • Interested You: “Interested, I do not believe that time (as something fundamental) really exists at all. Time can only be measured by comparing one motion (or change) to another. So motion seems to be more fundamental than time.” CarlN, if time (to you) does not exist (to put it plainly, and to dispense with the coating of “fundamental”), (i) then how would you tell a lay person why time does not exist, when he or she sees time exist in so many ways every day, every minute? (ii) If time, which is so commonly experienced and thought of as real and existing, is not real, what is there to suggest that the same cannot be said of other equally real things that we take as real in our everyday, day-to-day life? (iii) There has to be a way that scientists can explain the scientific rules of time to the “12 jurors and their alternates” so that these everyday jurors, people who live down your street, can apply these rules to the facts that they have to figure out.
Every juror has to know what the community of scientists says the rules of time are. Like in a trial court, opposing counsels will not agree on the rules of law for the judge to direct the jury, and they will split hairs over the rules, and the judge rules on what the rules are when there is a lack of agreement, taking note later of the objections of the counsel whose version of the rules has been sidelined. So, to spare the jury the torment of knowing what the conflicting rules are, the judge hears the disagreement of the opposing counsel when the jury is out, and makes his or her decision as to what the rules should be for the instant case, and when the time is right the judge tells the jury, in no uncertain way, what the rules of law are and how they should apply them to the facts of the case. The jury then only has to weigh the facts. As lay people, we have our experiences of time: we look at the picture of ourselves when we looked hip, had a moustache, dark hair, were lean and slim, with a big dog beside us when the snap was taken; we pass by our alma mater where we studied, and the gravestones of our loved ones, where we place a bouquet of flowers now and then, and on Thanksgiving too, as they are not around us to celebrate the day. What are the time rules that we the jury should know and follow and apply to determine the case? CarlN, I have not till now thought of it that way. Reading and rereading it several times, I ponder: is that so? I see that you see it that way. Then I ponder what is the difference between the way I see it and the way you perceive I see it. I ask myself: if I infer from the way I see it, would I come to the position that you describe as my perception? (i) I see it that I existed before I was conceived and before I was born, and will exist after I die. As to the form of existence, I do not know. I have no idea. I have no clue. A part of my Buddhist grounding, and I read widely across the three branches of Buddhism for some years, had a personal collection library of Buddhist books I bought, and helped to proofread the Theravada Buddhist bible (The Dhammapada) for the Chief Reverend of the temple I was somewhat connected with, is that maybe I might be born again as a human being in my next life (after the death of this life). I see the Big Bang (as scientists tell us it probably happened) and see or imagine that you were there, I was there, all of us were there – how I do not know – but that all of us were there somehow and participated in it. The birds too that fly over the river. (ii) I disagree with the bleak outlook and end of the universe when we get the deep freeze in the far-future expanded state and life can no longer be supported, or at least not as we know it. I tend to have some optimistic outlook that, if we can come from those who began to use fire, use and shape tools, hunt and farm, start factories, and invent technology in that short span of time compared to the bigger time span of 13.7 billion years, then the exponential curve of progress we have made can continue, how I do not know, where I do not know, but it would be to some fine end, a beauty we cannot imagine or conceive, just as the man who first learnt to build a fire could not imagine how we would build a furnace to melt iron and make steel. Even if the universe has become very cold, something would still be going on in that exponential curve; what and how, I have no inkling whatsoever.
I grew up with no landline home phone, and now there are easily accessible cell phones. (iii) CarlN, if you see that I see things as in (i) and (ii), can you see how difficult it is for me to capture your “the universe not only exists as we see it today, but it also exists in the cold, expanded “future” state, and also in all stages in between. In this view the universe is also still in the Big Bang state, probably” and bring it within my fold of perception and worldview and personal view? Tell me what missing links I must fill in to move from (i) and (ii) to that view, so that there can be a logical progression of thoughts. CarlN, you have slipped through the fence. Let me break it down and see if I understand what you are saying. Are you saying – (i) Time does not exist. So time cannot be a calculating tool. Thus time is not a calculating tool. (ii) Spacetime (as opposed to time itself) is a calculating tool. (iii) Spacetime is 4-dimensional (4D) and the everyday world where we see, hear, feel is 3-dimensional. (iv) Mathematics, geometry, algebra are calculating tools. Spacetime geometry, like other branches of geometry and other branches of mathematics, is a calculating tool. • CarlN Interested, I don’t know if time is real or not. All I know is that the more I think about it, the more unreal it gets. It is like gravity. It seems natural that things fall down when dropped, until we start to think about it. Then it gets difficult. We are all born as idiots with stupid instincts, so that whatever we see or “feel” seems natural. And we remain idiots until we start to think about things. Distance (space) can be measured by comparing with a unit distance (one meter). Same with mass and electric charge. Not so with time. Time can only be measured indirectly, by comparing one motion to another. Why is it so? Why is there no time “quantity” that we can just “pick up” and use as a unit? Why do we have to go through all this stuff with motion and change in order to be able to measure time? I guess this is the main reason for my worries about time. Looks like time is not fundamental, but it is of course still a very useful concept. Anyway, science is not settled via consensus, democracy or jury. Science is all about consistency and reductionism. Explain ever more using fewer and fewer hypotheses. In the end all will be explained using nothing unexplained. I normally get Gödel thrown at me at this stage. By people who have not understood Gödel :-) • Lawrence Crowell In general relativity the proper time ds = sqrt(g_{ab} dx^a dx^b) has a relationship to a clock. The coordinate variable t is a chart-dependent calculating device. This coordinate time only approximately has a clock meaning for a spacetime with some asymptotically flat region, such as a black hole sitting in a spacetime that is flat far removed from r = 2GM/c^2. Spacetimes in general do not provide this convenience. Quantum mechanics on the other hand involves dynamical wave equations which explicitly use the coordinate time t. The Schrödinger equation and the relativistic wave equations all do this. So the coordinate time is treated as a physical parameter for the dynamics of a quantum wave.
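To see “t as an external evolution parameter” in the simplest possible setting, here is a toy two-level system (made-up Hamiltonian H = ω σ_x with ω = 1 and ħ = 1; nothing here is specific to gravity):

```python
import cmath

# Two-level system with H = [[0, w], [w, 0]] (hbar = 1). Starting from
# psi(0) = (1, 0), the exact evolution is psi(t) = (cos(wt), -i*sin(wt)).
# The point: t enters only as an external dial that orders the amplitudes.
w = 1.0

def psi(t):
    return (cmath.cos(w * t), -1j * cmath.sin(w * t))

for t in (0.0, 0.5, 1.0, 2.0):
    a, b = psi(t)
    print(f"t = {t:3.1f}:  P(state 0) = {abs(a)**2:.3f},  P(state 1) = {abs(b)**2:.3f}")
```

Nothing inside the formalism asks what t “is”; it is simply the parameter labelling the succession of states, which is the concrete sense in which quantum mechanics treats time differently from general relativity’s geometric proper time.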
As a result, some hard work is required to place quantum fields in spacetime (equal-time commutators, etc.), and this leads to some curious physics for quantum fields in curved spacetimes, which in turn leads to the radiation emitted by black holes. Because of this there exists a dichotomy in our concepts of time between general relativity and quantum mechanics. This is related in part to the problem of quantizing gravity, for you are attempting to quantize a field theory with one concept of time according to a procedure which requires another concept of time.

As for comparing different times according to different moving objects, we certainly do this! I worked on how to synchronize clocks in Earth orbit before GPS devices got embedded in nearly everything. General relativity tells us that clocks moving in different regions of the gravity field will mark time according to different intervals or proper times. For GPS purposes this will cause time on the different satellite frames to drift apart, which will result in errors in the triangulation of a point on Earth. So time is measured, in a sense, relative to other "times." (A back-of-the-envelope sketch of these clock-rate numbers appears at the end of this thread.)

Galileo, as the story goes, measured the periodicity of a chandelier during a mass by using his pulse. He then later used a clock to measure the periodicity of a pulse. If you have one clock you always know what time it is, but if you have two you might not. With measuring spatial distance, we compare one distance against some established length, a meter stick, so in some ways we do much the same thing. If one is to consider time as strange and maybe not existing, the same really has to apply to space.

General relativity is a relationship system between particles that involves geometry. Quantum mechanics is another relationship system, which fundamentally relates particles to each other by an abstract Hilbert space of states. Quantum wave equations emerge from our representations of quantum states in spacetime, which has some funny elements to it. The fact that general relativity and quantum mechanics are different relationship systems between particles is manifested in the dichotomy in how the two define time.

Lawrence B. Crowell

• Interested

I guess we all know what is real and what is not real, and so we do not define "real." But when it comes to time, when you say "real" or "not real," you may mean something other than what I think, and I may not be aware of that. So I would like to see what you mean by "real" and by "unreal" or "not real." As for me, I am more drawn to "time is not real" than to "time is real," now that scientists have come out in the open, in the public domain, to discuss it. Before the recent understanding of a month or so back, I did not even know scientists have been concerned (or, as you say, "worried") about time. The reason I am more drawn to the idea that time is not real is my own limited understanding of intuition in meditation. I do not pursue meditation to achieve enlightenment in this lifetime, or on a paced program of seven human lifetimes (seven rebirths to go through all the levels of meditation to enlightenment), though at one stage of my life such was my sincere and avowed aspiration, this lifetime included; but I support the institution that preserves that ancient method for others and for posterity, giving it the benefit of the doubt (as, absent attainment, there will be reasonable doubt). The core of Buddhism is that this world is not real and there is no self; quaintly put, the light is on but no one is at home.
If that be the perception of the conventional world, then time is but an element of the conventional world, and thus seeing time as not real is a sub-part of seeing the whole world as not real. Because of my limited experience in meditation, and other human experiences, and in the face of the open conflict between scientists as to whether time is real or not, I lean toward thinking, or inferring, that time is not real. But at this juncture, absent clearer and more understandable scientific advice, I do not think I fully grasp what scientists have in mind when they worry whether time is real, not real, or unreal. It would help IF scientists could clarify, in plain, simple lay language and modes of understanding, absent abstract calculations, what it means to say time is real, what it means to say time is not real, and what it means to say time is unreal.

All of us are born much the same way: we cry for food and comfort, and through environment and genetic advantages we learn to speak, write, think, and research. Not many of us will have the advantage of further education, much less advanced higher maths and science education. Thus not many of us will have the scientific and mathematical ability to think in a structured way about gravity and time. Does it mean that, absent scientific and mathematical training, we will never have an avenue to think about reality (gravity, time, the universe)? The people who can provide that avenue for those without special science and maths training would have to step down their explanations to reach the ordinary man in the street. In today's time and age, and for the future, why should such avenues of thinking about reality be made popular knowledge, or put in the public domain? There could be many reasons, but the reason I wish to articulate here is that every life lived embodies a philosophy, examined or not. If we see or "feel" with our instincts, we imbibe a philosophy of life; if we do not have the avenue to think about things, we will go by our instinct; but if society gives us the avenue, then we can think about reality even absent a higher science or maths qualification.

If time is measured indirectly by comparing one motion to another, what does this say about the process of ageing, where we experience time in the most intimate of ways? We grow old; from the day we are born, we age every day and every year. What happened to the baby, the child, the teenager that we once were? What does 1 year old mean, or 4 years old, 6 years old, 25 years old, 45 years old, 70 years old? IF time is measured indirectly by comparing one motion to another, then what motions are we supposed (IF supposed at all) to compare? The motion of our cells, human cells, with each other? With the Earth? With other humans, or other human cells?

You: "I normally get Gödel thrown at me at this stage. By people who have not understood Gödel."

I had to look up Wikipedia to know who Gödel is. It was too deep. So I looked up MSN Encarta, which made easier reading, though in terms of content it is understandably negligible. Absent an understanding of Gödel's work, and absent maths training of even undergraduate level, and based just on MSN Encarta, which is really scanty and for kids, my knee-jerk response is that logic within a certain paradigm is self-contained, and outside those parameters it tends to become illogical. If so, then the proof of a matter within a paradigm (be it a field of maths or any field of human enquiry of recent centuries) can only be given within that enclosed paradigm.
Gödel's is thus another example of this human adventure to understand the universe and humans: to contain that understanding, we map out a field of understanding and the basis for it, and then we delve into it and unfold many things in that field.

• http://magicdragon.com Jonathan Vos Post

I just submitted a link to this thread at "The Status of Coalgebra," posted by David Corfield at the n-Category Cafe, because of the issues raised there about an infinite-dimensional (even still countable-dimensional) topological vector space whose underlying discrete vector space is (by the axiom of choice) uncountable-dimensional.

• http://magicdragon.com Jonathan Vos Post

Specifically, see what my friend Dr. Jonathan Farley writes: "Birkhoff and von Neumann developed the logic of quantum mechanics in the 1930's. One central question is to characterize lattice-theoretically the lattices of closed subspaces of Hilbert spaces: they satisfy the orthomodular law at least [I read that this term was coined by Kaplansky, just to drop more names: as someone working in a field (lattice theory) a number theorist at Princeton has called 'not interesting or important' I am somewhat self-conscious]."

"I may be wrong, my memory is poor, but I believe this question may only be interesting in the infinite-dimensional case because otherwise you just get orthocomplemented modular lattices."

In the n-Category Cafe thread I then expand on his correct citation and, correcting the notation to be ASCII-ized, explain what orthocomplemented modular lattices are about, as T. S. Fofanova outlined roughly 30 years ago. I've not yet thrashed the matter to death with another friend, Dr. George Hockney, but he thinks that the foundational difference between countable-dimensional and uncountable-dimensional Hilbert spaces for QM does not matter FOR PHYSICS as such, in part because a physical system is not quantized as such, but second-quantized. That it doesn't matter how we renormalize is morally the same as that it doesn't matter whether we have countable-dimensional or uncountable-dimensional Hilbert spaces for QM. What is the order of the renormalization group (countable or uncountable)?

Another way to look at this is in terms of gauge invariance. We can't get away from ghosts. For gauge invariance, we must have that the Lorentz-transformed EM is a subspace of the Minkowski-transformed system as a whole. The real particles mix with virtual particles whether we like it or not.

Philosophically, this relates to a question that I've been asking for 35 years: what is the topology of the space of all possible ideas (what Fritz Zwicky called the "ideocosm")? What is the topology of the space of all mathematical theories? How do we make a hyperplane or hypersurface to separate the physical theories from the nonphysical theories within the space of all mathematical theories? (This is touched on in S. Majid, "Principle of representation-theoretic self-duality," Phys. Essays 4 (1991) 395-405.) To avoid paradox, that partition seems to me neither a mathematical nor a physical meta-theory. But what is it?

• Lawrence Crowell

I would need to think a bit about this. A countable Hilbert space, the classic case being a harmonic oscillator, is what is used in second quantization. A ghost field is employed to define a supergenerator so that unphysical fields are cancelled out. This is done so that Q^2 = 0 (e.g. fermionic), and the ghost anticommuting scalars are used to ensure this condition.
Where the size of the Hilbert space comes in, which might have some bearing on ghosts and gauge theory, is with the quantum cohomology of the generators. If one wants to get fussy, this does depend upon issues of compactness, paracompactness, and the rest. If states cluster up and become dense on some base of support, then these subtle issues might crop up.

Lawrence B. Crowell

• Interested

Jonathan: "Philosophically, this relates to a question that I've been asking for 35 years: what is the topology of the space of all possible ideas (what Fritz Zwicky called the 'ideocosm')?"

What good would it do to collect, store, and archive all possible ideas? Not knowing what ideas Fritz Zwicky stored, archived, and retrieved for his research use, would it not suggest that storage has to be tailored to the intended use and needs of the specific users? The usual general storage area is the library. Some may collect and archive many things, like a human sponge. It is said that babies before age 5 are like sponges: they can absorb many new things easily, and learning is effortless then. It is said that Sanskrit is difficult to learn, but a child below 5 who lives in that environment can learn it easily and speak it. Those who collect, sort out, compartmentalise, and archive are like the under-5 babies: human sponges, brains that absorb all. http://www.amazon.com/Absorbent-Mind-Maria-Montessori/dp/0805041567 Is that one of the further stages of the human evolutionary process?

On a parallel tangent, I have met two people who have told me that when some very good meditator monks die and are cremated, their remains include some crystals besides the ashes and bones (the two have been privileged by circumstances to have a few of them). I have also read, in one book, of the monk who purchased the land in Northern California for his temple (the City of Ten Thousand Buddhas) and who, before he died, donated a part of the forest land to the monks of a different Buddhist branch (in the Redwood Valley area of Northern California). (I just pulled out the book to get his name: Hsuan Hua, City of Ten Thousand Buddhas.) These people do not collect such worldly knowledge, but it seems they are different, and their difference is in what forms their body, so that when cremated they leave behind things that others do not.

"What is the topology of the space of all mathematical theories?" Do not know. Do not know.

• http://magicdragon.com Jonathan Vos Post

The issue is not to store all ideas, but to perfect methodologies for systematically exploring the Ideocosm in search of really great ideas. I've discussed that with Zwicky himself (whose first words to me were "who the hell are you?"), and with Herman Kahn and Linus Pauling, Jr.; they all agreed on this. Stephen Wolfram asked me the sneaky question: if you have a computer search for you, who owns the intellectual property to what is discovered?

• Interested

Jonathan: "The issue is not to store all ideas, but to perfect methodologies for systematically exploring the Ideocosm in search of really great ideas."

Books are stored in libraries and catalogued so they can be searched by topic, specific subject matter, author, and so on. Abstracts enable an easy overview of the longer works. What then would be the methodologies to systematically explore the Ideocosm? How different would they be from those used by libraries?
One apparent difference between "to perfect methodologies for systematically exploring the Ideocosm in search of really great ideas" for public use, versus the private system of Zwicky (not having spoken to him, nor met him, and not being called ……. [ seven letters] by him), would be the design of the system, which is made easier when (imagine) Zwicky employs it for his own research and his area of interest is limited to his field of study. Given that the world has many areas of study that humanity has come up with, and more yet to come, how can there be the same methodologies for different areas of study? For example, the sciences differ from the social sciences, the humanities, and the arts. Even within broad categorisations, differences would appear in the different subfields. How can mankind conceive of perfecting "methodologies" to cover all areas of human study and exploration? Who determines what "really great ideas" are, if that be the criterion for selection into the Ideocosm? What is this Ideocosm? The reality of nature, of the universe? Who determines what the Ideocosm is? Zwicky? Based on what he collected and stored and the method he employed for himself, for his limited area of study? Is that to be exponentially transferred to the whole world, the whole of humanity, the whole universe, from the beginning of time to the end of time (if there is a beginning and if there is an end, since there is some serious doubt whether time even exists)?

• http://magicdragon.com Jonathan Vos Post

See also equation (5) of "On the consistency of the constraint algebra in spin network quantum gravity," R. Gambini, J. Lewandowski, D. Marolf, J. Pullin, arXiv preprint gr-qc/9710018, 1997: "While this sum involves an (uncountable) infinity of terms, its action on spin network states |Γ′⟩ is well-defined since only one term (the one in which σ maps the vertices of Γ to the vertices of Γ′ in the proper way) can be nonzero…"

And also: E. Flytzanis (Athens University of Economics and Business), "Unimodular eigenvalues and linear chaos in Hilbert spaces," Geometric and Functional Analysis 5(1), January 1995, pp. 1-13, DOI 10.1007/BF01928214.

Abstract: "For linear operators T in a complex separable Hilbert space H we consider the problem of existence of invariant Gaussian measures m: mT^-1 = m. We relate the size of the unimodular point spectrum of T to mixing properties of the measure preserving transformations defined by T with respect to such invariant measures, and we draw some conclusions concerning orbit structure properties of T."

"Unimodular eigenvalues of linear operators in Hilbert space are usually associated with periodic or quasiperiodic orbits. We will show that this is indeed the case if they are countable. However if the unimodular point spectrum is uncountable then we will show that the orbits of the operator are also characterized by erratic behavior associated with chaotic motion. This happens because the linear transformations defined by such operators accept invariant probability measures having mixing properties in the context of ergodic theory."

Does someone want to draw a Cosmological conclusion from this?
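A small numerical aside on that abstract: the countable case is easy to see on a computer. What follows is a toy sketch with a made-up finite unimodular spectrum (nothing taken from the paper itself); it shows the orbit of a diagonal operator whose eigenvalues all have modulus 1 returning ever closer to its starting vector, i.e. quasiperiodic rather than chaotic behavior.

import numpy as np

# Toy operator: diagonal action by unimodular eigenvalues exp(i*theta_k).
# With a finite (hence countable) unimodular point spectrum, the orbit
# T^n x keeps returning arbitrarily close to x.
rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 2.0 * np.pi, size=5)  # made-up spectrum
eigs = np.exp(1j * thetas)                      # |lambda_k| = 1 for every k

x0 = rng.normal(size=5) + 1j * rng.normal(size=5)
x0 /= np.linalg.norm(x0)

x, closest = x0.copy(), np.inf
for n in range(1, 200001):
    x = eigs * x                                # apply T once
    d = np.linalg.norm(x - x0)
    if d < closest:                             # report each new near-recurrence
        closest = d
        print(f"n = {n:7d}   ||T^n x - x|| = {d:.3e}")

The printed distances shrink toward zero, which is the quasiperiodicity the abstract describes for the countable case; the uncountable, chaotic regime is of course beyond a five-dimensional toy.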
• Interested

Jonathan: "Stephen Wolfram asked me the sneaky question: if you have a computer search for you, who owns the intellectual property to what is discovered?"

If you go to Outlinedepot.com you can preview the various law school outlines on intellectual property, and purchase them. I think they cost about $10 per subject outline. Many schools offer their outlines, so you have a choice of outlines. The preview section will give you an idea of the writing and pedagogical style that suits your taste. Answering your question will require visiting those outlines and then framing them in a way that meets your expectations. It will take very much time and some cost. Maybe it can be done if circumstances permit, but otherwise this is the direction you are looking at on the net. In broad brush, some things you want to watch out for: (i) intellectual property rights to protect writing (books) and music [copyright], patents, trademarks, service marks (?), and of course intellectual property rights and technology; (ii) intellectual property rights at the international level, TRIPS (Trade-Related Intellectual Property rightS) and the concomitant obligations of 153 states via the TRIPS Agreement, interlinked with about 40 (?) agreements through membership of the World Trade Organisation, which had about 120 members at its inception in 1994 and 153 today. The 153: [Albania 8 September 2000 Angola 23 November 1996 Antigua and Barbuda 1 January 1995 Argentina 1 January 1995 Armenia 5 February 2003 Australia 1 January 1995 Austria 1 January 1995 Bahrain, Kingdom of 1 January 1995 Bangladesh 1 January 1995 Barbados 1 January 1995 Belgium 1 January 1995 Belize 1 January 1995 Benin 22 February 1996 Bolivia 12 September 1995 Botswana 31 May 1995 Brazil 1 January 1995 Brunei Darussalam 1 January 1995 Bulgaria 1 December 1996 Burkina Faso 3 June 1995 Burundi 23 July 1995 Cambodia 13 October 2004 Cameroon 13 December 1995 Canada 1 January 1995 Cape Verde 23 July 2008 Central African Republic 31 May 1995 Chad 19 October 1996 Chile 1 January 1995 China 11 December 2001 Colombia 30 April 1995 Congo 27 March 1997 Costa Rica 1 January 1995 Côte d'Ivoire 1 January 1995 Croatia 30 November 2000 Cuba 20 April 1995 Cyprus 30 July 1995 Czech Republic 1 January 1995 Democratic Republic of the Congo 1 January 1997 Denmark 1 January 1995 Djibouti 31 May 1995 Dominica 1 January 1995 Dominican Republic 9 March 1995 Ecuador 21 January 1996 Egypt 30 June 1995 El Salvador 7 May 1995 Estonia 13 November 1999 European Communities 1 January 1995 Fiji 14 January 1996 Finland 1 January 1995 Former Yugoslav Republic of Macedonia (FYROM) 4 April 2003 France 1 January 1995 Gabon 1 January 1995 The Gambia 23 October 1996 Georgia 14 June 2000 Germany 1 January 1995 Ghana 1 January 1995 Greece 1 January 1995 Grenada 22 February 1996 Guatemala 21 July 1995 Guinea 25 October 1995 Guinea Bissau 31 May 1995 Guyana 1 January 1995 Haiti 30 January 1996 Honduras 1 January 1995 Hong Kong, China 1 January 1995 Hungary 1 January 1995 Iceland 1 January 1995 India 1 January 1995 Indonesia 1 January 1995 Ireland 1 January 1995 Israel 21 April 1995 Italy 1 January 1995 Jamaica 9 March 1995 Japan 1 January 1995 Jordan 11 April 2000 Kenya 1 January 1995 Korea, Republic of 1 January 1995 Kuwait 1 January 1995 Kyrgyz Republic 20 December 1998 Latvia 10 February 1999 Lesotho 31 May 1995 Liechtenstein 1 September 1995 Lithuania 31 May 2001 Luxembourg 1 January 1995 Macao, China 1 January 1995 Madagascar 17 November 1995 Malawi 31 May 1995 Malaysia 1 January 1995 Maldives 31
May 1995 Mali 31 May 1995 Malta 1 January 1995 Mauritania 31 May 1995 Mauritius 1 January 1995 Mexico 1 January 1995 Moldova 26 July 2001 Mongolia 29 January 1997 Morocco 1 January 1995 Mozambique 26 August 1995 Myanmar 1 January 1995 Namibia 1 January 1995 Nepal 23 April 2004 Netherlands — For the Kingdom in Europe and for the Netherlands Antilles 1 January 1995 New Zealand 1 January 1995 Nicaragua 3 September 1995 Niger 13 December 1996 Nigeria 1 January 1995 Norway 1 January 1995 Oman 9 November 2000 Pakistan 1 January 1995 Panama 6 September 1997 Papua New Guinea 9 June 1996 Paraguay 1 January 1995 Peru 1 January 1995 Philippines 1 January 1995 Poland 1 July 1995 Portugal 1 January 1995 Qatar 13 January 1996 Romania 1 January 1995 Rwanda 22 May 1996 Saint Kitts and Nevis 21 February 1996 Saint Lucia 1 January 1995 Saint Vincent & the Grenadines 1 January 1995 Saudi Arabia 11 December 2005 Senegal 1 January 1995 Sierra Leone 23 July 1995 Singapore 1 January 1995 Slovak Republic 1 January 1995 Slovenia 30 July 1995 Solomon Islands 26 July 1996 South Africa 1 January 1995 Spain 1 January 1995 Sri Lanka 1 January 1995 Suriname 1 January 1995 Swaziland 1 January 1995 Sweden 1 January 1995 Switzerland 1 July 1995 Chinese Taipei 1 January 2002 Tanzania 1 January 1995 Thailand 1 January 1995 Togo 31 May 1995 Tonga 27 July 2007 Trinidad and Tobago 1 March 1995 Tunisia 29 March 1995 Turkey 26 March 1995 Uganda 1 January 1995 Ukraine 16 May 2008 United Arab Emirates 10 April 1996 United Kingdom 1 January 1995 United States of America 1 January 1995 Uruguay 1 January 1995 Venezuela (Bolivarian Republic of) 1 January 1995 Viet Nam 11 January 2007 Zambia 1 January 1995 Zimbabwe 5 March 1995] The legal texts for the TRIPS Agreement and the others are on the website http://www.wto.org/ . Your question also crosses state borders through technology, and you might wish to consider the user in any of the above 153 states or outside those 153 states, that is, in the other 42 states [195-153].

• http://magicdragon.com Jonathan Vos Post

Thank you, Interested. Although my wife and I have earned over $100,000.00 consulting for top Intellectual Property law firms, I now delegate that subject to my son. My son, after all, is smarter than me. I was a ripe old 16 when I arrived at Caltech on full scholarship and worked with family friend Feynman. My son started full time at university at age 13, and got his double B.S. in Math and Computer Science at 18. He's halfway through his J.D. program, specializing in Intellectual Property, at the Gould School of Law, University of Southern California. Stephen Wolfram (who met my son when my son presented a paper years ago at a Wolfram NKS conference) is in no way naive about IP, having won his showdown with Caltech, a complicated story dating back to when Wolfram left his Computational Physics professorship to commercialize Mathematica. Referring back to the title of this blog thread, "What if Time Really Exists?", the deeper questions involve the period for which IP grants a monopoly to the patent holder, versus the benefits to the Arts & Sciences that it confers on society as a whole. Once computers have legal rights (inevitable when a system that passes the Turing Test has a good enough lawyer), the whole game changes.
Time really exists alright (though Sean Carroll opened a cute loophole with the uncountably infinite Hilbert space notion), but the computers of the future, merged in ways we can't yet describe with human beings, will explore the Ideocosm dramatically faster, with quantum hardware and genetic-algorithm software.

• Interested

Thank you. You are welcome. I cannot imagine such a computer, though one sees the likes of it in movies, where the computer takes on a life of its own. One of my favorites was the robot who opened his own bank account and decided to go out and find other robots like him; he found none, and lived by himself near the sea. But this http://www.poodwaddle.com/worldclock.swf , which is circulating among my husband's friends and was sent to me just now, is a far cry, but still something :-)))
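As a numeric footnote to Lawrence Crowell's GPS remarks earlier in this thread: the two competing clock effects he mentions can be estimated in a few lines. This is a back-of-the-envelope sketch assuming textbook constants and an idealized circular orbit, not the actual GPS correction machinery.

import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 299792458.0       # speed of light, m/s
r_earth = 6.371e6     # mean Earth radius, m (idealized; real work uses the geoid)
r_sat = 2.656e7       # GPS orbit semi-major axis, m

v_sat = math.sqrt(GM / r_sat)                     # circular-orbit speed, ~3.9 km/s
grav = GM / c**2 * (1.0 / r_earth - 1.0 / r_sat)  # higher clock runs faster
kinematic = -v_sat**2 / (2.0 * c**2)              # moving clock runs slower

us_per_day = 86400.0 * 1e6  # converts a fractional rate to microseconds per day
print(f"gravitational: {grav * us_per_day:+.1f} us/day")                # about +45.7
print(f"kinematic:     {kinematic * us_per_day:+.1f} us/day")           # about -7.2
print(f"net:           {(grav + kinematic) * us_per_day:+.1f} us/day")  # about +38.5

Left uncorrected, a drift of roughly 38 microseconds per day, multiplied by the speed of light, corresponds to ranging errors on the order of 10 km per day, which is why the satellite clocks are deliberately detuned before launch.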
AQME: Advancing Quantum Mechanics for Engineers

by Tain Lee Barzso, Dragica Vasileska, Gerhard Klimeck

Introduction to Advancing Quantum Mechanics for Engineers and Physicists

The "Advancing Quantum Mechanics for Engineers" (AQME) toolbox is an assemblage of individually authored tools that, used in concert, offer educators and students a one-stop shop for semiconductor education. The AQME toolbox holds a set of easily employable nanoHUB tools appropriate for teaching a quantum mechanics class in either engineering or physics. Users no longer have to search the nanoHUB to find the appropriate applications for discovery related to quantum mechanics; users, both instructors and students, can simply log in and take advantage of the assembled tools and associated materials such as homework or project assignments.

Thanks to its contributors, nanoHUB users and the AQME toolbox have benefited tremendously from the hard work invested in tool development. Simulation runs performed using the AQME tools are credited to the individual tools and count toward individual tool rankings. Uses of individual tools within the AQME tool set are also counted, to measure AQME's impact and to improve the toolbox. On their respective pages, the individual tools are linked to the AQME toolbox.

Participation in this open-source, interactive educational initiative is vital to its success, and all nanoHUB users can:

• Contribute content to AQME by uploading it to the nanoHUB (see "Contribute > Contribute Content" on the nanoHUB main page). Tagging contributions with "AQME" will effect an association with this initiative and, because the toolbox is actively managed, such contributions may also be added to the toolbox.

• Provide feedback for the items you use in AQME through the review system. (Please be explicit and provide constructive feedback.)

• Let us know when things do not work by filing a ticket via the nanoHUB "Help" feature on every page.

• Let us know what you are doing and submit your suggestions for improving the nanoHUB by using the "Feedback" section, which you can find under "Support."

Finally, be sure to share AQME and other nanoHUB success stories; the nanotechnology community and its supporters need to hear of nanoHUB's impact.

Discovery that is Possible through Quantum Mechanics

Nanotechnology has yielded a number of unique structures that are not found readily in nature. Most demonstrate an essential quality of quantum mechanics known as quantum confinement. Confinement is the idea of keeping electrons trapped in a small area, about 30 nm or smaller. Quantum confinement comes in several dimensionalities. 2-D confinement, for example, is restricted in only one dimension, resulting in a quantum well (or plane); lasers are currently built from this kind of confinement. 1-D confinement occurs in nanowires, and 0-D confinement is found only in the quantum dot.

The study of quantum confinement leads, foremost, to electronic properties not found in today's semiconductor devices. The quantum dot works well as a first example. The typical quantum dot is anywhere between 3 and 60 nm in diameter; that is still 30 to 600 times the size of a typical atom. A quantum dot exhibits 0-D confinement, meaning that electrons are confined in all three dimensions. In nature, only atoms have 0-D confinement; thus, a quantum dot can be described loosely as an "artificial atom." This knowledge is vitally important, as atoms are too small and too difficult to isolate in experiments.
Conversely, quantum dots are large enough to be manipulated by magnetic fields and can even be moved around with an STM or AFM. We can deduce many important atomistic characteristics from a quantum dot that would otherwise be impossible to research in an atom.

Confinement also increases the efficiency of today's electronics. The laser is based on a 2-D confinement layer that is usually created with some form of epitaxy, such as Molecular Beam Epitaxy or Chemical Vapor Deposition. The bulk of modern lasers created with this method are highly functional, but these lasers are ultimately inefficient in terms of energy consumption and heat dissipation. Moving to 1-D confinement in wires or 0-D confinement in quantum dots allows for higher efficiencies and brighter lasers. Quantum dot lasers are currently the best lasers available, although their fabrication is still being worked out.

Confinement is just one manifestation of quantum mechanics in nanodevices. Tunneling and quantum interference are two other manifestations of quantum mechanics, in the operation of scanning tunneling microscopes and resonant tunneling diodes, respectively. For more information on the theoretical aspects of quantum mechanics, check the following resources: Quantum Mechanics for Engineers: Podcasts; Quantum Mechanics for Engineers: Course Assignments.

Because understanding quantum mechanics is so foundational to an understanding of the operation of nanoscale devices, almost every Electrical Engineering department (in which there is a strong nanotechnology experimental or theoretical group) and all Physics departments teach the fundamental principles of quantum mechanics and their application to nanodevice research. Several conceptual sets and theories are taught within these courses. Normally, students are first introduced to the concept of particle-wave duality (the photoelectric effect and the double-slit experiment), the solutions of the time-independent Schrödinger equation for open systems (piece-wise constant potentials), tunneling, and bound states. The description of the solution of the Schrödinger equation for periodic potentials (the Kronig-Penney model) follows naturally from the discussion of double-well, triple-well, and n-well structures. This leads students to the concepts of energy bands and energy gaps, and to the concept of the effective mass, which can be extracted from the pre-calculated band structure by fitting the curvature of the bands. The Tsu-Esaki formula is then investigated so that, having calculated the transmission coefficient, students can calculate the tunneling current in resonant tunneling diodes and Esaki diodes. After establishing the basic principles of quantum mechanics, the harmonic oscillator problem is discussed in conjunction with understanding the vibrations of a crystalline lattice, and the idea of phonons is introduced, as well as the concept of creation and annihilation operators. The typical quantum mechanics class for undergraduate/first-year graduate students is then completed with a discussion of stationary and time-dependent perturbation theory and the derivation of the Fermi Golden Rule, which is used as a starting point for a graduate-level class in semiclassical transport. Coulomb blockade is another discussion a typical quantum mechanics class will include.

Particle-Wave Duality

A wave-particle dual nature was discovered and publicized in the early debate about whether light was composed of particles or waves.
Evidence for the description of light as waves was well established at the turn of the century, when the photoelectric effect introduced firm evidence of a light-as-particle nature as well. This dual nature was found to also be characteristic of electrons: the electron's particle nature was well documented when the de Broglie hypothesis, and the subsequent experiments by Davisson and Germer, established the wave nature of the electron.

Particle-Wave Duality: an Animation. This movie helps students to better distinguish when nano-things behave as particles and when they behave as waves. The link below connects to an exercise on these concepts: Introductory Concepts in Quantum Mechanics: an Exercise.

Solution of the Time-Independent Schrödinger Equation

Piece-Wise Constant Potential Barriers Tool in AQME – Open Systems

Available resources:

Bound States Lab in AQME

The Bound States Lab in AQME determines the bound states and the corresponding wavefunctions in square, harmonic, and triangular potential wells. The maximum number of eigenstates that can be calculated is 100. Students clearly see the nature of the separation of the states in these three prototypical confining potentials, with which they can approximate realistic quantum potentials that occur in nature. [Figures: energy eigenstates of a harmonic oscillator (left); probability density of the ground state, which demonstrates purely quantum-mechanical behavior (middle); probability density of the 20th subband, which demonstrates more classical behavior as the well opens up (right).]

Available resources:

Energy Bands and Effective Masses

Periodic Potential Lab in AQME

The Periodic Potential Lab in AQME solves the time-independent Schrödinger equation in a 1-D spatial potential variation. Rectangular, triangular, parabolic (harmonic), and Coulomb potential confinements can be considered. The user can determine the energetic and spatial details of the potential profiles, compute the allowed and forbidden bands, plot the bands in compact and expanded zone representations, and compare the results against a simple effective-mass parabolic band. Transmission is also calculated. This lab also allows students to become familiar with the reduced-zone and expanded-zone representations of the dispersion relation (the E-k relation for carriers).

Available resources: Periodic Potentials and Bandstructure: an Exercise

Band Structure Lab in AQME

[Figures: band structure of Si (left panel) and GaAs (right panel).]

In solid-state physics, the electronic band structure (or simply band structure) of a solid describes the ranges of energy that an electron is "forbidden" or "allowed" to have. It is due to the diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular its electronic and optical properties. The Band Structure Lab in AQME enables the study of the bulk dispersion relationships of Si, GaAs, and InAs. Plotting the full dispersion relation of different materials, students first get familiar with the band structure of direct band gap (GaAs, InAs) as well as indirect band gap semiconductors (Si).
For the case of multiple conduction-band valleys, students must first determine the Miller indices of one of the equivalent valleys, and from that information they can deduce how many equivalent conduction-band valleys there are in Si and Ge, for example. In advanced applications, users can apply tensile and compressive strain and observe the variation in the band structure, band gaps, and effective masses. Advanced users can also study band-structure effects in ultra-scaled (thin-body) quantum wells and in nanowires of different cross sections. The Band Structure Lab uses the sp3s*d5 tight-binding method to compute E(k) for bulk, planar, and nanowire semiconductors.

Available resource: Bulk Band Structure: a Simulation Exercise

[Figure: the first Brillouin zone of the FCC lattice, which corresponds to the first Brillouin zone of all diamond and zinc-blende materials (C, Si, Ge, GaAs, InAs, CdTe, etc.). There are 8 hexagonal faces (normal to [111]) and 6 square faces (normal to [100]). The sides of each hexagon and each square are equal.]

Supplemental Information: Specification of High-Symmetry Points

Γ: Center of the Brillouin zone

Simple Cubic
M: Center of an edge
R: Corner point
X: Center of a face

Face-Centered Cubic
K: Middle of an edge joining two hexagonal faces
L: Center of a hexagonal face
U: Middle of an edge joining a hexagonal and a square face
W: Corner point
X: Center of a square face

Body-Centered Cubic
H: Corner point joining four edges
N: Center of a face
P: Corner point joining three edges

Hexagonal
A: Center of a hexagonal face
H: Corner point
K: Middle of an edge joining two rectangular faces
L: Middle of an edge joining a hexagonal and a rectangular face
M: Center of a rectangular face

Real World Applications

Schred Tool in AQME

The Schred Tool in AQME calculates the envelope wavefunctions and the corresponding bound-state energies in a typical MOS (Metal-Oxide-Semiconductor) or SOS (Semiconductor-Oxide-Semiconductor) structure, and in a typical SOI structure, by solving self-consistently the one-dimensional (1-D) Poisson equation and the 1-D Schrödinger equation. The Schred tool is specifically designed for the Si/SiO2 interface and takes into account the mass anisotropy of the conduction bands as well as different crystallographic orientations.

Available resources:

1-D Heterostructure Tool in AQME

The 1-D Heterostructure Tool in AQME simulates confined states in 1-D heterostructures by calculating charge self-consistently in the confined states, based on a quantum mechanical description of the one-dimensional device. The great interest in HEMT devices is motivated by the limits that will be reached with the scaling of conventional transistors. The 1-D Heterostructure Tool is in that respect a very valuable tool for the design of HEMT devices, as one can determine, for example, the position and magnitude of the delta-doped layer and the thicknesses of the barrier and spacer layers that maximize the amount of free carriers in the channel, which in turn leads to larger drive current. This is clearly illustrated in the tool's examples.

Available resources:

Resonant Tunneling Diode Lab in AQME

Put a potential barrier in the path of electrons, and it will block their flow; but if the barrier is thin enough, electrons can tunnel right through due to quantum mechanical effects.
It is even more surprising that, if two or more thin barriers are placed closely together, electrons will bounce between the barriers and, at certain resonant energies, flow right through the barriers as if they were not there. Run the Resonant Tunneling Diode Lab in AQME, which lets you control the number of barriers and their material properties, and then simulate current as a function of bias. These devices exhibit a surprising negative differential resistance, even at room temperature. This tool can be run online in your web browser as an active demo. (A small transfer-matrix sketch of barrier transmission appears at the end of this page.)

Quantum Dot Lab in AQME

Available resources:

Scattering and Fermi's Golden Rule

Scattering is a general physical process whereby some form of radiation, such as light, sound, or moving particles, is forced to deviate from a straight trajectory by one or more localized non-uniformities in the medium through which it passes. In conventional use, scattering also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections that undergo scattering are often called diffuse reflections, and unscattered reflections are called specular (mirror-like) reflections. The types of non-uniformities (sometimes known as scatterers or scattering centers) that can cause scattering are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, defects in crystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.

In quantum physics, Fermi's golden rule is a way to calculate the transition rate (the probability of transition per unit time) from one energy eigenstate of a quantum system into a continuum of energy eigenstates, due to a perturbation. The Bulk Monte Carlo Lab in AQME calculates the dependence of the scattering rates on electron energy for the most important scattering mechanisms and for the materials most commonly used in the semiconductor industry, such as Si, Ge, GaAs, InSb, GaN, and SiC. For the proper parameter set for, for example, 4H SiC, please refer to the following article.

Available Resources:

Coulomb Blockade

Available resources:

Users no longer have to search the nanoHUB to find the appropriate applications for discovery related to quantum mechanics; users, both instructors and students, can simply log in to the AQME toolbox and take advantage of the assembled tools and resources, such as animations, exercises, or podcasts.

AQME Constituent Tools

Piece-Wise Constant Potential Barriers Tool
Bound States Calculation Lab
Band Structure Lab
Periodic Potential Lab
1D Heterostructure Tool
Resonant Tunneling Diode Simulator
Quantum Dot Lab
Bulk Monte Carlo Lab
Coulomb Blockade Simulation
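The tunneling and resonance behavior described above can be illustrated with a textbook transfer-matrix calculation for piecewise-constant potentials. What follows is a minimal sketch, using the free-electron mass and made-up barrier parameters; it is not the algorithm the nanoHUB tools themselves implement.

import numpy as np

hbar = 1.054571817e-34   # J*s
m0 = 9.1093837015e-31    # free-electron mass, kg
eV = 1.602176634e-19     # joules per eV

def transmission(E_eV, interfaces, potentials_eV, mass=m0):
    # interfaces: x_1 < ... < x_{N-1} in meters; potentials: V_0..V_{N-1} in eV
    E = E_eV * eV
    V = np.asarray(potentials_eV) * eV
    k = np.sqrt(2.0 * mass * (E - V).astype(complex)) / hbar  # imaginary inside a barrier
    T = np.eye(2, dtype=complex)
    for j, x in enumerate(interfaces):       # match psi and psi' at each interface
        kj, kn = k[j], k[j + 1]
        M = 0.5 * np.array([
            [(1 + kj / kn) * np.exp(1j * (kj - kn) * x),
             (1 - kj / kn) * np.exp(-1j * (kj + kn) * x)],
            [(1 - kj / kn) * np.exp(1j * (kj + kn) * x),
             (1 + kj / kn) * np.exp(-1j * (kj - kn) * x)],
        ])
        T = M @ T
    t = np.linalg.det(T) / T[1, 1]           # amplitude with no wave incident from the right
    return abs(t) ** 2 * k[-1].real / k[0].real

nm = 1e-9  # double barrier: two 0.3 eV, 2 nm barriers around a 6 nm well (made-up values)
xs = [0.0, 2 * nm, 8 * nm, 10 * nm]
Vs = [0.0, 0.3, 0.0, 0.3, 0.0]

Es = np.linspace(0.005, 0.29, 600)
Ts = [transmission(E, xs, Vs) for E in Es]
i = int(np.argmax(Ts))
print(f"transmission peaks near E = {Es[i]:.3f} eV with T = {Ts[i]:.3f}")

Running the same routine with a single barrier (xs = [0.0, 2 * nm], Vs = [0.0, 0.3, 0.0]) gives only exponentially suppressed tunneling; that contrast with the sharp double-barrier resonance is exactly what the Resonant Tunneling Diode Lab lets students explore with real material parameters.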
Akano's Blog

The Harmonic Oscillator

Posted by Akano Toa of Electricity, in Math/Physics, Apr 12 2012 · 66 views
science, simple, harmonic and 3 more...

My Classical Mechanics professor quoted someone in class the other day: "The maturation of a physics student involves solving the harmonic oscillator over and over again throughout his/her career." (or something to that effect)

So, what is the harmonic oscillator? Otherwise known as the simple harmonic oscillator, it is the physical situation in which a particle is subject to a force whose strength is proportional to the displacement of the particle from equilibrium, known as Hooke's Law, or, in math terms,

F = -kx

where F is our force, x is our displacement, and k is some proportionality constant (often called the "spring constant").

That sounds swell and all, but to what situations does this apply? Well, for a simple example, consider a mass suspended on a spring. If you just let it sit in equilibrium, it doesn't really move, since the spring is cancelling out the force of gravity. However, if you pull the mass slightly off of its equilibrium point and release it, the spring pulls the mass up, compresses, pushes the mass down, and repeats the process over and over. So long as there is no outside force or friction (a physicist's dream), this will continue oscillating into eternity, and the position of the mass can be mapped as a sine or cosine function.

What is the period of the oscillation? Well, it turns out that the square of the period is related to the mass and the spring constant k in this fashion:

T² = 4π²m/k

This is usually written in terms of the angular frequency, which is 2π/T. This gives us the equation

(2π/T)² = ω² = k/m

This problem is also a great example of a system in which the total energy, call it E, is conserved. At the peak of the oscillation (when the mass is instantaneously at rest), all the energy is potential energy, since the particle is at rest and there is no energy of motion. At the middle of the oscillation (when the mass is at equilibrium and moving at its fastest), the potential energy is at a minimum (zero) and all the energy in the system is kinetic energy. Kinetic energy, denoted by T (and not to be confused with the period), is equal to mv²/2, and the potential energy of the simple harmonic oscillator is kx²/2. Thus, the total energy can be written as

E = mv²/2 + kx²/2 = p²/2m + kx²/2

where I've made the substitution p = mv. Advanced physics students will note that this is the Hamiltonian for the simple harmonic oscillator.

Well, this is great for masses on springs, but what about more natural phenomena? What does this apply to? Well, if you like music, simple harmonic oscillation is what air undergoes when you play a wind instrument. Or a string instrument. Or anything that makes some sort of vibration. What you're doing when you play an instrument (or sing) is forcing air, string(s), or electric charge (for electronic instruments) out of equilibrium. This causes the air, string(s), or current to oscillate, which creates a tone. Patch a bunch of these tones together in the form of chords, melodies, and harmonies, and you've created music. A simpler situation is blowing over a soda/pop bottle. When you blow air over the mouth of the bottle, you create an equilibrium pressure for the air above the mouth of the bottle.
Air that is slightly off of this equilibrium will oscillate in and out of the bottle, producing a pure tone.

Also, if you have two atoms that can bond, the bonds that are made can act as Hooke's Law potentials. This means that, if you vibrate these atoms at a specific frequency, they will start to oscillate. This can tell physicists and chemists about the bond lengths of molecules and what those bonds are made up of. In fact, the quantum mechanical harmonic oscillator is a major topic of interest, because the potential energy between particles can often be approximated as a Hooke's Law potential near minima, even if it's much more complex elsewhere.

Also, for small angles of oscillation, pendula act as simple harmonic oscillators, and these can be used to keep track of time, since the period of a pendulum can be determined from the length of its support. Nowadays, currents sent through quartz crystals provide the oscillations for timekeeping more often than pendula, but when you see an old grandfather clock from the olden days, you'll know that the pendulum inside the body is what keeps its time.

Hopefully you can now see why we physicists solve this problem so many times on our journey to physics maturity. :P (A tiny numerical check of the period formula sits at the very end of this page.)

Rock Farms And K K

Posted by Akano Toa of Electricity, Mar 25 2012 · 50 views

I has new comic. Head on over to the topic and view it! It features the return of KK in not the stinger! :o

It's All Relative

Posted by Akano Toa of Electricity, in Math/Physics, Mar 22 2012 · 85 views
math, physics, Einstein and 1 more...

Being a physics grad student has seen me be in quite the scientific mood lately, hasn't it? Well, unfortunately, I still don't have a new comic made (I'm sorry, everyone! ><), but I do have another idea for a blog entry. Last week, Pi Day (March 14) marked Einstein's 133rd birthday, and since my Classical Mechanics course is covering the Special Theory of Relativity, I thought I'd try to cover the basic ideas in blog form.

According to the laws of physics laid down by Sir Isaac Newton, all non-accelerating observers witness the same laws of physics. This included an idea of simultaneity: the idea that someone traveling on the highway at 60 mph would witness an event occur at the exact same time as someone who was just sitting on the side of the highway at rest. The transformation from a reference frame in motion to one at rest in Newtonian physics is known as a Galilean transformation, where x is shifted by -vt, or minus the velocity times time. Under such transformations, the laws of physics (like Newton's second law, F = ma) remain invariant (don't change).

However, during the 19th century, a man by the name of James Clerk Maxwell formulated a handful of equations, known now as Maxwell's equations, that outline the theory known as electromagnetic theory. Of the many new insights this theory gleaned (among these the ability to generate electricity for power, which every BZP member uses), one was that light is composed of oscillating electric and magnetic fields; light is an electromagnetic wave. By using his newly invented equations, Maxwell discovered what the speed of light was by formulating a wave equation. When his equations are used to describe electromagnetism, the speed of light is shown to be the same regardless of reference frame; in other words, someone traveling near the speed of light (as long as they weren't accelerating) would see light travel at the same speed as someone who was at rest. According to Newton's laws, this didn't make sense!
If you're in your car on the highway and traveling at 60 mph while another car in the lane next to you is traveling at 65 mph, you don't see the other car moving at 65 mph; relative to you, the other car moves at 5 mph. The reason that light is different is because a different theory governs its physics. This brought about a dilemma: is Maxwell's new electromagnetic theory wrong? Or does Newtonian mechanics need some slight revision? This is where Einstein comes in. He noticed the work of another physicist, Lorentz, who had worked on some new transformations that not only caused space to shift based on reference frames moving relative to each other, but also shifted time. Einstein realized that if light had the same speed in all non-accelerating reference frames, then objects moving faster experienced time differently than those that moved slower. This would come to be known as the Special Theory of Relativity. How does this make sense? Well, if you have some speed that must remain constant no matter how fast one is traveling, you need time to shift in addition to shifting space to convert between both reference frames, since speed is the change in distance over the amount of time that displacement took place. If you have two reference frames with some relative speed between them, the only way to shift your coordinates from one to another and preserve the speed of light is if both frames experience their positions and times differently. This means that, if something moves fast enough, a journey will take less time in one frame than the other. Special relativity says that moving clocks progress more slowly than clocks at rest, so someone traveling in a rocket at a speed comparable to the speed of light will find that the journey took less time than someone who had been anticipating his arrival at rest. This also means that if someone left Earth in a rocket traveling near the speed of light and came back ten years later would not have aged ten years, but would be younger than someone who was his/her age before his journey took place. Weird, huh? If you think this is crazy or impossible, there have been experiments done (and are still going) to try to confirm/reject the ideas of special relativity, and they all seem to support it. There's another relativity at play as well known as general relativity, which states that gravitational fields affect spacetime (the combination of space and time into one geometry). General relativity says that the higher up you are in a gravitational field, the faster clocks run (time speeds up). A proof of this theory is GPS; the satellites that help find your position by GPS are all higher up in Earth's gravitational field than we are, and thus their clocks run faster than those on Earth's surface. If general relativity weren't considered in the calculations to figure out where you are on Earth, your GPS would be off by miles. Posted Image Happy Birthday... Posted by Akano Toa of Electricity , Feb 22 2012 · 80 views birthday, Hertz, KopakaKurahk ...to Heinrich Hertz! And George Washington! And Frederic Chopin! And KopakaKurahk! So many and's! Also I am now 23. 8D Posted Image Complex Numbers Posted by Akano Toa of Electricity , in Math/Physics Feb 19 2012 · 66 views math, i, e, pi Math is a truly wonderful topic, and since I'm procrastinating a little on my physics homework, I'm going to spend some time talking about the complex numbers. Most of us are used to the real numbers. 
Real numbers consist of the whole numbers (0, 1, 2, 3, 4, ...), the negative numbers (-1, -2, -3, ...), the rational numbers (1/2, 2/3, 3/4, 22/7, ...), and the irrational numbers (numbers that cannot be represented by fractions of integers, such as the golden ratio, the square root of 2, or π). All of these can be written in decimal format, even though they may have infinitely many decimal places.

But when we use this number system, there are some numbers we can't write. For instance, what is the square root of -1? In math class, you may have been told that you can't take the square root of a negative number. That's only half true: you can't take the square root of a negative number and write it as a real number, because the square root of a negative number is not part of the set of real numbers. This is where the complex numbers come in.

Suppose I define a new number, let's call it i, where i² = -1. We've now "invented" a value for the square root of -1. Now, what are its properties? If I take i³, I get -i, since i³ = i·i². If I take i⁴, I get i²·i² = +1. If I multiply this by i again, I get i. So the powers of i are cyclic through i, -1, -i, and 1.

This is interesting, but what is the magnitude of i, i.e. how far is i from zero? Well, the way we take the absolute value in the real number system is by squaring the number and taking the positive square root. This won't work for i, though, because we just get back i. Let's redefine the absolute value by taking what's called the complex conjugate of i, multiplying the two together, and then taking the positive square root. The complex conjugate of i is obtained by taking the imaginary part of i and throwing a negative sign in front. Since i is purely imaginary (there are no real numbers that make up i), the complex conjugate is -i. Multiply them together and you get (-i)·i = -1·i² = 1, and the positive square root of 1 is simply 1. Therefore, the number i has a magnitude of 1. It is for this reason that i is known as the imaginary unit!

We can think of points on the complex plane as being represented by a vector which points from the origin to the point in question. The magnitude of this vector is given by the absolute value of the point, which we can denote as r. The x-value of this vector is given by the magnitude multiplied by the cosine of the angle made by the vector with the positive part of the real axis. This angle we can denote as ϕ. The y-value of the vector is the imaginary unit, i, multiplied by the magnitude of the vector times the sine of the angle ϕ. So we get that our complex number, z, can be written as z = r(cos ϕ + i sin ϕ).

The Swiss mathematician Leonhard Euler discovered a special identity relating to this equation, known now as Euler's Formula, that reads as follows:

e^(iϕ) = cos ϕ + i sin ϕ

where e is the base of the natural logarithm. So, we can then write our complex number as z = re^(iϕ). What is the significance of this? Well, for one, you can derive one of the most beautiful equations in mathematics, known as Euler's Identity:

e^(iπ) + 1 = 0

This equation contains the most important constants in mathematics: e, Euler's number, the base of the natural logarithm; i, the imaginary unit which I've spent this whole time blabbing about; π, the irrational ratio of a circle's circumference to its diameter, which appears all over the place in trigonometry; 1, the real unit and multiplicative identity; and 0, the additive identity.

So, what bearing does this have on real life? A lot.
Imaginary and complex numbers are used in solving many differential equations that model real physical situations, such as waves propagating through a medium and wave functions in quantum mechanics, and in fractals, which in and of themselves have a wide range of real-life applications, along with others that I haven't thought of. Long and short of it: math is awesome.

Brickshelf's Down?

Posted by Akano Toa of Electricity, Jan 28 2012 · 148 views
brickshelf, down, server, y u no

So, Brickshelf and Majhost's servers are down. Too bad I can't upload the comic I've totally finished and not at all procrastinated on. :rolleyes:

Seriously though, my apologies for the lack of updates to my comics. I have not had as much time this semester to do the things I enjoy as I would like (we kinda hit the ground running). I do hope to be inspired soon and start working on a comic. Another thing sort of slowing me down is that I'm getting used to using GIMP, which is quite different from Photoshop, which my new computer does not have. Thus, I have to make do.

In other news, this semester I'm taking quantum mechanics version 2.0 and classical mechanics. In both classes (ironically) we're working with Lagrangian mechanics, as the classical Lagrangian (the difference of the kinetic and potential energies of a system) is useful in deriving equations to describe systems in both the classical and quantum mechanical regimes. In fact, when one uses the Lagrangian as a way to formulate the wavefunctions of quantum mechanics, the Hamilton-Jacobi equation (a classical physics equation) pops out of the Schrödinger equation! It's as though physics is self-consistent or something...

Video Games...in 3D!

Posted by Akano Toa of Electricity, Jan 08 2012 · 142 views
Nintendo, 3DS, Zelda and 4 more...

So, I purchased a lovely Zelda-edition 3DS with some money I got for Christmas, and some out of my own pocket, after the festive holiday. I have to say, it is awesome. Playing the classic game that got me into the Zelda series in 3D is fun, and due to the fact that I am used to 3D stereogram images, the 3D bothers my eyes minimally. Actually, I think OoT is a great game to do in 3D, because there are many scenes that the 3D adds to well (such as Navi's flight at the beginning, establishing shots of dungeons, and basically any scene in the Chamber of Sages). Another great thing about the 3DS is its two external cameras, enabling you to take stereogram pictures. This was one of the biggest appeals of buying one (the deciding factor was the Zelda-edition-ness). To demonstrate, I have put some 3D LEGO pictures here. Note that the images are crossview stereograms. Enjoy!

Welcome To The Herd!

Posted by Akano Toa of Electricity, in My Little Pony, Dec 15 2011 · 77 views
brony, My Little Pony

I am now officially a brony thanks to KK and Tekulo. Lauren Faust can really make a cartoon series that multiple groups can enjoy while still being targeted at girls. The fact that Timmy Turner's voice is in the series doesn't hurt either. My favorite is Fluttershy. :P

Looking On The Bright Side...

Posted by Akano Toa of Electricity, in LEGO, Dec 01 2011 · 48 views
fading, LEGO, retr0bright, white

I don't know if anyone here has heard of Retr0bright (yes, the "0" is intentional), but it is one of the best tools that someone like me with older LEGO sets could have possibly found whilst perusing the interwebs.
It's a mixture of hydrogen peroxide (H2O2) and Oxi-Clean (or a similar product) which takes old white, gray, even blue pieces that have faded and gained an ugly orange-y tint, and restores them to their former color. If you don't know of this phenomenon, you either have not been collecting LEGO long enough, or you keep your LEGO sets completely shielded from UV light. What happens is that the sun, whilst supplying our lovely planet with energy for us to live (yay, sun!), also gives off these lovely rays in the energy domain of ultraviolet light, which most of you probably know as the reason we get sunburns when we are outside too long in the summer (boo, sun. :() Another detrimental effect of this UV radiation is the fading of LEGO pieces.

Why/how does it fade them? Well, Earth is extra special in that its atmosphere has wonderfully healthy amounts of oxygen (O2) gas, which we need to breathe and live (yay, oxygen!). Not only do we like oxygen, but so does ABS plastic, from which LEGO is made. The plastic has a compound in it that possesses bromine, which, for those of you who do not know your periodic tables, is a halogen in the second-to-last column of the periodic table and is, thus, highly reactive when on its own. Fortunately, it is nestled in the ABS compound, but this doesn't quite satiate its need for buddies to bond with (because it's greedy that way), so it decides to find more buddies in our own air – the very oxygen we breathe! What does UV radiation have to do with this? Well, it turns out that bonding takes energy, and the bromine within the ABS does not have the energy by its lonesome to grab a buddy oxygen from the air (since oxygen gas is fairly stable and thus requires more energy to separate). So, the UV radiation of the sun is just the kick it needs to bond with oxygen, thus producing this:

[photo of faded pieces]

Ugly, huh? But someone discovered that our friend hydrogen peroxide (with a catalyst found in Oxi-Clean detergents) is able to reverse this process with the help of – guess what – UV radiation. That's right, the same thing that triggers the fading is also what allows it to reverse! Weird, huh? I decided to try this process on some of my faded white and gray pieces from some of my older sets (circa 1998-2000, mostly Adventurers theme) and this was the lovely result:

[before-and-after photo]

The difference is like night and day. For those of you wondering if it affected the printing on the skulls of the skeletons or the minifig bodies, the answer is no, it did not. Truly remarkable and a relief that my old pieces can shine as the pearly whites they were meant to be. :)

Revisiting Childhood.
Posted by Akano Toa of Electricity, in LEGO, Nov 17 2011 · 46 views
Tags: LEGO, 90s, UFO, Fright Knights

Recently I acquired an account on the lovely secondhand LEGO store known as BrickLink. This has been wonderful for my inner child and slightly stressful for my pocketbook, as I have slowly been buying sets from my childhood that I saw in LEGO catalogs but was unfortunate never to get. Of those nostalgic sets, I acquired the Night Lord's Castle and the Alien Avenger of the Fright Knights and UFO themes, respectively.

Night Lord's Castle: Awesome. It's pretty much everything a creepy castle run by a vampire and witch should have: the bats adorning the entrance and tower, the prison cell in the tall tower for the good guys, and the eerie crystal ball predicting the doom of anyone who dares oppose them make the atmosphere wonderfully appropriate.
Basil's throne is a nice touch in the main room of the castle, and the large oak doors on the side give it a sinister majesty (note, I have it in the configuration of the front of the instructions/box, for those curious). Also, the fact that it's swarming with guards adds to the "do not mess with us because we're evil and magical" vibe.

Alien Avenger: The epitome of a UFO. It's a giant (for LEGO) flying saucer manned (?) by three aliens and an android guy, with two extraterrestrial buggies to explore various worlds. Oh, and Alpha Draconis' ship detaches from the main vessel. Pretty darn cool. The rotating laser cannons on the front and the magnetic buggy-lifting hose are great touches and make me want to reenact a scene of aliens abducting cattle (I need to obtain some LEGO moos!). Sorry for the lack of pictures. I promise when I get a decent picture-taking apparatus (the iPhone 3G's camera is surprisingly lacking compared to my old phone) I will post pics.

EDIT: Picture of the Alien Avenger:

[photo of the Alien Avenger]

I should also review some of my other 90s sets some other time while I'm at it. I miss those days.
Thursday 22 June 2017
Popular Standard View of stdQM

Monday 19 June 2017
Restart of Icarus Simulation AB
Customers are welcome.

Saturday 3 June 2017
realQM: Helium Ground State -2.9036 = Success
PS1 The 1st ionisation energy as observed is supposed to be 0.903569881854.

Trump's Reason to Withdraw from the Paris Accord

When President Trump declared that the US will pull out of the Paris Climate Accord, he did not repeat his earlier analysis that CO2 climate alarmism is a hoax without scientific support. He could have done that on very good grounds, but he did not get into the question whether CO2 emissions from human activity are a real threat to the planet or not, which some climate skeptics regret. Trump simply referred to the fact that even if all commitments of a Paris Accord were fully fulfilled, or more, the total effect according to the very dogmas of CO2 climate alarmism would be at most a 0.2 C reduction of global warming at the end of the century, that is, zero effect. His logic was that it would be immoral to deliberately deny poor people access to cheap fossil fuel and keep them in poverty, if the effect on climate would be zero. This shows the true dilemma of climate alarmism: If there is a real threat, then the planned measures to avoid catastrophe are totally inadequate and thus meaningless, and then immoral by causing human suffering. If there is no real threat, then the planned measures are even more meaningless and immoral. This dilemma is covered up by mainstream media selling climate alarmism, but comes out in the true face of climate alarmism as expressed by Joachim Schellnhuber, climate advisor to Merkel and the Pope, asking for a "great transformation" of human civilisation. This is something completely different from buying an electric car. Think of that.

PS When Scott Pruitt, as new Director of the EPA and chief architect, together with Myron Ebell, of Trump's CO2 agenda and the decision to withdraw from the Paris Accord, was asked if he knew what Trump was "thinking about climate", Pruitt responded that frankly he did not know and that he had not discussed this question with Trump. To Pruitt and Ebell (and to the world) it is enough to know that their agenda is supported by Trump. This fits with Trump's decision to refrain from repeating his claim that climate alarmism is a "hoax" (which is impossible to prove), because the Paris Accord is meaningless, "hoax" or not.

Tuesday 30 May 2017
Kutta Condition, God's Finger and the Secret of Flight

The New Theory of Flight revealing the secret of flight (article1, article2, book and website) is now backed by new computations in realistic geometry, to be presented next week at the High-Lift Prediction Workshop III. At this historic moment, let me recall the Old Theory of Flight by Kutta-Zhukovsky, presented around 1905, which is still the accepted textbook explanation of the generation of lift by an airfoil. The Old Theory states that an airfoil is capable of generating lift because it has a sharp trailing edge, which is supposed to force potential flow without lift separating on the upper surface of the wing to instead separate at the trailing edge and then generate lift by causing a redirection of the airflow, as illustrated in the generic textbook figure of the Kutta condition. The Old Theory contains two unphysical effects, which happen to balance and then miraculously give a physical result = lift. The two unphysical effects are: 1. The start is 2d potential flow without lift separating on top of the wing. 2.
By making the trailing edge sharp, the flow is forced to separate at the trailing edge and then give lift.

The New Theory shows that 2d flow is unphysical, because real flow contains completely crucial 3d features. To believe that real flow can be forced to separate at the trailing edge by making it sharp is to give yourself access to the action of a God's finger of unlimited power. In numerics you can play God and set the velocity to zero wherever you want, but that is simulated virtual reality and not real physics. It is like putting a needle into a voodoo doll believing it will have an effect on a real person. This is voodoo-physics. Yet, this is the textbook explanation of lift. To test, ask your favourite aerodynamicist: 1. Why do airfoils have a sharp trailing edge? 2. What happens if the trailing edge is not sharp but more or less rounded? After this experience, you will be more motivated to dig into the New Theory of Flight.

PS The book will now be updated to find an efficient publisher.

Tuesday 16 May 2017
realQM Excited States

I have updated realQM with a section on excited states. The interested reader will there find that realQM offers a natural way to model the excitation of electrons in an outermost shell by replacing the electrons in inner shells and the kernel by an effective kernel of a certain radius and reduced charge, thus relating in principle the excitation of all atoms to that of Hydrogen. In realQM electron wave functions have local support and occupy different domains in space, which gives the model with an effective kernel a direct physical meaning, while in stdQM wave functions have global support and a precise allocation of electrons to different shells is impossible.

Classical vs Quantum Physics According to Lubos

Saturday 6 May 2017
Schrödinger: Do Electrons Think?

Schrödinger's equation is the basic mathematical model of quantum mechanics. It was first formulated for the Hydrogen atom with one electron in terms of a wave function $\psi (x,t)$ depending on a 3d space coordinate $x$ and a time coordinate $t$, with $\vert\psi (x,t)\vert^2$ representing electron charge density at $(x,t)$. Schrödinger's equation expresses stationarity of an associated energy functional, and the ground state is defined as the charge density of minimal energy. Since the agreement between model and observation was perfect for Hydrogen, Schrödinger's equation was greeted as the most stunning triumph of the human mind since Newton's law of gravitation. The generalisation of Schrödinger's equation to atoms with $N > 1$ electrons presented itself as a formal extension into a wave function $\psi (x_1,...,x_N)$ depending on $N$ 3d space coordinates $x_j$, altogether $3N$ space coordinates. But such a multi-d wave function could no longer be interpreted as a charge density in physical 3d space, only as the probability of finding, at any given time, electron $j$ at position $x_j$ for $j=1,...,N$, as if the electrons as particles were randomly jumping around. This was coined the Copenhagen Interpretation of Bohr-Born-Heisenberg, which took over the scene against heavy protests from Schrödinger and Einstein among others. Schrödinger phrased his protest in many ways, and in particular as the question: do electrons think? Schrödinger argued that if electrons jump around randomly as in the Copenhagen Interpretation, then they cannot be viewed to think. But if electrons instead in a deterministic way react upon forces so as to minimise energy, then they can be viewed to think in some sense.
Schrödinger would thus give his answer as: Yes, electrons do think! as a protest against the randomness without thought of the Copenhagen Interpretation. This connects to Descartes' "I think and therefore I am (exist)". With the same logic for the electron, physical existence would be linked to thinking, and so electrons do exist because they think and do not jump randomly without thought. What do you think?

Friday 5 May 2017
New Web Site: Real Quantum Mechanics

I have launched a new web site describing a new approach to atom physics in terms of classical continuum mechanics in three space dimensions, named realQM, also presented as a book in draft form. Take a look and see if you get encouraged to follow the further development of this project.

Wednesday 3 May 2017
Programming in the Mathematics Subject: As Little As Possible?

The Government decided on 9 March 2017 that, starting in the autumn term of 2017, programming shall be included as part of mathematics teaching in compulsory and upper secondary school, see an earlier blog post. For this to become reality, new teaching materials and further training of teachers are required. For this purpose the National Agency for Education (Skolverket) says it wants to add some modules to the Teacher Portal (Lärportalen), in the style of the modules designed for the Boost for Mathematics (Matematiklyftet) by, among others, NCM in Gothenburg. Will this be enough? That depends on the ambition, which can be anything from (1) as little as possible, to (2) a little more, to (3) as much as would be warranted given the Government's mandate. Here we can expect broad support for (1), since there are strong forces that want to keep mathematics teaching in its traditional form without disturbing elements of programming. With (1) the task of producing new teaching materials and training is simplified considerably, since almost nothing needs to be done. Since no action has been noticeable after the Government's decision in March, and the autumn term of 2017 is soon here, it seems that the school world is settling for (1). But (1) was not what the Government intended. I have proposed to NCM that I could contribute Matematik-IT, which is in line with (3). We shall see whether NCM thinks that would be good, or whether (1) applies at NCM as well. As for putting a ceiling at level (1) for everyone, which may very well become reality, one can say that it would not be in line with the Government's intentions. Surely there should be room for alternatives in line with (2) and (3) for those schools/teachers who want more than (1)? Or would that disturb a principle of uniform schooling?

PS Neither the Swedish Mathematical Society nor the National Committee for Mathematics has expressed any opinion on the Government's decision to change school mathematics teaching. This is in line with their earlier stance of not engaging with school mathematics, and in case (1) nothing needs to be said anyway.

Tuesday 2 May 2017
CO2 Global Warming Alarmism: Hour of Reckoning

Driving in the wrong direction on a one-way street, firmly believing it to be a two-way street, is stupid and potentially deadly for other people. The US Environmental Protection Agency EPA has now cleansed its web page of CO2 global warming alarmism, and US Energy Secretary Perry declares:
• We should 'renegotiate' the Paris Climate Change Agreement.
This signals the beginning of the end of the CO2 alarmism driven by EU politicians and US Democrats. This is a victory for rational science, showing that the "CO2 greenhouse effect" has been artificially boosted to seemingly dangerous levels without proper scientific evidence, only in order to fit a certain political agenda.
I feel happy to have contributed to this insight through an analysis of the unphysical nature of the concept of "back radiation", which is central to the proclaimed alarmingly big "CO2 greenhouse effect". You find "back radiation" in many books on atmospheric physics as one part of a "two-stream" radiative transfer model originally proposed by Schwarzschild in 1905, with net heat transfer warm-to-cold as the difference of two gross heat transfers warm-to-cold and cold-to-warm. But what you find in many physics books is not necessarily true physics, and this is the case with two-stream radiative heat transfer, which is fake science. This is because heat transfer cold-to-warm violates the 2nd law of thermodynamics. In the two-stream Schwarzschild equations this is present as an effect of unphysical absorption from unphysical back radiation. Schwarzschild formulated his model to allow analytical solution as first priority and did not worry about unphysical aspects. Two-stream radiative transfer is based on a misinterpretation of the Stefan-Boltzmann-Planck law $\sigma T^4$ as the radiative heat energy emitted by a black body of temperature $T$ Kelvin independent of the temperature of the environment of the body, while the physically correct interpretation is radiative energy emitted into a background of temperature zero Kelvin. The radiative heat energy emitted by a black body of temperature $T$ in an environment of temperature $T_0$ is thus given by $\sigma (T^4-T_0^4)$ if $T_0\le T$. If $T_0>T$ then the body absorbs energy from the environment and emits no energy. (A numerical illustration of the difference between gross and net flux is sketched below.) The misinterpretation of the SBP law is widely spread and apparently accepted by many more or less prominent physicists. This is made possible by the fact that the standard derivation of the SBP law is based on statistics obscuring real physics. I have given an alternative derivation based on transparent physics exhibiting the misinterpretation. CO2 alarmists like two-stream gross flow, because small changes of gross flow can be big and support alarmism, while small changes of net flow will remain small and give no reason for alarm. And true radiative heat transfer is one-stream warm-to-cold. In short, the CO2 swindle is based on unphysical two-stream radiative heat transfer between the Earth surface and the atmosphere of size 300 W/m2, claimed to suggest a global warming alarm of 3 C, while the true net transfer is 10 times smaller, about 30 W/m2, which can only suggest a harmless warming of 0.3 C. There is much evidence that CO2 alarmism is scientific swindle, a basic element being the unphysical idea of two-stream radiative transfer connected to a misinterpretation of the SBP law. To be ignorant of physics may be inconvenient, but to make a misinterpretation of a physical law believing it to be true physics can be very dangerous; for example believing that a one-way street is a two-way street can be lethal...and the more convinced you are, the more dangerous... It is the responsibility of physicists to guarantee that the basic physics of radiative heat transfer is correctly described in the physics literature. Apparently physicists today have other priorities (like string theory and multiversa), and so the misinterpretation of the SBP law as a basis for CO2 alarm has been able to survive under the wings of physics, but now the time of reckoning is evidenced by EPA...
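To make the gross-versus-net distinction concrete, here is a minimal Python sketch of the two pictures (my own illustration; the temperatures 288 K and 255 K are assumed round values for the Earth surface and the atmosphere, not numbers taken from the post):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T_surface = 288.0     # assumed Earth surface temperature, K
T_atmosphere = 255.0  # assumed effective atmosphere temperature, K

# Two-stream picture: two large opposing gross fluxes
gross_up = SIGMA * T_surface**4        # ~390 W/m^2 upward
gross_down = SIGMA * T_atmosphere**4   # ~240 W/m^2 of "back radiation"

# One-stream picture: a single net flux warm-to-cold
net = SIGMA * (T_surface**4 - T_atmosphere**4)  # ~150 W/m^2

print(gross_up, gross_down, net)
```

The point of the exercise is only that the two gross fluxes are each far larger than their difference, so arguments phrased in gross terms deal in much bigger numbers than arguments phrased in net terms.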
Murry Salby is today a leading skeptic of CO2 alarmism, but the misconception of two-stream radiative heat transfer was present in his 1996 book Fundamentals of Atmospheric Physics, as a result of the mismanagement of fundamental physics in modern times, allowing violation of the 2nd law of thermodynamics as the cornerstone of classical physics.

PS1 Schwarzschild's two-stream model for radiative heat transfer takes the following form for a horizontal slab atmosphere, with vertical coordinate $x$ with $x=0$ at the Earth surface and $x=X$ at the top of the atmosphere, in terms of a gross upward heat flux $F^+(x)$ and a gross downward heat flux $F^-(x)$ satisfying the following advection-absorption equations for $0\lt x\lt X$:
• $\frac{dF^+}{dx}+F^+ = Q$,   (1)
• $-\frac{dF^-}{dx}+F^- = Q$,   (2)
with $Q(x)=\sigma T(x)^4$, which determines the temperature profile $T(x)$. Schwarzschild's model, resulting in linear $Q(x)$, is very simplistic. Only a model with $Q(x)$ constant could be more simplistic. Schwarzschild's model (1-2) expresses conservation of upward and downward heat fluxes through a thin atmospheric layer radiating both upward and downward according to SBP in the form $Q(x)=\sigma T(x)^4$. The model is unphysical because it is based on the misinterpretation of SBP and, through the equation $-\frac{dF^-}{dx} + F^- = Q$, introduces spurious absorption. In a following post I will consider one-stream models for radiative transport based on real physics.

PS2 I have over the years had heated debates about back radiation and two-stream radiative transfer with many people, including Roy Spencer and Judy Curry, and I have met the strong grip physics books, right or wrong, can have on people's minds. Planck is primarily to be blamed, because of his unphysical proof of the law of black-body radiation using statistical arguments, which he himself did not believe in and was very unhappy with, but also secondarily all the leading physicists after Planck who have uncritically accepted what cannot be true physics. I have many times met the reaction, when I express my view that two-stream radiative heat transfer is unphysical, that people get upset and in anger block further communication. Thus the idea of two-stream radiative heat transfer has been protected from scrutiny, allowing it to serve as a cornerstone of the "greenhouse effect" invented to serve CO2 global warming alarmism.

Sunday 16 April 2017
Yes, anti-matter does anti-gravitate!

Sabine Hossenfelder asks in a recent post at Backreaction:
• Why doesn't anti-matter anti-gravitate?
• $\Delta\phi = \rho$
This model is explored under the following categories on this blog.

Monday 20 March 2017
Climate Change Programmes: Waste of Money

The Independent and The Guardian report:
• Donald Trump's budget director calls efforts to combat climate change a "waste of money".
• The budget proposal calls for deep cuts across various federal agencies responsible for different climate change actions.
This means a historic shift from the inhuman irrational political ideological extremism of CO2 climate change hysteria to science, rationality and humanity. All the people of the world can now celebrate that there is more than enough fossil energy on this planet, which can safely be harvested and utilised under controllable environmental side effects, to allow virtually everybody to reach a good standard of living (under the right politics). The industrial revolution was driven by coal, and the boost of the standard of living during the 20th century in the West was made possible by the abundance of oil and gas.
Without CO2 hysteria this development can now be allowed to continue and bring more prosperity to the people, as is now happening on a large scale in China and India. Wasting money on actions without meaning and effect is about the most stupid thing a government can do, and that will now be put to a stop in the US as concerns energy production (if not the military...). It remains for the EU to come to the same conclusion...and that will come, even if the awakening will take some time...

PS Note the shift of terminology from "global warming by CO2" to the more neutral "climate change", motivated by the lack of warming in the "hiatus" of global temperatures during now 20 years. If "stopping climate change" were the issue, the prime concern would be to stop the upcoming ice age. But that is not on the agenda, maybe because nobody believes that this is within the range of climate politics...the only thing that could have an effect would be massive burning of fossil fuel under the belief that it can cause some warming...

Sunday 19 March 2017
The World as Analog Computation?!

Augmented reality by digital simulation of analog reality.

Sabine Hossenfelder expresses on Backreaction:
• No, we probably don't live in a computer simulation!
as a reaction to the Simulation Hypothesis. Sabine starts her discussion with ... and she gets support from Lubos Motl stating:
• Hossenfelder sensibly critical of our "simulated" world.
Is it then meaningless to view the World as the result of analog computation? I don't think so, with arguments presented at The World as Computation. The main idea is that if the world is an analog computation, then it may well be possible to simulate the world by digital computation, and if that is possible we may perhaps better understand and control the world to our benefit. And the other way around: If the world is not an analog computation, then the chances of simulation by digital computation are slim, and then what? Recall that the basic principle of classical rational deterministic physics is to view the evolution of the world as the result of sequential analog computation, as transformations of inputs into outputs according to laws of physics. In short: The World as a Clock, according to Laplace. Or more precisely, The World as a Clock of Infinite Precision, since the laws of physics are supposed to be satisfied exactly. Sabine's standpoint is logical as an expression of the complete collapse of classical rational deterministic physics in the spirit of Laplace into the irrational quantum world of modern physics without determinism, for which the idea of input-output computation is no longer valid. The non-computational aspect of quantum physics comes out in the multi-dimensional form of Schrödinger's equation, which makes it impossible to solve by digital computation. But the complete collapse of rationality/determinism in modern physics is a serious blow to physics as science, and I have sought a way to avoid collapse by modifying Laplace's dictum into The World as a Clock of Finite Precision and by giving Schrödinger's equation an alternative three-dimensional form as realQM, both inviting simulation by digital computation. Sabine's post expresses the paralysis created by the Copenhagen Interpretation of quantum mechanics, presenting a world which is not understandable and therefore not computable and therefore not understandable...a world view which we do not have to accept, because there are alternatives to explore...
There is no evidence that we live in a computer simulation (because the world is not digital), but there is much evidence that an analog world can be simulated by digital computation, and that opens endless possibilities of enhancing the analog world by simulated worlds as augmented reality...

Thursday 9 March 2017
The Government Decides on Programming in the Mathematics Subject

The Government has today decided on clarifications and strengthenings in, among other things, the curricula, course plans and subject plans for compulsory and upper secondary school:
• The purpose is to clarify the school's mission to strengthen pupils' digital competence.
• Programming is introduced as a distinct element in several different subjects in compulsory school, above all in technology and mathematics.
• The changes are to be applied at the latest from 1 July 2018. The responsible school authorities will be able to choose when to begin applying the changes, within a one-year span starting from 1 July 2017.
It now remains to fill this with concrete content. If it is to be anything but an empty gesture, massive further training of teachers, especially teachers of mathematics, is required. My contribution for this purpose is available in the form of Matematik-IT. There are strong conservative forces within mathematics education, from compulsory school to university, that do not want to contribute to broadening the mathematics subject with programming. There are strong forces within computer science to take charge of programming in school according to a principle of "computational thinking". The mathematics subject thus stands at the crossroads that has marked my whole academic life:
1. Renew/extend traditional analytical mathematics with programming = Matematik-IT.
2. Preserve traditional mathematics education and do not let programming disturb the picture.
The Government has decided that 1. shall apply, while academia leans towards 2. What is best for Sweden's pupils? Digital competence with or without mathematics? Mathematics with or without programming? The struggle continues...

Tuesday 28 February 2017
Update of realQM

Friday 24 February 2017
Skeptics' Letter Reaches the White House

The Washington Examiner reports:
• Hundreds of scientists skeptical of climate change urged President Trump on Thursday to withdraw from the United Nations framework on global warming, arguing that doing so would support the administration's pro-jobs agenda and help "people bootstrap themselves out of poverty."
• The letter asserts that carbon dioxide, considered by many scientists to be the primary cause of climate change, "is not a pollutant" at all, but a necessary ingredient for nourishing life on Earth.
• The 300 scientists, led by well-known climate researcher Richard Lindzen of the Massachusetts Institute of Technology, sent a letter to the White House with a petition urging the U.S. to exit from the U.N. Framework Convention on Climate Change.
• "Candidates Trump and Pence promised not only to keep the U.S. out of a harmful international climate agreement, but also to roll back misdirected, pointless government restrictions of CO2 emissions," the letter read. "Dr. Lindzen and hundreds of scientists support you in this."
I was one of the 300 scientists signing the letter (here). The Washington Times also reports on this historic letter:
• Hundreds of scientists urge Trump to withdraw from U.N. climate-change agency
• MIT's Richard Lindzen says policies cause economic harm with 'no environmental benefits'.
Saturday 18 February 2017
Scott Pruitt New Director of EPA

Trump's Pick for EPA Chief Scott Pruitt: Climate Change Dissent Is Not a Crime.

Pruitt is expected to scrap the Clean Power Plan (CPP), which defines the gas of life CO2 as a toxin to be put under severe control, as well as the Paris Agreement formed on the same premise. Pruitt's standpoint, based on science, is that there is no scientific evidence that CO2 is toxic or that CO2 emission from the burning of fossil fuels can cause measurable global warming. The work force at an EPA without the CPP is estimated to be reduced from 15000 to 5000, with the new main concern being clean air and water, and not meaningless control of CO2. This brings hope to all the poor people of the world that there can be energy and food for everybody!

Saturday 11 February 2017
QM: Waves vs Particles: Schrödinger vs Born

From The Philosophy of Quantum Mechanics: The Interpretations of QM in Historical Perspective by Max Jammer, we collect the following account of Schrödinger's view of quantum mechanics as wave mechanics, in full correspondence with realQM:
• Schrödinger interpreted quantum theory as a simple classical theory of waves. In his view, physical reality consists of waves and waves only.
• He denied categorically the existence of discrete energy levels and quantum jumps, on the grounds that in wave mechanics the discrete eigenvalues are eigenfrequencies of waves rather than energies, an idea to which he had alluded at the end of his first Communication. In the paper "On Energy Exchange According to Wave Mechanics," which he published in 1927, he explained his view on this subject in great detail.
• The quantum postulate, in Schrödinger's view, is thus fully accounted for in terms of a resonance phenomenon, analogous to acoustical beats or to the behavior of "sympathetic pendulums" (two pendulums of equal, or almost equal, proper frequencies, connected by a weak spring).
• The interaction between two systems, in other words, is satisfactorily explained on the basis of purely wave-mechanical conceptions, as if the quantum postulate were valid, just as the frequencies of spontaneous emission are deduced from the time-dependent perturbation theory of wave mechanics as if there existed discrete energy levels and as if Bohr's frequency postulate were valid.
• The assumption of quantum jumps or energy levels, Schrödinger concluded, is therefore redundant: "to admit the quantum postulate in conjunction with the resonance phenomenon means to accept two explanations of the same process. This, however, is like offering two excuses: one is certainly false, usually both."
• In fact, Schrödinger claimed, in the correct description of this phenomenon one should not apply the concept of energy at all but only that of frequency.
We contrast this with the following account of Born's view of quantum mechanics as particle statistics:
• Only four days after Schrödinger's concluding contribution had been sent to the editor of the Annalen der Physik, the publishers of the Zeitschrift für Physik received a paper, less than five pages long, titled On the Quantum Mechanics of Collision Processes, in which Max Born proposed, for the first time, a probabilistic interpretation of the wave function, implying thereby that microphysics must be considered a probabilistic theory.
• When Born was awarded the Nobel Prize in 1954 "for his fundamental work in quantum mechanics and especially for his statistical interpretation of the wave function," he explained the motives of his opposition to Schrödinger's interpretation as follows:
• "On this point I could not follow him. This was connected with the fact that my Institute and that of James Franck were housed in the same building of the Göttingen University. Every experiment by Franck and his assistants on electron collisions (of the first and second kind) appeared to me as a new proof of the corpuscular nature of the electron."
• Born's probabilistic interpretation, apart from being prompted by the corpuscular aspects in Franck's collision experiments, was also influenced, as Born himself admitted, by Einstein's conception of the relation between the field of electromagnetic waves and the light quanta.
• In the just-mentioned lecture delivered in 1955, three days before Einstein's death, Born declared explicitly that it was fundamentally Einstein's idea which he (Born) applied in 1926 to the interpretation of Schrödinger's wave function and which today, appropriately generalized, is made use of everywhere.
• Born's probability interpretation of quantum mechanics thus owes its existence to Einstein, who later became one of its most eloquent opponents.
We know that the view of Born, when forcefully promoted by Bohr, eliminated Schrödinger from the scene of modern physics and today is the textbook version of quantum mechanics, named the Copenhagen Interpretation. We understand that Born objected to Schrödinger's wave mechanics because he was influenced by Einstein's 1905 idea of a "corpuscular nature" of light and certain experiments suggesting a "corpuscular nature" of electrons. But associating a "corpuscular nature" to light and electrons meant a giant step back from the main advancement of 19th century physics in the form of Maxwell's theory of light as electromagnetic waves, a step back first taken by Einstein but then abandoned, as expressed by Jammer:
• Born's original probabilistic interpretation proved a dismal failure if applied to the explanation of diffraction phenomena such as the diffraction of electrons.
• In the double-slit experiment, for example, Born's original interpretation implied that the blackening on the recording screen behind the double slit, with both slits open, should be the superposition of the two individual blackenings obtained with only one slit opened in turn.
• The very experimental fact that there are regions in the diffraction pattern not blackened at all with both slits open, whereas the same regions exhibit strong blackening if only one slit is open, disproves Born's original version of his probabilistic interpretation.
• Since this double-slit experiment can be carried out at such reduced radiation intensities that only one particle (electron, photon, etc.) passes the apparatus at a time, it becomes clear, on mathematical analysis, that the $\psi$-wave associated with each particle interferes with itself, and the mathematical interference is manifested by the physical distribution of the particles on the screen. The wave function must therefore be something physically real and not merely a representation of our knowledge, if it refers to particles in the classical sense.
Summing up:
• Real wave mechanics in the spirit of Schrödinger makes a lot of sense, and that is the starting point of realQM.
• Born's particle statistics does not make sense, and the big trouble is that this is the textbook version of quantum mechanics.
How could it be, with these odds, that Born took the scene? The answer is the "obvious" generalisation of Schrödinger's wonderful 3d equation for the Hydrogen atom with one electron, with physical meaning, into the 3N-dimensional linear Schrödinger equation for an atom with $N > 1$ electrons, a trivial generalisation without physical meaning. There should be another generalisation which stays physical, and that is the aim of realQM. In the end Schrödinger may be expected to win the game, because he has a most perfect and efficient brain, according to Born. To get more perspective, let us consider Born's 1954 Nobel Lecture. Born's argument against Schrödinger's wave mechanics in the spirit of Maxwell, in favor of his own particle mechanics in the spirit of Newton, was evidently that a "tick" of a Geiger counter or a "track" in a cloud chamber, both viewed to have a "particle-like quality", can only be triggered by a "particle"; but there is no such necessity...the snap of a whip is like a "particle" generated by a "wave"... Born ends with:
• How does it come about then, that great scientists such as Einstein, Schrödinger, and De Broglie are nevertheless dissatisfied with the situation?
• The lesson to be learned from what I have told of the origin of quantum mechanics is that probable refinements of mathematical methods will not suffice to produce a satisfactory theory, but that somewhere in our doctrine is hidden a concept, unjustified by experience, which we must eliminate to open up the road.

Friday 10 February 2017
2500 Years of Quantum Mechanics

Tuesday 7 February 2017
Towards a New EPA Without CO2 Alarmism

The US Environmental Protection Agency EPA is facing a complete revision along a plan drawn up by CO2-alarmism skeptic Myron Ebell, but the EPA still trumpets the same old CO2 alarmism of the Obama administration under the headline of Climate Change:
• Humans are largely responsible for recent climate change.
• Greenhouse gases act like a blanket around Earth, trapping energy in the atmosphere and causing it to warm. This phenomenon is called the greenhouse effect...and is natural and necessary to support life on Earth. However, the buildup of greenhouse gases can change Earth's climate and result in dangerous effects to human health and welfare and to ecosystems.
The reason that this propaganda is still on the EPA web page can only be that the new director of the EPA, Scott Pruitt, has not yet been confirmed. It will be interesting to see the new web page after Pruitt has implemented Ebell's plan to dismantle CO2 alarmism in the US...and then...

Sunday 5 February 2017
From Meaningless Towards Meaningful QM?

The Schrödinger equation as the basic model of atom physics descended as a heavenly gift to humanity in an act of godly inspiration inside the mind of Erwin Schrödinger in 1926. But the gift showed to hide poison: Nobody could give the equation a physical meaning understandable to humans, and that unfortunate situation has prevailed into our time, as expressed by Nobel Laureate Steven Weinberg (and here):
• My own conclusion (not universally shared) is that today there is no interpretation of quantum mechanics that does not have serious flaws, and that we ought to take seriously the possibility of finding some more satisfactory other theory, to which quantum mechanics is merely a good approximation.
Weinberg's view is a theme on the educated physics blogosphere of today: Sabine agrees with Weinberg that "there are serious problems", while Lubos insists that "there are no problems". There are two approaches to mathematical modelling of the physical world:
1. Pick symbols to form a mathematical expression/equation and then try to give it a meaning.
2. Have a meaningful thought and then try to express it as a mathematical expression/equation.
Schrödinger's equation was formed more according to 1. than 2., and has resisted all efforts to be given a physical meaning. Interpreting Schrödinger's equation has shown to be like interpreting the Bible as authored by God rather than human minds. What makes Schrödinger's equation so difficult to interpret in physical terms is that it depends on $3N$ spatial variables for an atom with $N$ electrons, while an atom with all its electrons seems to share experience in a common 3d space. Here is how Weinberg describes the generalisation from $N=1$ in 3 space dimensions to $N>1$ in $3N$ space dimensions as "obvious":
• More than that, Schrödinger's equation had an obvious generalisation to general systems.
Weinberg takes for granted that what "is obvious" does not have to be explained. But everything in rational physics needs rational argumentation and nothing "is obvious", and so this is where quantum mechanics branches off from rational physics. If what is claimed to be "obvious" in fact lacks rational argument, then it may simply be all wrong. The generalisation of Schrödinger's equation to $N>1$ fell into that trap, and that is the tragedy of modern physics. There is nothing "obvious" in the sense of "frequently encountered" in the generalisation of Schrödinger's equation from 3 space dimensions to $3N$ space dimensions, since it is a giant leap away from reality and as such utterly "non-obvious" and "never encountered" before. In realQM I suggest a different form of Schrödinger's equation as a system in 3d with physical meaning.

PS Note how Weinberg describes the foundation of quantum mechanics:
• The first postulate of quantum mechanics is that physical states can be represented as vectors in a sort of abstract space known as Hilbert space.
• According to the second postulate of quantum mechanics, observable physical quantities like position, momentum, energy, etc., are represented as Hermitian operators on Hilbert space.
We see that these postulates are purely formal and devoid of physics. We see that the notions of Hilbert space and Hermitian operator are elevated to have a mystical divine quality, as if Hilbert and Hermite were gods like Zeus (physics of the sky) and Poseidon (physics of the sea)...much of the mystery of quantum mechanics comes from assigning meaning to such formalities without meaning...

The idea that the notion of Hilbert space is central to quantum mechanics was supported by the idea that Hilbert space, as a key ingredient of the "modern mathematics" created by Hilbert in 1926-32, should be the perfect tool for "modern physics", an idea explored in von Neumann's monumental Mathematical Foundations of Quantum Mechanics. Here the linearity of Schrödinger's equation is instrumental and its many dimensions don't matter, but it appears that von Neumann missed the physics:
• I would like to make a confession which may seem immoral: I do not believe absolutely in Hilbert space no more.
(von Neumann to Birkhoff, 1935)

Friday 3 February 2017
Unphysical Basis of CO2 Alarmism = Hoax

CO2 alarmism is based on an unphysical version of Stefan-Boltzmann's law and the associated Schwarzschild equations for radiative heat transfer, stating a two-way radiative heat transfer from-warm-to-cold and from-cold-to-warm, with the net transfer as the difference between the two-way transfers. This is expressed as "back radiation" from a colder atmosphere to a warmer Earth surface in Kiehl-Trenberth's global energy budget (above) and in Pierrehumbert's Infrared Radiation and Planetary Temperature based on Schwarzschild's equations, presented as the physical basis of CO2 alarmism. In extended writing I have exposed the unphysical nature of radiative heat transfer from-cold-to-warm as a violation of the 2nd law of thermodynamics, see e.g. earlier posts on this blog. Massive two-way radiative heat transfer between two bodies is unphysical because it is unstable, with the net transfer arising from the difference between two gross quantities, and the 2nd law says that Nature cannot work that way: There is only transfer from-warm-to-cold, and there can be no transfer from-cold-to-warm. Radiative heat transfer is always one-way from-warm-to-cold. CO2 alarmism is thus based on a picture of massive radiative heat transfer back-and-forth between the atmosphere and the Earth surface (see the above picture), as an unstable system threatening to go into "run-away global warming" at the slightest perturbation. But there is no true physics behind this picture, only alarmist fiction. Real physics indicates that the global climate is stable rather than unstable, and as such insensitive to a very small change of the composition of the atmosphere upon a doubling of CO2. There is little/no scientific evidence indicating that the effect could be measurable, that is, be bigger than 0.5 C. Note that climate models use Schwarzschild's equations to describe radiative heat transfer, and the fact that these equations do not describe true physics is a death-blow to the current practice of climate simulation used to sell CO2 alarmism. So, when you meet the argument that Pierrehumbert is an authority on infrared radiation and planetary temperature, you can say that this is not convincing, because Pierrehumbert is using incorrect physics (which also comes out in the fact that he forgets gravitation, and not radiation, as the true origin of the very high temperature on the surface of Venus). If now CO2 alarmism is based on incorrect physics or non-physics, then it may be fair to describe it as a "hoax". Think of it: Suppose that "scientific consensus" through MSM is bombarding you with the message that the Earth has to be evacuated, because there is imminent fear that the "sky is going to fall down", because Newton's law of gravitation says that "everything is pulled down". Would you then say that "since it is said so it must be so", or would you say that this is a non-physical misinterpretation of Newton's law? Think of it! The edX course Making Sense of Climate Science Denial is a typical example of CO2 alarmism based on the incorrect physics of "back radiation", which is forcefully trumpeted by the educational system, as illustrated in the key picture of the course.

Tuesday 31 January 2017
The End of CO2 Alarmism

Myron Ebell has formed the new US policy on climate and energy as leader of the EPA transition team and now reveals that Trump will indeed do what he said on that issue. Listen:
• Climate sensitivity to CO2 emission vastly exaggerated.
• Climate industrial complex a very dangerous special interest.
And read about this historic press conference. The question is how long it will take before a complete global exodus from the Paris agreement takes place. It may go very quickly, once the ball starts to roll...maybe in days...

The Trump Administration and the Environment: A Reporter's Primer, Featuring Myron Ebell

Radiation as Superposition or Jumping?

This is a continuation of this post on understanding atomic radiation of frequency $E_2-E_1$ as resonance of a "superposition of two eigenstates" of different frequencies $E_2>E_1$ according to realQM. In the standard view of the Copenhagen Interpretation by Bohr, as stdQM, radiation is instead connected to the "jumping" of electrons between two energies/frequencies $E_2>E_1$. Which is more convincing: superposition or jumping? Superposition connects to linearity, and realQM, while not linear (for more than one electron), may still show features of "near linearity" and thus allow understanding in the form of "superposition", while realQM carries full non-linear dynamics. On the other hand, "jumping" of electrons in stdQM either requires new physics, which is missing, or has no meaning at all. This connects to the non-physical nature of the atom of stdQM discussed in a previous post, presenting a contradiction in particular in the case of atomic radiation, where atoms are observed to interact with the physics of electromagnetics and thus must be physical, because interaction between non-physics and physics is telekinesis or psychokinesis, which is viewed as pseudo-science. String theory and multiversa are spin-offs of stdQM with the non-physical aspects driven to an extreme, and accordingly viewed by many physicists as pseudo-science.

PS In Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference we read (p 132):
• In 1926, with the development of wave mechanics, Schrödinger saw a new possibility of conceiving a mechanism for radiation: the superposition of two waves would involve two frequencies, and emitted radiation could be understood as some kind of "difference tone" (or beat).
• In his first paper on quantisation, Schrödinger states that this picture would be "much more pleasing" than the one of quantum jumps.
• This idea is still the basis of today's semi-classical radiation theory (often used in quantum optics), that is, the determination of classical electromagnetic radiation from the current associated with a charge density proportional to $\vert\psi\vert^2$.
• The second paper refers to radiation only in passing.
Clearly, Schrödinger was heading in a fruitful direction, but he was stopped by Born, Bohr and Heisenberg.

Monday 30 January 2017
Towards a Model of Atoms

...which I hope to assemble into a model which can describe:
• electrons as clouds of charge subject to Coulomb and compression forces
• no conceptual difference between micro and macro
• no probability, no multi-d
• a generalised harmonic oscillator
• small damping from the Abraham-Lorentz force from an oscillating electric charge
• near-resonant forcing with half-period phase shift
Schrödinger passed away in 1961 after a life in opposition to Bohr since 1926, when his equation was hijacked, but his spirit lives...
Compare with the following trivial textbook picture of atomic radiation in the spirit of Bohr.

Sunday 29 January 2017
The Radiating Atom

In the analysis on Computational Blackbody Radiation I used the following model of a harmonic oscillator of frequency $\omega$ with small damping $\gamma >0$ subject to near-resonant forcing $f(t)$:
• $\ddot u+\omega^2u-\gamma\dddot u=f(t)$
with the following characteristic energy balance between outgoing and incoming energy:
• $\gamma\int\ddot u^2dt =\int f^2dt$
with integration over a time period and the dot signifying differentiation with respect to time $t$. An extension to Schrödinger's equation written as a system of real-valued wave functions $\phi$ and $\psi$ may take the form
• $\dot\phi +H\psi -\gamma\dddot \psi = f(t)$            (1)
• $-\dot\psi +H\phi -\gamma\dddot \phi = g(t)$          (2)
where $H$ is a Hamiltonian, $f(t)$ and $g(t)$ represent near-resonant forcing, and $\gamma =\gamma (\dot \rho )\ge 0$ with $\gamma (0)=0$, where $\rho =\phi^2 +\psi^2$ is the charge density. This model carries the characteristics of the model $\ddot\phi+H^2\phi =0$, the 2nd-order-in-time model obtained after eliminating $\psi$ in the case $\gamma =0$, as displayed in a previous post. In particular, multiplication of (1) by $\phi$ and of (2) by $-\psi$ and addition gives conservation of charge if $f(t)\phi -g(t)\psi =0$, as a natural phase-shift condition. Further, multiplication of (1) by $\dot\psi$ and of (2) by $\dot\phi$ and addition gives a balance of total energy, as inner energy plus radiated energy
• $\int (\phi H\phi +\psi H\psi)dt +\gamma\int (\ddot\phi^2 +\ddot\psi^2)dt$
in terms of the work of the forcing.

Saturday 28 January 2017
Physical Interpretation of Quantum Mechanics Needed

The standard textbook Copenhagen Interpretation of quantum mechanics formed by Bohr is not a realist physical theory about "what is", but instead an idealist/positivist non-physical probabilistic theory of "what we can know". This has led modern physics into a black hole of endless fruitless speculations, with the Many Worlds Interpretation by Everett as the absurd result of anyway seeking to give a physical meaning to the non-physical Copenhagen Interpretation. Now, it is a fact that the microscopic world of atoms interacts with the macroscopic world we perceive as real physical. If the microscopic world is declared to be non-real non-physical, then the interaction becomes a mystery. That real physics can interact with real physics is obvious, but to think of interaction between non-real and real physics makes you dizzy, as expressed so well by Bohr:
• Anyone who is not shocked by quantum theory has not understood it.
• Anyone who can contemplate quantum mechanics without getting dizzy hasn't understood it.
The emission spectrum of an atom shows that atom microscopics does interact with electromagnetic macroscopics. Physicists are paid to describe this interaction, but following Bohr this was and still is impossible, and the question is if the pay should continue... In realQM atoms are real, composed of clouds of electric charge around a kernel, and the emission spectrum is explained as the result of charge oscillation within atoms in resonance with exterior electromagnetic waves. To keep being paid, a physicist would say: Look, after all an atom is real, being composed of electron "particles orbiting" a kernel, and the non-real aspect is just that the physics is hidden to inspection and that we cannot know the whereabouts of these particles over time.
So atoms are real, but the nature of this reality is beyond human perception, because you get dizzy when seeking to understand. In particular, it is to Bohr inexplicable that electron particles orbiting the kernel of an atom in its ground state do not radiate, which allows the ground state to be stable. In realQM the charge distribution of an atom in its ground state does not change in time and thus is not a source of radiation, and the atom can remain stable. On the other hand, the charge distribution of a superposition of ground and excited states does vary with time and thus may radiate at the beat frequency, the difference between the excited and ground frequencies (see the numerical sketch below). To Bohr, contact with the inner microscopic world of an atom from the macroscopic would take place at a moment of observation, but that leaves out the constant interaction between micro and macroscopics taking place in radiation. An atom in its ground state is not radiating, and the inner mechanics of the atom is closed to inspection. For this case one could argue that Bohr's view could be upheld, since one would be free to describe the inner mechanics in many different ways, for example in terms of probabilities of electron particle configurations, all impossible to verify experimentally. The relevant problem is then the radiating atom in interaction with an outer macroscopic world, and here Bohr has little to say, because he believes that interaction micro-macro takes place only at observation, in the form of "collapse of the wave function". A real actuality of the inner mechanics of an atom may interact with an actual real outer world, with or without probability, but a probability of an inner particle mechanics of an atom cannot interact with an outer reality, and Bohr discards the first option...actualities can interact but not potentialities... Let me sum up: The inner microscopics of a radiating atom interacts with outer macroscopics, and the interaction requires the microscopics to share physics with the macroscopics. This is not the case in the Copenhagen Interpretation, which thus must be false.
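As a quick numerical check of the beat-frequency claim above (and a preview of the formulas in the next post), here is a small Python sketch; the two "eigenfunctions" and frequencies are made-up stand-ins chosen only to exhibit the time dependence of the charge density:

```python
import numpy as np

# Made-up orthogonal "eigenfunctions" on a 1d grid, standing in for psi_1, psi_2
x = np.linspace(0.0, np.pi, 200)
psi1 = np.sqrt(2.0 / np.pi) * np.sin(x)       # "ground state" with frequency E1
psi2 = np.sqrt(2.0 / np.pi) * np.sin(2 * x)   # "excited state" with frequency E2
E1, E2 = 1.0, 4.0                             # assumed eigenfrequencies

def density(t):
    """Charge density |Psi|^2 of the superposition e^(iE1 t) psi1 + e^(iE2 t) psi2."""
    Psi = np.exp(1j * E1 * t) * psi1 + np.exp(1j * E2 * t) * psi2
    return np.abs(Psi) ** 2

# A pure eigenstate's density never changes in time ...
pure = lambda t: np.abs(np.exp(1j * E1 * t) * psi1) ** 2
print(np.max(np.abs(pure(0.0) - pure(0.7))))              # 0.0: static density

# ... while the superposition density pulses with period 2*pi/(E2 - E1)
T_beat = 2 * np.pi / (E2 - E1)
print(np.max(np.abs(density(0.0) - density(T_beat))))     # ~0: back after one beat
print(np.max(np.abs(density(0.0) - density(T_beat / 2)))) # large: mid-beat change
```

The density returns to itself after one full beat period and is maximally different half-way through, matching the claim that any radiation should come at the difference frequency $E_2-E_1$ rather than at $E_1$ or $E_2$ themselves.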
This was not convincing and prepared the revolution into quantum mechanics in 1926.

Real Quantum Mechanics realQM gives the following answer: The charge density $\vert\Psi (t,x)\vert^2=\psi^2(x)$ of a wave function $\Psi (x,t)=\exp(iEt)\psi (x)$, with $\psi (x)$ satisfying $H\psi =E\psi$, does not vary with time and as such does not radiate. On the other hand, the difference $\Psi =\Psi_2-\Psi_1$ between two wave functions $\Psi_1(x,t)=\exp(iE_1t)\psi_1(x)$ and $\Psi_2(x,t)=\exp(iE_2t)\psi_2(x)$ with $H\psi_1=E_1\psi_1$ and $H\psi_2=E_2\psi_2$, is a solution to Schrödinger's equation and can be written
• $\Psi (x,t)=\exp(iE_1t)(\exp(i(E_2-E_1)t)\psi_2(x)-\psi_1(x))$
with corresponding charge density
• $\vert\Psi (t,x)\vert^2 = \vert\exp(i(E_2-E_1)t)\psi_2(x)-\psi_1(x)\vert^2 = \psi_1^2(x)+\psi_2^2(x)-2\psi_1(x)\psi_2(x)\cos((E_2-E_1)t)$
(for real-valued eigenfunctions), with a visible time variation in space scaling with $(E_2-E_1)$ and associated radiation of frequency $E_2-E_1$ as a beat frequency.

A superposition of two eigenstates thus may radiate, because the corresponding charge density varies in space with time, while pure eigenstates have charge densities which do not vary with time and thus do not radiate.

In realQM electrons are thought of as "clouds of charge" of density $\vert\Psi\vert^2$ with physical presence, which is not changing with time in pure eigenstates and thus does not radiate, while superpositions of eigenstates do vary with time and thus may radiate, because a charge oscillating at a certain frequency generates an electric field oscillating at the same frequency.

In standard quantum mechanics stdQM, $\vert\Psi\vert^2$ is instead interpreted as the probability of a configuration of electrons as particles, which lacks physical meaning and as such does not appear to allow an explanation of the non-radiation/resonance of pure eigenstates and the radiation/resonance at beat frequencies. In stdQM electrons are nowhere and everywhere at the same time, and it is declared that speaking of electron (or charge) motion is nonsensical, and then atomic radiation remains as inexplicable as it was to Bohr in 1913.

So the revolution of classical mechanics into quantum mechanics, driven by Bohr's question and unsuccessful answer, does not seem to present any real answer. Or does it?

PS I have already written about The Radiating Atom in a sequence of posts 1-11, with in particular 3: Resolution of Schrödinger's Enigma connecting to this post.
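The algebra above is easy to see in numbers. Here is a small illustration (my addition, using textbook particle-in-a-box eigenmodes rather than realQM charge clouds): the density of a pure eigenstate is constant in time, while the density of a superposition oscillates at exactly the beat frequency $E_2-E_1$.

```python
import numpy as np

# Particle in a box: psi_n(x) = sqrt(2) sin(n pi x), E_n = (n pi)^2 / 2
# (units with hbar = m = 1).
E   = lambda n: (n * np.pi)**2 / 2
psi = lambda n, x: np.sqrt(2) * np.sin(n * np.pi * x)

x = 0.3                                   # probe the density at one point
t = np.linspace(0, 20, 4000)

# pure eigenstate: |psi_2 exp(-i E_2 t)|^2 does not vary in time
rho_pure = np.abs(psi(2, x) * np.exp(-1j * E(2) * t))**2

# superposition of n = 1 and n = 2: density oscillates at E_2 - E_1
Psi = (psi(1, x) * np.exp(-1j * E(1) * t) +
       psi(2, x) * np.exp(-1j * E(2) * t)) / np.sqrt(2)
rho_sup = np.abs(Psi)**2

print("variation of pure-state density:", np.ptp(rho_pure))    # ~ 0
w = np.fft.rfftfreq(t.size, t[1] - t[0]) * 2 * np.pi
peak = w[np.abs(np.fft.rfft(rho_sup - rho_sup.mean())).argmax()]
print("dominant density frequency:", round(peak, 2))           # ~ 14.8
print("beat frequency E_2 - E_1  :", round(E(2) - E(1), 2))    # ~ 14.8
```

Neither $E_1$ nor $E_2$ shows up in the density; only their difference does, which is the point of the post.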
Wednesday, January 25, 2017

New Curriculum with Programming on the Government's Desk

SVT Nyheter in Gävleborg reports that the new curriculum, with programming as a new subject of study, now lies on the Government's desk for decision, and that several schools in Gävle and Sandviken have already made a running start and introduced the subject. Soon the remaining schools will have to follow. My contribution to meeting the need for new teaching materials is Matematik-IT, ready to be tried!

Tuesday, January 24, 2017

Is the Quantum World Really Inexplicable in Classical Terms?

Peter Holland describes in the opening statement of The Quantum Theory of Motion the state of the art of modern physics in the form of quantum mechanics, as follows:
• The quantum world is inexplicable in classical terms.
• The predictions pertaining to the interaction of matter and light embodied in Newton's laws of motion and Maxwell's equations governing the propagation of electromagnetic fields, are in flat contradiction with the experimental facts at the microscopic scale.
• A key feature of quantum effects is their apparent indeterminism, that individual atomic events are unpredictable, uncontrollable and literally seem to have no cause.
• Regularities emerge only when one considers a large ensemble of such events.
• This indeed is generally considered to constitute the heart of the conceptual problems posed by quantum phenomena, necessitating a fundamental revision of the deterministic classical world view.

No doubt this describes the predicament of modern physics, and it is a sad story: It is nothing but a total collapse of rationality, and as far as I can understand, there are no compelling reasons to give up the core principles of classical continuum physics so well expressed in Maxwell's equations.

If classical continuum physics is modified just a little by adding a new element of finite precision computation, then the apparent contradiction of the ultra-violet catastrophe of black-body radiation, as the root of "quantization", can be circumvented and rationality maintained. You can find my arguments by browsing the labels of this post and the web sites Computational Black Body Radiation and The World as Computation, with further development in the book Real Quantum Mechanics.

And so: No, it may not be necessary to give up the deterministic classical world view when doing atomic physics, the view which gave us Maxwell's equations and opened a new world of electromagnetics connecting to atoms. It may suffice to modify the deterministic classical view just a little bit, without losing anything, to make it work also for atomic physics. After all, what can be more deterministic than the ground state of a Hydrogen atom?

Of course, this is not a message that is welcomed by physicists, who for 90 years have been locked into finding evidence that quantum mechanics is inexplicable, by inventing contradictions between concepts without physical reality. The root of such contradictions (like wave-particle duality) is the linear multi-d Schrödinger equation, which is picked from thin air as a formality without physical content, and just because of that is inexplicable. To advance, it seems that a new Schrödinger equation with physical meaning should be derived...

The question is how to generalise Schrödinger's equation for the Hydrogen atom with one electron, which works fine and can be understood, to Helium with two electrons and so on... The question is then how the two electrons of Helium find co-existence around the kernel. In Real Quantum Mechanics they split 3d space, without the East and West of global politics or Germany...

Quantum Mechanics as Retreat to (German) Romantic Irrational Ideal

Quantum theory is widely held to resist any realist interpretation and to mark the advent of a 'postmodern' science characterised by paradox, uncertainty, and the limits of precise measurement. Keeping his own realist position in check, Christopher Norris provides a remarkably detailed and incisive account of the positions adopted by parties on both sides of this complex debate.

James Cushing gives in Bohmian Mechanics and Quantum Theory: An Appraisal (1996), an account of the rise to domination of the Born-Heisenberg-Bohr Copenhagen Interpretation of quantum mechanics:
• Today it is generally assumed that the success of quantum mechanics demands that we accept a world view in which physical processes at the most fundamental level are seen as being irreducibly and ineliminably indeterministic.
• That is, one of the great watersheds in twentieth-century scientific thought is the "Copenhagen" insight that empirical evidence and logic are seen as necessarily implying an indeterministic picture of nature.
• This is in marked contrast to any classical representation of a clockwork universe.
• A causal program would have been a far less radical departure from the then-accepted framework of classical physics than was the so-called Copenhagen version of quantum mechanics that rapidly gained ascendancy by the late 1920s and has been all-but universally accepted ever since.
• How could this happen?
• It has been over twenty years now since the dramatic and controversial "Forman thesis" was advanced, that acausality was embraced by German quantum physicists in the Weimar era as a reaction to the hostile intellectual and cultural environment that existed there prior to and during the formulation of modern quantum mechanics.
• The goal was to establish a causal connection between this social intellectual milieu and the content of science, in this case quantum mechanics.
• The general structure of this argument is the following. Causality for physicists in the early twentieth century "meant complete lawfulness of Nature, determinism [(i.e., event-by-event causality)]".
• Such lawfulness was seen by scientists as absolutely essential for science to be a coherent enterprise. A scientific approach was also taken to be necessarily a rational one.
• When, in the aftermath of the German defeat in World War I, science was held responsible (not only by its failure, but even more because of its spirit) for the sorry state of society, there was a reaction against rationalism and a return to a romantic, "irrational" ideal.

Yes, quantum mechanics (in its Copenhagen Interpretation, forcefully advocated by Bohr under influence from the anti-realist positivist philosopher Høffding) was a product of German physics in the Weimar republic of the 1920s, by Heisenberg and Born. It seems reasonable to think that if the defeat of Germany in World War I was blamed on a failure of "rationality" and "realism", then a resort to "irrationality" and "anti-realism" would seem rational, in particular in Germany...and so quantum mechanics in its anti-realist form took over the scene as Germany rebuilt its power...

But maybe today Germany is less idealistic and anti-realistic (although the Energiewende is romantic anti-realism), and so maybe also a more realistic quantum mechanics can be allowed to develop...without the standard "shut up and calculate" suppression of discussion...
Monday, April 30, 2012

Spring came late to Germany, but it seems it finally has arrived. The 2012 Riesling has the first leaves and the wheat is a foot high. Lara and Gloria are now 16 months old, almost old enough so we should start counting their age in fractions of years. This month's news is Lara's first molar, and Gloria's first word.

I have been busy writing a proposal for the Swedish Research Council, which is luckily submitted now, and I also had a paper accepted for publication. Ironically, of all the papers that I wrote in the last years, it's the one that is the least original and cost me the least amount of time, yet it's the only one that smoothly went through peer review. Besides this, I'm spending my time with the organization of a workshop, a conference, and a four-week long program. I'm also battling a recurring ant infestation of our apartment, which is complicated by my hesitation to distribute toxins where the children play.

Friday, April 27, 2012

The Nerdly Painter's Blog

In expecto weekendum, I want to share with you the link of Regina Valluzzi's blog Nerdly Painter. Regina has a BS in Materials Science from MIT and a PhD in Polymer Science from University of Massachusetts Amherst, and she does the most wonderful science-themed paintings I've seen. A teaser below. Go check out her blog and have a good start into the weekend!

Wednesday, April 25, 2012

The Cosmic Ray Composition Problem

A recent arXiv paper provides an update on the cosmic ray composition problem:

First the basics: We're talking about the ultra-high energetic end of the cosmic ray spectrum, with total energies of about 10^6 TeV. That's the energy of the incident particles in the Earth rest frame, not the center-of-mass energy of their collision with air molecules (ie mostly nucleons), which is "only" of the order 10 TeV, and thus somewhat larger than what the LHC delivers. After the primary collision, the incoming particles produce a cascade of secondary particles, known as a "cosmic ray shower", which can be detected on the ground. These showers are then reconstructed from the data with suitable software so that, ideally, the physics of the initial high energy collision can be extracted. For some more details on cosmic ray showers, please read this earlier post.

Cosmic ray shower, artist's impression. Source: ASPERA

The Pierre Auger Cosmic Ray Observatory is a currently running experiment that measures cosmic ray showers on the ground. One relevant quantity about the cosmic rays is the "penetration depth," that is, the distance the primary particle travels through the atmosphere until it makes the first collision. The penetration depth can be reconstructed if the shower on the ground can be measured sufficiently precisely, and it is relatively new data. The penetration depth depends on the probability of the primary particle to interact, and with that on the nature of the particle. While we have never actually tested collisions at the center-of-mass energies of the highest energetic cosmic rays, we think we have a pretty good understanding of what's going on by virtue of the standard model of particle physics. All the knowledge that we have, based on measurements at lower energies, is incorporated into the numerical models. Since the collisions involve nucleons rather than elementary particles, this goes together with an extrapolation of the parton distribution function by the DGLAP equation.
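As a quick sanity check on the energies quoted above (my own arithmetic, not from the post): for a fixed-target collision the center-of-mass energy grows only with the square root of the lab-frame energy,

```python
# sqrt(s) ~ sqrt(2 m c^2 E_lab) for a primary of lab energy E_lab
# hitting a nucleon at rest (c = 1 units, energies in TeV).
m_nucleon = 0.938e-3   # TeV
E_lab = 1e6            # TeV
print(f"sqrt(s) ~ {(2 * m_nucleon * E_lab) ** 0.5:.0f} TeV")   # ~ 43 TeV
```

which is indeed in the range of tens of TeV, a bit above the LHC.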
That extrapolation sounds complicated, but since QCD is asymptotically free, it should actually get easier to understand at high energies. Shaham and Piran in their paper argue that this extrapolation isn't working as expected, which might be a signal for new physics. The reason is that the penetration depth data shows that at high energies the probability of the incident particles to interact peaks at a shorter depth, and is also more strongly peaked, than one expects for protons.

Now, it might be that at higher energies the cosmic rays are dominated by other primary particles, heavier ones, that are more likely to interact, thus moving the peak of the distribution to a shorter depth. However, if one adds a contribution from other constituents (heavier ions: He, Fe...), this also smears out the distribution over the depth, and thus doesn't fit the width of the observed penetration depth distribution. This can be seen very well from the figure below (Fig 2 from Shaham and Piran's paper), which shows the data from the Pierre Auger Collaboration and the expectation for a composition of protons and Fe nuclei. You can see that adding a second component does have the desired effect of moving the average value to a shorter depth. But it also increases the width. (And, if the individual peaks can be resolved, produces a double-peak structure.)

Fig 2 from arXiv:1204.1488. Shown is the number of events in the energy bin 1 to 1.25 x 10^6 TeV as a function of the penetration depth. The red dots are the data from the Pierre Auger Collaboration (arXiv:1107.4804), the solid blue line is the expectation for a combination of protons and Fe nuclei.

The authors thus argue that there is no composition of the ultra-high energetic primary cosmic ray particles that fits the data well. Shaham and Piran think that this mismatch should be taken seriously. While different simulations yield slightly different results, the results are comparable and neither code fits the data. If it's not the simulation, the mismatch comes about either from the data or the physics.

"There are three possible solutions to this puzzling situation. First, the observational data might be incorrect, or it is somehow dominated by poor statistics: these results are based on about 1500 events at the lowest energy bin and about 50 at the highest one. A mistake in the shower simulations is unlikely, as different simulations give comparable results. However, the simulations depend on the extrapolations of the proton cross sections from the measured energies to the TeV range of the UHECR collisions. It is possible that this extrapolation breaks down. In particular a larger cross section than the one extrapolated from low energies can explain the shorter penetration depth. This may indicate new physics that sets in at energies of several dozen TeV."

The authors are very careful not to jump to conclusions, and I won't either. To be convinced there is new physics to find here, I would first like to see a quantification of how bad the best fit from the models actually is. Unfortunately, there's no chi-square/dof in the paper that would allow such a quantification, and as illustrative as the figure above is, it's only one energy bin and might be a misleading visualization. I am also not at all sure that the different simulations are actually independent from each other. Since scientific communities exchange information rapidly and efficiently, there exists a risk of systematic bias even if several models are considered.
Possibly there's just some cross-section missing or wrong. Finally, there's nothing in the paper about how the penetration depth data is obtained to begin with. Since that's not a primary observable, there must be some modeling involved too, though I agree that this isn't a likely source of error. With these words of caution ahead, it is possible that we are looking here at the first evidence for physics beyond the standard model.
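To see the width argument in miniature, here is a toy sketch (entirely my own; the mean free paths are made up, and real shower simulations are far more involved): mixing two components shifts the mean to shorter depths, but by the law of total variance it also broadens the distribution relative to a single species with the same mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy model: depth of first interaction ~ exponential with mean free
# path lam, shorter for heavier primaries (larger cross section).
lam_p, lam_Fe = 70.0, 15.0                  # g/cm^2, illustrative only
lam_mid = 0.5 * (lam_p + lam_Fe)            # single species, same mean

proton = rng.exponential(lam_p, n)
iron   = rng.exponential(lam_Fe, n)
mix    = np.where(rng.random(n) < 0.5, iron, proton)  # 50/50 mixture
single = rng.exponential(lam_mid, n)

for name, d in [("p", proton), ("Fe", iron),
                ("p+Fe mix", mix), ("single, same mean", single)]:
    print(f"{name:>18}: mean {d.mean():5.1f}, std {d.std():5.1f}")
# The mixture matches the intermediate mean but is wider than the
# single species with that mean: Var = E[Var] + Var[E[depth]] picks up
# the spread *between* the components.
```

This is, in cartoon form, why no p+Fe combination can be both as shallow and as narrow as the data.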
Monday, April 23, 2012

Can we probe Planck-scale physics with quantum optics?

You might have read about this some weeks ago on Chad Orzel's blog or at Ars Technica: Nature published a paper by Pikovski et al on the possibility to test Planck scale physics with quantum optics. The paper is on the arXiv under arXiv:1111.1979 [quant-ph]. I left a comment at Chad's blog explaining that it is implausible the proposed experiment will test any Planck scale effects. Since I am generally supportive of everybody who cares about quantum gravity phenomenology, I'd have left it at this, and been happy that Planck scale physics made it into Nature. But then I saw that Physics Today picked it up, and before this spreads further, here's an extended explanation of my skepticism.

Igor Pikovski et al have proposed a test for Planck scale physics using recent advances in quantum optics. The framework they use is a modification of quantum mechanics, expressed by a deformation of the canonical commutation relation, that takes into account that the Planck length plays the role of a minimal length. This is one of the most promising routes to quantum gravity phenomenology, and I was excited to read the article.

In their article, the authors claim that their proposed experiment is feasible to "probe the possible effects of quantum gravity in table-top quantum optics experiment" and that it reaches a "hitherto unprecedented sensitivity in measuring Planck-scale deformations." The reason for this increased sensitivity for Planck-scale effects is, according to the authors' own words, that "the deformations are enhanced in massive quantum systems." Unfortunately, this claim is not backed up by the literature the authors refer to.

The underlying reason is that the article fails to address the question of Lorentz-invariance. The deformation used is not invariant under normal Lorentz-transformations. There are two ways to deal with that: either breaking Lorentz-invariance or deforming it. If it is broken, there exists a multitude of very strong constraints that would have to be taken into account and are not mentioned in the article. Presumably then the authors implicitly assume that Lorentz-symmetry is suitably deformed in order to keep the commutation relations invariant - and in order to test something actually new. This can in fact be done, but comes at a price. Now the momenta transform non-linearly. Consequently, a linear sum of momenta is no longer Lorentz-invariant. In the appendix however, the authors have used the normal sum of momenta to define the center-of-mass momentum. This is inconsistent. To maintain Lorentz-invariance, the modified sum must be used.

This issue cannot be ignored for the following reason. If a suitably Lorentz-invariant sum is used, it contains higher-order terms. The relevance of these terms does indeed increase with the mass. This also means that the modifications of the Lorentz-transformations become more relevant with the mass. Since this is a consequence of just summing up momenta, and has nothing in particular to do with the nature of the object that is being studied, the increasing relevance of corrections prevents one from reproducing a macroscopic limit that is in agreement with our knowledge of Special Relativity. This behavior of the sum, whose use, we recall, is necessary for Lorentz-invariance, is thus highly troublesome. This is known in the literature as the "soccer ball problem." It is not mentioned in the article. If the soccer-ball problem persists, the theory is in conflict with observation already.

While several suggestions have been made for how this problem can be addressed in the theory, no agreement has been reached to date. A plausible and useful ad-hoc suggestion that has been made by Magueijo and Smolin is that the relevant mass scale, the Planck mass, for N particles is rescaled to N times the Planck mass. Ie, the scale where effects become large moves away when the number of particles increases. Now, that this ad-hoc solution is correct is not clear. What is clear however is that, if the theory makes sense at all, the effect must become less relevant for systems with many constituents. A suppression with the number of constituents is a natural expectation. If one takes into account that for sums of momenta the relevant scale is not the Planck mass, but N times the Planck mass, the effect the authors consider is suppressed by roughly a factor 10^10. This means the existing bounds (for single particles) cannot be significantly improved in this way.

This is the expectation that one can have from our best current understanding of the theory. This is not to say that the experiment should not be done. It is always good to test new parameter regions. And, who knows, all I just said could turn out to be wrong. But it does mean that based on our current knowledge, it is extremely unlikely that anything new is to be found there. And vice versa, if nothing new is found, this cannot be used to rule out a minimal length modification of quantum mechanics.

(This is not the first time btw, that somebody tried to exploit the fact that the deviations get larger with mass by using composite systems, thereby promoting a bug to a feature. In my recent review, I have a subsection dedicated to this.)

Sunday, April 22, 2012

Experimental Search for Quantum Gravity 2012

It is my great pleasure to let you know that there will be a third conference on Experimental Search for Quantum Gravity, October 22 to 25 this year, at Perimeter Institute. (A summary of the ESQG 2007 is here, and a summary from 2010 is here.) Even better is that this time it wasn't my initiative but Astrid Eichhorn's, who is also to be credited for the theme "The hard facts." The third of the organizers is Lee Smolin, who has been of great help also with the last meeting. But most important, the website of the ESQG 2012 is here. We have an open registration with a moderate fee of CAN$ 115, which is mostly to cover catering expenses. There is a limit to the number of people we can accommodate, so if you are interested in attending, I recommend you register early. If time comes, I'll tell you some more details about the meeting.

Thursday, April 19, 2012

Schrödinger meets Newton

In January, we discussed semi-classical gravity: Classical general relativity coupled to the expectation value of quantum fields.
This theory is widely considered to be only an approximation to the still looked-for fundamental theory of quantum gravity, most importantly because the measurement process messes with energy conservation if one were to take it seriously; see the earlier post for details. However, one can take the point of view that whatever the theorists think is plausible or not should still be experimentally tested. Maybe the semi-classical theory does in fact correctly describe the way a quantum wave-function creates a gravitational field; maybe gravity really is classical and the semi-classical limit exact, and we just don't understand the measurement process. So what effects would such a funny coupling between the classical and the quantum theory have?

Luckily, to find out it isn't really necessary to work with full general relativity; one can instead work with Newtonian gravity. That simplifies the issue dramatically. In this limit, the equation of interest is known as the Schrödinger-Newton equation. It is the Schrödinger-equation with a potential term, and the potential term is the gravitational field of a mass distributed according to the probability density of the wave-function. This looks like this:
• $i\hbar\frac{\partial\psi}{\partial t}(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r},t) - Gm^2\int\frac{\vert\psi(\mathbf{r}^\prime,t)\vert^2}{\vert\mathbf{r}-\mathbf{r}^\prime\vert}\,d^3r^\prime\;\psi(\mathbf{r},t)$

Inserting a potential that depends on the expectation value of the wave-function makes the Schrödinger-equation non-linear and changes its properties. The gravitational interaction is always attractive and thus tends to contract pressureless matter distributions. One expects this effect to show up here by contracting the wave-packet. Now, the usual non-relativistic Schrödinger equation results in a dispersion for massive particles, so that an initially focused wave-function spreads with time. The gravitational self-coupling in the Schrödinger-Newton equation acts against this spread. Which one wins, the spread from the dispersion or the gravitational attraction, depends on the initial values.

However, the gravitational interaction is very weak, and so is the effect. For typical systems in which we study quantum effects, either the mass is not large enough for a collapse, or the typical time for it to take place is too long. Or so you are led to think if you make some analytical estimates. The details are left to a numerical study though, because the non-linearity of the Schrödinger-Newton equation spoils the attempt to find analytical solutions. And so, in 2006 Carlip and Salzmann surprised the world by claiming that, according to their numerical results, the contraction caused by the Schrödinger-Newton equation might be possible to observe in molecule interferometry, many orders of magnitude off the analytical estimate. It took five years until a check of their numerical results came out, and then two papers were published almost simultaneously:
• Schrödinger-Newton "collapse" of the wave function, J. R. van Meter, arXiv:1105.1579 [quant-ph]
• Gravitationally induced inhibitions of dispersion according to the Schrödinger-Newton Equation, Domenico Giulini and André Großardt, arXiv:1105.1921 [gr-qc]
They showed independently that Carlip and Salzmann's earlier numerical study was flawed, and the accurate numerical result fits the analytical estimate very well. Thus, the good news is one understands what's going on. The bad news is, it's about 5 orders of magnitude off today's experimental possibilities. But that's in an area of physics where progress is presently rapid, so it's not hopeless! It is interesting what this equation does, so let me summarize the findings from the new numerical investigation.
These studies, I should add, have been done by looking at the spread of a spherically symmetric Gaussian wave-packet. The most interesting features are:
• For masses smaller than some critical value, $m \lesssim (\hbar^2/(G\sigma))^{1/3}$, where $\sigma$ is the width of the initial wave-packet, the entire wave-packet expands indefinitely.
• For masses larger than that critical value, the wave-packet fragments and a fraction of the probability propagates outwards to infinity, while the rest remains localized in a finite region.
• Of the cases that eventually collapse, the lighter ones expand initially and then contract, the heavier ones contract immediately.
• The remnant wave function approaches a stationary state, about which it performs damped oscillations.

That the Schrödinger-Newton equation leads to a continuous collapse might lead one to think it could play a role for the collapse of the wave-function, an idea that was suggested already in 1984 by Lajos Diosi. However, this interpretation is questionable, because it became clear later that the gravitational collapse one finds here isn't suitable to be interpreted as a wave-function collapse to an eigenstate. For example, in this 2002 paper it was found that two bumps of probability density, separated by some distance, will fall towards each other and meet in the middle, rather than focus on one of the two initial positions as one would expect for a wave-function collapse.
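Plugging numbers into that critical-mass estimate shows why the effect is so far out of experimental reach (my own quick evaluation, assuming a micron-wide initial packet):

```python
hbar  = 1.054571817e-34    # J s
G     = 6.67430e-11        # m^3 / (kg s^2)
u     = 1.66053907e-27     # kg, atomic mass unit

sigma = 1e-6               # m, assumed width of the initial wave-packet
m_crit = (hbar**2 / (G * sigma)) ** (1 / 3)

print(f"critical mass ~ {m_crit:.1e} kg ~ {m_crit / u:.1e} u")
# ~ 5.5e-18 kg, i.e. a few times 1e9 atomic mass units -- roughly five
# orders of magnitude above the ~1e4 u molecules of today's
# interferometry, consistent with the gap quoted above.
```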
Monday, April 16, 2012

The hunt for the first exoplanet

The little prince

Today, extrasolar planets, or exoplanets for short, are all over the news. Hundreds are known, and they are cataloged in The Extrasolar Planets Encyclopaedia, accessible for everyone who is interested. Some of these extrasolar planets orbit a star in what is believed to be a habitable zone, fertile ground for the evolution of life. Planetary systems much like ours have turned out to be much more common results of stellar formation than had been expected. But the scientific road to this discovery has been bumpy.

Once one knows that the stars in the night sky are suns like our own, it doesn't take a big leap of imagination to think that they might be accompanied by planets. Observational evidence for exoplanets was looked for already in the 19th century, but the field had a bad start. Beginning in the 1950s, several candidates for exoplanets made it into the popular press, yet they turned out to be data flukes. At that time, the experimental method used relied on detecting minuscule changes in the motion of the star caused by a heavy planet of Jupiter type. If you recall the two-body problem from 1st semester: It's not that one body orbits the other, but both orbit around their common center-of-mass; it's just that, if one body is much heavier than the other, it might almost look like the lighter one is orbiting the heavier one. But if a sufficiently heavy planet orbits a star, one might in principle find out by watching the star very closely, because it wobbles around the center-of-mass. In the 50s, watching the star closely meant watching its distance to other stellar objects. The precision which could be achieved this way simply wasn't sufficient to reliably tell the presence of a planet.

In the early 80s, Gordon Walker and his postdoc Bruce Campbell from British Columbia, Canada, pioneered a new technique that improved the possible precision by which the motion of the star could be tracked by two orders of magnitude. Their new technique relied on measuring the star's absorption lines, whose frequency depends on the motion of the star relative to us because of the Doppler effect. To make that method work, Walker and Campbell had to find a way to precisely compare spectral images taken at different times, so they'd know how much the spectrum had shifted. They found an ingenious solution: They would use the very regular and well-known molecular absorption lines of hydrogen fluoride gas. The comb-like absorption lines of hydrogen fluoride served as a ruler relative to which they could measure the star's spectrum, allowing them to detect even the smallest changes.

Then, together with astronomer Stephenson Yang, they started looking at candidate stars which might be accompanied by Jupiter-like planets. To detect the motion of the star due to the planet, they would have to record the system for several completed orbits. Our planet Jupiter needs about 12 years to orbit the sun, so they were in for a long-term project. Unfortunately, they had a hard time finding support for their research. In his recollection "The First High-Precision Radial Velocity Search for Extra-Solar Planets" (arXiv:0812.3169), Gordon Walker recounts that it was difficult to get time for their project at observatories:

"Since extra-solar planets were expected to resemble Jupiter in both mass and orbit, we were awarded only three or four two-night observing runs each year."

And though it is difficult to understand today, back then many of Walker's astronomer colleagues thought the search for exoplanets a waste of time. Walker writes:

"It is quite hard nowadays to realise the atmosphere of skepticism and indifference in the 1980s to proposed searches for extra-solar planets. Some people felt that such an undertaking was not even a legitimate part of astronomy. It was against such a background that we began our precise radial velocity survey of certain bright solar-type stars in 1980 at the Canada France Hawaii 3.6-m Telescope."

After years of data taking, they had identified several promising candidates, but were too cautious to claim a discovery. At the 1987 meeting of the American Astronomical Society in Vancouver, Campbell announced their preliminary results. The press happily reported yet another discovery of an exoplanet, but the astronomers regarded even Walker and Campbell's cautious interpretation of the data with great skepticism. In his article "Lost world: How Canada missed its moment of glory," Jacob Berkowitz describes the reaction of Walker and Campbell's colleagues:

"[Campbell]'s professional colleagues weren't as impressed [as the press]. One astronomer told The New York Times he wouldn't call anything a planet until he could walk on it. No one even attempted to confirm the results."

Walker's gifted postdoc Bruce Campbell suffered most from the slow-going project that lacked appreciation and had difficulties getting continued funding. In 1991, after more than a decade of data taking, they still had no discovery to show for it. Campbell meanwhile had reached age 42, and was still sitting on a position that was untenured, not even tenure-track. Campbell's frustration built up to the point where he quit his job. When he left, he erased all the analyzed data in his university account. Luckily, his (both tenured) collaborators Walker and Yang could recover the data. Campbell made a radical career change and became a personal tax consultant.
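To appreciate what Walker and Campbell were up against, it helps to put numbers on the signal (my own back-of-envelope, for a Jupiter-like planet around a Sun-like star):

```python
# The star's reflex velocity around the common center-of-mass, and the
# resulting relative Doppler shift of its absorption lines.
G     = 6.674e-11          # m^3 / (kg s^2)
M_sun = 1.989e30           # kg
M_jup = 1.898e27           # kg
a     = 7.785e11           # m, Jupiter's orbital radius
c     = 2.998e8            # m/s

v_planet = (G * M_sun / a) ** 0.5      # planet's orbital speed
v_star   = v_planet * M_jup / M_sun    # star's wobble speed

print(f"stellar wobble : {v_star:.1f} m/s")        # ~ 12.5 m/s
print(f"line shift     : {v_star / c:.1e}")        # ~ 4e-8
```

A relative shift of a few parts in $10^8$: this is why a stable wavelength ruler like the hydrogen fluoride absorption comb was essential.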
But in late 1991, Walker and Yang were finally almost certain to have found sufficient evidence of an exoplanet around the star gamma Cephei, whose spectrum showed a consistent 2.5 year wobble. In a fateful coincidence, when Walker just thought they had pinned it down, one of his colleagues, Jaymie Matthews, came by his office, looked at the data, and pointed out that the wobble in the data coincided with what appeared to be periods of heightened activity on the star's surface. Walker looked at the data with new eyes and, mistakenly, came to believe that they had all along been watching an oscillating star rather than a periodic motion of the star's position.

Shortly after that, in early 1992, Nature reported the first confirmed discovery of an exoplanet by Wolszczan and Frail, based in the USA. Yet the planet they found orbits a millisecond pulsar (probably a neutron star), so for many the discovery doesn't score highly, because the star's collapse would have wiped out all life in that planetary system long ago. In 1995 then, astronomers Mayor and Queloz of the University of Geneva announced the first definitive observational evidence for an exoplanet orbiting a normal star. That planet has an orbital period of a few days only, so no decade-long recording was necessary. It wasn't until 2003 that the planet that Walker, Campbell and Yang had been after was finally confirmed.

There are three messages to take away from this story. First, Berkowitz in his article points out that Canada failed to have faith in Walker and Campbell's research at the time when just a little more support would have made them the first to discover an exoplanet. Funding for long-term projects is difficult to obtain, and it's even more difficult if the project doesn't produce results before it's really done. That can be an unfortunate hurdle for discoveries. Second, it is in hindsight difficult to understand why Walker and Campbell's colleagues were so unsupportive. Nobody ever really doubted that exoplanets exist, and with the precision of measurements in astronomy steadily increasing, sooner or later somebody would be able to find statistically significant evidence. It seems that a few initial false claims had a very unfortunate backlash that exceeded the reasonable. Third, in the forest of complaints about lacking funding for basic research, especially for long-term projects, every tree is a personal tragedy.

Saturday, April 14, 2012

Book review: "How to Teach Relativity to Your Dog" by Chad Orzel

How to Teach Relativity to Your Dog
By Chad Orzel
Basic Books (February 28, 2012)

Let me start with three disclaimers: First, I didn't buy the book, I got a free copy from the editor. Second, this is the second of Chad Orzel's dog physics books and I didn't read the first. Third, I'm not a dog person.

Chad Orzel from Uncertain Principles is a professor for physics at Union College, and the best known fact about him is that he talks to his dog, Emmy. Emmy is the type of dog large enough to sniff your genitals without clawing into your thighs, which I think counts in her favor. That Chad talks to his dog is of course not the interesting part. I mean, I talk to my plants, but who cares? (How to teach hydrodynamics to your ficus.) But Chad imagines his dog talks back, and so the book contains conversations between Emmy and Chad about physics.
In this book, Chad covers the most important aspects of special and general relativity: time dilatation and length contraction, space-time diagrams, relativistic four-momentum, the equivalence principle, space-time curvature, the expansion of the universe and big bang theory. Emmy and Chad however go beyond that by introducing the reader also to the essentials of black holes, high energy particle collisions, the standard model of particle physics and Feynman diagrams. They even add a few words on grand unification and quantum gravity. The physics explanations are very well done, and there are many references to recent observations and experiments, so the reader is not left with the impression that all this is last century's stuff. The book contains many helpful figures and even a few equations. It also comes with a glossary and a guide to further reading.

Emmy's role in the book is to engage Chad in a conversation. These dialogues are very well suited to introduce unfamiliar subjects, because they offer a natural way to ask and answer questions, and Chad uses them masterfully. Besides Emmy the dog, the reader also meets Nero the cat, and there are a lot of squirrels involved too. The book is written very well, in unique do..., oops, Orzel-style, with a light sense of humor.

It is difficult for me to judge this book. I must have read dozens of popular science introductions to special and general relativity, but most of them 20 years ago. Chad explains very well, but then all the dog stuff takes up a lot of space (the book has 300 pages), and if you are, like me, not really into dogs, the novelty wears off pretty fast and what's left are lots of squirrels. I did however learn something from this book, for example that dogs eat cheese, which was news to me. I also learned that Emmy is partly German shepherd and thus knows the word "Gedankenexperiment," though Stefan complains that she doesn't know the difference between genitive and dative.

In summary, Chad Orzel's book "How to Teach Relativity to Your Dog" is a flawless popular science book that gets across a lot of physics in an entertaining way. If you always wanted to know what special and general relativity is all about and why it matters, this is a good starting point. I'd give this book 5 out of 5 tail wags.

Thursday, April 12, 2012

Some physics-themed ngram trends

I've been playing again with Google ngram, which shows the frequency with which words appear in books in the Google database, normalized to the number of books. Here are some keywords from physics that I tried and found quite interesting.

In the first graph below you see "black hole" in blue, which peaks around 2002, "big bang" in red, which peaks around 2000, "quantization" in green, which peaks to my puzzlement around 1995, and "dark matter" in yellow, which might peak or plateau around 2000. Data is shown from 1920 to 2008.

In the second graph below you see the keywords "multiverse" in blue, which increases since about 1995 but interestingly seems to have been around much before that, "grand unification" in yellow, which peaks in the mid 80s and has been in decline since, "theory of everything" in green, which plateaus around 2000, and "dark energy" in red, which appears in the late 90s and is still sharply increasing. Data is shown from 1960 to 2008.
This third figure shows "supersymmetry" in blue, which peaks around 1985 and 2001, "quantum gravity" in red, which might or might not have plateaued, and "string theory" in green, which seems to have decoupled from supersymmetry in early 2002 and avoided the drop. Data is shown from 1970 to 2008.

A graph that got so many more hits it wasn't useful to plot it with the others: "emergence", which peaked in the late 90s. Data is shown from 1900 to 2008.

More topics of the past: "cosmic rays" in blue, which was hot in the 1960s, "quarks" in green, which peaks in the mid 90s, and "neutrinos" in red, which peak around 1990. Data is shown from 1920 to 2008.

Even quantum computing seems to have maxed out (data is shown from 1985 to 2008). So, well, then what's hot these days? See below: "cold atoms" in blue, "quantum criticality" in red and "qbit" in green. Data is shown from 1970 to 2008.

So, condensed matter and cosmology seem to be the wave of the future, while particle physics is in decline and quantum gravity doesn't really know where to go. Feel free to leave your interpretation in the comments!

Tuesday, April 10, 2012

Be careful what you wish for

Michael Nielsen in his book "Reinventing Discovery" relates the following anecdote from the history of science. In the year 1610, Galileo discovered that the planet Saturn, the most distant then known planet, had a peculiar shape. Galileo's telescope was not good enough to resolve Saturn's rings, but he saw two bumps on either side of the main disk. To make sure this discovery would be credited to him, while still leaving him time to do more observations, Galileo followed a procedure common at the time: He sent the announcement of the discovery to his colleagues in the form of an anagram. This way, Galileo could avoid revealing his discovery, but would still be able to later claim credit by solving the anagram, which meant "Altissimum planetam tergeminum observavi," Latin for "I observed the highest of the planets to be three-formed."

Among Galileo's colleagues who received the anagram was Johannes Kepler. Kepler had at this time developed a "theory" according to which the number of moons per planet must follow a certain pattern. Since Earth has one moon and four of Jupiter's moons were known, Kepler concluded that Mars, the planet between Earth and Jupiter, must have two moons. He worked hard to decipher Galileo's anagram and came up with "Salve umbistineum geminatum Martia proles," Latin for "Be greeted, double knob, children of Mars," though one letter remained unused. Kepler interpreted this as meaning Galileo had seen the two moons of Mars, and had thereby confirmed Kepler's theory.

Psychologists call this effort which the human mind makes to brighten the facts "motivated cognition," more commonly known as "wishful thinking." Strictly speaking, the literature distinguishes the two in that wishful thinking is about the outcome of a future event, while motivated cognition is concerned with partly unknown facts. Wishful thinking is an overestimate of the probability that a future event has a desirable outcome, for example that the dice will all show six. Motivated cognition is an overly optimistic judgment of a situation with unknowns, for example that you'll find a free spot in a garage whose automatic counter says "occupied," or that you'll find the keys under the streetlight.
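As an aside, the sober odds for the dice example are quickly computed (my own illustration):

```python
from fractions import Fraction

# Probability that all n fair dice show six: the number an unbiased
# estimate should land on, and that wishful thinking inflates.
for n in (1, 2, 3):
    p = Fraction(1, 6) ** n
    print(f"{n} dice: {p} = {float(p):.3%}")
```

Already with three dice the honest answer is below half a percent.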
There have been many small-scale psychology experiments showing that most people are prone to overestimate a lucky outcome (see eg here for a summary), even if they know the odds, which is why motivated cognition is known as a "cognitive bias." It's an evolutionarily developed way to look at the world that however doesn't lead one to an accurate picture of reality. Another well-established cognitive bias is the overconfidence bias, which comes in various expressions, the most striking one being "illusory superiority". To see just how common it is for people to overestimate their own performance, consider the 1981 study by Svenson, which found that 93% of US American drivers rate themselves as better than the average. The best known bias is maybe confirmation bias, which leads one to unconsciously pay more attention to information confirming already held beliefs than to information contradicting them. And a bias that got a lot of attention after the 2008 financial crisis is "loss aversion," characterized by the perception of a loss being more relevant than a comparable gain, which is why people are willing to tolerate high risks just to avoid a loss.

It is important to keep in mind that these cognitive biases serve a psychologically beneficial purpose. They allow us to maintain hope in difficult situations and a positive self-image. That we have these cognitive biases doesn't mean there's something wrong with our brain. On the contrary, they're helpful to its normal operation. However, scientific research seeks to unravel the truth, which isn't the brain's normal mode of operation. Therefore scientists learn elaborate techniques to triple-check each and every conclusion. This is why we have measures for statistical significance, control experiments and double-blind trials.

Despite that, I suspect that cognitive biases still influence scientific research and hinder our truth-seeking efforts, because we can't peer review scientists' motivations, and we're all alone inside our heads. And so the researcher who tries to save his model by continuously adding new features might misjudge the odds of being successful due to loss aversion. The researcher who meticulously keeps track of advances of the theory he works on himself, but only focuses on the problems of rival approaches, might be subject to confirmation bias, skewing his own and other people's evaluation of progress and promise. The researcher who believes that his prediction is always just on the edge of being observed is a candidate for motivated cognition. And above all that, there's the cognitive meta-bias, the bias blind spot: I can't possibly be biased.

Scott Lilienfeld in his SciAm article "Fudge Factor" argued that scientists are particularly prone to confirmation bias because:

"[D]ata show that eminent scientists tend to be more arrogant and confident than other scientists. As a consequence, they may be especially vulnerable to confirmation bias and to wrong-headed conclusions, unless they are perpetually vigilant."

As a scientist, I regard my brain as the toolbox for my daily work, and so I am trying to learn what can be done about its shortcomings. It is to some extent possible to work on a known bias by rationalizing it: By consciously seeking out the information that might challenge one's beliefs, asking a colleague for a second opinion on whether a model is worth investing more time in, daring to admit to being wrong. And despite all that, not to forget the hopes and dreams.

Mars btw has, to our best current knowledge, indeed two moons.
Sunday, April 08, 2012

Happy Easter!

Stefan honors the Easter tradition by coloring eggs every year. The equipment for this procedure is stored in a cardboard shoe-box labeled "Ostern" (Easter). The shoe-box dates back to the 1950s and once contained a pair of shoes produced according to the newest orthopedic research. I had never paid much attention to the shoe-box, but as Stefan pointed out to me this year, back then the perfect fit was sought by x-raying the foot inside the shoe. The lid of the box contains an advertisement for this procedure, which was apparently quite common for a while.

Well, they don't x-ray your feet in the shoe stores anymore, but Easter still requires coloring the eggs. And here they are: Happy Easter everybody!

Friday, April 06, 2012

Book Review: "The Quest for the Cure" by B.R. Stockwell

The Quest for the Cure: The Science and Stories Behind the Next Generation of Medicines
By Brent R. Stockwell
Columbia University Press (June 1, 2011)

As a particle physicist, I am always amazed when I read about recent advances in biochemistry. As far as I am concerned, the human body is made of ups and downs and electrons, kept together by photons and gluons - and that's pretty much it. But in biochemistry, they have all these educated sounding words. They have enzymes and amino acids, they have proteases, peptides and kinases. They have a lot of proteins, and molecules with fancy names used to drug them. And these things do stuff. Like break up and fold and bind together. All these fancy sounding things and their interactions are what makes your body work; they decide over your health and your demise.

With all that foreign terminology however, I've found it difficult to impossible to read any paper on the topic. In most cases, I don't even understand the title. If I make an effort, I have to look up every second word. I do just fine with the popular science accounts, but these always leave me wondering: just how do they know this molecule does this, and how do they know this protein breaks there, fits there, and that causes cancer and that blocks some cell-function? What are the techniques they use and how do they work?

When I came across Stockwell's book "The Quest for the Cure" I thought it would help me solve some of these mysteries. Stockwell himself is a professor for biology and chemistry at Columbia University. He's a guy with many well-cited papers. He knows words like oligonucleotides and is happy to tell you how to pronounce them: oh-lig-oh-NOOK-lee-oh-tide. Phosphodiesterase: FOS-foh-dai-ESS-ter-ays. Nicotinonitrile: NIH-koh-tin-oh-NIH-trayl. Erythropoietin: eh-REETH-roh-POIY-oh-ten. As a non-native speaker I want to complain that this pronunciation help isn't of much use for a non-phonetic language; I can think of at least three ways to pronounce the syllable "lig." But then that's not what I bought the book for anyway.

The starting point of "The Quest for the Cure" is a graph showing the drop in drug approvals since 1995. Stockwell sets out to first explain what is the origin of this trend, and then what can be done about it. In a nutshell, the issue is that many diseases are caused by proteins which are today considered "undruggable," which means they are folded in a way that small molecules, the kind suitable for creating drugs, can't bind to the proteins' surfaces.
Unfortunately, it's only a small number of proteins that can be targeted by presently known drugs:

"Here is the surprising fact: All of the 20,000 or so drug products that ever have been approved by the U.S. Food and Drug Administration interact with just 2% of the proteins found in human cells."

And fewer than 15% are considered druggable at all.

Stockwell covers a lot of ground in his book, from the early days of genetics and chemistry to today's frontier of research. The first part of the book, in which he lays out the problem of the undruggable proteins, is very accessible and well-written. Evidently, a lot of thought went into it. It comes with stories of researchers and patients who were treated with new drugs, and how our understanding of diseases has improved. In the first chapters, every word is meticulously explained, or technical terms are avoided, to the level that "taken orally" has been replaced by "taken by mouth." Unfortunately, the style deteriorates somewhat thereafter. To give you an impression, it starts reading more like this:

"Although sorafenib was discovered and developed as an inhibitor of RAF, because of the similarity of many kinases, it also inhibits several other kinases, including the platelet-derived growth factor, the vascular endothelial growth factor (VEGF) receptors 2 and 3, and the c-KIT receptor."

Now, the book contains a glossary, but it's incomplete (eg it contains neither VEGF nor c-KIT). With the large amount of technical vocabulary, at some point it doesn't matter anymore if a word was introduced, because if it's not something you deal with every day, it's difficult to keep in mind the names of all sorts of drugs and molecules. It gets worse if you put down the book for a day or two. This doesn't contribute to the readability of the book, and it is somewhat annoying when you realize that much of the terminology is never used again, and one doesn't really know why it was necessary to begin with.

The second part of the book deals with the possibilities to overcome the problem of the undruggable molecules. In that part of the book, the stories of researchers curing patients are replaced with stories of the pharmaceutical industry, the start-up of companies and the ups and downs of their stock price. Stockwell's explanations left me wanting in exactly the points that I would have been interested in. He writes for example a few pages about nuclear magnetic resonance and that it's routinely used to obtain high resolution 3-d pictures of small proteins. One does not however learn how this is actually done, other than that it requires "complicated magnetic manipulations" and "extremely sophisticated NMR methods." He spends a paragraph and an image on light-directed synthesis of peptides that is vague at best, and one learns that peptides can be "stapled" together, which improves their stability, yet one has no clue how this is done. Now, the book is extremely well referenced, and I could probably go and read the respective papers in Science. But then I would have hoped that Stockwell's book saves me exactly this effort.

On the upside, Stockwell does an amazingly good job communicating the relevance of basic research and the scientific method, and in my opinion this makes up for the above shortcomings.
He tells stories of unexpected breakthroughs that came about by little more than coincidence, he writes about the relevance of negative results and control experiments, and how scientific research works:

"There is a popular notion about new ideas in science springing forth from a great mind fully formed in a dazzling eureka moment. In my experience this is not accurate. There are certainly sudden insights and ideas that appear to you from time to time. Many times, of course, a little further thought makes you realize it is really an absolutely terrible idea... But even when you have an exciting new idea, it begins as a raw, unprocessed idea. Some digging around in the literature will allow you to see what has been done before, and whether this idea is novel and likely to work. If the idea survives this stage, it is still full of problems and flaws, in both the content and the style of presenting it. However, the real processing comes from discussing the idea, informally at first... Then, as it is presented in seminars, each audience gives a series of comments, suggestions, and questions that help mold the idea into a better, sharper, and more robust proposal. Finally, there is the ultimate process of submission for publication, review and revision, and finally acceptance... The scientific process is a social process, where you refine your ideas through repeated discussions and presentations."

He also writes in a moderate dose about his own research and experience with the pharmaceutical industry.

The proposals Stockwell makes for how to deal with the undruggable proteins have a solid basis in today's research. He isn't offering dreams or miracle cures, but points out hopeful recent developments, for example how it might be possible to use larger molecules. The problem with large molecules is that they tend to be less stable and don't enter cells readily, but he quotes research that shows possibilities to overcome this problem. He also explains the concept of a "privileged structure," structures that have been found, with slight alterations, to bind to several proteins. Using such privileged structures might allow one to sort through a vast parameter space of possible molecules with a higher success rate. He also talks about using naturally occurring structures and the difficulties with that. He ends his book by emphasizing the need for more research on this important problem of the undruggable proteins.

In summary: "The Quest for the Cure" is a well-written book, but it contains too many technical expressions, and in many places scientific explanations are vague or lacking. It comes with some figures which are very helpful, but there could have been more. You don't need to read the blurb to figure out that the author isn't a science writer but a researcher. I guess he's done his best, but I also think his editor should have dramatically sorted out the vocabulary, or at least insisted on a more complete glossary. Stockwell makes up for this overdose of biochemistry lingo by communicating very well the relevance of basic research and the power of the scientific method. I'd give this book four out of five stars, because I appreciate that Stockwell has taken the time to write it in the first place.

Wednesday, April 04, 2012

On the importance of being wrong

Some years ago, I attended a seminar by a young postdoc who spoke about an extension of the standard model of particle physics. Known as "physics beyond the standard model," this is a research area where theory is presently way ahead of experiment.
In the hope of hitting something by shooting in the dark, theorists add stuff that we haven't seen to the stuff we know, and then explain why we haven't seen the additional stuff – but might see it with some experiment that is about to deliver results. That is, the theorists tell experimentalists where to look. Due to the lack of observational evidence, the main guide in this research area is mathematical consistency combined with intuition. This type of research is absolutely necessary to make progress in the present situation, but it's also very risky. Most of the models considered today will turn out to be wrong.

The content of the seminar wasn't very memorable. The reason I still recall it is that, after the last slide had flashed by, somebody asked what the motivation was to consider this extension of the standard model, to which the speaker replied: "There is none, except that it can be done." This is a remarkably honest answer, especially since it came from a young researcher who still had the torturous road to tenure ahead of him.

You don't have to look far in the blogosphere or on Amazon to find unsolicited advice for researchers on how to sell themselves. There now exist coaching services for scientists, and some people make money writing books about "Marketing for Scientists." None of them recommends that when you've come to the conclusion that a theory you looked at wasn't as interesting as you might have thought, you go and actually say that. Heaven forbid: You're supposed to be excited about the interesting results. You were right all along that the result would be important. And there are lots of motivations why this is the one and only right thing to do. You have won great insights in your research that are relevant for the future of mankind, at least, if not for all mankinds in all multiverses.

It's advice well meant. It's advice for how to reach your presumed personal goal of landing a permanent position in academia, taking into account the present mindset of your older peers. It is not advice for how to best benefit scientific research in the long run. In fact, unfortunately, the two goals can be in conflict.

Of course any researcher should first and foremost work on something interesting, well motivated, and something that will deliver exciting results! But most often it doesn't work out the way you wish it would. To help move science forward, the conclusion that the road you've been on doesn't seem too promising should be published, to prevent others from following you into a dead end, or at least to tell them where the walls are. Say it, and start something new. It's also important for your personal development. If you advertise your unexciting research as the greatest thing ever, you might eventually come to believe it and waste your whole life on it.

The reason nobody advises you to say your research project (which might not even have been your own choice) is unexciting is that it's difficult if not impossible to publish a theoretical paper that examines an approach just to come to the conclusion that it's not a particularly convincing description of nature. The problem with publishing negative results might be familiar to you from medicine, but it exists in theoretical physics as well. Even if you get it published, and even if it's useful in saving others the time and work that you have invested, it will not create a research area, and it's unlikely to become well cited. If that's all you think matters, then as far as your career is concerned it would indeed be a waste of your time.
So, they are arguably right with their career advice. But as a scientist your task is to advance our understanding of nature, even if that means concluding you've wasted your time – and telling others about it. If you make everybody believe in the excitement of an implausible model, you risk getting stuck on a topic you don't believe in. And, if you're really successful, you get others stuck on it too. Congratulations.

This unexciting seminar speaker some years ago, and my own yawn, made me realize that we don't value enough those who say: "I tried this and it was a mistake. I thought it was exciting, but I was wrong." Basic research is a gamble. Failure is normal, and being wrong is important.

Monday, April 02, 2012

In the past month, Lara and Gloria have learned to learn. They try to copy and repeat everything we do. Lara surprised me by grabbing a brush and pulling it through her hair, and Gloria, still short on hair, tries to put on her shoes. They haven't yet learned to eat with a spoon, but they've tried to feed us.

They both understand simple sentences. If I ask where the second shoe is, they'll go and get it. If I tell them lunch is ready, they'll both come running and try to push the high chairs towards the table. If we tell them we'll go for a walk, they run to the door. If we do as much as mention cookies, they'll point at the bag and insist on having one.

Lara is still the more reserved one of the two. Faced with something new, she'll first watch from a distance. Gloria has no such hesitations. Last week, I childproofed the balcony. Lara, who was up first, saw the open door and froze. She stood motionless, staring at the balcony for a full 10 minutes. Then Gloria woke up, came running while yelling "Da, da" – and stumbled over the door sill, landing on her belly. Lara then followed her, very carefully.

Now that spring is coming and the girls are walking well, we've been to the playground several times. Initially Lara and Gloria just sat there, staring at the other children. But by now they have both made some contact with other children, though not without looking at me every other minute to see if I approve. Gloria, as you can guess, is the more social one. She'll walk around with her big red bucket and offer it to others, smiling brightly. She's 15 months old and has at least 3 admirers already, all older boys who give her toys, help her to walk, or even carry her around. (The boys too look at me every other minute to see if I approve.) Lara and I, we watch our little social butterfly, and build sand castles.

From my perspective, the playground is a new arena too. On weekdays, the adult population is exclusively female and comes in two generational layers, the mothers and the grandmothers. They talk about their children and pretty much nothing but their children, unless you want to count pregnancies separately. After some initial mistakes, I now bring a book, a paper, or a magazine with me to hide behind.

Another piece of news from the past month is that I finally finished the review on the minimal length in quantum gravity that I've been working on since last year. It's now on the arXiv. The first 10 pages should be understandable for pretty much everybody, and the first half should be accessible also to undergraduates. So if you were wondering what I'm doing these days besides running after my daughters, have a look at my review.
Sunday, April 01, 2012

Computer Scientists develop Software for Virtual Member of Congress

A group of computer scientists from Rutgers University has published software intended for crowd-sourcing the ideal candidate. "We were asking ourselves: Why do we waste so much time with candidates who disagree with themselves, aren't able to recall their party's program, and whose intellectual output is inferior even to Shit Siri Says?" recalls Arthur McTrevor, who led the project. "Today, we have software that can perform better."

McTrevor and his colleagues then started coding what they refer to as the "unopinionated artificial intelligence" of the virtual representative, the main information processing unit. The unopinionated intelligence is a virtual skeleton which comes alive by crowd-sourcing opinions from a selected group of people, for example party members. Members feed the software with opinions, which are then aggregated and reformulated to minimize objectionable statements. The result: the perfect candidate.

The virtual candidate also has a sophisticated speech assembly program, a pleasant-looking face, and a fabricated private life. Visual and vocal appearance can be customized. The virtual candidate has a complete and infallible command of the constitution and all published statistical data, and can reproduce quotations from memorable speeches and influential books in the blink of an eye. "80 microseconds, actually," said McTrevor. The software moreover automatically creates and feeds its own Facebook account and Twitter feed.

The group from Rutgers tested the virtual representative in a trial run whose success is reported in a recent issue of Nature. In their publication, the authors point out that the virtual representative is not a referendum that aggregates the opinions of the general electorate. Rather, it serves a selected group to find and focus their identity, which can then be presented for election. In an email conversation, McTrevor was quick to point out that the virtual candidate is made in the USA, with its patent dated 2012. The candidate will thus be eligible to run for Congress at the "age" of 25, in 2037.
Mathematical Colloquium: Localization of interacting quantum particles with quasi-random disorder

Vieri Mastropietro, Università di Milano

Monday, May 22, 2017 - 16:00 to 17:00

It is well established at a mathematical level that disorder can produce Anderson localization of the eigenvectors of the single-particle Schrödinger equation. Does localization survive in the presence of many-body interaction? A positive answer to this question would have important physical consequences, related to the lack of thermalization in closed quantum systems. Mathematical results on this issue are still rare, and a full understanding is a challenging problem. We present an example in which localization can be proved for the ground state of an interacting system of fermionic particles with a quasi-random Aubry-André potential. The Hamiltonian is given by $N$ coupled almost-Mathieu Schrödinger operators. By assuming Diophantine conditions on the frequency and density, we can establish exponential decay of the ground-state correlations. The proof combines methods coming from the direct proof of convergence of KAM Lindstedt series with Renormalization Group methods for many-body systems. Small divisors appear in the expansions, whose convergence follows by exploiting the Diophantine conditions and fermionic cancellations. The main difficulty comes from the presence of loop graphs, which are the signature of many-body interaction and are absent in KAM series.

References: V. Mastropietro, Comm. Math. Phys. 342, 217 (2016); Phys. Rev. Lett. 115, 180401 (2015); Comm. Math. Phys. (2017).
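For orientation (an editorial addition, not part of the abstract): the standard single-particle Aubry-André model, whose interacting version the talk concerns, is the lattice Hamiltonian

$$H = \sum_j \left[ -t \left( c^\dagger_{j+1} c_j + c^\dagger_j c_{j+1} \right) + \lambda \cos(2\pi\omega j + \theta)\, c^\dagger_j c_j \right],$$

with hopping $t$, quasi-periodic potential strength $\lambda$, and irrational frequency $\omega$. For Diophantine $\omega$, all single-particle eigenstates are localized when $\lambda > 2t$. The many-body question is whether this localization survives once an interaction term, say $U \sum_j n_j n_{j+1}$, is switched on.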
Public Release: A mathematical advance in describing waves

New development builds on centuries of research devoted to using math to describe the physical world

University at Buffalo

BUFFALO, N.Y. -- One of the great joys in mathematics is the ability to use it to describe phenomena seen in the physical world, says University at Buffalo mathematician Gino Biondini. With UB postdoctoral researcher Dionyssios Mantzavinos, Biondini has published a new paper that advances the art -- or shall we say, the math -- of describing a wave. The findings, published Jan. 27 in Physical Review Letters, are thought to apply to wave forms ranging from light waves in optical fibers to water waves in the sea.

The study explores what happens when a regular wave pattern has small irregularities, a question that scientists have been trying to answer for the last 50 years. Researchers have long known that in many cases such minor imperfections grow and eventually completely distort the original wave as it travels over long distances, a phenomenon known as "modulational instability." But the UB team has added to this story by showing, mathematically, that many different kinds of disturbances evolve to produce wave forms belonging to a single class, characterized by their identical asymptotic state.

"Ever since Isaac Newton used math to describe gravity, applied mathematicians have been inventing new mathematics or using existing forms to describe natural phenomena," says Biondini, a professor of mathematics in the UB College of Arts and Sciences and an adjunct faculty member in the UB physics department. "Our research is, in a way, an extension of all the work that's come before."

He says the first great success in using math to represent waves came in the 1700s. The so-called wave equation, used to describe the propagation of waves such as light, sound and water waves, was discovered by Jean le Rond d'Alembert in the middle of that century. But the model has limitations.

"The wave equation is a great first approximation, but it breaks down when the waves are very large -- or, in technical parlance -- 'nonlinear,'" Biondini said. "So, for example, in optical fibers, the wave equation is great for moderate distances, but if you send a laser pulse (which is an electromagnetic wave) through an optical fiber across the ocean or the continental U.S., the wave equation is not a good approximation anymore. Similarly, when a water wave whitecaps and overturns, the wave equation is not a good description of the physics anymore."

Over the next 250 years, scientists and mathematicians continued to develop new and better ways to describe waves. One of the models that researchers derived in the middle of the 20th century is the nonlinear Schrödinger equation, which helps to characterize wave trains in a variety of physical contexts, including in nonlinear optics and in deep water. But many questions remained unanswered, including what happens when a wave has small imperfections at its origin. This is the topic of Biondini and Mantzavinos' new paper.

"Modulational instability has been known since the 1960s. When you have small perturbations at the input, you'll have big changes at the output. But is there a way to describe precisely what happens?" Biondini said. "After laying out the foundations in two earlier papers, it took us a year of work to obtain a mathematical description of the solutions.
We then used computers to test whether our math was correct, and the simulation results were pretty good -- it appears that we have captured the essence of the phenomenon." The next step, Biondini said, is to partner with experimental researchers to see if the theoretical findings hold when applied to tangible, physical waves. He has started to collaborate with research groups in optics as well as water waves, and he hopes that it will soon be possible to test the theoretical predictions with real experiments.
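For context (an addition, not from the press release itself): the focusing nonlinear Schrödinger equation mentioned above can be written, in one standard normalization, as

$$i\,\partial_t \psi + \tfrac{1}{2}\,\partial_x^2 \psi + |\psi|^2 \psi = 0.$$

Its constant-background solution $\psi = a\,e^{i a^2 t}$ is the simplest "regular wave pattern": linearizing around it, one finds that perturbations with wavenumber $|k| < 2a$ grow exponentially, which is exactly the modulational instability the article describes.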
Canonical

Basic, canonic, canonical: reduced to the simplest and most significant form possible without loss of generality, e.g., "a basic story line"; "a canonical syllable pattern."

This word is used by theologians and canon lawyers to refer to the canons of the Roman Catholic, Eastern Orthodox and Anglican Churches adopted by ecumenical councils. It also refers to later law developed by local churches and dioceses of these churches. The function of this collection of various "canons" is somewhat analogous to the precedents established in common law by case law. In the 20th century, the Roman Catholic Church revised its canon law in 1917 and then again in 1983, into the modern Code of Canon Law. This code is no longer merely a compilation of papal decrees and conciliar legislation, but a more completely developed body of international church law. It is analogous to the English system of statute law.

Canonical can also mean "part of the canon", i.e., one of the books comprising a biblical canon (e.g. the Gospel of Matthew or the Gospel of Mark), as opposed to apocryphal books (e.g. the Gospel according to the Hebrews). The term is also applied by Westerners to other religions, but in inconsistent ways: for example, in the case of Buddhism one authority[1] refers to "scriptures and other canonical texts", while another[2] says that scriptures can be categorized into canonical, commentarial and pseudo-canonical. Canonization is the process by which a person becomes recognized as a saint.

Literature and art

The word is also often used when describing bodies of literature or art: those books that all educated people have supposedly read, or are advised to read, make up the "canon", for example the Western canon. (See also canon (fiction).)

Mathematics

Mathematicians have for perhaps a century or more used the word canonical to refer to concepts that have a kind of uniqueness or naturalness, and are (up to trivial aspects) "independent of coordinates." Examples include the canonical prime factorization of positive integers, the Jordan canonical form of matrices (which is built out of the irreducible factors of the characteristic polynomial of the matrix), and the canonical decomposition of a permutation into a product of disjoint cycles. Various functions in mathematics are also canonical, like the canonical homomorphism of a group onto any of its quotient groups, or the canonical isomorphism between a finite-dimensional vector space and its double dual. Although a finite-dimensional vector space and its dual space are isomorphic, there is no canonical isomorphism. This lack of a canonical isomorphism can be made precise in terms of category theory; see natural transformation. But at a simpler level one could say that "any isomorphism you can think of here depends on choosing a basis." As stated by Goguen, "To any canonical construction from one species of structure to another corresponds an adjunction between the corresponding categories."[3]

Being canonical in mathematics is stronger than being a conventional choice. For instance, the vector space R^n has a standard basis, which is canonical in the sense that it is not just a choice which makes certain calculations easy; in fact most linear operators on Euclidean space take on a simpler form when written as a matrix relative to some basis other than the standard one (see Jordan form).
In contrast, an abstract n-dimensional real vector space V would not have a canonical basis; it is isomorphic to R^n of course, but the choice of isomorphism is not canonical.

The word canonical is also used for a preferred way of writing something; see the main article canonical form.

In set theory, the term "canonical" identifies an element as representative of a set. If a set is partitioned into equivalence classes, then one member can be chosen from each equivalence class to represent that class. That representative member is the canonical member. If you have a canonicalizing function, f(x), that maps x to the canonical member of the equivalence class which contains it, then testing whether two items, a and b, are equivalent is the same as testing whether f(a) is identical to f(b).

Computer science

Some circles in the field of computer science have borrowed this usage from mathematicians. It has come to mean "the usual or standard state or manner of something"; for example, "the canonical way to organize a file system is as a hierarchy, with extensions to make it a directed graph".[4] XML Signature defines canonicalization as the process of converting XML content to a canonical form, to take into account changes that can invalidate a signature over that data (from JWSDP 1.6).

In enterprise application integration, the "canonical data model" is a design pattern used to communicate between different data formats. It introduces an additional format, called the "canonical format", "canonical document type" or "canonical data model". Instead of writing translators between each and every format (with potential for a combinatorial explosion), it is sufficient just to write a translator between each format and the canonical format. OASIS (the Organization for the Advancement of Structured Information Standards) is an example of an integration architecture that is based on a canonical data model.

Some people have been known to use the noun canonicality; others use canonicity. In fields other than computer science, canonicity is this word's canonical form.

In computer science, a canonical name record (or CNAME record) is a type of DNS record. Also in computer science, a canonical number is the old designation for a MAC address on routers and servers.

Physics

In theoretical physics, the concept of canonical (or conjugate, or canonically conjugate) variables is of major importance. They always occur in complementary pairs, such as spatial location x and linear momentum p, angle φ and angular momentum L, and energy E and time t. They can be defined as any coordinates whose Poisson brackets give a Kronecker delta (or a Dirac delta in the case of continuous variables). The existence of such coordinates is guaranteed under broad circumstances as a consequence of Darboux's theorem. Canonical variables are essential in the Hamiltonian formulation of physics, which is particularly important in quantum mechanics. For instance, the Schrödinger equation and the Heisenberg uncertainty relation always incorporate canonical variables. Canonical variables in physics are based on the aforementioned mathematical structure and therefore bear a deeper meaning than being just convenient variables.
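In symbols (a standard textbook statement, added here for concreteness): canonical coordinates $(q_1, \dots, q_n)$ and momenta $(p_1, \dots, p_n)$ are characterized by the Poisson bracket relations

$$\{q_i, q_j\} = 0, \qquad \{p_i, p_j\} = 0, \qquad \{q_i, p_j\} = \delta_{ij}.$$

In the quantum theory the brackets become commutators, $[\hat{q}_i, \hat{p}_j] = i\hbar\,\delta_{ij}$, which is where the Heisenberg uncertainty relation mentioned above comes from.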
One facet of this underlying structure is expressed by Noether's theorem, which states that a (continuous) symmetry in a variable implies conservation of the conjugate quantity, and vice versa; for instance, symmetry under spatial displacement leads to conservation of momentum, and time-independence implies energy conservation.

In statistical mechanics, the canonical ensemble, the grand canonical ensemble, and the microcanonical ensemble are archetypal probability distributions for the (unknown) microscopic state of a thermal system, applying respectively in the physical cases of:

- a closed system at fixed temperature (able to exchange energy with its environment);
- an open system at fixed temperature (able to exchange both energy and particles); and
- a closed, thermally isolated system (able to exchange neither).

These probability distributions can be applied directly to practical problems in thermodynamics.

References

1. Macmillan Encyclopedia of Buddhism (Volume One), page 142
2. Bechert & Gombrich, World of Buddhism, Thames & Hudson, London, 1984, page 79
3. Goguen, J., "A categorical manifesto", Math. Struct. Comp. Sci., 1(1):49–67, 1991
4. "canonical", from the Jargon File
Tuesday, September 18, 2007

Not quite infinite

Lubos has a memo where he discusses how physicists make (finite) sense of divergent sums like 1+10+100+1000+... or 1+2+3+4+5+... The latter is, as string theorists know, of course -1/12, as explained for example in GSW. Their trick is to read that sum as the value at s=-1 of the zeta function zeta(s) = sum_{n>=1} 1/n^s and to define that value via the analytic continuation of this expression, which is well defined only for real part of s>1. Alternatively, he regularises the sum as sum_{n>=1} n e^{-n epsilon} = 1/epsilon^2 - 1/12 + O(epsilon^2). Then, in an obscure analogy with minimal subtraction, he throws away the divergent 1/epsilon^2 term and takes the finite remainder as the physical value. He justifies this by claiming agreement with experiment (here in the case of a Casimir force).

This, I think, however, is a bit too weak. If you rely on arguments like this, it is unclear how far they take you when you want to apply them to new problems where you do not yet know the answer. Of course, it is good practice for physicists to take calculational short-cuts. But you should always be aware that you are doing this, and it feels much better if you can say "This is a bit dodgy, I know, and if you really insist we could actually come up with a rigorous argument that gives the same result.", i.e. if you have a justification up your sleeve for what you are doing.

Most of the time, when in a physics calculation you encounter an infinity that should not be there (of course, often "infinity" is just the correct result; questions like "how much energy do I have to put into the acceleration of an electron to bring it up to the speed of light?" come to mind), you are actually asking the wrong question. This could for example be because you made an idealisation that is not physically justified. Some examples come to mind:

The 1+2+3+... sum arises when you try to naively compute the commutator of two Virasoro generators L_n for the free boson (the X fields on the string world sheet). There, L_n is given as an infinite sum over bilinears in the a_k's, the modes of X. In the commutator, each summand gives a constant from operator ordering, and when you sum up these constants you face the sum 1+2+3+...

Once you have such an expression, you can of course regularise it. But you should be suspicious about whether what you do is actually meaningful. For example, it could be that you can come up with two regularisations that give different finite results. In that case you should better have an argument to decide which is the better one. Such an argument could be a way to realise that the infinity is unphysical in the first place: In the Virasoro example, one should remember that the L_n stand for transformations of the states rather than observables themselves (outer vs. inner transformations of the observable algebra). Thus you should always apply them to states. But for a state that is a finite linear combination of excitations of the Fock vacuum, there are always only a finite number of terms in the sum for the L_n that do not annihilate the state. Thus, for each such state, the sum is actually finite. The infinite sum is an illusion, and if you take a bit more care about which terms actually contribute, you find a result equivalent to the -1/12 value. This calculation is the one you should have actually done, but the zeta function version is of course much faster. My problem with the zeta function version is that to me (and to all people I have asked so far) it looks accidental: I have no expansion of the argument that connects it to the rigorous calculation.
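(A quick numerical aside, my own sketch and not from the original post: the cutoff-regularised sum can be checked in closed form, since sum_{n>=1} n x^n = x/(1-x)^2 with x = e^{-epsilon}. Subtracting the divergent 1/epsilon^2 piece leaves a remainder that indeed tends to -1/12 ≈ -0.083333 as epsilon goes to zero.)

```python
import math

# Sum_{n>=1} n * exp(-n*eps) has the closed form x/(1-x)^2 with x = exp(-eps).
# Its small-eps expansion is 1/eps^2 - 1/12 + O(eps^2), so after subtracting
# the divergent 1/eps^2 piece the result should approach -1/12 = -0.083333...
for eps in (0.5, 0.1, 0.01, 0.001):
    x = math.exp(-eps)
    regulated_sum = x / (1.0 - x) ** 2
    print(f"eps={eps}: finite part ~ {regulated_sum - 1.0 / eps**2:.6f}")
```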
From the Virasoro algebra perspective it is very unnatural to introduce s, as at least I know of no way to do the calculation with the L_n and a_k with a free parameter s.

Another example are the infinities that arise in Feynman diagrams. Those arise when you do integrals over all momenta p. There are of course the usual tricks to avoid these infinities. But the reason they work is that the integral over all p is unphysical: For very large p, your quantum field theory is no longer the correct description, and you should include quantum gravity effects or similar things. You should only integrate p up to the scale where these other effects kick in and then do a proper computation that includes those effects. Again, the infinity disappears. If you have a renormalisable theory, you are especially lucky: There you don't really have to know the details of that high energy theory; you can subsume them into a proper redefinition of your coupling constants.

A similar thing can be seen in fluid dynamics: The Navier-Stokes equation has singular solutions, much like Einstein's equations lead to singularities. So what shall we do with, for example, infinite pressure? Well, the answer is simple: The Navier-Stokes equation applies to a fluid. But the fluid equations are only an approximation valid at macroscopic scales. If you look at small scales you find individual water molecules, and this discreteness is what saves you from actually encountering infinite values.

There is an approach to perturbative QFT developed by Epstein and Glaser, and explained for example in this book, that demonstrates that the usual infinities arise only because you have not been careful enough earlier in your calculation. There, the idea is that your field operators are actually operator valued distributions and that you cannot always multiply distributions. Sometimes you can, if their singularities (the places where they are not a function but really a distribution) are in different places or in different directions (in a precise sense), but in general you cannot. The typical situation is that what you want to define (for example delta(x)^2) is still defined for a subset of your test functions. For example, delta(x)^2 is well defined for test functions that vanish in a neighbourhood of 0. So you start with a distribution defined only for those test functions. Then, you want to extend that definition to all test functions, even those that do not vanish around 0. It turns out that if you restrict the degree of divergence (the maximum number of derivatives acting on delta; this will later turn out to be related to the superficial scaling dimension) to be below some value, there is a finite dimensional solution space to this extension problem. In the case of phi^4 theory, for example, the two point distribution is fixed up to a multiple of delta(x) and a multiple of the d'Alembertian of delta(x); the solution space is two dimensional (if Lorentz invariance is taken into account). The two coefficients have to be fixed experimentally and of course are nothing but mass and wave function renormalisation. In this approach the counter terms are nothing but ambiguities of an extension problem of distributions. It has been shown in highly technical papers that this procedure is equivalent to BPHZ regularization and dimensional regularisation, and thus it's safe to use the physicists' short-cuts. But it's good to know that the infinities that one cures could have been avoided in the first place.
My last example is of a slightly different flavour: Recently, I have met a number of mathematical physicists (i.e. mathematicians) who work on very complicated theorems about what they call stability of matter. What they are looking at is the quantum mechanics of molecules, in terms of a Hamiltonian that includes a kinetic term for the electrons and Coulomb potentials for the electron-electron and electron-nucleus interactions. The positions of the nuclei are external (classical) parameters, and usually you minimise the energy with respect to them. What you want to show is that the spectrum of this Hamiltonian is bounded from below. This is highly non-trivial, as the Coulomb potential alone is not bounded from below (-1/r becomes arbitrarily negative) and you have to balance it with the kinetic term. Physically, you want to show that you cannot gain an infinite amount of energy by throwing an electron into the nucleus. Mathematically, this is a problem about complicated PDEs, and people have made progress using very sophisticated tools.

What is not clear to me is whether this question is really physical: It could well be that it arises from an over-simplification. The nuclei are not point-like, and thus the true charge distribution is not singular; the physical potential is therefore not unbounded from below. In addition, if you are worried about high energies (as would be around if the electron fell into a nucleus), the Schrödinger equation would no longer be valid and would have to be replaced with a Dirac equation; and then, of course, the electromagnetic interaction should no longer be treated classically, and a proper QED calculation should be done. Thus if you are worried about what happens to the electron close to the nucleus in Schrödinger theory, you are asking an unphysical question. What could still be a valid result (and it might look very similar to a stability result) is to show that you don't really get out of the area of applicability of your theory, as the kinetic term prevents the electrons from spending too much time very close to the nucleus (classically speaking).

What is shared by all these examples is that some calculation of a physically finite property encounters infinities that have to be treated, and I tried to show that those typically arise because earlier in your calculation you have not been careful and have stretched an approximation beyond its validity. If you had taken that into account, there wouldn't have been an infinity, but possibly a much more complicated calculation. And in lucky cases (similar to the renormalisable situation) you can get away with ignoring these complications. However, you can sleep much better if you know that there would have been another calculation without infinities.

Update: I have just found a very nice text by Terry Tao on a similar subject, on "knowing there is a rigorous version somewhere".

Joe Polchinski said...

In chapter 1 of my book, eq. 1.3.34, I derive the 'correct' value of this infinite sum by the requirement that one cancel the Weyl anomaly introduced by the regulator by a local counterterm; this fixes the finite value completely. At various points later in the book (see index item 'normal ordering constants') I derive the constant by a fully finite calculation that respects the Weyl symmetry throughout.

Robert said...
For those readers who don't have Joe's book at hand, let me reproduce his argument: In the cut-off version, epsilon is in fact dimensionful, and a constant, n-independent term would likewise be the consequence of a world-sheet cosmological constant. Thus the 1/epsilon^2 is in fact a renormalisation of the world-sheet cosmological constant. This would be in conflict with Weyl invariance, and thus one has to add a counter term which makes it vanish. This is what I should have written instead of calling the argument "obscure". This still leaves me looking for a physical justification for the introduction of s in the zeta regularisation, and for the hope that physics is actually analytic in s. Maybe this could be related to dimensional regularisation on the world sheet?

Lumo said...

Dear robert, I am somewhat confused by your skepticism. A similar comment to yours by ori - I suppose it could even be Ori Ganor - appeared on my blog. Why am I confused? Because I think that Joe's argument is, at the level of physics, a rigorous argument.

Let me start with the vacuum energy subtraction. We require Weyl invariance of the physical quantities. So the total zero-point function must vanish. It is clearly the case because such a result is dimensionful and any dimensionful quantity has a scale and breaks scale invariance. So one exactly needs to add a counterterm to have the total vacuum energy vanish, and this counterterm thus exactly has the role of killing the 1/epsilon^2 term. Joe has a lot of detailed extra factors of length etc. in his formulae to make it really transparent how the terms depend on the length. This makes the mathematical essence of the regularization more convoluted than it is, but it should make the physical interpretation much more unambiguous.

Now the zeta function. You ask about the "hope" that physics is analytical in complex "s". I don't know why you call it a hope. It is an easily demonstrable fact that is, as you correctly hint, analogous to the case of dim reg. Just substitute a complex "s" and calculate what the result is. You only get nice functions, so of course the result is locally holomorphic in "s". Just like in the case of dimreg, one doesn't have to have an interpretation of complex values of "s". The only thing we call "physics for complex s" are the actual formulae and their results, and they are clearly holomorphic. Beisert and Tseytlin have checked a highly nontrivial zeta-function regularization of some AdS/CFT spinning calculation up to four loops. That's where they argued that the three-loop discrepancy should be understood as an order-of-limits issue. See also a 600+ citation paper by Hawking who checks curved spaces in all dimensions etc. These regularizations work, and it's no coincidence.

Robert said...

You misunderstand me. I have no doubt that in field theory calculations - where, for example, you want to compute tr(log(O)) for some operator O, as this gives you the 1-loop effective action - zeta function regularisation of log(O) works as well as any other regularisation (and often more nicely, as it preserves more symmetries than more ad hoc versions). What I am looking for is a version where you not only reinterpret n as 1/n^s for s=-1 once you encounter an obviously divergent expression, but start out with something that includes s from the beginning, such that for, say, Re(s)>1 everything is finite at all stages and in the end you can take s->-1 analytically.
Can you come up with (s-dependent) definitions of the a_n and their commutation relations, or of the L_n, such that the commutator of the L_n's (which is something you calculate rather than define) gives the expression including s?

BTW, in the LQG version of the string, the correct constant appears as Tr([A_2,B_2]), where A and B are generators of diffeomorphisms and the subscript 2 refers to A_2 = (A + JAJ)/2, where J multiplies positive modes by i and negative modes by -i. Thus it's the 'beta'-part in the language of Bogoliubov transformations. Needless to mention, this expression is in fact finite even though it is a trace in an infinite dimensional Hilbert space, as it can be shown that A_2 is a Hilbert-Schmidt operator (that is, the product of two such operators has a finite trace). Of course you need an infinite dimensional space for a commutator to have a non-vanishing trace.

Lumo said...

More generally about your comments, Robert. I think that it is entirely wrong to say "this argument is dodgy blah blah blah" (in the context of the vacuum energy subtraction) because the argument is transparent and rigorous when looked at properly. Both of them, in fact.

Also, I disagree with your general statement that an infinity means that we have asked a wrong question. Only IR divergences are about wrong questions. UV divergences are about a theory being effective. But even QCD, which is UV finite, gives UV divergences - they're responsible e.g. for the running. There's no way to ask a better question about the exact QCD theory that we know and love that would remove the infinity. QCD also falsifies your statement that "the integral over all p is unphysical". It's not unphysical. QCD is well-defined at arbitrarily high values of "p", but it still requires one to deal with and subtract the infinities properly.

Sorry to say, but the comments that physicists are always expected to say "we're dodgy, everything is unreliable, we need experiments" just mean that you don't quite understand the technology. Your comments are Woit-Lite comments. In each case, there is a completely well-defined answer to the questions whether a particular symmetry constrains the terms or not, whether a given regularization preserves the symmetry or not, and consequently, whether a given regularization gives a correct result or not. There is no ambiguity here whatsoever, and the examples listed are guaranteed to give the right results.

Lumo said...

Dear Robert, concerning your comment, I understood pretty well that you wanted to define the whole theory for complex unphysical values of "s". That's exactly why I pre-emptively wrote that it is wrong to try to define the whole theory for wrong values of "s", just like it is wrong to define a theory in a complex dimension "d" in dimreg. Such a theory probably doesn't exist, especially not in the dimreg case. But you don't need the full theory in 3.98+0.2i spacetime dimensions in order to prove that dimreg preserves gauge invariance, do you? In the same way, you don't need to define the operator algebra in a CFT for complex values of "s" or something like that.

I don't understand how to combine this discussion with the "LQG version of a string". The texts I wrote above were trying to help clarify how the quantities actually behave in correct physics, while LQG is a supreme example of how the divergences and other things are treated physically incorrectly. Of course the things I write are incompatible with the LQG quantization. But the reason is that the LQG quantization is wrong, while e.g.
Joe's arguments are correct. Your conclusion that physics is ambiguous is not a correct conclusion.

Robert said...

All I am saying is that you should have a way (fine if done retroactively) to treat infinities without them actually occurring. And if you do that by adding an epsilon-dependent counter term (that diverges by itself when you take epsilon to 0), that's fine with me. As long as you can physically justify it. Otherwise you are prone to arguments like this one. And sorry, "an argument is correct if it gives the correct result" is not good enough. I would like to have a way to decide if an argument is valid before I know the answer from somewhere else.

Robert said...

By "LQG string" I meant our version, where we (in a slightly mathematically more careful language) re-derive the usual central charge (same content, different formalism), rather than the polymer version (different content, of which you know I do not approve).

Lumo said...

Dear Robert, I disagree that one can only trust a theory if infinities never occur. A particular regularization that replaces infinities by finite numbers as the intermediate results is just a mathematical trick, but the actual physical result is independent of all details of the regularization, which really means that it directly follows from a correct calculation inside the theory that contains these infinities. In other words, you only need the Lagrangian of standard QCD (one that leads to divergent Feynman diagrams) plus correct physical rules that constrain/dictate how to deal with infinities to get the right QCD predictions. You don't need any theory that is free of infinities. Such a theory is just a psychological help if one feels uncertain.

I agree with you that one should be able to decide whether an argument is correct before the result is compared with another one. And indeed, it is possible. This is what this discussion is about. You argue that it is impossible to decide whether an argument or calculation is correct as long as it started with an infinite expression, and others are telling you that it is possible.

If you rederive the same physics in what you call the "LQG string", why do you talk about the "LQG string" as opposed to just a "string"? Can't you reformulate your argument in normal physics, as opposed to one of the kinds of LQG physics?

Sabine's calculation you linked to is manifestly wrong because she doubles one of the infinities in order to subtract them and gets a wrong finite part. There was no symmetry principle that would constrain the right result in her calculation. The original integral was perfectly convergent and she just added (2-1) times infinity (by rescaling the cutoff by a factor of 2 in one term), pretending that 2-1=0. I don't quite know why you think that I am prone to such arguments. ;-) Maybe Sabine is, but I am not. She didn't make any proper analysis of counterterms, any proper analysis of any symmetries, and she didn't make any analytical continuation of anything to a convergent region either. Why do you think it's analogous to a valid calculation?

If you mentioned it because of the relationship between 1+2+3+... and 1-2+3-4+..., the derived relationship between them may remind you of Sabine's wrong calculation. But it is not analogous. These rescalings and alternating sums can be calculated by the zeta function regularization, which allows me to make these arguments, adding subseries and rescaling them. For example, the correct sum for antiperiodic fields, 1/2 + 3/2 + 5/2 + ...,
can also be calculated by taking the normal sum 1+2+3+... and subtracting a multiple of it from itself. So if the zeta-function reg gives a Weyl-invariant value of the alternating sum, it also gives the right value of the normal sum, as well as the shifted Neveu-Schwarz sum and others.

Lumo said...

Let me say more physically what she actually did. In order to calculate a convergent integral in the momentum space (x), she wrote it as a difference of two divergent ones. That would be perfectly compatible with physics, and nothing wrong could follow from it. The error only occurs when she rescales the "x" by a factor of 1/2 or 2 in the two terms. This is equivalent to confusing what her cutoff is - by a factor of two up or down. Because her integral is logarithmically divergent, it is a standard example of a running coupling. So she has effectively added "g(2 lambda) - g(lambda/2)" - the difference of gauge couplings at two different scales - pretending that it is zero. Of course, it is not zero: this is exactly the way running couplings arise. An experienced physicist would never make this error - using inconsistent cutoffs for different contributions in the same expression. Hers is just a physics error, if we interpret it as a physics calculation. One can't say that her calculation is analogous to the correct calculations, such as Joe's subtraction of the vacuum energy, even though it seems that this is precisely what you're saying. There is a very clear a priori difference between correct and wrong calculations: correct ones have no physical errors of this or other kinds.

Robert said...

My final comment for tonight, for those readers who did not get this from my comments above: I completely agree with Joe's derivation of including a regularisation and imposing Weyl invariance. Do not try to convince me it is correct. It is. My point about Sabine's calculation was that you can of course (and nobody, I believe, doubts this) produce nonsense if you are not careful about infinite quantities. Once you regulate, the error is obvious.

My final remark (and this is not serious, thus I will delete any comments referring to it) is that there is a shorter version of Sabine's argument which goes: "int dx/x is always zero in dimensional regularisation" (this is how I learned to actually apply dim reg from a particle phenomenologist: bring your integrals to the form finite + int dx/x and set the second term to zero).

Anonymous said...

When physicists proceed 'formally', it's usually explicitly stated as such. There are many examples throughout history where this actually turns out to be wrong when done rigorously. The interesting thing (for mathematicians) is when it turns out to be correct, as it usually means there's some hidden principle in there somewhere, and it often can lead to new and nontrivial mathematics (e.g. distribution theory).

Lumo said...

Dear Robert, if you exactly agree with Joe's derivation, why do you exactly write that this derivation is based on an "obscure analogy with minimal subtraction"? There is nothing obscure about it and, if looked at properly, there is nothing obscure about the minimal subtraction either. One can easily prove why it works whenever it works. I agree that one must be careful about infinite quantities, but we seem to disagree about what it means to be careful. In my picture, it means that you must carefully include them whenever they are nonzero.
In the polymer LQG string that you researched, for example, they are very careful to throw away all these important terms arising as infinities, which is wrong, and your work is an interpolation between the correct result and the wrong result, which is thus also wrong, at least partially. ;-)

I disagree that your "nonserious" comment is not serious. It is absolutely serious. Don't try to erase this comment because of it. The comment that you call "nonserious" is the standard insight - certainly taught in QFT courses at most good graduate schools - that power law divergences are zero in dim reg. In the case of the log divergence it is still true, as long as you consistently extract the finite part by taking correct limits of the integral.

Thomas Larsson said...

Why are zeta-function techniques better than simply calculating the action of the Virasoro generators on some state? It is very easy to compute [L_m, L_-m] |0>, and you can read off the central charge from this, without ever having to introduce any infinities. What is less trivial is how to generalize this to d dimensions, where the diffeomorphism generators are labelled by vectors m = (m_0, m_1, ...) in Z^d rather than a scalar integer m in Z. In fact, I was stuck on this problem for many years (and ran out of funding in the meantime), before it was solved in a seminal paper by Rao and Moody.

amused said...

Hi Robert, Lubos, and anyone else, I have a question/doubt about something Lubos wrote in his post on this topic and would appreciate your views or clarifications. (Normally I would post this on the blog of the person who wrote it, but seeing as in this case it's Lubos... hope you don't mind me posting it here instead.)

LM wrote: "The fact that different regularizations lead to the same final results is a priori non-trivial but can be mathematically demonstrated to be inevitably true by the tools of the renormalization group."

Is this really true? E.g., I don't recall any mention of this in Peskin & Schroeder's book, even though they discuss the RG in detail... To explain my doubts, consider the case of perturbative QCD: two different regularizations which preserve gauge invariance are dimensional reg. and the lattice formulation. In fact there are a whole lot of different possible lattice discretizations, and not all of them can be expected to produce results which agree with the physical ones obtained using dimensional regularization. E.g., there must at least be some kind of locality condition on the lattice QCD formulation that one uses, and I don't think anyone knows at present what the mildest possible locality requirement is that guarantees that the lattice formulation will produce correct results. In light of this, I don't see how it can be asserted that different regularizations (which preserve the appropriate symmetries) are always guaranteed to give the same final results...

Robert said...

I know there is some literature about different regularisation/renormalisation schemes giving identical results, but trying to locate some using Google Scholar was unsuccessful. I know for sure that BPHZ and Epstein-Glaser have been shown to be equivalent, and I would be surprised if the ones more often used in practical calculations (i.e. dim reg) had not been connected as well. Step zero for such a proof (which in character is mathematical and not very physics oriented) is to define what exactly you mean by scheme X.
That would have to be a prescription that works at all loop orders for all graphs, and not like in QFT textbooks, where a few simple graphs are calculated (most often only at one loop, so they do not encounter overlapping divergences) and then a "you proceed along the same lines for other graphs" instruction is given.

Lattice regularisation, however, is very different in spirit, as it is not perturbative (it does not expand in the coupling constant), so it is not supposed to match a perturbative calculation up to some fixed loop order. Thus it does not compare directly with Feynman graph calculations. Only the continuum limit of the lattice theory is supposed to match an all-loop calculation that also takes into account non-perturbative effects. In fact, the lattice version of gauge theories is probably the best definition of what you mean by "the full quantum theory including non-perturbative effects", as those are not computed directly in perturbation theory and there are only indirect hints from asymptotic expansions and of course S-duality. OTOH, starting from the lattice theory, you have to show that the continuum limit in fact has Lorentz symmetry and is causal, two properties that this regularisation destroys. Once you manage this, it's likely you are not too far from claiming the 1 million dollars.

amused said...

Thanks Robert. You seem to have in mind the nonperturbative lattice formulation used in computer simulations, but there is also a perturbative version which does expand in the coupling constant - see, e.g., T. Reisz, NPB 318 (1989) 417, where perturbative renormalizability of lattice QCD was proved to all orders in the loop expansion. However, it is not clear to me that this will always give the correct physical results for any choice of lattice QCD formulation. There must surely be some conditions on the formulation; in particular some minimal locality condition. That's why I was surprised by the claim that any regularization (preserving the symmetries) must lead to the same end results. (Btw, extraction of physics results from the lattice involves perturbative calculations as well as the computer simulations. I recall some nice posts about this on the "Life on the Lattice" blog at some point...)

cecil kirksey said...

Interesting subject. I think I can accept the "mathematical" definition of summing divergent series, because the "sum" can be defined in a potentially consistent manner. However, in any real-world situation, does it EVER make sense to use such a divergent series? Would it ever make sense to add (sum?) an infinite number of measurable quantities? If not, exactly what is being added in ST? Thanks.
Magnetic potential

The magnetic potential provides a mathematical way to define a magnetic field in classical electromagnetism. It is analogous to the electric potential, which defines the electric field in electrostatics. Like the electric potential, it is not directly observable - only the field it describes may be measured. There are two ways to define this potential - as a scalar and as a vector potential. (Note, however, that the magnetic vector potential is used much more often than the magnetic scalar potential.) The magnetic vector potential is often called simply the magnetic potential, vector potential, or electromagnetic vector potential. If the magnetic vector potential is time-dependent, it also defines a contribution to the electric field.

Magnetic vector potential

The magnetic vector potential $\mathbf{A}$ is a three-dimensional vector field whose curl is the magnetic field, i.e.:

$$\mathbf{B} = \nabla \times \mathbf{A}.$$

Since the magnetic field is divergence-free (i.e. $\nabla \cdot \mathbf{B} = 0$, called Gauss's law for magnetism), this guarantees that $\mathbf{A}$ always exists (by Helmholtz's theorem). Unlike the magnetic field, the electric field is derived from both the scalar and vector potentials:

$$\mathbf{E} = -\nabla \Phi - \frac{\partial \mathbf{A}}{\partial t}.$$

Starting with the above definitions:

$$\nabla \cdot \mathbf{B} = \nabla \cdot (\nabla \times \mathbf{A}) = 0,$$

$$\nabla \times \mathbf{E} = \nabla \times \left( -\nabla \Phi - \frac{\partial \mathbf{A}}{\partial t} \right) = -\frac{\partial}{\partial t}(\nabla \times \mathbf{A}) = -\frac{\partial \mathbf{B}}{\partial t}.$$

Note that the divergence of a curl always gives zero. Conveniently, this solves the second and third of Maxwell's equations automatically, which is to say that a continuous magnetic vector potential field is guaranteed not to result in magnetic monopoles.

The vector potential $\mathbf{A}$ is used extensively when studying the Lagrangian in classical mechanics (see Lagrangian#Special relativistic test particle with electromagnetism), and in quantum mechanics, such as the Schrödinger equation for charged particles or the Dirac equation. For example, one phenomenon whose analysis involves $\mathbf{A}$ is the Aharonov-Bohm effect. In the SI system, the units of $\mathbf{A}$ are volt-seconds per metre.

Gauge choices

The above definition does not define the magnetic vector potential uniquely because, by definition, we can arbitrarily add curl-free components to the magnetic potential without changing the observed magnetic field. Thus, there is a degree of freedom available when choosing $\mathbf{A}$. This freedom is known as gauge invariance.

Magnetic scalar potential

The magnetic scalar potential is another useful tool in describing the magnetic field around a current source. It is only defined in regions of space in the absence of currents. The magnetic scalar potential is defined by the equation:

$$\mathbf{B} = -\mu_0 \nabla \psi.$$

Applying Ampère's law to the above definition, we get:

$$\mathbf{J} = \frac{1}{\mu_0} \nabla \times \mathbf{B} = -\nabla \times \nabla \psi = 0.$$

Solenoidality of the magnetic induction leads to Laplace's equation for the potential:

$$\Delta \psi = 0.$$

Since in any continuous field the curl of a gradient is zero, this would suggest that magnetic scalar potential fields cannot support any sources. In fact, sources can be supported by applying discontinuities to the potential field (thus the same point can have two values for points along the discontinuity).
These discontinuities are also known as "cuts". When solving magnetostatics problems using the magnetic scalar potential, the source currents must be applied at the discontinuity.

Electromagnetic four-potential

In the context of special relativity, it is natural to join the magnetic vector potential together with the (scalar) electric potential into the electromagnetic potential, also called the "four-potential". One motivation for doing so is that the four-potential turns out to be a mathematical four-vector. Thus, using standard four-vector transformation rules, if the electric and magnetic potentials are known in one inertial reference frame, they can be simply calculated in any other inertial reference frame.

Another, related motivation is that the content of classical electromagnetism can be written in a concise and convenient form using the electromagnetic four-potential, especially when the Lorenz gauge is used. In particular, in abstract index notation, the set of Maxwell's equations (in the Lorenz gauge) may be written (in Gaussian units) as follows:

$$\partial^\mu A_\mu = 0,$$

$$\Box^2 A_\mu = \frac{4\pi}{c} J_\mu,$$

where $\Box^2$ is the d'Alembertian and $J$ is the four-current. The first equation is the Lorenz gauge condition, while the second contains Maxwell's equations. Yet another motivation for creating the electromagnetic four-potential is that it plays a very important role in quantum electrodynamics.
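As a quick check of the identities above (an editorial sketch, not part of the original article), one can verify symbolically with Python's sympy that the divergence of a curl vanishes, and that a gauge transformation $\mathbf{A} \to \mathbf{A} + \nabla\chi$ leaves $\mathbf{B}$ unchanged. The particular fields chosen below are arbitrary examples:

```python
from sympy import sin, exp
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')  # Cartesian coordinates R.x, R.y, R.z

# An arbitrary smooth vector potential A (example choice).
A = (R.x**2 * R.y) * R.i + sin(R.z) * R.j + exp(R.x * R.y) * R.k

B = curl(A)
print(divergence(B))       # prints 0: div(curl A) vanishes identically

# Gauge transformation: adding grad(chi) to A does not change B.
chi = R.x * R.y * R.z**2   # arbitrary gauge function
A_gauged = A + gradient(chi)
print(curl(A_gauged) - B)  # prints 0 (the zero vector)
```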
Dirac equation

From Wikipedia, the free encyclopedia

In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles such as electrons and quarks for which parity is a symmetry. It is consistent with both the principles of quantum mechanics and the theory of special relativity,[1] and was the first theory to account fully for special relativity in the context of quantum mechanics. It was validated by accounting for the fine details of the hydrogen spectrum in a completely rigorous way.

The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved, which was experimentally confirmed several years later. It also provided a theoretical justification for the introduction of several-component wave functions in Pauli's phenomenological theory of spin; the wave functions in the Dirac theory are vectors of four complex numbers (known as bispinors), two of which resemble the Pauli wavefunction in the non-relativistic limit, in contrast to the Schrödinger equation, which described wave functions of only one complex value. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation.

Although Dirac did not at first fully appreciate the importance of his results, the entailed explanation of spin as a consequence of the union of quantum mechanics and relativity - and the eventual discovery of the positron - represents one of the great triumphs of theoretical physics. This accomplishment has been described as fully on a par with the works of Newton, Maxwell, and Einstein before him.[2] In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles.

Mathematical formulation

The Dirac equation in the form originally proposed by Dirac is:[3]

    \left( \beta mc^2 + c \sum_{k=1}^{3} \alpha_k p_k \right) \psi(x, t) = i\hbar \frac{\partial \psi(x, t)}{\partial t}

where ψ = ψ(x, t) is the wave function for the electron of rest mass m with spacetime coordinates x, t. The p1, p2, p3 are the components of the momentum, understood to be the momentum operator in the Schrödinger equation. Also, c is the speed of light, and ħ is the Planck constant divided by 2π. These fundamental physical constants reflect special relativity and quantum mechanics, respectively.

Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have a bearing on the problem of atomic spectra. Up until that time, attempts to make the old quantum theory of the atom compatible with the theory of relativity - attempts based on discretizing the angular momentum stored in the electron's possibly non-circular orbit around the atomic nucleus - had failed, and the new quantum mechanics of Heisenberg, Pauli, Jordan, Schrödinger, and Dirac himself had not developed sufficiently to treat this problem. Although Dirac's original intentions were satisfied, his equation had far deeper implications for the structure of matter and introduced new mathematical classes of objects that are now essential elements of fundamental physics.

The new elements in this equation are the 4 × 4 matrices αk and β, and the four-component wave function ψ. There are four components in ψ because evaluating it at any given point in configuration space yields a bispinor.
It is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron (see below for further discussion).

The 4 × 4 matrices αk and β are all Hermitian and have squares equal to the identity matrix:

    \alpha_i^2 = \beta^2 = I_4,

and they all mutually anticommute (if i and j are distinct):

    \alpha_i \alpha_j + \alpha_j \alpha_i = 0, \qquad \alpha_i \beta + \beta \alpha_i = 0.

The single symbolic equation thus unravels into four coupled linear first-order partial differential equations for the four quantities that make up the wave function. These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the gamma matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. In turn, Clifford's ideas had emerged from the mid-19th-century work of the German mathematician Hermann Grassmann in his Lineale Ausdehnungslehre (Theory of Linear Extensions). The latter had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics.

Making the Schrödinger equation relativistic

The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle:

    -\frac{\hbar^2}{2m} \nabla^2 \phi = i\hbar \frac{\partial \phi}{\partial t}.

The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy. Because relativity treats space and time as a whole, a relativistic generalization of this equation requires that space and time derivatives enter symmetrically, as they do in the Maxwell equations that govern the behavior of light - the equations must be differentially of the same order in space and time. In relativity, the momentum and the energy are the space and time parts of a spacetime vector, the four-momentum, and they are related by the relativistically invariant relation

    E^2 = m^2 c^4 + p^2 c^2,

which says that the length of this four-vector is proportional to the rest mass m. Substituting the operator equivalents of the energy and momentum from the Schrödinger theory, we get the Klein-Gordon equation describing the propagation of waves, constructed from relativistically invariant objects:

    \left( \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2} \right) \phi = 0,

with the wave function ϕ being a relativistic scalar: a complex number which has the same numerical value in all frames of reference. Space and time derivatives both enter to second order. This has a telling consequence for the interpretation of the equation. Because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time-derivative in order to solve definite problems. Since both may be specified more or less arbitrarily, the wave function cannot maintain its former role of determining the probability density of finding the electron in a given state of motion. In the Schrödinger theory, the probability density is given by the positive definite expression

    \rho = \phi^* \phi,

and this density is convected according to the probability current vector

    \mathbf{J} = -\frac{i\hbar}{2m} \left( \phi^* \nabla \phi - \phi \nabla \phi^* \right),

with the conservation of probability current and density following from the continuity equation:

    \nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t} = 0.

The fact that the density is positive definite and convected according to this continuity equation implies that we may integrate the density over a certain domain and set the total to 1, and this condition will be maintained by the conservation law. A proper relativistic theory with a probability density current must also share this feature.
Now, if we wish to maintain the notion of a convected density, then we must generalize the Schrödinger expression for the density and current so that space and time derivatives again enter symmetrically in relation to the scalar wave function. We are allowed to keep the Schrödinger expression for the current, but must replace the probability density by the symmetrically formed expression

    \rho = \frac{i\hbar}{2mc^2} \left( \psi^* \frac{\partial \psi}{\partial t} - \psi \frac{\partial \psi^*}{\partial t} \right),

which now becomes the 4th component of a spacetime vector, and the entire probability 4-current density has the relativistically covariant expression

    J^\mu = \frac{i\hbar}{2m} \left( \psi^* \partial^\mu \psi - \psi \, \partial^\mu \psi^* \right).

The continuity equation is as before. Everything is compatible with relativity now, but we see immediately that the expression for the density is no longer positive definite - the initial values of both ψ and ∂ψ/∂t may be freely chosen, and the density may thus become negative, something that is impossible for a legitimate probability density. Thus, we cannot get a simple generalization of the Schrödinger equation under the naive assumption that the wave function is a relativistic scalar and the equation it satisfies second order in time.

Although it is not a successful relativistic generalization of the Schrödinger equation, this equation is resurrected in the context of quantum field theory, where it is known as the Klein-Gordon equation, and describes a spinless particle field (e.g. the pi meson). Historically, Schrödinger himself arrived at this equation before the one that bears his name but soon discarded it. In the context of quantum field theory, the indefinite density is understood to correspond to the charge density, which can be positive or negative, and not the probability density.

Dirac's coup

Dirac thus thought to try an equation that was first order in both space and time. One could, for example, formally (i.e. by abuse of notation) take the relativistic expression for the energy,

    E = c \sqrt{p^2 + m^2 c^2},

replace p by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible.

As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator thus:

    \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} = \left( A \partial_x + B \partial_y + C \partial_z + \frac{i}{c} D \partial_t \right) \left( A \partial_x + B \partial_y + C \partial_z + \frac{i}{c} D \partial_t \right).

On multiplying out the right side we see that, in order to get all the cross-terms such as ∂x∂y to vanish, we must assume

    A^2 = B^2 = C^2 = D^2 = 1,

with A, B, C, D all mutually anticommuting. Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if A, B, C and D are matrices, with the implication that the wave function has multiple components. This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least 4 × 4 matrices to set up a system with the properties required - so the wave function had four components, not two, as in the Pauli theory, or one, as in the bare Schrödinger theory. The four-component wave function represents a new class of mathematical object in physical theories that makes its first appearance here. Given the factorization in terms of these matrices, one can now write down immediately an equation

    \left( A \partial_x + B \partial_y + C \partial_z + \frac{i}{c} D \partial_t \right) \psi = \kappa \psi

with κ to be determined.
Applying again the matrix operator on both sides yields

    \left( \nabla^2 - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) \psi = \kappa^2 \psi.

On taking κ = mc/ħ we find that all the components of the wave function individually satisfy the relativistic energy-momentum relation. Thus the sought-for equation that is first-order in both space and time is

    \left( A \partial_x + B \partial_y + C \partial_z + \frac{i}{c} D \partial_t \right) \psi = \frac{mc}{\hbar} \psi,

and because one may take A = iβα1, B = iβα2, C = iβα3 and D = β, we get the Dirac equation as written above.

Covariant form and relativistic invariance

To demonstrate the relativistic invariance of the equation, it is advantageous to cast it into a form in which the space and time derivatives appear on an equal footing. New matrices are introduced as follows:

    \gamma^0 = \beta, \qquad \gamma^k = \beta \alpha_k \quad (k = 1, 2, 3),

and the equation takes the form (remembering the definition of the covariant components of the 4-gradient, and especially that ∂0 = (1/c) ∂/∂t)

    i\hbar \gamma^\mu \partial_\mu \psi - mc\, \psi = 0,

where there is an implied summation over the values of the twice-repeated index μ = 0, 1, 2, 3, and ∂μ is the 4-gradient. In practice one often writes the gamma matrices in terms of 2 × 2 sub-matrices taken from the Pauli matrices and the 2 × 2 identity matrix. Explicitly the standard representation is

    \gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \qquad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix}.

The complete system is summarized using the Minkowski metric on spacetime in the form

    \{ \gamma^\mu, \gamma^\nu \} = 2 \eta^{\mu\nu} I_4,

where the bracket expression denotes the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-dimensional space with metric signature (+ − − −). The specific Clifford algebra employed in the Dirac equation is known today as the Dirac algebra. Although not recognized as such by Dirac at the time the equation was formulated, in hindsight the introduction of this geometric algebra represents an enormous stride forward in the development of quantum theory.

The Dirac equation may now be interpreted as an eigenvalue equation, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportionality constant being the speed of light:

    \gamma^\mu p_\mu \, \psi = mc\, \psi.

Using {\partial\!\!\!/} (pronounced "d-slash"[4]) in Feynman slash notation, which includes the gamma matrices as well as a summation over the spinor components in the derivative itself, the Dirac equation becomes:

    i\hbar \, {\partial\!\!\!/} \, \psi - mc\, \psi = 0.

In practice, physicists often use units of measure such that ħ = c = 1, known as natural units. The equation then takes the simple form

    (i \gamma^\mu \partial_\mu - m) \psi = 0.

A fundamental theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transformation:

    \gamma'^\mu = S^{-1} \gamma^\mu S.

If in addition the matrices are all unitary, as are the Dirac set, then S itself is unitary:

    \gamma'^\mu = U^\dagger \gamma^\mu U.

The transformation U is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the space and time coordinates, and on the derivative operators, which form a covariant vector. For the operator γμ∂μ to remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the fundamental theorem, we may replace the new set by the old set subject to a unitary transformation.
In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form

    i\hbar \gamma'^\mu \partial'_\mu \psi - mc\, \psi = 0.

If we now define the transformed spinor

    \psi' = U \psi,

then we have the transformed Dirac equation in a way that demonstrates manifest relativistic invariance:

    i\hbar \gamma^\mu \partial'_\mu \psi' - mc\, \psi' = 0.

Thus, once we settle on any unitary representation of the gammas, it is final provided we transform the spinor according to the unitary transformation that corresponds to the given Lorentz transformation. The various representations of the Dirac matrices employed will bring into focus particular aspects of the physical content in the Dirac wave function (see below). The representation shown here is known as the standard representation - in it, the wave function's upper two components go over into Pauli's 2-spinor wave function in the limit of low energies and small velocities in comparison to light.

The considerations above reveal the origin of the gammas in geometry, hearkening back to Grassmann's original motivation - they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as γμγν represent oriented surface elements, and so on. With this in mind, we can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is

    V = \frac{1}{4!} \epsilon_{\mu\nu\alpha\beta} \, \gamma^\mu \gamma^\nu \gamma^\alpha \gamma^\beta.

For this to be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of √g, where g is the determinant of the metric tensor. Since this is negative, that factor is imaginary. Thus

    V = i \gamma^0 \gamma^1 \gamma^2 \gamma^3.

This matrix is given the special symbol γ5, owing to its importance when one is considering improper transformations of spacetime, that is, those that change the orientation of the basis vectors. In the standard representation, it is

    \gamma^5 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}.

This matrix will also be found to anticommute with the other four Dirac matrices:

    \gamma^5 \gamma^\mu + \gamma^\mu \gamma^5 = 0.

It takes a leading role when questions of parity arise, because the volume element as a directed magnitude changes sign under a spacetime reflection. Taking the positive square root above thus amounts to choosing a handedness convention on spacetime.

Conservation of probability current

By defining the adjoint spinor

    \bar\psi = \psi^\dagger \gamma^0,

where ψ† is the conjugate transpose of ψ, and noticing that

    (\gamma^\mu)^\dagger \gamma^0 = \gamma^0 \gamma^\mu,

we obtain, by taking the Hermitian conjugate of the Dirac equation and multiplying from the right by γ0, the adjoint equation:

    \bar\psi \left( i\hbar \gamma^\mu \overleftarrow{\partial}_\mu + mc \right) = 0,

where ∂μ is understood to act to the left. Multiplying the Dirac equation by ψ̄ from the left, and the adjoint equation by ψ from the right, and combining the two, produces the law of conservation of the Dirac current:

    \partial_\mu \left( \bar\psi \gamma^\mu \psi \right) = 0.

Now we see the great advantage of the first-order equation over the one Schrödinger had tried - this is the conserved current density required by relativistic invariance, only now its 4th component is positive definite and thus suitable for the role of a probability density:

    J^0 = \bar\psi \gamma^0 \psi = \psi^\dagger \psi.

Because the probability density now appears as the fourth component of a relativistic vector, and not a simple scalar as in the Schrödinger equation, it will be subject to the usual effects of the Lorentz transformations, such as time dilation. Thus, for example, atomic processes that are observed as rates will necessarily be adjusted in a way consistent with relativity, while those involving the measurement of energy and momentum, which themselves form a relativistic vector, will undergo parallel adjustment which preserves the relativistic covariance of the observed values. See Dirac spinor for details of solutions to the Dirac equation.
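As a quick numerical sanity check - a sketch of mine, not part of the article - the standard-representation gammas built from the Pauli matrices satisfy the Clifford relation {γμ, γν} = 2ημν I4, and γ5 = iγ0γ1γ2γ3 anticommutes with all four and has the block form given above:

    import numpy as np

    # Pauli matrices and 2x2 blocks
    s = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
    I2, Z2 = np.eye(2), np.zeros((2, 2), complex)

    # Standard (Dirac) representation
    g0 = np.block([[I2, Z2], [Z2, -I2]])
    g = [g0] + [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

    eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric signature (+ - - -)
    for mu in range(4):
        for nu in range(4):
            anti = g[mu] @ g[nu] + g[nu] @ g[mu]
            assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))

    g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
    assert np.allclose(g5, np.block([[Z2, I2], [I2, Z2]]))   # standard-rep form
    for gm in g:
        assert np.allclose(g5 @ gm + gm @ g5, 0)             # anticommutation
    print("Clifford relations and gamma5 properties verified")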
Note that since the Dirac operator acts on 4-tuples of square-integrable functions, its solutions should be members of the same Hilbert space. The fact that the energies of the solutions do not have a lower bound is unexpected - see the hole theory section below for more details.

Comparison with the Pauli theory

The necessity of introducing half-integer spin goes back experimentally to the results of the Stern-Gerlach experiment. A beam of atoms is run through a strong inhomogeneous magnetic field, which then splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms the beam was split in two - the intrinsic angular momentum therefore could not be an integer, because even if it were as small as possible, 1, the beam would be split into three parts, corresponding to atoms with Lz = −1, 0, +1. The conclusion is that silver atoms have net intrinsic angular momentum of 1/2. Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as follows in SI units:

    H = \frac{1}{2m} \left( \boldsymbol\sigma \cdot \left( \mathbf{p} - e \mathbf{A} \right) \right)^2 + e\Phi.

(Note that bold-faced characters imply Euclidean vectors in 3 dimensions, whereas the Minkowski four-vector Aμ can be defined as Aμ = (Φ/c, −A).) Here A and Φ represent the components of the electromagnetic four-potential in their standard SI units, and the three sigmas are the Pauli matrices. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual classical Hamiltonian of a charged particle interacting with an applied field in SI units:

    H = \frac{ \left( \mathbf{p} - e \mathbf{A} \right)^2 }{2m} - \frac{e\hbar}{2m} \, \boldsymbol\sigma \cdot \mathbf{B} + e\Phi.

This Hamiltonian is now a 2 × 2 matrix, so the Schrödinger equation based on it must use a two-component wave function. On introducing the external electromagnetic 4-vector potential into the Dirac equation in a similar way, known as minimal coupling, it takes the form:

    \left( \gamma^\mu \left( i\hbar \partial_\mu - e A_\mu \right) - mc \right) \psi = 0.

A second application of the Dirac operator will now reproduce the Pauli term exactly as before, because the spatial Dirac matrices multiplied by i have the same squaring and commutation properties as the Pauli matrices. What is more, the value of the gyromagnetic ratio of the electron, standing in front of Pauli's new term, is explained from first principles. This was a major achievement of the Dirac equation and gave physicists great faith in its overall correctness. There is more, however. The Pauli theory may be seen as the low-energy limit of the Dirac theory in the following manner. First the equation is written in the form of coupled equations for 2-spinors, with the SI units restored (writing ψ = (χ, η) for the upper and lower 2-spinor components, and π = p − eA):

    i\hbar \, \partial_t \chi = c \, \boldsymbol\sigma \cdot \boldsymbol\pi \, \eta + \left( mc^2 + e\Phi \right) \chi,

    i\hbar \, \partial_t \eta = c \, \boldsymbol\sigma \cdot \boldsymbol\pi \, \chi + \left( -mc^2 + e\Phi \right) \eta.

Assuming the field is weak and the motion of the electron non-relativistic, we have the total energy of the electron approximately equal to its rest energy, and the momentum going over to the classical value, and so the second equation may be written

    \eta \approx \frac{ \boldsymbol\sigma \cdot \boldsymbol\pi }{ 2mc } \, \chi,

which is of order v/c - thus at typical energies and velocities, the bottom components of the Dirac spinor in the standard representation are much suppressed in comparison to the top components. Substituting this expression into the first equation gives, after some rearrangement,

    \left( i\hbar \, \partial_t - mc^2 \right) \chi = \frac{ \left( \boldsymbol\sigma \cdot \boldsymbol\pi \right)^2 }{2m} \, \chi + e\Phi \, \chi.

The operator on the left represents the particle energy reduced by its rest energy, which is just the classical energy, so we recover Pauli's theory if we identify his 2-spinor with the top components of the Dirac spinor in the non-relativistic approximation.
A further approximation gives the Schrödinger equation as the limit of the Pauli theory. Thus, the Schrödinger equation may be seen as the far non-relativistic approximation of the Dirac equation, when one may neglect spin and work only at low energies and velocities. This also was a great triumph for the new equation, as it traced the mysterious i that appears in it, and the necessity of a complex wave function, back to the geometry of spacetime through the Dirac algebra. It also highlights why the Schrödinger equation, although superficially in the form of a diffusion equation, actually represents the propagation of waves.

It should be strongly emphasized that this separation of the Dirac spinor into large and small components depends explicitly on a low-energy approximation. The entire Dirac spinor represents an irreducible whole, and the components we have just neglected to arrive at the Pauli theory will bring in new phenomena in the relativistic regime - antimatter and the idea of creation and annihilation of particles.

Comparison with the Weyl theory

In the limit m → 0, the Dirac equation reduces to the Weyl equation, which describes relativistic massless spin-1/2 particles.[5]

Dirac Lagrangian

Both the Dirac equation and the adjoint Dirac equation can be obtained from varying the action with a specific Lagrangian density that is given by:

    \mathcal{L} = i\hbar c \, \bar\psi \gamma^\mu \partial_\mu \psi - mc^2 \, \bar\psi \psi.

If one varies this with respect to ψ, one gets the adjoint Dirac equation. Meanwhile, if one varies it with respect to ψ̄, one gets the Dirac equation.

Physical interpretation

Identification of observables

The critical physical question in a quantum theory is this: what are the physically observable quantities defined by the theory? According to the postulates of quantum mechanics, such quantities are defined by Hermitian operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schrödinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. If we wish to maintain this interpretation on passing to the Dirac theory, we must take the Hamiltonian to be

    H = c \, \alpha_k \left( p_k - q A_k \right) + \beta mc^2 + q A^0,

where, as always, there is an implied summation over the twice-repeated index k = 1, 2, 3. This looks promising, because we see by inspection the rest energy of the particle and, in case A = 0, the energy of a charge placed in an electric potential qA0. What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is

    H = c \sqrt{ \left( \mathbf{p} - q \mathbf{A} \right)^2 + m^2 c^2 } + q A^0.

Thus, the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and we must take great care to correctly identify what is observable in this theory. Much of the apparently paradoxical behavior implied by the Dirac equation amounts to a misidentification of these observables.

Hole theory

The negative E solutions to the equation are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, we cannot simply ignore them, for once we include the interaction between the electron and the electromagnetic field, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy. Real electrons obviously do not behave in this way, or they would disappear by emitting energy in the form of photons.
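The negative-energy branch is easy to exhibit numerically. A minimal sketch of mine - not from the article, in natural units ħ = c = 1 with an arbitrary momentum - diagonalizes the free Dirac Hamiltonian H = α·p + βm:

    import numpy as np

    s = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
    I2, Z2 = np.eye(2), np.zeros((2, 2), complex)

    # Dirac-Pauli representation: alpha_k = [[0, s_k], [s_k, 0]], beta = diag(I, -I)
    alpha = [np.block([[Z2, sk], [sk, Z2]]) for sk in s]
    beta = np.block([[I2, Z2], [Z2, -I2]])

    m = 1.0                          # rest mass (natural units, assumed)
    p = np.array([0.3, 0.0, 0.4])    # arbitrary momentum

    H = sum(pi * ai for pi, ai in zip(p, alpha)) + m * beta
    print(np.linalg.eigvalsh(H))     # two eigenvalues at -sqrt(p^2+m^2), two at +
    print(np.sqrt(p @ p + m * m))    # 1.1180..., matching |E|
    # There is no lower bound as |p| grows: the negative branch runs off to
    # minus infinity, which is exactly the problem hole theory addresses.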
To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates. If an electron is forbidden from simultaneously occupying positive-energy and negative-energy eigenstates, then the feature known as Zitterbewegung, which arises from the interference of positive-energy and negative-energy states, would have to be considered to be an unphysical prediction of time-dependent Dirac theory. This conclusion may be inferred from the explanation of hole theory given in the preceding paragraph. Recent results have been published in Nature [R. Gerritsma, G. Kirchmair, F. Zaehringer, E. Solano, R. Blatt, and C. Roos, Nature 463, 68-71 (2010)] in which the Zitterbewegung feature was simulated in a trapped-ion experiment. This experiment impacts the hole interpretation if one infers that the physics-laboratory experiment is not merely a check on the mathematical correctness of a Dirac-equation solution but the measurement of a real effect whose detectability in electron physics is still beyond reach. Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate – called a hole – would behave like a positively charged particle. The hole possesses a positive energy since energy is required to create a particle–hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932. It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be canceled by an infinite positive "bare" energy and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive "jellium" background so that the net electric charge density of the vacuum is zero. In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive energy positron state and an unoccupied negative-energy electron state into an occupied positive energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it. In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively charged electron, though it is referred to as a "hole" rather than a "positron". The negative charge of the Fermi sea is balanced by the positively charged ionic lattice of the material. 
In quantum field theory

In quantum field theories such as quantum electrodynamics, the Dirac field is subject to a process of second quantization, which resolves some of the paradoxical features of the equation.

Other formulations

The Dirac equation can be formulated in a number of other ways.

As a differential equation in one real component

Generically (if a certain linear function of the electromagnetic field does not vanish identically), three out of four components of the spinor function in the Dirac equation can be algebraically eliminated, yielding an equivalent fourth-order partial differential equation for just one component. Furthermore, this remaining component can be made real by a gauge transform.[6]

Curved spacetime

This article has developed the Dirac equation in flat spacetime according to special relativity. It is possible to formulate the Dirac equation in curved spacetime.

The algebra of physical space

This article developed the Dirac equation using four-vectors and Schrödinger operators. The Dirac equation in the algebra of physical space uses a Clifford algebra over the real numbers, a type of geometric algebra.

The Dirac equation appears on the floor of Westminster Abbey on the plaque commemorating Paul Dirac's life, which was inaugurated on November 13, 1995.[7]

References

1. ^ P.W. Atkins (1974). Quanta: A handbook of concepts. Oxford University Press. p. 52. ISBN 0-19-855493-1.
2. ^ T. Hey, P. Walters (2009). The New Quantum Universe. Cambridge University Press. p. 228. ISBN 978-0-521-56457-1.
3. ^ Dirac, P.A.M. (1982) [1958]. Principles of Quantum Mechanics. International Series of Monographs on Physics (4th ed.). Oxford University Press. p. 255. ISBN 978-0-19-852011-5.
4. ^ See for example Brian Pendleton, Quantum Theory 2012/2013, section 4.3, The Dirac Equation.
5. ^ Tommy Ohlsson (2011). Relativistic Quantum Physics: From Advanced Quantum Mechanics to Introductory Quantum Field Theory. Cambridge University Press. p. 86. ISBN 978-1-139-50432-4.
6. ^ Akhmeteli, Andrey (2011). "One real function instead of the Dirac spinor function". Journal of Mathematical Physics. 52 (8): 082303. arXiv:1008.4828. doi:10.1063/1.3624336.
7. ^ Gisela Dirac-Wahrenburg. "Paul Dirac". Dirac.ch. Retrieved 2013-07-12.

Selected papers

• Dirac, P. A. M. (1928). "The Quantum Theory of the Electron". Proceedings of the Royal Society A. 117 (778): 610. doi:10.1098/rspa.1928.0023.
• Dirac, P. A. M. (1930). "A Theory of Electrons and Protons". Proceedings of the Royal Society A. 126 (801): 360. doi:10.1098/rspa.1930.0013.
• Anderson, Carl (1933). "The Positive Electron". Physical Review. 43 (6): 491. doi:10.1103/PhysRev.43.491.
• Frisch, R.; Stern, O. (1933). "Über die magnetische Ablenkung von Wasserstoffmolekülen und das magnetische Moment des Protons. I". Zeitschrift für Physik. 85: 4. doi:10.1007/BF01330773.
• M. Arminjon; F. Reifler (2013). "Equivalent forms of Dirac equations in curved spacetimes and generalized de Broglie relations". Brazilian Journal of Physics. 43 (1–2): 64–77. arXiv:1103.3201. doi:10.1007/s13538-012-0111-0.
• Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). Plenum.
• Bjorken, J. D.; Drell, S.
Relativistic Quantum Mechanics.
• Thaller, B. (1992). The Dirac Equation. Texts and Monographs in Physics. Springer.
• Griffiths, D.J. (2008). Introduction to Elementary Particles (2nd ed.). Wiley-VCH. ISBN 978-3-527-40601-2.

External links

• The Dirac Equation at MathPages
• The Nature of the Dirac Equation, its solutions, and Spin
• Dirac equation for a spin ½ particle
• Pedagogic Aids to Quantum Field Theory (click on Chap. 4 for a step-by-small-step introduction to the Dirac equation, spinors, and relativistic spin/helicity operators)
• BBC Documentary Atom 3 The Illusion of Reality
Thursday, June 22, 2017

The Popular Standard View of stdQM

Philip Ball is a writer of popular science, and in his latest contribution in Aeon he once again propagates the standard view that quantum mechanics does not make sense, as vividly witnessed and acknowledged by all great physicists:

• Why, then, is it still so common to find talk of quantum mechanics defying logic and generally messing with reality?
• We might have to put some of the blame on the Danish physicist Niels Bohr. He was probably the deepest thinker about the meaning of quantum theory among its founding pioneers, and his intuitions were usually right.
• But during the 1920s and '30s, Bohr drove a lasting wedge between the quantum and classical worlds. They operate according to quite different principles, he said, and we simply have to accept that.

Ball then sets out to fix this major defect of modern physics, something all the great physicists failed to do:

• Now we have that theory. Not a complete one, mind you, and the partial version still doesn't make the apparent strangeness of quantum rules go away. But it does enable us to see why those rules lead to the world we experience; it allows us to move past the confounding either/or choice of Bohr's complementarity.
• The boundary between quantum and classical turns out not to be a chasm after all. A ball has a position, or a speed, or a mass. I can measure those things, and the things I measure are the properties of the ball. What more is there to say?

Yes, there is more to say, and that is what realQM says.

Monday, June 19, 2017

Restart of Icarus Simulation AB

My consulting company Icarus Simulation AB, together with Johan Jansson, is now being restructured to take on new challenges in computational simulation. Icarus Simulation also offers unified, reformed mathematics education from early school to advanced university level, combining formal and computational mathematics into a basic tool of the computer age with wide areas of application. Customers are welcome.

Saturday, June 3, 2017

realQM: Helium Ground State -2.9036 = Success

Computation with realQM in spherical coordinates with azimuthal symmetry, on a mesh with 200 points in the radial direction and 100 in the polar angle (on my iPad), gives the value -2.9036... (Hartree) for the ground state energy of Helium, in good agreement with the observed value and with benchmark computations using stdQM (here and here):

• Pekeris (1959): -2.903724376 (best stdQM/Hylleraas with 1078 parameters)
• Koki (2009): -2.9042 (in supposedly better agreement with observation).

realQM offers a new model of the atom, which has a physical meaning in classical continuum-mechanical terms and which is computable. The first test of realQM, beyond the one-electron Hydrogen atom where realQM coincides with the standard Schrödinger equation, is Helium with two electrons, and it seems that realQM passes this test successfully! The step from one to two is huge, while the step from two to many may be small: if realQM works for Helium, then...

Wikipedia gives the observed value −2.90338583(13) with reference:

The difference with Pekeris is significant: stdQM does not appear to fit better with observation than realQM... maybe less... Wikipedia handles the discrepancy by the following hand-waving:

PS1 The 1st ionisation energy as observed is supposed to be 0.903569881854.

PS2 Nakashima and Nakatsuji report -2.903 724 377 034 119 598 311 159 245 194 404 446 696 905 37 with 40 correct decimals, still different from the observed -2.903385...
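One quick consistency check on these numbers (my own illustration, not from the post): He+ is hydrogen-like with Z = 2, so its non-relativistic ground-state energy is exactly -2 Hartree, and the first ionisation energy then follows directly from the neutral-atom ground-state energy.

    # First ionisation energy of helium, in Hartree:
    # E_ion = E(He+) - E(He), with E(He+) = -Z^2/2 = -2 exactly
    # for the non-relativistic, infinite-nuclear-mass hydrogen-like ion.
    E_He_plus = -2.0

    ground_states = {
        "Pekeris (1959)": -2.903724376,
        "realQM (this post)": -2.9036,
        "observed (Wikipedia)": -2.90338583,
    }
    for name, E_He in ground_states.items():
        print(f"{name}: ionisation energy = {E_He_plus - E_He:.9f}")
    # Compare with the observed ionisation energy quoted in PS1,
    # 0.903569881854 Hartree, which includes corrections beyond the
    # non-relativistic fixed-nucleus model.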
Trump's Reason to Withdraw from the Paris Accord

When President Trump declared that the US will pull out from the Paris Climate Accord, he did not repeat his earlier analysis that CO2 climate alarmism is a hoax without scientific support. He could have done that on very good grounds, but he did not get into the question of whether CO2 emissions from human activity are a real threat to the planet or not, which some climate skeptics regret. Trump simply referred to the fact that even if all commitments of the Paris Accord were fully fulfilled, or more, the total effect according to the very dogmas of CO2 climate alarmism would be at most a 0.2 C reduction of global warming at the end of the century, that is, zero effect. His logic was that it would be immoral to deliberately deny poor people access to cheap fossil fuel and keep them in poverty, if the effect on climate would be zero.

This shows the true dilemma of climate alarmism:

If there is a real threat, then the planned measures to avoid catastrophe are totally inadequate and thus meaningless, and then immoral by causing human suffering.

If there is no real threat, then the planned measures are even more meaningless and immoral.

This dilemma is covered up by mainstream media selling climate alarmism, but it comes out in the true face of climate alarmism as expressed by Joachim Schellnhuber, climate advisor to Merkel and the Pope, asking for a "great transformation" of human civilisation. This is something completely different from buying an electric car. Think of that.

PS When Scott Pruitt, as new Director of the EPA and, together with Myron Ebell, chief architect of Trump's CO2 agenda and the decision to withdraw from the Paris Accord, was asked if he knew what Trump was "thinking about climate", Pruitt responded that frankly he did not know and that he had not discussed this question with Trump. To Pruitt and Ebell (and to the world) it is enough to know that their agenda is supported by Trump. This fits with Trump's decision to refrain from repeating his claim that climate alarmism is a "hoax" (which is impossible to prove), because the Paris Accord is meaningless, "hoax" or not.
Friday, April 08, 2016

Quantum critical dark matter and tunneling in quantum chemistry

The quantum revolution, which started from biology, has started to infect also chemistry. An interesting article titled "Exotic quantum effects can govern the chemistry around us" tells about the evidence that quantum tunnelling takes place in chemical reactions even at temperatures above the boiling point of water. This is not easy to explain in the standard quantum theory framework. No one except me has the courage to utter aloud the words "non-standard value of Planck constant". This is perfectly understandable, since at this moment these words would still mean instantaneous academic execution.

Quantum tunneling means that a quantum particle is able to move through a classically forbidden region, where its momentum would be imaginary. The tunnelling probability can be estimated by solving the Schrödinger equation assuming that a free particle described as a wave arrives from the other side of the barrier and is partially reflected and partially transmitted. The tunneling probability is proportional to exp(-2∫K dx), where k = iK is the wave vector in the forbidden region - imaginary because the kinetic energy T = p²/2m of the particle equals T = E - V and is negative there, so the momentum p is imaginary, as is the wave vector k = iK = p/ℏ. The transmission/tunnelling probability decreases exponentially with the height and width of the barrier. Hence tunnelling should be extremely improbable at macroscopic and even nano-scales. The belief has been that this is true also in chemistry, especially at high temperatures, where quantum coherence lengths are expected to be short. Experiments have forced us to challenge this belief.

In the TGD framework there is a hierarchy of phases of ordinary matter with Planck constant given by h_eff = n×h. The exponent in the tunneling probability is proportional to 1/ℏ. If ℏ is large, the tunnelling probability increases, since the damping exponential is near unity. Tunneling becomes possible at scales which are by a factor h_eff/h = n longer than usual. At the microscopic level - in the sense of TGD space-time - the tunnelling would occur along magnetic flux tubes. This could explain the claimed tunneling effects in chemistry. In biochemistry these effects would be of special importance.

In the TGD framework, non-standard values of Planck constant are associated with quantum criticality, and there is experimental evidence for quantum criticality in the biochemistry of proteins (see also this). In the TGD framework, quantum criticality is the basic postulate about quantum dynamics in all length scales, and it makes TGD unique, since the fundamental coupling strength is analogous to a critical temperature and therefore has a discrete spectrum.

A physics student reading this has probably already noticed that diffraction is another fundamental quantum effect. By a naive dimensional estimate, the sizes of diffraction spots should scale up by h_eff. This might provide a second manner to detect the presence of large-h_eff photons and also other particles such as electrons. Dark variants of particles would not be directly observable but might induce effects in ordinary matter, making the scaled-up diffraction spots visible. For instance, could our visual experience provide some support for large-h_eff diffraction? The transformation of dark photons to biophotons might make this possible.

P.S. Large-h_eff quantum tunnelling could provide one further mechanism for cold fusion.
The tunnelling probability for overcoming the Coulomb wall separating an incoming charged nucleus from the target nucleus is extremely small. If the value of Planck constant is scaled up, the probability increases by the above mechanism. Therefore TGD allows one to consider at least 3 different mechanisms for cold fusion; all of them would rely on the hierarchy of Planck constants. For a summary of earlier postings see Latest progress in TGD.
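To make the scaling argument concrete, here is a minimal sketch (my own illustration; the barrier height and width are arbitrary assumptions) of the WKB factor exp(-2∫K dx) for a rectangular barrier, evaluated for a few values of h_eff = n·h:

    import numpy as np

    hbar = 1.0545718e-34    # J*s
    m = 9.109e-31           # electron mass, kg
    eV = 1.602e-19          # J

    V_minus_E = 1.0 * eV    # barrier height above the particle energy (assumed)
    L = 1e-9                # barrier width, 1 nm (assumed)

    for n in (1, 2, 4, 8):
        heff = n * hbar
        K = np.sqrt(2 * m * V_minus_E) / heff   # imaginary wave vector magnitude
        T = np.exp(-2 * K * L)                  # WKB transmission factor
        print(f"n = {n}: T ~ {T:.2e}")
    # Scaling hbar -> n*hbar divides the exponent by n, so the tunnelling
    # probability rises steeply with n (from ~4e-5 at n=1 to ~0.3 at n=8 here).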
Prof. Heikki Hyötyniemi
AS-74.4192 Elementary Cybernetics
Lecture 3: Towards Modeling of Emergence
Helsinki University of Technology, 6.2.2009
(v.2009.04.10, only a rough machine translation, not cleaned yet!)

[0:00 / 1] Welcome once again -- the subject this time is emergence. We need the courage to take the bull by the horns, because unless we somehow extract the idea of emergence, we cannot do modeling of complex systems.

[0:22 / 2] There is a bit of a problem here: we are in something like Alice's Wonderland -- if you ask the Cheshire Cat "Which direction should I go?", the Cat asks "Where do you want to get to?", and if the answer is "I don't really know", then the Cat says "Then it doesn't matter which way you go" -- any direction will do. This Escher picture fits the situation quite well: if you climb up some stairs and think you are reaching agreement, you suddenly notice that you are at the bottom of the stairs again, and it continues -- typically in such a way that you come back to square one, going around in a cycle of sorts. Well, during these lectures the aim is to find a coherent way forward, so that we pile up understanding on top of the old consensus. The goal is that even though we have a very holistic problem, the tools for dealing with it will be reductionist -- because the only tools we have are reductionist. To begin with, we can look at this modeling process from the top, at the way these complex processes have traditionally been approached. This is in part the same repetition as what we went through last time.

[2:03 / 3] A typical approach is this -- pushing up from the bottom. There are, for example, simple components that are known, with simple models. One then gathers them together -- as if building a house of bricks -- and hopes that when enough bricks are put together, a house of sorts emerges. A typical example is growth models of this sort. One starts with exponential growth, and when it is noticed that this is not sufficient, one perhaps adopts a logistic model, or the Monod model. Models of that kind result. This publication is an example: a bacterial model in which the behavior of the substrate is modeled, along with the formation of acid and alcohol. But the problem in these cases is that even when the basic structure has been worked out, there is a countless number of free parameters that must be fixed. You need a large amount of data to be able to pin the parameters down. If you have so many experiments on a complex system that you can tie down all of its parameters, you are already very familiar with the system's behavior -- and then you no longer get much joy out of the model. This is like a fiddler's paradise: it is easy to invent new terms, such as nonlinear terms, for the model, and one can always make a new publication out of it. The problem, of course, is that the models are very difficult to analyze, because they are typically highly nonlinear.
One could almost say that they are hardly even usable.

[4:35 / 4] Here is a concrete example of how such assumptions can have a harmful effect on the modeled situation. Suppose we want to model the interaction of grass and hares. We build a model for the grass in which it is just a pure integrator -- the grass grows all the time. There is a constant growth factor, which makes the amount of grass increase steadily. And then here is a model for the hares, connected back in a positive loop -- meaning that the more hares there are, the faster their number grows. And the more hares there are, the more grass gets eaten. When we simulate this, we find that the behavior of the model is absolutely absurd. The grass starts off rising quite correctly, but the exponential growth of the hares is so strong that on this scale -- we have tens of thousands of them -- the exponential growth eats all the grass, and the linear model even leads to negative grass biomass. Clearly this must be corrected in some way.

[6:02 / 5] So we add a Monod term to the model, which reduces the growth when there is not enough food. Now we see that the behavior begins to resemble -- in some way -- meaningful population behavior. A cycle of this sort appears. The problem is that the grass still keeps hitting the zero level; the grass biomass is sometimes zero. That certainly has to be corrected for the model to be credible.

[6:50 / 6] So we put in a restriction of this sort -- the grass is bounded from below at zero. What happens then is that when the grass decreases to zero, the number of hares settles to a constant value. But this is still physically insane, because the Monod model does not take into account that if the food really goes to zero, the model claims the number of hares stays constant thereafter.

[7:25 / 7] That will not do either, so we extend the model by adopting a logistic model. We note that this begins to behave more rationally: the amount of grass settles to a reasonable level, and the number of hares settles as well. But we see that everything is still quite off, so we should now start matching the parameters to observations. In short, this is an endless task, tuning the model this way, if we proceed bottom-up -- the way the problem is traditionally approached, engineer-style.

[8:18 / 8] The example just given was very simple, but when you take a sufficiently large model, it starts to look credible. And models of this kind are indeed used quite widely. Take, for example, the Forrester model, on which, among other things, the report written for the Club of Rome was based at the time. When this Forrester block model was simulated, it was found that by the year 2000 we would have messed up the whole world -- all raw materials exhausted, oil exhausted, a kind of global catastrophe. In that model the world was built exactly the way I have just described: one states, for example, that natural resources are constantly consumed, so the stock of natural resources decreases, the number of people grows, and so on. But the problem with hand-coded models of this kind is typically that the models are pretty much exactly what they were built to model.
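As an illustration of the model family just walked through -- not the lecturer's actual code; the parameter values are arbitrary choices of mine -- here is a minimal simulation of the grass-hare system with logistic grass growth and a Monod-type grazing term:

    import numpy as np

    def simulate(T=200.0, dt=0.01):
        g, h = 1.0, 0.1   # grass biomass and hare population (arbitrary units)
        # Assumed parameters: growth rate r, carrying capacity K, grazing
        # rate a, Monod half-saturation k, hare yield b, hare death rate d.
        r, K, a, k, b, d = 1.0, 10.0, 1.0, 1.0, 0.5, 0.2
        traj = []
        for _ in range(int(T / dt)):
            grazing = a * h * g / (k + g)         # Monod term: -> 0 as g -> 0
            dg = r * g * (1 - g / K) - grazing    # logistic growth minus grazing
            dh = b * grazing - d * h              # hares grow from food, else die
            g = max(g + dt * dg, 0.0)             # biomass cannot go negative
            h = max(h + dt * dh, 0.0)
            traj.append((g, h))
        return np.array(traj)

    print(simulate()[-1])   # late-time grass and hare levels

Removing the logistic bracket and the max(...) clamps reproduces the absurd behavior described above, including negative biomass; and even in this corrected form, each of the six parameters is a free knob that has to be tuned against data -- exactly the lecturer's point.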
As long as the results are not what was expected, the model is tweaked further. The end result is typically that the models cannot tell us much more than what the model builder assumed at the outset. Since the models are so complicated and there are so many free parameters, they will, quite frankly, always accommodate any prior assumption. The worst part, of course, is that the qualitative behavior depends on the parameters: if some feedback factor is too large, the model becomes unstable, while a smaller negative-feedback parameter stabilizes the system. So, are there other kinds of approaches?

[10:47 / 9] The ambitious goal would be to find methods for approaching the problems head on -- rather than collecting physical models bottom-up and building upward toward the behavior, one tries to reach the behavior directly. That is, starting from chaos, the aim is to reach order, that is, a model. A moment ago we went in the other direction: we started from a simple model and ended in chaos.

[11:30 / 10] Here is a simple example of how, in principle, a holistic or systemic approach of this sort can be surprisingly effective. The example is the case of a refrigerator -- a made-up example. Think of a 1000-watt refrigerator with an efficiency of 30%, connected to the mains, but with the refrigerator door left open. The refrigerator is in a room which is insulated from its environment. What happens to the temperature of the room, given that the refrigerator presumably pushes cold air into the room all the time? Any suggestions? Does it warm up or cool down? [Audience: I would suggest that it warms up.] Yes -- and this is the first level of systemic thinking of this kind: the refrigerator, as a whole, is a heat engine, so it produces more heat than cold. But to exploit the overall system view -- the whole systemic approach -- we should look at the entire room, with its insulation, as a system into which 1000 watts of power flows as heat. Actually, that solves the whole problem. It is quite irrelevant what is done with the 1000 watts inside the room -- there will always be 1000 watts dissipated there at every moment. That is, the room warms up at the 1000-watt power level, regardless of the efficiency of the refrigerator or anything else. Very strong results can in principle be achieved if we have a suitably simplified view of the system. The sad thing, of course -- as was found last time -- is that cybernetic systems are in no way simple systems. For example, the hypothesis of cutting the system out of its environment and insulating it is never true: typically one has to assume that these are open systems, so that the accumulation of entropy can be explained. We have to find some kind of compromise between these approaches -- the one departing from the very bottom, and the one examining the system entirely from the outside -- some way to find a link between them.

[14:44 / 11] One approach is coarse-graining, or one could speak of granularization. Take a complex system, and pick from it -- more or less by hand -- the key quantities. Suppose the behavior of the complex system can be described by a small number of key variables. Perhaps this is a sufficiently intuitive example: even with only a few key features of a person, one can connect that person to a painting.
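Returning to the refrigerator example above: the systemic view reduces the question to a one-line energy balance. A back-of-the-envelope sketch (the room size and air properties are assumptions of mine, not from the lecture):

    # All electrical power drawn from the socket ends up as heat in the
    # insulated room, regardless of what the refrigerator does internally.
    P = 1000.0        # electrical power, W
    rho_air = 1.2     # density of air, kg/m^3 (assumed)
    c_air = 1005.0    # specific heat of air, J/(kg*K) (assumed)
    V_room = 50.0     # room volume, m^3 (assumed)

    m_air = rho_air * V_room
    dT_dt = P / (m_air * c_air)   # warming rate, K/s
    print(f"warming rate ~ {dT_dt * 3600:.0f} K per hour")   # ~60 K/h
    # Real rooms warm far more slowly, since walls and furniture also absorb
    # heat -- but the 1000 W input itself is fixed by the system view.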
Those who lived through the 1970s will know immediately that this is Kekkonen, even though the image is very heavily coarse-grained.

[15:45 / 12] In this coarse-graining there is always the risk that if one goes too far, to descriptions that are too simple, the description no longer corresponds to -- is no longer capable of describing, at any level -- the real world. This statue is a statue of Urho Kekkonen, standing in the cemetery, or churchyard, in Kajaani, and it is the artist's vision that a screw cap depicts Kekkonen best. Perhaps it is a political vision. But really, in general terms, the idea of holism -- the idea in holistic modeling -- is to get more out of the whole than what is in the parts alone. But if this approach is carried out incorrectly, the hole is actually bigger than would be desirable, and we are left with only -- an abstraction.

[17:02 / 13] Well, to connect these lower-level models -- the only models we can build with our tools -- to this higher-level holistic thinking, this holistic intuition, we must at some level answer the question: what is emergence? Descriptions of complex systems always invoke this concept of emergence. A complex system is typically one in which some behavior emerges that the basic components do not have. And if we really want an ambitious approach to this problem of modeling complex systems, we must at some level take a position on what emergence is. So, during the lecture break you could consider what you think emergence is, and how you would approach the problem. The remaining lectures deal with this root problem: how we approach it, and how these approaches lead to a specific framework whose name is neocybernetics. But -- you have to remember that this is only one approach to these matters, and if you develop a different kind of approach, you can give it another name. These questions are still very much open.

[18:52 / 14] Complex systems research is really not a game whose rules have been fixed beforehand. You can choose the approach, the concepts, the methods and the application areas freely, and, at least for the moment, it is an interesting field to study because, as was noted last time, this field of research attracts exactly such very emergence-minded people. You have quite a lot of freedom there. But, so that we achieve something concrete, this course will now focus on neocybernetics. We will go into detail and see what follows from the premises, and when a system is built on these starting points, we can consider what kinds of properties are visible on top -- what the emergent properties then are. We are now defining the approaches and concepts, and it is certainly worth noting that originally these things were not so straightforward -- all of this is iterative, and only afterwards can it be interpreted so that the ideas can be summarized as some kind of coherent set of basic ideas. Originally, intuition was the driving force.

[20:45 / 15] There really are very many conflicting intuitions of this sort regarding complex systems, as was stated last time. And now a line has indeed been chosen, and we try to justify why that particular line -- why it is the line that has been selected.
Even last time, it was found that a very large number of complexity researchers focus on this surface form -- fractal researchers, for instance, are pretty much satisfied if a formula produces fractals and complex behavior, and ignore to a very large extent how such a formula could arise in the environment, how a single gene could implement such a function, for example. Wolfram's cellular automata are examples of this approach, too. And if you read a book with "emergence" in its title, you find comparisons of this sort: the brain, a complex system, is surprisingly similar in shape to the map of the city of Hamburg in the Middle Ages. Is there really a link between these two complex systems...? In a sense there is a link of a certain kind, because in both there are local actors -- neurons, or people -- who start to form structures of this sort, and this growth, in some sense, has a common underlying explanation. But instead of examining this result, this surface form, it is more fruitful to examine what functions these people, and these brain cells, carry out -- and why they have, after all, proven to be structures stable under evolution. It is better to explore these deep structures than the surface structures.

[23:19 / 16] On this course it is now stipulated that these deep structures are precisely the emergent features. This is not in itself a very revolutionary statement, but it serves to integrate these concepts together -- one might, after all, need to take a position on what the deep structures are and what the emergent features are; here they are defined so that both terms are used, but they point to the same thing. Let us now get concrete -- let us try to model, or at least consider, examples of concrete emergent behavior, to get some kind of intuition about what is common to them all. We certainly have examples of very specific emergent phenomena,

[24:32 / 17] and this next slide illustrates one area with emergence at various levels. Here, in short, is gas modeling at various levels of abstraction, or at a range of levels of coarse-graining. At the very lowest level, everything -- including the gas particles -- can be modeled with the quantum mechanical tools of elementary particles. That is the lowest level here. There, stochastic, random laws hold -- if one can even speak of laws at that point. And if you want a model of some large quantity of gas, this quantum level, the level of elementary particles, is not very useful, because we would have a tremendous number of Schrödinger equations, which we would have to simplify in some way. This simplification has, frankly, been carried out -- we can rely on its results. The conclusion is that at a sufficient level of coarse-graining, the atoms and the behavior of atoms -- their interactions, down to the quarks -- can be treated so that in an ideal gas model the atoms look like independent billiard balls. So we get a deterministic model of this sort, in which the atoms behave like balls colliding with each other. This is governed by Newton's mechanics, which rules the world of gases. This is significantly better -- if you have a large number of atoms, or ideal gas particles, this model is much more intuitive, user-friendly and more useful, all in all, than the quantum-level model, the elementary-particle model.
But when you have millions upon millions of particles, it becomes quite impossible to track all the collisions, and we are forced to live with examining the matter in some statistical way. It turns out, though, that in terms of the macroscopic phenomena, these emergent quantities -- such as temperature and pressure -- describe the state of the gas extremely well. We do not need to know the behavior of the individual molecules or particles; we need only know that everything we can actually measure -- the temperature and the pressure -- are certain statistical functions of those particles. We know, for example, that temperature is directly proportional to the average kinetic energy of the particles, which in turn is directly proportional to the average of the speed squared. But then we come to large volumes of gas -- so large that the particles no longer reach all parts of the tank with the same probability. If you have a macroscopically large tank of this kind, the environment's influence on the reservoir begins to differ from place to place. It follows that, for example, temperature differences within the gas start to become significant at different points. We begin to have to take convection into account and, where large quantities of gas are concerned, various turbulent phenomena. So this simple temperature-and-pressure model -- the assumption that these quantities are the same throughout the volume -- no longer holds. We have to look at every point separately. And then a statistical model of this sort enters again, because turbulence, for example, cannot properly be handled by anything other than a statistical model. And going on from there -- imagine a fully turbulent, formless mass of gas; one might think we could no longer do anything concrete, since even the statistical models give very fluid results. But it turns out that when we go up one more level, this fully mixed, fully turbulent gas tank begins to behave as an ideal mixer: we can assume that every point of the tank is alike -- when the gas is fully mixed, it has the same concentration and the same temperature at every point. We reach the point where we can once again model with lumped system components, that is, with a single concentration variable or temperature variable. Again we have a deterministic model of this sort. Well, now we are at the level where these models serve for designing a gas tank; but when we want to model something like an industrial plant with dozens of such gas tanks, or reservoirs, we notice that if we approach the problem in the traditional way -- building a separate model for each tank, which we can do under the ideal-mixing assumption -- then, once there are hundreds of these ideal-mixer models, the overall design with its hundred variables once again becomes something we cannot control: it is no longer possible to say which of the variables are really important, or what kinds of behavior this hundred-variable entity exhibits once the tanks are connected to one another in various ways.
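The temperature relation just described is easy to make concrete. Here is a minimal sketch of the standard kinetic-theory formula T = m⟨v²⟩/(3k_B); the particle mass and velocity spread are made-up illustrative numbers, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6.6e-26          # particle mass in kg (roughly an argon atom; illustrative)
k_B = 1.380649e-23   # Boltzmann constant in J/K
v = rng.normal(0.0, 400.0, size=(100_000, 3))   # random velocity components, m/s

mean_sq_speed = np.mean(np.sum(v**2, axis=1))   # <v^2>, averaged over all particles
T = m * mean_sq_speed / (3.0 * k_B)             # from (3/2) k_B T = (1/2) m <v^2>
print(f"emergent temperature ~ {T:.0f} K")
```

The point of the exercise is exactly the one made above: the directions of the individual velocities never enter; only a statistical function of the ensemble survives to the upper level.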
It is precisely in modern automation systems that this problem appears: even when every component can be modeled very precisely, the robustness properties, or qualitative features, of the overall system cannot really be determined without simulation or the like. This is the challenge for the engineers of our future. All the basic components underneath are under control; how the entity built from them is to be managed and understood -- that would be the next big challenge. The same challenge exists outside our field too: even if the behavior of one human being were understood in principle, the behavior of a whole population of humans cannot be derived from it -- it is another instance of this kind of emergent behavior. One may assume that if this line of development continues, a stochastic level follows a deterministic one, and a deterministic level again follows a stochastic one; so some further statistical level can be expected to be needed. This is really an example of a system in which emergent phenomena follow one another -- and note that even though the levels are truly emergent, they are not such that they refuse to reduce to the lower-level variables; it is merely far more convenient to use the upper level's own parameters and laws than to return every upper-level phenomenon to the lowest level of quantum mechanics. So this is nevertheless a concrete system in which we can in principle reduce the upper-level definitions back to the lower-level phenomena, and we can now consider in a little more detail how new features emerge in such systems. First, starting from the ground level here: going from one level to the next, the time scales grow. At the lowest level the phenomena are very fast, and they slow down the higher up we go. On the other hand, at the lowest level there is a huge number of elementary particles, and the number keeps decreasing as we go higher -- the number of variables shrinks. In a certain way this is exactly what abstraction does: a large number of variables is forgotten, and only some kind of cumulative variables are kept. Keep this in mind. [34:42 / 18] Here is a justification of why it is in a way logical that stochastic and deterministic levels alternate. One might think that if there were two deterministic levels in a row, they would in a sense collapse into one: the upper level's deterministic variables could be handled among the lower level's deterministic variables equally well. Similarly, if there were two stochastic levels in a row, the same stochastic model should apply to both. Making a bold generalization of this sort: since, as just noted, the time scales only ever get longer, we can think of this emergence as arriving at the stage where time at the lower level goes to infinity -- there is an infinite number of time steps, or an infinite number of particles colliding. So on the one hand an infinite number, on the other infinite time -- and we can think, with sufficient accuracy, that if the lower level has some statistically meaningful behavior, a stationary situation is reached. Terms of this sort -- stationarity and so forth -- will start popping up here.
On the other hand, one can also speak of ergodicity here, for anyone interested in the mathematical side, because in this way the ensemble average and the time average are tied together: the view that all the particles are also identical to one another, so that going to infinity along the time axis and along the ensemble axis of the state variables yields the same kind of behavior. So, what happens when the time axis is eliminated? [36:59 / 19] The intuition now put to use is that, in some sense, infinity and emergence are married to each other. If we write a formula of this sort -- integrating something from minus infinity up to the present moment -- and it yields a fixed, finite value -- meaning the signal is statistically meaningful and stationary -- then we can expect some definite value out of this expression. This E-operator: if you read it as the expectation operator, you will not go far wrong on this course; or it might be wiser at this stage to name it the emergence operator, in which case we avoid the terminological problem -- but it is nevertheless pretty much just the expectation that is at stake. In practice this infinity then has to be approximated in some way: we assume that a smaller data set already achieves this stationarity property, so that if the data is stationary, a smaller amount of data may stand in for an infinite amount. Although this approach is extremely simple, it has exactly this virtue: it is mathematically extremely compact, simple, and unique. If we define emergence to be this -- or, loosening up a bit, say that this is "poor man's emergence" -- then we have tools to move forward. Some intuitively correct features are already visible in this definition. If there is one tree in a forest, or a single noise sample, that is quite different from all the others -- a single sample of which there are no other instances, so that it is not statistically significant -- then in the expectation this lone tree disappears completely. It does not affect our model: we get our forest model regardless of it. Individual particles, or individual samples in time, do not mean anything. What matters is only whether some behavior has long-term correlations with something else -- whether it is repetitive in this way, visible in the past and, we hope, in the future, because we build our models at the present moment, hoping that knowledge of the past tells us something about the future. In this sense we must indeed require that the system is stationary and that its statistical properties are preserved into the future. [40:36 / 20] This is a new slide, not in the old slide sets; I want to attach a few small problems to it: is emergence really just averaging? Averaging is, at any rate, in some way the core of this notion of emergence. Take, for example, how the gas temperature relates to the underlying particles and their properties. One could say that temperature really is defined by the average kinetic energy in the tank: the average kinetic energy is proportional to the mean square speed, so we need not know the particles' directions of motion, only each one's scalar speed, its square, and the mean of all of these -- and that mean is directly proportional to the temperature.
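The "lone tree vanishes in the expectation" point can be seen in a couple of lines. A minimal sketch (the numbers are invented): the E-operator is approximated by a finite average over a stationary sample, and a single wildly deviant sample barely moves it.

```python
import numpy as np

rng = np.random.default_rng(1)
forest = rng.normal(10.0, 1.0, size=10_000)   # "heights" of ordinary trees
forest[0] = 1000.0                            # one statistically insignificant freak

# E-operator approximated by a finite average over (assumed) stationary data:
print(np.mean(forest))      # ~10.1 -- the lone outlier shifts the mean only slightly
print(np.mean(forest[1:]))  # ~10.0 -- essentially the same forest model
```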
So, in this sense, temperature is an emergent feature arising from below -- a feature of a kind that no individual particle possesses. But now, as we start to approach genuinely interesting complex systems, a somewhat newer approach is that we want to link to this emergence the interactions between the particles as well: not the properties of a single particle, but the mutual connections between two of them, and the expectation value of that. At its simplest, if we have particle i and particle j, each with a feature x, we want to calculate the expectation value of the product of the particles' properties. Those familiar with the mathematics will notice immediately that when these are combined, what we end up interested in, at the end of the day, is the covariance of the system's variables. We come back to this next time. The good thing here is that, as will be seen, this lets us try to preserve linearity in the models as far as possible: although this correlation is a nonlinear function of the variables, such quantities can be analyzed further -- a model can be built from them -- in linear terms. Next time we will see how that succeeds. [43:35 / 21] Here is yet another intuition that runs across the grain. Traditionally, models are built for individual actors and individual time points; now we explicitly do not want models of individuals. This has, of course, the bad side that, at least in principle, we cannot then predict the behavior of a single behaver, a single actor. That is understandable in itself -- after all, these actors have free will. But for a large number of them some laws can be found. And what is good about this approach is that quite a few of the fundamental problems of complex systems -- the problems with teeth, the ones that have caused debates of this kind -- can be bypassed in the same way. For example, in the theory of evolution there has been much discussion of whether it is the selfish gene that is really the actor to be modeled, or whether it is the individual, inside which these genes churn, that should be modeled. On the present view, the only rational level of examination is really the population, within which these genes play out. In other words, an assumption of this sort is taken as the starting point here. The other point -- a rather deep one against Darwin's starting point, for example -- is this: if one is interested only in the best, in the fittest surviving and reproducing, note that in this framework a very large part of the whole population in fact continues its genealogy, not only the best. Put the other way around: if only the very best continued the line, the whole population would die out very quickly. It is precisely the population that matters -- biological strength comes from diversity, from there being differences involved. Basic Darwinian theory cannot really tackle this effectively at all, because it is so tightly bound to the winner; it is a winner's theory. To take an example from quite another field: NP-hard problems can, as far as is known, be solved only in non-polynomial time, and a great deal of effort is devoted to finding the best possible solution. But consider that nature is in much the same situation: it faces an optimization problem of this kind, and it tries to find the best solution but does not find it.
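A minimal sketch of the quantity being pointed at here: the expectation of products E{x_i x_j} over observed samples, i.e. the (raw) covariance structure, computed for invented toy data in which five variables share one driving factor.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=(1000, 1))                                # one shared driving factor
X = z @ np.ones((1, 5)) + 0.5 * rng.normal(size=(1000, 5))    # five coupled variables

C = (X.T @ X) / X.shape[0]   # E{x_i x_j} approximated over the samples
print(np.round(C, 2))        # large off-diagonal entries reveal the couplings
```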
Nature ends up in some suboptimal situation -- one that is pretty good, reasonably good, but typically not the absolute best. When we start to build these cybernetic models, the model is likewise not the single best solution; rather there is a set of models, each of these submodels reflecting in its own way a good solution, one able to respond in its own way to the challenges of the environment. Out of these, one then tries to build a compact model covering more than one alternative. So cybernetic models are very much model templates. [47:47 / 22] Again we have an opposite intuition of this sort here. There are two options when modeling complex systems. Simon, in his book on the architecture of complexity, addresses exactly this: there are two choices -- either we look at figures (patterns), or at processes. We have now already decided to look at figures; but the processes are certainly very interesting, because when we examine these figures with our system-theoretic, or control-theoretic, methods, we will come back to the processes themselves -- through the back door, as it were, later -- and in the end, process philosophy is very close to what we will be doing. But taking some processes directly as the starting point is not really a good point of departure, as many other studies have found over the years. For example, one artificial-intelligence book that I read at the time notes at the outset that it interprets all these artificial methods within a process framework of this kind, an intelligent-agent framework -- which does illuminate the field in an appropriate way, entirely along those lines. [49:42 / 23] Here is a little motivation for why processes have figured so prominently. One justification is of course the computer, since everything a computer does consists of algorithmic processes of this kind; the comparison is easy to draw from there. On the other hand, in chaos theory everything is procedural in this sense. There are plenty of arguments of this kind. [50:23 / 24] But this course nevertheless tries to approach natural complexity with precisely the figures in mind, attempting to tie the processes and the figures together. Let us dwell on this slide for a moment. There are two axes here: first an axis of dimensional complexity of this sort, and then an axis of structural complexity. Roughly, one could say that structural complexity covers nonlinearity and structurally complex things of that kind. If you think of a linear system as sitting here -- simple of its kind, with perfectly well-known structure and behavior -- then the more complex nonlinearities a system has, the further out it lies on the structural complexity axis. One can argue that the natural environment is structurally very complex, and the physical models we build are structurally complex too, since in them the nonlinearities can be found and encoded; but compared with nature in the wild, such a physical model is dimensionally simple, because only a few variables are used in the attempt to capture the whole natural complexity. That is, we abstract away a terribly large part of this natural complexity -- we throw out a great many variables -- and get close to zero here on this axis: only a few variables go into the model that is built.
Well, when the model is sufficiently complex, we really have no option but to simulate it -- mathematical methods cannot say much about that degree of nonlinearity at all. In simulating it we come down a little on the structural-complexity axis, toward the simpler, but on the dimensional-complexity axis we move away from the origin, because we have to fix all the initial states and the rest: to simulate a system, all its free parameters must be committed. One tries to set them so that they match the natural system as well as possible. Once this model has been run, driven through a range of different situations, we get a large amount of data, which is typically always of the same shape and can be treated with a uniform methodology -- that is, the data is structurally simple. Typically it is dimensionally complicated: there is a great deal of it. At best, one can argue that if we can find figures in the data, we can simplify it -- reduce the variables -- and at the same time the structure is reduced as well. That is the goal: to arrive at a figure-level design of this kind. What these figures actually mean we will come back to later, but the idea is that the figure model is the one able to hold within itself, on the one hand, the behavior of the physical model, and on the other, the natural behavior. The data is what we get here, and it is what we model here once we have this figure model -- the measurements may also come directly from nature. In a moment we will see that this is, in certain respects, Kantian modeling, in the sense that we fix something in advance: in this case we fix the form of the figures to be picked out -- though the structure inside a figure is then more or less unambiguous -- we fix in advance how the incoming data is to be interpreted, what type of figures will be looked for in it. So those old philosophers are still valid, yes. And it is not only the human perceptual mechanism: our machinery faces the same constraints. When we want a machine to model something automatically, the same problems arise for it as for humans: in order to start modeling complex data, there must be some structure on which to build the model -- and here these figures will be that basic structure, our key components. [55:44 / 25] Here is another of these cross-grained intuitions. Traditionally it is thought that when a model is built we want to simplify -- to identify uniquely a single model of the complex system, as if seeking its truth: the system's behavior free of any interference or interactions; one seeks to find its core. But now -- first, we see, or know, that all our systems genuinely interact with their environment at all times, so we want models of the process of interaction with the environment rather than of the isolated system. An isolated system on its own is not interesting: an isolated hare, for example, will simply die; what matters is the hare population, which modifies its environment and interacts with it.
That is, instead of seeking in the data some truth that we can never reach -- and here is yet another cross-grained point against that philosophy: if it is truth we are after, we must settle for something, for these kinds of shadows of reality -- we pick out relevance instead, for relevance does show up in the data: we can see from the data the interactions and effects between variables, directly on the basis of the data itself. One can assume that everything interesting is apparent there -- that is, one must be able to pick the right variables for the system's behavior, in which case everything of interest is ultimately present in the data. This is the starting point: we say nothing about truth, we speak only of relevance in modeling, and thereby avoid these philosophical pitfalls. [58:09 / 26] If you recall the levels of the model where we previously had these ideal mixers, we can now approach the level of figures -- we come back to this in practice next time -- and say that this modeling can be approached with statistical multivariate methods of this kind; and in certain environments we can, where appropriate, find so-called sparse-coded features there, and even rename them: we reach the level of functionalities, or of symbolic concepts, one level up. It is worth remembering, though, that although the training exercises will now deal with principal component analysis and the like, next time, in the next session, we will find something better than forcing on nature whatever toolbox we happen to like: it will turn out that this multivariate analytics -- specifically principal component analysis and sparse coding -- emerges from the system itself. That is, it is not that we bring a toolbox whose requirements nature is then forced to fit, acting according to our prior assumptions. Our claim here is that we have a kind of Kantian-type compromise. Let this stand as a footnote only: if you have come across Kant's Critique of Pure Reason somewhere, one can say that this is in some sense comparable -- some kind of theory-drivenness is needed here, while on the other hand the theory must make room for the data. [1:00:31 / 27] Well, it was already noted that for this expectation-value operator to give something sensible, the data -- or the covariation of the data -- must be stationary, in the sense that the dependency relations toward the past and the dependency relations toward the future are assumed to remain the same. And for this stationarity to hold, the system must be stable in the broad sense. This does not mean stability in the sense that the system must always settle to a fixed point; it must be stable in the sense that it can respond -- that if disturbances arise in the environment... [cassette exchange] ...the system attempts to restore the dynamic balance that the disturbance has affected. Note that only in sufficiently stable conditions can an emergent phenomenon of this kind actually emerge: typically these emergent phenomena are very delicate, and if the conditions were too drastic, nothing would ever emerge there. Well, we return to this later.
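Since principal component analysis is named here as the workhorse of the exercises, here is a minimal sketch of it on invented data (50 observed variables driven by 2 latent ones); the eigenstructure of the sample covariance exposes the two relevant "figures".

```python
import numpy as np

rng = np.random.default_rng(3)
latent = rng.normal(size=(2000, 2))            # two hidden degrees of freedom
mixing = rng.normal(size=(2, 50))
X = latent @ mixing + 0.1 * rng.normal(size=(2000, 50))

Xc = X - X.mean(axis=0)                        # center the data
C = (Xc.T @ Xc) / (len(Xc) - 1)                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)           # principal axes and their variances
print(np.round(eigvals[-4:], 2))               # two dominant eigenvalues stand out
```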
One may of course ask whether it is wise to confine ourselves to stable systems; and indeed this course does not address all mathematically possible systems, or all potential mathematical models, but only the physically meaningful ones. One may note that all physically meaningful models are at some level stable: if they were unstable, they would have exploded long ago -- they would no longer exist. Or one can argue that some processes have indeed exploded, but their effects have since spread throughout the universe, so that here we see only the outcome of the explosion. In that sense a transient phenomenon is neither necessary nor useful to model: we have no data about its behavior; we see only the end result. The other possible outcome of unstable behavior of this sort is not explosion but extinction, dying out -- and just as we cannot build models out of dead animals, what is interesting is whatever has managed to survive to the present; that is what can perhaps give a genuine indication of how our systems might survive. [1:03:35 / 28] We return to this later, but it is worth remembering that static and dynamic balance are very different things, even though on the surface they look much the same. Typically, when thinking about balance, merely picturing static balances of this kind is not a strong enough frame of reference for modeling complex systems. In a dynamic balance, by contrast, things happen all the time beneath the apparently stable surface. Dynamic equilibrium is precisely a balance of tensions; what is interesting is that it is at every moment as if ready to collapse, unless something keeps shoring it up. As an extension of this concept of dynamic equilibrium, one can say that a kind of thermodynamic death is reached at the stage when the time derivatives of all the figures are zero. So here is something like a contradiction: we are going to concentrate on these non-static balances -- these neocybernetic systems will focus on dynamic equilibria -- and those turn into static equilibria; and when the first-order figure has been, as it were, gnawed to the bone into a static equilibrium, the focus shifts to the balance of the next figure, and so forth. The end result is suddenly something like thermodynamic equilibrium, which is where this cybernetic chain leads in the extreme limit -- this admits a very deep analysis, to which we return briefly in a while. Well, for such balance-seeking processes to be possible for the local actors, we have to interpret them as some kind of diffusion processes; in what follows we will speak of generalized diffusion processes. We can treat them pretty much with our standard models. The generalization means, though, that the variables may in a way also be information variables, not just physical variables such as concentrations and the like. And they may be multi-dimensional diffusion processes.
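A generalized diffusion process is not defined until later in the course; as a placeholder intuition, here is the simplest concrete diffusion, a 1-D random walk, showing the diffusive signature that the variance across walkers grows linearly with time (all numbers invented).

```python
import numpy as np

rng = np.random.default_rng(4)
steps = rng.choice([-1.0, 1.0], size=(5000, 1000))   # 5000 walkers, 1000 time steps
paths = np.cumsum(steps, axis=1)

# Variance across the ensemble grows ~linearly with time, the diffusive signature:
print(np.var(paths[:, 99]), np.var(paths[:, 999]))   # roughly 100 vs roughly 1000
```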
About the chaotic systems we cannot say much -- they are uninteresting -- but the periodic systems we can know completely, so they are not interesting in that sense either. Complex systems live on an extremely narrow interface between the uninteresting and the uninteresting. And it is a somewhat doubtful, somewhat vague place, this edge of chaos: how do we manage it -- how does a complex system stay in this interesting region? This is precisely the basic, fundamental problem of complexity theory: the place of complex systems is a very unstable kind of place, and nearly all approaches pop out on one side or the other -- into chaos, or onto the side of simple systems of this kind. How can natural systems remain on this interface as if automatically, all the time? Well, it will turn out that there is a certain kind of attractor here, and as cybernetic systems develop, the interface moves further toward the chaos side. The classics -- such as Schrödinger's book "What Is Life?" -- assume that life, or living systems, are characterized by being as far removed from equilibrium as possible: they are very unstable, and live in that way. Of this intuition -- that life is very far from equilibrium -- one might almost say that it is mistaken in one respect: Schrödinger was evidently thinking of a static equilibrium of this kind, for otherwise he could not have argued that equilibrium is death. The ability to stay in dynamic balance at this border of chaos and order is, rather, what is characteristic of life. Then there is Prigogine, who assumes that the more dissipative a system is -- the more energy it consumes from its environment, for instance -- the more lively it is; and the further from equilibrium it is, the more alive. Each of these illuminates quite a different point of view. [1:10:12 / 30] Well, here now is an oriental symbol of this sort. It deserves its place here in the following sense: even though the oriental notion of balance is commonly interpreted, pretty much, as a static equilibrium -- the thought that the human body, in this kind of oriental medicine, seeks balance -- the West does not think the matter through as thoroughly as it should be thought. It really is not a static balance; this oriental balance is of another sort -- mystical, some kind of dynamic equilibrium. In other words there is also something simmering in it -- one interpretation of the symbol is indeed the steam -- and a second interpretation is an organizing principle. It is a very profound idea at root, this oriental idea of balance, one that the Western interpretation of this kind has not managed to formalize. On the other hand, not even oriental philosophy manages to formalize it: whenever these matters are approached, they lead to logical paradoxes of this kind, to koans. [1:11:43 / 31] Well, here in a nutshell is what has come to be established -- what we will face, what will be discussed, in the rest of this course. These cybernetic structures have, in a certain way, stability structures of this kind: attractors at their root which in the long term are stable, dynamic constructs, even if at any given moment they appear very sensitive, fluid. Rather than speak of dynamic balance alone, we speak of a balance of balances -- that is, of higher-order balance.
It is multi-level precisely in this sense: a cybernetic system is multi-level, and emergence has many levels, many tiers. And a cybernetic model of this sort covers more of the relevant spectrum of behavior. [1:13:00 / 32] It is also a model spanning many nearby minima. In other words, whereas for these NP problems -- the traveling salesman problem, say -- one usually tries to find the single best solution, in the cybernetic framework one seeks instead some kind of picture of what is common to all the reasonably good models: not the optimal ones, but those close to their optima, and acceptable. And because the model spans these different alternatives, it can intuitively be brought close to Heraclitus's idea: when we model the river, we never model the same river, but the idea of the river. [1:14:09 / 33] Here again is the same finding: we model only physically meaningful, stable systems -- an extremely small class of all possible mathematical systems. We can justify this easily. Suppose there are n state variables, regarded as dynamic variables, and that the corresponding poles (modes) have been thrown into the complex plane at random. For the overall system to be stable, every one of those randomly thrown poles must land in the left half-plane; if even one pole, one mode, lies in the right half-plane, the overall system is unstable. From this one can derive, more or less intuitively, that 1/2^n is the probability that all the randomly thrown poles land in the left half-plane. How many of you see where that comes from? In any case it justifies the claim that, in the mathematical sense, the set of models we consider is extremely narrow. And yet all the systems of interest fit inside that extremely narrow range. In itself the restriction to stable systems is not so severe, since it turns out that these complex systems are themselves control systems: when they are coupled to an environment -- one that may originally have been unstable -- then, if the coupling is tight enough, they may bring it about that the whole environment becomes stable. Indeed it is characteristic of these cybernetic systems at root that an originally unstable system changes -- in becoming a cybernetic system -- into a stable one. [1:16:30 / 34] Connected with this is the following: when a cybernetic system has stabilized the signals arriving at the lower level -- has in practice pushed the signal variation toward heat death -- the system starts to focus on what is still left over, on a kind of higher-level equilibrium, and seeks to stabilize that. It follows that in the end we arrive at a still higher-order balance of this kind, which might be called the thermodynamic heat death. At the end of the course we return to how this picture of cybernetic systems is entirely consistent with thermodynamics: even though within cybernetic systems order typically grows -- lawfulness improves -- if you take the system together with its whole environment, then in this environment-plus-system totality the variables stabilize ever better, i.e. move closer to heat death, i.e. entropy grows. When the system boundaries are drawn in the right way, the end result is that, just like simple physical systems, these cybernetic systems too tend toward the entropy maximum.
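The 1/2^n figure can be checked by simulation. A minimal sketch, assuming "thrown at random" means the real parts are drawn from a distribution symmetric about zero (here, uniform):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5                       # number of poles (state variables)
trials = 200_000

# Real parts of n poles, symmetric about the imaginary axis:
re = rng.uniform(-1.0, 1.0, size=(trials, n))
p_stable = np.mean(np.all(re < 0.0, axis=1))   # all poles in the left half-plane
print(p_stable, 0.5 ** n)                      # both close to 1/2^5 = 0.03125
```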
[1:18:35 / 35] Well, one more intuition that I would like to mention each time: since these are, after all, models of tensions of this kind, the intuition of an elastic system -- a mechanical system -- describes the behavior of such systems in transient situations rather well: if they are deflected from balance, a force pulls them back toward the balance. Analogies can be found on the electrical side as well: it turns out that if two systems interact, then for the maximum power to be transferred between them without loss, the impedances between them must be matched. This may mean something to the electrical engineers; we return to it later. [1:19:37 / 36] Now, with suitable rumbling in the background, we come to the philosophies -- really far over here on the soft side of science. In order to move forward consistently in this modeling, we need a genuinely fundamental principle of this kind to support us. If we can agree on one -- here it is named the Pallas Athena hypothesis -- and if it is acceptable, we have a quite consistent path along which to proceed from here on. What does this hypothesis mean? It is in itself a very controversial idea; but consider in your own mind whether it is acceptable. You perhaps know the Gaia hypothesis; this is somewhat similar to it. Gaia is a goddess, and Lovelock and others have raised the idea that all the climatological and palaeontological processes -- all the processes on Earth, even volcanic eruptions and the rest -- could very easily wipe all life off the Earth. Yet it appears that the Earth -- or rather the Earth Mother, Gaia the goddess -- has steered all these processes to behave in such a way that they somehow support life and permit ever more complex forms of life here on Earth. Even though this Gaia, this Earth goddess, is somehow very unstable, mentally a little off balance, she has nevertheless made it possible that, through all the disasters, life still exists here. And this Gaia hypothesis can in fact lead to very effective, very powerful models of climatological or Earth-system phenomena of this kind, if one takes them to be restricted to phenomena that allow life on Earth: the whole potential range of behaviors may be restricted to only those that are not too drastic. You can read further about this Gaia hypothesis; it is a very questionable theory. And an equally questionable theory is this Pallas Athena hypothesis, which plays on the fact that if Gaia was a goddess, Pallas Athena was the goddess of science. Well, Wigner and Einstein in their turn, and all the other scientists in their turn, have wondered -- have at least remarked on -- how it could be possible that mathematics is so powerful that it can grapple with natural phenomena, that it is able to explain them. Einstein indeed once asked how it is possible that nature can be modeled at all. How is it possible that the all-around complexity of this world can be compressed into something so simple that we can truly understand it, and even build mathematical models of it? This is really a complete mystery. But the Pallas Athena hypothesis now assumes that this goddess protects us, and that science has not yet been exhausted -- that science may still progress further.
If this hypothesis can be accepted -- that science has not stopped on the large scale, that what remains is not merely the filling of gaps -- then we suddenly have very powerful tools at our disposal. We return to them in a moment. This is a bit like the parallel axiom in Euclidean geometry: we can adopt an axiom of this kind, or we can assume that it does not hold -- that, for example, nonlinearity is an essential part of all of nature, of all modeling of nature. Then we proceed along quite different paths to quite different results. But if we take the Pallas Athena hypothesis seriously, we end up in a very different world, one in which linearity pretty much dominates the phenomena. And this is explicitly what neocybernetics is: based on the idea that, for example, linearity dominates. [1:25:10 / 37] Well, before we go to this linearity, a second intuition that follows from the Pallas Athena hypothesis: a certain kind of determinism. If we measure data -- and data is, after all, the only thing we can observe or collect -- then for science to develop, scientific progress must ultimately rest on this data collection. And if models are to be built from the data, it must be more or less unambiguous how the data is to be interpreted; otherwise we risk exactly the post-modern ambiguity, where data can be interpreted in different ways and the interpretations scatter in different directions. For a single interpretation to be valid, at least in the broad sense, some kind of non-randomness is required of the systems. So, in a particular way, these cybernetic systems must have a kind of natural mirror image -- we go into the details later -- such that they reflect the surrounding world more or less uniquely. [1:26:59 / 38] Another rather intuitive idea in itself: because the system and the environment are strongly married to each other, and the environment consists of other systems, these models must be symmetric in a particular way -- what the model says about the environment and what it says about the system must be more or less mirror images of each other, interchangeable. [1:27:45 / 39] Well, then the truly most questionable, most strikingly objectionable hypothesis in this setting: the linearity assumption. Granted, we can always imagine that if the system is in balance -- if it has somehow regulated itself to a point that serves as its operating point with respect to the environment -- then it can be linearized around that point. But this is a deep matter in itself, because we must first get to that linearization point. Why is this linearity assumption emphasized so strongly -- and we will keep it as a guideline, staying linear as far as possible until the nonlinearities force themselves upon us -- why do this? The justification is that the category of nonlinear systems is so broad and so uncharted that no consistent unified theory will ever be found for it -- no single class of models that could in some way cover all possible nonlinearities. Only on the side of linear theory is that possible. [1:29:11 / 40] And yet it is a very fundamental starting point of these complexity studies that almost the first sentence always states that complex phenomena, or emergent phenomena, follow from nonlinearity at a lower level.
So there is a very fundamental difference here -- although if we look at the covariance and the like, it is after all a product of variables, i.e. a nonlinear function; yet it can still be modeled linearly. One further reason is this: if we are not interested in the processes themselves but only in the final outcomes -- in the result of the process settling into balance, into dynamic balance -- then analyzing that balance can be much easier than analyzing the process itself. So it is balance analysis, and there a linearity of this sort may suffice in itself. [1:30:16 / 41] Here are a few conclusions of this kind. Next time we will have to apply, on the one hand, this search for balance at every stage, and on the other the goal of linearity at every stage. There is a wide range of heuristic, theoretical, and practical arguments for them -- you can read about those -- but all in all, these starting points give a more or less unambiguous set of guidelines as to which direction to favor, which direction to take. [1:30:52 / 42] Here is one example of what can follow if there is nonlinearity in the system; I will run through it quickly. It is a surprising result that appears when two things are combined: nonlinearity and high dimensionality. In what follows we will content ourselves with high dimensionality plus linearity, because we know that even if the dimension is high, linearity will save us. Consider, then, a system of this kind, very close to a linear model: a discrete-time model in which the next state s(k+1) is a function of the previous state s(k). There is a matrix A of this sort, which performs the linear mapping to the new state, and then there is just this one nonlinearity f. If this f were not there, we would know completely how the system behaves, whatever the dimension of the problem. Now the nonlinearity is defined so that it merely cuts off the negative values: if s -- if an element of s -- is positive, it passes through as such; if it is negative, the output is simply zero. Would you not think that this system behaves more simply than the linear system, since no variable -- no element of the state -- can now go negative? Only the first quadrant, or hyperquadrant, of the state space is available, and within it the model is linear. Does that not seem to narrow the behavior? Nevertheless, it turns out that the behavior is far more complex than the linear model's. [1:32:53 / 43] It can be shown that, with an appropriate choice of the A matrix, this discrete-time model is capable of simulating any algorithm. The idea is that the state s is a snapshot of the program: it holds the values of the variables, and then the program counter. Let us take an example. [1:33:20 / 44] Here is a program of this kind, written in a very simple language -- but for that language there is a direct translator capable of turning it into Matlab code, into a matrix of this kind. So this expression here translates into this matrix here. You can see that first X is given some value and Y gets the value zero, and then the program counter enters this loop: as long as X is greater than zero, X is decreased by one, Y is toggled, and we go around again. In practice X counts down the whole time, until X is zero, at which point we pop out of the loop and the program halts.
And the point of all this is that whenever X is decremented, Y is also changed, in such a way that its value flips: a one becomes a zero and a zero becomes a one. It follows that, depending on the parity of X, the end value of Y is either one or zero. So this is, one could say, a generalized parity function. If you remember neural network theory, you know that XOR -- a kind of parity with just two arguments -- was already quite a test problem; here the value of x can be any integer, and the system always returns an outcome in Y, either a one or a zero, according to whether x is even or odd. The A matrix is fixed; s holds within itself the value of x as the initial state, and then the program counter. And when the process halts -- settles into balance, as it always does -- Y holds something: either zero or one. This has now been tested by varying the value of X and by varying the value of the program counter -- since s is just an arbitrary vector of this sort, we can iterate it through and see what it converges to. [1:35:48 / 45] Well, here is the end result. You see that the program counter's value runs along this axis, and the initial state -- the value of x -- along this axis. The classic parity function of this kind is defined only at the integer points, and you can see that if the program counter is initially one and x has an integer value, then the map is as it should be: zero goes to zero, one to one, two to zero, three to one, and so forth -- that is, Y, the final result, is one exactly when the initial value of x is odd. But we can also plot Y's final values for the other initial values, outside these well-defined points -- and what this generalized parity function is doing out there in the high-dimensional space, nobody can say. Well, this in itself was only an experiment of this sort, [1:37:03 / 46] but the real point is this: given this kind of computational power in this model framework, a Pandora's box has suddenly been opened. If an arbitrary algorithm can be implemented in matrix form in this way, we can cast the so-called universal machine in this form. A universal machine is one that takes as a parameter the code of another program, simulates it, and returns the value that the function or algorithm inside would return. So a universal machine of this kind has now, in practice, been implemented; and this universal machine can be used so that it has some algorithm inside it, and its results are then interpreted.
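The model class itself is easy to write down. A minimal sketch, assuming the composition s(k+1) = f(A s(k)) with f cutting off negative values (the lecture's exact arrangement of f and A, and the program-encoding A matrix behind the parity experiment, are not reproduced here); A is just a random matrix for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # an arbitrary linear map
s = rng.normal(size=n)
s_lin = s.copy()

for _ in range(200):
    s = np.maximum(A @ s, 0.0)   # the "simple" nonlinearity: negatives cut to zero
    s_lin = A @ s_lin            # the purely linear system, for comparison

print(np.linalg.norm(s), np.linalg.norm(s_lin))
```

Despite looking tamer than the linear iteration, this class is, as the lecture argues, rich enough that an appropriately chosen A can encode arbitrary programs.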
This is now a little involved, but the question is the same one that appears in Gödel's incompleteness theorem. It turns out that if one could construct an algorithm able to say something about the system on the previous slide -- about the A matrix -- then a contradiction follows. That is: if there were an algorithm able to say of the system whether it will ever halt on an arbitrary input, we could feed that very reasoning algorithm to this system, and the system can be arranged so that if the algorithm says of it that it is going to halt, the system enters an eternal loop and never halts; and if the algorithm says that it does not halt, it halts abruptly. These are the halting theorems, and so forth. The upshot, at any rate -- [1:39:32 / 48] you can look at the details in the report -- is that the system here is such that even if the system theoreticians of the future devoted most of their time to this analysis, there will never be a method able to say, for every input, whether this system is stable or not; that remains the open question. So: a simple nonlinearity of the kind we examined, once it has a sufficient number of dimensions -- this is a more than 300-dimensional system, meaning that some three hundred dimensions suffice to implement this universal machine -- and its behavior is qualitatively out of reach: nothing substantial can be said about this system any more, since in this framework all the algorithms reduce to that halting problem. [1:40:33 / 49] Well, here is the modeling strategy that will be carried out during the course. [1:40:41 / 50] And here, in short, is the idea: we set out as if circling the situation, without really knowing where the stairs lead; but if we move according to these signposts, we step into the dark one step at a time -- we do not know in advance where it takes us, yet we still advance consistently, step by step. At the end of the course it is perhaps worth looking at these slide sets again, in the sense that they may by then say more than was promised now. And all learning -- at least cybernetic learning -- is at root iterative learning of this kind: new information piles up into a growing consensus. Well, thank you. [1:41:37 / -]
Maybe quantum strangeness, which produces everyday normality, is just a byproduct of a universe that has been designed so we can exist.

We then take another derivative (working in units where \hbar = 1):

\partial_t \langle xp \rangle = \frac{1}{i}\,\langle [xp, H] \rangle = \frac{1}{2im}\,\langle [xp, p^2] \rangle, \qquad [xp, p^2] = 2ip^2,

\partial_t^2 \langle x^2 \rangle = \frac{2}{m}\,\langle p^2 \rangle, \qquad \partial_t \langle p^2 \rangle = \frac{1}{i}\,\langle [p^2, H] \rangle.

The vibrator shakes the string back and forth, creating a disturbance perpendicular to the string's length.

Although Bohr's theory was initially viewed with skepticism, it earned him the Nobel Prize in physics in 1922 and was eventually expanded by other physicists into quantum mechanics.

As in the classical case, amplitudes add and subtract from one another. Considering photons as particles, if we reduce the emission rate, photons go through the slits one at a time*, and after enough time has elapsed, an interference pattern is built up. (*Strictly speaking, according to quantum mechanics, a photon can go through BOTH slits at the same time.)

Then we unplug the typewriter, and nobody looks inside the box (to see whether the paper has T or B typed on it) for two weeks. To be analogous to the two weeks of eating or rotting for the cat, we can imagine using time-sensitive ink that will change color during the two weeks.

As for your motion, the waves would simply pass through each other, emerging from the meeting point only slightly changed. Needless to say, the exact motion at the meeting point would be complex.

The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg. [6] The uncertainty principle in its standard form describes how precisely we may measure the position and momentum of a particle at the same time -- if we increase the precision in measuring one quantity, we are forced to lose precision in measuring the other. [19] An alternative version of the uncertainty principle, [20] more in the spirit of an observer effect, [21] fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice. "Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature."
All these developments led to the establishment of quantum mechanics as a scientific theory, well grounded in experiment and formalism. The wavefunction describing any particle in quantum mechanics is a matter wave, whose form is computed through the use of the Schrödinger equation. Ergo, matter waves form the most central feature of quantum mechanics.

To name just a few questions: What is this Ψ thing anyway? What does it mean to say that "probability waves" are flying through space, interfering with each other, and suddenly "collapsing" into definitenesses? Presumably the photon is bumping into, and interacting with, all kinds of things on the way to the cardboard, but they apparently don't count as "measurements." This is called the collapse of the wave function. An example of a wave that could be a position function. (Actual position functions are normally much more concentrated.) In quantum mechanics, it is meaningless to make absolute statements such as "the particle is here."

From a discussion of this point: "The confirmation is that besides predicting well the probabilities of measurements, all the intermediate steps before the detector (passing through fields, beam-splitters, etc.) are well described by the concept of a complex wave possessing magnitude and phase (a concept already known to us from e.m. waves)." -- Sofia. And in reply: "While it is possible to describe most of quantum mechanics in terms of wave functions, it is not the most advisable way: it doesn't generalize easily to QFT, it is difficult to explain what spin, angular momentum and the like are, and it lures people into thinking of quantum objects as classical waves."

In doing so, quantum mechanics changes our understanding of nature in fundamental ways. While the classical laws of physics are deterministic, QM is probabilistic. Furthermore, in the special case where X = ±cT, we actually have I = 0 even though X, T ≠ 0 -- i.e., the "distance" between two well-separated events can be zero.

A plane wave in two dimensions in the x-y plane moves in the direction 45° counterclockwise from the x-axis, as shown in figure 2.20.

Notice the pattern in the harmonics -- they are all multiples of the fundamental frequency. The pattern is that a node is added for each additional harmonic, adding a half wave to the string. Since the speed is constant, this decrease in wavelength (L also constant) corresponds to an increase in frequency. Here is the pattern for open and closed pipes; notice the similarity to the harmonic pattern for the open pipe and for waves on a string above.
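The harmonic patterns just mentioned follow the standard textbook formulas; here is a minimal sketch with an arbitrary pipe length (both numbers illustrative).

```python
# Standard results: a string or open pipe supports all integer multiples of the
# fundamental; a pipe closed at one end supports odd multiples only.
v = 343.0   # speed of sound in air, m/s (room temperature)
L = 0.5     # pipe/string length in m (arbitrary example)

open_pipe   = [n * v / (2 * L) for n in (1, 2, 3, 4)]   # 343, 686, 1029, 1372 Hz
closed_pipe = [n * v / (4 * L) for n in (1, 3, 5, 7)]   # 171.5, 514.5, ... Hz
print(open_pipe, closed_pipe)
```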
The interpretation was provided by Max Born: he stated that the wave function for a hydrogen atom represents each of its physical states, and that it can be used to calculate the probability of finding the electron at a certain point in space.

Figure 3.8: Production of a virtual image by a negative lens. An image will be produced to the right of the lens only if do > f. If do < f, the lens is unable to converge the rays from the image to a point, as is seen in figure 3.7. However, in this case the backward extension of the rays converges at a point called a virtual image, which in the case of a positive lens is always farther away from the lens than the object.

Also, the higher the amplitude, the higher the energy of the wave. If the wave has a high amplitude, how must the particles in the wave be moving? For a wave to have a high amplitude, the particle has to be moving over a large distance (large being a relative term here; the distance may still be minuscule).

Following is an explanation of this animation: the animation is updated with a new frame at a regular frequency several times per second, so the time interval between frames is constant. The black disk is the object, and the black vertical and horizontal lines mark the x- and y-positions of the object respectively.

If you have a stationary state, and you superimpose another stationary state, the result is not stationary. More than one exponential is not stationary. So when you have this, you can have time dependence. Whenever you have a state that is not stationary, there is time dependence. So here is a simple but important calculation that should be done.

What aspects of the behaviour of light make it look like a wave? What aspects of the behaviour of light make it look like a particle? White light is partially reflected by the transparent material. Some of the light, however, is refracted into the transparent material and reflected back by the opaque material.
John Bell replaced the “arbitrary axioms” (Bell 1987, page 11) of Kochen-Specker and others by an assumption of locality, of no action-at-a-distance Mathematical Modeling of Wave read online tellfredericksburg.com. IIT JEE 1980 - 2009 Transverse wave – Here, the elements of the disturbed media of the travelling wave, move perpendicular to the direction of the wave’s propagation. A particle at the crest / trough has zero velocity. The distance between two consecutive crests / troughs is equal to the wavelength of the wave P(0)2 Euclidean (Quantum) Field Theory (Princeton Series in Physics) http://kaigohoshou.com/library/p-0-2-euclidean-quantum-field-theory-princeton-series-in-physics. Planck seems to have been unaware that by using Wien’s energy density calculation he was actually causing the infinitely variable measurement time to be fixed at a constant value of one second. He also seems to have been unaware that the fixed time variable was subsequently hidden in the final calculations of his action constant “h”: ” - and the fixed, hidden measurement time variable, “tm” Submarine Landslides and download epub lv.emischool.com. I have a BS in Physics and an MS in Electrical Engineering. My professional career was in Electrical Engineering with considerable time spent working with accelerometers, gyroscopes and flight dynamics (Physics related topics) while working on the Space Shuttle. I gave formal classroom lessons to technical co-workers periodically over a several year period Frontiers in Aeroacoustics read epub http://backazimuthpublishing.com/freebooks/frontiers-in-aeroacoustics. In other words, if we replace C with where h(t) is a smooth function with h(t0) = h(t1) = 0, and we calculate L( (You may hear that the physical trajectory minimizes the action, but this is only a necessary condition for minimum, and the physical trajectory is not always an actual minimum.) If we expand L in powers of and retain only the first-order term, the stationary condition becomes and if we integrate by parts, it is: (The boundary terms vanish because of the conditions h(t0) = h(t1) = 0, which served to fix the beginning and end of the trajectory.) If, now, we let the three components of h range over a complete set such as sin(n t/L), with L = t1-t0, we see that the only possibility is for The big advantage of Lagrangian mechanics is that it allows us relatively easily to find the equations of motion of an extended body, such as a string online. Rated 4.2/5 based on 802 customer reviews
00a667ed81992ed5
Tuesday, 30 September 2014

On This Day in Math - September 30

Big whirls have little whirls,
That feed on their velocity;
And little whirls have lesser whirls,
And so on to viscosity.
~Lewis Richardson

The 273rd day of the year; 273 K (to the nearest integer) is the freezing point of water, i.e., 0°C.
273 is prime, and 173 is also prime, and the concatenation 173273 is also prime. (How many pairs of three-digit primes can be concatenated to make a prime? A short search sketch appears below.)
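Not part of the original post: the concatenation question above invites a quick brute-force check. A minimal Python sketch, counting ordered pairs of three-digit primes whose concatenation is again prime:

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

primes3 = [p for p in range(100, 1000) if is_prime(p)]
# Count ordered pairs (p, q). Note p == q can never work: a three-digit
# prime concatenated with itself equals 1001 * p, which is composite.
count = sum(1 for p in primes3 for q in primes3
            if is_prime(int(str(p) + str(q))))
print(count)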
1717 Colin Maclaurin (1698-1746), age 19, was appointed to the Mathematics Chair at Marischal College, Aberdeen, Scotland. This remains the youngest age at which anyone has been elected to a chair (full professor) at a university. (Guinness) In 1725 he was made Professor at Edinburgh University on the recommendation of Newton. *VFR

1810 The University of Berlin opened. *VFR It is now called the Humboldt University of Berlin and is Berlin's oldest university. It was founded as the University of Berlin (Universität zu Berlin) by the liberal Prussian educational reformer and linguist Wilhelm von Humboldt, whose university model has strongly influenced other European and Western universities. *Wik

1890 In his desk notes Sir George Biddell Airy wrote of his disappointment on finding an error in his calculations of the Moon's motion: "I had made considerable advance ... in calculations on my favourite numerical lunar theory, when I discovered that, under the heavy pressure of unusual matters (two transits of Venus and some eclipses) I had committed a grievous error in the first stage of giving numerical value to my theory. My spirit in the work was broken, and I have never heartily proceeded with it since." *George Biddell Airy and Wilfrid Airy (ed.), Autobiography of Sir George Biddell Airy (1896), 350.

1893 Felix Klein visited the World's Fair in Chicago, then visited many colleges. On this day the New York Mathematical Society held a special meeting to honor him. *VFR

1921 William Hamilton Shortt patented the "hit-and-miss synchronizer" for his clocks. The Shortt-Synchronome free pendulum clock was a complex precision electromechanical pendulum clock invented in 1921 by Shortt, a British railway engineer, in collaboration with the horologist Frank Hope-Jones, and manufactured by the Synchronome Co., Ltd. of London, UK. They were the most accurate pendulum clocks ever commercially produced, and became the highest standard for timekeeping between the 1920s and the 1940s, after which mechanical clocks were superseded by quartz time standards. They were used worldwide in astronomical observatories, naval observatories, in scientific research, and as a primary standard for national time dissemination services. The Shortt was the first clock to be a more accurate timekeeper than the Earth itself; it was used in 1926 to detect tiny seasonal changes (nutation) in the Earth's rotation rate. *Wik

1939 An early manned rocket-powered flight was made by the German auto maker Fritz von Opel. His Sander RAK 1 was a glider powered by sixteen 50-pound-thrust rockets. In it, Opel made a successful flight of 75 seconds, covering almost 2 miles near Frankfurt-am-Main, Germany. This was his final foray as a rocket pioneer, having begun by making several test runs (some in secret) of rocket-propelled vehicles. He reached a speed of 238 km/h (148 mph) on the Avus track in Berlin on 23 May 1928 with the RAK 2. Subsequently, riding the RAK 3 on rails, he pushed the world speed record up to 254 km/h (158 mph). The first glider pilot to fly under rocket power was another German, Friedrich Staner, who flew about 3/4 mile on 11 Jun 1928. *TIS

2012 A Blue Moon, the second of two full moons in a single month. Depending on the observer's time zone, August 2012 had full moons on the 2nd and the 31st, or September had them on the 1st and the 30th. After this month you have to wait until July 31, 2015, for the next blue moon under this modern definition. (The Farmer's Almanac uses a different definition of "blue moon": the third full moon in a season of four full moons; by that older definition there was a blue moon on August 21, 2013.) *Wik

1550 Michael Maestlin (30 September 1550, Göppingen – 20 October 1631, Tübingen) was a German astronomer who was Kepler's teacher and who publicised the Copernican system. Perhaps his greatest achievement (other than being Kepler's teacher) is that he was the first to compute the orbit of a comet, although his method was not sound. He found, however, a Sun-centered orbit for the comet of 1577, which he claimed supported Copernicus's heliocentric system. He did show that the comet was further away than the Moon, which contradicted the accepted teachings of Aristotle. Although clearly believing in the system as proposed by Copernicus, he taught astronomy using his own textbook, which was based on Ptolemy's system. However, for the more advanced lectures he adopted the heliocentric approach; Kepler credited Mästlin with introducing him to Copernican ideas while he was a student at Tübingen (1589-94). *SAU The first known calculation of the reciprocal of the golden ratio as a decimal, "about 0.6180340", was written in 1597 by Maestlin in a letter to Kepler. He is also remembered for cataloguing the Pleiades cluster on 24 December 1579 (eleven stars in the cluster were recorded by Maestlin, and possibly as many as fourteen were observed) and for the occultation of Mars by Venus on 13 October 1590, which he observed at Heidelberg. *Wik

1715 Étienne Bonnot de Condillac (30 Sep 1715; 3 Aug 1780) French philosopher, psychologist, logician, economist, and the leading advocate in France of the ideas of John Locke (1632-1704). In his works La Logique (1780) and La Langue des calculs (1798), Condillac emphasized the importance of language in logical reasoning, stressing the need for a scientifically designed language and for mathematical calculation as its basis. He combined elements of Locke's theory of knowledge with the scientific methodology of Newton; all knowledge springs from the senses and the association of ideas. Condillac devoted careful attention to questions surrounding the origins and nature of language, and enhanced contemporary awareness of the importance of the use of language as a scientific instrument. *TIS
1775 Robert Adrain (30 September 1775 – 10 August 1843). Although born in Ireland, he was one of the first creative mathematicians to work in America. *VFR Adrain was appointed as a master at Princeton Academy and remained there until 1800, when the family moved to York in Pennsylvania. In York, Adrain became Principal of York County Academy. When the first mathematics journal in the United States, the Mathematical Correspondent, began publishing in 1804 under the editorship of George Baron, Adrain became one of its main contributors. One year later, in 1805, he moved again, this time to Reading, also in Pennsylvania, where he was appointed Principal of the Academy. After arriving in Reading, Adrain continued to publish in the Mathematical Correspondent and, in 1807, he became editor of the journal. One has to understand that publishing a mathematics journal in the United States at this time was not an easy task, since there were only two mathematicians capable of work of international standing in the whole country, namely Adrain and Nathaniel Bowditch. Despite these problems, Adrain decided to try publishing his own mathematics journal after he had edited only one volume of the Mathematical Correspondent, and in 1808 he began editing his journal, the Analyst or Mathematical Museum. With so few creative mathematicians in the United States the journal had little chance of success, and indeed it ceased publication after only one year. After the journal ceased publication, Adrain was appointed professor of mathematics at Queen's College (now Rutgers University), New Brunswick, where he worked from 1809 to 1813. Despite Queen's College trying its best to keep him there, Adrain moved to Columbia College in New York in 1813. He tried to restart his mathematical journal, the Analyst, in 1814, but only one part appeared. In 1825, while he was still on the staff at Columbia College, Adrain made another attempt at publishing a mathematical journal. Realising that the Analyst had been too high-powered for the mathematicians of the United States, he published the Mathematical Diary in 1825. This was a lower-level publication which continued under the editorship of James Ryan when Adrain left Columbia College in 1826. *SAU

1870 Jean-Baptiste Perrin (30 Sep 1870; 17 Apr 1942) was a French physicist who, in his studies of the Brownian motion of minute particles suspended in liquids, verified Albert Einstein's explanation of this phenomenon and thereby confirmed the atomic nature of matter. Using a gamboge emulsion, Perrin was able to determine by a new method one of the most important physical constants, Avogadro's number (the number of molecules of a substance in as many grams as its molecular weight indicates; for example, the number of molecules in two grams of hydrogen). The value obtained corresponded, within the limits of error, to that given by the kinetic theory of gases. For this achievement he was honoured with the Nobel Prize for Physics in 1926. *TIS

1882 Hans Wilhelm Geiger (30 Sep 1882; 24 Sep 1945) was a German physicist who introduced the Geiger counter, the first successful detector of individual alpha particles and other ionizing radiations. After earning his Ph.D. at the University of Erlangen in 1906, he collaborated at the University of Manchester with Ernest Rutherford. He used the first version of his particle counter, and other detectors, in experiments that led to the identification of the alpha particle as the nucleus of the helium atom and to Rutherford's statement (1912) that the nucleus occupies a very small volume in the atom. The Geiger-Müller counter (developed with Walther Müller) had improved durability, performance and sensitivity to detect not only alpha particles but also beta particles (electrons) and ionizing electromagnetic photons. Geiger returned to Germany in 1912 and continued to investigate cosmic rays, artificial radioactivity, and nuclear fission. *TIS

1883 Ernst David Hellinger (1883 - 1950) introduced a new type of integral: the Hellinger integral. Jointly with Hilbert he produced an important theory of forms. *SAU
1894 Dirk Jan Struik (September 30, 1894 – October 21, 2000) was a Dutch mathematician and Marxian theoretician who spent most of his life in the United States. In 1924, funded by a Rockefeller fellowship, Struik traveled to Rome to collaborate with the Italian mathematician Tullio Levi-Civita. It was in Rome that Struik first developed a keen interest in the history of mathematics. In 1925, thanks to an extension of his fellowship, Struik went to Göttingen to work with Richard Courant compiling Felix Klein's lectures on the history of 19th-century mathematics. He also started researching Renaissance mathematics at this time. Struik was a steadfast Marxist. Having joined the Communist Party of the Netherlands in 1919, he remained a Party member his entire life. When asked, upon the occasion of his 100th birthday, how he managed to pen peer-reviewed journal articles at such an advanced age, Struik replied blithely that he had the "3Ms" a man needs to sustain himself: Marriage (his wife, Saly Ruth Ramler, was not alive when he turned one hundred in 1994), Mathematics, and Marxism. It is therefore not surprising that Dirk suffered persecution during the McCarthyite era. He was accused of being a Soviet spy, a charge he vehemently denied. Invoking the First and Fifth Amendments of the U.S. Constitution, he refused to answer any of the 200 questions put forward to him during the HUAC hearing. He was suspended from teaching for five years (with full salary) by MIT in the 1950s. Struik was reinstated in 1956. He retired from MIT in 1960 as Professor Emeritus of Mathematics. Aside from purely academic work, Struik also helped found the Journal of Science and Society, a Marxian journal on the history, sociology and development of science. In 1950 Struik published his Lectures on Classical Differential Geometry. Struik's other major works include such classics as A Concise History of Mathematics, Yankee Science in the Making, The Birth of the Communist Manifesto, and A Source Book in Mathematics, 1200-1800, all of which are considered standard textbooks or references. Struik died October 21, 2000, 21 days after celebrating his 106th birthday. *Wik

1905 Sir Nevill F. Mott (30 Sep 1905; 8 Aug 1996) English physicist who shared (with P.W. Anderson and J.H. Van Vleck of the U.S.) the 1977 Nobel Prize for Physics for his independent researches on the magnetic and electrical properties of amorphous semiconductors. Whereas the electric properties of crystals are described by the Band Theory (which compares the conductivity of metals, semiconductors, and insulators), a famous exception is provided by nickel oxide. According to band theory, nickel oxide ought to be a metallic conductor but in reality is an insulator. Mott refined the theory to include electron-electron interaction and explained so-called Mott transitions, by which some metals become insulators as the electron density decreases by separating the atoms from each other in some convenient way. *TIS
1913 Samuel Eilenberg (September 30, 1913 – January 30, 1998) was a Polish and American mathematician born in Warsaw, Russian Empire (now in Poland), who died in New York City, USA, where he had spent much of his career as a professor at Columbia University. He earned his Ph.D. from the University of Warsaw in 1936; his thesis advisor was Karol Borsuk. His main interest was algebraic topology. He worked on the axiomatic treatment of homology theory with Norman Steenrod (the Eilenberg–Steenrod axioms bear their names), and on homological algebra with Saunders Mac Lane. In the process, Eilenberg and Mac Lane created category theory. Eilenberg was a member of Bourbaki and, with Henri Cartan, wrote the 1956 book Homological Algebra, which became a classic. Later in life he worked mainly in pure category theory, being one of the founders of the field. The Eilenberg swindle (or telescope) is a construction applying the telescoping cancellation idea to projective modules. Eilenberg also wrote an important book on automata theory. The X-machine, a form of automaton, was introduced by Eilenberg in 1974. *Wik

1916 Richard Kenneth Guy (born September 30, 1916, Nuneaton, Warwickshire - ) is a British mathematician, and Professor Emeritus in the Department of Mathematics at the University of Calgary. He is best known for co-authorship (with John Conway and Elwyn Berlekamp) of Winning Ways for Your Mathematical Plays and authorship of Unsolved Problems in Number Theory, but he has also published over 100 papers and books covering combinatorial game theory, number theory and graph theory. He is said to have developed the partially tongue-in-cheek "Strong Law of Small Numbers," which says there are not enough small integers available for the many tasks assigned to them, thus explaining many coincidences and patterns found among numerous cultures. Additionally, around 1959, Guy discovered a unistable polyhedron having only 19 faces; no such construct with fewer faces has yet been found. Guy also discovered the glider in Conway's Game of Life. Guy is also a notable figure in the field of chess endgame studies. He composed around 200 studies, and was co-inventor of the Guy-Blandford-Roycroft code for classifying studies. He also served as the endgame study editor for the British Chess Magazine from 1948 to 1951. Guy wrote four papers with Paul Erdős, giving him an Erdős number of 1, and he also solved one of Erdős's problems. His son, Michael Guy, is also a computer scientist and mathematician. *Wik

1918 Leslie Fox (30 September 1918 – 1 August 1992) was a British mathematician noted for his contribution to numerical analysis. *Wik

1953 Lewis Fry Richardson, FRS (11 October 1881 - 30 September 1953) was an English mathematician, physicist, meteorologist, psychologist and pacifist who pioneered modern mathematical techniques of weather forecasting, and the application of similar techniques to studying the causes of wars and how to prevent them. He is also noted for his pioneering work on fractals and a method for solving a system of linear equations known as modified Richardson iteration. *Wik
1985 Dr. Charles Francis Richter (26 Apr 1900 - 30 Sep 1985) was an American seismologist and inventor of the Richter scale, which measures earthquake magnitude; he developed it with his colleague Beno Gutenberg in the early 1930s. The scale assigns numerical ratings to the energy released by earthquakes. Richter used a seismograph (an instrument generally consisting of a constantly unwinding roll of paper, anchored to a fixed place, and a pendulum or magnet suspended with a marking device above the roll) to record actual earth motion during an earthquake. The scale takes into account the instrument's distance from the epicenter. Gutenberg suggested that the scale be logarithmic, so, for example, a quake of magnitude 7 would be ten times greater in measured wave amplitude than a 6 (and release roughly 31.6 times as much energy). *TIS

2014 Martin Lewis Perl (June 24, 1927 – September 30, 2014) was an American physicist who won the Nobel Prize in Physics in 1995 for his discovery of the tau lepton. He received his Ph.D. from Columbia University in 1955, where his thesis advisor was I. I. Rabi. Perl's thesis described measurements of the nuclear quadrupole moment of sodium, using the atomic beam resonance method for which Rabi had won the Nobel Prize in Physics in 1944. Following his Ph.D., Perl spent 8 years at the University of Michigan, where he worked on the physics of strong interactions, using bubble chambers and spark chambers to study the scattering of pions and later neutrons on protons. While at Michigan, Perl and Lawrence W. Jones served as co-advisors to Samuel C. C. Ting, who earned the Nobel Prize in Physics in 1976. Seeking a simpler interaction mechanism to study, Perl started to consider electron and muon interactions. He had the opportunity to start planning experimental work in this area when he moved in 1963 to the Stanford Linear Accelerator Center (SLAC), then being built in California. He was particularly interested in understanding the muon: why it should interact almost exactly like the electron but be 206.8 times heavier, and why it should decay through the route that it does. Perl chose to look for answers to these questions in experiments on high-energy charged leptons. In addition, he considered the possibility of finding a third generation of lepton through electron-positron collisions. He died after a heart attack at Stanford University Hospital on September 30, 2014, at the age of 87. *Wik

Credits: *CHM = Computer History Museum *FFF = Kane, Famous First Facts *NSEC = NASA Solar Eclipse Calendar *RMAT = The Renaissance Mathematicus, Thony Christie *SAU = St Andrews Univ. Math History *TIA = Today in Astronomy *TIS = Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Monday, 29 September 2014

On This Day in Math - September 29

~Enrico Fermi

The 272nd day of the year; 272 = 2^4 · 17, and is the sum of four consecutive primes (61 + 67 + 71 + 73). 272 is also a pronic or heteromecic number, the product of two consecutive factors, 16 × 17 (which makes it twice a triangular number). (A quick check of these facts appears below.)
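Again not part of the original post: a few lines of Python to confirm the claims about 272.

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

primes = [p for p in range(2, 272) if is_prime(p)]
# runs of four consecutive primes that sum to 272
print([primes[i:i + 4] for i in range(len(primes) - 3)
       if sum(primes[i:i + 4]) == 272])   # [[61, 67, 71, 73]]
print(272 == 2**4 * 17)                   # the factorization
print(272 == 16 * 17)                     # pronic: product of consecutive integers
print(272 == 2 * (16 * 17 // 2))          # hence twice the 16th triangular number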
1609 Almost exactly a year after the first application for a patent on the telescope, Giambaptista della Porta, the Neapolitan polymath, sent a letter to the founder of the Accademia dei Lincei, Prince Federico Cesi in Rome, with a sketch of an instrument that had just reached him. Della Porta's Magia Naturalis of 1589 was well known all over Europe because of a tantalizing hint at what might be accomplished by a combination of a convex and a concave lens: "With a concave you shall see small things afar off, very clearly; with a convex, things neerer to be greater, but more obscurely: if you know how to fit them both together, you shall see both things afar off, and things neer hand, both greater and clearly." In the letter he wrote: "It is a small tube of soldered silver, one palm in length, and three finger breadths in diameter, which has a convex glass in the end. There is another tube of the same material four finger breadths long, which enters into the first one, and in the end it has a concave [glass], which is secured like the first one. If observed with that first tube, faraway things are seen as if they were near, but because the vision does not occur along the perpendicular, they appear obscure and indistinct. When the other concave tube, which produces the opposite effect, is inserted, things will be seen clear and erect and it goes in and out, as in a trombone, so that it adjusts to the eyesight of [particular] observers, which all differ." *Albert Van Helden, Galileo and the Telescope; Origins of the Telescope, Royal Netherlands Academy of Arts and Sciences, 2010 (I assume that we can safely date the invention of the trombone prior to 1609 also.)

1801 Gauss's Disquisitiones Arithmeticae published. It is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798, when Gauss was 21, and first published in 1801, when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds important new results of his own. The book is divided into seven sections:
Section I. Congruent Numbers in General
Section II. Congruences of the First Degree
Section III. Residues of Powers
Section IV. Congruences of the Second Degree
Section V. Forms and Indeterminate Equations of the Second Degree
Section VI. Various Applications of the Preceding Discussions
Section VII. Equations Defining Sections of a Circle
Sections I to III are essentially a review of previous results, including Fermat's little theorem, Wilson's theorem and the existence of primitive roots. Although few of the results in these first sections are original, Gauss was the first mathematician to bring this material together and treat it in a systematic way. He was also the first mathematician to realize the importance of the property of unique factorization (sometimes called the fundamental theorem of arithmetic), which he states and proves explicitly. From Section IV onwards, much of the work is original. Section IV itself develops a proof of quadratic reciprocity; Section V, which takes up over half of the book, is a comprehensive analysis of binary quadratic forms; and Section VI includes two different primality tests. Finally, Section VII is an analysis of cyclotomic polynomials, which concludes by giving the criteria that determine which regular polygons are constructible, i.e., can be constructed with a compass and unmarked straight edge alone. *Wik

In 1988, the space shuttle Discovery blasted off from Cape Canaveral, Fla., marking America's return to manned space flight following the Challenger disaster. *TIS

1994 HotJava: Programmers first demonstrated the HotJava prototype to executives at Sun Microsystems Inc. A browser making use of Java technology, HotJava attempted to transfer Sun's new programming platform for use on the World Wide Web. Java is based on the concept of being truly universal, allowing an application written in the language to be used on a computer with any type of operating system or on the web, televisions or telephones. *CHM
1561 Adriaan van Roomen (29 Sept 1561 - 4 May 1615) is often known by his Latin name, Adrianus Romanus. After studying at the Jesuit College in Cologne, Roomen studied medicine at Louvain. He then spent some time in Italy, particularly with Clavius in Rome in 1585. Roomen was professor of mathematics and medicine at Louvain from 1586 to 1592; he then went to Würzburg, where again he was professor of medicine. He was also "Mathematician to the Chapter" in Würzburg. From 1603 to 1610 he lived frequently in both Louvain and Würzburg. He was ordained a priest in 1604. After 1610 he tutored mathematics in Poland. One of Roomen's most impressive results was finding π to 16 decimal places. He did this in 1593 using polygons with 2^30 sides (a sketch of the idea follows below). Roomen's interest in π was almost certainly a result of his friendship with Ludolph van Ceulen. Roomen proposed a problem which involved solving an equation of degree 45. The problem was solved by Viète, who realised that there was an underlying trigonometric relation. After this a friendship grew up between the two men. Viète proposed to Roomen the problem of drawing a circle to touch three given circles (the Apollonian problem), and Roomen solved it using hyperbolas, publishing the result in 1596. Roomen worked on trigonometry and the calculation of chords in a circle. In 1596 Rheticus's trigonometric tables Opus palatinum de triangulis were published, many years after Rheticus died. Roomen was critical of the accuracy of the tables and wrote to Clavius at the Collegio Romano in Rome pointing out that, to calculate tangent and secant tables correctly to ten decimal places, it was necessary to work to 20 decimal places for small values of sine. *SAU
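A sketch of the polygon idea (mine, not van Roomen's actual hand computation, which relied on clever root extractions): starting from an inscribed square and doubling the number of sides repeatedly, the half-perimeter of the inscribed 2^30-gon matches π to about the limit of double-precision arithmetic.

import math

# side of a regular n-gon inscribed in a unit circle; the doubling step is
# s_2n = sqrt(2 - sqrt(4 - s_n^2)), rewritten below in an algebraically
# equivalent form that avoids subtractive cancellation
n, s = 4, math.sqrt(2.0)          # start with the inscribed square
while n < 2**30:
    s = s / math.sqrt(2.0 + math.sqrt(4.0 - s * s))
    n *= 2
print(n * s / 2)                  # half-perimeter, approximately pi
print(math.pi)                    # for comparison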
1803 Jacques Charles-François Sturm (29 Sep 1803; 18 Dec 1855) French mathematician whose work resulted in Sturm's theorem, an important contribution to the theory of equations. While a tutor of the de Broglie family in Paris (1823-24), Sturm met many of the leading French scientists and mathematicians. In 1826, with the Swiss engineer Daniel Colladon, he made the first accurate determination of the velocity of sound in water. A year later he wrote a prizewinning essay on compressible fluids. Since the time of René Descartes, a problem had existed of finding the number of solutions of a given second-order differential equation within a given range of the variable. Sturm provided a complete solution to the problem with his theorem, which first appeared in Mémoire sur la résolution des équations numériques (1829; "Treatise on Numerical Equations"). Those principles have been applied in the development of quantum mechanics, as in the solution of the Schrödinger equation and its boundary values. *TIS Sturm is also remembered for the Sturm-Liouville problem, an eigenvalue problem in second-order differential equations. *SAU

1812 Gustav Adolph Göpel (29 Sept 1812 - 7 June 1847) Göpel's doctoral dissertation studied periodic continued fractions of the roots of integers and derived a representation of the numbers by quadratic forms. He wrote on Steiner's synthetic geometry, and an important work, Theoriae transcendentium Abelianarum primi ordinis adumbratio levis, published after his death, continued the work of Jacobi on elliptic functions. This work was published in Crelle's Journal in 1847. *SAU

1895 Harold Hotelling (29 September 1895 - 26 December 1973). He originally studied journalism at the University of Washington, earning a degree in it in 1919, but eventually turned to mathematics, gaining a PhD in Mathematics from Princeton in 1924 for a dissertation dealing with topology. However, he became interested in statistics that used higher-level math, leading him to go to England in 1929 to study with Fisher. Although Hotelling first went to Stanford University in 1931, not many years afterwards he became a Professor of Economics at Columbia University, where he helped create Columbia's Stat Dept. In 1946, Hotelling was recruited by Gertrude Cox to form a new Stat Dept at the University of North Carolina at Chapel Hill. He became Professor and Chairman of the Dept of Mathematical Statistics, Professor of Economics, and Associate Director of the Institute of Statistics at UNC-CH. (When Hotelling and his wife first arrived in Chapel Hill, they instituted the "Hotelling Tea", where they opened their home to students and faculty for tea time once a month.) Dr. Hotelling's major contributions to statistical theory were in multivariate analysis, probably the most important being his famous 1931 paper "The Generalization of Student's Ratio", now known as Hotelling's T^2, which involves a generalization of Student's t-test for multivariate data (a small numerical sketch follows below). In 1953, Hotelling published a 30-plus-page paper on the distribution of the correlation coefficient, following up on the work of Florence Nightingale David in 1938. *David Bee
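A minimal numpy sketch (mine, not from the original post) of the one-sample statistic, T^2 = n (xbar - mu0)' S^-1 (xbar - mu0), where S is the sample covariance matrix; the data here are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))        # 50 observations of 3 variables (made up)
mu0 = np.zeros(3)                   # hypothesized mean vector
n = x.shape[0]
xbar = x.mean(axis=0)
S = np.cov(x, rowvar=False)         # unbiased sample covariance
d = xbar - mu0
t2 = n * d @ np.linalg.solve(S, d)  # Hotelling's T^2
print(t2)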
1931 James Watson Cronin (29 Sep 1931 - ) American particle physicist who shared (with Val Logsdon Fitch) the 1980 Nobel Prize for Physics for "the discovery of violations of fundamental symmetry principles in the decay of neutral K-mesons." Their experiment proved that a reaction run in reverse does not follow the path of the original reaction, which implied that time has an effect on subatomic-particle interactions. Thus the experiment demonstrated a break in particle-antiparticle symmetry for certain reactions of subatomic particles. *TIS

1935 Hillel (Harry) Fürstenberg (September 29, 1935 - ) is an American-Israeli mathematician, a member of the Israel Academy of Sciences and Humanities and the U.S. National Academy of Sciences, and a laureate of the Wolf Prize in Mathematics. He is known for his application of probability theory and ergodic theory methods to other areas of mathematics, including number theory and Lie groups. He gained attention at an early stage in his career for producing an innovative topological proof of the infinitude of prime numbers. He proved unique ergodicity of horocycle flows on compact hyperbolic Riemann surfaces in the early 1970s. In 1977, he gave an ergodic-theory reformulation, and subsequently a proof, of Szemerédi's theorem. The Fürstenberg boundary and Fürstenberg compactification of a locally symmetric space are named after him. *Wik

1939 Samuel Dickstein (May 12, 1851 – September 29, 1939) was a Polish mathematician of Jewish origin. He was one of the founders of the Jewish party "Zjednoczenie" (Unification), which advocated the assimilation of Polish Jews. He was born in Warsaw and was killed there by a German bomb at the beginning of World War II. All the members of his family were killed during the Holocaust. Dickstein wrote many mathematical books and founded the journal Wiadomości Matematyczne (Mathematical News), now published by the Polish Mathematical Society. He was a bridge between the times of Cauchy and Poincaré and those of the Lwów School of Mathematics. He was also thanked by Alexander Macfarlane for contributing to the Bibliography of Quaternions (1904) published by the Quaternion Society. He was also one of the personalities who contributed to the foundation of the Warsaw Public Library in 1907. *Wik

1941 Friedrich Engel (26 Dec 1861 - 29 Sept 1941) Engel was taught by Klein, who recognized that he was the right man to assist Lie. At Klein's suggestion Engel went to work with Lie in Christiania (now Oslo) from 1884 until 1885. In 1885 Engel's Habilitation thesis was accepted by Leipzig and he became a lecturer there. The year after Engel returned to Leipzig from Christiania, Lie was appointed to succeed Klein, and the collaboration of Lie and Engel continued. In 1889 Engel was promoted to assistant professor and, ten years later, he was promoted to associate professor. In 1904 he accepted the chair of mathematics at Greifswald when his friend Eduard Study resigned the chair. Engel's final post was the chair of mathematics at Giessen, which he accepted in 1913, and he remained there for the rest of his life. In 1931 he retired from the university but continued to work in Giessen. The collaboration between Engel and Lie led to Theorie der Transformationsgruppen, a work in three volumes published between 1888 and 1893. This work was "... prepared by S Lie with the cooperation of F Engel..." In many ways it was Engel who put Lie's ideas into a coherent form and made them widely accessible. From 1922 to 1937 Engel published Lie's collected works in six volumes and prepared a seventh (which in fact was not published until 1960). Engel's efforts in producing Lie's collected works are described as "... an exceptional service to mathematics in particular, and scholarship in general. Lie's peculiar nature made it necessary for his works to be elucidated by one who knew them intimately, and thus Engel's 'Annotations' competed in scope with the text itself." Engel also edited Hermann Grassmann's complete works, and really only after this was published did Grassmann get the fame which his work deserved. Engel collaborated with Stäckel in studying the history of non-Euclidean geometry. He also wrote on continuous groups and partial differential equations, translated works of Lobachevsky from Russian to German, and wrote on discrete groups, Pfaffian equations and other topics. *SAU

1955 L(ouis) L(eon) Thurstone (29 May 1887 - 29 Sep 1955) was an American psychologist who improved psychometrics, the measurement of mental functions, and developed statistical techniques for multiple-factor analysis of performance on psychological tests. In high school, he published a letter in Scientific American on a problem of diversion of water from Niagara Falls, and invented a method of trisecting an angle. At university, Thurstone studied engineering. He designed a patented motion picture projector, later demonstrated in the laboratory of Thomas Edison, with whom Thurstone worked briefly as an assistant. When he began teaching engineering, Thurstone became interested in the learning process and pursued a doctorate in psychology. *TIS

2003 Ovide Arino (24 April 1947 - 29 September 2003) was a mathematician working on delay differential equations. His field of application was population dynamics. He was a quite prolific writer, publishing over 150 articles in his lifetime. He was also very active in terms of student supervision, having supervised about 60 theses in total in about 20 years. Also, he organized or co-organized many scientific events. But, most of all, he was an extremely kind human being, interested in finding the good in everyone he met. *Euromedbiomath
2010 Georges Charpak (1 August 1924 – 29 September 2010) was a French physicist who was awarded the Nobel Prize in Physics in 1992 "for his invention and development of particle detectors, in particular the multiwire proportional chamber". This was the last time a single person was awarded the physics prize. *Wik

Credits: *CHM = Computer History Museum *FFF = Kane, Famous First Facts *NSEC = NASA Solar Eclipse Calendar *RMAT = The Renaissance Mathematicus, Thony Christie *SAU = St Andrews Univ. Math History *TIA = Today in Astronomy *TIS = Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Sunday, 28 September 2014

On This Day in Math - September 28

But in the present century, thanks in good part to the influence of Hilbert, we have come to see that the unproved postulates with which we start are purely arbitrary. They must be consistent, they had better lead to something interesting.
~Julian Lowell Coolidge

The 271st day of the year; 271 is a prime number and is the sum of eleven consecutive primes (7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43).

490 B.C. In one of history's great battles, the Greeks defeated the Persians at Marathon. A Greek soldier was dispatched to notify Athens of the victory, running the entire distance and providing the name and model for the modern "marathon" race. *VFR

1695 After fitting several comets' data using Newton's proposal that they followed parabolic paths, Edmund Halley was "inspired" to test his own measurements of the 1682 comet against an elliptical orbit. He writes to Newton, "I am more and more confirmed that we have seen that Comet now three times since ye Year 1531." *David A. Grier, When Computers Were Human

1791 Captain George Vancouver observed this Wednesday morning a partial solar eclipse. He went on to name the barren rocky cluster of isles the Eclipse Islands. *NSEC

1858 Donati's comet (discovered by Giovanni Donati, 1826-1873) became the first to be photographed. It was a bright comet that developed a spectacular curved dust tail with two thin gas tails, captured by an English commercial photographer, William Usherwood, using a portrait camera at a low focal ratio. At Harvard, W. C. Bond attempted an image on a collodion plate the following night, but the comet shows only faintly and no tail can be seen. Bond was subsequently able to evaluate the image on Usherwood's plate. The earliest celestial daguerreotypes were made in 1850-51, though after the Donati comet no further comet photography took place until 1881, when P. J. C. Janssen and J. W. Draper took the first generally recognized photographs of a comet. *TIS "William Usherwood, a commercial photographer from Dorking, Surrey, took the first ever photograph of a comet when he photographed Donati's comet from Walton Common on the 27th September 1858, beating George Bond from Harvard Observatory by a night! Unfortunately, the picture taken by Usherwood has been lost." *Exposure web site

1917 Richard Courant wrote to Nina Runge, his future wife, that he had finally got the opportunity to talk to Ferdinand Springer about "a publishing project" and that things looked promising. This meeting led to a contract and a series of books now called the "Yellow Series". *VFR

1938 Paul Erdős boarded the Queen Mary, bound for the USA. Alarmed by Hitler's demands to annex the Sudetenland, Erdős hurriedly left Budapest and made his way through Italy and France to London. He would pass through Ellis Island on his way to a position at Princeton's Institute for Advanced Study on October 4. *Bruce Schechter, My Brain is Open: The Mathematical Journeys of Paul Erdős
1969 Murchison meteorite: a meteorite fell over Murchison, Australia. Only about 100 kg of this meteorite have been found. Classified as a carbonaceous chondrite, type II (CM2), this meteorite is suspected to be of cometary origin due to its high water content (12%). An abundance of amino acids found within this meteorite has led to intense study by researchers as to its origins. More than 92 different amino acids have been identified within the Murchison meteorite to date. Nineteen of these are found on Earth. The remaining amino acids have no apparent terrestrial source. *TIS

2009 mathoverflow.net goes online. *Peter Krautzberger, comments

2011 President Barack Obama announced that Richard Alfred Tapia was among twelve scientists to be awarded the National Medal of Science, the top award the United States offers its researchers. Tapia is currently the Maxfield and Oshman Professor of Engineering; Associate Director of Graduate Studies, Office of Research and Graduate Studies; and Director of the Center for Excellence and Equity in Education at Rice University. He is a renowned American mathematician and champion of under-represented minorities in the sciences. *Wik

551 B.C. Birthdate of the Chinese philosopher and educator Confucius. His birthday is observed as "Teacher's Day" in memory of his great contribution to the Chinese nation. His most famous aphorism is: "With education there is no distinction between classes or races of men." *VFR

1651 Johann Philipp von Wurzelbau (28 September 1651, Nürnberg - 21 July 1725, Nürnberg) was a German astronomer. A native of Nuremberg, Wurzelbauer was a merchant who became an astronomer. As a youth, he was keenly interested in mathematics and astronomy but had been forced to earn his living as a merchant. He married twice: his first marriage was to Maria Magdalena Petz (1656–1713), his second to Sabina Dorothea Kress (1658–1733). Petz bore him six children. He first published a work concerning his observations of the great comet of 1680, and initially began his work at a private castle-observatory at Spitzenberg 4 (completely destroyed during World War II), owned by Georg Christoph Eimmart, the director of Nuremberg's painters' academy. Wurzelbauer was 64 when he began this second career, but proved himself to be an able assistant to Eimmart. A large quadrant from his days at Eimmart's observatory still survives. After 1682, Wurzelbauer owned his own astronomical observatory and instruments, observed the transit of Mercury and solar eclipses, and worked out the geographical latitude of his native city. After 1683, he had withdrawn himself completely from business life to dedicate himself to astronomy. By 1700, Wurzelbauer had become the most well-known astronomer in Nuremberg. For his services to the field of astronomy, he was ennobled in 1692 by Leopold I, Holy Roman Emperor, and added the "von" to his name. He was a member of the French and the Prussian academies of the sciences. The crater Wurzelbauer on the Moon is named after him. *Wik
1698 Pierre-Louis Moreau de Maupertuis (28 Sep 1698; 27 Jul 1759) French mathematician, biologist, and astronomer. In 1732 he introduced Newton's theory of gravitation to France. He was a member of an expedition to Lapland in 1736 which set out to measure the length of a degree along the meridian. Maupertuis' measurements both verified Newton's predictions that the Earth would be an oblate spheroid and corrected earlier results of Cassini. Maupertuis published on many topics, including mathematics, geography, astronomy and cosmology. In 1744 he first enunciated the Principle of Least Action, and he published it in Essai de cosmologie in 1750. Maupertuis hoped that the principle might unify the laws of the universe, and combined it with an attempted proof of the existence of God. *TIS

1761 François Budan de Boislaurent (28 Sept 1761 - 6 Oct 1840) was a Haitian-born amateur mathematician, best remembered for his discovery of a rule which gives necessary conditions for a polynomial equation to have n real roots between two given numbers. Budan's rule was in a memoir sent to the Institute in 1803, but it was not made public until 1807 in Nouvelle méthode pour la résolution des équations numériques d'un degré quelconque. In it Budan wrote, "If an equation in x has n roots between zero and some positive number p, the transformed equation in (x - p) must have at least n fewer variations in sign than the original." *SAU (Sounds like a nice follow-up extension to Descartes' Rule of Signs in Pre-calculus classes. Mention the history; how many times do your students hear about a Haitian mathematician? A small sign-counting sketch follows below.)
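A minimal illustration of Budan's rule (mine, not from the original post; coefficient lists run from the constant term up, and the example polynomial is x^3 - 7x + 7):

from math import comb

def sign_variations(coeffs):
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def shift(coeffs, p):
    # coefficients of P(x + p), by the binomial theorem
    n = len(coeffs)
    return [sum(coeffs[k] * comb(k, j) * p ** (k - j) for k in range(j, n))
            for j in range(n)]

P = [7, -7, 0, 1]   # 7 - 7x + x^3
v0, vp = sign_variations(P), sign_variations(shift(P, 2))
print(v0 - vp)      # 2: at most two roots in (0, 2); this cubic has exactly two there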
1824 George Johnston Allman (28 September 1824 – 9 May 1904) was an Irish professor, mathematician, classical scholar, and historian of ancient Greek mathematics. *Wik

1873 Julian Lowell Coolidge (28 Sep 1873; 5 Mar 1954) After an education at Harvard (B.A. 1895), Oxford (B.Sc. 1897), Turin (with Corrado Segre) and Bonn (with Eduard Study, Ph.D. 1904), he came back to Harvard to teach until he retired in 1940. He was an enthusiastic teacher with a flair for witty remarks. [DSB 3, 399] *VFR He published numerous works on theoretical mathematics along the lines of the Study-Segre school. He taught at Groton School, Conn. (1897-9), where one of his pupils was Franklin D. Roosevelt, the future U.S. president. From 1899 he taught at Harvard University. Between 1902 and 1904 he went to Turin to study under Corrado Segre, and then to Bonn, where he studied under Eduard Study. His Mathematics of the Great Amateurs is perhaps his best-known work. *TIS

1881 Edward Ross studied at Edinburgh and Cambridge universities. After working with Karl Pearson in London, he was appointed Professor of Mathematics at the Christian College in Madras, India. Ill health forced him to retire back to Scotland. *SAU

1925 Martin David Kruskal (September 28, 1925 – December 26, 2006) was an American mathematician and physicist. He made fundamental contributions in many areas of mathematics and science, ranging from plasma physics to general relativity and from nonlinear analysis to asymptotic analysis. His single most celebrated contribution was the discovery and theory of solitons. His Ph.D. dissertation, written under the direction of Richard Courant and Bernard Friedman at New York University, was on the topic "The Bridge Theorem for Minimal Surfaces"; he received his Ph.D. in 1952. In the 1950s and early 1960s, he worked largely on plasma physics, developing many ideas that are now fundamental in the field. His theory of adiabatic invariants was important in fusion research. Important concepts of plasma physics that bear his name include the Kruskal–Shafranov instability and the Bernstein–Greene–Kruskal (BGK) modes. With I. B. Bernstein, E. A. Frieman, and R. M. Kulsrud, he developed the MHD (or magnetohydrodynamic) Energy Principle. His interests extended to plasma astrophysics as well as laboratory plasmas. Martin Kruskal's work in plasma physics is considered by some to be his most outstanding. In 1960, Kruskal discovered the full classical spacetime structure of the simplest type of black hole in General Relativity. A spherically symmetric black hole can be described by the Schwarzschild solution, which was discovered in the early days of General Relativity. However, in its original form, this solution only describes the region exterior to the horizon of the black hole. Kruskal (in parallel with George Szekeres) discovered the maximal analytic continuation of the Schwarzschild solution, which he exhibited elegantly using what are now called Kruskal–Szekeres coordinates. This led Kruskal to the astonishing discovery that the interior of the black hole looks like a "wormhole" connecting two identical, asymptotically flat universes. This was the first real example of a wormhole solution in General Relativity. The wormhole collapses to a singularity before any observer or signal can travel from one universe to the other. This is now believed to be the general fate of wormholes in General Relativity. Martin Kruskal was married to Laura Kruskal, his wife of 56 years. Laura is well known as a lecturer and writer about origami, and the originator of many new models. Martin, who had a great love of games, puzzles, and word play of all kinds, also invented several quite unusual origami models, including an envelope for sending secret messages (anyone who unfolded the envelope to read the message would have great difficulty refolding it to conceal the deed). His mother, Lillian Rose Vorhaus Kruskal Oppenheimer, was an American origami pioneer. She popularized origami in the West starting in the 1950s, and is credited with popularizing the Japanese term origami in English-speaking circles, which gradually supplanted the literal translation paper folding that had been used earlier. In the 1960s she co-wrote several popular books on origami with Shari Lewis. *Wik
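The solitons mentioned in the entry above are solutions of the Korteweg-de Vries equation. As an illustration (mine, not from the post), the standard one-soliton profile u = (c/2) sech^2(sqrt(c)(x - ct)/2) can be checked against the KdV equation u_t + 6uu_x + u_xxx = 0 with sympy:

import sympy as sp

x, t = sp.symbols('x t', real=True)
c = sp.Symbol('c', positive=True)
u = c / 2 / sp.cosh(sp.sqrt(c) / 2 * (x - c * t)) ** 2   # sech^2 written via cosh
kdv = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(kdv))      # should reduce to 0

# numeric spot-check at an arbitrary point, in case simplify stops short
f = sp.lambdify((x, t, c), kdv)
print(f(0.3, 0.7, 1.5))      # ~0, up to rounding error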
1925 Seymour R. Cray (28 Sep 1925; 5 Oct 1996) American electronics engineer who pioneered the use of transistors in computers and later developed massive supercomputers to run business and government information networks. He was the preeminent designer of the large, high-speed computers known as supercomputers. *TIS Cray began his engineering career building cryptographic machinery for the U.S. government and went on to co-found Control Data Corporation (CDC) in the late 1950s. For over three decades, first with CDC and then with his own companies, Cray consistently built the fastest computers in the world, leading the industry with innovative architectures and packaging and allowing the solution of hundreds of difficult scientific, engineering, and military problems. Many of Cray's supercomputers are on exhibit at The Computer Museum History Center. Cray died in an automobile accident in 1996. *CHM

1961 Enrique Zuazua Iriondo (September 28, 1961, Eibar, Gipuzkoa, Basque Country, Spain - ) is a Research Professor at Ikerbasque, the Basque Foundation for Science, in BCAM - Basque Center for Applied Mathematics, which he founded in 2008 as Scientific Director. He is also the Director of the BCAM Chair in Partial Differential Equations, Control and Numerics, and Professor on leave of Applied Mathematics at the Universidad Autónoma de Madrid (UAM). His domains of expertise in Applied Mathematics include Partial Differential Equations, Control Theory and Numerical Analysis. These subjects interrelate, and their final aim is to model, analyse, computer-simulate, and finally contribute to the control and design of the most diverse natural phenomena and all fields of R&D and innovation. Twenty PhD students have earned their degrees under his supervision, and they now occupy positions in centres throughout the world: Brazil, Chile, China, Mexico, Romania, Spain, etc. He has developed intensive international work, having led co-operation programmes with various Latin American countries, as well as with Portugal, the Maghreb, China and Iran, amongst others. *Wik

1694 Gabriel Mouton was a French clergyman who worked on interpolation and on astronomy. *SAU

1869 Count Guglielmo Libri Carucci dalla Sommaja (1 Jan 1803 - 28 Sept 1869) Libri's early work was on mathematical physics, particularly the theory of heat. However, he made many contributions to number theory and to the theory of equations. His best work during the 1830s and 1840s was undoubtedly his work on the history of mathematics. From 1838 to 1841 he published four volumes of Histoire des sciences mathématiques en Italie, depuis la renaissance des lettres jusqu'à la fin du dix-septième siècle. He intended to write a further two volumes, but never finished the task. It is an important work, but suffers from over-praise of Italians at the expense of others. *SAU

1953 Edwin Powell Hubble (20 Nov 1889 - 28 Sep 1953) American astronomer, born in Marshfield, Mo., who is considered the founder of extragalactic astronomy and who provided the first evidence of the expansion of the universe. In 1923-25 he identified Cepheid variables in "spiral nebulae" M31 and M33 and proved conclusively that they are outside the Galaxy. His investigation of these objects, which he called extragalactic nebulae and which astronomers today call galaxies, led to his now-standard classification system of elliptical, spiral, and irregular galaxies, and to proof that they are distributed uniformly out to great distances. Hubble measured distances to galaxies and their redshifts, and in 1929 he published the velocity-distance relation which is the basis of modern cosmology. *TIS

1992 John Leech is best known for the Leech lattice, which is important in the theory of finite simple groups. *SAU

2004 Jacobus Hendricus ("Jack") van Lint (1 September 1932 - 28 September 2004) was a Dutch mathematician, professor at the Eindhoven University of Technology, of which he was rector magnificus from 1991 till 1996. His field of research was initially number theory, but he worked mainly in combinatorics and coding theory. Van Lint was honored with a great number of awards. He became a member of the Royal Netherlands Academy of Arts and Sciences in 1972, received four honorary doctorates, was an honorary member of the Royal Netherlands Mathematics Society (Koninklijk Wiskundig Genootschap), and received a knighthood. *Wik

Credits: *CHM = Computer History Museum *FFF = Kane, Famous First Facts *NSEC = NASA Solar Eclipse Calendar *RMAT = The Renaissance Mathematicus, Thony Christie *SAU = St Andrews Univ. Math History *TIA = Today in Astronomy *TIS = Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
Saturday, 27 September 2014

On This Day in Math - September 27

Algebra exists only for the elucidation of geometry.
~William Edge

The 270th day of the year; the harmonic mean of the divisors of 270 is an integer. The first three numbers with this property are 1, 6, and 28. What is the next one? These are sometimes called Ore numbers, for Øystein Ore, who studied them. Many of them also have an integer arithmetic mean of their divisors, but not all. (A short search sketch appears below.)
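Not in the original post: a small Python search for these harmonic divisor ("Ore") numbers, using exact rational arithmetic.

from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def harmonic_mean_of_divisors_is_integer(n):
    ds = divisors(n)
    hm = Fraction(len(ds)) / sum(Fraction(1, d) for d in ds)
    return hm.denominator == 1

print([n for n in range(1, 1000) if harmonic_mean_of_divisors_is_integer(n)])
# prints 1, 6, 28, 140, 270, 496, 672 - so the one after 28 is 140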
It is incorporated by Royal Charter dated 21 Apr 1928. *TIS

1905 E=mc²: the day that Einstein's paper outlining the significance of the equation, "Does the inertia of a body depend on its energy content?", arrived in the offices of the German journal Annalen der Physik.

1919 Einstein writes to his ailing mother that "H. A. Lorentz has just telegraphed me that the British expeditions have definitely confirmed the deflection of light by the sun." He adds consolation on her illness, wishes her "good days", and closes with "affectionately, Albert". *Einstein Archives

In 1922, scientists at the Naval Aircraft Radio Laboratory near Washington, D.C., demonstrated that if a ship passed through a radio wave being broadcast between two stations, that ship could be detected: the essentials of radar. *TIS

1996 Kevin Mitnick, 33, was indicted on charges resulting from a 2½-year hacking spree. Police accused the hacker, who called himself "Condor," of stealing software worth millions of dollars from major computer corporations. The maximum possible sentence for his crimes was 200 years. *CHM
Mitnick served five years in prison (four and a half years pre-trial and eight months in solitary confinement) because, according to Mitnick, law enforcement officials convinced a judge that he had the ability to "start a nuclear war by whistling into a pay phone". He was released on January 21, 2000. During his supervised release, which ended on January 21, 2003, he was initially forbidden to use any communications technology other than a landline telephone. Mitnick fought this decision in court, eventually winning a ruling in his favor that allowed him to access the Internet. Under the plea deal, Mitnick was also prohibited from profiting from films or books based on his criminal activity for seven years. Mitnick now runs Mitnick Security Consulting LLC, a computer security consultancy. *Wik

BIRTHS

1677 Johann Doppelmayr was a German mathematician who wrote on astronomy, spherical trigonometry, sundials and mathematical instruments. *SAU

1719 Abraham Kästner was a German mathematician who compiled encyclopaedias and wrote text-books. He taught Gauss. His work on the parallel postulate influenced Bolyai and Lobachevsky. *SAU

1814 Daniel Kirkwood (27 Sep 1814; 11 Jun 1895) American mathematician and astronomer who noted in about 1860 that there were several zones of low density in the minor-planet population. These gaps in the distribution of asteroid distances from the Sun are now known as Kirkwood gaps. He explained the gaps as resulting from perturbations by Jupiter. An object that revolved in one of the gaps would be disturbed regularly by the planet's gravitational pull and eventually would be moved to another orbit. Thus gaps appeared in the distribution of asteroids where the orbital period of any small body present would be a simple fraction of that of Jupiter. Kirkwood showed that a similar effect accounted for gaps in Saturn's rings. *TIS The asteroid 1951 AT was named 1578 Kirkwood in his honor, as were the lunar impact crater Kirkwood and Indiana University's Kirkwood Observatory. He is buried in the Rose Hill Cemetery in Bloomington, Indiana, where Kirkwood Avenue is named for him. *Wik
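Kirkwood's resonance picture is easy to check numerically: by Kepler's third law, a body in a p:q mean-motion resonance with Jupiter has semi-major axis a = a_J (q/p)^(2/3). A quick sketch (Jupiter's semi-major axis here is an approximate figure):

```python
A_JUPITER = 5.204  # Jupiter's semi-major axis in AU (approximate)

# An asteroid in p:q resonance completes p orbits for every q of Jupiter's,
# so its period is (q/p) of Jupiter's, and Kepler's third law gives its
# semi-major axis as a = A_JUPITER * (q/p)**(2/3).
for p, q in [(3, 1), (5, 2), (7, 3), (2, 1)]:
    a = A_JUPITER * (q / p) ** (2 / 3)
    print(f"{p}:{q} resonance -> gap near {a:.2f} AU")
# -> 2.50, 2.82, 2.96 and 3.28 AU, matching the principal observed gaps
```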
1876 Earle Raymond Hedrick (September 27, 1876 - February 3, 1943) was an American mathematician and a vice-president of the University of California. Hedrick was born in Union City, Indiana. After undergraduate work at the University of Michigan, he obtained a Master of Arts from Harvard University. With a Parker fellowship, he went to Europe and obtained his PhD from Göttingen University in Germany under the supervision of David Hilbert in 1901. He then spent several months at the École Normale Supérieure in France, where he became acquainted with Édouard Goursat, Jacques Hadamard, Jules Tannery, Émile Picard and Paul Émile Appell, before becoming an instructor at Yale University. In 1903 he became professor at the University of Missouri. He was involved in the creation of the Mathematical Association of America in 1916 and was its first president. His work was on partial differential equations and on the theory of nonanalytic functions of complex variables. He also did work in applied mathematics, in particular on a generalization of Hooke's law and on the transmission of heat in steam boilers. With Oliver Dimon Kellogg he authored a text on the applications of calculus to mechanics. He moved in 1920 to UCLA to become head of the department of mathematics. In 1933 he gave the first graduate lecture on mathematics at UCLA. He became provost and vice-president of the University of California in 1937. He humorously called his appointment "The Accident," and joked after this event, "I no longer have any intellectual interests —I just sit and talk to people." He in fact played a very important role in making the University of California a leading institution. He retired from the UCLA faculty in 1942 and accepted a visiting professorship at Brown University. Soon after the beginning of this new appointment, he suffered a lung infection. He died at the Rhode Island Hospital in Providence, Rhode Island. Two UCLA residence halls are named after him: Hedrick Hall (1963) and Hedrick Summit (2005). *Wik

1843 Gaston Tarry was a French combinatorialist whose best-known work is a method for solving mazes. *SAU

1855 Paul Appell (27 September 1855 - 24 October 1930), also known as Paul Émile Appel, was a French mathematician and Rector of the University of Paris. The concept of Appell polynomials is named after him, as is rue Paul Appell in the 14th arrondissement of Paris. *Wik
1879 Hans Hahn was an Austrian mathematician who is best remembered for the Hahn-Banach theorem. He also made important contributions to the calculus of variations, developing ideas of Weierstrass. *SAU

1892 Mykhailo Pilipovich Krawtchouk (27 Sept 1892 in Chovnitsy (now Kivertsi), Ukraine - 9 March 1942 in Kolyma, Siberia, USSR) In 1929 Krawtchouk published his most famous work, Sur une généralisation des polynômes d'Hermite. In this paper he introduced a new system of orthogonal polynomials, now known as the Krawtchouk polynomials, which are associated with the binomial distribution. However, his mathematical work was very wide and, despite his early death, he was the author of around 180 articles on mathematics. He wrote papers on differential and integral equations, studying both their theory and applications. Other areas he wrote on included algebra (where, among other topics, he studied the theory of permutation matrices), geometry, mathematical and numerical analysis, probability theory and mathematical statistics. He was also interested in the philosophy of mathematics, the history of mathematics and mathematical education. Krawtchouk edited the first three-volume dictionary of Ukrainian mathematical terminology. *SAU

1918 Sir Martin Ryle (27 Sep 1918; 14 Oct 1984) British radio astronomer who developed revolutionary radio telescope systems and used them for accurate location of weak radio sources. Ryle helped develop radar for British defense during WW II. Afterward, he was a leader in the development of radio astronomy. With his aperture synthesis technique of interferometry he and his team located radio-emitting regions on the sun and pinpointed other radio sources so that they could be studied in visible light. Ryle's 1C-5C Cambridge catalogues of radio sources led to the discovery of numerous radio galaxies and quasars. Using this technique, radio astronomers eventually surpassed optical astronomers in angular resolution. He observed the most distant known galaxies of the universe. For his aperture synthesis technique, Ryle shared the Nobel Prize for Physics in 1974 (with Antony Hewish), the first awarded in recognition of astronomical research. He was the 12th Astronomer Royal (1972-82). *TIS

1919 James Hardy Wilkinson (27 September 1919 - 5 October 1986) was a prominent figure in the field of numerical analysis, a field at the boundary of applied mathematics and computer science particularly useful to physics and engineering. He received the Turing Award in 1970 "for his research in numerical analysis to facilitate the use of the high-speed digital computer, having received special recognition for his work in computations in linear algebra and 'backward' error analysis." In the same year, he also gave the John von Neumann Lecture at the Society for Industrial and Applied Mathematics. The J. H. Wilkinson Prize for Numerical Software is named in his honour. *Wik

DEATHS

1783 Étienne Bézout was a French mathematician who is best known for his theorem on the number of solutions of polynomial equations. *SAU Bézout's identity for polynomials states that if P and Q are two polynomials with no roots in common, then there exist two other polynomials A and B such that AP + BQ = 1. *Wik
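A quick way to see the identity in action is SymPy's extended Euclidean algorithm for polynomials (the particular P and Q below are my own illustrative choices):

```python
from sympy import symbols, gcdex, expand

# Bézout's identity for polynomials: P and Q share no roots, so
# gcd(P, Q) = 1 and there exist A, B with A*P + B*Q = 1.
x = symbols('x')
P = x**2 - 1          # roots +1 and -1
Q = x**2 - 4          # roots +2 and -2
A, B, g = gcdex(P, Q, x)   # returns A, B, gcd with A*P + B*Q = gcd
print(A, B, g)             # g == 1 since P and Q are coprime
print(expand(A * P + B * Q))  # -> 1
```

For this pair the answer is especially simple: Q subtracted from P is the constant 3, so A = 1/3 and B = -1/3 already work.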
1997 William Edge graduated from Cambridge and lectured at Edinburgh University. He wrote many papers on geometry. He became President of the EMS in 1944 and an honorary member in 1983. *SAU

2014 Jacqueline Anne (Barton) Stedall (4 August 1950, Romford, Essex, U.K. - 27 September 2014, Painswick, Gloucestershire) was a well-known historian of mathematics. Although her career as a researcher, scholar and university teacher lasted less than 14 years, it was greatly influential. Her nine books, more than 20 articles, input to the online edition of the manuscripts of Thomas Harriot, journal editorships and contributions to Melvyn Bragg's Radio 4 programme In Our Time showed her exceptional breadth of scholarship. Jackie Stedall came to Oxford in October 2000 as Clifford-Norton Student in the History of Science at Queen's College. She held the degrees of BA (later MA) in Mathematics from Cambridge University (1972), MSc in Statistics from the University of Kent (1973), and PhD in History of Mathematics from the Open University (2000). She also had a PGCE in Mathematics (Bristol Polytechnic, 1991). In due course she became Senior Research Fellow in the Oxford Mathematical Institute and at Queen's College, posts from which, knowing that she was suffering from incurable cancer, she took early retirement in December 2013. This was her fifth career. Following her studies at Cambridge and Canterbury she had been three years a statistician, four years Overseas Programmes Administrator for War on Want, seven years a full-time parent, and eight years a schoolteacher before she became an academic. *Obituaries at The Guardian, Oxford Mathematics, and Wik

Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Friday, 26 September 2014

On This Day in Math - September 26

"mathematics is not yet ready for such problems"
~Paul Erdős, in reference to Collatz's problem

This is the 269th day of the year, and the date is written 26/9 in much of Europe. It is the only day of the year whose day-of-year number matches the digits of its date written this way. (Are there any days that work using month/day? A brute-force check is sketched below.) 269 is a regular prime, an Eisenstein prime with no imaginary part, a long prime, a Chen prime, a Pillai prime, a Pythagorean prime, a twin prime, a sexy prime, a Higgs prime, a strong prime, and a highly cototient number. So many new terms to look up... Well? Look them up.
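Here is the brute-force check promised above, a small sketch using only the standard library (2014, the year of the post, is conveniently not a leap year):

```python
from datetime import date, timedelta

# Concatenate the digits of the date and compare with the day-of-year,
# in both day/month and month/day order.
d = date(2014, 1, 1)
while d.year == 2014:
    doy = d.timetuple().tm_yday
    if int(f"{d.day}{d.month}") == doy:
        print("day/month works:", d.strftime("%d/%m"), "=", doy)
    if int(f"{d.month}{d.day}") == doy:
        print("month/day works:", d.strftime("%m/%d"), "=", doy)
    d += timedelta(days=1)
# Only day/month 26/9 = day 269 turns up; no month/day date matches.
```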
1679 On September 26, 1679, a fierce fire consumed the Stellaburgum, Europe's finest observatory, built by the pioneering astronomer Johannes Hevelius in the city of Danzig, present-day Poland, decades before the famous Royal Greenwich Observatory and Paris Observatory existed. *Maria Popova at brainpickings.org

1775 John Adams writes to his wife to entreat her to teach his children geometry: "I have seen the Utility of Geometry, Geography, and the Art of drawing so much of late, that I must intreat you, my dear, to teach the Elements of those Sciences to my little Girl and Boys. It is as pretty an Amusement, as Dancing or Skaiting, or Fencing, after they have once acquired a Taste for them. No doubt you are well qualified for a school Mistress in these Studies, for Stephen Collins tells me the English Gentleman, in Company with him, when he visited Braintree, pronounced you the most accomplished Lady, he had seen since he left England.—You see a Quaker can flatter, but dont you be proud." *Natl. Archives

1874 James Clerk Maxwell, in a letter to Professor Lewis Campbell, describes Galton: "Francis Galton, whose mission it seems to be to ride other men's hobbies to death, has invented the felicitous expression 'structureless germs'." *Lewis Campbell and William Garnett (eds.), The Life of James Clerk Maxwell (1884), 299.

1991 The first two-year closed mission of Biosphere 2 began just outside Tucson, Arizona. *David Dickinson @Astroguyz

2011 Astronauts had this view of the aurora on September 26, 2011 (credit: NASA). We've had some great views of the aurora submitted by readers this week, but this one, taken from the International Space Station, especially highlights the red color seen by many Earth-bound skywatchers too. Karen Fox from the Goddard Space Flight Center says the colors of the aurora depend on which atoms are being excited by the solar storm. In most cases, the light comes when a charged particle sweeps in from the solar wind and collides with an oxygen atom in Earth's atmosphere. This produces a green photon, so most aurora appear green. However, lower-energy oxygen collisions as well as collisions with nitrogen atoms can produce red photons, so sometimes aurora also show a red band, as seen here. *Universe Today

BIRTHS

1688 Willem 'sGravesande (26 September 1688 - 28 February 1742) was a Dutch mathematician who expounded Newton's philosophy in Europe. In 1717 he became professor of physics and astronomy in Leiden, and introduced the works of his friend Newton in the Netherlands. His main work is Physices elementa mathematica, experimentis confirmata, sive introductio ad philosophiam Newtonianam, or Mathematical Elements of Natural Philosophy, Confirm'd by Experiments (Leiden, 1720), in which he laid the foundations for teaching physics. Voltaire and Albrecht von Haller were in his audience, and Frederick the Great invited him in 1737 to come to Berlin. His chief contribution to physics involved an experiment in which brass balls were dropped with varying velocity onto a soft clay surface. His results were that a ball with twice the velocity of another would leave an indentation four times as deep, that three times the velocity yielded nine times the depth, and so on. He shared these results with Émilie du Châtelet, who subsequently corrected Newton's formula E = mv to E = mv². (Note that though we now add a factor of 1/2 to this formula to make it work with coherent systems of units, the formula as expressed is correct if you choose units to fit it.) *Wik

1754 Joseph-Louis Proust (26 Sep 1754; 5 Jul 1826) French chemist who proved (1808) that the relative quantities of any given pure chemical compound's constituent elements remain invariant, regardless of the compound's source, and thus provided crucial evidence in support of John Dalton's "law of definite proportions," which holds that elements in any compound are present in fixed proportion to each other. *TIS

1784 Christopher Hansteen (26 Sep 1784; 15 Apr 1873) Norwegian astronomer and physicist noted for his research in geomagnetism. In 1701 Halley had already published a map of magnetic declinations, and the subject was studied by Humboldt, de Borda, and Gay-Lussac, among others. Hansteen collected available data and also mounted an expedition to Siberia, where he took many measurements for an atlas of magnetic strength and declination. *TIS

1854 Percy Alexander MacMahon (26 Sept 1854, 25 Dec 1929) His study of symmetric functions led MacMahon to study partitions and Latin squares, and for many years he was considered the leading worker in this area. He published values of the number of unrestricted partitions of the first 200 integers, which proved extremely useful to Hardy and Littlewood in their own work on partitions. He gave a Presidential Address to the London Mathematical Society on combinatorial analysis in 1894. MacMahon wrote a two-volume treatise, Combinatory Analysis (volume one in 1915 and the second volume in the following year), which has become a classic. He wrote An Introduction to Combinatory Analysis in 1920. In 1921 he wrote New Mathematical Pastimes, a book on mathematical recreations. *SAU

1887 Sir Barnes (Neville) Wallis (26 Sep 1887; 30 Oct 1979) was an English aeronautical designer and military engineer whose famous 9000-lb bouncing "dambuster" bombs of WW II destroyed the German Möhne and Eder dams on 16 May 1943.
He designed the R100 airship and the Vickers Wellesley and Wellington bombers. The specially formed RAF 617 Squadron precisely delivered his innovative cylindrical bombs, which were released from low altitude while rotating backwards at high speed, causing them to skip along the surface of the water right up to the base of the dam. He later designed the 5-ton Tallboy and 10-ton Grand Slam earthquake bombs (which were used on many enemy targets in the later years of the war). Postwar, he developed ideas for swing-wing aircraft. *TIS (His courtship of his wife has been recounted by his daughter, Mary Stopes-Roe, from the actual courtship correspondence, in the entertaining, but perhaps overpriced, book Mathematics With Love: The Courtship Correspondence of Barnes Wallis, Inventor of the Bouncing Bomb.)

1891 Hans Reichenbach (September 26, 1891 - April 9, 1953) was a leading philosopher of science, educator and proponent of logical empiricism. Reichenbach is best known for founding the Berlin Circle, and as the author of The Rise of Scientific Philosophy. *Wik

1924 Jean Hoerni, a pioneer of the transistor, is born in Switzerland. A physicist, Hoerni in 1959 invented the planar process, which, combined with Robert Noyce's technique for placing a layer of silicon dioxide on a transistor, led to the creation of the modern integrated circuit. Hoerni's planar process allowed the placement of complex electronic circuits on a single chip. *CHM

1926 Colin Brian Haselgrove (26 September 1926 - 27 May 1964) was an English mathematician who is best known for his disproof of the Pólya conjecture in 1958. The Pólya conjecture stated that 'most' (i.e., more than 50%) of the natural numbers less than any given number have an odd number of prime factors. The conjecture was posited by the Hungarian mathematician George Pólya in 1919. The size of the smallest counter-example is often used to show how a conjecture can be true for many numbers and still be false. *Wik

1927 Brian Griffiths (26 Sept 1927 - 4 June 2008) He was deeply involved in the School Mathematics Project, he served as chairman of the Joint Mathematical Council, and he chaired the steering group for the Low Attainers Mathematics Project from 1983 to 1986. This project became the Raising Achievement in Mathematics Project in 1986, and he chaired it from its foundation to 1989. *SAU

DEATHS

1766 Giulio Carlo Fagnano dei Toschi died. He is important for the identity π = 2i·log((1 − i)/(1 + i)) and for his rectification of the lemniscate. *VFR An Italian mathematician who worked both in complex numbers and on the geometry of triangles. *SAU
The lemniscate is of particular interest because, even if it has little relevance today, it was the catalyst for immeasurably important mathematical development in the 18th and 19th centuries. The figure-8-shaped curve first entered the minds of mathematicians in 1680, when Giovanni Cassini presented his work on the family of curves now appropriately known as the ovals of Cassini. Only 14 years later, while deriving the arc length of the lemniscate, Jacob Bernoulli became the first mathematician in history to define arc length in terms of polar coordinates.
The first major result of work on the lemniscate came in 1753 when, after reading Giulio Carlo di Fagnano's papers on dividing the lemniscate using straightedge and compass, Leonhard Euler proved the addition theorem for lemniscatic arcs. Jacobi called December 23, 1751 "the birthday of elliptic functions", as this was the day that Euler began reviewing the papers of Fagnano, who was being considered for membership in the Berlin Academy. *Raymond Ayoub, The lemniscate and Fagnano's contributions to elliptic integrals

1802 Jurij Vega (23 Mar 1754 - 26 Sept 1802) wrote about artillery, but he is best remembered for his tables of logarithms and trigonometric functions. Vega calculated π to 140 places, a record which stood for over 50 years. This appears in a paper which he published in 1789. In September 1802 Jurij Vega was reported missing. A search was unsuccessful until his body was found in the Danube near Vienna. The official cause of death was an accident, but many suspect that he was murdered. *SAU

1867 James Ferguson (31 Aug 1797 - 26 Sep 1867) Scottish-American astronomer who discovered the first previously unknown asteroid to be detected from North America. He recorded it on 1 Sep 1854 at the U.S. Naval Observatory, where he worked 1848-67. This was the thirty-first of the series and is now known as 31 Euphrosyne, named after one of the Charites in Greek mythology. It is one of the largest of the main-belt asteroids, between Mars and Jupiter. He was involved in some of the earliest work in micrometry, done at the old U.S. Naval Observatory at Foggy Bottom in the midst of the Civil War using a 9.6-inch refractor, and he also contributed to double star astronomy. Earlier in his life he was a civil engineer, a member of the Northwest Boundary Survey, and an assistant in the U.S. Coast Survey. *TIS

1868 August Ferdinand Möbius died. He discovered his famous strip in September 1858. Johann Benedict Listing discovered the same surface two months earlier. *VFR (It is somewhat amazing that we name it after Möbius when Listing discovered it first and published, and, it seems, Möbius did not. However, Möbius does seem to have thought about the four color theorem before Guthrie, or anyone else to my knowledge.)

1877 Hermann Günther Grassmann (15 Apr 1809 - 26 Sep 1877) German mathematician chiefly remembered for his development of a general calculus of vectors in Die lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (1844; "The Theory of Linear Extension, a New Branch of Mathematics"). *TIS

1910 Thorvald Nicolai Thiele (24 Dec 1838 - 26 Sept 1910) He is remembered for the interpolation formula named after him, which is used to obtain a rational function that agrees with a given function at any number of given points. He published it in 1909 in a book which made a major contribution to numerical analysis. He introduced cumulants (under the name of "half-invariants") in 1889, 1897 and 1899, about 30 years before their rediscovery and exploitation by R. A. Fisher. *SAU

1976 Paul (Pál) Turán (18 August 1910 - 26 September 1976) was a Hungarian mathematician who worked primarily in number theory. He had a long collaboration with fellow Hungarian mathematician Paul Erdős, lasting 46 years and resulting in 28 joint papers. *SAU

1978 Karl Manne Georg Siegbahn (3 Dec 1886 - 26 Sep 1978) Swedish physicist who was awarded the Nobel Prize for Physics in 1924 for his discoveries and investigations in X-ray spectroscopy.
In 1914 he began his studies in the new science of X-ray spectroscopy, which had already established from X-ray spectra that there were two distinct 'shells' of electrons within atoms, each giving rise to groups of spectral lines, labeled 'K' and 'L'. In 1916, Siegbahn discovered a third, or 'M', series. (More were to be found later in heavier elements.) Refining his X-ray equipment and technique, he was able to significantly increase the accuracy of his determinations of spectral lines. This allowed him to make corrections to Bragg's equation for X-ray diffraction to allow for the finer details of crystal diffraction. *TIS

1990 Lothar Collatz (July 6, 1910 - September 26, 1990) was a German mathematician. In 1937 he posed the famous Collatz conjecture, which remains unsolved. The Collatz-Wielandt formula for positive matrices, important in the Perron-Frobenius theorem, is named after him. *Wik
The Collatz conjecture is an iteration problem that deals with the following algorithm: if a number n is odd, then f(n) = 3n + 1; if n is even, then f(n) = n/2. Each answer then becomes the new value to input into the function. The problem, or should I say problems, revolve around what happens to the sequence of outcomes when we keep putting the answer back into the function. For example, if we begin with 15 we get the following sequence, also called the orbit of the number: 15, 46, 23, 70, 35, 106, 53, 160, 80, 40, 20, 10, 5, 16, 8, 4, 2, 1... One of the unproven conjectures is that for any number n, the sequence will always end in the number 1. This has been shown to be true for all numbers up to just beyond 10^16. A second interesting question is how long it takes for a number to return to the value of 1. For the example above, the number 15 took 17 steps to get back to the unit value. Questions such as which three-digit (or, more generally, n-digit) number has the longest orbit make nice explorations. There are many variations of the problem, but if you are interested in a good introduction, check this link from Simon Fraser University. Collatz's problem is often also called the Syracuse algorithm, Hasse's problem, Thwaites' problem, and Ulam's problem, after people who have worked and written on it. It is unclear where the problem originated, as it seems to have had a long history of being passed by word of mouth before it was ever written down. It is often attributed to Lothar Collatz of the University of Hamburg, who wrote about the problem as early as 1932. The name "Syracuse problem" was applied after H. Hasse, an associate of Collatz, visited and discussed the problem at Syracuse University in the 1950s. During the 1960s Stan Ulam circulated the problem at Los Alamos laboratory. One famous quote about the problem is from Paul Erdős, who stated, "mathematics is not yet ready for such problems". *Personal notes
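The orbit computation is a one-liner to experiment with; here is a small sketch (the function name is mine):

```python
def collatz_orbit(n):
    """Return the orbit of n under the Collatz map, stopping at 1."""
    orbit = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        orbit.append(n)
    return orbit

print(collatz_orbit(15))                # 15, 46, 23, 70, ... 4, 2, 1
print(len(collatz_orbit(15)) - 1)       # 17 steps, as noted above
# The three-digit number with the longest orbit:
print(max(range(100, 1000), key=lambda n: len(collatz_orbit(n))))  # -> 871
```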
Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell

Thursday, 25 September 2014

On This Day in Math - September 25

I am undecided whether or not the Milky Way is but one of countless others all of which form an entire system. Perhaps the light from these infinitely distant galaxies is so faint that we cannot see them.
~Johann H. Lambert

This is the 268th day of the year; 268 is the smallest number whose product of digits is 6 times the sum of its digits. (A good classroom exploration might be to find numbers in which the product of the digits is n times the sum of the digits for various values of n; more generally, for what percentage of numbers is the sum a factor of the product at all?) 268 is also the sum of two consecutive primes: 268 = 131 + 137.

1493 Columbus set sail on his second voyage to America.

1513 Balboa discovered the Pacific.

1608 The oldest written mention of the telescope: in a letter of introduction from the Council of Zeeland to Zeeland's delegates to the States General (the Netherlands parliament) in Den Haag, asking them to organise an audience with Prince Maurice of Nassau for a spectacle maker from Middelburg who had invented a "…certain device by means of which all things at a very great distance can be seen as if they were nearby, by looking through glasses…". On an unknown day between the 25th and 29th of September, Hans Lipperhey (1570-1619), the spectacle maker from Middelburg (who was actually a German from Wesel), demonstrated his new invention at the court of Prince Maurice, where a peace conference in the Dutch-Spanish war was taking place along with the first visit to Europe of the Ambassador of Siam. Lipperhey's demonstration is described in detail in a French flyer describing the Ambassador's visit, and the news of the new invention thus spread rapidly throughout Europe. *Renaissance Mathematicus

1654 Fermat writes to Pascal defending his combinatorial method, which Pascal had previously regarded as incorrect. *VFR

1820 Arago announces electromagnetism: François Arago announced that a copper wire between the poles of a voltaic cell could laterally attract iron filings to itself (Ann. de Chim. et de Physique, xv, p. 93). His discovery came in the same year that Oersted discovered that an electric current flowing in a wire would deflect a neighbouring compass needle. In the same publication Arago described how he had successfully caused permanent magnetism in steel needles laid at right angles to the copper wire. Arago and André-Marie Ampère discussed and experimented with forming the copper wire into a helix to intensify the magnetizing action. However, it was not until 1825 that the electromagnet in its familiar form was invented by William Sturgeon. *TIS

1944 Denmark issued a stamp commemorating the 300th anniversary of the birth of Ole Roemer. *VFR

1989 IBM announces plans to develop a new design for transmitting information within a computer, called Micro Channel Architecture, which it said could transfer data at 160 million bytes per second, eight times faster than the fastest speed at the time. Although IBM was hoping to make its system the industry standard, manufacturers of IBM-compatible computers largely chose other methods. *CHM

BIRTHS

1644 Olaus Roemer, Danish astronomer, born. He was the first to measure the speed of light. *VFR (25 Sep 1644 - 23 Sep 1710) Astronomer who demonstrated conclusively that light travels at a finite speed. He measured the speed by precisely measuring the length of time between eclipses of one of Jupiter's moons. This observation produces different results depending on the position of the earth in its orbit around the sun.
He reasoned that this meant light took longer to travel the greater distance when the earth was traveling in its orbit away from Jupiter. *TIS
"Ole Rømer took part in several other achievements considering measurement. He developed a temperature scale that is now famous as the Fahrenheit scale. Fahrenheit improved and distributed his ideas after visiting Rømer. In his last years, he was even given the position as second Chief of the Copenhagen Police and invented the first street oil lamps in the city of Copenhagen. Further achievements and inventions may be added to Rømer's biography, like his innovative water supply system and his urban planning concept." *Yovista.blogspot

1819 George Salmon (25 September 1819 - 22 January 1904) made many discoveries about ruled surfaces and other surfaces. *SAU His publications in algebraic geometry were widely read in the second half of the 19th century. A Treatise on Conic Sections remained in print for over fifty years, going through five updated editions in English, and was translated into German, French and Italian. *Wik

1846 Wladimir (Peter) Köppen (25 Sep 1846 - 22 Jun 1940) German meteorologist and climatologist best known for his delineation and mapping of the climatic regions of the world. He played a major role in the advancement of climatology and meteorology for more than 70 years. The climate classification system he developed remains popular because it uses easily obtained data (monthly mean temperatures and precipitation) and straightforward, objective criteria. He recognized five principal climate groups: (A) humid tropical, winterless climates; (B) dry, where evaporation constantly exceeds precipitation; (C) humid mid-latitude with mild winters; (D) humid mid-latitude with severe winters; and (E) polar, summerless climates. *TIS

1888 Stefan Mazurkiewicz (25 Sept 1888 - 19 June 1945) His main work was in topology and the theory of probability. His notion of the dimension of a compact set preceded that of Menger and Urysohn by seven years. Mazurkiewicz applied topological methods to the theory of functions, obtaining powerful results. His theory gave particularly strong results when applied to the Euclidean plane, giving deep knowledge of its topological structure. *SAU

1893 Carl Harald Cramér (25 September 1893 - 5 October 1985) was a Swedish mathematical statistician and one of the prominent figures in statistical theory. He was once described by John Kingman as "one of the giants of statistical theory". In number theory, Cramér's conjecture, formulated in 1936, states that

p_{n+1} − p_n = O((log p_n)²),

where p_n denotes the nth prime number, O is big-O notation, and log is the natural logarithm. Intuitively, this means the gaps between consecutive primes are always small, and it quantifies asymptotically just how small they can be. This conjecture has neither been proven nor disproven.
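The conjecture is easy to probe empirically. A small sketch with a sieve (the limit and the p > 100 cutoff are my arbitrary choices; the bound is asymptotic, and the smallest primes give misleading ratios):

```python
import math

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(1_000_000)
# Cramér's conjecture says (p_{n+1} - p_n) / (log p_n)^2 stays bounded.
worst = max((q - p) / math.log(p) ** 2 for p, q in zip(ps, ps[1:]) if p > 100)
print(worst)   # stays well below 1 in this range (about 0.68)
```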
DEATHS

1777 Johann Heinrich Lambert (26 Aug 1728 - 25 Sep 1777) Swiss-German mathematician, astronomer, physicist, and philosopher who provided the first rigorous proof that π (the ratio of a circle's circumference to its diameter) is irrational, meaning it cannot be expressed as the quotient of two integers. He also devised a method of measuring light intensity. *TIS In 1766 Lambert wrote Theorie der Parallellinien, a study of the parallel postulate. By assuming that the parallel postulate was false, he managed to deduce a large number of non-Euclidean results. He noticed that in this new geometry the sum of the angles of a triangle increases as its area decreases. *SAU

1852 Christoph Gudermann (March 25, 1798 - September 25, 1852) was born in Vienenburg. He was the son of a school teacher and became a teacher himself after studying at the University of Göttingen, where his advisor was Karl Friedrich Gauss. He began his teaching career in Kleve and then transferred to a school in Münster. He is most known today for being the teacher of Karl Weierstrass, who took Gudermann's course in elliptic functions, 1839-1840, the first such course to be taught in any institute. Weierstrass was greatly influenced by this course, which marked the direction of his own research. Gudermann originated the concept of uniform convergence, in an 1838 paper on elliptic functions, but only observed it informally, neither formalizing it nor using it in his proofs. Instead, Weierstrass elaborated and applied uniform convergence. His researches into spherical geometry and special functions focused on particular cases, so that he did not receive the credit given to those who published more general works. The Gudermannian function, or hyperbolic amplitude, is named after him. Gudermann died in Münster. *Wik

1877 Urbain-Jean-Joseph Le Verrier (11 May 1811 - 25 Sep 1877) French astronomer who predicted the position of a previously unknown planet, Neptune, by the disturbance it caused in the orbit of Uranus. In 1846, the German astronomer Johann G. Galle discovered Neptune after only an hour of searching, within one degree of the position that had been computed by Le Verrier, who had asked him to look for it there. In this way Le Verrier gave the most striking confirmation of the theory of gravitation propounded by Newton. Le Verrier also initiated the meteorological service for France, especially the weather warnings for seaports. *TIS (He died the day after the anniversary of the sighting of his most famous prediction. Between that moment of fame in 1846 and his death, he mistakenly attributed the variability of Mercury's orbit to another small planet he named "Vulcan"; it took the theory of general relativity to explain the variations. He was buried in the Montparnasse cemetery in Paris. A large globe sits atop his grave. Arago described him as "the man who discovered a planet with the point of his pen.")

1933 Paul Ehrenfest (January 18, 1880 - September 25, 1933) was an Austrian and Dutch physicist and mathematician who made major contributions to the field of statistical mechanics and its relations with quantum mechanics, including the theory of phase transition and the Ehrenfest theorem. *Wik

Credits : *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *RMAT= The Renaissance Mathematicus, Thony Christie *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
Spectral Lines Associated with Dark Matter

In recent news from physics and cosmology, there has been a flurry of reports concerning a signature spectral line which can be associated with dark matter in distant galaxies. Given the preponderance of hydrogen in normal matter, there has been a suspicion that dark matter is a novel form of hydrogen. "An unidentified line in X-ray spectra of the Andromeda galaxy and Perseus galaxy cluster" by Boyarsky et al. is an example. Although the line is weak, it tends to become stronger towards the centers of the galaxies and is absent in the spectrum of a deep "blank sky" dataset. "Detection of an Unidentified Emission Line in the Stacked X-Ray Spectrum of Galaxy Clusters" by Bulbul et al. reports the unidentified emission line at 3.6 keV in 73 different galaxy clusters. The authors conclude that "As intriguing as the dark matter interpretation of our new line is, we should emphasize the significant systematic uncertainties in detecting the line energy in addition to the quoted statistical errors." Statisticians seem to be more comfortable with the evidence than physicists.

We at Chava are interested in the dark-matter-to-hydrogen connection from the perspective of alternative energy. There is a possibility that dark matter hydrogen could be ubiquitous, and even manufactured or "harvested"; the solar wind could be a resource for dark matter. In another paper, "Questioning a 3.5 keV dark matter emission line", Riemer-Sørensen analyzes data from the Milky Way and finds some evidence of this line, but does not ascribe to it the same high confidence level as a dark matter signature that others do. The issue is far from decided, but it is not too soon to consider alternative-energy implications for Earth-bound uses and experiments with engineered dark matter, which are based on the possibility that hydrogen isomers are formed in a predicted state, known as the DDL, or Deep Dirac Level, which can be identified as warm dark matter with the characteristic emission.

The actual mystery emission line is centered at ~3.5-3.6 keV in all 73 galaxy clusters which were analyzed. Previously there had been predictions of neutrinos at 3.5 and 7 keV based on roughly the same equations, which derive from the Dirac equation. This spectrum is otherwise unpopulated by known elemental emission lines. X-rays in this range are fairly "soft", and a blind spot exists in experiments where they could appear, since there are no commercially available meters covering 10 keV all the way down to the EUV. Thus, detection in metal-hydride experiments has not been possible to date without the use of film exposure; even NASA can only accomplish this feat in space and at huge expense. Almost any detector window will block this X-ray, but if more evidence accumulates, solutions to the detector problem will be found. Very thin Mylar may work, or exposed circuit lines and semiconductors.

The enticing thing about this X-ray line, for those pursuing the phenomenon of anomalous heat from metal hydrides (the field which was once called "cold fusion" and later LENR), is that it offers an alternative explanation for thermal gain. No matter what name has been given to the phenomenon in the past, it cannot involve common types of nuclear fusion, since no gamma radiation is present. But the predicted deeply bound state of hydrogen, derived from the Dirac equation, fits the evidence nicely.
This is an emission range which could have gone undetected in the past 25 years of LENR research, and yet it would correspond to a few thousand times more energy than a chemical reaction. Notably, this line seems to be near a Rydberg multiple of the kind featured in the CQM theory of Randell Mills, and possibly already associated with deep-level ground-state orbital redundancy of hydrogen in the work of several others, including Naudts, Va'vra and Meulenberg. There can be 137 steps in the progression from the ground-state hydrogen orbital to a DDL, each a multiple of 27.2 eV, the Hartree energy. For instance, 130 × 27.2 eV = 3.54 keV, which would indicate that the deeper states below 130 steps are not accessible. Randell Mills' own calculation provides a value which is too low for what has been reported. There are other ways to compute this value as well, which fall within a range of 3-7 keV.

If hydrogen as a DDL isomer can be identified as dark matter, or a subset of dark matter, it is not completely dark in a cosmological environment, and will emit its signature on either decay or other stimulation, such as the passage of a gravity wave. The payoff of dark matter research, and its availability as an alternative energy source, would be huge should this emission line be seen in experiments. We could simultaneously go a long way towards explaining what dark matter really consists of (basically, hydrogen, but as a DDL isomer) and also explain the proximate cause of some forms of LENR, which are producing heat without gamma radiation. This understanding could also permit better control over a notoriously unpredictable system.

Further Reading:

Randell Mills Theory

Jan Naudts, "On the hydrino state of the relativistic hydrogen atom", Aug 2005, predicts the DDL state at very close to the observed spectral line, which does not really support Mills' theory. Naudts summarizes: "This paper starts with the Klein-Gordon equation, with minimal coupling to the non-quantized electromagnetic field. In case of a Coulomb potential this equation is the obvious relativistic generalization of the Schrödinger equation of the non-relativistic hydrogen atom, if spin of the electron is neglected. It has two sets of eigenfunctions, one of which introduces small relativistic corrections to the non-relativistic solutions. The other set of solutions contains one eigenstate which describes a highly relativistic particle with a binding energy which is a large fraction of the rest mass energy. This is the hydrino [single DDL] state."

For a contrary view, see Rice and Kim, and the rebuttal of Rice and Kim by Va'vra.

The DDL/dark-matter/LENR connection is an interesting possibility that has generated a huge amount of interest, since it fills a large gap elegantly... which, of course, does not make it right.

This site is an information-sharing platform on new energy research projects provided by Chava Energy LLC and its subsidiary Chava Wind LLC. For further information please contact Hagen Ruff or Mark Snoswell.
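The Hartree-multiple arithmetic above is easy to reproduce; a quick sketch (the constants are standard; the 3.5-3.6 keV window is the one quoted above):

```python
# Which integer multiples of the Hartree energy fall inside the reported
# 3.5-3.6 keV window? (A numerical check only, not a physical model.)
HARTREE_EV = 27.211  # Hartree energy in eV (approximate CODATA value)

for k in range(1, 138):          # up to the 137 steps cited above
    e_kev = k * HARTREE_EV / 1000.0
    if 3.5 <= e_kev <= 3.6:
        print(k, round(e_kev, 3), "keV")
# -> steps 129 through 132, i.e. 3.510 to 3.592 keV; 130 gives 3.537 keV
```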
Quantum Matter Animated!

by Jorge Cham - "I don't remember anything I learned in college"

Watch the first installment of this series:

Produced in partnership with the Institute for Quantum Information and Matter (http://iqim.caltech.edu) at Caltech, with funding provided by the National Science Foundation.
Transcription: Noel Dilworth
Thanks to: Spiros Michalakis, John Preskill and Bert Painter

65 thoughts on "Quantum Matter Animated!"

1. Is it still not possible that the laser gave some part of its energy to the mirror? Is it possible to detect such a small instantaneous rise in temperature (which would be dispersed to the surroundings within a fraction of a second, as the mirror is maintained at 0 K)? Because if it is not possible to measure such small changes in temperature in such a small time, then how can we be sure that the red-shifted laser is NOT due to the laser giving off its energy? And if this is the reason, then this still does not prove that the mirror was vibrating: it started vibrating only after being hit by the laser, but due to temperature dispersal the mirror was instantly damped and brought back to zero vibrations, the ground state.

• I am not a physicist, but the intuitive answer to your question is that if the laser were imparting energy to the mirror, and that was where the red shift was coming from, then there would still be a corresponding blue shift.

• Right. When the oscillator is in its quantum ground state, it can absorb energy but cannot emit energy, because it is already in its lowest possible energy state. Reflected light can be shifted toward the red (have lower energy than the incident light, because the oscillator absorbed some of the incident energy), but cannot be shifted toward the blue (have higher energy than the incident light). That's what the experiment found.

• Just a follow-on to John's response… The inability of the mechanical resonator to give up energy when it is in its lowest energy state seems like an obvious statement (by definition of "lowest energy state"), so why is the experiment interesting then? All it did was confirm that this energy emission goes away as the object gets colder and colder and approaches its ground (lowest-energy) state. It is really the fact that the mechanical resonator can absorb energy when it is in the ground state that is interesting. The classical description of the motion of a mechanical object has no way of allowing for this asymmetry in the emission and absorption of energy with the environment; the processes must be symmetric, and zero when the object is not moving at a temperature of 0 K. Think of it from the standpoint that the mechanical object isn't moving when in its classical ground state, and thus it is not doing work on its environment and the environment is not doing work on it. That is what makes the quantum description of the ground state of motion interesting: it allows for the asymmetry in the process of emission and absorption of energy by the mechanical resonator to (or from) the environment. I like to make the analogy to the spontaneous emission of light from an atom, for which there is no corresponding spontaneous absorption process of light.
A well-defined "mode" of light (think of it as a particular direction and polarization) can be described by a similar set of quantum equations to those describing the mechanical resonator, and thus also has a ground state with intrinsic fluctuations. These "zero-point fluctuations" or "vacuum fluctuations" can be thought of as triggers for atomic spontaneous decay and emission of light by the atom, but they do not cause the reverse process of spontaneous excitation of the atom. [Aside: this used to really mystify me when I first learned about spontaneous (and the related stimulated) emission of atoms. The excellent little book by Allen and Eberly does a nice job of de-mystifying the vacuum fluctuations.] A nice description of the above argument is also given in Aashish Clerk's accompanying Physics Viewpoint article:

• Hi Oskar, John, and Paras:

0. For some odd reason, while fast browsing, I first read Oskar's reply, and then John's, and only after both, Paras' original question. (Oskar's was the longest and innermost-indented reply, and so it sort of first caught the eye in the initial rapid browsing.) Even before going through your respective replies, I had happened to think of what is, in many ways, the same point as Paras has tried to make above. … OK. Let me put the query the way I thought of it.

1. Here is a simple model of the above experimental arrangement, simplified very highly, just for the sake of argument. The system here consists of the mechanical oscillator and the light field. The environment consists of the light source, the optical measurement devices, the cooling devices, and then, you know, the lab, the earth, all galaxies in the universe, the dark matter, the dark energy … you get the idea. The environment also includes the mechanical support of the oscillator, which in turn is connected to the lab, the earth, etc. *Only* the system is cooled to 0 K. [Absolutely! 😉 Absolutely, only the system is cooled "to" "0" K!!] The measurement consists of only one effect produced by the light-to-mechanical-oscillator interaction: the changes effected in the reflected light. This effect, it is experimentally found, is indeed in accordance with the QM predictions. (BTW, in fact, the experiment is much more wonderful than that: it should be valuable in studying classical-to-QM transitions as well. But that's just an aside as far as this discussion goes.)

2. Now my question is this: what state of |ignorance> + |stupidity> + |insanity> + |sinfulness> [+ etc…] do I enter into if I forward the following argument?

At "0" K, the system gets into such a quantum mode that, as far as the *reflection* of the light is concerned, if "I" is the amount of the incident light energy (say, per unit time), then only some part of it (i.e. the red-shifted part of it) is found to be reflected. However, there still remains an "I − R" amount of energy that the system gives back to the environment via some *experimentally* unmeasured means. If it doesn't, the first law of thermodynamics would be violated.

We may wonder: what form could such an energy leakage take? Given the bare simplicity of the above abstract description of what the system and environment here respectively consist of, the answer has to be: some mechanical oscillation modes of the mechanical oscillator that we do not take account of (let alone measure) in this experiment. The leakage would affect the mechanical support of the oscillator, which, please note, lies *outside* of the system.
[The oscillation modes inside the system may be taken to be quantum-mechanical ones; outside of it, classical ones. But I won't enter into debates about where the boundary between the quantum and the classical is to be drawn, etc. As far as this experiment, and this argument, goes, we know that *inside* the system things have to be treated quantum mechanically, and outside it, classically; and that's enough as far as I am concerned!] Since the system here is not a passive device but is *actively* being kept cooled down "to" "0" K, it means: it's the fridge sitting in the environment, not to mention the earth and the rest of the universe, which absorbs these leaked-out vibrations of the mechanical oscillator. The missing energy corresponds to *this* leakage.

3. Of course, I recognize that my point is subtly different from Paras'. His write-up seems to suggest that there is an otherwise classically rigid-body oscillator sitting standstill, which begins to vibrate only after being hit by the laser. In contrast, I don't have this description in mind. He also seems to think rather in terms of a *transient* damping-out of the mechanical oscillations. Though I do not rule out transients in the system, that wouldn't be the simplest model one might suggest here: I would rather think of the situation as if there were a more or less "steady-state" leakage of the missing energy into the environment. Yet, Paras does seem to appreciate the role of the environment: the unmeasured side-effects, so to speak, that the system produces on the environment.

4. Anyway, I would appreciate it if you could kindly let me know into what final state I should collapse: |ignorance> or |stupidity> or … And why 🙂 [BTW, by formal training, I am just an engineer. And sorry if my reply is too verbose and has too many diversions…] Thanks in advance.

• About your parts (2) and (3), I think it is easier to think of it this way: at the low temperatures the system is subjected to (I really don't think it even makes sense to say that "only the system is cooled down to 0 K"; it is enough to say that the system is cooled down to low temperatures), a lot of the system's constituent particles are in their ground states. What is happening in this experiment is that they are observing absorption and excitation of constituent particles up from their ground states, without the corresponding "classical" de-exciting reflection wave that you normally get. This is predicted by quantum physics. The special thing about this experiment, though, is that they are also saying that the entire system itself, a macroscopic body, has a quantum wavefunction just like its microscopic parts. That is the part that is interesting and worth reporting upon. Because, if a macroscopic body has a quantum wavefunction, then it can also do all the rest of the quantum weirdness, and that applies to us humans, and the Earth, being able to, say, perform quantum tunnelling. Once you see the experiment in this way, it is then obvious that the loss of energy that you perceive is merely the spontaneous emission of light by the excited particles, by which they drop back into the ground state of the entire system. This is important, because spontaneous emission is basically undetectable in our case, which is what the experiment observed. The point is that, classically, you are supposed to observe substantial energetic reflection (along with the spontaneous emission that you cannot remove), and you do not observe that in this experiment.
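To put rough numbers on the asymmetry John and Oskar describe: for a resonator mode of frequency f in contact with a bath at temperature T, the thermal occupancy is n = 1/(exp(hf/kT) − 1), and the rates for the resonator to give up a quantum (blue-shifted scattered light) and to absorb one (red-shifted) scale as n and n + 1 respectively. A quick sketch (the few-GHz mode frequency is an assumed, merely typical figure, not any particular experiment's value):

```python
import math

# Thermal occupancy of a mechanical mode and the resulting
# emission/absorption (blue/red sideband) asymmetry, which
# vanishes as n -> 0 near the ground state.
H = 6.626e-34   # Planck constant, J s
KB = 1.381e-23  # Boltzmann constant, J/K

f = 4e9  # assumed few-GHz mechanical mode frequency
for T in (1.0, 0.1, 0.02):
    n = 1.0 / math.expm1(H * f / (KB * T))
    print(f"T = {T:>5} K: occupancy n = {n:.3g}, blue/red ratio = {n / (n + 1):.3g}")
```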
2. Could you add a link to the paper about the experiment, for those readers who want more details about it?

3. Whenever someone asks me for a book to explain quantum mechanics to laymen, I always point them to this: It's an illustrated book about the history of quantum mechanics created by Japanese translation students studying English. They chose the topic because they needed to be able to accurately translate relatively technical material. It's wonderful for answering the questions you raise in the post above.

4. Great video describing a really interesting experiment. However, it is far from reaching the important lessons from quantum mechanics that have shaped the way we see the universe. Forget quantum computing. I am not saying that quantum computing is not sexy or something, but it is not where the paradigm shift is.

One of the greatest things that a physics undergraduate degree forces you through is learning about condensed matter physics. You might think that, in contemporary physics education, they would certainly teach you both quantum mechanics and general relativity; after all, they are what we call the new world view that revolutionised how we as a species see ourselves. The truth, however, is that, if I had not forced them to teach me, they would have ignored general relativity and just taught me quantum mechanics. Lots of it. Without motivation. It is only at the end of the physics degree that you get to see why it is arranged in the way it is.

Special relativity, the one that Einstein published in 1905, is a really easy thing. Yes, it is bizarre, but you can easily teach it, and later on you can tell students to apply what they have learnt. That it is reducible to small equations that are easy to memorise is another plus point. General relativity, on the other hand, is a pain to teach: everybody, mathematician or physicist, is confused by the initial arguments, the mathematical notation and all that jazz, until the module is complete. And even after that, some people just never get it (although, luckily, it is simple enough that a large chunk of people actually understand it very fully, contrary to Eddington's bad joke). The deal breaker, however, is that the ideas from general relativity, although a nice help to the other parts of physics, are very far from essential; i.e., people can make do without any knowledge of them and still contribute to the rest of physics in a proper way. That is not the same with quantum mechanics.

The standard way they teach quantum mechanics these days is to throw the mathematics at you right at the start. Just write down your energy equation (which you can remember from high school), do your canonical quantisation (which is nothing other than replacing symbols you know about with derivative signs; a monkey can do that), tack on something magical that we call the wavefunction, and voilà, quantum goodness! Since there is nothing to actually understand about it, I watched in amusement as everybody around me struggled to understand something out of nothing, congratulating myself for actually knowing the meaninglessness of it all. Boy, what did I know?

The next module, aptly named "Atomic and Molecular Physics", looked like nothing but applications of the mathematics learnt.
The next module, aptly named “Atomic and Molecular Physics”, looked like nothing but applications of the mathematics learnt. It was HORRIBLE to go through, especially since it looked like vocational training — approximation and other calculational techniques that are hardly useful outside higher and higher corrections to the properties of materials that classical physics could have found out about (except quantisation, of course). It was important to have learnt it (not least because it was the first place in which Quantum Entanglement was taught), but it felt like we were just learning tricks instead of ideas. Statistical Thermodynamics was better. Building upon Thermal Physics in first year, there were a few Quantum effects shown in action, especially the Quantum Degeneracy pressure that keeps stars the size they are. Then BOOM! Condensed Matter Physics (I learnt it under the older name, Solid State Physics). I had to completely rewrite what I thought I had known about Quantum Theory, for it became obvious I knew nothing. I am sure you guys have heard of the adage: “When stuff is moving fast, is large, or is heavy, General Relativity cannot be neglected. When things are small, Quantum corrections cannot be neglected.” It is still true, but there is a sleight of hand here — we have yet to define what it means to be “large” or “small”. In particular, whenever you have a lot of material squeezed into a small space, i.e. at high density, it is “small”. Thus, something can be both large and small at the same time, requiring both General Relativity and Quantum Theory to describe. A black hole is one such object. The name “Condensed Matter” is a really good one. Any liquid or solid, really, is condensed, so condensed that it is no longer a classical system — the quantum effects DOMINATE. Without incorporating Quantum Theory right into the heart of it all, nothing you calculate even makes sense. And since our first approximations here beat the best classical calculations left-right-centre, there was also no reason to teach the classical approximation techniques either. Specifically, notice how, in high school, people teach you that heat and sound are just atoms moving about in different ways? Classical theories can talk about heat propagation and sound propagation and motion. But they are three different islands that don’t even make sense together. So different that even their mathematical tools are different. But in Quantum Theory, the same mathematics describes all three as one united whole, at the zeroth approximation, and even gives you dispersion, which is something classical theories cannot explain without complicated methods. After being floored by how it actually is done, the icing on the cake was Transistors. The theory was originally made in order to explain how metals behave, and we talk about a free electron gas to explain how metals conduct so well. So, it came as a complete shock that with any improvement, notice, ANY SIMPLE improvement, to the free electron picture, be it the Nearly Free Electron model or the Tight Binding model, energy bands appear. In practical terms, the theory that sought to explain metals now explains insulators and, even more, predicts the existence of a previously unheard-of class of materials, known as semiconductors. Indeed, it does even better. It predicts their existence, how to make them, and how they would be useful. It is the first time that Physics THEORY got ahead of the experimenters on any topic.
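To see how little it takes for bands to appear, here is a minimal sketch (mine, not the commenter's) of a 1D tight-binding chain with two alternating hopping amplitudes; the moment the two amplitudes differ, a gap opens between two bands.

```python
# 1D two-site-unit-cell tight-binding chain (SSH-type): alternating hopping
# amplitudes t1, t2 split the spectrum into two bands with a gap of 2|t1 - t2|.
import numpy as np

t1, t2 = 1.0, 0.7   # hopping amplitudes in illustrative units

for k in np.linspace(-np.pi, np.pi, 7):   # crystal momentum across the zone
    # Bloch Hamiltonian of the two-site unit cell
    h = np.array([[0, t1 + t2 * np.exp(-1j * k)],
                  [t1 + t2 * np.exp(1j * k), 0]])
    lower, upper = np.linalg.eigvalsh(h)
    print(f"k = {k:+.2f}: E- = {lower:+.3f}, E+ = {upper:+.3f}")

# With the lower band filled, the material is an insulator; a small gap plus a
# little doping makes it a semiconductor. Set t1 = t2 and the gap closes: the
# metallic, free-electron-like limit.
```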
So, yeah, while you are enjoying your computers reading this piece, appreciate the sheer ingenuity and wonder that is brought to you by the Quantum revolution. Please alert Jorge to this. He can do wonders with information. Sound propagation and bulk motion can be treated the same way, because they are both forms of characteristic wave propagation and show up as eigenvalues of the same equation set. Heat transfer, viscous momentum transfer, and diffusive mass transfer all work basically the same way, because they are closely related effects of the same basic process. All of this can be derived in a unified framework using the principles of classical kinetic theory, because all of it is inherent in the Boltzmann equation. It’s true that you need quantized internal energy states to accurately predict something as simple as the temperature dependence of specific heat in a gas. But it seems to me that you are somewhat exaggerating the shortcomings of classical physics. I am really doubtful of that. The reason is that the mathematical apparatus is just not the same. For the propagation of heat, you have the heat equation in classical physics, with the propagation constant kappa. For sound, the Harmonic approximation gives rise to a fixed speed of sound, which you later improve upon by adding anharmonic terms so that the speed of sound becomes a variable. Those two constants are not the same. Granted, they do not even have the same dimensions, but the fact is that you have to treat them rather differently. The reason for this discrepancy is that sound propagation exhibits a higher frequency dependency, so it is easier to look at one frequency at a time. Heat, on the other hand, is usually averaged over in the context of classical heat propagation. This makes it really complicated, as you have to average over both space and time and weight the states according to the probabilities of being in this or that state. Note that this last thing is itself temperature dependent, so classical physics is crazy. Nothing stops a person from combining the heat and sound contributions in classical physics, but they are like Frankenstein combinations — oh, this contribution is for heat, and that for sound, and this for their interaction. That is very different from truly unified descriptions in Quantum theory, where it is one term, and one term only, that we are looking at. Because of that, I do not think I am exaggerating the shortcomings of classical physics. It simply is not a unified framework, although it is frequently possible to push approximations in classical physics to really high orders of accuracy. That, I can give, but not unification. And even then, one should notice the tremendous difference in the mathematical methods involved. Yes, both approaches heavily depend on Fourier analysis, but that is about where their similarities end. Instead, a knowledge of the approximation techniques in classical physics is only useful for the continuum free-space approximation of the transport of various quantum objects, whereas proper quantum approximation techniques are frequently simpler than their classical counterparts. Finally, bulk motion is very different from either sound or heat in any case, except for the fact that they are all of zero frequency (actually, this is how the normal mode mathematical technique announces its own failure, and there are ways to compensate formally). Luckily, this is seldom a problem in practice — after all, bulk motion would, somewhat, be better treated with relativistic methods.
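The "one term gives you both" claim can be made concrete with the simplest lattice model there is. A minimal sketch (mine, not the commenter's), with illustrative parameter values: the 1D harmonic chain's dispersion relation yields the sound speed as its long-wavelength slope, while the same mode spectrum, Bose-weighted, is what enters the heat capacity.

```python
# Monatomic 1D harmonic chain: w(k) = 2*sqrt(K/m)*|sin(k a / 2)|.
# The k -> 0 slope is the sound speed; the bending near the zone edge is
# dispersion, obtained from the same single calculation.
import numpy as np

K, m, a = 10.0, 1.0, 1.0   # spring constant, mass, lattice spacing (assumed)

def omega(k):
    """Phonon dispersion of the chain."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

for k in np.linspace(1e-3, np.pi / a, 5):
    print(f"k = {k:5.3f}: omega = {omega(k):6.3f}, omega/k = {omega(k) / k:6.3f}")

# omega/k starts out constant at a*sqrt(K/m) (the sound speed) and falls off
# toward the zone edge: dispersion, with no extra machinery bolted on.
```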
• I suspect we’re talking past one another a bit here. I’m a fluid dynamicist. I’ve studied some advanced solid mechanics and continuum mechanics, but mostly I’m a fluid dynamicist. When you say stuff like “bulk motion is very different from sound”, I think of the underlying physical principles, because in the derived practical equations I use this is not true. But when you say stuff like “the heat equation in classical physics, with the propagation constant kappa”, I think of the phrase “toy equation”. Even in the engineering form of the heat equation, or the Navier-Stokes equations for a linear isotropic fluid, kappa is a coefficient, not a constant (though turbulence modellers generally ignore its thermodynamic derivatives). And it doesn’t show up at all in the Boltzmann equation, unless you do the math and derive it. Regarding “unified framework”, I expressed that poorly. Sure, in the engineering equations, first-order fluxes like acoustic propagation and bulk motion are handled differently than second-order fluxes like heat transfer and viscous stress. This is because their behaviours are different, so the simplest reasonably accurate mathematical descriptions of them will unavoidably be different. But it should never be forgotten that they can both be derived from the same statistical mechanical representation. It strikes me that what the Boltzmann equation is to fluid mechanics is somewhat analogous to what Schrödinger’s equation is to quantum condensed matter physics (though it isn’t quite as fundamental). The general form isn’t very useful by itself, but specializations and approximations can produce good enough results to translate into engineering equations. The key to the Boltzmann equation (assuming you have enough dimensions to describe all important degrees of freedom) is the collision operator, which could be said to be analogous to the Hamiltonian in the Schrödinger equation. The collision operator describes all interactions between particles and is very difficult to specify exactly for real physical systems, though a number of popular approximations exist. I gather this is a bit different from the quantum-mechanical approach you’re talking about, where a lot of condensed systems can be described surprisingly well with “noninteracting” approaches… People have tried to use the Boltzmann equation (with or without quantum effects) to model solids, with mixed success. It seems to be best at fluids, especially gases and plasmas, perhaps because the molecular chaos assumption is difficult to remove. Look, I’m not claiming that quantum physics is no better than classical physics. But you seem to be saying “classical physics” when you should be saying “classical engineering approximations”, and then drawing conclusions based on the conflation of the two. Comparing the Schrödinger equation to something like the Navier-Stokes equations, never mind the heat equation, is apples-to-oranges. You can actually derive all of the basic principles of fluid mechanics from Newtonian mechanics, without even referencing electromagnetics, though your accuracy won’t be very good… I shouldn’t have gotten involved. I have a segfault to chase down… • I better see where you are coming from. You are clearly talking about deeper stuff, and good luck with your segfault. However, I do not think that your argument is convincing enough. Yes, it is possible to derive fluid equations and so forth from Newtonian mechanics. 
• I see better where you are coming from. You are clearly talking about deeper stuff, and good luck with your segfault. However, I do not think that your argument is convincing enough. Yes, it is possible to derive fluid equations and so forth from Newtonian mechanics. The problem still persists, however, that after the derivation (in which kappa turns out to be a derived quantity and actually not a constant), the treatment of heat and sound needs to be done as stitched patchworks on top of the same fundamentals. As you rightly noted, I was saying that you don’t treat it that way in quantum physics, and it is quite important to see how it is actually handled differently. Also, the “proper” way to deal with interacting quantum systems is to couple them. For example, phonons and photons interact, so a proper treatment is to deal with waves that are half-phonon and half-photon and then quantise them yet again. This is completely different from how classical approaches tackle these problems. Yet again, I have to reiterate that I am not saying that you cannot get good results from classical considerations. What I am saying is that, due to how classical ideas actually arise from quantum fundamentals (namely, that everything classical tends to just be the conflation of modal [as in, most probable] behaviour as the _only_ [or mean, if you are talking about bulk stuff] possibility), the approximation schemes are doomed to complications for little gain. One of these is the asymmetrical treatment of heat and sound. That is, even after you derive heat and sound from the same underlying bulk motion of continuum mechanics, you still have to treat them separately, whereas quantum physics insists that they are _exactly_ the same thing, just different limits of the same _one_ term in any approximation scheme. It is the same thing with fluids. Very few physicists are dealing with the Navier-Stokes equations themselves, since those are now the preferred game of applied mathematicians. Instead of asking whether the Navier-Stokes equations can have solutions for such-and-such kinds of problems, the physicists working on fluids tend to work on the quantum corrections that should be added onto the Navier-Stokes equations. After all, chaos sets in earlier than the Navier-Stokes equations imply, because, near the critical points, modal behaviour is nowhere near the mean behaviour that we should have been focusing upon all this while. Sadly, this is so difficult that we have yet to do something fundamentally good about it. In that case, I am not saying that the corresponding classical problems are not important or not good at describing physical systems, but that the quantum world view is very different. And since the fundamental picture clearly needs to be quantum, I merely mean to say that those quantum considerations happen to be even more important than the classical problems.
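The "half-phonon and half-photon" waves mentioned above are what condensed-matter physicists call polaritons, and the hybridisation behind them is just two coupled modes. A tiny sketch (mine, not the commenter's), in illustrative units:

```python
# Two modes of frequencies w1, w2 coupled with strength g hybridise into
# normal modes w-, w+; on resonance each is an equal mixture of the two.
import numpy as np

g = 0.1   # coupling strength (illustrative units)

for detuning in [-0.5, -0.2, 0.0, 0.2, 0.5]:
    w1 = 1.0 + detuning / 2   # "photon-like" bare frequency
    w2 = 1.0 - detuning / 2   # "phonon-like" bare frequency
    H = np.array([[w1, g],
                  [g, w2]])
    w_minus, w_plus = np.linalg.eigvalsh(H)
    print(f"detuning = {detuning:+.1f}: w- = {w_minus:.3f}, "
          f"w+ = {w_plus:.3f}, splitting = {w_plus - w_minus:.3f}")

# The splitting never closes (it bottoms out at 2g on resonance): the avoided
# crossing characteristic of the polariton picture.
```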
• Well, I got led on a merry chase and finally found what was causing the memory error. Turns out it was my fault all along… Rather than “the problem still persists” after the derivation, I would say that the problem ARISES in the derivation of transport equations from the fundamentals. The Boltzmann equation doesn’t have separate terms for heat, sound, bulk motion, viscous stress, etc., because it directly describes the molecular motion those things are emergent properties of. It’s not continuum mechanics either; it’s perfectly capable of describing rarefied gases and even free molecular flow. Of course quantum mechanics is a much better model than classical physics for condensed matter behaviour, and even some aspects of gas/plasma behaviour. I completely agree with you there. But I maintain that the specific criticism I was responding to, that of classical physics having an inherently fragmented picture of material mechanics, was not accurate, seemingly because of a mismatch in the fundamentality of the descriptions being compared. • Sorry, I don’t know why, but the comment system won’t allow me to reply to yours. I see. That would be totally my ignorance, then. However, I would like to point out, to replace the original wrong argument, that the natural ideal gases we are familiar with are actually Fermi gases in the high-temperature and low-density limit. If that were not the case, we would run into what is known as the Gibbs paradox, in which a classical gas, in the equations, somehow has a lot less pressure than expected. In particular, the ideal gas equation pV = NkT would miss out the N, which is around 10^24. That makes no sense, until one realises that quantum indistinguishability (which is basically quantum entanglement, really) needs to be taken into account. I hope that little bit shows that even dilute gases, in which we do not expect quantum effects to be important, turn out to be critically dependent upon quantum ideas nonetheless. Of course, the rest of the system does not require quantum corrections, and there is an easy fudge factor to fix that problem, but it does show how quantum theory is still a vital component of everyday life, not some esoteric correction that only people caring about precise effects can observe. (Which is the underlying point I really wanted to outline, although my choice of example turned out to be wrong.) Thanks. It seems, however, that the “classical atoms” view given by the Boltzmann equation thus incorporates enough physics to reproduce the important things I was caring about. Interesting. • I hate to keep doing this, but… The Gibbs paradox has to do with the definition of entropy. If you don’t assume indistinguishability, you can toggle the entropy up and down by opening and closing a door between two identical reservoirs. You can get the correct pressure just fine with classical gas kinetics. But there are other things about gas dynamics that require quantum treatment. The temperature dependence of ideal-gas specific heat in multiatomic substances, for instance, is quite substantial and entirely due to the quantization of internal energy storage modes (at lower temperatures, there usually isn’t enough energy in a collision to excite these modes). • Or something like that – I had to look up Gibbs’ paradox, and I’m not completely sure my facile description above is right… • Nah, I know classical gas kinetics can derive the pressure just fine. Why, indeed, I was just teaching my student that elementary derivation. But it does mean that both the Boltzmann and Gibbs entropies cannot be derived from classical reasoning without the indistinguishability fudge factor. You would have to rely on the Sackur–Tetrode entropy (removing all the quantum stuff and replacing it with an unknown constant, of course). It might not seem like a big issue at first glance, but it actually is. Other than the fact that the entropy of mixing (that you were describing) has to be discontinuously and manually handled, it also means that stuff like phase changes goes haywire. Again, that is useless to a fluid dynamicist until you want to deal with, say, ice-water mixtures or critical phenomena. Or worse, the theory is inconsistent.
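A quick numerical illustration of this extensivity problem (my own sketch, not from the thread): with the configurational entropy written without the 1/N! indistinguishability factor, merging two identical boxes of gas appears to create entropy from nothing.

```python
# Classical ideal-gas configurational entropy, with and without the 1/N!
# factor (Stirling-approximated). An extensive S must satisfy
# S(2N, 2V) = 2 S(N, V); the distinguishable version fails by 2N ln 2.
import numpy as np

def S_distinguishable(N, V):
    """S/k without 1/N!: N ln V (constant terms dropped)."""
    return N * np.log(V)

def S_indistinguishable(N, V):
    """S/k with 1/N! via Stirling: N ln(V/N) + N (constant terms dropped)."""
    return N * np.log(V / N) + N

N, V = 1e23, 1.0   # particle number and volume in arbitrary units

for name, S in [("distinguishable", S_distinguishable),
                ("indistinguishable", S_indistinguishable)]:
    defect = S(2 * N, 2 * V) - 2 * S(N, V)
    print(f"{name:17s}: S(2N,2V) - 2*S(N,V) = {defect:.3e} (units of k)")

# The distinguishable case is off by 2N ln 2 ~ 1.4e23 k; the indistinguishable
# case is exactly extensive.
```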
Judging by how seriously you take the mathematics, it is either screaming at you that you are doing something wrong, or telling you that phenomenology needs to be used (by curve-fitting the unknown constant there, for one). Instead, what I wanted to impress upon you is that, rather than deriving the pressure from kinetic theory (actually, what a bad name! It is not a theory, nor does kinetic make sense as its modifier. Instead, classical atomic model would be its rightful name), it is possible to subsume the entirety of classical thermodynamics into the 2nd Law. That is, given the existence and some assumed properties of the entropy, you can construct everything you find in classical thermodynamics, even without statistical thermodynamics. That is, the 0th Law and the 1st Law, in particular, are theorems if you assume the 2nd Law to be your postulate. Actually, it is even a bit less — you assume parts of the 2nd Law, and prove the full form of the 2nd Law from the assumptions. The issue I was referring to is that, if you take this view, in which pressure is just a derivative of the entropy via Maxwell’s relations, and then you try to construct the statistical thermodynamics from it, you will run into Gibbs’ paradox. At the end, there is no need to worry about dragging the conversation out. Actually, I was still waiting for some insights from you — you have already shown me wrong once, and there is no reason why you cannot teach me more. 5. I particularly like the statement: 6. Okay, physicist, most of the things in the video are not new to me, but good presentation. Commented, though, to point out that the “everything is named Quantum” habit is an interesting recurring phenomenon in the USA. Perhaps the largest one was the use of “Radio” in naming things. Radio was the internet on steroids, the “tech stock” of the 1920s bubble. One of the most famous meaningless uses of Radio from the time was the little red wagon called a Radio Flyer. The company just put two hot buzzwords together, and created a legendary product. 7. Pingback: The Webcomic Guide to Quantum Physics | Slackpile 8. Dear Jorge Cham, I enjoyed your cute animation. Since you said you were looking for ways to think about quantum mechanics, I thought the resource list below might be interesting. Please feel free to contact me with questions. David Liao One of my physics professors from Harvey Mudd College (a half-hour east of Caltech) wrote a wonderful book on quantum mechanics for junior physics majors: John Townsend, A Modern Approach to Quantum Mechanics, University Science Books: Sausalito, CA (2000) (http://www.amazon.com/A-Modern-Approach-Quantum-Mechanics/dp/1891389785). The academic pedigree of this book comes through Sakurai’s Modern Quantum Mechanics. Get a hold of Professor David Mermin at Cornell. Tell him you are working with Caltech on this animation series, and ask him to walk you through his slides on Bell’s inequalities and the Einstein-Podolsky-Rosen paradox (http://www.lassp.cornell.edu/mermin/spooky-stanford.pdf). If you can meet with Sterl Phinney at Caltech, talk to him. He seems to know a lot about a lot, and he’s really fun to be around. Fundamental concepts: There is a variety of ways to introduce quantum mechanics. The following two flavors can provide particularly satisfying insight: Path-integral formulation — A creative child can tell a bunch of different imaginary stories to explain how a particle got from situation A to another situation B during the course of a day.
A mathematician can associate with each story a complex phasor. The phasors can be added (in a vector-like head-to-tail fashion) to obtain an overall complex number for getting from A to B, whose squared magnitude is the overall probability of getting from A to B. The concept of extremized action from classical mechanics (think of light taking the path of least time) is a limiting approximation of the quantum-mechanical path-integral formulation. For this brief description, I skipped a variety of details. This perspective is attributed to Richard Feynman. State vector, operators — An older, more traditional description of quantum mechanics centers around the state vector (often denoted |psi>). “All that can be described” about an entity of interest is hypothetically abstracted as a vector from a vector space of all possible descriptions that can be associated with the entity. It is hypothesized that the outcomes of measurements correspond to [real] eigenvalues of [Hermitian] operators that can act on the state vector, and that when it is appropriate to describe an entity using one single eigenstate of an operator, this means that the observation corresponding to that operator will without doubt yield the corresponding eigenvalue as the measured result. Note: State |psi> is *not* wavefunction psi(x). psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>. Risky vocabulary: It is important to be aware of verbal shortcuts that are used to make quantum seem more conceptually accessible in the short term but that, unfortunately, also make quantum much more difficult to understand fundamentally in the long term: There is no motion in any energy eigenstate (ground state or otherwise). Words such as “vibration” and “zooming around” are only euphemistically associated with any *individual* energy eigenstate. As an example, the Born-Oppenheimer approximation for solving the time-independent Schrödinger equation by separating the electronic and nuclear degrees of freedom is often justified using a story that involves the phrase “the light electrons are whizzing around the nuclei faster than the massive nuclei are slowly vibrating around their equilibrium positions.” This is shorthand for saying that the curvature term associated with the nuclear coordinates is ignored as the first term in a perturbative expansion because it is suppressed by the smallness of the electron mass, m, relative to the nuclear mass, M (for details, http://www.math.vt.edu/people/hagedorn/). Even though the Heisenberg relationship is often described using phrases such as “not knowing how we disturbed a particle by looking at it,” a more fundamentally satisfying understanding is obtained by seeing that some operators don’t commute. Because some pairings of operators, such as position and momentum, don’t share eigenvectors, it is impossible for an entity to simultaneously be in an eigenvector for one operator, say, x position, while also being in an eigenvector for the other operator, in this example, x momentum. Having the momentum well defined (being in an eigenvector for momentum) corresponds to being unable to associate one particularly narrow range of position eigenvalues with the entity. This is essentially the Fourier cartoon you used in the animation (narrowness in space corresponds to less specificity in frequency/wavelength and vice versa).
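The noncommutation point compresses into two lines of standard textbook notation (my summary, not David's):

```latex
[\hat{x}, \hat{p}] \;=\; \hat{x}\hat{p} - \hat{p}\hat{x} \;=\; i\hbar
\qquad\Longrightarrow\qquad
\Delta x \,\Delta p \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{x},\hat{p}] \rangle\bigr| \;=\; \frac{\hbar}{2}
```

The inequality (Robertson's form of the uncertainty relation) follows from the commutator alone; no story about clumsy measurements disturbing the particle is needed.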
Beware of popular reports of the experimental observation of a wavefunction. Pull up the abstract from the underlying peer-reviewed manuscripts. I bet that the wavefunction has not been directly observed. Instead, the squared magnitude (probability distribution) has been inferred from a large collection of individual experiments. As an example, a recent work inferring the nodal structure (radii where the probability of finding an electron around an atomic core vanishes) became popularized as direct observation of the wavefunction, which is not the claim in the original authors’ abstract. • Hi David, By and large, a good write-up. But, still… 1. A minor point: Did you miss something on the right-hand side of the equality sign? In any case, guess you could streamline the line a bit here. 2. A major point: “There is no motion in any energy eigenstate.” And, just how do you know? [And, oh, BTW, you could expand this question to include any other eigenstates as well.] Anyway, nice to see your level of enthusiasm and interest for these *conceptual* matters as well. Coming from a physics PhD, it is only to be appreciated. • Thank you for your reply. Hope the following is helpful! 1) Thank you for catching the typo in the sentence, “psi(x) = , which is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” This sentence should, instead, read, “psi(x) is a *representation* of |psi> in terms of linear weighting coefficients for adding up basis states |x>.” I don’t know how to edit my post to correct this sentence. 2) You asked how it is possible to know that there is no motion in an energy eigenstate. Below, I include two ways to respond. The abstruse response is an actual answer and points to the insight you are seeking. If you look closely, you will see that the graphical response is not an actual answer. Instead, it is a fun exercise for “feeling the intuition” that energy eigenstates do not have motion. Both responses are important (many physicists enjoy both casual “proofs” and fluffy intuition). Abstruse response: We argue that an object that is completely described by one energy eigenstate has no motion. An energy eigenstate is a solution to the time-INDEpendent Schrödinger equation. It’s very “boring.” The only thing that happens to it, according to the time-DEpendent Schrödinger equation, is a rotation of its overall complex phase. This phase does not appear in expectation values, and so all expectation values are constant in time. To obtain motion, it is necessary to have a superposition of states corresponding to more than one energy eigenvalue. In such circumstances, at least some of the complex phases will rotate at different time frequencies, allowing *relative* phases between states in the superposition to change with time. I am not claiming that experimental systems that people abstract using energy eigenstates will never turn out, following additional research, to have any aspect of motion. I am saying that the *abstraction* of a single energy eigenstate itself (without reference to whether the abstraction corresponds to anything empirically familiar) is a conceptual structure that contains no concept of motion (save for the rotating overall phase factor). The mathematics described above is very similar to the mathematics that describes the propagation of waves in elastic media. A pure frequency standing wave always has the same shape (though it might be vertically scaled and upside down). A combination of standing waves of different frequencies does not always maintain the same shape.
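The abstruse response can be checked numerically in a few lines. A minimal sketch (mine, not David's), using the first two harmonic-oscillator eigenfunctions in units where hbar = m = omega = 1:

```python
# "No motion in an energy eigenstate": <x> stays fixed for a single eigenstate
# but oscillates for a superposition, whose relative phases evolve in time.
import numpy as np

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]

psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                     # ground state, E0 = 0.5
psi1 = np.pi**-0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2)  # first excited, E1 = 1.5

def mean_x(psi_t):
    """Expectation value <x> for a complex state sampled on the grid."""
    return np.real(np.sum(np.conj(psi_t) * x * psi_t) * dx)

for t in [0.0, 1.0, 2.0, 3.0]:
    eigen = psi0 * np.exp(-1j * 0.5 * t)                    # pure eigenstate
    mix = (psi0 * np.exp(-1j * 0.5 * t)
           + psi1 * np.exp(-1j * 1.5 * t)) / np.sqrt(2.0)   # superposition
    print(f"t = {t:3.1f}: <x>_eigenstate = {mean_x(eigen):+.3f}, "
          f"<x>_superposition = {mean_x(mix):+.3f}")

# The eigenstate column stays at zero; the superposition column traces out
# cos(t)/sqrt(2), i.e. a probability mound sloshing back and forth in the well.
```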
Graphical response: Go to http://crisco.seas.harvard.edu/projects/quantum.html and play with the simulator. Now set the applet to use a Harmonic potential, and try to sketch, using the “Function editor,” the ground state from http://en.wikipedia.org/wiki/File:HarmOsziFunktionen.png You might want to turn on the display of the Potential energy function to ensure an accurate width for the state you are sketching. Run the simulation. Notice that the function doesn’t move very much (or, in the case that you sketched the ground state with perfect accuracy, it shouldn’t move at all). Now, sketch a different state that doesn’t look like any one of the energy eigenstates in the Wikipedia image above. This should generate motion (to some extent looking like a probability mound bouncing back and forth in the well). You can also look at the animations at http://en.wikipedia.org/wiki/Quantum_harmonic_oscillator and see that the energy eigenstate examples (panels C, D, E, and F) merely rotate in complex space (red and blue get exchanged with each other), but the overall spatial probability distribution is unchanged. 3) You asked whether one would assert absence of motion for other eigenstates. Not as a general blanket statement. The reason that energy eigenstates have no motion is that they are eigenstates, specifically, of the Hamiltonian. Yes, in some examples, it is possible for an eigenstate of another operator to have no motion (i.e. when that state is an eigenstate both of that other operator and of the Hamiltonian). • Cool. Your abstruse response really wasn’t so abstruse. But anyway, my point concerning the quantum eigenstates was somewhat like this. To continue with the same classical mechanics example as you took, consider, for instance, a plucked guitar string. The pure frequency standing wave is “standing” only in a secondary sense—in the sense that the peaks are not moving along the length of the string. Yet, physically, the elements of the string *are* experiencing motion, and thus the string *is* in motion, whether you choose to view it as an up-and-down motion, or, applying a bit of mathematics, view it as a superposition of “leftward” and “rightward” moving waves. The issue with the eigenstates in QM is more complicated, only because of the Copenhagen/every other orthodoxy in the mainstream QM. The mainstream QM in principle looks down on the idea of any hidden variables—including those local hidden variables which still might be capable of violating the Bell inequalities. They are against the latter idea, in principle—even if the hidden variables aren’t meant to be “classical.” Leaving aside a few foundations-related journals, the mainstream QM community, on the whole, refuses to seriously entertain any idea of any kind of a hidden variable—and that’s exactly the way in which the relativists threw the aether out of physics. … I was not only curious to see what your inclinations with respect to this issue are, but also to learn the specific points with which the mainstream QM community comes to view this particular manifestation of the underlying issue. In particular, do they (even if epistemologically only wrongly) cite any principle as they proceed to wipe every form of motion out of the eigenstates, or is it just a dogma? (I do think that it is just a dogma.) Anyway, thanks for your detailed and neatly explanatory replies. … Allow me to come back to you also later in future, by personal email, infrequently, just to check out with you how you might present some complicated ideas esp.
from QM. (It’s a personal project of mine to understand the mainstream QM really well, and to more fully develop a new approach for explaining the quantum phenomena.) • Ah, I see better where you are coming from. You are wondering what explanations someone might give for focusing on mainstream QM interpretations and de-emphasizing hidden-variables perspectives. Off the top of my head, I can imagine what people might generally say. I can also rattle off a couple of thoughts as to why my attention does not wander much into the world of hidden variables. Anticipated general responses (0) I imagine usual responses would refer to Occam’s Razor and/or the Church of the Flying Spaghetti Monster. People might say that Occam’s Razor (or something along the same lines) is a fundamental aesthetic aspect of the Western idea of “science.” I am not saying these references directly address the most logically reasoned versions of the concerns you might be raising. (0.1) I think some professional scientists are laid back about conceptual cleanliness. It doesn’t bother them enough to “beat” the idea of motion in eigenstates out of students in QM. I know a couple of professional scientists who are OK with letting students think that electrons are whizzing around molecules. Personal thoughts (1) I don’t necessarily “believe” mainstream QM in a religious sense, but it feels natural (for my psychology). My gut feelings of certainty about the existence of things somewhat vanish unless I am directly looking at them, touching them, and concentrating with my mind to force them “into existence” through brutal attention. People like to sensationalize mainstream QM by saying that it has counterintuitive indeterminacy. At the end of the day, what offends one person’s intuitions can be instinctively natural for someone else. I hear that mainstream QM is also “OK” for people who hold Eastern belief systems (I’m atheistish, so I don’t personally know). (2) Mainstream QM has a particular pedagogical value. It offers an exercise in making reasoned deductions while resisting the urge to rely on (some) inborn intellectual instincts. I think it’s good for learning that we sometimes confuse [1] the subjective experience of *projecting* a well-defined, deterministic mental image of the dynamics of a system onto a mental blank stage representing reality with, instead, [2] the supposed process of directly perceiving and “being unified with” reality. Yes, philosophy courses can be valuable too, but in physics you can also learn to calculate the photospectra* of atoms and describe the properties of semiconductors and electronic consumer goods. * Surprisingly difficult to do in a fully QM treatment at the undergraduate level. Perturbing the atom with a classical oscillating electric field is *not* kosher. It’s much more satisfying to quantize the EM field. Does any of this mean that mainstream QM is true? No. No scientific theory is ever “true” (quotation marks refer to mock hippie existential gesture). David Liao P.S. I am happy to share my email address with you–how do I do that? Does this commenting platform share my address (sorry, not used to this system)? • Hi David, 1. Re. hidden variables. Philosophically, I believe in “hidden variables” to the same extent (i.e. to the 100% extent) and for the same basic reason that I believe that a train continues to exist after it enters a tunnel and before it emerges out of the same.
Lady Diana *could* suffer an accident inside a tunnel, you know… (I mean, she would have continued to exist even after entering that tunnel—whether observed by those paparazzi or not. That is, per my philosophical beliefs…) Physics-wise, I (mostly) care for only those hidden variables which appear in *my* (fledgling) approach to QM (which I have still to develop to the extent that I could publish some additional papers). I mostly don’t care for hidden variables of any other specifically physics kinds. Mostly. Out of the limitations of time at hand. 2. Oh yes, (IMO) electrons do actually whiz around. Each of them theoretically can do so anywhere in the universe, but practically speaking, each whizzes mostly around its “home” nucleus. 3. About mysticism: Check out J.M. Marin (DOI: 10.1088/0143-0807/30/4/014). Mysticism was alive and kicking in the *Western* culture at a time when Fritjof Capra was not even born. The East could probably lay claim to the earliest and also a very highly mature development of mysticism, but then, I (honestly) am not sure to whom the credit for its fullest possible development should go: to the ancient mystics of India, or to Immanuel Kant in the West. I am inclined to believe that at least in terms of rigour, Kant definitely beat the Eastern mystics, and that, therefore, his might be taken as the fullest possible development. Accordingly, between the two, I am inclined to despise Kant even more. 4. About my email ID. This should be human readable (no dollars, brackets, braces, spaces, etc.): a j 1 7 5 t p $AT$ ya h oo [DOT} co >DOT< in . Thanks. 9. Great idea for doing this. Just a hint for getting more non-physicists involved: talk at least half as fast as you do; people need time to absorb and self-explain, otherwise no matter how simple it is, they lose you at the beginning. 10. Pingback: Quantum Matter Animated! | Astronomy physics an... 11. Pingback: Quantum Frontiers and Tuba! | Creative Science • Mankei, Interesting. You seem to have been having fun thinking about this field for quite some time. Anyway, here are a couple of questions for you (and for others from this field): (i) Is it possible to make a mechanical oscillator/beam detectably interact with single photons at a time (i.e. a statistically very high chance of only one photon at a time in the system)? [For instance, an oscillator consisting of the tip of a small triangle protruding out of a single layer of atoms, as in a graphene sheet? … I am just guessing wildly for a possible and suitable oscillator here. Note, for single photons, it won’t be an _oscillator_ in the usual sense of the term. However, any mechanical device that mechanically responds (i.e. bends) would be enough.] (ii) If such a mechanical device (say an oscillator) is taken “to” “0” K, does/would/will it continue to show the red/blue asymmetrical behavior? [Esp. for Mankei] What do you expect? • (i) In theory it’s possible; there have been a few recent theoretical papers on “single-photon optomechanics” that explore what would happen, but experimentally it’s probably very, very hard. Current experiments of this sort use laser beams with ~1e15 photons per second. (ii) I have no idea what would happen then, because my math and my intuition always assume the laser beam to be very strong. Other people might be able to answer you better.
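For scale, that photon flux corresponds to a quite ordinary optical power. A back-of-envelope estimate (mine, not from the thread), assuming a telecom-band laser at λ ≈ 1550 nm, which is typical for such experiments but not stated here:

```latex
P \;=\; \Phi\,\frac{hc}{\lambda}
\;\approx\; 10^{15}\,\mathrm{s^{-1}}\times
\frac{(6.63\times10^{-34}\,\mathrm{J\,s})\,(3.00\times10^{8}\,\mathrm{m/s})}
     {1.55\times10^{-6}\,\mathrm{m}}
\;\approx\; 1.3\times10^{-4}\,\mathrm{W}\;\approx\; 0.13\ \mathrm{mW}
```

So "single-photon levels" would mean attenuating the probe by many orders of magnitude below a tenth of a milliwatt.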
• Hi Mankei, 1. Thanks for supplying what obviously is a very efficient search string. (The ones I tried weren’t even half as efficient!) … Very interesting results! 2. Other people: Assuming that the gradual emergence of the red-blue asymmetry with decreasing temperatures (near absolute zero) continues to be shown even as the *light flux* is reduced down to (say) the single-photon levels, then how might Mankei’s current model/maths be reconciled with that (as of now hypothetical) observation? I thought of the single-photon version essentially only in order to remove the idea of “noise” entirely out of the picture. If there is no possibility of any noise at all, and *if* the asymmetry is still observed, wouldn’t it form sufficient evidence to demonstrate the large-scale *quantum* nature of the mechanical oscillator (including the possibilities of a transfer of a quantum state to a large-scale device)? Or would there still remain some source of a doubt? • Hi Mankei, We also thought about the issue you brought up in arxiv:1306.2699. See, for instance, a recent paper we published with Yanbei Chen and Farid Khalili (http://pra.aps.org/abstract/PRA/v86/i3/e033840). I would consider that our experiment measured both the sum AND the difference of the red and blue sideband powers. The DIFFERENCE is indeed, as shown in your arxiv post mentioned above, due to the quantum noise of the light field measuring the mechanics. The noise power of the mechanics is in the SUM of the red and blue sidebands. Our experimental data was plotted as the ratio of the red and blue sidebands, which depends upon both the sum and the difference of the sideband powers, and looks very different from what would be expected even for a semi-classical picture in which the light is quantized and the motion not. • I guess we’ve already exchanged emails and come to a consensus, but just to recap, I agree that, through your calibrations, you’ve inferred zero-point mechanical motion and your result is consistent with quantum theory. The word “quantum” of course literally means something discrete, and one could argue you haven’t observed “quantum” motion yet, but that’d be nitpicking. • And to clarify, the asymmetry itself is not proof of zero-point mechanical motion or anything quantum. The mechanical energy was obtained from the SUM of the sidebands (as Oskar said), and the asymmetry was used as a *calibration* to compare the mechanical energy with the optical vacuum noise. • Hi Mankei, Thanks for your response. There are two main claims in your manuscript: 1) centers around the interpretation of our result, and 2) is a strong claim about classical stochastic processes being the source of our observed asymmetry. In response to 1), the different interpretations of the result (and in particular, the relation between the optical vacuum noise and the zero-point motion) have been considered previously in great depth by our colleagues at IQIM (Haixing Miao and Yanbei Chen) and in Russia (Farid Khalili).
I would like to point you to this paper: http://pra.aps.org/abstract/PRA/v86/i3/e033840. In response to 2), you claim to “show that a classical stochastic model, without any reference to quantum mechanics, can also reproduce this asymmetry”. We also consider this possibility in a follow-up paper which came out last year (http://arxiv.org/abs/1210.2671), where we show a derivation exactly analogous to what you’ve shown, and then go to great lengths to experimentally rule out classical noise as the source of the asymmetry (by varying the probe power and showing that the asymmetry doesn’t change, and by carefully characterizing the properties of our lasers). More generally, there are fundamental limits as to what can be claimed regarding `quantum-ness' in any measurement involving only measurements of Gaussian noise. To date there have been 5 measurements of quantum effects in the field of optomechanics, our paper being the first one (the others are Brahms PRL 2012, Brooks Nature 2012, Purdy Science 2013, and Safavi-Naeini Nature 2013 (in press)). Unfortunately, all of these measurements are based on continuous measurement of Gaussian noise. There are several groups working hard on observing stronger quantum effects (as O’Connell Nature 2010 did in a circuit QED system), but we are still some months away from that. Best, Amir • Actually, I’d like to make that 6 papers – last week Cindy Regal’s group released this beautiful paper on arXiv: http://arxiv.org/abs/1306.1268. Here as well, the `quantum-ness' can only be inferred after careful calibration of the classical noise in the system, since the measurement is based on continuous measurement of Gaussian noise. • Actually, I’d like to make that 7 papers – I forgot about the result from 2008 from Dan Stamper-Kurn’s group: Murch, et al., Nature Physics 4, 561 (2008). 12. Pingback: Quantum Matter Animated! | Space & Time | S... 13. Pingback: Quantum Matter Animated! | Far Out News | Scoop.it 14. Pingback: Quantum Theory and Buddhism | Talesfromthelou's Blog 15. I get very annoyed whenever somebody uses the phrases “quantum jump” or “quantum leap” to imply a BIG change in some domain (such as “our new Thangomizer represents a quantum jump in Yoyodyne’s capabilities”). A quantum jump is the SMALLEST POSSIBLE state change in quantum mechanics, so when somebody claims their product represents a “quantum leap,” I mentally translate that as “smallest possible degree of incremental improvement over their previous product!” 16. Pingback: My comments at other blogs—part 1 | Ajit Jadhav's Weblog 17. Is it that a higher red shift and a lower blue shift indicate constant shrinking of the mirror? If that is true, then do we expect the red shift to die down if we keep the mirror at 0 K for long enough? 18. Pingback: Squeezing light using mechanical motion | Quantum Frontiers 19. Pingback: The Most Awesome Animation About Quantum Computers You Will Ever See | Quantum Frontiers 20. Pingback: Hacking nature: loopholes in the laws of physics | Quantum Frontiers 21. Pingback: Human consciousness is simply a state of matter, like a solid or liquid – but quantum | Tucson Pool Saz: Tech - Gaming - News 22. Pingback: This Video Of Scientists Splitting An Electron Will Shock You | Quantum Frontiers
IN-DEPTH | Logic takes on the physics paradoxes: Review essay on The Spacetime Model: A Theory of Everything by Jacky Jerôme  The history of science is the record of the achievements of individuals who…met with indifference or even open hostility on the part of their contemporaries…A new idea is precisely an idea that did not occur to those who designed the organizational frame, that defies their plans, and may thwart their intentions. —Ludwig von Mises, The Ultimate Foundation of Economic Science[1] A few days after watching CERN’s Higgs boson press conference, it occurred to me that if the hypothesized Higgs field is supposed to be responsible for mass, and gravity is directly related to mass, it should be fairly obvious that mass, gravity, and the Higgs field might all turn out to be aspects of the same deeper phenomenon, rather than separate, interacting layers. A search soon revealed some theorizing out there to this effect. Given the list of seemingly impossible paradoxes that have been generated in the name of quantum physics over the past century, and which have spun off an entire quantum mysticism genre, I became curious as to whether there might be alternative models that attempt to bridge the usual list of physics paradoxes in a way that made more sense. In a free 222-page PDF (Version 6.00, 2 July 2012 [Originally 2005]) replete with illustrations, Jacky Jerôme of France claims to have elaborated a single model capable of suggesting rational accounts of most of the headline physics enigmas. He characterizes it as a substantial build on the basics of Einstein’s four-dimensional spacetime, one that does not resort to any fantastic additional dimensions, yet is still consistent with experimental evidence and the accepted descriptive mathematics of both quantum mechanics and general relativity. That is a big claim, yet he still tries to avoid overhyping it: “Despite the fact that this theory is logical, coherent, and makes sense, the reader must be careful, bearing in mind that the Spacetime Model has not yet been validated by experimentation.” That said, he offers reasoned degrees of confidence as he applies his underlying concepts to particular issues, and at several points suggests further experiments to test his claims. His work appears to be both compatible with the laws of logic and a provocative contender for the holy grail of physics, a “Theory of Everything,” that is, a physics model that accounts for the behavior of both the very large and the very small using the same principles.[2] Students of economics in the tradition of Ludwig von Mises might quickly recognize the potentially large gains to be had if formerly separate “micro” and “macro” specialties can really be integrated into a unified model.[3] They will also recognize the possibility that in certain situations, thinkers outside of the current establishment can be offering superior ideas that are built on fundamentally different perspectives than the conventionally accepted ones.
The dual themes of logic and physics here might also capture the attention of fans of the epic rationalist-fantasy storyworld of Ayn Rand’s Atlas Shrugged, in which philosophy and physics are portrayed as the dual-pinnacle disciplines of the “rational mind.” Finally, serious students of contemplative traditions curious about the popular claims of quantum mysticism will have a fresh opportunity to consider whether and how the contrasting Spacetime Model may or may not relate to various traditional contemplative claims about the fundamental nature of reality. Two ways to look at banging a drum Jerôme’s writing caught my attention early when he made a critical distinction between the mathematical description of physical phenomena and their causal-rational explanation: We could think that the basic laws of physics are extremely complex since the mathematics of general relativity and quantum mechanics are. Such is not the case…It is thus advisable to distinguish the basic phenomena, generally very simple, from the laws governing them, generally using mathematics, which may be extremely complex.[4] He gives the example of a child knowing how to produce noise by banging a drum. We can readily understand in causal-rational terms that noise results from impacts on the drum, whereas describing the surface waves in physical-mathematical terms requires complex know-how and calculations including Bessel functions. Thus, causal-rational explanation and mathematical description are revealed as two different modes or aspects of knowledge about the same phenomena.
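The drum example can be made concrete. Below is a minimal sketch (my own illustration, not Jerôme's) of the "complex mathematics" side of the distinction: the mode frequencies of an ideal circular drumhead come from the zeros of Bessel functions, and the membrane parameters used here are merely illustrative.

```python
# Mode frequencies of an ideal circular membrane: f_mn = alpha_mn * c / (2*pi*a),
# where alpha_mn is the n-th zero of the Bessel function J_m and c = sqrt(T/sigma).
import numpy as np
from scipy.special import jn_zeros

a = 0.15       # membrane radius, m (assumed)
T = 1000.0     # membrane tension, N/m (assumed)
sigma = 0.26   # surface density, kg/m^2 (assumed)
c = np.sqrt(T / sigma)   # transverse wave speed on the membrane

for m in range(3):                                   # angular mode number
    for n, alpha in enumerate(jn_zeros(m, 3), start=1):
        f = alpha * c / (2 * np.pi * a)
        print(f"mode ({m},{n}): {f:7.1f} Hz")

# The overtones are not integer multiples of the fundamental, which is part of
# why a drum sounds "unpitched" next to a string. The child, of course, needs
# none of this to bang the drum.
```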
The claim I have sometimes heard that “one can only understand quantum physics through mathematics” always struck me as a little suspicious. It speaks of a mystery that is inherently unapproachable to the non-math-genius. Yet the above distinction enables an alternative interpretation. What if this claim only signals that while the speaker understands this rarefied mathematics, he also simply lacks a rationally acceptable causal explanation of what it describes? After all, even if the subject is the same, these are two different approaches to knowledge of that subject. Each employs different languages, skills, and methods. If these approaches form a team, isn’t it possible that one of those partners (causality) could go astray even as the other (math) remained on track? Bridging these two approaches, Jerôme tackles paradoxes such as the wave-particle duality, the nature of photons, the constancy of the speed of light amid the relative motion of matter, the behavior of black holes, the location of the mysteriously missing antimatter in the universe, how such high energy is produced by nuclear reactions, and how fantastic numbers of electrons and positrons everywhere could have the same volume and charge (just either positive or negative) to unimaginably high degrees of precision. By the end, he even offers a fascinating alternative to the “Big Bang” theory of the start of the universe. He claims the Spacetime Model makes much more sense of the relevant issues and observations, while accounting for a long list of otherwise “mysterious” phenomena in the process. Any attempt at an account of the origin of the universe must ultimately be speculative to some degree, but here we must also note that any knowledge claim in the natural sciences can never be validated 100%, as those in the more abstract disciplines such as logic, praxeology, and geometry can be. Natural science hypotheses must compete with rivals on the relative question of which available contender better accounts for the observations. Yet this is not a matter of “empirical” experimentation alone. Logic (internal consistency, etc.) must also play a role in evaluating competing hypotheses. Jerôme notes that: Wrong reasoning can lead to wrong results. For example, we know three different theories of mass and gravity, which are mathematically verified: the Higgs boson, Superstrings, and the Spacetime Model. At least two of these three theories are wrong, despite the fact that they are all three mathematically verified. Here is a typical example of the way Jerôme attempts to make sense out of the numerous established mathematical principles that have been left to appear mysterious in causal-rational terms: “E = mc². This formula is fully verified using mathematics and experimentation, but no one is able to explain it using logic and good sense. However, the solution is quite simple within the Spacetime Model.” Positivism still roosting at home? Such an advance of mathematical description over causal-rational explanation in fundamental physics should not be surprising in view of the relevant history of controversies regarding the respective roles of reason and empirical observation. Radical empiricism and logical positivism viewed axiomatic logical principles as unscientific, metaphysical anachronisms, not “really real” because they could not be empirically “observed” (meaning measured). As Ludwig von Mises noted: …the category of regularity is rejected by the champions of logical positivism. They pretend that modern physics has led to results incompatible with the doctrine of a universally prevailing regularity...In the microscopic sphere, they say…The categories of regularity and causality must be abandoned and replaced by the laws of probability.[5] It was just this mindset that accompanied the emergence of enigmas allegedly implied in a series of experiments and models in fundamental physics. The slit experiments, Schrödinger's cat, Heisenberg’s uncertainty principle, and so on, were trotted out as evidence that logic and causality had met their match, that the universe is at bottom governed by chance and uncertainty, and that some entities (not really being entities as old-school philosophers might have understood them) can exist in one place and another at the same time. Maybe quarks are telepathic! Proponents of such claims did not seem to notice the possibility that it was their previous rejection of logic that enabled an environment in which stop-gap speculations could gain sober recognition. Instead of these enigmas being viewed as no more than bemusing placeholders awaiting more coherent replacements, they were embraced and cited as evidence against old-fashioned reason and its “metaphysical,” a priori conceits. However, such thinking not only missed its own circularity, it also missed that an experimental result and the quality of a hypothesis forwarded to explain it are entirely different matters. The quality of a hypothesis depends in part on applying the very axiomatic logic that had been abandoned. Paradoxes that appeal to the minds of those who have rejected the strictures of logic show no mystical insight, but only the failure to apply to their thinking the inescapable, ancient rules for forming and validity-checking explanations of anything whatsoever.
In this light, Jerôme’s comment is telling: “As a physicist, it is necessary to leave this philosophical aspect to the philosophers and try to solve this enigma in a scientific way, with a logical and rational explanation.” This could be from the pages of Atlas Shrugged, since his let’s-get-practical use of the word “philosophers” in this sentence seems to imply that these are by definition anti-rationalist philosophers. Yet rationalist philosophers, part of whose message is precisely to uphold the requirements of logic and consistency for any valid knowledge claim, demand exactly the kind of “logical and rational explanation” that Jerôme sets as his goal. A breath of relatively reasonable quantum air Against this backdrop, I found refreshing Jerôme’s unabashed resort to “deduction,” “possibility,” and “logical consistency.” The results are consistently fascinating and provocative. He appears to make fairly short work of one physics paradox after another within a unified framework. In a key early move, he specifies a more consistent definition for volume as “closed volume.” In doing so, he notes conventional inconsistencies in volume definitions across scales, highlighting the importance of what is and is not “counted” as closed volume. In his model, it is closed volume alone, and not any of the other varieties of volume he details, that creates the central phenomenon of spacetime displacement. Particles and nuclei form closed volumes, but the distributed charges of the outer electrons of atoms are so diffuse that they do not. And whereas waves do not form closed volumes and therefore have no mass, particles do and therefore have mass. One might also take the converse perspective and define closed volume as “that which displaces spacetime.” “Particles,” in this model, result from “pieces of wave” that form closed volumes in spacetime. As these move and reopen, they can subsequently turn back into waves. Only closed volumes cause displacement in the elastic four-dimensional spacetime fabric that Einstein described, which produces what we have come to see from two different sets of observations as “gravity” and “mass” (the “mass effect”). Even the hypothesized Higgs field entails an additional dimension. The Spacetime Model claims to be able to dispense with this while still accounting for the observations associated with the entire Standard Model of particle physics, Higgs boson included. As Jerôme puts it: The 4D expression of the mass effect means that the universe can be described with only 4D expressions, as Einstein thought his whole life. We don’t need extra dimensions such as 5D, 6D, 7D...nD (string theory), or extra fields such as the Higgs field. In reality, the proposed theory is close to the Higgs boson theory. The major difference is that the famous Higgs field is nothing but spacetime....mass and gravitation are nothing but the consequence of the pressure of spacetime on closed volumes. His conclusion that “Everything is made out of spacetime” can certainly still leave us with a sense of the mysterious, but somehow manages to clean up the mystery compared to the more typical litany of enigmas. As Mises often emphasized, any given state of theory in a field must run up against some “ultimate given”; that is, it can never be expected to explain every possible thing. Jerôme’s ultimate given is quite ultimate indeed: an elastic 4D spacetime with a substructure of Spacetime Cells (sCells). Everything else is built from that.
It may be easiest to start by conceiving of an sCell as a “neutral electron.” However, Jerôme’s real point is the converse: that an “electron” is a “negatively charged sCell.” Its positively charged partner in existence is called a “positron,” which explains the positive charges of protons in this model. Positrons and electrons always have the same mass (closed volume) of 510.998918 keV (electron masses confirmed with “precision of <0.0000086%”), and protons and electrons have the same charge (with the opposite pole) of 1.602176565(35) × 10⁻¹⁹ coulombs. Jerôme writes, “The relative difference between the absolute values is less than 10⁻²¹! So, the question is, ‘How can we explain the incredible equality of these electric charges?’”

He hypothesizes a joint origin of both characteristics in the splitting and reproduction of identical sCells that constitutes the ongoing creation of spacetime (more on this below), which would account for this uncanny precision of commonalities. Starting with a fabric of sCells, when the neutral charge of one transfers to another, the result is one below-neutral cell and another nearby and equally above-neutral cell. These two always appear as a precisely opposite pair because the above-average charge of one and the below-average charge of the other are nothing more than two symmetrical results of a single transfer. They always have the same mass because their shared sCell substructure already predefines this in the same way in both cases. In this view, electrons and positrons are visible to us because of their charges, whereas sCells in their background average neutral state are undetectable (cannot be “observed” directly), precisely because of their neutrality, and are therefore hidden in plain sight. Positrons and electrons are just two types of lit-up sCell.

Electromagnetic waves, massless because they do not form closed volumes, propagate through this sCell fabric at a consistent speed in vacuum, but never any faster (light travelling through transparent matter has been measured at slower speeds, and quite slow speeds have been measured under extraordinary experimental conditions within matter cooled to near absolute zero). Jerôme attributes this to a maximum cell-to-cell transfer rate that is a natural limiting characteristic of the medium of sCells themselves. That we have come to call this maximum transfer speed of 299,792,458 m/s “the speed of light” reflects the way in which we observed it and can measure it.

Jerôme identifies neutral, positive, and negative states of sCells as the basic building blocks of all other particles. He proceeds to suggest how these components alone can account for the formation, disappearance, properties, masses, and charges of up and down quarks, protons, neutrons, hydrogen atoms, and onward. Neutral sCells can contribute to mass effects themselves, but only when they become enclosed within a subatomic particle or nucleus and thereby come to “count” as part of a closed volume. This pair model simultaneously accounts for the location of antimatter in the universe. Rather than being hidden many light years away, it is hidden right under our noses, concealed quite near its partner in existence within other particles.
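Jerôme's book contains no code, but the pairing logic just described is simple enough to sketch. The following is my own minimal toy illustration, under the assumption that a "transfer" is a single conserved ledger entry between two otherwise identical cells; the class and function names are invented for the purpose, and only the two constants are taken from the figures quoted above.

from dataclasses import dataclass

ELEMENTARY_CHARGE = 1.602176565e-19  # coulombs, as quoted in the review
SCELL_MASS_KEV = 510.998918          # keV, as quoted in the review

@dataclass
class SCell:
    charge: float = 0.0                # neutral background state
    mass_kev: float = SCELL_MASS_KEV   # fixed by the shared substructure

def transfer(donor: SCell, acceptor: SCell, q: float = ELEMENTARY_CHARGE):
    """One transfer yields a positron-like and an electron-like cell."""
    donor.charge += q     # above-neutral: the "positron"
    acceptor.charge -= q  # below-neutral: the "electron"
    return donor, acceptor

a, b = SCell(), SCell()
positron, electron = transfer(a, b)
# The pair is exactly opposite by construction: one ledger entry, two signs.
assert positron.charge + electron.charge == 0.0
assert positron.mass_kev == electron.mass_kev
print(positron, electron)

The point of the toy is only that, in such a scheme, the "incredible equality" needs no fine-tuning: opposite charges and equal masses are two views of one event in one shared substructure.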
Jerôme also claims to dispose of the hypothesized Strong force as a separate force; those effects result from the enveloping rubber-band-like effect of “distributed charge fields.” In fact, according to this model, there are only two fundamental forces from which the other apparently separate forces derive: Hooke’s Force (constraint and pressure), which applies to all particles, and Coulomb’s Force (attraction and repulsion), which applies only to charged particles (Figure 5-1). (A toy sketch of these two force laws appears below.)

He argues that the concept of a photon as a particle makes no sense. He explains why a photon must be a “quantified wave” and never a particle, and how a quantified wave travelling through an sCell substructure is both consistent with experimental evidence and in principle logically comprehensible. As for black holes, he writes: “Inside a closed volume, as inside a black hole, nothing happens. The light doesn’t exist and therefore can’t escape…”

He also claims to have solved the wave-particle duality. His method of doing so is largely logical and deductive, working from a simple set of widely accepted observations. And in another illustration of differentiating mathematical description and causal-rational explanation, whereas “Schrödinger’s probability concept must be replaced by a more realistic concept called the Distributed Charge Model,” the Schrödinger equation can still be used just as before!

For the finale, he offers a simple, elegant, and unified account of the beginning and ongoing growth (“expansion”) of the universe through sCell expansion and division, reminiscent of the way that living cells divide and reproduce in vast quantities with nearly unimaginable precision and a few extremely rare minor variations. This approach simultaneously supplies accounts of a long list of observations for which the Big Bang offers only question marks. A single internally consistent model is thus able to suggest accounts of the major observations at both the micro and macro levels of physics, including most of the usual list of enigmas. The real nature of spin and some other points remain relatively elusive, he admits, but he ventures some tentative parameters and possibilities in each case.

Simplifications are used to get the basics across to general readers, while the math-heavy sections and recalculations of fundamentals using closed-volume definitions are set off as supplemental information, which can be skimmed or skipped by the non-specialist. Most of the book should be within reach of those with a reasonable general science education (though more would make things easier) and might be read in a motivated afternoon or two. The prose is brief and clear and the illustrations helpful in bringing home the arguments. The English is “off” just enough to reveal that it is not the author’s first language, but the meaning remains clear and easy to follow. Although the book is clear, a quick copyedit by a native speaker would still lift the quality level.

Any bones left for quantum mysticism?

If this model does pass the tests of internal logical consistency, it is still left to face tests of experimentation. In contrast, some of the competing paradox-ridden and n-dimensional theories it targets do not appear to pass the tests of logic, Ockham’s Razor included. Some may be rejected on logical grounds alone. Others might be rejected if there exists a competing theory that both explains the observations and better passes the tests of logic.
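Since the review names the two classical force laws the model claims to build everything from, here is a minimal self-contained sketch of just those two functional forms. Nothing in it comes from Jerôme's model itself; the spring constant and the example charges and distances are arbitrary illustrative values.

COULOMB_K = 8.9875517873681764e9  # N*m^2/C^2, Coulomb's constant

def hooke_force(displacement_m: float, k: float = 1.0) -> float:
    """Restoring force of an elastic medium: F = -k * x (all particles)."""
    return -k * displacement_m

def coulomb_force(q1_c: float, q2_c: float, r_m: float) -> float:
    """F = k_e * q1 * q2 / r^2; positive means repulsion (charged particles only)."""
    return COULOMB_K * q1_c * q2_c / r_m**2

e = 1.602176565e-19  # elementary charge in coulombs, as quoted earlier
# Electron and positron one Bohr radius apart: attractive (negative) force.
print(coulomb_force(-e, +e, 5.29e-11))  # about -8.2e-8 N
print(hooke_force(0.01, k=50.0))        # -0.5 N for a 1 cm stretch

Whether these two laws really suffice to reproduce Strong-force phenomenology is, of course, exactly the kind of claim that the experimental tests mentioned above would have to settle.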
Ideological opponents of “metaphysical” a priori logic would have been loath to reject a hypothesis based on logic alone. Yet not doing so has probably contributed to allowing dead-end speculations to run unchecked, permeating scientific culture and poisoning tendencies in pop philosophy for a century. The Spacetime Model could put a damper on many of the popular claims of the “new physics supports mysticism” genre, particularly claims that logic, predictability, and consistent causality are mere illusions, or that subject-object differentiation is not to be relied upon.

That said, there are still some extraordinary and mind-bending claims to be found in the Spacetime Model itself that might easily be viewed as resonant with certain claims found in some traditional contemplative traditions. In the Spacetime Model, it is not only that “all is spacetime,” but more specifically that particles (matter), waves (energy), and space (medium) all consist of the same stuff, which is, in this view, “elastic four-dimensional spacetime substructure.” From there, consider some traditional formulations such as the Tibetan “non-duality of form and formlessness” and the typically pithy Zen “not one; not two.” Matter, energy, and space are presented as being both different from each other (not one) and also consisting only of the same spacetime stuff as one another (not two).[6]

However strange images from our attempts to understand the deep structures of physics may appear, and even though atoms are quite clearly “99.999% vacuum with 0.001% waves or matter-energy,” as Jerôme puts it, none of this has any bearing on the reality in which we as persons do and must live and act. Matter, however strange its ultimate substructure, still behaves according to the laws of causality, and so does its substructure. Probability is ultimately a measurement of our own degree of ignorance about the precise operations of physical causality[7] (a point illustrated with a small sketch below). Moreover, what is visible at one level of magnification (atomic level: mostly empty) does not necessarily also apply to the view at another level of magnification (the scale at which we live and act, where stuff does bounce off walls).

As Hans-Hermann Hoppe has pointed out,[8] Paul Lorenzen, in Normative Logic and Ethics,[9] argues that all of our knowledge of the natural sciences, even physics itself, presupposes certain a priori true assumptions and norms that are not derivable from “empirical” experimentation, a set of knowledge types he labels protophysics, which are “definitions and the ideal norms that make measurements possible” (p. 60). Nothing we discover by measurement can validly contradict the presuppositions of measuring, or we will have pulled the rug out from under our own claims, rendering them nothing more than sounds, chirps, or barks!

And the winner is…?

So where is the grand reaction to Jerôme’s rather comprehensive challenge to conventional physics models and hypotheses? I have not been able to find much of one online, either by specialists or anyone else. Is it because our Mr. Jerôme is just dead wrong and hopelessly naïve in his imaginings? Is it because there are so many competing “theories of everything” out there, a dime a dozen? Or might there be something special about this one? What if this Spacetime Model really is a simpler, more elegant explanation of all the observations than the mixed-and-matched crop of better-known theories it challenges, and is compatible with experimental results and QM/GR mathematics, as claimed?
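To make the probability-as-ignorance point concrete, here is a small sketch of my own, using the standard textbook logistic map (which appears nowhere in Jerôme's book): a rule with no randomness in it anywhere produces outcomes that look random to any observer whose knowledge of the initial state is even slightly imprecise. Probability statements about such a system measure the observer's ignorance, not any indeterminism in the system.

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)  # fully deterministic update rule

def trajectory(x0: float, steps: int):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two observers whose knowledge of x0 differs only in the 12th decimal:
a = trajectory(0.123456789012, 50)
b = trajectory(0.123456789013, 50)
# Early on the trajectories agree; after roughly 40 steps they are unrelated,
# so any forecast must retreat to probability talk -- a measure of ignorance.
for n in (0, 10, 40, 50):
    print(n, round(a[n], 6), round(b[n], 6))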
What if it does explain much of what is in need of explaining in a better way – not perfect, just better – than the competition? A conventional mindset would have to quickly reject such possibilities: Let’s get real. He has no official position in the physics community. His speculations and diagrams are self-published on his own website! Certainly it must just be an amateur effort compared to the real experts in the establishment with their mysterious, peer-reviewed ways!

Maybe. But in light of our earlier discussions of the philosophical background radiation and our distinction between mathematical description and causal-rational explanation, such a conclusion may now look less reasonable than it might have. There certainly are mathematical geniuses at work and checking on each other in a language very few people can speak well enough to even listen in on. That is all to the good as far as it goes (gains from specialization), but is it also a good excuse for not making sense in causal-rational terms? Maybe these are two separate matters that deserve more robust differentiation. So I retain doubts about writing this all off based on institutional factors such as academic pedigree and position.

Yet speaking of institutional factors, we do know that establishments in many fields tend to want to remain…established. We also know that one of the ways guilds and priesthoods have always tried to preserve advantages and privileges is through the construction and preservation of a public image that highlights the great mystery and impenetrability of their subject, which is obviously accessible only to the anointed! The very first line of the copyright notice page of Jerôme’s book reminds us that: “Scientific peer journals do not accept papers from independent researchers whatever their content.” Whatever their content? Including author bio as one factor among others in accepting papers would surely make sense, but it is hard to imagine something less “scientific” and more pre-modern and guild-like than excluding intellectual work based on the author’s institutional status alone.

Fortunately, in this day and age, Mr. Jerôme’s carefully developed, clearly presented set of arguments is just a click away, at no cost but time and mental effort, for anyone to review, consider, and attempt to refute or improve upon (or maybe print out and tape to the doors of CERN?). However this comes out, though, we ought to keep up the hard work of applying the laws of logic even when it is not easy, and not start mumbling in resigned despair: “It doesn’t really matter. Who is Jacky Jerôme anyway?”

Postscript: What about Beckmann?

After initially writing a draft of this review of Jerôme's book, an early reader led me to Questioning Einstein: Is Relativity Necessary? by Tom Bethell, which is largely a presentation and update for general readers of the ideas of Petr Beckmann, as presented in the more technical Einstein Plus Two. This is certainly also worthy of a careful reading, and it also touches many issues of the relationship between empirical knowledge, the role of logic, and problems with “official” knowledge institutions that I address in the review of Jerôme’s book. However, the Beckmann/Bethell line of thinking operates only at the "macro" relativity level.
In quick summary, it argues that, contrary to conventional wisdom, Einstein’s special theory of relativity is on weaker, not stronger, empirical grounds than general relativity, whereas general relativity is stronger empirically but was made unnecessarily complex in order not to contradict the earlier special relativity claims. The observed evidence for general relativity, claim these authors, can be explained using classical physics, whereas special relativity is essentially “unfalsifiable” (its assumptions inevitably "don’t apply" to any case of evidence that actually threatens to contradict it). I do not discuss the Beckmann/Bethell line here in detail so as to focus on Jerôme’s theories, but my general impression is that the Jerôme and Beckmann/Bethell perspectives do not appear necessarily contradictory. Meanwhile, Jerôme’s model appears to make even stronger claims, which go beyond the behavior of gravity and mass to explaining what both gravity and mass are in causal-rational terms that are built up right from the micro level. One Beckmann/Bethell addition to that might presumably be to modify Jerôme’s language for describing the macro level to further remove specifically Einsteinian terminology, even “four-dimensional spacetime,” which Jerôme is still fond of maintaining in his book (and which I will also keep in my review below for simplicity). I found no evidence that either of these parties is aware of the work of the other, and yet I do not see any obvious reason why both alternative theories could not be bounced off of one another and probably cross-improved for the trouble. The Beckmann/Bethell line of thinking is also summarized elsewhere.

[1] Indianapolis: Liberty Fund (2006), 117.

[2] While context does or should limit the meaning of “everything” here, the “Theory of Everything” formulation still ought to be qualified to head off reductionist interpretations. As the American philosopher Ken Wilber has often pointed out, any physics “theory of everything” cannot cover “everything,” as it excludes phenomena of consciousness viewed from the interior, that is, as Mises might phrase it, from the subjective perspective of an acting person. We cannot deny that such a perspective exists without self-contradiction, and it is not reducible to material description. Subjective phenomena of consciousness are emergent from, but not reducible to, physical phenomena. Thus, “everything” should at least be used with this reservation to avoid what Wilber calls “flatland,” as described, for example, in Integral Psychology. Boston: Shambhala (2000), pp. 70–71.

[3] Ludwig von Mises, Human Action: A Treatise on Economics. The Scholar’s Edition. Auburn, Alabama: Mises Institute (1998 [1949]). Murray N. Rothbard, Man, Economy, and State, with Power and Market. The Scholar’s Edition. Auburn, Alabama: Mises Institute (2004 [1962, 1970]).

[4] While the original text is quite clear and easy to read, the author is not a native speaker of English, and in citing quotations, I have made occasional typographical alterations to language and punctuation only to head off unnecessary distraction for readers of the present article.

[5] The Ultimate Foundation of Economic Science (pp. 19–20).

[6] The Spacetime Model also suggests an uncanny depth to the basic elements of Ken Wilber’s integral four-quadrant model of all phenomena, one element of another “theory of everything,” but one not limited to the field of physics. Various accounts may be found in: The Marriage of Sense and Soul.
New York: Random House (1998), esp. Chap. 5; Integral Psychology, esp. Chap. 14; and Integral Spirituality. Boston: Integral Books (2006), esp. Introduction and Chaps. 1, 7, and 8. The second stage of the start of spacetime within the Spacetime Model is an expansion of a single sCell until it splits into two identical sCells (and then four, eight, sixteen, etc.). Here, 14.1 billion years ago, we already have the singular/plural distinction that forms the vertical axis of Wilber’s model. Then, at the very first sign of matter, from the rare appearance of density variation in a few sCells, we find a positron and electron pair, and with each of those we already have closed volumes defining an interior and an exterior. That polarity forms the horizontal axis of Wilber’s model. The Spacetime Model thus offers possible root foundations for the construction of the integral four-quadrant model from among the very first things to ever happen in the history of spacetime.

[7] As Mark R. Crovelli recently summarized this view: “If every event and phenomenon which occurs in the world has an antecedent cause of some sort, then we are forced to say that probability is a measure of human ignorance or uncertainty about the causal factors at work in the world…Man’s uncertainty in such a world could only stem from his inability to comprehend or account for all of the relevant causal factors at work in any given situation” (p. 166), in “All Probabilistic Methods Assume A Subjective Definition For Probability,” Libertarian Papers 4 (1): 163–174.

[8] “On praxeology and the praxeological foundation of epistemology,” The Economics and Ethics of Private Property, 2nd Edition. Auburn: Mises Institute (2006), pp. 265–294.

[9] Mannheim: Bibliographisches Institut (1969).
In quantum physics, a bound state is a special quantum state of a particle subject to a potential such that the particle has a tendency to remain localised in one or more regions of space. The potential may be external or it may be the result of the presence of another particle; in the latter case, one can equivalently define a bound state as a state representing two or more particles whose interaction energy exceeds the total energy of each separate particle. One consequence is that, given a potential vanishing at infinity, negative-energy states must be bound. In general, the energy spectrum of the set of bound states is discrete, unlike free particles, which have a continuous spectrum.

Although not bound states in the strict sense, metastable states with a net positive interaction energy, but long decay time, are often considered unstable bound states as well and are called "quasi-bound states".[1] Examples include certain radionuclides and electrets.

In relativistic quantum field theory, a stable bound state of n particles with masses m_1, …, m_n corresponds to a pole in the S-matrix with a center-of-mass energy less than m_1 + … + m_n. An unstable bound state shows up as a pole with a complex center-of-mass energy.

Definition

Let H be a complex separable Hilbert space, let U = {U(t) : t ∈ ℝ} be a one-parameter group of unitary operators on H, and let ρ = ρ(t_0) be a statistical operator on H. Let A be an observable on H and let μ(A, ρ) be the induced probability distribution of A with respect to ρ on the Borel σ-algebra of ℝ. Then the evolution of ρ induced by U is bound with respect to A if

  lim_{R→∞} sup_{t ≥ t_0} μ(A, ρ(t))({x : |x| > R}) = 0.

More informally, a bound state is contained within a bounded portion of the spectrum of A. For a concrete example: let H = L²(ℝ) and let A be position. Given a compactly supported initial state ρ = ρ(0):

• If the state evolution of ρ "moves this wave packet constantly to the right", then ρ is not a bound state with respect to position.
• If ρ does not change in time, i.e. ρ(t) = ρ(0) for all t ≥ t_0, then ρ is bound with respect to position.
• More generally: if the state evolution of ρ "just moves ρ inside a bounded domain", then ρ is bound with respect to position.

Let A have measure-space codomain (X, μ). A quantum particle is in a bound state if it is never found "too far away from any finite region R ⊆ X", i.e., using a wavefunction representation,

  0 = lim_{R→∞} P(particle measured outside the region of radius R) = lim_{R→∞} ∫_{|x| ≥ R} |ψ(x)|² dμ(x).

Consequently, ∫_X |ψ(x)|² dμ(x) is finite. In other words, a state is a bound state if and only if it is finitely normalizable. As finitely normalizable states must lie within the discrete part of the spectrum, bound states must lie within the discrete part. However, as von Neumann and Wigner pointed out, a bound state can have its energy located in the continuum spectrum.[6] In that case, bound states still are part of the discrete portion of the spectrum, but appear as Dirac masses in the spectral measure.

Position-bound states

Consider the one-particle Schrödinger equation. If a state has energy E < max(lim_{x→∞} V(x), lim_{x→−∞} V(x)), then the wavefunction ψ satisfies, for some X > 0,

  ψ″/ψ = (2m/ħ²)(V(x) − E) > 0 for x > X,

so that ψ is exponentially suppressed at large x. Hence, negative-energy states are bound if V vanishes at infinity.

A boson with mass m_χ mediating a weakly coupled interaction produces a Yukawa-like interaction potential,

  V(r) = ±(α_χ/r) e^(−r/ƛ_χ),

where α_χ = g²/4π, g is the gauge coupling constant, and ƛ_i = ħ/(m_i c) is the reduced Compton wavelength. A scalar boson produces a universally attractive potential, whereas a vector attracts particles to antiparticles but repels like pairs.
For two particles of mass m_1 and m_2, the Bohr radius of the system becomes

  a_0 = (ƛ_1 + ƛ_2)/α_χ

and yields the dimensionless number

  D = ƛ_χ/a_0 = α_χ (m_1 m_2)/((m_1 + m_2) m_χ).

In order for the first bound state to exist at all, D ≳ 0.8. Because the photon is massless, D is infinite for electromagnetism. For the weak interaction, the Z boson's mass is 91.1876±0.0021 GeV/c², which prevents the formation of bound states between most particles, as it is 97.2 times the proton's mass and 178,000 times the electron's mass. Note however that if the Higgs interaction didn't break electroweak symmetry at the electroweak scale, then the SU(2) weak interaction would become confining.[7]

References

1. Sakurai, Jun (1995). "7.8". In Tuan, San (ed.). Modern Quantum Mechanics (Revised ed.). Reading, Mass.: Addison-Wesley. pp. 418–419. ISBN 0-201-53929-2. "Suppose the barrier were infinitely high ... we expect bound states, with energy E > 0. ... They are stationary states with infinite lifetime. In the more realistic case of a finite barrier, the particle can be trapped inside, but it cannot be trapped forever. Such a trapped state has a finite lifetime due to quantum-mechanical tunneling. ... Let us call such a state quasi-bound state because it would be an honest bound state if the barrier were infinitely high."
2. Winkler, K.; Thalhammer, G.; Lang, F.; Grimm, R.; Denschlag, J. H.; Daley, A. J.; Kantian, A.; Büchler, H. P.; Zoller, P. (2006). "Repulsively bound atom pairs in an optical lattice". Nature 441 (7095): 853–856. arXiv:cond-mat/0605196. Bibcode:2006Natur.441..853W. doi:10.1038/nature04918. PMID 16778884.
3. Javanainen, Juha; Odong, Otim; Sanders, Jerome C. (Apr 2010). "Dimer of two bosons in a one-dimensional optical lattice". Phys. Rev. A 81 (4): 043609. arXiv:1004.5118. Bibcode:2010PhRvA..81d3609J. doi:10.1103/PhysRevA.81.043609.
4. Valiente, M. & Petrosyan, D. (2008). "Two-particle states in the Hubbard model". J. Phys. B: At. Mol. Opt. Phys. 41 (16): 161002. arXiv:0805.1812. Bibcode:2008JPhB...41p1002V. doi:10.1088/0953-4075/41/16/161002.
5. Wong, Max T. C. & Law, C. K. (May 2011). "Two-polariton bound states in the Jaynes-Cummings-Hubbard model". Phys. Rev. A 83 (5): 055802. American Physical Society. arXiv:1101.1366. Bibcode:2011PhRvA..83e5802W. doi:10.1103/PhysRevA.83.055802.
6. von Neumann, John; Wigner, Eugene (1929). "Über merkwürdige diskrete Eigenwerte". Physikalische Zeitschrift 30: 465–467.
7. Claudson, M.; Farhi, E.; Jaffe, R. L. (1 August 1986). "Strongly coupled standard model". Physical Review D 34 (3): 873–887. doi:10.1103/PhysRevD.34.873.
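The criterion stated above lends itself to a quick numerical sanity check. The sketch below (an editorial addition, not part of the article) evaluates the dimensionless number D for electron and proton pairs with a Z-mediated Yukawa interaction; only the masses are taken from the figures quoted above, and the coupling value alpha is an arbitrary illustrative assumption rather than a measured weak-interaction constant.

def D(alpha: float, m1: float, m2: float, m_mediator: float) -> float:
    """Dimensionless bound-state number: D = alpha * reduced_mass / m_mediator."""
    reduced_mass = m1 * m2 / (m1 + m2)
    return alpha * reduced_mass / m_mediator

M_E = 0.000510998918  # electron mass, GeV/c^2 (from the keV figure above)
M_P = 0.938272        # proton mass, GeV/c^2
M_Z = 91.1876         # Z boson mass, GeV/c^2 (quoted above)

for label, m1, m2 in [("e-e", M_E, M_E), ("p-p", M_P, M_P)]:
    d = D(alpha=0.03, m1=m1, m2=m2, m_mediator=M_Z)  # alpha is assumed
    print(f"{label}: D = {d:.2e}  first bound state possible: {d >= 0.8}")

Both values come out many orders of magnitude below the ~0.8 threshold, consistent with the statement that the Z boson's large mass prevents the formation of bound states between most particles.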
The God Within

2012, Conspiracy

Mike Adams, the author of this documentary, says he always admired physicists. He says physicists seek answers by asking questions of nature, and when they follow with a rigorous scientific approach to the quest for knowledge, they refuse to be sidelined by dogma, personal belief, or trickery. Science, in its most pure form, is about the search for truth.

Mike is not referring to the bastardization of science by modern corporations, which use the language of science to push a kind of intellectual tyranny involving for-profit GMOs, vaccines, and pharmaceuticals. He's talking about pure, non-corporate-driven science and the quest for human understanding. This search for human understanding has led him through a number of fascinating areas of study, but he found the most fertile ground for exploration in the fields of quantum physics, the many-worlds interpretation, and the study of consciousness.

Along that path, he decided to read a book by famed physicist Stephen Hawking and co-author Leonard Mlodinow. As a fan of Hawking's work over the years, Mike relished the idea of reading his explanations of the theory of everything, The Grand Design, the invisible hand behind it all. What he found in the book, however, rather surprised him. On the very first page of the book, Mike found himself quite disappointed in the apparent lack of understanding of the universe from someone as intellectually capable as Hawking. His words reflect what can only be called "the great failing" of modern-day physics: the failure to address the meaning behind the math. Far too many mainstream physicists seem stuck in what can only be called the Newtonian era of consciousness; that is, they don't yet grasp the idea that consciousness exists at all.

Hawking's book, The Grand Design, did serve another useful purpose in Mike's search for understanding. It nicely summarized the outmoded view of conventional physics. This mainstream view of physics is to reality what conventional medicine is to healing. In other words, it has all the technical jargon but none of the soul, and so it misses the whole point. According to Mike, conventional physics is the clever conglomeration of high-level mathematics desperately seeking to avoid any discussion of what it all means. You're not allowed to talk about consciousness or free will or the spooky connectedness that has been experimentally demonstrated to exist between all things in the universe, because that brings up too many questions that make conventional physicists uncomfortable... questions about God, or the intersection of intention with the physical universe, or free will.

User Reviews

1. I spent many years at ORNL, and my wife is still in the scientific community, and something I've learned is that scientists get further away from atheism as they age and move toward agnostic perspectives. The more you know, the more you see how little anyone actually knows. Making blanket statements about what is and what is not is juvenile and pointless.

2. A doc with a low rating sure brings out a rather long list of comments... really!? I'd call that very misrated. I loved it. I've thought along these same lines decades before ever hearing these things discussed by others. Just makes sense. The same way new scientific theories are discovered. They are first perceived.

3. This contains wisdom. Minus the idea that there is an intelligent, interested creator of our universe.
Why should we not be simply a minuscule part of one of that creator's cells, which he/she (more likely IT) pays no attention to? The point is, science will not cease to be naive until it realizes it will ALWAYS be a child (and of course there is no literal father or mother) and must always continue to grow, much like our selves. Perhaps there is a race of super-intelligent, massive beings, and our universe is simply a cell in something around them, like a blade of grass or a mushroom.

4. This is awful. Don't waste your time watching or trying to make sense of this nonsense.

5. JNANI is what this man is trying to explain.

6. I think it is interesting how emotional the comments are. Most of them are critical of Mr. Adams and try to discredit him by using "rational" logic, but it is evident that their reasoning is cramped within a larger emotional package of contempt and anger. I am not vested in any one side of the argument so I am NOT discounting anyone's reasoning, but simply pointing out how emotional the responses have been. Why is that?

7. This guy is flat-out wrong... how can he call Stephen Hawking deterministic, whose work is intricately connected to quantum physics and thermodynamics? Tweaking facts here and there, he is suggesting we toss out everything we have known so far... and we will understand that that is how we have come so far as humans.

8. Mr. Adams could have saved us all some time had he defined and proven the existence of all those concepts such as soul, divine, God, or such. Gee, maybe real scientists don't 'believe' in those ideas because there is no evidence to prove they exist; could that be it? For example, what is the Being of God? Our being is atoms, molecules, and such, which we can measure and describe using the scientific method, but what is God's being; what is God made of? What is a soul made of? Does it take up space? Where is any evidence for anything except what we call natural? Look, let's face it, everyone is guessing. No one knows all the answers, obviously. Mike Adams is guessing and making it sound as if he knows something, then putting down the men who are doing science, all to support his religious suppositions, his guesses. Scientists work with what we call reality, not unreality. They are the cutting edge of modern science. But Mike Adams, from his laboratory at Divinity Now, what ideas does he have, or is he just BSing everyone? It's that edge of arrogance in his voice that has got me going... I would suggest interested thinkers read Marvin Minsky's wonderful book "The Society of Mind", in which he describes his ideas of how mind works. For example, see if you can follow this: "Everything that happens in our universe is either completely determined by what has already happened in the past, or else depends in part on random chance. Everything, including that which happens in our brains, depends on these and only on these: There is no room on either side for any third alternative. Whatever actions we may 'choose', it cannot make the slightest change in what might otherwise have been -- because those rigid, natural laws already caused the states of mind that caused us to decide that way. And if that choice was in part made by chance -- it still leaves nothing for us to decide." When I was young they talked about a 'steady state' universe, that is, until larger telescopes were invented and the concept of the 'Big Bang' came along. So what does anyone know about a Big Bang? Will it still be talked about in 50 or 100 years?
Isn't it more logical to think that the 'universe', or 'multiverse', wasn't created but has existed forever? And that it goes on forever?

9. Fell asleep after 12 minutes of the most arrogant monologue I've ever heard.

10. Now that's how you shave off the edges of a square peg to make it fit through a round hole... or at least attempt to do so.

11. Hilariously wrong. If you have no desire to actually learn anything about quantum physics, this is the documentary for you. The narrator claims that if God isn't real then genocide is OK. (Exact words: "The belief that human beings lack souls or consciousness is dangerous for a far more serious reason. It can provide a scientific basis for heinous crimes against humanity including genocide. From Hawking's point of view of soul-less determinism there is no reason why, the United Nations for example, can't reduce regional overpopulation by simply committing genocide against human beings through population reduction programs. Because humans aren't 'real people with souls and consciousness' the poisoning of them does not violate any real ethical boundaries - according to that line of thinking.") Um... sorry, Mike Adams, but that logic doesn't make one lick of sense to the rational human beings of the world who aren't Christian and don't believe in souls. Maybe you personally would suddenly start thinking genocide is OK, but that says something about you personally; it says nothing about humanity as a whole. He then goes on to say that the government is already euthanizing millions of animals every year (he doesn't say why - but it's to control animal populations from getting out of control: when animal populations get out of control, 90% of their population will starve to death in a single year when food becomes scarce - if he doesn't understand that, I have no idea where he gets off talking about quantum physics), but more importantly, Adams states that the government justifies these actions by claiming that animals don't have consciousness. (He wanted to say souls, but he knew everyone would call him on his shit if he did so.) Nope. Sorry, Adams, that's just wrong, and I feel bad for you if you actually believe your own words. Adams seems to think his incredibly simple philosophical statements put holes through Stephen Hawking's interpretations of consciousness, which is actually kind of sad to listen to. Honestly, this doc should be called "The Pseudo-Intellectual Layman's Guide to Misinterpreting Everything".

1. Adams actually makes a valid point. He states that modern physicists have essentially defined human beings as biological robots, i.e. our behaviour, thoughts, and feelings are ultimately the result of biochemistry; that we are complex machines. Hawking has actually suggested that the human mind may one day be uploaded into a computer! What Adams is suggesting is that if our thoughts and behaviour are simply the product of a biochemical reaction, then human beings are basically machines with no soul or free will. Surely that divests us of any particular rights? The only difference between a human and a rodent is that the former has a more complex brain, but both are essentially 'machines'. If you created a robot with artificial intelligence and an IQ of 130, how would that robot be any different to a human being? Its power source may be different - electricity instead of food - but if it has genuine intelligence and is sentient, then would it be any more immoral to terminate it than it would to kill a human being? If so, why?

12.
"Oh boy", another (born again) trained circus monkey taught to do cute tricks. What whacko group of religious lunies, and delusional fanatics funded this silly tripe? You have grossly misinterpreted the intent, and meaning of Physics professionals, the intent, and meaning of science itself, and the meaning of determinism. The only "dangerous" person I see here is you, disingenuous religious clods, and nincompoops who use cheep circus monkey tricks to wow the uneducated with your silly assertions, and gratuitous misuse of scientific jargon. It's just preconceived conclusions of amateurs, and con artists. Any damned fool can criticize the top physicists when they aren't there to correct you on your misinterpretations of their scientific, systems of observation, experimentation, and hypothesis. You've done a great job distorting the true conclusions of theoretical physics which by no means imply a state of total awareness, and total knowledge, about the universe or it's mechanisms. I actually heard you talk about the term (spirituality). "Really", GMAB. That term may suit your fearful clueless abdication of adult professional responsibility, and your desire to justify your obvious motivations to push the deity, or baby Jesus theory, but it has no place in the world of science ,which seeks neither to prove, or disprove the existence of a deity. Science seeks only to explain phenomena by testing, experimentation, and repeated retesting, and repeatability of those experiments, and their conclusions until they collapse in falsehood, or stand like a wall of undeniable scientific theory, and law. Spirituality, is merely an expression of "mystery" aka "ignorance", concerning phenomena that is not understood by the individual. It is a clear sign that you are willing to abdicate your responsibility as a true scientist, and student of theoretical physics for the world of conjecture, mysticism, and primitive psychotic wanderings in the world of "currently" unprovable, untestable fantasy. There is no doubt many scientists have made the mistake of (false omniscience) whether by innocent false conclusion, or by wanton arrogance, only to be made fools of later on. That doesn't imply however that science, or serious dedicated scientists, of integrity, and honesty believe they have reached the zenith, of all knowledge. That is BS. The thing you repeatedly hear from every Physics professional, every physics professor, is a full open ,and unequivocal admission that the universe is still full of unknowns, and contradictions that we are still in pursuit of: Dark energy, dark matter, multiverses, pre expansion realities, even the existence of the big bang, or the singularity. These things are openly described by the community as the logical conclusion of our (current mathematical models), and not in any way to be taken as an end of research, experimentation, and peer review, an endless ongoing process. You imply that the quantum realities of the universe are somehow denied by some scientists, when in reality you and 99.999% of the worlds people wouldn't have any idea of quantum theory unless those scientists you criticize, discovered the properties, and realities of the quantum world for you to explore, and discuss in the first place. 
One of the top theoretical physicists, Leonard Susskind, who countered critical aspects of Hawking's assertions, states that in the gravitational netherworld of black holes, information, like energy and momentum, is conserved. Your assessment of (biological determinism) is grossly unrealistic and totally wrong-headed. The idea that somehow we have slipped into your approaching Armageddon world of social collapse is pure childish, misinformed, untrained absurdity. That's why people go to law school: to learn about silly theories like yours that completely distort the world of jurisprudence, human psychology, and the realities of life in the real world. The idea of democracy, and a jury of your peers, implies that human organisms have the power of observation and reason, based on the needs of the organism, and the greater society as an extension of that organism, for a species that is a social being. Laws, as any criminal or good lawyer will tell you, were made to be broken, and interpreted. It is the purpose of a jury, and a judge, to determine the meaning and application of law, based on the needs of the biological beings on the jury and the society as a whole. The person committing a heinous crime in one time and place may hang, yet in another venue may walk away as an innocent person by virtue of (perceived insanity), or be freed by virtue of the whims and local beliefs of the jury. The condition implies that the biological needs of the wider society, and the individuals on the jury, make all the difference in who is guilty and who is innocent: a matter of local culture, and a matter of individual understanding and perception. That is the nature of justice in a democracy, for good or ill, and will always be so. There is no cause to determine the collapse of reason based on a deterministic approach to justice. The old expression goes, "beauty is in the eye of the beholder", or, put another way, my lawyer can beat up your lawyer. Your entire effort here, it seems to me, is mere discount-store tripe: not really worth the effort of serious students to consider. Those of us who know the ancient history of mystical predictions and conjecture about the mysteries of the universe, as spirits wandering in the ether, are rightfully wary, suspicious, and disdainful of silly conjectures and adherence to mystical non-reason. We remember the inquisitions and debauchery of religious soothsayers, clergymen, papal despots, religious fundamentalist defilers of truth, and the accusatory, self-serving lunacy that put Galileo Galilei in the hands of the ecclesiastical court. We despise the judgement of innocent victims by religious zealots and torturers, from the Catholic inquisitions to the lunacy of ISIS fundamentalist psychopaths. I do not conclude, or deny, that some form of deity or cognizant being may be found at the heart of all existence and reality someday, in some way, because for now it is beyond our dimensional understanding. Like millions of people, I would love to believe that there is a benevolent, loving deity out there to welcome me home when I die. Unfortunately, there is absolutely no evidence for that happening, and no reason for me to believe it will ever be so. I will keep an open mind, given the fact that I do not possess all knowledge.
Until some tangible vision appears in my dimension of reality and perception, I shall remain skeptical of any such claim, not out of spite or disrespect, but by adherence to the realities given to me by the gifts of human perception: sight, sound, touch, taste, and smell, interpreted, accepted, or rejected by a rational biological brain in the dimension I currently inhabit. I suggest we all do the same, and not be conned by the self-serving delusions of charlatans and priests who seek to control, deceive, and, yes, profit from the fears and frightened dreams of the innocents.

13. First, a confession: try as I might, I only made it to 31 mins. Modern computer processors and operating systems are modeled on the brain. While there is no 'consciousness' present, someone stepping out of 1970 and into 2014, upon encountering a modern computer, would be at the least astounded. Some might even call them, to borrow a term from Mr. Adams, "spooky". In fact, given the level of intuition and interactivity available from current operating systems, the more superstitious time traveler might even perceive a level of consciousness. They would be wrong, but they wouldn't think so. I wonder what their narrative would be.

14. Wow, this guy agrees with almost all of what I believe. *sigh* Too bad he isn't of my religion.

15. Kudos. Sooner or later we'll figure it out and then realize!

16. Mr. Adams, you have repeatedly said that humans are 'made in God's image', but you don't explain in what ways we are made in 'God's' image. The National Catholic Almanac says that God is "infinite, immortal, holy, eternal, immutable, omnipotent and omniscient, perfect, supreme", and more, but none of these apply to humans. What is 'God's' Being? Our Being is flesh and blood and cells, but this is not God's Being. What could a God be made of, a God that shows up 2000 years ago with his 'son', who say they are coming back, but don't… I suggest you do this: since the film has shown us what Mr. Adams has imagined to be true, but which he has not proven in any way, perhaps he should do another video, such as "Where did God come from?" Science is an effort, an ongoing attempt to describe what we see around us, and to verify by experimentation and confirmation of others. Religion and God-belief are guesses, myths, and stories from the past with no possible verification, only individual reports that change as one goes from Christianity to Judaism to Hinduism, etc. The other side of the story is presented by George H. Smith in his book "Atheism: The Case Against God" (1989), which should be read by all, as it explains why the 'God' concept is impossible. Another part of my difficulty with religion is the existence of evil in the world. Does God not have responsibility for the tsunamis that inflict pain and suffering upon children who have not yet begun to live? Or the famines where humans are reduced to eating their children, such as in 2 Kings 6:29? Or the two-headed children born into a world of pain? Your statement that 'You're not allowed to talk about consciousness or free will…' is absurd; consciousness and free will are being studied intensely by neuroscientists, are they not, but grounded in reality, not in imagination. Scientists studying what they can agree upon is, I believe, the procedure called peer review. Is that not sufficient?

17. This is nothing more than a Stephen Hawking bashing party. I do not believe in any supernatural beings, any more than I believe there is a real Daffy Duck. Nice try. Peace!

18.
"Hawking's book, The Grand Design, did serve another useful purpose in Mike's search for understanding"??? Don't make me laugh. You can't even call it science; it's science fiction, where he makes us believe a nebulous set of theories and no observable proofs are possible: a theory with no experimental support whatsoever, hence not a theory of physics at all.

19. To truly understand science, one must accept that the answers to how we came to be are irrelevant. Accept this reality and work towards making life more tolerable for the current group of living beings around you. Being self-absorbed in trying to explain existence does little to accomplish the task at hand. Dark matter, string theory, multiple dimensions: all fantasy. Weave a rug, for "God's sake". To create a being as the explanation for that which is observable, and assume it cares about or even knows our level of awareness, is as ridiculous as the self-absorbed scientists wasting time trying to measure a particle which may or may not be in a particular place at any given moment. Learn to be happy in the moment, and plan for the future today to ensure survival of our species. Plant a crop, build a shelter, connect communities, and make love as often and with as many as life permits. Your actions today determine the future shared by those in it, including yourself if you survive to be in it. A sexually repressive fundamentalist view espousing monogamous relationships is insane. Diversity is key to species survival.

20. Animals are people too... so say the bumper stickers. I think it should be: people are animals too! And the religions that say we are above other creatures, created by God in His image, should look back three and a half billion years to when we all looked like single-cell bacteria. Not because a book written 2000 years ago says different, but because three-billion-plus years of rock strata has left evidence showing a timeline of where we came from and where we are now. I can deal with the concept of God. I just can't deal with the self-righteous dogma many spew as a feeble attempt to challenge valid scientific findings. If God is responsible for all things, He lit a match 14.7 billion years ago, turned, and has not looked back on "us" since.

21. I find it ridiculous just how closed-minded most pro-science advocates have become. Seriously, what matters more: the universal pursuit of truth, or the thunderous applause of your intellectual peers? Many seem more concerned with appearing to have all the answers rather than what answers they actually have.

1. I think what's frightening is how few answers (precisely ZERO) religion has. It's just pure dogma and ignorance that, thankfully, people are starting to see for what it is - i.e. a fraud.

2. I'm not anti-science, but I'm also not prepared to blindly choose to believe 100% of everything from a bunch of people who think their own sparkling logical intellect is the mightiest thing in the known universe, especially when the journey of science has always been a process of unfolding truth and revision, hence incomplete. Religion is flawed, but the intention is not (sure, some set out with intent to rob a naive congregation). It is foolish to think one has to be either in one camp or the other 100% when neither has all the answers.
As far as ZERO answers is concerned, well, the fact that these institutions and beliefs have endured for thousands of years, even into the era of modern logic, does prove that our connection with such beliefs on some level may be more fundamentally linked to our being (hence it exists for a reason we don't yet understand) than you're probably willing to admit. Science will never rid itself of its unwanted guest that is spiritual ideas. To believe that after death there is nothing is quite unscientific, as there is no realm of nothing: everything is something. What that after-death something might be... I don't know... but that is more scientific than saying "I know! There is nothing!" when nobody has returned to confirm such.

22. Blind faith in scientism has led many, including myself, to a delusional state that can only be held up by keeping oneself ignorant of any dissenting viewpoints. If anyone questions scientism, they are marginalized as being creationist or conspiracy theorists or r*tarded. Ad hominem attacks come about because those who defend things like the current model of our universe cannot defend something they don't even understand themselves.

1. You may understand scientism, but you don't understand science. Science must be corroborated, and it must be capable of being repeated by others. Scientists aren't ignorant of other possibilities, far from it, but they must assess the evidence that is presented for opposing theories, such as Creationism. When you introduce theories of 'gods' and 'goddesses' and 'devils' that have no referents in reality, yes, there is opposition, and rightly so. Religion looks backwards in time; science looks forward, with intention. And beliefs are the social 'glue' that identifies and holds people together in a tight group; they are a socializing tool, and dissenters are punished or ostracized. Science has learned that the 'being' of humans is cellular, flesh and blood, but what is the 'Being' of God? We can describe ourselves in scientific language, but to describe us as made in 'God's image' requires not science, but simply belief. The purpose of science is to carry us forward, to understand reality and the future, while the purpose of religion is to promote group cohesion by inculcating a belief system that may or may not have any similarity to the world around us.

2. I understand your perspective, but it is a bit naive. Cosmology, for one, is an area that relies on theoretical manifestations with no repeatable scientific experiments to verify (for the most part, that is). For example, black holes are created out of fallacious maths that no one in the field seems to mind. Any beginning math student knows you cannot divide by zero, and yet theoretical mathemagicians feel that it is OK to do if it suits their ideas about black holes. It's utter lunacy, and yet most believe in black holes. They are everywhere and in all different sizes, or so we are told. Yet there is no proof they exist, none. There are other areas of science that do not rely on repeatable experimentation to get their results too. The problem is that we believe in the basic objectivity of science, and we project this idealistic notion on the whole of science. It's as though it is seen as the holiest of holies. This is where science becomes "scientism": when authority replaces rational thought and dogma covers up gaping holes in theory.

3. Yes, naive perhaps, but what is the alternative? One or a hundred of the various gods and goddesses proposed by this or that group throughout history and prehistory?
If you can believe in gods and devils without a bit of evidence, why should you not do the same with science? Surely you see that religion looks backwards, to a 'god' or a goddess which was invented by the tribe. Cosmology looks forward too, perhaps getting beyond the scope of our intelligence, but science begins with some evidence to suggest a direction for study. Gravity is both theory and fact, just as is evolution, are they not? So if you cannot divide by zero, you can't just throw up your hands and say that's it, I think I'll believe in a god that has zero evidence of Being, except hearsay and tenth-hand written accounts. Science is the future; why drag religion along? I personally wonder if there never was a 'creation', or if matter and energy have always existed. Why should 'nothing' be the ground of existence? Where would 'nothing' have originated, and how much nothing would there have to be to contain 'everything'?

23. This doc is not worth watching unless you are a Christian.

24. Applying Occam's Razor, I'd say this reasoning just doesn't "make the cut."

1. Then try again without applying Occam's Razor. Just because you saw the movie Contact does not make you an authority on life and the goings-on in the Cosmos. Try a little intuition.

2. I didn't see "Contact" and I don't get my analysis tools from the movies. I don't pretend to be an authority on life or the cosmos, which incidentally you implicitly do. You can do a little research on the quirks of the human brain that give rise to beliefs in magic of all sorts (religion being one), but I doubt that anything based on fact will move you beyond your "dogmatic slumber". It appears to me that you have decided on an epistemology that is not subject to factual verification. There was a time when that way of thinking dominated Western Civilization. That time is known as the Dark Ages. You will forgive me for not wishing to join you there.

3. I would not be so quick to judge. Scientism is being projected by headlines as truth when some of the core beliefs of established science are nothing more than many layers of assumptions. The Big Bang, dark matter, dark energy, black holes, and neutron stars are some examples of the many assumptions in cosmology with no empirical evidence to support them. Instead, they are inventions created as a result of faulty models running into roadblocks. Rather than scrap theories that empirical evidence contradicts, ad hoc remedies are interjected using untested theoretical mathematics as pseudo-evidence. Cosmology is but one area of science tainted by delusional people who call themselves scientists. That being said, I do not throw the baby out with the bath water. Science in its ideal form is a wonderful thing. In practice, however, it can be a much less benevolent force.

4. A great response to Michael Jay Burns, truthseekah. I am quite amazed at how science has become the 'new religion' for some people, where they don't dare question the validity of what they are being told. Like yourself, I believe science has much to offer in terms of directive thinking, but follow Ernst Mach's advice to Einstein regarding 'intellectual skepticism'. Keep up the good work, and best wishes!

5. I will die... that's it. If it gives comfort to some, thinking they'll live in eternity, they are the "lucky" ones. I can't rationalize what they are selling... if it's in your mind, it is "truth" :)

6. You cannot rationalize open-mindedness if you cannot accept the logic that we will never know everything.
(Not even with science.) You can't fill a cup that is already full! The 'truth' is there is no difference between a faith-denying atheist and a science-denying theist. Asking for 'proof' regarding that which is unknowable is to imply disparity where none exists. P.S.: To dismiss the 'eternal' first law of thermodynamics (energy cannot be created or destroyed, only converted) is contradictory/irrational for someone who can only rationalize science. (Just saying.)

7. Hmmmm... 'delusional people who call themselves scientists'? What's the beef? You would prefer science to exist only in its 'ideal form'? How curious... Do you feel the same about religion, that it should only exist in its 'ideal form'? Out of curiosity, do you call yourself a 'scientist'?

25. I think he misunderstands much.

1. I think what he (understands) is that you don't have to be afraid of consciousness or God, Tom. It won't take your dominion of the universe away from you. It will allow you to share and be one with it.

2. You don't have to be afraid of something that doesn't exist...

3. I'm glad you cleared that all up for us. We can all stop thinking now, and let you take care of the big stuff.

26. So boring and preachy in delivery, regardless of its veracity.

27. This documentary went a little overboard on the Hawking-bashing. It presented really great/interesting points, but lost focus a few times, and could have been presented a little more appropriately.

1. Far be it from me to judge Hawking's motive and/or analysis of consciousness, but it's possible he has a bit of a chip on his shoulder toward a god or an external consciousness because of his physical condition. We all might have misgivings if we were in his position. Don't you think?

28. We have not sent probes to even 1% of our known universe, and we are writing a book called "Theory of Everything"? We are like a baby, having explored the four corners of his cradle and then deciding that this is what the entire world must look like. Hundreds of years from now, if we ever survive that long, we would look back at this book and smile at how ignorant, arrogant, and narrow-minded we were. We would realize that many of the constants that we know of in physics today are not as constant as we think they are throughout the universe. And there are still gaping holes in our sciences and in our understanding of our universe, for example, in the study of emergence from seemingly chaotic systems, which would explain some of the questions that we have always been wondering about ourselves and our universe.

1. Everything you just said made basically no sense. "Seemingly chaotic systems" is not the same as "chaotic systems", and you're purposely being vague because you don't want to be held accountable for your claims.

2. Seems to me... (being vague) is all we can be at this point, unless you know something the rest of us don't.

29. Biased, with an agenda, muddled and weak arguments, dishonest and at times contemptuous of matters that I suspect he may not fully understand. It's that type of documentary where the arguments presented reveal how the information was never learned with any openness in the first place; the presenter's mind was already looking for some way of abusing the facts from the moment he encountered them. I actually got to the stage where I was expecting him to start telling me how bananas were shaped to fit human hands. I had to stop it around the 25-minute mark, as it made me want to pierce my eye with a spoon.

1. Couldn't agree more. Horrifying.

30.
30. Oooh, "Come let us reason together" before my head explodes (see my AVATAR, please): how can I understand all this? Cosmologists (and their other academic peers/buddies) say that the universe is made up of 70% dark energy and 26% dark matter (or thereabouts) and that the remaining 4% (which is called baryonic matter) is everything that reflects light (interacts with photon energy) to reveal our world (empirical reality). Then chemists and physicists come along and say that all elements (that's the baryonic stuff, again) are, at the atomic level, 99.9999% empty space (leaving 0.0001% 'real' stuff/things). So with 'that', science folks are going to be able to tell us that they know what reality is?! The majority of it you/we cannot even see, and that which we can see is mostly empty of stuff? Then they add the FACT that all periodic table elements (such as iron, carbon, oxygen, nitrogen, especially the major eight that make up us and all life) are cooked from hydrogen by fusion (gravity and heat) and then exploded out into the vastness of space (which, by the way, is expanding... but it, space, is not expanding into anything). How can anyone grab such 'knowledge' and get it to stick inside their noodle? How about ya say: "We really don't have much of a clue what's going on (empirically)... but whatever reality is, it (and life) does not revolve around 'stuff'"? I'm also not overly happy with Darwin: if we are just evolved 'bugs' and fit somewhere on a branch of the evolutionary tree, then why do we have brains the size we have? Evolution should not have wasted its 'natural selection' energy (our brains are too big for our environment)... and as poorly as we use them, it's a wonder they haven't shrunk. There is no evolutionary survival value associated with us trying to learn particle physics, or building the LHC, or leaving footprints and a couple of dune buggies on the moon (etc., etc.). Frankly, I don't even know how natural selection is able to allow us to have this discussion/talk/thought, for none of it has the slightest evolutionary (survival) value! Natural selection (Darwinism) should not have cared (to allow us such imagination and abilities), for all we need to 'do' to fit in the 'Tree' is just survive, and that means: eat while not being eaten, adapt to the environment, and produce offspring. Oddly, I think we are more alien to this orb than all the other creatures here on it... we are as if "foreigners in a strange land, on a temporary journey in time, and all the while trying to get home". And what is it with morals? They do not dice well with Darwinism. Where does curiosity come from... and how do we know how to use conscience to guide our actions... even to the point of seeking an apology should we wrong someone? Seagulls don't apologize for stealing food from their peers; why then do we understand stealing as wrong (after all, isn't it biology/science that makes us animals surviving in Darwin's Tree)??! We are also concerned for the other creatures (all life) on this planet, as if we were put here to care for them and it (the planet), as if we were/are to "steward a garden". Heck, scientifically we KNOW that if we do not care for this place it will no longer support us... so why don't we DO IT?! ANS: Perhaps we are 'poisoned': "we know what we ought to do, but just always do the opposite and then regret it; yet do it wrong again and again".
Why then is the following statement (morally) wrong: "Why do we Darwinian types have concern for endangered species or (in fact) any species? Survival is about YOU/ME (aaah, that's ME first, then you), not them!"?? Why should we care? We value abstract qualities like fairness and perfection and trust, and we hate the insidious itch of time (can't scratch it away from bothering us)... yet we have this sick hope for the future... even to the point of terraforming Mars and colonizing some far-off exo-planet (that's laughable: what are we going to do if that place is inhabited? Save the poor wretched heathens like the Spanish did to the Aztecs, or we did to the American Indians!!?). We can't even take care of our own planet, never mind get along with each other. We are a mess ("we fall short"), and this exo-planet (Earth) would fare better if we went extinct or went back to wherever we came from. We are probably the colonizing force that left some other world in search of a home (exo-planet) and found Earth, and now we have settled in, but ODDLY we seem to have forgotten where we came from (and none of that works). If we did figure it out, it would be/seem so weird that we would think them gods; it's like we have been on an extended six-thousand-year camping trip and long for home, where all the abstract desires which our inner spirit longs for work correctly, such as fairness, perfection, love, honesty, and there is no time and no stuff: there just is... eternity. C.S. Lewis put it this way: "If we find within ourselves a desire which nothing in this world can satisfy, the most probable explanation is that we were made for another world." I'm convinced that the only way we can ever understand the physical world is by the non-physical. Frankly, I'm concerned that the word 'thing' is totally incorrect: there is no such thing as a thing. Nothing IS nothing. We are as if in a giant hologram, and all is energy, including us (E=MC^2). Matter is energy gone berserk, and life is matter infused with intelligent code seeking Home. One thing seems certain: if one finds information (and we never make it; we discover it), it never points to confusion or chaos or chance or mistake. Information always points to some kind of intelligence which is responsible for its cause. And we are VERY good (almost too good) at finding information: at pattern recognition, empirical abilities, seeing symmetry, and doing math and science. Yet we are also conscious observers and moral agents... and we do not like being insulted, misled, or abused. We do not come here with a blank slate; rather, we come into 'life' as if hard-wired with some basic relational skills! Could life be all about relationship? Isn't all science, philosophy and even religion basically about relationship (the interactions at the micro and macro levels, down to the Planck limit, that ultimately reveal to us what we call reality)? Perhaps we are intelligent energy (spirit), patterned after some grander Intelligence (sort of like "in its image"), yet we do not operate quite as we should (we're a bit broken; we function poorly; we relate incorrectly). We reside within these electro-chemical, bio-mechanical Earthsuits (called bodies) in order to care for this 'Spaceship Earth', yet we are "foreigners here". Perhaps we graduate life through death, and that is how this intelligent energy gets to return home (for energy is never destroyed): the elements of our body go back into the Earth, and the essence that is the self, "the spirit, returns to whence it came"? Life begets life. It comes with a code (DNA/RNA); it has a purpose (lest you be just stuff: a meat robot).
Life comes from intelligence, because life has within it information, and nowhere does information come from chaos. Life does not 'like' death (ya think?), yet life is caught in time and "in the bondage of decay", or entropy (the second law of thermodynamics). Death is not normal to energy (that almost seems a contradiction); rather, energy just changes. It moves on. And wouldn't it be true that if energy moved at the speed of light, all distance would shrink to zero and all time would stop, or be non-existent... or be eternal?! Our existence and all life smacks of conspiracy and not coincidence!! "When I consider the small span of my life absorbed in the eternity of all time, or the small span of space which I can touch or see engulfed by the infinite immensity of spaces that I know not and that know me not, I am frightened and astonished to see myself here instead of there, now instead of then, me instead of you, even why instead of just because. By whose command and act were this time and place allotted to me!?" (Blaise Pascal) BTW, all the full quotes are snippets from the Bible! OUCH!!

1. In light of the ignorance of chemistry, physics ("Matter is energy gone berserk.") and biology (especially evolution: "Evolution should not have wasted its 'natural selection' energy (our brains are too big for our environment)") expressed in your rambling and at times incoherent post, what is your scientific background?

2. lol - wow - you could possibly have begun the writing of 'God Within 2'; you seem to have a lot in common. "Our existence and all life smacks of conspiracy and not coincidence!" you say - ah well, perhaps, after all, this place is hell (many writers would agree). Good quote from Pascal, and a great philosophic question. In my opinion, asking the questions is honest and honourable, but an open mind and courage are required if one would accept the answers - peace.

3. Hey Jo, if you are speaking of Blaise Pascal the scientist, you may like to know he was a godly man converted to Christianity.

4. Good words are good words; I sincerely have no problem with whoever the motivator is or what is intended. "When I consider the small span of my life absorbed in the eternity of all time or the small span of space which I can touch or see engulfed by the infinite immensity of spaces that I know not and that know me not, I am frightened and astonished to see myself here instead of there, now instead of then, me instead of you, even why instead of just because"... I would have stopped there, but it's still good, honest, human - I like that.

5. And you might note that Isaac Newton believed that the basis of Christianity, the concept of the Trinity, was blasphemy and a violation of the "one god" rule.

6. Sorry, that's because Newton did not understand that the Trinity is three persons but one essence, not three separate gods. This does not make sense from a human perspective, but is perfectly logical from God's extradimensional existence.

7. If it doesn't make sense from a human perspective, how can a human articulate it? Or are you divine? What exists on the spiritual plane, if existent beyond the unseen microcosmic or uninterpreted macrocosmic, is simply beyond us. And don't give me that God can will a human to understand, or that one with faith can understand, because that experience still can't be verified and is only happening in one human's head, as no two have the same experience. The scientific method is likely the best interface God would utilize to introduce the divine to man, as the 'method' is without politics.
8. We are all thinking too hard... I am that I am. Heaven is not somewhere you go to if you are good when you die. "That's a lie!" Heaven is here in the now. It has always been. We never left the "Garden of Eden"; we were only tricked into believing we have! Our creator is talking to us all the time; most of us are just not listening. Meditating is the door. If you are seeking the answers through science, religion, or any other way, you will never find; you will always be seeking! Be still; do not seek! Listen: creation, consciousness, is speaking right now, this very moment. Are you hearing me? Become one with the "I am" in the "now". That is where you will find me. Artywayne

9. G, your words have filled me with so much optimism and love. Inspired would be an understatement; so rarely do I feel as proud to be human as I do now! Time and time again, when I feel exhausted by the lack of positive curiosity and insight which at times overcomes me, I will read them and be ready to continue my journey... Thank you for your courage to express... and thanks be to god for sharing your beautiful mind/words with us all! :) P.S. Any recommendations of favourite books would be greatly appreciated, if you have the time... kind regards, romy rose

10. Well, some noodles are stickier than others. With my medium-sticky noodle I take the position that the things there are to understand will always be greater than the human capacity to grasp them, but it is our nature to keep working at it anyway. That's one of the things that I like about us.

11. You know, that "dark matter" idea always struck me as being cut from the same cloth as the "cosmological constant", i.e. something that you just make up so that you don't have to discard your favorite theory. Accountants call that "plugging"; most of us just call it "cheating".

31. I don't like this Doc.

1. In order to give your potentially profound statement meaning, what Docs do you like?

2. Ones not full of sh1t.

32. I have found a philosophical reflection from Richard Dawkins, to his ten-year-old daughter, that reveals how feelings can be implemented legitimately by scientists. It comes from brainpickings org: "Inside feelings are valuable in science too, but only for giving you ideas that you later test by looking for evidence. A scientist can have a 'hunch' about an idea that just 'feels' right. In itself, this is not a good reason for believing something. But it can be a good reason for spending some time doing a particular experiment, or looking in a particular way for evidence. Scientists use inside feelings all the time to get ideas. But they are not worth anything until they are supported by evidence." - an excerpt (edit: not paraphrased) from Richard Dawkins. (Copy-paste in Google for the whole article.) So there we have it... feelings are important on our path to understanding the truth, from the great man himself. Who'd have guessed? Inspiration, free will, abstraction, intuition and deduction... the biological robot hypothesis seems more and more ill-conceived. Instead, ALL things considered, it seems to me that there is a hidden motive of provoking certain counter-beliefs in claiming such a thing. Which is fine... but let's "call it by its right name" - Chris McCandless (aka Alexander Supertramp).

1. "Scientists use inside feelings all the time to get ideas. BUT THEY ARE NOT WORTH ANYTHING UNTIL THEY ARE SUPPORTED BY EVIDENCE." (emphasis added) In short, the beginning of your second paragraph is simply a paraphrase out of context, and a distorted one at that.
Second, there is nothing about philosophy in the entire quote. Just another display of your inherent dishonesty.

2. I'm afraid, but not sorry, you are incorrect on all counts, Mr Allen. And its very essence is touchingly philosophical: he is sharing his thoughts and ideas with his daughter on how best to think critically about the world around her.

3. It has nothing to do with philosophy, but rather with hunches based upon accumulated knowledge. Once again, another display of your dishonesty.

4. As I have shown you, with evidence, of you being dishonest on a few occasions, I would ask you to point out where I have been dishonest, or cease calling the kettle black. So far your claims of dishonesty are unsubstantiated (obviously). My point has been made, so... *** End of My Discussion With You ***

5. Once again, Dr. Dawkins' statement has nothing to do with philosophy, but rather with scientific hunches, and trying to pigeonhole it into philosophy exhibits a desperation on your part amounting to dishonesty.

6. Yeah... good luck with that.

7. So every time a scientist gets a hunch, he becomes a philosopher or he practices philosophy. Won't wash.

8. Digi, just thank "God" that you don't manifest his mindset. I can't imagine living with myself like that. I know you're not a "dishonest" person, and he does too. He just wants to hurt you. I apologize for him. Let's wish him the best and move on.

9. I don't accuse you of dishonesty, but I agree with robert's view that Dawkins is referring to the subconscious promptings based on accumulated knowledge, which, after all, is what imagination of any sort is based on, no matter how farfetched the result of the creative process. Dawkins has made it very clear that he does not believe in the supernatural, even though he's willing to change his mind should some convincing proof offer itself.

10. Dawkins' career would go down the drain if he were to say that awareness "could be" the mother of matter. In fact, Dawkins' career is fueled on the premise that matter is all there is; ANYONE who says it is not so is a fool to him. Nothing could convince him to probe deeper into awareness. He would vanish if he did; in other words, he would become a joke... something he is not about to allow under any circumstance of his own doing.

33. This guy is not very smart. His arguments don't stack up. Just to pick one thing: he seems completely oblivious to the effect of environment. He says that if you can know the molecular state of a brain, you can predict its future actions. WRONG! You would have to know the molecular state of the entire environment as well (the physical world and interactions with other beings); environment plays a massive role in the actions of living beings. It seems to me that genetics and environment are at least 99% responsible for our actions. What we like or don't like, for example: these are like commandments given unto us from the mix of genes and environment. Nobody DECIDES freely what they like and don't like. Are serial killers like you and me? Do they suddenly, of their own free will, decide to commit gross acts of murder? Or are they unable to resist the murderous urges that 'normal' people simply do not have? If there is such a thing as free will, then it is a very, very small thing indeed.

34. Even though Adams is not a credible source of valid information, I think his main point is valid: scientists are over-reaching and corrupting science into scientism.

1. Corrupting? Over-reaching? How about providing a few examples.
35. If you assume science is the only game in town, you are locked into a false assumption. That's like Michael Jordan telling Tom Brady football is a fraud because it is not based in reality.

1. Science isn't the only game in town. It is, however, the only honest game.

2. Says who? A bunch of scientists?

3. Can you name anything better than science for determining scientific matters?

4. No, but scientists insist on pontificating on matters beyond their scope. When they confront a mystery, they insist on battering it with scientific nonsense.

5. Such as?

6. Well said, jerrymack. As a scientist, I agree with you. By the way, there are a few "honest" scientists out there.

7. When it comes to honesty, it certainly has religion beat.

8. Can you give any examples?

9. What is more honest about it than religion?

10. When it comes to something scientific, can you name any other games?

36. I endured the first 5 minutes with an open mind, hoping for a constructive premise besides disparaging remarks about physics and Hawking from Adams. The following 5 minutes only served to convince me how little Adams knows, or how closed-minded he is, about science, physics, and consciousness. Ironically, I find Adams fitting the profile of the corporate protagonists he describes at the beginning of the video, i.e. hiding behind the veil of science to promote an agenda. I apologize if he has something constructive beyond 10 minutes, but that is where I stopped watching, because it did not seem to be leading anywhere besides physics and Hawking bashing. Adams deserves credit for effort, albeit seemingly in search of content to confirm his own set bias.

37. Physics does not address "consciousness". Neuropsychology addresses that. In turn, neuropsychology does not address quantum relativity. And you don't usually go to a plumber to get your piano tuned. Adams seems to be one of those who argue: I don't understand it; they don't understand it; therefore: God did it! He doesn't understand quantum mechanics* and he doesn't understand consciousness, so they must be related. Physics has no soul? Neither has it leprechauns, elves, or angels. Science can study primitive mythology. It has done so, and dismissed it. Get over it! *"If anyone claims to understand quantum mechanics, they don't understand quantum mechanics." -- Richard Feynman

38. I agree with Adams. Hawking is a dangerous man because his ideas are not only wrong, they are taken seriously by many people. Determinism is a product of reductionism, which states that man is nothing but a sophisticated bag of biochemicals with no apparent purpose or meaning. How can you believe this without embracing despair? Without wallowing in cynicism? And when you consider the arrogance of Hawking's statements, it's hard not to feel outrage.

1. On whose behalf do you feel outraged?

2. The human race.

3. aam641: On Your Behalf And Yours Alone.

4. I have been following Hawking for years and have read some of his books and observed some of his public statements. I really feel sad for that man for making so many illogical statements. By the way, Hawking may be a popular scientist, but he did not even make the top-10 list of best physicists of the 20th century.

39. I am not a scientist, but if what Mike Adams says is true, then we should all fear the future if it is to remain in the hands of Frankensteins. These people have no idea what it means to be human.

1. Please educate us. What does it mean to be human?
2. Philosophy is split, arguably, into four main areas... your question takes up one entire area on its own. ...Could be a long response...

3. Who cares how a bunch of philosophers define what it means to be human? What do they know?

4. What the hell does anybody know? We are like dogs trying to grasp calculus.

5. We send men to the moon, construct atomic colliders and increase life expectancy. We must know something.

40. This guy is a joke. Apparently he has a schoolgirl crush on Hawking. I mean, seriously, the guy seems so mad at him; he has no actual arguments besides making lame jokes, in an eloquent manner, at Hawking's and other physicists' works, which is quite sad.

1. So-called science is actually often a "joke" itself, by being stuck in a one-sided, indoctrinated, materialistic world conception.

2. This approach produces actual results. Praying, groveling, and sacrificing to a magic sky-daddy does not.

3. Thanks, samir, for your eloquence.

41. Maybe it's just my perspective, but this seems like a documentary of thinly veiled religious apologetics. A virtual fallacy feeding frenzy to nourish the ignorant.

42. Is Stephen Hawking real, or is he just a science robot? It would certainly be easy to make him represent and follow an agenda. I know, how dare I say?

1. Yep... better not say. Stephen has been wrong, and is noted for the stubbornness of his assertions even when he was. Being wrong in his field can be a bit like a pop star having a flop: you're only as good as your last contribution. So there is a clear incentive to be stubborn. ...Hang on a sec... STUBBORN? What place does such an emotional characteristic have in the upper echelons of science? That's blatant hypocrisy in employing the scientific method! Or is it just that the 'alpha' scientist rules with the greatest influence? Incidentally, Stephen Hawking's PhD was on the assertion/discovery that the Big Bang originated in a singularity (such as that found at the centre of a black hole). But... a singularity is simply a point of infinite, zero, -1 or undefined, depending on how you look at it. "The point where all calculations, physics and maths can go no further based on our current understandings" would be more accurate. So the massive Stephen Hawking proved in his PhD that the Big Bang came out of... something we cannot understand or work with in our current science... ...And the crowds went mad, applauding with emotional reverence, respect and utter wonder... :-D Present-day science says that those areas of singularity merely point to where we need new ideas and input, because they are the points where our tools no longer work. ...And the crowd... sat down again quietly. :-/ Biological robots? From a man without a full working set of tools? Sounds to me like we need a trip to a hardware store!

2. And your point is? And by the way, what are your scientific credentials? Do they in any way match Hawking's?

3. He has said: "The brain could exist outside the body." His might already be the case. His brain expresses words using the computer-generated voice he controls with a facial muscle and a blink from one eye. Let's see science do it that well (in such complexity) with someone who does not have his condition. Makes me wonder how much of his talks are filled in by the controllers of that computer. It's not like he could argue the result, like all scientists would, if anything was said or written that he didn't agree with fully. I know, how dare I say?

4. I'll be happy to compare his scientific qualifications with yours any day.
5. Woo-hoo *** POST 1000 *** ...made it! He said that?! Well, I guess that fits, really, in that he is saying the brain (and thus mind, consciousness and whatever else is up there) is nothing special. ...In a way... I'm not surprised, really; it's a case of, if he didn't say that, someone would say it for him about his philosophy in a derogatory way. So it helps, perhaps, to claim such a thing for himself first, before it is later used against him in some ridicule argument. Just a thought.

6. It's after 1000 that you really start noticing your opponents. LOL

7. Oh Gawd!* (*A way of writing "Oh my God" while denoting that you're rolling your eyes at the same time. - Urban Dictionary)

8. Like "Oh shoot!": a way of writing... denoting that you feel you may be in deep trouble.

9. This is going into the realms of A.I. and quantum computing, which I love. There was a fantastic book in 2005 from Raymond Kurzweil called "The Singularity Is Near". In it he optimistically states (overly so, most agree) that we will equal human brain computation and complexity by around 2035. A year or two after that, it will be twice as complex (Moore's Law). But... to my mind, as exciting as that may be to see... the essence of mind has yet to be bottled.

10. The essence of mind would have to be bottled by the essence of a mind. If the complexity of a mind, aka consciousness, were to be fully understood (I imagine it would be an instant realization, a conscious big bang sort of thing), would that mind find a way to share such a finding and have it accepted by others? Perhaps each mind has to find its own path, if the "terminal" is "existable" (sorry, just invented this word).

11. Yes. There is some anecdotal evidence that suggests that, for the mind to be fully aware of itself, would require, at the very least, a slightly bigger mind. To understand this, think about a computer program being aware of its programming. That program would need some extra lines of code to understand its code. But what would understand that extra code? Some more code, of course. ...And so on. There is little logical reason to suggest the same would not be the case for any form of full awareness. This is, of course, philosophical though. ;-)

12. Perhaps not a bigger brain in size, but a mind that is free, as in unobstructed. How does one live unobstructed in our world? Perhaps by chance or by choice.

13. That's true, oQ, a bigger mind to observe that mind. But since the mind is internal to the brain... that would mean a bigger brain... unless you mean Stephen's external box of tricks? All things being connected with eternal influences through the sharing of quantum information infers that you could not 'get away' from the obstructions, unless you could somehow step outside the universe(s). However, does this apply to consciousness? Consciousness is affected by experience and environment, and in the same way, unless you could step outside your mind through some sort of barbaric electro-therapy lobotomy (not recommended), the same is probably true. So what's the moral of the story? Matter, energy, mind... if you can't beat 'em, join 'em.

14. And your point is?

15. Why are you interested, bored?

16. No, I hate phony intellectualism.

17. We know. Take a break from us and participate on some doc in a positive way; it will make you feel better.

18. Do you not get it?

19. Thanks for all your supportive comments, much appreciated :-)

20. I find it strange that your file's activity is private, but on the other hand your home address is in plain sight.
I could Google Earth right above your house or in front of the blue garage door???

21. Go for your life; I could probably do the same for you, but the thought would not ever occur to me. But then, I don't hide behind a shield of obscurity like most bloggers etc. If you want any further details, just ask; I'm not afraid of justifiable or constructive criticism, unlike some people who seem to want to control these comments as if they own them. It is only opinions based on whatever facts we feel we know of. It would seem that some want to infer that their facts are correct, and the rest can change or be damned. Sorry, discussion is just that, not lecturing!

22. I think you misunderstood me. When I click on your photograph (or name), it opens your file on TDF, which shows your full address and says that your activity is private, although I see that you have made 14 comments. I have never seen someone include their address information here. I have been on TDF for many years and I am often curious about newcomers; that's what drove me to look up your file. You are free to portray yourself any way you want; no lecturing.

23. I have no respect for gibberish (read: pseudo-intellectualism).

24. Maybe you don't need to step outside of the universe; you may only need the universe to step outside of your brain. Deep, deep and deeper meditation, perhaps.

25. And just how does one step outside the universe, as if you know?

26. "You may only need the universe to step outside of your brain", not the other way around.

27. And just how does the universe step outside of the brain? Complete gibberish. You're not fooling anyone.

28. I'm not even fooling myself... that's why I use "may be".

29. Ok, let's consider what you're saying... 1) Energy and matter are 'entangled' (this is what I meant by "all things being connected with eternal influences through the sharing of quantum information"). 2) Consciousness may be entangled with it. In the double slit experiment, 'observing' affects the quantum state (even if the observer is an indirect camera). Because of entanglement, it may not be possible to 'step outside' at all, since all will remain entangled. We could, however, in theory, observe our minds fully aware, since a universal mind would be bigger than our own, potentially. In order to keep this on topic, that is where we can use philosophy and where science begins to fall by the wayside. Science gives us a map, but the map is not the reality. Reality is looking out the window. This is where the line from The Matrix has its roots: "Welcome to the desert of the real." It means we have a map so real and precise that we forget to look out the window, and the map becomes reality. How's that for DEEP!? lol.

30. Do you have any hard evidence to back up this drivel? If not, then you can't claim that science, which demands hard evidence, begins to fall by the wayside (note spelling).

31. Your entanglement post is as clear as mud; I don't know what you are talking about. I will enter a "Quantum Entanglement" post, without the math.

32. That's a very, very good link, Achems, that explains entanglement and much more besides very clearly. In a BBC doc (I don't have the source) with Prof. Cox, he explains that the 'instantaneous communication' is felt across the entire universe. He finds it fascinating, too, that any influence on one single electron has an instantaneous effect on every other electron in the entire universe... through entanglement of quantum information (and energy conservation), as explained in your link. Thanks for the info.
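For a concrete sense of what these entanglement correlations amount to, here is a minimal Python/NumPy sketch of the textbook CHSH test. The detector angles and the code itself are illustrative assumptions, not anything from the documentary or the thread; and note that, despite the phrase 'instantaneous communication', such correlations cannot be used to send a message.

```python
import numpy as np

# Quantum prediction for paired spin measurements on a singlet
# (maximally entangled) state: E(a, b) = -cos(a - b),
# where a and b are the two detector angles.
def correlation(a, b):
    return -np.cos(a - b)

# CHSH combination: any local "classical" (hidden variable) model
# is bounded by 2; the entangled state reaches 2*sqrt(2) at these
# standard angle choices.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4
S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))
print(S)  # ~2.828, beyond the classical bound of 2
```

The printed value, about 2.83, exceeds the bound of 2 that any local classical model can reach; that gap is the precise, measurable sense in which entangled particles are 'connected' more strongly than any classical picture allows.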
33. I think it's a cool link.

34. You are deep into philosophical discussion now!

35. The mind may not be internal to the brain.

36. Of course!! Oops. Thanks for the nudge ;-)

37. And just what is that supposed to mean?

38. Unobstructed by what? Sheer gibberish.

39. Ooh, we are getting in DEEP here!

40. When I first heard about this, I couldn't 'get it', but the 'program code' example formed a clear, easy picture in my mind as a good example, since I'm a programmer. Hope it helps.

41. I think the mind is well aware of itself, but the brain is not aware of the mind's potentiality. I would tend to think that mind/consciousness is sizeless. It just is. The brain that perceives consciousness is sizeable.

42. Yep. In Buddhism it is the Universal Mind. A powerhouse. Our brains are 'used' like light bulbs drawing a trickle from a large power station, but that does not mean we can't tap into more power if we somehow choose to. That's the analogy, anyway. This is also where the power of positive thinking draws its ideology: by tapping into the Universal Mind. However... I've tried it. I have a friend who's been on TED Talks who teaches it; privately we both agree it's probably b0llocks. Having a positive outlook if you are negative will improve your well-being, but beyond that? Getting rich and finding your soul mate, etc. ...Well, maybe, but not for any magical reasons. If you simply persevere with an idea, relentlessly, you will likely succeed at some point, to some degree (imho).

43. Sheer gibberish.

44. Just how do you know this, or is it simply more gibberish?

45. Just what are you talking about?

46. What if, being made from the universe and being bound to become the dust of it again some day, we are not ever to be capable of understanding our own minds? What if each of us is not a complete mind, just one of seven billion human-shaped cells thrown into the mix with a trillion others with differing roles to play? As a species we are endlessly inquisitive; we explore, study, imagine, create, dismantle, even put ourselves in danger, all so that we can learn more. Might be that we are the eye of a navel-gazing universe! ;)

47. One does not have to be fully aware of all of the processes of the mind to be aware, any more than one has to consciously beat one's own heart, or breathe. This would be apparent to you if you were reasoning rather than rationalizing.

48. Sheer gibberish.

49. I am sure to you it is pure gibberish; nice to see that you read it anyway.

50. It is pure gibberish in that it says absolutely nothing.

51. Still, you had to read it to come to your conclusion... or perhaps you didn't read it; the avatar is enough for you to conclude.

52. Which human's brain, though?

53. Ahhh... your lightest touch :-D Indeed, indeed!

54. Maybe yours. Have you noticed that you can never foresee what your mind will think until it does? Perhaps no one can plan the realization of consciousness... only try and try... which may be in the way.

55. How do you know this? More gibberish.

56. A gibberish you seem to enjoy, or are you just lynching your mind with my words? Notice... "may be" and "perhaps" and then "may be" again.

57. And just how does one lynch one's mind with someone's words? More gibberish!

58. I'm playing with you like a cat, except the mouse is not trying to get away; it keeps coming back.

59. Ah, I see the lovebirds are at it again. I hear there are some nice mansions up in the hills in Los Angeles where you can share your lives in bliss?
60. It would be nice if you deleted the lovebirds' quarrel and left the conversation between Digi and me; it is on topic, and I would like to see if anyone wants to add to it. Thanks; it gave me a good laugh.

61. It would be even nicer if you learned to use the objective after a preposition, just as in French.

62. I'm particular about the people with whom I associate.

63. A computer built around my mind? That would be pen and paper :) If we knew what we were about to think, all the world's thinking would already have been done and our thoughts would be mundane. Rather like being able to read but being allowed only one book for the whole of your life. Knowing how to read would become pointless, as would the book; nothing new would ever come from it. I'd much prefer to be given an empty page; just keep scribbling away 'til something beautiful happens ;)

64. I guess what I was trying to say is that it is perplexing how thoughts come from nowhere, the good ones and the bad ones. We can't control the qualities of the thoughts that enter our mind, although we can instantly decide which ones to put aside. The thought of thoughts coming from nowhere is equally perplexing.

65. Ohhh, I see! I misunderstood :) I love that about thoughts; some come marauding and unruly, only to be tempered by those that lurk in the back of your head. I guess quality control has its use. Imagine how noisy it could get between your ears if all thoughts had equal standing. Maybe we are just random thought generators, monkeys with typewriters. I think my daft thoughts get recycled as dreams; had some wild ones lately - catpeoplebabies! ;)

66. Random thought generators? Could be right! Who would have thought the pinnacle of intelligent inspiration would be the mastery of becoming a good filing clerk!? That said, however, there is a fascinating (and indeed insightful) BBC Horizon on YouTube called "The Creative Brain: How Insight Works" (2013) that has a permanent place on my shelf.

67. If by filing you mean stuffing everything in a drawer 'til the bottom drops out, I am already a master! Will check that doc; have a feeling I might have seen it... and then filed it with the important stuff ;)

68. You've hit an important point about what life is 'for'. Otherwise, if we all simultaneously understood that it all equates to an answer of 42, we might as well just pack up and go home... where? lol.

69. Most people, most of the time, make their decisions in the amygdala, and only then rationalize them in the anterior cingulate cortex. Actual reasoning takes care, self-discipline and training that most people cannot or will not exercise. Hence we have religion and patriotism and other destructive delusions. Science works by subjecting conclusions to actual observation and reasoning.

70. Interesting; where can I read more on this process?

71. You remind me of someone a lot of us miss around here ;)

72. Morning, edge. I'll be sure to let her know ;))

73. Hi girl, I missed you too.

74. Thank you, Blue :)

75. I read your opinions with interest. So help me out. Can you give me an accurate definition of "infinite", that is, something that is NOT finite?

76. Four (4) is finite. Division by zero (4/0) is undefined, though the limit of 4/x as x approaches zero from above is infinite. (And the arctangent tends to Pi/2 as its argument tends to infinity!)

43. Okay, yeah, Stephen Hawking is a little extreme, but he's stuck with nothing to do but think, so I think you would be a little crazy in the same situation. Free will has its limits; no one can consciously stop breathing. That's why people hang themselves, take pills or slash their wrists.
Most physicists say they don't know what came before the Big Bang, not that it spontaneously erupted. Oh, and concerning the Big Bang: it is a theory, not fact; the most likely theory, but just a theory all the same. It's not our consciousness per se that affects the quantum experiments the guy vaguely refers to; it's direct observation. You can still indirectly observe the experiments (through cameras), and even though you are still consciously watching, the experiment is no longer affected. Dark energy and dark matter mean, by definition, "I don't know what the hell it is, but something has to be there, otherwise the observed universe wouldn't act the way it does." Besides that, I agree that a lot of science these days is theoretical; they do the math, and when all the math is solid, they come up with the theory that best fits. The mistake is that people hear "scientific theory" and take it to mean "scientific fact"; even scientists do this, which is very annoying. Overall, a good documentary; it gets people thinking.

1. You obviously have no idea what a scientific theory is, and I suggest that you find out before causing yourself further embarrassment.

2. Please explain scientific theory to me, then. Please don't say I'm wrong without correcting me; I like to know when I'm wrong. I've seen a lot of your comments and they tend to be very demeaning, and then you offer no advice. I don't know why you do this, but hey, to each their own.

3. (A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on knowledge that has been repeatedly confirmed through observation and experimentation.) I conclude that the Big Bang is a hypothesis, because you can't test or observe this hypothesis; that's all it will ever be.

44. Religions and theological concepts are man-made. However, there is nothing wrong with contemplating the awesomeness of nature and our perceived reality by diving into the pool that certainly exists outside of our very left-brained square. xx

1. Just what is this pool "that certainly exists outside of our very left brained square", and how do you know that it exists?

2. Well said, norlavine! The pool, Robert, is... the frontier of the unknown... the expanse of questions that remain unanswered, and those which have not yet been asked... and all of which you know little to nothing about, because you care not to enquire... You would deny philosophy, just as you would deny religion (without seemingly understanding the difference; you just bundle them and others up in a bucket of trash). But sadly, you go further: you condemn those who practice philosophical contemplation, as esoteric nonsense, along with every other thing that has no perceived practical use or evidence from your perspective. You have demonstrated, over and over, a philosophy of existentialism (or possibly even nihilism); you should seriously read the wiki entries on them. [From Wiki: "Existentialism is different from Nihilism, but there is a similarity. Nihilists believe that human life does not have a meaning (or a purpose) at all; existentialism says that people must choose their own purpose."] After you have looked at your own philosophical reflections, ask me again what answers philosophy has provided. If you are not willing to look, don't ask me again. Philosophy has much to teach and, contrary to your assertions, many answers to provide, especially for a person such as yourself. Trouble is, you may not be able or willing to even ask the questions.
The sad part (another demonstrated part) is your condemnation of those who are willing to investigate (and learn something deep and meaningful... i.e. wisdom). Is there resentment on your part towards philosophy because of what it says? And what it says about you? You should not resent the love of wisdom; "love of wisdom" is what philosophy originally translates from. Indeed, how can you be against a love of wisdom? Rhetorical question, Robert... that means no need to answer; just think about the question... philosophically.

3. So philosophy has much to teach and many answers to provide. Like what? And etymology proves nothing.

4. Philosophy teaches us, for example, that existentialists often have a sad, lonely outlook, as existentialism sees humans, with will and consciousness, as being in a world of objects which do not have those qualities. By identifying with a philosophical outlook, we can better see why we think the way we do, and perhaps learn to see things in a more positive, compassionate way through changing our philosophy to something more 'preferable'. Questions... answers... suggestions... meaning... wisdom... understanding. It's all there for the learning, through philosophy.

5. "Philosophy teaches us, for example, that existentialists often have a sad, lonely outlook as it sees humans, with will and consciousness, as being in a world of objects which do not have those qualities." Source? Even if true, is this philosophy or psychology? So all we have to do is choose one "philosophical outlook" from the many and we stand a wonderful chance of seeing things in a more positive, compassionate way. Tell that to Hitler, Pol Pot, Stalin, Idi Amin and Attila the Hun.

6. lol... I would tell them, but they're all dead. Yes, it's philosophy, psychology being a scientific branch of it. The source? You didn't look up existentialism on wiki after all, did you... Oh well, I gave you some philosophical answers anyway, even though you didn't do what you were supposed to... Couldn't be free will, could it?

7. "Yes it's philosophy. Psychology being a scientific branch of it." Source? You're the claimant; you're the one charged with providing a source, which you failed to do. Contending that somehow I should have figured out that you were quoting or drawing from Wikipedia is patently dishonest.

8. Ok, my last post on this thread... "After you have looked at your own philosophical reflections [on the wiki entries suggested], ask me again what answers philosophy has provided. If you are not willing to look, don't ask me again," I said. To which you replied... "So philosophy has much to teach and many answers to provide. Like what?" From this, I cordially assumed you did look it up, as requested, and so replied with an answer. You had not. Patently dishonest? Me? You are not fooling anyone.

9. You "cordially" assumed (whatever "cordially" is supposed to mean in this context). That was your downfall. You're the loser.

10. You sound like one of my 'teachers' at the WSP!

45. Mike Adams misrepresents what doesn't fit into his believer's mold. He states that those who don't believe in a 'free will' or a 'soul' are mindless and without conscience or morals, etc. Nonsense. The prisons are filled with believers, not atheists. And he's given no evidence to support any belief in a 'God', which I notice he's called 'him' at the end of his presentation. Now where would, or could, a 'God' have come from? His Father, another god?
Seriously, if you were a god, an omniscient, omnipotent god, would you create a world such as this, where everything eats everything else, usually alive? Did God sit idly by, as Christopher Hitchens loved to say, for a hundred thousand years as evolving humans suffered and died from childbirth and tooth infections, before finally revealing Himself to an illiterate group of goat herders in an obscure corner of the Middle East? You might try reading Marvin Minsky's book "The Society of Mind" to get a feel for the complexity of mind. And from our earliest days we experience bodily homeostasis, which may in fact be a major determinant of our paths. Does consciousness control, or does it simply observe and record?

1. For this reason, a long time ago, I stopped calling this world civilized. On a higher spiritual level, the very idea of the necessity of nourishing oneself with other living beings (plants included) seems to me a violence.

2. Indeed! I see our world of today as being just one step ahead of our thousands of years of 'primitive' past. And the curious phenomenon of every living creature requiring sustenance from outside itself... well, what can be said of that? Down to the molecular level, even: the oxygen in water, or the specific wavelengths of light utilized for the continuance of another species. I mean no offense to any religious belief, but what does it mean to say that 'humans are made in the image of God'? What could that possibly mean? In so many attributes humans are the exact opposite of the image of God; you can go on and on with them, such as not having everlasting life, needing other creatures' flesh and blood to sustain us, having limited knowledge and wisdom, being made of flesh, bone and blood, and on and on. And the National Catholic Almanac states specifically that God is "almighty, eternal, holy, immortal, invisible, omniscient, ..... perfect, provident, supreme....", none of which could be applied to a human. Tell me, please, anyone, in what way humans are made in the image of a god?

3. You're right..... And what can one say regarding a 'perfect God' who creates a world where every living creature must eat some other living creature, usually alive, in order to survive?

46. It's very easy to comprehend a computer program as having no free will. It is very easy to comprehend that there is a difference between our brain function and a computer program. That difference is free will (imo). As a programmer, I know that at a crossroads (of equal consideration) I have to invoke a random number to make a decision or choice. But a true random number, inside a computer, does not exist: only a pseudo-random number based on a math algorithm and a 'seed'. It is, perhaps, possible to create a 'true' random number in machine code, though; one such method uses a 'seed' taken from atmospheric noise. Truly random, though? Or just an extreme degree of complexity? Think about it; a small sketch of the distinction follows below. Which brings me to how science employs randomness. Think about genetic mutations of DNA: a 'mistake' in the copying process which has led to all the diversity of life we see. Science STATES that it is random. Philosophy can ASK: what if it is not? What if those random mistakes were not truly random? What if they were subtly influenced by complexity? A mistake would become an influence. Just because science declares a process random, through possibly not understanding the process, does not make it so. Yet the consequences for philosophical interpretation can be enormous.
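A minimal Python sketch of the programmer's distinction above, between algorithmic pseudo-randomness and a seed gathered from unpredictable physical noise; the particular calls (random.Random, os.urandom) are my illustrative choices, not the commenter's:

```python
import os
import random

# Pseudo-random: a deterministic algorithm unrolled from a fixed seed.
# Re-running with the same seed replays exactly the same "choices".
rng = random.Random(42)
print([rng.randint(0, 9) for _ in range(5)])  # identical output on every run

# Entropy-based: bytes gathered by the operating system from hard-to-predict
# physical sources (device timings, hardware noise), so each run differs.
choice = int.from_bytes(os.urandom(4), "big") % 10
print(choice)
```

Whether the second kind is 'truly' random, or merely complexity too deep to predict, is exactly the open question the comment goes on to raise.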
I'm not saying DNA mutations are influenced; I'm saying consider the consequences of such a small nuance, and how science and philosophy would each interpret such a discovery.

1. The difference is that when a scientist asks the question, he will research the hard evidence, whereas the philosopher merely asks the question and then contemplates. This clearly puts science ahead of philosophy.

2. Why do we have a theory called the Big Bang, instead of a simple "we don't know"? Why do we even discuss such a thing? Why does science allow for consensus of ideas and beliefs when all the facts are not in? Why is conjecturing permitted in science? The reason, which has deluded you thus far, is that scientists like to ask philosophical questions too. Any question that leads to a subjective answer is philosophical by nature. For example: "Do milkmaids get smallpox less often than others?" was philosophical, but ceased to be on discovering a definitive answer. "How much is 2 + 2?" is not, though even it may have been. Do you understand the difference between objective and subjective science? Even scientific consensus is not objective, but poses philosophical questions. Evidence without proof is conjecture and theory. Only objective science is not philosophical. Scientists, therefore, practice philosophy all the time in their research, as a tool to discover understanding. So, to make this clear... You wish to separate scientist from philosopher into two people, when really it's the fields that are separate. Indeed, you should be aware that every major science, including physics, biology, and chemistry, was originally considered philosophy. This literally puts philosophy "ahead" of science. ;-) It is the analysis and speculation of philosophical thought that eventually develops the branches of science.

3. So what if several hundred years ago physics, biology and chemistry were all considered philosophy? We've learned a lot since then. "The reason, which has deluded [SIC] you thus far, is that scientists like to ask philosophical questions too." Well, so do milkmen and garbage collectors, so what? "You wish to separate scientist from philosopher into two people. When really it's the fields that are separate." So every scientist is a philosopher? No more than every composer, painter, architect, etc. is one. "It is the analysis and speculation of philosophical thought that eventually develops the branches of science." Really? What about the field of genetics, which was developed solely through hard, hands-on endeavor, as were immunology, microbiology and quantum physics, not do-nothing philosophy. "Scientists, therefore, practice philosophy all the time in their research, as a tool to discover understanding." So those researching cures for certain forms of cancer are actually practicing philosophy? So Wernher von Braun, Niels Bohr and Jonas Salk actually practiced philosophy and not science? What abysmal crap!!! Now how about providing some examples of subjective science, and while you're at it, explain how string theory, which is in its conjectural stage, is really philosophy.

4. Try some methodology of science, philosophy of science. The opinions you state are valid, but only partially.

5. I like your views, man, but I just want to inform you that the reason the Big Bang theory is the most probable explanation is because the universe is expanding, so it likely started at one point in one place. As for saying "why not say we just don't know?":
if we always said we just don't know, then we probably wouldn't learn a lot that way. Not trying to put you down; I just thought that was important.

6. Wrong. Stating that you don't know is not only honest, but a fine prelude to scientific inquiry.

7. Now this is starting to make some sense!

47. Some cool comments here, some outright angry ones, and a few mundane dismissals. ...But addressing the importance of philosophy in our understanding? Missing :-/ Philosophy and science are the yin and yang of our search for truth and understanding. They are often incompatible, yet one without the other can only hinder that search. Science STATES: 'This is the evidence.' Philosophy ASKS: 'What does that mean?' Because scientists are not responsible for the consequences of their discoveries, there is rightly no place in science for philosophical reflection (concerning ethics and morality). But for science to dismiss philosophy would be akin to saying there are no consequences, ethics or morality worth considering (a philosophical perspective in itself). Midway through, he states '...the philosophy that we have no free will'. Indeed, believing we have no free will is also a philosophy in itself, as there is no proof for denying free will (and arguably evidence for both persuasions). Philosophy only questions science when it attempts to be philosophical. Science therefore has no place being philosophical about its discoveries. One, it seems, cannot escape the other, and for good reason: they are necessarily entwined. Yet only one attacks the other with the intention to destroy it. If science wants to kill off philosophy, then it must first show it is responsible for its actions, measure and weigh up consequences, and consider ethics. This documentary shows how ill-equipped science would be in doing so, and science has openly declared itself unwilling to do so. Science is the one on the attack. Science is the one that has to be kept in check, ethically. Philosophy provides the means to do so, but only if we don't allow science to convince us it is dead. Ok, hands up: I asked for this doc to be posted up. I found it interesting and sarcastically humorous, with some great moments of truth. I hope at least some of you can take something positive away from its message. P.S. I had no idea who Mike Adams was (certainly no foul intended).

1. If philosophy is so important to science, how many philosophers are employed by CERN, NASA, Merck and other mainstream, hands-on scientific organizations? How many philosophy courses, not required of the general student population, must science majors take? And just how does philosophy manage to keep science in check ethically, assuming that this is needed? By some armchair philosopher warning of the dangers of nuclear weapons and chemical warfare? Philosophy might ask what something means, which any child of two can do, but it has yet to provide anything approaching answers, and thus is useless.

2. As soon as a philosophical question is answered, it becomes science and ceases to be philosophy. I could say to you that science has yet to ask anything of interest, and it would be just as inaccurate as your misleading claim.

3. Once again, if philosophy is so important to science, how many philosophers are employed by CERN, NASA, Merck and other mainstream, hands-on scientific organizations? How many philosophy courses, not required of the general student population, must science majors take? The proof is in the pudding.
How about a few examples of some answered philosophical questions (not scientific ones relying on hard evidence)?

4. Maybe they (the scientific organizations) should employ and consult philosophers, so then they'd have another perspective on the reasoning and moralistic queries philosophers pose.

5. A philosopher is not needed to help build a Hadron Collider, or to discover cures for various types of cancer, or for that matter anything scientific. Philosophy accomplishes absolutely nothing.

6. I sent your oft-repeated question to NASA. If or when they answer, I will post it. Either way, the answer will be interesting!

7. Would you mind posting your question?

8. No problem. First I quoted you, and then made sure the question was qualified as you did in another post: a philosopher doing philosophy, not science. See below. "A fellow on a documentary site that allows comments is of the same ilk as Richard Feynman, et al., in that he totally dismisses philosophy as useless. I know that Professor Feynman was on the team that investigated one of the shuttle accidents, but that is not on topic, just an aside. This fellow says this, in fact copies it over and over: 'If philosophy is so important to science, how many philosophers are employed by CERN, NASA, Merck and other mainstream, hands-on, scientific organizations? How many philosophy courses not required of the general student population must science majors take?' Do you have an answer to that for NASA or any of the other agencies that do hard science? Can you give a number of philosophers that do philosophy and are employed by NASA for that purpose? It would be interesting either way you answer. I will look elsewhere for the info, but I think I need an insider to answer that question."

9. Fair enough. I'd be interested in the answer. Did you write to NASA in general or to some specific department, such as personnel?

10. No one specific. I went to the Contact Us link and it went to Public Inquiries, at hq dot nasa dot gov. Could be a long wait.

11. Again, fair enough, but perhaps this should be taken further and CERN, Merck, the Venter Institute and other hard scientific entities should be contacted; and perhaps I should assist, because it is not fair for you to do all the work.

12. Okay, I'll take CERN, and Fermilab for good measure. My employer's USA home office isn't too far from there in Illinois, and I'm told it is a fascinating place to visit. I thought NASA had the best chance of having a philosopher on staff. I suppose I was thinking of things like sending people to Mars, and such. Perhaps a psychologist or psychiatrist would be more appropriate, as there are lots of people issues to solve along with the issues of getting there and doing stuff.

13. Psychologists or psychiatrists I can see as far as NASA, but these people are not philosophers.

14. No, I didn't mean to imply that at all. I should have written that so it was clear. However, I don't know if I would call psychiatry or psychology a hard science. Not yet, anyway.

15. And neither do I. Every psychiatrist I have spoken to has described his profession as an art form. However, as I stated, I can envision NASA hiring a few, but this is far different from philosophy.

16. They do use psychologists to vet their ideas already.

17. Source? One way or the other, a psychologist/psychiatrist is not a philosopher.

18. I received a reply from Fermilab today: "Hello Paul, No, we don't employ philosophers here at Fermilab. Our budget covers scientific research only. Thanks for the question."
Andre Salles, Fermilab Office of Communication"

19. Hey decaf, not quite the same thing, but CERN has Artists in Residence :)

20. Yes, I spent quite a bit of time on their site and noticed that. Looks like a fascinating place to punch in every morning. That is, unless one has chosen a career in philosophy.

21. Slap on a wide-eyed and toothy grin, mumble something about aesthetics and skip away; they'll put it down to artistic temperament, all good! ;)

22. As you've pointed out, artists in residence are not philosophers. At least artists in residence produce something.

23. True, but if they hope to bring understanding to us with wandering pencils and abstract photography, why not also through philosophy? Some of it is beautiful enough to be poetry. They have given Art a place in their world - all or none, surely? Art is subjective until the beauty of a piece becomes a consensus; then its beauty is objective - but that's just philosophy of art. As The Mighty Achem would say, it "bakes no bread" ;)

24. Art is simply the appreciation of something for its own sake. So if you want to consider philosophy an art form, go right ahead, but it's no more than that. However, philosophy itself has no place in any art form. It can't tell you how to paint a picture, compose a symphony or write a novel, and when it starts dictating on these, which all too often it does, it falls flat on its face.

25. From Wiki Answers, on why philosophy is a science and an art: "Philosophy is a science because it systematically develops a hypothesis on a premise with analytical tools to resolve the problem through logical reasoning (induced or deduced). It is always open for debate as a human endeavour to seek the truth through learnt knowledge. [edit: exactly what I have been saying all along about scientists employing philosophy as a tool for understanding] Philosophy is an art because you require inherent skills and natural ability to apply the philosophical principles."

From the Guardian: "Studying philosophy will [teach] you to think logically and critically about issues, to analyse and construct arguments and to be open to new ways of thinking."

Once again, you're dismissing what you do not understand, which unfortunately reveals you to be unscientific in your approach.

26. First of all, you need to comprehend what you read. I wrote, "So if you want to consider philosophy an art form, go right ahead, but it's no more than that." Once again, philosophy does not help to compose a symphony, write a novel, chisel a sculpture or direct a movie. I hope by now you've read the response which Pwndecaf received from Fermilab--i.e., no philosophers on staff. So much for philosophy's "importance" to science, so much for philosophy's "importance" to art, and so much for you.

27. Fair enough, philosophy can't tell art what it is. It can play with the idea of what makes art, art, though :)

28. And play is all it does. Knowing "what makes art" (assuming that this is possible) is quite different from actually doing art, which makes philosophy valueless.

29. Philosophy is the art of thinking around an idea.

30. "Around" is the operative word.

31. I think that, without its surroundings, a thought would remain asleep.

32. "Around" is still the operative word.

33. ...and from Wikipedia itself:

34. Science tries to answer important questions by coming up with answers about real things (and nothing but) and asking why; only science often succeeds, whereas philosophy generally fails.

35. Thanks for responding.
"Our budget covers scientific research only" says a lot. 36. I finally got a response from Cern. They are succinct! Sorry , we don’t employ philosophers. Best regards, Recruitment Service HR Department, CERN CH-1211 Geneva 23 TAKE PART ! My opportunities at CERN 48. Another layman who cannot solve Schrödinger equation is interpreting quantum mechanics. Seriously, be an expert first and then start to discuss such issues. Any yet, if the discussed issue cannot be falsified, it does not matter anyway. 49. To say free will is an illusion is a bit contradictory, isn't it? Whether it is or not, do we not require consciousness to interpret that? What is it that is able to recognize an illusion? What faculty decides the truth or untruth of anything? If not consciousness, what? 1. How about hard evidence and hard results? 2. ALL OF US know there is something beyond the love, you have dreams, visions, a bunch of intangibles. Only sociopaths and bafoons who are too cowardly to look at reality say stuff like this. Go ahead, live in your no-consciousness, no intangible world. Don't love anyong, don't dream, don't think or do intangible stuff cuz you NEED HARD EVIDENCE before you believe any of that exists. My sister with down sydrome has more common sense than you and some of the "intellectuals" on here who intellectualize their way into denial of reality. 3. This is one of the greatest tradegies of this species. A cold, calculative, indifferent mind. Converting everything into numbers, formulas, theories, hard evidence or lack of thereof. And then they wonder why everything around is so dull and grey and frozen. 4. ..there's plenty of evidence for a God-designed world, just like there's plenty of evidence for the invisible force of gravity (which is only theory) cuz you, me and others cannot explain the origin or essence of it, we can only describe the RESULTS or repercussions of it. This is why bafoons like you are such i*iots. You pretend that everything in the world is math and formulas and that you DON'T believe in anything invisible, like love, dreams or even gravity, the magnetic field or dark matter. then you mock folks who believe in something that's invisible because they are only going off of evidence for it and you sheeple demand hard core empirical evidence when you don't even do that for the aforementioned invisible things. What a pathetic hypocrite. 5. Since you are making a claim of "there's plenty of evidence for a God-designed world" please show us your evidence, without resorting to any circular logic etc: thank you. 50. What does it all mean? It doesn't matter. It makes no difference. If you really need there to be a "purpose," then do like Mike did: make it up. 51. What is this physical brain that so many scientists give all power to? What is it made of? What is its foundation? Science itself seems to tell us that physical-matter is not made of physical-matter; but rather indefinable vibrations of “pure energy”. What is this thing we call energy? No one knows. Until we know, what do we really know?....if anything? I only know that I don't know. 1. How about the capacity of a system to perform work for starters? Among other things, we certainly know enough to send men to the moon and build the Hadron Collider? 2. See my last post/reply to Bob Trees. 52. lol. haha hahaha. heh heh heh... I'm gonna watch it again. 1. Thank goodness! Someone at least found this funny. Good on you Jo. I thought the humour was fantastic. 2. it was fun. 
I did almost feel sorry for this guy 'Mike'; he 'almost' gets some things right, and then he goes so far wrong that the incredulity becomes satirical. I was thinking someone (maybe the BBC) could have a bit of fun creating a satire - hell - based on this spot of fluff; the script is practically written. HA... peace

3. :-( Oh dear, seems I missed your point and you mine. Did you see on RT today: frozen light? Photons frozen in time for up to 60 seconds... apparently. I think 'Mike' might be on to something.

53. With all the quality docs on this site that discuss the topic of consciousness, how on earth did this one wiggle in? There may be some interesting points that are fertile ground for comment, but anyone familiar with this Mike Adams knows not to bother.

54. Ahem. Someone sends me Mike Adams's Natural News newsletter several times a month. Everything he writes supports his bizarre worldviews and hucksters for his "snake oil" product that "cures everything." There's a "conspiracy" to prevent the world from knowing about all these cures his snake oil performs.

55. I liked this doc! I wish I had been paying more attention to it while doing other stuff, but I focused more about halfway through. I liked the different ideas presented on the big bang theory, but there was one that I kinda lean more toward. I'm thinking that the big bang is somewhat of a pulse or cycle: the universe is expanding, but at some point it contracts to the point of becoming a big bang again. I've got absolutely nothing to suggest that this is the scenario. Which brings me to another point. Why can't others come to the realization that we don't know some things? Why can't "we don't know" be an acceptable answer for some things? If physicists offer that as an answer, it seems like they are deemed to be less "knowledgeable."

1. When I was spending time at The School of Philosophy in Wellington, NZ (NOT Victoria Uni, but a private institution), we were learning what is called 'Practical Philosophy' (to some people that will be an anachronism). About two years in, we were discussing knowledge: where does it come from? One of the end products of this discussion was the thought that "You don't know what you don't know!" And this I now use to discuss these very subjects and their like!

56. Mike Adams is the owner of Natural News, a website dedicated to alternative medicine and various conspiracy theories, such as chemtrails, the alleged dangers of fluoride in drinking water, and health problems caused by "toxic" ingredients in vaccines, including the now-discredited link to autism. In addition, Adams is an AIDS denialist and a promoter of conspiracy theories surrounding the Sandy Hook Elementary School shooting, and he has endorsed Burzynski: Cancer Is Serious Business. I believe that says everything.

1. Wow. "Him smart".

2. Damn... just needed Anti-Semite as well and that would have been Bingo!

57. I love it when people are all like, "The best part of this film was the part about the universe being a simulation." Lol, if you're going to refute all of the other BS, why not refute the whole matrix idea as well?

58. I had to stop when I realized he does not understand what the "theory of everything" is meant to do or explain. The author is just seeing blindly what HE wants to think of it. Not worth watching.

1. What a hypocrite... You are complaining that the author is seeing "what HE thinks of it." THAT'S what folks do, you id**t. A person analyzes info and then draws his/her conclusions after doing so.
And here you are drawing your conclusions, being the typical arrogant hypocrite who wants to shut out other opinions.

59. Here's an idea: we need to extract an exotic particle, so we take a cylinder and fill it with plasma. We then compress it so it starts to form into particles; it creates about a hundred particles, with a number of isotopes that are particles changing from one thing to another. We then insert an organism to extract the most stable of those particles - a standard process for filtering what we want. The universe is not big; it is just that you are very small. Welcome to the cylinder. Now do as you're supposed to do and extract the most stable particle in this soup: gold. We are farmed, and therefore created to complete a task; when done, we will be flushed like all other processes when done.

60. We have a money system that creates money and then charges interest that is not created, so the only way to pay that interest is to create things of value. The only thing the bankers will take is gold, so the money system was created to force us to dig holes in the ground to collect gold. Anybody in the last 5000 years who attempted to change the system was done away with, so it is clear why man is here; you don't see lions, tigers or sparrows digging holes to collect gold. The concept of god, along with physics, was created to give the illusion of some power over the environment; the fact that there is a harmonic relationship between things shows it is one organism. We all have a liver; to the liver we are god, but can any of us really understand a liver? Can we mod it? Do we really know how it works? If it is our liver, then we must have knowledge of it to make it, but we don't. What does that tell you? In just a few years man gained virtual worlds and used them to escape, and within those he looks for other social structures to escape to, so you could say it is within the nature of life to escape. The best explanation I ever saw for god and physics came from a very old text; it said "god was lonely, so he fragmented himself to have some fun." It seems being fragmented makes you forget what you knew when whole. Whatever organism we are part of seems to need gold, as the only intelligent life in existence that we know of digs holes in the ground and converts rock into gold bars. What does that tell you? In fact, if you read the oldest texts, they tell you that, and why - but why let an old story get in the way? Gold is of no use to us, so why do we dig it up?

61. Free will exists and awareness exists, and science cannot fully understand that. It is okay; science can, eventually.

62. I noticed the analysis of physics, and the role it plays in determining our different life sciences. This doc is a great affirmation of the general evidence we hold in understanding our existence. The God Within is a title that evokes religious implications. It is not religious; it is more of a "transcendental abstract" term.

63. How could the universe not be a simulation? That is, what else could it be? What are the other options?

1. Brings up the age-old 'who created the creator' argument. If our universe is a simulation, it's being simulated on a computer in a universe built by someone or something. Not impossible, but highly unlikely. There still needs to be a universe.

2. We don't have a clue what anything could be. Time may only be a factor in the manifest world? In our world there is a start and an ending. Why is this? This must be an interlude, a stop-off on the way somewhere else? Why even bother with one time around?
Does it stand to reason there would be an abstract life without reason to be? There are many beings on Earth; we discount them as being soulless? No creator, they just are? I think we are from another "planet" and came here. We are so different then all the rest of the life forms on this Earth? And in other ways almost the same, like procreation? The basic things, caring for our young and so on. The rest of life doesn't have a Bible or Koran, etc.? Very complex?

3. Are humans more like a snake than a bat is like a snake?

4. That's a question for an evolutionary biologist.

5. "We [I assume you mean homo sapiens] are so different then [SIC] all the rest of the life forms on this Earth?[SIC]" Just how? The remainder of your post is sheer gibberish ("We don't have a clue what anything could be. Time may only be a factor in the manifest world?[SIC] There must be an interlude a stop off on the way somewhere else?[SIC] . . . does it stand to reason there would be an abstract life without reason to be."). Try again when you have something to say and can express it in clear, conventional English.

64. You'll be a fool if you take any of this man's definitions and assertions seriously. He has an agenda, and the thing is therefore fallacious from start to finish. Having said that much, it is nevertheless interesting to listen to in parts, in my opinion, especially the part regarding whether or not the universe is a simulation.

1. I agree.

2. So we're all robots? LOL. This is a perfectly coherent alternative viewpoint to modern physics...

3. Biological robots!

4. Please explain to me the "agenda" this man has?

5. He demonstrates an agenda within the first two minutes of the doc. He is disappointed with Hawking's first page of the book "The Theory of Everything". Apparently, Hawking did not validate this guy's beliefs, so he is summarily dismissed. He concedes Hawking's brilliance, but then says that Hawking lacks the insight that he, Mike Adams, has. I'm paraphrasing, of course, but he is a self-proclaimed genius. "Them scientists, they smart, but not so smart like me."

I wonder if the simulator of the simulation has his own creator. Maybe this is the nature of the universe: one simulation built onto another one. We are the leading edge, preparing to create the next simulation. Will this be a human collective effort, or maybe millions of single creators writing the code for their own private simulations... billions of universes, created just so that new simulators can create the next generation of simulations? This could lead to the implication that God has his own God, which has his own God, into infinity. This philosophy thing can be so much fun. Put out an idea, without proof, and then bask in my own brilliance. I love it.

6. You certainly have his number. Just a suggestion though: in the last line, change "my" to "one's."

7. Good suggestion, but upon reflection, I think I'll let it stand. The use of "my" demonstrates how self-indulgent this brand of philosophy can be. After all, it is me speaking and it is "my" beliefs (truth?). It is the branch of "science" (don't laugh) where any speculation can be the truth, because its very vagueness is its defence.

8. O.K., but do change the two "it's" in the last line of your previous post.

9. The 'Matryoshka Doll' theory of the universe. These types of bullsh-t-session speculations can be fun, but I don't take them too seriously, at this point. It's just entertainment, like smoking dope with Donald Sutherland's character in 'Animal House'.
But it's conceivable that there could be something to them, I suppose. As a matter of fact, physicists are set to run an experiment shortly to see whether there is any validity to the thought-experiment in the real world (I loved saying that). Here's a section from an article on the Huffington Post (the quote was garbled in transmission; this is my best reconstruction of it): Professor Martin Savage at the University of Washington says that while current simulations can only model physics on the scale of an atom's nucleus, there are already "signatures of resource constraints" which could tell us if larger models are possible. This is where it gets complex. Essentially, Savage said that the computers used to build simulations perform "lattice quantum chromodynamics calculations" - dividing space into a grid in order to simulate the strong force which binds subatomic particles together into neutrons and protons - and that the simulations lead to the development of complex physical "signatures" that researchers don't program directly into the computer. In looking for such signatures in our own universe, they hope to find similarities suggesting that it, too, is a simulation.

Whoa. Whoa, dude! (lol) My 10-year-old son checked out a YouTube video about some of Nick Bostrom's ideas with me the night before last, and dismissed any idea of a simulated universe with the heady words, "Daaad, I eat! So I'm not a simulation." Sounds just a little bit like Descartes, doesn't it?

10. I suggest Thomas Campbell's trilogy "My Big TOE".

65. What is this... an argument against the 'no free will' hypothesis? A pseudo-scientific argument for an external self or duality? An attempt to lever the words of a respected man of science like Hawking to support religion? Consciousness and free will are not external. Have you ever taken a hard look at artificial intelligence and neural networks? It's pretty spooky stuff when you discover that some simple math, compounded a billion times over, can create the infinite possibilities we call choice and the ability to learn. Most of the things those physicists have 'calculated' have been observed. The 'God particle', a.k.a. the Higgs boson, was predicted mathematically; we just didn't have the technology to observe it at the time. Our mathematical predictions tell us the universe had a point where it began; we can only theorize at this point how it began. We have no solid evidence of what the 'nothing' origin of the universe looked like, or what anything outside what we perceive as an infinite universe could really be... yet. If physicists say there's dark matter and dark energy out there, believe it. There's something they don't understand out there; it accounts for gaps in the calculations, and one day they'll come up with a way to explore the questions it raises and answer just what it is. Science is admirable in that it seeks its answers without being sidetracked by dogma and bias. There are gaps in our knowledge for sure, and we find more at every step, but let's not fill them with god. Take a second look at your work, author!

1. In what way is the "God Particle" "God"?

2. Name only. It was thought it would usher in the unified theory with a testable and observable answer to all the questions about the formation of the universe. We will wait and see if it delivers.

3. It got the name because it's supposed to be what imparts mass to certain particles, without which, of course, there would be no matter, and hence no universe as we know it.

4. That's funny - it's a "particle" itself, the "god particle". So it's a particle bringing mass to other particles, right?

5. Actually, it's the Higgs field that the particles move through that enables certain of them to acquire mass. The Higgs boson, or God particle, is what confirms the existence of the field.
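For what it's worth, here is the standard-textbook version of that last point in symbols rather than metaphor (ordinary Standard Model material, not anything from the documentary): a fermion field coupled to the Higgs field picks up its mass from the field's vacuum value v, and the boson h is the detectable ripple in that field.

\[ \mathcal{L} \supset -\,y\,\bar\psi\psi\,\frac{v+h}{\sqrt{2}} \quad\Longrightarrow\quad m_\psi = \frac{y\,v}{\sqrt{2}}, \qquad v \approx 246\ \text{GeV} \]

So "the field gives particles mass" is the mass term m = yv/sqrt(2), and finding the boson h at the LHC is what confirmed that the vacuum value v is really there.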
6. Where did the "Higgs field" emanate from? What does that "field" consist of? Where does that consistency derive from? And so on and so forth, etc.

7. Higgs mechanism...

8. Sidetracked by dogma and bias? Dogma is bias...

66. What a pile of c***. Mike Adams is a fraud and an i****.
Friday, January 31, 2014

Multiverse tries to escape the supernatural

The WSJ published this letter:

Multiverse Doesn't Help Explain Origins. Kurt Gödel's "Incompleteness Theorem" basically proves mathematically that the answer to the origin of anything, even the physical universe, always lies outside of the thing itself. Peter Woit's review of Max Tegmark's "Our Mathematical Universe" (Books, Jan. 18) emphasizes the role of math in physics but leaves out the work of 1930s mathematician Kurt Gödel and his "Incompleteness Theorem." This theorem basically proves mathematically that the answer to the origin of anything, even the physical universe, always lies outside of the thing itself. Therefore the origin of this universe, which is incredibly fine-tuned in over 100 parameters, must be a supernatural agent outside of itself. A natural agent would have to be included in the encircled physical universe. The multiverse concept basically says that there are an infinite number of universes out there, and we just happen to live in the one and only one where all the dials were randomly set "just right." In a nutshell, perhaps many are simply trying to intellectually escape from accountability to this supernatural agent. James Kraft, Green Bay, Wis.

Gödel's Incompleteness Theorem says nothing of the kind. What it actually says is that any consistent formal system strong enough to encode arithmetic contains true statements that cannot be proved within the system; it says nothing about the physical origin of things. Tegmark's answer is that Gödel uses infinities, and the physical universe has no infinities. Yet Tegmark says that there are an infinite number of universes in the multiverse.

The universe is fine-tuned, but I am not sure why that implies a supernatural agent. It has been known for centuries that the Earth is fine-tuned for life.

Tuesday, January 28, 2014

Dyson against wave-function collapse

Famous mathematical physicist Freeman Dyson answers the 2014 question, "What scientific idea is ready for retirement?":

The Collapse Of The Wave-Function. Fourscore and seven years ago, Erwin Schrödinger invented wave-functions as a way to describe the behavior of atoms and other small objects. According to the rules of quantum mechanics, the motions of objects are unpredictable. The wave-function tells us only the probabilities of the possible motions. When an object is observed, the observer sees where it is, and the uncertainty of the motion disappears. Knowledge removes uncertainty. There is no mystery here. Unfortunately, people writing about quantum mechanics often use the phrase "collapse of the wave-function" to describe what happens when an object is observed. This phrase gives a misleading idea that the wave-function itself is a physical object. A physical object can collapse when it bumps into an obstacle. But a wave-function cannot be a physical object. A wave-function is a description of a probability, and a probability is a statement of ignorance. Ignorance is not a physical object, and neither is a wave-function. When new knowledge displaces ignorance, the wave-function does not collapse; it merely becomes irrelevant.

Dyson is a genius, but this is nonsense. Observing an electron does not just remove uncertainty; it alters the electron. The wave-function may not be a physical object, but it still collapses. Dyson says that it does not collapse, but becomes irrelevant and is replaced with a new wave-function. That is what collapse means -- the old wave-function is projected onto a subspace based on the observation.
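In standard textbook notation (my gloss, not Dyson's wording): if a measurement has possible outcomes k with orthogonal projectors P_k, the Born rule and the projection postulate read

\[ p_k = \langle \psi | P_k | \psi \rangle, \qquad |\psi\rangle \;\longmapsto\; \frac{P_k|\psi\rangle}{\lVert P_k|\psi\rangle \rVert} \quad \text{when outcome } k \text{ is observed.} \]

Whether the second step is called "collapse" or "replacing an irrelevant wave-function with a new one" is terminology; the projection is the same either way.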
I thought that he was going to advocate the many-worlds interpretation (MWI), as its adherents are the main ones who argue against collapse of the wave-function. They argue that the collapsing part of the wave-function is really escaping to a parallel universe. The argument is based on a belief that wave-function uncertainty should be some sort of conserved quantity like energy, so they postulate a vast collection of unobservable alternate universes. Dyson is botching an explanation of conventional quantum mechanics. I guess that is better than advocating some completely unscientific multiverse idea.

David Deutsch likes the MWI, and hence dislikes wave-function collapse, but his answer objects to quantum jumps:

The term "quantum jump" has entered everyday language as a metaphor for a large, discontinuous change. It has also become widespread in the vast but sadly repetitive landscape of pseudo-science and mysticism. ... OK, maybe some physicists still subscribe to an exception to that, namely the so-called "collapse of the wave function" when an object is observed by a conscious observer. But that nonsense is not the nonsense I am referring to here. ... Quantum jumps are an instance of what used to be called "action at a distance": something at one location having an effect, not mediated by anything physical, at another location. Newton called this "so great an Absurdity that I believe no Man who has in philosophical Matters a competent Faculty of thinking can ever fall into it". And the error has analogues in fields quite distant from classical and quantum physics. For example, in political philosophy the "quantum jump" is called revolution, and the absurd error is that progress can be made by violently sweeping away existing political institutions and starting from scratch. In the philosophy of science it is Thomas Kuhn's idea that science proceeds via revolutions - i.e., victories of one faction over another, both of which are unable to alter their respective "paradigms" rationally. In biology the "quantum jump" is called saltation: the appearance of a new adaptation from one generation to the next, and the absurd error is called saltationism.

Deutsch is right about this, as expressed in my motto.

A new article explains that the reality of the wave function is a continuing debate:

It is not exaggerated to claim that one of the major divides in the foundations of non-relativistic quantum mechanics derives from the way physicists and philosophers understand the status of the wave function. On the instrumentalist side of the camp, the wave function is regarded as a mere instrument to calculate probabilities that have been established by previous measurement outcomes. On the other, "realistic" camp, the wave function is regarded as a new physical entity or a physical field of some sort.

That's right; those are the two main views. Both are tenable, I guess, but you should be suspicious of anyone who makes strong claims based on the reality of the wave function without recognizing the other view.

Monday, January 27, 2014

Hawking flips on black holes

Fox News reports:

Stephen Hawking now says there are no black holes, doing an about-face on the objects that helped cement his reputation as the world's preeminent scientist, ...

Here is Hawking's paper, and the New Scientist story. Another account says:

The wheelchair-bound genius has posted a paper online that demolishes modern black hole theory. He says that the idea of an event horizon, from which light cannot escape, is flawed.
It is considered one of the pillars of physics that the incredible gravitational pull created by the collapse of a star will be so strong that nothing can break free... much of this is thanks to Hawking's own work. But Hawking smashes this idea by saying that rather than there being an inescapable event horizon, we should think of a far less total "apparent horizon". And, at a stroke, he has contradicted Albert Einstein. He sets out his argument in the paper, called Information Preservation and Weather Forecasting For Black Holes, which is likely to send his fellow scientists into a spin. Hawking writes: "The absence of event horizons means that there are no black holes - in the sense of regimes from which light can't escape to infinity."

Hawking is the world's most famous physicist, and his biggest accomplishments are in theoretical work supporting the existence of black holes and event horizons. And now he puts out some stupid two-page paper saying that they don't exist?! Physics has degenerated to the point where its biggest names give nonsense interviews about nonsense papers claiming to solve nonsense problems. There is no actual scientific evidence brought to bear at all.

Peter Woit criticized Max Tegmark's math multiverse, and finds himself saying: "I am not now and never have been a creationist." He blocks comments on this subject. He seems a bit sensitive to me, as no New York liberal SWPL intellectual wants to be associated with creationists. Not that anyone even accused him of having anything to do with creationists.

Intelligent Design does have something in common with Tegmark's mathematical universe hypothesis. Both say that the universe shows properties of a mathematical design. Both refuse to be limited by materialism. Both are criticized for lacking empirical evidence. Both make arguments like "I find it valuable when the community carefully explores the full range of logical possibilities."

Woit also complains about a new movie about the Earth being the center of the universe, featuring physicists. Apparently they quote physicists about the anthropic principle, fine-tuning, etc. It may also have some people misinterpreting the Bible. I do think that a lot of physicists say crazy stuff, so they should not be too upset when a movie shows them saying crazy stuff. And yes, I expect more from physicists than I do from Bible scholars.

Update: The movie producer is defensive about the movie, and says it is about the Copernican principle. He refuses to say whether he advocates geocentrism. He quotes Einstein on Copernicus:

"The struggle, so violent in the early days of science, between the views of Ptolemy and Copernicus would then be quite meaningless. Either CS [coordinate system] could be used with equal justification. The two sentences, 'the sun is at rest and the earth moves', or 'the sun moves and the earth is at rest', would simply mean two different conventions concerning two different CS [coordinate systems]."

That is correct, and at the time of Copernicus the scientific evidence was against him. Those who say that Copernicus was more scientific than Ptolemy are seriously mistaken.

Tegmark doubles down on his creationist analogy, and argues that it is unscientific to call multiverse ideas nonsense without mentioning evidence to the contrary. I have made a point of posting Tegmark's so-called evidence. I say that there is no scientific evidence for the multiverse, except maybe for matter outside our light cone (the Level I multiverse).
Update: Wikipedia just reinserted this in its list of common misconceptions. The entry originally said that the misconception is that a black hole is like a cosmic vacuum cleaner, but many black holes are in fact sucking in huge amounts of matter. Maybe physicists like Hawking and Polchinski have more misconceptions than the laymen.

Sunday, January 26, 2014

Physicists promote quantum woo

The London Daily Mail had an article, "Quantum physics proves that there IS an afterlife, claims scientist":

By looking at the universe from a biocentrist's point of view, this also means space and time don't behave in the hard and fast ways our consciousness tells us they do. In summary, space and time are 'simply tools of our mind.' Once this theory about space and time being mental constructs is accepted, it means death and the idea of immortality exist in a world without spatial or linear boundaries. Theoretical physicists believe that there is an infinite number of universes, with different variations of people and situations taking place simultaneously. Lanza added that everything which can possibly happen is occurring at some point across these multiverses, and this means death can't exist in 'any real sense' either. Lanza, instead, said that when we die our life becomes a 'perennial flower that returns to bloom in the multiverse.' ... Lanza cites the double-slit test, pictured, to back up his claims. When scientists watch a particle pass through two slits, the particle goes through one slit or the other. If a person doesn't watch it, it acts like a wave and can go through both slits simultaneously. This means its behaviour changes based on a person's perception.

This is nonsense, but I cannot be bothered with all the nonsense in the world. This blog focuses mainly on nonsense from physicists pretending to do physics.

Physicist Phil Moriarty posted a video rant against the above article and its quantum woo. Leftist-atheist-evolutionist Jerry Coyne praises it. I agreed with much of what Moriarty said, but along the way he argues [at 5:00]:

You make a measurement on this one [particle], and this [other distant] one responds instantaneously. Not at the speed of light, instantaneously. ... We don't understand it. ... Spin is not spin.

No, it has never been shown that a particle responds instantaneously to a distant measurement. He is referring to the phenomenon of entanglement, which is well understood and explained in textbooks. (BTW, not all three volumes of the Feynman Lectures are freely online.) If such nonlocality were ever proved, a Nobel Prize would be given for it, and it would be one of the great discoveries in the history of science. A small numerical illustration of why entanglement cannot be used to signal appears at the end of this post. When genuine physicists recite this nonsense, there is little wonder that non-physicist intellectuals say it, and the popular press reports it. I blame the physicists.

His argument that spin is not spin is also nonsense. Quantum spin is the quantization of classical spin, as explained in "The electron is spinning, after all." If you treat the electron as a classical particle, you will get some paradoxes, but not just with spin. You get them with position, momentum, charge, and every other observable. Spin is real spin, just like those other observables.
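Here is the promised illustration - a minimal numpy sketch (mine, not Moriarty's) of the textbook no-signaling property: measuring one half of a Bell pair changes nothing about the other half's local statistics, which is why entanglement cannot be used to send a signal, instantaneous or otherwise.

import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a 4-component state vector.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())              # density matrix of the pair

def reduced_B(rho):
    # Partial trace over particle A: rho_B[b, b'] = sum_a rho[(a,b), (a,b')].
    r = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
    return np.einsum('abac->bc', r)          # sum over a = a'

# Projective measurement on A alone: P0 = |0><0| (x) I, P1 = |1><1| (x) I.
I2 = np.eye(2)
P0 = np.kron(np.diag([1.0, 0.0]), I2)
P1 = np.kron(np.diag([0.0, 1.0]), I2)
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1    # ensemble after A is measured

print(reduced_B(rho))                        # [[0.5 0. ] [0.  0.5]]
print(reduced_B(rho_after))                  # identical: B sees no change

The correlations only show up when the two sides later compare results, and that comparison travels no faster than light.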
Philosopher Massimo Pigliucci writes:

I've been reading for a while now Jim Baggott's Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth, a fascinating tour through cutting-edge theoretical physics, led by someone with a physics background and a healthy (I think) dose of skepticism about the latest declarations from string theorists and the like. Chapter 10 of the book goes through the so-called "black holes war" (BHW) ... And now comes what Baggott properly refers to as the reality check. Let us start with the obvious, but somehow overlooked, fact that we only have (very) indirect evidence of the very existence of black holes, the celestial objects that were at the center of the above-sketched dispute. And let us continue with the additional fact that we have no way of investigating the internal properties of black holes, even theoretically (because the laws of physics as we understand them break down inside a black hole's event horizon). We don't actually know whether Hawking radiation is a real physical phenomenon, nor whether black holes do evaporate. To put it another way, the entire BHW was waged on theoretical grounds, by exploring the consequences of mathematical theories that are connected to, but not at all firmly grounded in, what experimental physics and astronomy are actually capable of telling us. How, then, do we know if any of the above is "true"? Well, that depends on what you mean by truth or, more precisely, on what sort of philosophical account of truth (and of science) you subscribe to.

I previously made similar points in my book, How Einstein Ruined Physics. "Ruined Physics" is another way of saying "Modern Physics Has Betrayed the Search for Scientific Truth". One of my examples is how the black hole war is debated based on faulty theoretical concepts for things like information, with no possibility of scientific evidence either way. I partially blame the trend on how Einstein idolizers claim to be following in his footsteps. Baggott does not go so far in blaming Einstein.

Coyne hates Pigliucci for calling him on the bad science, philosophy, and theology of the New Atheists like himself. I am not sure about the philosophical issues, but Pigliucci does explain decisively why Coyne and the others are wrong about free will.

Thursday, January 23, 2014

Tegmark on book tour

I criticized Max Tegmark's new book, and I attended his book-tour lecture in Santa Cruz. He mainly tried to impress on the audience that the history of science shows two big trends: finding the universe to be bigger than expected, and finding it to be more mathematical than expected. He takes these trends to their logical conclusion, hypothesizing that the universe includes all imaginable possibilities, and that they are all purely mathematical.

I thought that I had an understanding of what he meant by "mathematical". But now I do not think that he has a coherent idea himself. A student asked: if the universe is reducible to math, then is math reducible to axioms, set theory, homotopy type theory, or what? He evaded the question, and did not answer it.

In response to another question, he said that he likes infinity, and mathematicians and physicists use infinity all the time, but he does not believe in it. He not only does not believe in infinite cardinals; he does not believe that the real numbers are infinitely divisible. At least not the real numbers that match up to his mathematical universe. By avoiding infinity, he says, he also avoids Gödel paradoxes.
(Update: See Tegmark's clarification in the comments below.)

This makes very little sense. The Gödel paradoxes occur with just finite proofs about finite natural numbers. I guess he can assume that the universe is some finite discrete automaton with only finitely many measurement values possible, but then the universe is not truly described by differential equations. All of his arguments for the universe being mathematical were based on differential equations.

Tegmark also spent a lot of time arguing that the government should spend a lot more money trying to reduce the risk of future disasters, such as funding the Union of Concerned Scientists or monitoring stray asteroids. He complained that Justin Bieber is more famous than the Russian naval officer who helped avert war during the Cuban missile crisis.

The trouble with this argument is that his math multiverse philosophy requires him to believe that time, randomness, probability, risk, human caring, emotion, and free will are all illusions. What seems like a choice is really determined. We might appear to be lucky when an asteroid misses the Earth, but a parallel asteroid hits a parallel Earth in a parallel universe, and someone with the same thoughts and feelings as you gets killed. The difference between you and the parallel guy who gets killed is just another illusion.

I asked him about this afterwards, and he claimed that I should care about the outcome of this universe for the same reasons that I put my clothes on in the morning. The woman next to me suggested that I read Sartre, if I wanted to blindly contemplate my own existence. No thanks; he was a Marxist kook.

I also listened to Tegmark's FQXi podcast on his new paper, Consciousness as a State of Matter, in addition to the solid, liquid, and gas states. On another blog, Tegmark gave this experimental evidence for his ideas:

a) Observations of the cosmic microwave background by the Planck satellite etc. have made some scientists take cosmological inflation more seriously, and inflation in turn generically predicts (according to the work of Vilenkin, Linde and others) a Level I multiverse.

b) Steven Weinberg's use of the Level II multiverse to predict dark energy with roughly the correct density before it was observed and awarded the Nobel Prize has made some scientists take Level II more seriously.

c) Experimental demonstration that the collapse-free Schrödinger equation applies to ever larger quantum systems appears to have made some scientists take the Level III multiverse more seriously.

Is it really completely obvious that these people are all deluded and that none of these three developments have any bearing on your question?

I can believe that there is matter outside of our observable universe (light cone), and that maybe we will get indirect evidence for it, even though we cannot see it. Call it another universe if you want. But beyond that, these multiverse arguments are silly. Weinberg's argument was merely an argument about how different dark energy densities could affect galaxy formation. It says nothing about any multiverse. (Lee Smolin gives another argument.) And those quantum experiments have produced no evidence against the Copenhagen interpretation, or you would hear about it.

Being a mathematician, my prejudices are toward a Pythagorean view that math explains everything. But Tegmark seems completely misguided to me. He has put himself out there before the public promoting these ideas as legitimate science, but I do not see it as either good math or good physics.
Update: Woit posted some sharper criticism:

The "Mathematical Universe Hypothesis" and Level IV multiverse of Tegmark's book is not "controversial". As far as I can tell, no serious scientist other than him thinks these are non-empty ideas. There is a controversy over the string theory landscape, but none here. These ideas are also not "radical", they are content-free.

That is wishful thinking. The various multiverse ideas, such as many-worlds, are increasingly popular. The only serious criticism of Tegmark, as far as I know, is my 2012 FQXi essay.

Wednesday, January 22, 2014

Quantum indefiniteness, not retrocausality

David Ellerman writes:

From the beginning of quantum mechanics, there has been the problem of interpretation, and, even today, the variety of interpretations continues to multiply [21]. ... mathematics itself contains a very basic duality that can be associated with two metaphysical types of reality: 1. the common-sense notion of objectively definite reality assumed in classical physics, and 2. the notion of objectively indefinite reality suggested by quantum physics. The "problem" of interpreting quantum mechanics (QM) is essentially the problem of making sense out of the notion of objective indefiniteness. ... There has long been the notion of subjective or epistemic indefiniteness ("cloud of ignorance") that is slowly cleared up with more discrimination and distinctions (as in the game of Twenty Questions). But the vision of reality that seems appropriate for quantum mechanics is objective or ontological indefiniteness. The notion of objective indefiniteness in QM has been most emphasized by Abner Shimony ([34], [35], [36]). ... In addition to Shimony's "objective indefiniteness" (the phrase used here), other philosophers of physics have suggested related ideas such as: Peter Mittelstaedt's "incompletely determined" quantum states with "objective indeterminateness" [31], Paul Busch and Greg Jaeger's "unsharp quantum reality" [4], Paul Feyerabend's "inherent indefiniteness" [16], Allen Stairs' "value indefiniteness" and "disjunctive facts" [37], E. J. Lowe's "vague identity" and "indeterminacy" that is "ontic" [28], Steven French and Decio Krause's "ontic vagueness" [18], Paul Teller's "relational holism" [39], and so forth. Indeed, the idea that a quantum state is in some sense "blurred" or "like a cloud" is now rather commonplace even in the popular literature. The problem of making sense out of quantum reality is the problem of making sense out of the notion of objective indefiniteness that "conflicts sharply with common sense."

The quantum indefiniteness is not so hard to understand, and it is implicit in the uncertainty principle. An electron is really a wave, and its position and momentum cannot be simultaneously observed. You can accept the objectivity of the electron, but the position and momentum are indefinite until observed.
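For reference, the formal statement of that principle (standard textbook material, not a quote from Ellerman):

\[ \sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2} \]

No quantum state assigns sharp values to both position and momentum at once, so the indefiniteness is built into the formalism itself; it is not merely a statement of the observer's ignorance.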
A reader questions whether Bohr really won the Bohr-Einstein debates. F.A. Muller writes:

In his Nobel Lecture of 1969, Murray Gell-Mann notoriously declared that an entire generation of physicists was brainwashed into believing that the interpretation problems of QM were solved, by the Great Dane.

I guess a lot of physicists and philosophers still do not accept what Bohr had to say, but it is a historical fact that Bohr's side of the argument is what made it into the textbooks. (Gell-Mann's lecture is not on the Nobel Sweden site, so I could not confirm his statement. Gell-Mann also called quarks a useful mathematical figment. He got the prize for discovering quarks, but he was always afraid to say that the quarks were real, until after everyone else accepted them.)

For a more level-headed explanation of quantum mechanics, see "Why Delayed Choice Experiments do NOT imply Retrocausality". Those experiments are a little puzzling, and they do demonstrate quantum indefiniteness, but they do not imply retrocausality, nonlocality, or other mystical concepts.

Monday, January 20, 2014

Math multiverse fails falsifiability test

Peter Woit trashes Tegmark's new book on the math universe as grandiose nonsense from a crank. Tegmark's "mathematical universe" is really a misnomer, because it is a mathematical multiverse. He does not just say that the universe is mathematical; he postulates a multitude of other unobservable universes based on purely mathematical constructions.

Massimo Pigliucci also explains what is wrong with Tegmark's idea, but gets hung up on some side issues:

Now, the spin of a particle, although normally described as its angular momentum, is an exquisitely quantum mechanical property (i.e., with no counterpart in classical mechanics), and it is highly misleading to think of it as anything like the angular momentum of a macroscopic object.

That is nonsense. Quantum spin is the quantized version of ordinary classical spin angular momentum, as explained here. Pigliucci also gets hung up on some issues involving Gödel and infinity. So does Tegmark.

A comment tries to defend Tegmark:

Does de Broglie-Bohm and Many Worlds tell you something non-empty about quantum mechanics? Tegmark's work is in the same vein. Perhaps you don't find it compelling, but that puts you in the Copenhagen camp that dismissed the utility of such alternate interpretations for years, i.e. "shut up and calculate!". Alternate interpretations don't say anything interesting if you're only interested in empirical data or calculated predictions, since all interpretations are formally equivalent. They do have significant explanatory power though, a power that Copenhagen completely lacks.

The de Broglie-Bohm and Many Worlds interpretations are also empty. They have told us nothing, and have no explanatory power. Scott Aaronson mocks the idea that these interpretations tell us something about determinism:

1. "Bohmian mechanics achieves something amazing and overwhelmingly important: namely, it makes quantum mechanics completely deterministic! Of course, the actual outcomes of actual measurements are just as unpredictable as they are in standard QM, but that's a mere practical detail."

2. "Hey, don't forget Many-Worlds, which also makes QM completely deterministic! Of course, the 'determinism' we're talking about here is at the level of the whole wavefunction, rather than at the level of the actual outcomes you actually experience in your little neck of the woods."

That's right: if some interpretations of quantum mechanics are deterministic, and some are not, then determinism is not a useful concept. And those who give arguments for or against free will based on determinism are misguided. So it is not true that those interpretations say anything interesting.

Roger Penrose proposed a provocative and speculative mechanism for how quantum mechanics could enable human consciousness and free will, called orchestrated objective reduction. It is widely regarded as having been falsified. However, new research claims that the theory is still viable.
Sean M. Carroll attacks falsifiability. Carroll has acquired a public image as a prominent physicist who speaks out for science and against religion. He is currently on the TCM TV channel promoting old movies with science themes. He is an embarrassment, because he promotes crackpot ideas and denies scientific evaluation. I had assumed that he was a Caltech professor, but he is not. His blog is on the Wikipedia local blacklist. I don't know why, except maybe for all the silly unfalsifiable ideas he promotes in the name of science.

Aaronson has the good sense to defend Popper and falsifiability, even if he does misunderstand how it would apply to a statement about zebras, but Lumo says that string theory needs a free pass:

Rahul: there are hundreds of rock-solid reasons to be near certain that string theory is right and none of these reasons has anything to do with experiments of the last 40 years. The characteristic scale of string theory - or any other hypothetical unifying theory or theory of quantum gravity - is inaccessible to direct experiments, which means that the bulk of pretty much any progress is of a mathematical nature. Am I really the first one who tells you about this fact?

Update: Aaronson adds his justification for quantum computing research:

Meanwhile, far away from the din of the circus tent lies the actual truth of the matter: that we're in more-or-less the same situation with QC that Charles Babbage was with classical computing in the 1830s. We know what we want and we know why the laws of physics should allow it, but that doesn't mean our civilization happens to have reached the requisite technological level to implement it. With unforeseen breakthroughs, maybe it could happen in 20 years; without such breakthroughs, maybe it could take 200 years or longer. Either way, though, I'd say that the impact QC research has already had on classical computer science and on theoretical and experimental physics more than justifies the small number of people who work on it (fewer, I'd guess, than the number of people whose job is to add pointless features and bugs to Microsoft Office) to continue doing so.

I guess that there is some worthwhile research that makes extravagant claims in order to get funding. I do not see that as a good thing. Also, Google still hopes for QC performance.

Friday, January 17, 2014

No evidence of quantum speedup

Quantum computing is one of the most overhyped technologies of our day, with nearly all the press stories and experts saying that it is an inevitable consequence of known physics. And yet after maybe $100M of research money over 20 years, it has been a total failure. MIT quantum computing theorist Scott Aaronson announces:

A few days ago, a group of nine authors (Rønnow, Wang, Job, Boixo, Isakov, Wecker, Martinis, Lidar, and Troyer) released their long-awaited arXiv preprint Defining and detecting quantum speedup, which contains the most thorough performance analysis of the D-Wave devices to date, and which seems to me to set a new standard of care for any future analyses along these lines.

The paper says:

The development of small-scale digital and analog quantum devices raises the question of how to fairly assess and compare the computational power of classical and quantum devices, and of how to detect quantum speedup. ... we find no evidence of quantum speedup when the entire data set is considered ...

The sketch below shows what such a test amounts to.
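The numbers here are invented for illustration only (the paper uses carefully measured D-Wave and classical-solver timings); the point is that a constant-factor ratio is not a quantum speedup - only a ratio that grows with the problem size N would be.

import numpy as np

# Hypothetical median times-to-solution (seconds) at increasing problem sizes.
# These timings are made up for illustration; they are not from the paper.
N  = np.array([32, 64, 128, 256, 512])
tc = np.array([0.01, 0.08, 0.9, 11.0, 140.0])   # classical solver
tq = np.array([0.02, 0.15, 1.7, 22.0, 270.0])   # quantum device

S = tc / tq                                      # speedup as a function of N
slope = np.polyfit(np.log(N), np.log(S), 1)[0]   # growth rate of S with N
print(S)       # roughly constant ~0.5: the device is a constant factor slower
print(slope)   # ~0: the ratio does not grow with N, i.e. no quantum speedup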
Separately, M.I. Dyakonov posts another paper, Prospects for quantum computing: extremely doubtful:

When will we have useful quantum computers? The most optimistic experts say: "In 10 years." Others predict 20 to 30 years, and the most cautious ones say: "Not in my lifetime." The present author belongs to the meager minority answering "Not in any foreseeable future," and this paper is devoted to explaining such a point of view.

I agree with him, for reasons stated here, and I have been banned from Aaronson's blog for expressing that view. (In fairness, it is alleged that Dyakonov does not address this paper from last month.)

I agree with Lumo that scientific theories need to be falsifiable, and quantum computing has not yet been falsified. But unless someone finds some evidence of a quantum speedup, people are going to stop believing in this nonsense.

In this recent NPR interview, Seth Lloyd and others describe quantum computing as technologically inevitable, with breaking communications in real time perhaps five years away. This is as crazy as saying that we might have a manned space station on Jupiter in five years.

The above D-Wave device supposedly has 503 qubits. Meanwhile, the NSA classifies any research on a mere 3 logical qubits. If the research were showing an increasing number of qubits with a corresponding quantum speedup, then there would be reason to believe that some progress was being made. But whether the device has 1 or 503 qubits, no quantum speedup has been found.

Wednesday, January 15, 2014

The reality of the quantum state

Many physicists have had this idea: Notes on the reality of the quantum state.

Monday, January 13, 2014

Tegmark book pushes math universe

MIT and FQXi physicist Max Tegmark writes in his new book, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality:

In summary, there are two key points to take away: The External Reality Hypothesis implies that a "theory of everything" (a complete description of our external physical reality) has no baggage, and something that has a complete baggage-free description is precisely a mathematical structure. Taken together, this implies the Mathematical Universe Hypothesis, i.e., that the external physical reality described by the theory of everything is a mathematical structure. So the bottom line is that if you believe in an external reality independent of humans, then you must also believe that our physical reality is a mathematical structure. Everything in our world is purely mathematical - including you. ... This crazy-sounding belief of mine that our physical world not only is described by mathematics, but that it is mathematics, makes us self-aware parts of a giant mathematical object. As I describe in the book, this ultimately demotes familiar notions such as randomness, complexity and even change to the status of illusions; it also implies a new and ultimate collection of parallel universes so vast and exotic that all the above-mentioned bizarreness pales in comparison, forcing us to relinquish many of our most deeply ingrained notions of reality.

Yes, it is a crazy idea. A much more sensible SciAm reader comments:

Confusing mathematics with physical reality (extreme Platonism) has a long and undistinguished history going back to a few ancient Greek philosophers. No one doubts that mathematics is an effective way to describe the physical world, but to give mathematics some sort of physical substance, or to say there is no physical substance - it's just math - is a fairly bizarre way to understand nature. Could this conjecture be tested? I doubt it. Is it science? Not by my definition.
A more sensible view of mathematics is that it is a very effective, but artificial and invented, language we use to model nature. Further, all mathematical models are approximations to physical reality, and thus subject to change and evolution. The extreme Platonists like Tegmark are true believers who want to lead us into the completely abstract and unscientific world of fantasy, and define it as "reality". We do not have to follow. ... Michael S. Turner said in the late 1990s that the "go-go junk bond days of physics were over". That may have been wishful thinking, or maybe it was just what was PC at the moment, but the fact is that junk-bond physics has grown even more prolific and exotic since then.

He is right - the junk-bond physics has grown even more prolific and exotic than ever. I explained Tegmark's errors in my FQXi essay. Tegmark replies:

Thanks Robert for raising these important issues! I discuss them extensively in the book, exploring the full spectrum of views. Like you and Popper, I view untestable theories as unscientific. Please beware that parallel universes are not a theory, merely a prediction of certain mathematical theories (such as cosmological inflation and unitary quantum mechanics), which have in turn passed some experimental tests (hence their popularity) but may be ruled out during new experiments in coming years. ... You asked for more detailed material: you'll find a sample on [my web site] (click either "Popular" or "Technical" depending on your taste), and please feel free to ask me direct questions on Facebook as well. The book is of course mainly on uncontroversial but fascinating recent discoveries in modern physics, from cosmology to particle physics, but Scientific American predictably chose to highlight some of the most controversial material.

So he blames SciAm for this choice? Tegmark has spent much of his career promoting the math (multi)universe. See his 2003 SciAm cover story on Parallel Universes - "Not just a staple of science fiction, other universes are a direct implication of cosmological observations" - and his defense against criticisms, The Multiverse Strikes Back. His own web site says:

Articles about the book
• Discover magazine, December 2013 issue (excerpts from Chapter 10; paywall disappears December 3, 2013)
• Discover magazine (interview with me about a key idea from the book)

Those articles are titled "Everything in the Universe Is Made of Math - Including You" and "Is the Universe Actually Made of Math? Cosmologist Max Tegmark says mathematical formulas create reality." So yes, the SciAm excerpt is on the "key idea" from the book.

His proposal is silly because no one has ever reduced any physical object to math, not even a photon or electron. By his own admission, he needs to assume that randomness, complexity, and change (over time) are just illusions. These are philosophical issues that were debated by the ancient Greeks. His position is no stronger today than it was 2300 years ago.

Update: Woit reviews Tegmark.

Thursday, January 9, 2014

Predicting quantum computers

Brian Hayes describes quantum computers and writes:

Languages such as QCL and Quipper may well solve the problem of how to write programs for quantum computers. ...

If that is a prediction, then I would like to bet against it. Those programming languages will never run faster on a quantum computer than on the simulator. Hayes never mentions that quantum computers may be impossible, and that those dozen qubits are not real.
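For context on why "faster than the simulator" is the benchmark: a classical simulator must store 2^n amplitudes for n qubits, so its cost doubles with every added qubit. A minimal state-vector simulator (my sketch, not anything from Hayes's article) fits in a few lines:

import numpy as np

def apply_gate(state, gate, target, n):
    # Apply a 2x2 single-qubit gate to qubit `target` of an n-qubit state.
    psi = state.reshape((2,) * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    psi = np.moveaxis(psi, 0, target)    # tensordot moved the new axis to front
    return psi.reshape(-1)

n = 20                                   # 2**20 = 1,048,576 complex amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # start in |00...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):
    state = apply_gate(state, H, q, n)   # uniform superposition over 2**n states
print(abs(state[0])**2)                  # 2**-20: all outcomes equally likely

Each extra qubit doubles the memory, which is exactly why a genuine quantum speedup would matter - and why a handful of qubits that a laptop can easily simulate demonstrates nothing.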
Monday, January 6, 2014

Misconceptions about QM and free will

Swedish physicist Sabine Hossenfelder posts 10 Misconceptions about Free Will:

No, saying that science cannot answer a question means that there is no experiment or set of observations to resolve the question. Are sunsets beautiful? Is Obamacare a failure? Is P = NP? These are legitimate questions, but science cannot answer them.

There you have the folly of modern physics in a nutshell. Yes, of course free will lies at the heart of how we understand ourselves. Yes, free will plays a central role in the foundations of quantum mechanics, as QM is unique among modern scientific theories in that it accommodates free will, as shown by the free will theorem. But no, an Einsteinian belief in determinism will never lead to any progress in quantum gravity.

She lays bare her illogical argument:

So, first the facts.

It would be understandable if she said this in the 19th century. But QM is our leading theory, and it teaches that future decisions are not determined by the past, and that they are only random in the sense of not being predictable by others. In other words, QM is completely compatible with free will.

If you think that QM is somehow incompatible with free will, ask yourself this question: Is it possible for any physical theory to be more compatible with free will than QM? I do not see how. Free will is baked into QM. If the answer is yes, then ask why no one has ever proposed such a theory. If the answer is no, and you still oppose free will, then your opposition to free will is purely philosophical and independent of any physical principles. Some ancient philosophers opposed free will, so it is possible to take such a view. Just do not pretend that the view is informed by science, because it is not.

The conflict that doesn't go away is this:

There's no sense in which you can change your future. ... "I could go and jump out of a window. That would change my future dramatically." No it wouldn't. Your future either does or doesn't contain you jumping out the window. There's nothing you can change about that.

The root of her problem is that she does not believe in counterfactuals. She rejects a concept that is easily understood by little kids. I will post more on that. She believes in Einsteinian determinism, and even likes superdeterministic hidden variables theories, more commonly known as "conspiracy theories". Lumo addresses her misconceptions one by one.

Saturday, January 4, 2014

NSA classifies quantum computers

According to leaks reported by the Wash. Post, the NSA classifies quantum crypto research:

The National Security Agency is conducting what it calls "basic research" to determine whether it's possible to build a quantum computer that would be useful for breaking encryption. ...

"It seems improbable that the NSA could be that far ahead of the open world without anybody knowing it," MIT Professor of Electrical Engineering and Computer Science Scott Aaronson told the Post.

We have to keep up with all the others making no progress in this field:

Seth Lloyd, an MIT professor of quantum mechanical engineering, said the NSA's focus is not misplaced. "The E.U. and Switzerland have made significant advances over the last decade and have caught up to the U.S. in quantum computing technology," he said.

Here are the qubit details:

(S//REL) Level B QC - Classified theoretical and/or experimental research in the design, physical implementation, and operation of quantum computers, as established by the Laboratory for Physical Sciences/R3.
The boundaries are based on the number and quality of qubits, realism and specificity of design, control precision, and detail of analysis. While these boundaries may change over time, as of the publication of this guide, the values are: (1) (S//REL) Detailed engineering design of 51 or more physical qubits; (2) (S//REL) Implementation and operation of a high-fidelity 21-or-more physical-qubit device; or (3) (S//REL) Implementation and operation of three (3) or more logical qubits, with sufficient speed and precision to allow preservation of quantum information and logical gates between the qubits.

So they classify 51 physical qubits, but a mere 3 logical qubits. A qubit is not really a qubit unless it can preserve quantum information and be operated on by logic gates. So the NSA is using the term "logical qubit" for a true qubit, which has never been demonstrated. Apparently it is customary for quantum computer researchers to claim some large number of qubits, but they are not real qubits. The NSA will have a major top-secret breakthru if it obtains a device with a mere 3 logical qubits.

I am still banned from Aaronson's blog. He recently posted, in response to skeptics:

In my personal opinion, proving that QC can work would be at least as interesting—purely for fundamental physics, setting aside all applications!—as the discovery of the Higgs boson was. And our civilization (OK, mostly the EU) decided that finding the Higgs boson was worth $11 billion. It's hard for me to understand how people can simultaneously believe that that was justified, and that spending a comparable amount to prove the reality of beyond-classical computational power in our universe wouldn't be.

Here is my censored reply:

The EU would not have agreed to spend $11B just to find the Higgs. The LHC was sold as a machine to disprove the Standard Model and find more fundamental laws of physics. If they knew that the LHC was just going to confirm what was already known, plus give a mass value for the Higgs, I doubt that they would have funded it.

Physicists overhype these projects in order to get funding. They are still complaining about how the Texas Superconducting Super Collider got killed, but that was supposed to discover a new unified field theory. The LHC has only produced evidence confirming the theory that was developed and accepted in the 1970s.

The promise for quantum computing has been going on for about 20 years, and will probably go on for another 20 years. No computational shortcuts have ever been demonstrated. A top-secret NSA 3-qubit computer would not do much either. The NSA can say that it has to do this research to keep up with whomever else might be doing it.

Friday, January 3, 2014

Mermin taking Bohr seriously

N. David Mermin writes in QBism as CBism: Solving the Problem of "the Now":

In a Physics Today Commentary,1 and more carefully, extensively, and convincingly with Chris Fuchs and Ruediger Schack,2 I argued that stubborn longstanding problems in the interpretation of quantum mechanics fade away if one takes literally Niels Bohr's dictum3 that the purpose of science is not to reveal "the real essence of the phenomena" but to find "relations between the manifold aspects of our experience."

I have mentioned QBism and Mermin's coauthored paper, with the comment that people should have just listened to Bohr in the first place. The 2012 Physics Today article starts:

Quantum mechanics is the most useful and powerful theory physicists have ever devised.
Yet today, nearly 90 years after its formulation, disagreement about the meaning of the theory is stronger than ever. New interpretations appear every year. None ever disappear.

Probability theory is considerably older than quantum mechanics and has also been plagued from the beginning by questions about its meaning. And quantum mechanics is inherently and famously probabilistic. For the past decade, Carl Caves, Chris Fuchs, and Ruediger Schack have been arguing that the confusion at the foundations of quantum mechanics arises out of a confusion, prevalent among physicists, about the nature of probability.1 They maintain that if probability is properly understood, the notorious quantum paradoxes either vanish or assume less vexing forms.

I agree with most of that, except that I say that quantum mechanics is no more probabilistic than any other scientific theory. Many times on this blog I have defended the Copenhagen interpretation against those who say that it is incoherent, unscientific, and obsolete. I repeatedly trash big-shot physicists who promote many-worlds or nonlocal interpretations of quantum mechanics as being somehow necessitated by disproving traditional interpretations. Mermin and Lubos Motl are about the only living physicists with whom I have agreed on this.

To me, the paradox is why so many smart people find quantum mechanics so paradoxical, when the essentials were so clearly explained by Bohr, Heisenberg, and von Neumann 80 years ago. Mermin's theory for this has to do with a refusal to accept a probability interpretation. I have some other ideas, and I will be posting them.

Mermin defends his article with the opinion of another quantum founder, in a 1931 letter from Erwin Schrödinger to Arnold Sommerfeld:3

"Quantum mechanics forbids statements about what really exists--statements about the object. It deals only with the object-subject relation. Although this holds, after all, for any description of nature, it appears to hold in a much more radical and far-reaching sense in quantum mechanics."

There is also a May 2013 SciAm article on QBism where it is again treated as something new, but it is really just the Copenhagen interpretation. Anyone proposing a new (or old) quantum interpretation should at least recognize the Bohr-Heisenberg position, as that was considered standard for decades. People today act as if Bohr's position was nonsensical, as can be seen in the recent Dilbert cartoon. I am not sure why a new name is needed to reiterate what Bohr said. After all, Bohr won those Bohr-Einstein debates back in the 1930s.

Update: Mermin previously promoted some opposing views of quantum mechanics in what he called the "Ithaca interpretation" (he is a professor in Ithaca). He said in 1996:

To live with so many requirements I need room for maneuver. This is provided by adopting, as my sixth and final desideratum, the view that probabilities are objective intrinsic properties of individual physical systems. I freely admit that I cannot give a clear and coherent statement of what this means. The point of my game is to see if I can solve the interpretive puzzles of quantum mechanics, given a primitive, uninterpreted notion of objective probability. ... It therefore appears that the view of probability underlying the Ithaca interpretation must be anti-Bayesian.
And he said in 1997:

I shall not explore further the notion of probability and correlation as objective properties of individual physical systems, though the validity of much of what I say depends on subsequent efforts to make this less problematic. My instincts are that this is the right order to proceed in: objective probability arises only in quantum mechanics. We will understand it better only when we understand quantum mechanics better. My strategy is to try to understand quantum mechanics contingent on an understanding of objective probability, and only then to see what that understanding teaches us about objective probability.10

10 That objective probability plays an essential role in the quantum mechanical description of an individual system was stressed by Popper, who used the term "propensity". See Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, Totowa, New Jersey, 1982. Heisenberg may have had something similar in mind with his term "potentia". While I agree with Popper that quantum mechanics requires us to adopt a view of probability as a fundamental feature of an individual system, I do not believe that he gives anything like an adequate account of how this clears up what he called the "quantum mysteries and horrors".

I do not agree that there is any such thing as "objective probability" or propensity in quantum mechanics, any more than probability figures into other scientific theories. Apparently Mermin tried to improve quantum mechanics with his own Ithaca interpretation and objective probability. Then he realized that this was all a mistake, switched back to completely subjective probability, and called it QBism. I don't want to blame him for changing his mind, but it would be nice if he explained that QBism is a throwback to Bohr and that its novelty lies mainly in correcting errors by himself and others.

A comment below disagrees with me, and cites R.P. Feynman for how illogical QM is. I cannot agree. Feynman's textbook on quantum mechanics is now online, and it is not nonsensical, absurd, or illogical. He gives a coherent exposition of the theory. Yes, he has also been quoted as saying QM is strange and mysterious.
When solving the Schrödinger equation for the finite potential well, the solutions outside the well are
$$\psi_{1}=Fe^{-\alpha x}+Ge^{\alpha x}$$
and
$$\psi_{3}=He^{-\alpha x}+Ie^{\alpha x}.$$
However, when solving the Schrödinger equation for the quantum barrier, the solutions in the three regions are
\begin{align} \psi_{L}(x)&=A_{r}e^{ik_{0}x}+A_{l}e^{-ik_{0}x}, & x<0,\\ \psi_{C}(x)&=B_{r}e^{ik_{1}x}+B_{l}e^{-ik_{1}x}, & 0<x<a,\\ \psi_{R}(x)&=C_{r}e^{ik_{2}x}+C_{l}e^{-ik_{2}x}, & x>a. \end{align}
The solutions for the quantum barrier have the imaginary unit $i$ in the exponent of $e$, for example $A_{l}e^{-ik_{0}x}$, but the solutions for the finite potential well do not, for example $Fe^{-\alpha x}$. So why is there an $i$ in the exponent for the quantum barrier problem but not for the potential well problem, when both are obtained by solving the Schrödinger equation in the form $\frac{d^{2}}{dx^{2}}\psi(x)=b\,\psi(x)$?

Thank you for the reply; I understand the quantum barrier and potential well a bit better now. However, I have a follow-up question: why does the region with potential $V=0$ in the finite potential well have a wavefunction of the form
$$\psi_{2}=A\sin kx+B\cos kx,$$
while for the quantum barrier the regions with potential $V=0$ have a wavefunction of the form
$$\psi_{L}(x)=A_{r}e^{ik_{0}x}+A_{l}e^{-ik_{0}x},\quad x<0\,?$$
Why does the Schrödinger equation produce different results for the regions where $V=0$?

• Note that, in either case, the coefficient can be imaginary, and so one could, for example, stipulate that $\alpha = ik_1$ and then write $\psi_C(x) = B_re^{\alpha x}+B_le^{-\alpha x}$. – Alfred Centauri Nov 20 '17 at 15:51
• The Ansatz is $\psi(x) = A\,e^{\alpha x}$ for general (non-zero) complex $A$ and $\alpha$. The exact form of $\alpha$ and $A$ is determined from boundary conditions. – DanielC Nov 20 '17 at 16:04
• hyperphysics.phy-astr.gsu.edu/hbase/quantum/pfbox.html – Gert Nov 20 '17 at 17:08
• Just remember Euler's formula. – Ruslan Nov 23 '17 at 20:42

These are two different situations of the TISE$^1$:

1. A bound state has $E<0$, and the wave function
$$ \psi(x)~=~Ae^{-\kappa |x|}, \qquad \kappa~:=~\frac{\sqrt{-2mE}}{\hbar}~>~0, \tag{1}$$
decreases exponentially in the asymptotic regions $|x|\to \infty$. An exponentially decreasing wave function is the hallmark of negative kinetic energy, i.e. quantum tunneling into classically forbidden regions.

2. A scattering state has $E>0$, and the wave function
$$ \psi(x)~=~A_+e^{ik x}+A_-e^{-ik x}, \qquad k~:=~\frac{\sqrt{2mE}}{\hbar}~>~0, \tag{2}$$
behaves oscillatorily in the asymptotic regions $|x|\to \infty$. An oscillatory wave function is the hallmark of positive kinetic energy, i.e. classically allowed regions.

Or alternatively: note that when the energy $E$ changes sign from negative to positive, the square root $\kappa$ in eq. (1) becomes imaginary and can be identified with $\pm ik$ from eq. (2), cf. the comments by Alfred Centauri & DanielC.
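To spell out that identification with a short worked identity (following Ruslan's hint about Euler's formula):
$$e^{\pm ikx}=\cos kx\pm i\sin kx \quad\Longrightarrow\quad A_{r}e^{ikx}+A_{l}e^{-ikx}=(A_{r}+A_{l})\cos kx+i(A_{r}-A_{l})\sin kx,$$
so the exponential and trigonometric forms in the $V=0$ regions are the same general solution with relabeled (complex) constants. Likewise, writing the TISE as $\psi''(x)=b\,\psi(x)$: for $b>0$ (classically forbidden, $E<V$) the general solution is $Fe^{-\sqrt{b}\,x}+Ge^{\sqrt{b}\,x}$ with real exponents, while for $b<0$ (classically allowed, $E>V$) one has $\sqrt{b}=i\sqrt{|b|}$, and the same exponentials acquire the factor of $i$.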
(By the way, there is another intimate relation between bound states and scattering states: if we analytically continue the real $k$ into the complex plane $\mathbb{C}$, then the scattering reflection and transmission coefficients have poles at positions $k=i\kappa$ along the imaginary axis of the complex $k$-plane whenever $\kappa>0$ corresponds to one of the discrete bound states, cf. e.g. Ref. 1.)

References:
1. P.G. Drazin & R.S. Johnson, Solitons: An Introduction, 2nd edition, 1989; Section 3.3.

$^1$ Apart from gravity, the potential function $V(x)$ is only physically relevant up to a constant. Let us here for simplicity adjust the constant so that the potential vanishes in the asymptotic regions, i.e. assume that $V(x)\to 0$ for $|x|\to \infty$.

• Thanks, but I have further questions about why the Schrödinger equation produces different results for the $V=0$ regions of the finite potential well and the potential barrier. If you know about it, please consider answering; it would be very useful. Thanks very much. ;) – Just A Bad Programmer Nov 23 '17 at 15:25
• I updated the answer. – Qmechanic Nov 23 '17 at 20:23
PT-symmetric Quantum Walks and Centrality Testing on Directed Graphs

Requires a Wolfram Notebook System

This Demonstration lets you draw a graph using the mouse (or choose a premade graph from the drop-down menu) and then view the continuous-time quantum walk (CTQW) probability distribution. If the graph drawn is directed, then the CTQW probability blows up to infinity. This is avoided, however, for directed graphs with PT-symmetry (pseudo-Hermiticity). You can also choose to view the vertex centrality measurement of the graph, comparing the classical PageRank algorithm, the time-averaged CTQW, and the pseudo-Hermitian CTQW.

To draw the graph, click and drag to create vertices and edges, click to create disjoint vertices, and click an existing vertex to create a self-loop. Vertices can be deleted by right-clicking and moved by holding down the CTRL/CMD key and dragging. To make a directed edge undirected, draw a new edge along it in the opposite direction.

Contributed by: Josh Izaac (July 2016)
After work by: Josh Izaac, Jingbo Wang, Paul C. Abbott, and Xiaosong Ma
Open content licensed under CC BY-NC-SA

Snapshot 1: The CTQW of the directed six-vertex cycle graph is shown. As the graph is directed, the Hamiltonian is non-Hermitian, and it can be seen that probability is not conserved.

Snapshot 2: This is the pseudo-Hermitian CTQW (η-CTQW) of the same graph; by pseudo-Hermiticity, we can define a similarity transform that allows the walker probability to be conserved.

Snapshot 3: This shows the vertex centrality ranks of the graph using three different measures: (1) the classical PageRank; (2) the non-unitary CTQW; and (3) the unitary η-CTQW.

Snapshot 4: This is similar to Snapshot 3, but this time plotted on a line plot, with the axis sorted from the vertices with the highest PageRank (left) to the vertices with the lowest PageRank (right).

In a classical random walk, when a walker is on a vertex with n possible edges to walk along, the walker flips an n-sided coin to decide which of the edges to walk down. In a quantum walk, however, the walker utilizes quantum superposition to walk down all possible edges. This leads to markedly different properties from the classical walk, and the quantum walk is able to propagate through a graph quadratically faster than the classical walk [1]. In fact, the quantum walk has been shown to be a system of universal quantum computation—any quantum circuit can be reformulated as a quantum walk on a graph [2].

The continuous-time quantum walk on a graph is defined as follows. For a graph $G$, composed of vertices and edges and with adjacency matrix $A$, the Hamiltonian can be given by either the adjacency matrix itself or the graph Laplacian, depending on the convention preferred (this can be set in the Demonstration using the Hamiltonian radio buttons). Solving the Schrödinger equation gives the formal solution $|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle$, where $|\psi(0)\rangle$ is the initial state and the state at time $t$ is described by the complex wavefunction $|\psi(t)\rangle$. In this Demonstration, the initial state has been chosen to be an equal superposition of all vertices, $|\psi(0)\rangle = \frac{1}{\sqrt{N}}\sum_{j=1}^{N}|j\rangle$.

If the graph is directed, then the adjacency matrix and the Hamiltonian become nonsymmetric and non-Hermitian. As a consequence, the time-evolution operator $e^{-iHt}$ is no longer unitary, causing the probability of the walker to blow up to infinity over time. As such, the standard CTQW is unsuited to walks on directed graphs. This can be seen in the Demonstration by viewing plots of the CTQW for a directed graph, for example the directed six-vertex cycle graph of Snapshot 1. One solution explored here comes in the form of PT-symmetry [3].
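As a rough numerical check of this non-conservation, here is a minimal Python sketch (this Demonstration itself is written in Mathematica, so the sketch is an independent illustration, not its source). It assumes the adjacency-matrix convention for the Hamiltonian and, unlike the Demonstration, starts the walker localized on a single vertex, since for the pure directed cycle the equal superposition happens to be an eigenstate whose norm is preserved:

import numpy as np
from scipy.linalg import expm

N = 6
# Directed 6-vertex cycle: a single edge j -> j+1 (mod N).
A_directed = np.zeros((N, N))
for j in range(N):
    A_directed[(j + 1) % N, j] = 1.0
A_undirected = A_directed + A_directed.T  # symmetrized (Hermitian) version

psi0 = np.zeros(N)
psi0[0] = 1.0  # walker starts on vertex 0 (an assumption of this sketch)

def total_probability(H, t):
    # CTQW evolution |psi(t)> = exp(-iHt)|psi(0)>; return summed |amplitude|^2.
    psi_t = expm(-1j * H * t) @ psi0
    return np.sum(np.abs(psi_t) ** 2)

for t in (1.0, 2.0, 4.0, 8.0):
    print(f"t = {t:3.0f}   undirected: {total_probability(A_undirected, t):8.4f}"
          f"   directed: {total_probability(A_directed, t):12.4f}")

The undirected (Hermitian) walk keeps the total probability at 1, while the directed walk's total probability grows without bound, which is exactly the blow-up described above.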
Graphs that exhibit PT-symmetry, while still non-Hermitian and non-unitary, have real eigenvalues. This results in a total probability that oscillates with time—this can be seen by choosing an example of a PT-symmetric graph from the drop-down box in the Demonstration. In order to ensure the probability remains constant with time, we can utilize the "pseudo-Hermiticity" of these graphs by finding an operator $\eta$ such that $h = \eta H \eta^{-1}$, where $h$ is Hermitian yet retains information about the structure of the graph. This is known as a pseudo-Hermitian CTQW [4], or η-CTQW for short, and can be viewed by clicking the "Pseudo-Hermitian η-CTQW" tab.

Finally, we compare the vertex centrality rankings of the graph vertices using the CTQW, the η-CTQW, and the classical PageRank algorithm. For the CTQW and η-CTQW, this is done simply by using the time-averaged probability at each vertex. The PageRank algorithm was created by Google for ranking its search results; it finds the fixed points of the so-called Google matrix, defined by $G = \alpha S + (1-\alpha)\frac{1}{N}J$, where $S$ is the column-normalized adjacency matrix, $J$ is the all-ones matrix, and $\alpha$ is generally chosen to be 0.85 (a minimal sketch of this iteration follows the references below). In this Demonstration, you can view the vertex rankings in a line plot or a bar chart, and reorder the plots based on each ranking algorithm. In the toolbar, there is also a slider allowing you to change the value of $\alpha$ used in the PageRank algorithm.

[1] J. Kempe, "Quantum Random Walks: An Introductory Overview," Contemporary Physics, 44(4), 2003 pp. 307–327. doi:10.1080/00107151031000110776.
[2] A. M. Childs, "Universal Computation by Quantum Walk," Physical Review Letters, 102(18), 2009. doi:10.1103/PhysRevLett.102.180501.
[3] C. M. Bender and S. Boettcher, "Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry," Physical Review Letters, 80(24), 1998 pp. 5243–5246. doi:10.1103/PhysRevLett.80.5243.
[4] J. Izaac, J. B. Wang, P. C. Abbott, and X. S. Ma, "Quantum Centrality Testing on Directed Graphs via PT-Symmetric Quantum Walks," 2016.
[5] S. Brin and L. Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Computer Networks and ISDN Systems, 30(1–7), 1998 pp. 107–117. doi:10.1016/S0169-7552(98)00110-X.
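For reference, here is a minimal Python power-iteration sketch of the classical PageRank baseline, following the Brin-Page construction cited in [5]; the dangling-vertex handling and the convergence tolerance are standard choices for the sketch, not taken from this Demonstration's source:

import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    # Google matrix G = alpha*S + (1 - alpha)*J/N, with S the
    # column-normalized adjacency matrix (A[i, j] = 1 for edge j -> i)
    # and J the all-ones matrix; dangling columns are spread uniformly.
    N = A.shape[0]
    S = np.zeros((N, N))
    col_sums = A.sum(axis=0)
    for j in range(N):
        S[:, j] = A[:, j] / col_sums[j] if col_sums[j] > 0 else 1.0 / N
    G = alpha * S + (1.0 - alpha) / N
    r = np.ones(N) / N  # start from the uniform distribution
    while True:         # power iteration converges to the fixed point G r = r
        r_next = G @ r
        if np.linalg.norm(r_next - r, 1) < tol:
            return r_next
        r = r_next

# Example: the directed 6-vertex cycle; by symmetry every vertex ranks equally.
N = 6
A = np.zeros((N, N))
for j in range(N):
    A[(j + 1) % N, j] = 1.0
print(np.round(pagerank(A), 4))  # uniform scores of 1/6 on each vertex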
Nine hypotheses concerning the measurement problem

That a measurement problem exists was already clear in the first half of the 20th century. The question is how and why the quantum state wave, which is in fact a probability distribution, comes to an end. Probabilities are non-physical and therefore have the substance of a thought. Despite all the discussions, there is still no consensus among physicists about the correct interpretation. Therefore I present here a very brief overview of the most common hypotheses, each having its own club of supporters. Hypotheses 1, 2, 3, 4, 5 and 7 try to save our image of an objective material universe. Hypothesis 6 is strongly inspired by virtual computer technology. The last two, 8 and 9, introduce the non-physical consciousness of the observer as an explanation. For the experimental proof that consciousness should not be ignored, read here. The order in which the hypotheses are presented is not an indication of their degree of acceptance by physicists.

1: Copenhagen interpretation of Niels Bohr and Werner Heisenberg

The quantum state wave is non-physical. This state wave collapses through a measuring instrument of sufficient size.

Criticism: "Sufficient size" has not been specified by Bohr. The double-slit slide itself is undoubtedly a measuring device that is very large in relation to the quantum object, but it obviously does not collapse the state wave, because then we would not see interference. Finally, every measurement system is always connected to the rest of the world.

2: Decoherence

The molecular unrest of the measuring device causes the state wave to collapse in such a way that only one of all these possibilities remains, which then becomes the measured object. This is the reason that qubits, the components of quantum computers, are cooled to near absolute zero and are mounted extremely vibration-free.

Criticism: Exactly the same objections as with the Copenhagen interpretation.

3: Hidden variables

Quantum mechanics would be incomplete. It is assumed that the quantum object always exists in material reality, but that we do not yet know the variables that would exactly describe its trajectory.

Criticism: The Bell tests have repeatedly shown that at least faster-than-light communication has to occur between entangled quantum objects. That is in direct conflict with Einstein's relativity laws. Furthermore, the sophisticated nature of that supposed communication between quantum objects is still completely unknown. That quantum mechanics should be incomplete is also remarkable in view of its resounding success.

4: Multiverse

Everything that is possible also really happens. In this way we are rid of the wave of probabilities that is supposed to collapse into the measured object by a measurement. With every possible outcome of any event, such as the decay of a single radioactive atom, the physical universe splits into multiple physical universes, each containing a different possible outcome. A popular, however rather anthropocentric, variant of this is that the splitting into universes only happens with every decision we make, for example whether or not to make a purchase.

Criticism: This hypothesis can neither be proven nor falsified. Assuming (1) the information content of the universe as estimated by Seth Lloyd, 10^90 bits, and (2) the smallest unit of time, the Planck time, of 5.4 × 10⁻⁴⁴ seconds, I arrive at an absurdly large number of split-off universes per second.
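As a rough, back-of-the-envelope reading of that estimate (assuming each of the 10^90 bits can take a new value once per Planck time, and each alternative spawns a branch), the branching rate would be at least

10^90 bits ÷ (5.4 × 10⁻⁴⁴ s) ≈ 2 × 10^133 branch events per second.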
Furthermore, the recent findings in quantum biology (the still unexplained efficiency of quantum phenomena in plants and animals) are a strong argument against these splits. See elsewhere on this site: Multiverse hypothesis disproved by quantum biology.

5: Super-selection

In super-selection it is assumed that superpositions of macroscopically different states do not occur, just as nature does not allow superpositions of different charges. Nature would then not allow a superposition of the quantum state of the quantum particle with the quantum state of the macroscopic measuring instrument.

Criticism: That nature does not allow something is not an explanation of the phenomenon in question but an admission of not understanding it. Here the workings of nature seem to be considered as an inherently closed box.

6: The Matrix

We are logged into a digital virtual-reality world simulated in a computer of cosmic proportions. The laws of nature are nothing else but the rules of the software. This cosmic computer exists of course outside the physical universe, so it has to be a metaphysical computer. Logging out means dying (most of the time). This hypothesis provides an explanation for the digital nature of the quantum phenomena.

Criticism: It is unclear how we log in. Furthermore, this only moves the question about the nature of reality to the nature of the metaphysical reality in which the cosmic computer exists. On the other hand, this is what physics always does, moving the question to the next perspective. Nothing really wrong with that.

7: Spontaneous collapse

The Schrödinger equation, from which the quantum wave results, is extended with a new natural constant that makes every physical system spontaneously collapse the quantum state wave at a rate depending on the system's size. The larger the system, the faster the collapse. This effect could be measured by experiments with measuring instruments of increasing size. That is why this hypothesis seems falsifiable, for which it deserves to be tested.

Criticism: No system can exist absolutely separately from the rest of the world. A measuring instrument is not hanging in a vacuum insulated from the rest of the universe, but is attached to the laboratory table, to the floor, to the building, etc.

8: No collapse

The universe is one big entangled quantum wave that never collapses. The universe is therefore one large network of all possibilities in which every observer travels a possible trajectory and thus records his own history. This resembles the multiverse hypothesis of Sean Carroll rather closely. In this interpretation, however, there is no physical reality, only perceiving consciousness and a non-physical wave of chance.

Criticism: It is unclear how the interaction of the consciousness of the observer with that universal quantum wave takes place. Furthermore, it is a problem how two observers can agree on their observations. See Eugene Wigner.

9: Projection postulate of John von Neumann

Measuring instruments must also comply with the quantum mechanical laws, since they are composed of quantum objects: electrons, protons, neutrons, photons, etc. This implies that the measuring instrument should still be in the quantum wave state after the measurement. The instrument becomes entangled with the quantum object. This develops into a chain of expanding quantum wave states. The human observer is the last in that chain of quantum states, but since man is ultimately also made up of quantum objects, his physical body cannot cause the quantum collapse.
His body will be in a quantum wave state entangled with the total measurement chain. In fact, nothing that is in the physical material domain can collapse the wave state, and therefore the quantum collapse should have a non-physical cause. The obvious candidate is our non-physical consciousness.

Criticism: Two non-red objects do not necessarily have the same color. By the same logic, the non-physical cause of the non-physical quantum collapse is not necessarily connected to the assumed non-physical consciousness. The deep question is whether something non-physical can have an effect on the physical, since the non-physical must then have some physical components. Finally, the problem of several observers also arises here. See Eugene Wigner.

Read here on Quanta Magazine about a proposed – but not yet conducted – experiment that should provide clarity about the different hypotheses.

In short: so many minds, so many ideas. All these interpretations are still alive, and the measurement problem is therefore still not satisfactorily resolved. Despite all those different interpretations, quantum mechanics is the most successful physical theory, if you judge success by its unrivaled accurate predictions. You can therefore permit yourself to ignore this problem in your laboratory. But from a philosophical point of view, the measurement problem is definitely an extremely important issue, namely one about the nature of reality. We can try to formulate a hypothesis that contains and explains all phenomena. To do this we first need to formulate a strict set of principles. Proceed to the next page.
"Locality and digital quantum simulation of power-law interactions" — Minh Cong Tran, Andrew Y. Guo, Yuan Su, James R. Garrison, Zachary Eldredge, Michael Foss-Feig, Andrew M. Childs, Alexey V. Gorshkov (2019), vol. 9. https://arxiv.org/abs/1808.05225
Abstract: The propagation of information in non-relativistic quantum systems obeys a speed limit known as a Lieb-Robinson bound. We derive a new Lieb-Robinson bound for systems with interactions that decay with distance r as a power law, 1/r^α. The bound implies an effective light cone tighter than all previous bounds. Our approach is based on a technique for approximating the time evolution of a system, which was first introduced as part of a quantum simulation algorithm by Haah et al. [arXiv:1801.03922]. To bound the error of the approximation, we use a known Lieb-Robinson bound that is weaker than the bound we establish. This result brings the analysis full circle, suggesting a deep connection between Lieb-Robinson bounds and digital quantum simulation. In addition to the new Lieb-Robinson bound, our analysis also gives an error bound for the Haah et al. quantum simulation algorithm when used to simulate power-law decaying interactions. In particular, we show that the gate count of the algorithm scales with the system size better than existing algorithms when α > 3D (where D is the number of dimensions).

"Nearly optimal lattice simulation by product formulas" — Andrew M. Childs, Yuan Su (2019). https://arxiv.org/abs/1901.00564
Abstract: Product formulas provide a straightforward yet surprisingly efficient approach to quantum simulation. We show that this algorithm can simulate an n-qubit Hamiltonian with nearest-neighbor interactions evolving for time t using only (nt)^(1+o(1)) gates. While it is reasonable to expect this complexity (in particular, this was claimed without rigorous justification by Jordan, Lee, and Preskill), we are not aware of a straightforward proof. Our approach is based on an analysis of the local error structure of product formulas, as introduced by Descombes and Thalhammer and significantly simplified here. We prove error bounds for canonical product formulas, which include well-known constructions such as the Lie-Trotter-Suzuki formulas. We also develop a local error representation for time-dependent Hamiltonian simulation, and we discuss generalizations to periodic boundary conditions, constant-range interactions, and higher dimensions. Combined with a previous lower bound, our result implies that product formulas can simulate lattice Hamiltonians with nearly optimal gate complexity.

"Quantifying the magic of quantum channels" — Xin Wang, Mark M. Wilde, Yuan Su (2019). https://arxiv.org/abs/1903.04483
Abstract: To achieve universal quantum computation via general fault-tolerant schemes, stabilizer operations must be supplemented with other non-stabilizer quantum resources. Motivated by this necessity, we develop a resource theory for magic quantum channels to characterize and quantify the quantum "magic" or non-stabilizerness of noisy quantum circuits. For qudit quantum computing with odd dimension d, it is known that quantum states with non-negative Wigner function can be efficiently simulated classically. First, inspired by this observation, we introduce a resource theory based on completely positive-Wigner-preserving quantum operations as free operations, and we show that they can be efficiently simulated via a classical algorithm. Second, we introduce two efficiently computable magic measures for quantum channels, called the mana and thauma of a quantum channel. As applications, we show that these measures not only provide fundamental limits on the distillable magic of quantum channels, but they also lead to lower bounds for the task of synthesizing non-Clifford gates. Third, we propose a classical algorithm for simulating noisy quantum circuits, whose sample complexity can be quantified by the mana of a quantum channel. We further show that this algorithm can outperform another approach for simulating noisy quantum circuits, based on channel robustness. Finally, we explore the threshold of non-stabilizerness for basic quantum circuits under depolarizing noise.

"Time-dependent Hamiltonian simulation with L¹-norm scaling" — Dominic W. Berry, Andrew M. Childs, Yuan Su, Xin Wang, Nathan Wiebe (2019). https://arxiv.org/abs/1906.07115
Abstract: The difficulty of simulating quantum dynamics depends on the norm of the Hamiltonian. When the Hamiltonian varies with time, the simulation complexity should only depend on this quantity instantaneously. We develop quantum simulation algorithms that exploit this intuition. For the case of sparse Hamiltonian simulation, the gate complexity scales with the L¹ norm ∫₀ᵗ dτ ‖H(τ)‖_max, whereas the best previous results scale with t · max_{τ∈[0,t]} ‖H(τ)‖_max. We also show analogous results for Hamiltonians that are linear combinations of unitaries. Our approaches thus provide an improvement over previous simulation algorithms that can be substantial when the Hamiltonian varies significantly. We introduce two new techniques: a classical sampler of time-dependent Hamiltonians and a rescaling principle for the Schrödinger equation. The rescaled Dyson-series algorithm is nearly optimal with respect to all parameters of interest, whereas the sampling-based approach is easier to realize for near-term simulation. By leveraging the L¹-norm information, we obtain polynomial speedups for semi-classical simulations of scattering processes in quantum chemistry.

"Approximate Quantum Fourier Transform with O(n log n) T gates" — Yunseong Nam, Yuan Su, Dmitri Maslov (2018). https://arxiv.org/abs/1803.04933
Abstract: The ability to implement the Quantum Fourier Transform (QFT) efficiently on a quantum computer enables the advantages offered by a variety of fundamental quantum algorithms, such as those for integer factoring, computing discrete logarithms over Abelian groups, and phase estimation. The standard fault-tolerant implementation of an n-qubit QFT approximates the desired transformation by removing small-angle controlled rotations and synthesizing the remaining ones into Clifford+T gates, incurring a T-count complexity of O(n log² n). In this paper we show how to obtain an approximate QFT with a T-count of O(n log n). Our approach relies on quantum circuits with measurements and feedforward, and on reusing a special quantum state that induces the phase gradient transformation. We report asymptotic analysis as well as concrete circuits, demonstrating significant advantages in both theory and practice.

"Automated optimization of large quantum circuits with continuous parameters" — Yunseong Nam, Neil J. Ross, Yuan Su, Andrew M. Childs, Dmitri Maslov (2017), vol. 4. https://arxiv.org/abs/1710.07345
Abstract: We develop and implement automated methods for optimizing quantum circuits of the size and type expected in quantum computations that outperform classical computers. We show how to handle continuous gate parameters and report a collection of fast algorithms capable of optimizing large-scale quantum circuits. For the suite of benchmarks considered, we obtain substantial reductions in gate counts. In particular, we provide better optimization in significantly less time than previous approaches, while making minimal structural changes so as to preserve the basic layout of the underlying quantum algorithms. Our results help bridge the gap between the computations that can be run on existing hardware and those that are expected to outperform classical computers.

"Efficiently computable bounds for magic state distillation" — Xin Wang, Mark M. Wilde, Yuan Su (2018). https://arxiv.org/abs/1812.10145
Abstract: Magic state manipulation is a crucial component in the leading approaches to realizing scalable, fault-tolerant, and universal quantum computation. Related to magic state manipulation is the resource theory of magic states, for which one of the goals is to characterize and quantify quantum "magic." In this paper, we introduce the family of thauma measures to quantify the amount of magic in a quantum state, and we exploit this family of measures to address several open questions in the resource theory of magic states. As a first application, we use the min-thauma to bound the regularized relative entropy of magic. As a consequence of this bound, we find that two classes of states with maximal mana, a previously established magic measure, cannot be interconverted in the asymptotic regime at a rate equal to one. This result resolves a basic question in the resource theory of magic states and reveals a fundamental difference between the resource theory of magic states and other resource theories such as entanglement and coherence. As a second application, we establish the hypothesis-testing thauma as an efficiently computable benchmark for the one-shot distillable magic, which in turn leads to a variety of bounds on the rate at which magic can be distilled, as well as on the overhead of magic state distillation. Finally, we prove that the max-thauma can outperform mana in benchmarking the efficiency of magic state distillation.

"Faster quantum simulation by randomization" — Andrew M. Childs, Aaron Ostrander, Yuan Su (2018). https://arxiv.org/abs/1805.08385
Abstract: Product formulas can be used to simulate Hamiltonian dynamics on a quantum computer by approximating the exponential of a sum of operators by a product of exponentials of the individual summands. This approach is both straightforward and surprisingly efficient. We show that by simply randomizing how the summands are ordered, one can prove stronger bounds on the quality of approximation and thereby give more efficient simulations. Indeed, we show that these bounds can be asymptotically better than previous bounds that exploit commutation between the summands, despite using much less information about the structure of the Hamiltonian. Numerical evidence suggests that our randomized algorithm may be advantageous even for near-term quantum simulation.

"Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics" — András Gilyén, Yuan Su, Guang Hao Low, Nathan Wiebe (2018). https://arxiv.org/abs/1806.01838
Abstract: Quantum computing is powerful because unitary operators describing the time-evolution of a quantum system have exponential size in terms of the number of qubits present in the system. We develop a new "singular value transformation" algorithm capable of harnessing this exponential advantage, that can apply polynomial transformations to the singular values of a block of a unitary, generalizing the optimal Hamiltonian simulation results of Low and Chuang. The proposed quantum circuits have a very simple structure, often give rise to optimal algorithms and have appealing constant factors, while usually only using a constant number of ancilla qubits. We show that singular value transformation leads to novel algorithms. We give an efficient solution to a certain "non-commutative" measurement problem and propose a new method for singular value estimation. We also show how to exponentially improve the complexity of implementing fractional queries to unitaries with a gapped spectrum. Finally, as a quantum machine learning application we show how to efficiently implement principal component regression. "Singular value transformation" is conceptually simple and efficient, and leads to a unified framework of quantum algorithms incorporating a variety of quantum speed-ups. We illustrate this by showing how it generalizes a number of prominent quantum algorithms, including: optimal Hamiltonian simulation, implementing the Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude amplification, robust oblivious amplitude amplification, fast QMA amplification, fast quantum OR lemma, certain quantum walk results and several quantum machine learning algorithms. In order to exploit the strengths of the presented method it is useful to know its limitations too; therefore we also prove a lower bound on the efficiency of singular value transformation, which often gives optimal bounds.

"Time-reversal of rank-one quantum strategy functions" — Yuan Su, John Watrous (2018), vol. 2. https://arxiv.org/abs/1801.08491
Abstract: The quantum strategy (or quantum combs) framework is a useful tool for reasoning about interactions among entities that process and exchange quantum information over the course of multiple turns. We prove a time-reversal property for a class of linear functions, defined on quantum strategy representations within this framework, that corresponds to the set of rank-one positive semidefinite operators on a certain space. This time-reversal property states that the maximum value obtained by such a function over all valid quantum strategies is also obtained when the direction of time for the function is reversed, despite the fact that the strategies themselves are generally not time-reversible. An application of this fact is an alternative proof of a known relationship between the conditional min- and max-entropy of bipartite quantum states, along with generalizations of this relationship.

"Toward the first quantum simulation with quantum speedup" — Andrew M. Childs, Dmitri Maslov, Yunseong Nam, Neil J. Ross, Yuan Su (2018), vol. 115, pp. 9456–9461. https://arxiv.org/abs/1711.10980

"Extreme learning machines for regression based on V-matrix method" — Zhiyong Yang, Taohong Zhang, Jingcheng Lu, Yuan Su, Dezheng Zhang, Yaowu Duan (2017). ISSN 1871-4099. http://dx.doi.org/10.1007/s11571-017-9444-2
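Several of the entries above build on the product-formula idea of approximating the exponential of a sum of operators by a product of exponentials of the summands. A toy numerical illustration in Python (the two-term single-qubit Hamiltonian X + Z and the step counts are arbitrary choices for this sketch, not taken from the papers):

import numpy as np
from scipy.linalg import expm

# Toy two-term Hamiltonian H = X + Z on a single qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
t = 1.0

exact = expm(-1j * (X + Z) * t)
for r in (1, 10, 100, 1000):
    # First-order Lie-Trotter formula: (e^{-iXt/r} e^{-iZt/r})^r.
    step = expm(-1j * X * t / r) @ expm(-1j * Z * t / r)
    approx = np.linalg.matrix_power(step, r)
    print(f"r = {r:5d}   spectral-norm error = "
          f"{np.linalg.norm(approx - exact, 2):.2e}")

The error shrinks roughly as O(1/r), the first-order behaviour that the higher-order and randomized formulas studied in the papers above improve upon.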
Spring 2019 Seminars and Abstracts

Seminars take place on Monday afternoons 14:00–15:00, unless specified otherwise. Everyone is welcome.

February 11, (TEC 1.06), Dr Toby Wood (University of Newcastle), Superfluid vortices and fluxtubes in a neutron star
February 25, (JSC 2.02), Prof Giles Thomas (University College London), Slamming behavior of large high speed catamarans in large waves
March 4, (MED 2.02), Dr Anita Faul (University of Cambridge), The model is simple until proven otherwise
March 11, (TEC 1.06), Prof Robb MacDonald (University College London), Application of complex analysis to the geometry of river valleys and networks
March 18, (TEC 1.06), Prof Ted Johnson (University College London), The long-wave vorticity dynamics of rotating buoyant outflows
March 25, (SCI 1.20), Dr Matthew Moore (University of Oxford), Moving beyond Wagner: applying classical impact theory to droplet impact problems

Dr Toby Wood (February 11): Neutron stars contain the highest densities and strongest magnetic fields found anywhere in the Universe, so they present a fantastic opportunity to test our understanding of the laws of physics under extreme conditions. Throughout most of the star, the neutrons and protons are condensed into superfluids, which are coupled through mutual entrainment and electromagnetic forces. We present a microscale MHD model for these fluids, using the Gross-Pitaevskii/Ginzburg-Landau framework and employing (quasi)periodic boundary conditions. We study the onset of superconductivity near the critical temperature, and the transition between type-I and type-II superconductivity, demonstrating an intermediate "type-1.5" regime of fluxtube bunches.

Prof Giles Thomas (February 25): When large high-speed catamarans (such as the one pictured) operate in waves, they can be exposed to large wave loads, particularly from slamming events – an impact that occurs when the bow enters the water at high relative velocity. These loads can cause damage to the hull by generating large bending moments, as well as producing a dynamic hydroelastic response, known as whipping, which can cause fatigue issues. This presentation will outline work using full-scale measurements, model experiments and numerical techniques to better understand the behaviour of these vessels in waves. The work has led to improvements in the design of the vessels to reduce the magnitude of the loads, and to better structural configurations to avoid damage occurrence.

Dr Anita Faul (March 4): Machine Learning and AI have enjoyed an unprecedented rise in popularity. In academia as well as industry, they are often viewed as the future solution to all problems. However, systems have become so complex that it is no longer humanly comprehensible how an algorithm arrives at an answer. In some cases, companies refuse to disclose the proprietary algorithm. This has led to controversies such as the COMPAS algorithm giving scores on the likelihood to reoffend. The organisation ProPublica claims that the software exhibits racial bias [1], which the company disputes [2]. Another example is Amazon's gender-biased recruitment tool [3]. Partly to blame is the data used to train algorithms. If the data is biased, then the algorithm will be. More seriously, it might exacerbate the bias, since algorithms distill the essential distinguishing features. If these are then highly correlated with black - white, male - female, we have a problem.
While humans can also have bias, they are also capable of realizing their world view is too simplistic. The talk presents work in progress on increasing the complexity of a model if the data suggests more features are necessary to model the data. This approach helps to understand the "black magic" inside the "black box".

[1] www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
[2] go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf
[3] www.bbc.co.uk/news/technology-45809919

Prof Robb MacDonald (March 11): Valleys and networks shaped by groundwater-fed streams exhibit remarkable geometric properties, two of which are considered in this talk: (i) the finger-like shape of an individual valley is modelled by a diffusive free boundary whose solution gives the 3D structure of the valley; (ii) the 2D bifurcation dynamics of evolving streams is modelled analytically by Loewner's equation and numerically using Matlab's Schwarz-Christoffel toolbox. Comparison to field data is made in both (i) and (ii).

Prof Ted Johnson (March 18): This talk discusses the evolution of coastal currents by considering, relative to a rotating frame, the flow development when buoyant fluid is injected into a quiescent fluid bounded by a solid wall. The initial rapid response is determined by the Coriolis force–pressure gradient balance, with a Kelvin wave propagating rapidly, at the long-wave speed, with the bounding wall to its right (for positive rotation). However, fluid columns can stretch or squash on ejection from coastal outflows, so that the ejected fluid gains positive or negative relative vorticity. Depending on its sign, the image in the solid wall of this vorticity can reinforce or oppose the zero potential-vorticity-anomaly (PVa) current set up by the Kelvin wave (KW). This talk presents a simple, fully nonlinear, dispersive, quasi-geostrophic model to discuss the form of coastal outflows as the relative strength of vortex to KW driving is varied. The model retains sufficient physics to capture both effects at finite amplitude, and thus the essential nonlinearity of the flow, but is sufficiently simple to allow highly accurate numerical integration of the full problem and also explicit, fully nonlinear solutions for the evolution of a uniform-PVa outflow in the hydraulic limit. Outflow evolutions are shown to depend strongly on the sign of the PVa of the expelled fluid, which determines whether the vortex and KW driving are reinforcing or opposing, and on the ratio of the internal Rossby radius to the vortex-source scale, $|V/D^2 \Pi|^{1/2}$, of the flow (where $D$ measures the outflow depth, $\Pi$ the PVa of the outflow, and $V$ the volume flux of the outflow), which measures the relative strengths of the two drivers. Comparison of the explicit hydraulic solutions with the numerical integrations shows that the analytical solutions predict the flow development well, with differences ascribable to dispersive Rossby waves on the current boundary and to changes in the source region captured by the full equations but not present in the hydraulic solutions. The analytical results are then extended to Rossby numbers of order unity in the long-wave, reduced-gravity, shallow-water equations: the semi-geostrophic equations.

Dr Matthew Moore (March 25): Impact problems have a wide range of applications throughout real-world phenomena and industrial processes, ranging from ship-slamming to inkjet printing and coating processes.
With recent advances in camera technology, visualisation techniques and high-powered computing, we are increasingly able to see more of the phenomena occurring at small length- and timescales in droplet impacts. Understanding these phenomena can be vital in both promoting and inhibiting splashing. In this talk, we will discuss the role that classical Wagner theory, developed for applications to naval architecture in ship-slamming, can play in the much smaller-scale problem of droplet impact. Wagner theory was developed to help predict pressure profiles on the hulls of ships and sea-planes, so much of the finer detail of the splash itself was ignored. However, it is this very splash that is of importance in many droplet impact scenarios. Since impact problems are highly nonlinear and complex, it is desirable to use modelling to help predict certain properties, such as the location of the root of the splash jet (or ejecta), enabling us to focus our numerical or experimental investigations on a subset of the full impact problem. Therefore, we will develop a droplet Wagner model and compare the predictions to direct numerical simulations in Gerris, highlighting what the theory does well and where it is lacking. In the latter cases, we will discuss possible extensions to the theory to improve the predictions.

Autumn 2018 Seminars and Abstracts

Seminars take place on Monday afternoons 14:00–15:00, unless specified otherwise. Everyone is welcome.

October 8, (MED 1.02), Dr Anna Kalogirou (UEA), Linear stability and nonlinear dynamics of two-layer flow in the presence of surfactants
October 15, (Queens 0.09), Dr Pavel Berloff (Imperial College London), Dynamically Consistent Parameterization of Mesoscale Eddies
October 22, (Queens 2.22), Dr Raphael Stuhlmeier (University of Plymouth), (In)stability and evolution of inhomogeneous, broad-banded seas
October 29, (Queens 1.04), Prof Grae Worster (University of Cambridge), The Dynamics of Marine Ice Sheets
November 2, 4pm, (SCI 1.20), Dr Hiromitsu Takeuchi (Osaka City University), Stability of a vortex with winding number two of the nonlinear Schrödinger equation for Bose-Einstein condensates
November 5, (Queens 2.22), Prof Robin Cleveland (University of Oxford), Planning ultrasound surgery with 3D patient-specific models
November 12, (MED 1.02), Dr Helen Burgess (University of St Andrews), Long frontal waves and dynamic scaling in freely evolving equivalent barotropic flow
November 19, (Queens 0.08), Prof Kostas Belibassakis (National Technical University of Athens), Wave propagation in nearshore/coastal environment by coupled-mode models (problems and applications)
November 19, 4pm, Prof Larry Forbes (University of Tasmania, Australia), Unstable Interfaces and Plumes
November 26, (ZICER 0.02), Dr David Lloyd (University of Surrey), Localised patterns and invasion fronts on the surface of a ferrofluid
December 10, (Queens 0.08), Dr Giovanni Barontini (University of Birmingham), Multi-component Bose-Einstein condensates: from neutron stars to ultracold engines

Dr Anna Kalogirou (October 8): A two-fluid shear flow in the presence of surfactants is considered. The flow configuration comprises two superposed layers of viscous and immiscible fluids confined in a long horizontal channel. The two fluids can in general have different densities, viscosities and thicknesses. The surfactants can be insoluble, i.e. located at the interface between the two fluids only, or soluble in the lower fluid.
A primary aim of this study is to investigate the effect of surfactants on the stability of the interface, and in particular surfactants in high concentrations and above the critical micelle concentration (cmc). An asymptotic model valid in the approximation of a thin fluid layer is also derived, comprising a set of nonlinear PDEs to describe the evolution of the film thickness and surfactant concentration. Interfacial instabilities are induced due to the acting forces of gravity and inertia, as well as the action of Marangoni forces generated as a result of the dependence of surface tension on the interfacial surfactant concentration. The underlying physical mechanism responsible for the formation of interfacial waves will be discussed, together with the complex flow dynamics (typical nonlinear phenomena associated with thin-film flows include travelling waves, solitary pulses, quasi-periodic and chaotic dynamics).

Dr Pavel Berloff (October 15): This work aims at developing a new approach for parameterizing mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. These effects are often modelled as some diffusion process or a stochastic forcing, and the proposed approach is implicitly related to the latter category. The idea is to approximate transient eddy flux divergence in a simple way, to find its actual dynamical footprints by solving a simplified but dynamically relevant problem, and to relate the ensemble of footprints to the large-scale flow properties.

Dr Raphael Stuhlmeier (October 22): Nonlinear interaction, along with wind input and dissipation, is one of the three mechanisms which drive wave evolution, and is included in every modern wave-forecast model. The mechanism behind the nonlinear interaction terms in such models is based on the kinetic equation for wave spectra derived by Hasselmann. This does not allow, for example, for statistically inhomogeneous wave fields, nor for the modulational instability which depends on such inhomogeneity, and which has been implicated in the appearance of exceptionally high rogue waves. Beginning with the basics of third-order wave theory, we sketch the derivation of a discretized equation for the evolution of random, inhomogeneous surface wave fields on deep water from Zakharov's equation, along lines first laid out by Crawford, Saffman, and Yuen. This allows for a general treatment of the stability and long-time behaviour of broad-banded sea states. It is investigated for the simple case of degenerate four-wave interaction, and the instability of statistically homogeneous states to small inhomogeneous disturbances is demonstrated. Furthermore, the long-time evolution is studied for several cases and shown to lead to a complex spatio-temporal energy distribution. The possible impact of this evolution on the statistics of rogue wave occurrence is explored within the framework of this simplified example.

Prof Grae Worster (October 29): Most of the West Antarctic Ice Sheet (WAIS) sits on bedrock that is one to two kilometres below sea level. Its weight causes the ice sheet to flow outwards towards the ocean, thinning as it goes until it is thin enough to float on the ocean as an ice shelf before it ultimately breaks up into icebergs. Some areas of the WAIS have been accelerating in recent years, as the point at which the sheet begins to float recedes, and this contributes to the rise in global sea level.
I shall describe some recent analogue laboratory experiments and associated mathematical models that describe and quantify the fundamental dynamical controls on ice sheets that terminate in the ocean, focusing particularly on the role that floating ice shelves play in buttressing the ice sheet against collapse.

Dr Hiromitsu Takeuchi (November 2): The stability of a doubly quantized vortex (DQV), a vortex with winding number two, in a uniform system is a crucial problem in low temperature physics. If a DQV could be stable, much of the literature in the long history of research on superfluid systems would need to be re-examined, since it assumes that a multiply quantized vortex is unstable there. In this work, we revisit this fundamental problem of the stability of a DQV in uniform single-component Bose-Einstein condensates at zero temperature [1]. To reveal the stability, the system-size dependence of the excitation frequency of the system with a DQV was analyzed through large-scale simulations of the Bogoliubov-de Gennes equation. We found that the system remains dynamically unstable even in the infinite-system-size limit. The system-size dependence is characterized by perturbation theory, based on the theory of Hamiltonian dynamical systems [2], and by semi-classical theories based on the WKB approximation, extended to the case of complex eigen-energy. [1] Hiromitsu Takeuchi, Michikazu Kobayashi, and Kenichi Kasamatsu, Is a Doubly Quantized Vortex Dynamically Unstable in Uniform Superfluids?, Journal of the Physical Society of Japan 87, 023601 (2018); arXiv:1710.10810 [2] R. S. MacKay, in Hamiltonian Dynamical Systems, ed. R. S. MacKay and J. D. Meiss (Adam Hilger, Bristol, U.K., 1987) p. 137.

Prof Robin Cleveland (November 5): High intensity focused ultrasound has been used clinically to thermally ablate tissue, for example destroying cancer tumours, and to mechanically fractionate tissue, for example enlarging the urethra in the prostate. In order to focus the ultrasound it is normally assumed that the sound speed in soft tissue is uniform, and so treatment is mostly limited to targets with soft-tissue paths. In reality tissue has a range of different sound speeds, with fat typically 100 m/s slower than other soft tissue. This can affect the ability to focus accurately. Here two applications are considered using realistic 3D patient models derived from CT data. The first is thermal ablation of the kidney, where it is shown that fat layers in the path can result in fragmentation of the focus. It is shown that if the phase aberration can be accounted for, then it is possible to recover a tight focus. In the second application it is shown that by using an array it is possible to focus ultrasound to the centre of a vertebral disc, despite the presence of the bone structures. A paradigm for mechanically fractionating tissue in the disc, employing cavitation nucleation agents, is described. These examples demonstrate how patient-specific models can be employed to improve the performance of high intensity focused ultrasound.

Dr Helen Burgess (November 12): We present a scaling theory that links the frequency of long frontal waves to the decay rate of kinetic energy and inverse transfer of potential energy in freely evolving equivalent barotropic turbulence. The flow is initialised with a potential vorticity field whose characteristic length scale is LD, the Rossby radius of deformation.
As the turbulence evolves, fronts of width O(LD) emerge, bounding large vortices within which potential vorticity is well-mixed and arranged into a staircase structure. The jets collocated with the fronts support long wave undulations with wavelengths >> LD and time scales fast relative to the time scale of the flow evolution. These undulations facilitate collisions and mergers between the vortices, implicating the frontal dynamics in the growth of potential-energy-containing flow features. Mergers generate disturbances with radius of curvature O(LD), which then propagate along the jets, causing them to shed filaments of kinetic energy and smooth out. A decay law for the total frontal length LF(t) ~ t^(-1/3) follows from assuming self-similar vortex growth and using the dispersion relation for long frontal waves [1]. High resolution simulations show that kinetic energy, potential enstrophy, and enstrophy, which are concentrated along the fronts and proportional to LD*LF(t), decay like t^(-1/3). Interestingly, this is the same decay law followed by enstrophy in the vortex populations of freely evolving barotropic turbulence [2, 3]. [1] J. Nycander, D. G. Dritschel, and G. G. Sutyrin. The dynamics of long frontal waves in the shallow-water equations. Phys. Fluids A, 5:1089-1091, 1993. [2] D. G. Dritschel, R. K. Scott, C. Macaskill, G. A. Gottwald, and C. V. Tran. Unifying scaling theory for vortex dynamics in two-dimensional turbulence. Phys. Rev. Lett., 101:094501, 2008. [3] B. H. Burgess, D. G. Dritschel, and R. K. Scott. Extended scale invariance in the vortices of freely evolving two-dimensional turbulence. Phys. Rev. Fluids, 2:114702, 2017.

Prof Kostas Bellibasakis (November 19): Results from the development and application of coupled-mode models to predict propagation of water waves in nearshore/coastal environment with variable bathymetry and other inhomogeneities are presented, including interaction with marine structures. This method models reflection, refraction, diffraction and dispersion phenomena, without introducing mild-slope type assumptions. The theory is based on an improved representation of the field in a series of local vertical modes, enhanced by appropriate terms to satisfy boundary conditions on the free surface and the sloping bottom. The additional modes significantly accelerate the convergence of the modal expansion and make the method suitable for horizontally large-scale applications. Next, with the aid of variational principles, the problem of propagation and interaction in non-homogeneous environment is reformulated as a system of partial differential equations on the horizontal plane, having the property to reduce to mild-slope type models in subregions where bathymetry and other parameters are slowly varying, saving computational cost. Various examples are presented and discussed demonstrating the applicability of the present method, including effects of waves on marine floating or fixed structures in an environment characterized by variable bottom topography.

Prof Larry Forbes (November 19): In many applications involving waves at the surface of a fluid, it is often sufficient just to consider steady-state situations, where the wave pattern does not change noticeably with time. Waves behind moving ships are one such example. There is an enormous literature on such situations, and at least for two-dimensional flow, steady waves can be computed reasonably accurately.
As computers have developed, it has now become possible to look at 2D and even 3D unsteady problems, where the fluid interface evolves with time. These unsteady flows have some surprising behaviour, both analytically and numerically. If fluid viscosity is ignored, it is now known that classical flows such as the Rayleigh-Taylor instability fail within finite time, when the curvature of the free surface becomes infinite at certain points. This appears to be a common feature in unsteady inviscid flows. In other geometries, such as the initially spherical outflow from a source, unsteady effects can lead to the surprising result that the lowest mode is the most unstable, so that a one-sided outflow jet evolves. This talk will consider some examples of unsteady fluid flow, characterized by the presence of an unstable interface. We will discuss how fluid viscosity and interface thickness change the singular behaviour predicted by (non-linear) inviscid theory, in some surprisingly subtle ways.

Dr David Lloyd (November 26): In this talk, I will give an overview of work (both mathematical and experimental) on localised pattern formation observed in experiments on the surface of a magnetisable fluid. I will also present on-going work looking at proving the existence of radial spots and invasion fronts where cellular hexagon spikes are left behind in the wake of the front.

Dr Giovanni Barontini (December 10): I will present our latest results on the thermodynamics of spin-1 polar Bose-Einstein Condensates, including evidence of the observation of 2-photon Feshbach resonances, and our progress towards the realization of a "quantum printer" that will allow us to simulate high-energy physics phenomena. In particular, I will present our recent work aiming at unveiling the Andreev-Bashkin effect in superfluid mixtures. In the final part of the talk I will present the progress of our project for the realization of quantum engines using ultracold atomic mixtures.

For further details about the seminars, or to join our mailing list, please contact Anna Kalogirou.
Is the double slit experiment evidence that consciousness causes collapse?

No! No no no. This might be surprising to those that know the basics of the double slit experiment. For those that don't, very briefly: A bunch of tiny particles are thrown one by one at a barrier with two thin slits in it, with a detector sitting on the other side. The pattern on the detector formed by the particles is an interference pattern, which appears to imply that each particle went through both slits in some sense, like a wave would do. Now, if you peek really closely at each slit to see which one each particle passes through, the results seem to change! The pattern on the detector is no longer an interference pattern, but instead looks like the pattern you'd classically expect from a particle passing through only one slit!

When you first learn about this strange dependence of the experimental results on, apparently, whether you're looking at the system or not, it appears to be good evidence that your conscious observation is significant in some very deep sense. After all, observation appears to lead to fundamentally different behavior, collapsing the wave to a particle! Right?? This animation does a good job of explaining the experiment in a way that really pumps the intuition that consciousness matters: (Fair warning, I find some aspects of this misleading and just plain factually wrong. I'm linking to it not as an endorsement, but so that you get the intuition behind the arguments I'm responding to in this post.)

The feeling that consciousness is playing an important role here is a fine intuition to have before you dive deep into the details of quantum mechanics. But now consider that the exact same behavior would be produced by a very simple process that is very clearly not a conscious observation. Namely, just put a single spin qubit at one of the slits in such a way that if the particle passes through that slit, it flips the spin upside down. Guess what you get? The exact same results as you got by peeking at the slits. You never need to look at the particle as it travels through the slits to the detector in order to collapse the wave-like behavior. Apparently a single qubit is sufficient to do this!

It turns out that what's really going on here has nothing to do with the collapse of the wave function and everything to do with the phenomenon of decoherence. Decoherence is what happens when a quantum superposition becomes entangled with the degrees of freedom of its environment in such a way that the branches of the superposition end up orthogonal to each other. Interference can only occur between the different branches if they are not orthogonal, which means that decoherence is sufficient to destroy interference effects. This is all stuff that all interpretations of quantum mechanics agree on. Once you know that decoherence destroys interference effects, and also that a conscious observer registering the state of a system is a process that results in extremely rapid and total decoherence (which everybody also agrees on), then the fact that observing the position of the particle causes interference effects to vanish becomes totally independent of the question of what causes wave function collapse.
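To make this concrete, here is a minimal numerical sketch of the bookkeeping (my own illustration, not from the original post; the slit spacing d, wavenumber k, and screen geometry are idealized placeholders). The which-path marker enters only through the overlap eps between the two possible marker/environment states: eps = 1 means no record was made, eps = 0 means a perfect record, like the flipped qubit.

```python
import numpy as np

# Far-field double-slit pattern with a which-path marker (illustrative sketch).
# eps is the overlap of the two marker/environment states:
#   eps = 1 -> no which-path record; eps = 0 -> perfect record (e.g. the qubit).
def pattern(x, d=1.0, k=20.0, eps=1.0):
    # Path amplitudes from slit A (at +d/2) and slit B (at -d/2) to screen point x.
    psi_A = np.exp(1j * k * np.sqrt(1 + (x - d / 2) ** 2))
    psi_B = np.exp(1j * k * np.sqrt(1 + (x + d / 2) ** 2))
    # Born rule, with the interference (cross) term scaled by the overlap eps.
    return 0.5 * np.abs(psi_A) ** 2 + 0.5 * np.abs(psi_B) ** 2 \
        + eps * np.real(np.conj(psi_A) * psi_B)

x = np.linspace(-3, 3, 7)
print(np.round(pattern(x, eps=1.0), 3))  # oscillates: interference fringes
print(np.round(pattern(x, eps=0.0), 3))  # flat: the classical two-slit sum
```

Intermediate values of eps give partially washed-out fringes, which is exactly the partial-decoherence regime worked out in the "On decoherence" section below.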
Whether or not consciousness causes collapse is 100% irrelevant to the results of the experiment, because regardless of which of these is true, quantum mechanics tells us to expect observation to result in the loss of interference! This is why whether or not consciousness causes collapse has no real impact on what pattern shows up on the wall. All interpretations of quantum mechanics agree that decoherence is a thing that can happen, and decoherence is all that is required to explain the experimental results. The double slit experiment provides no evidence for consciousness causing collapse, but it also provides no evidence against it. It's just irrelevant to the question! That said, however, given that people often hear the experiment presented in a way that makes it seem like evidence for consciousness causing collapse, hearing that qubits do the same thing should make them update downwards on this theory.

Decoherence is not wave function collapse

In the double slit experiment, particles travelling through a pair of thin slits exhibit wave-like behavior, forming an interference pattern where they land that indicates that the particles in some sense travelled through both slits. Now, suppose that you place a single spin bit at the top slit, which starts off in the state |↑⟩ and flips to |↓⟩ iff a particle travels through the top slit. We fire off a single particle at a time, and then each time swap out that spin bit for a new spin bit that also starts off in the state |↑⟩. This serves as an extremely simple measuring device which encodes the information about which slit each particle went through. Now what will you observe on the screen? It turns out that you'll observe the classically expected distribution, which is a simple average over the two individual possibilities without any interference.

Okay, so what happened? Remember that the first pattern we observed was the result of the particles being in a superposition over the two possible paths, and then interfering with each other on the way to the detector screen. So it looks like simply having one bit of information recording the path of the particle was sufficient to collapse the superposition! But wait! Doesn't this mean that the "consciousness causes collapse" theory is wrong? The spin bit was apparently able to cause collapse all by itself, so assuming that it isn't a conscious system, it looks like consciousness isn't necessary for collapse! Theory disproved! No. As you might be expecting, things are not this simple. For one thing, notice that this would also prove false any other theory of wave function collapse that doesn't allow single bits to cause collapse (including anything about complex systems or macroscopic systems or complex information processing). We should be suspicious of any simple argument that claims to conclusively prove a significant proportion of experts wrong.

To see what's going on here, let's look at what happens if we don't assume that the spin bit causes the wave function to collapse. Instead, we'll just model it as becoming fully entangled with the path of the particle, so that the state evolution over time looks like the following (writing aj and bj for the amplitudes for a particle leaving the top slit |A⟩ or the bottom slit |B⟩ to arrive at position |j⟩ on the screen):

|O⟩|↑⟩ → (1/√2)(|A⟩|↓⟩ + |B⟩|↑⟩) → (1/√2)(Σj aj |j⟩|↓⟩ + Σj bj |j⟩|↑⟩)

Now if we observe the particle's position on the screen, the probability distribution we'll observe is given by the Born rule. Assuming that we don't observe the states of the spin bits, there are now two qualitatively indistinguishable branches of the wave function for each possible position on the screen, one with the spin flipped and one with it unflipped.
This means that the total probability for any given landing position will be given by the sum of the probabilities of each branch:

P(j) = (1/2)|aj|² + (1/2)|bj|²

But hold on! Our final result is identical to the classically expected result! We just get the probability of the particle getting to |j⟩ from |A⟩, multiplied by the probability of being at |A⟩ in the first place (50%), plus the probability of the particle going from |B⟩ to |j⟩ times the same 50% for the particle getting to |B⟩. In other words, our prediction is that we'd observe the classical pattern of a bunch of individual particles, each going through exactly one slit, with 50% going through the top slit and 50% through the bottom. The interference has vanished, even though we never assumed that the wave function collapsed! What this shows is that wave function collapse is not required to get particle-like behavior. All that's necessary is that the different branches of the superposition end up not interfering with each other. And all that's necessary for that is environmental decoherence, which is exactly what we had with the single spin bit! In other words, environmental decoherence is sufficient to produce the same type of behavior that we'd expect from wave function collapse. This is because interference will only occur between non-orthogonal branches of the wave function, and the branches become orthogonal upon decoherence (by definition). A particle can be in a superposition of multiple states but still act as if it has collapsed!

Now, maybe we want to say that the particle's wave function is collapsed when its position is measured by the screen. But this isn't necessary either! You could just say that the detector enters into a superposition and quickly decoheres, such that the different branches of the wave function (one for each possible detector state) very suddenly become orthogonal and can no longer interact. And then you could say that the collapse only really happens once a conscious being observes the detector! Or you could be a Many-Worlder and say that the collapse never happens (although then you'd have to figure out where the probabilities are coming from in the first place). You might be tempted to say at this point: "Well, then all the different theories of wave function collapse are empirically equivalent! At least, the set of theories that say 'wave function collapse = total decoherence + possibly other necessary conditions'. Since total decoherence removes all interference effects, the results of all experiments will be indistinguishable from the results predicted by saying that the wave function collapsed at some point!" But hold on! This is forgetting a crucial fact: decoherence is reversible, while wave function collapse is not!!!

(Figure from doi: 10.1038/srep15330.)

Let's say that you run the same setup as before, with the spin bit recording the information about which slit the particle went through, but then we destroy that information before it interacts with the environment in any way, therefore removing any traces of the measurement. Now the two branches of the wave function have "recohered," meaning that what we'll observe is back to the interference pattern! (There's a VERY IMPORTANT caveat, which is that the time period during which we're destroying the information stored in the spin bit must be before the particle hits the detector screen and the state of the screen couples to its environment, thus decohering with the record of which slit the particle went through).
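Here is a toy version of this recoherence bookkeeping (my own sketch, reusing the idealized slit amplitudes from the earlier snippet; instead of destroying the record mid-flight, it uses the closely related "quantum eraser" move of measuring the marker in a basis that carries no which-path information):

```python
import numpy as np

# Joint state after the marker tags the path: psi_A |down> + psi_B |up>,
# with the 1/sqrt(2) path split absorbed into the amplitudes below.
x = np.linspace(-3, 3, 7)
k, d = 20.0, 1.0
psi_A = np.exp(1j * k * np.sqrt(1 + (x - d / 2) ** 2)) / np.sqrt(2)
psi_B = np.exp(1j * k * np.sqrt(1 + (x + d / 2) ** 2)) / np.sqrt(2)

# Ignore the marker (trace it out): orthogonal tags kill the cross term.
print(np.round(np.abs(psi_A) ** 2 + np.abs(psi_B) ** 2, 3))  # no fringes

# "Erase" the record: measure the marker in the |+>/|-> basis instead.
plus = (psi_A + psi_B) / np.sqrt(2)   # branch where the marker reads |+>
minus = (psi_A - psi_B) / np.sqrt(2)  # branch where the marker reads |->
print(np.round(np.abs(plus) ** 2, 3))   # fringes return (anti-fringes for |->)
print(np.round(np.abs(plus) ** 2 + np.abs(minus) ** 2, 3))  # sums back to flat
```

In this variant the fringes only reappear conditional on the marker outcome, so it can't by itself settle the collapse question; the in-principle test described next is what would do that.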
If you're a collapse purist who says that wave function collapse = total decoherence (i.e. orthogonality of the relevant branches of the wave function), then you'll end up making the wrong prediction! Why? Well, because according to you, the wave function collapsed as soon as the information was recorded, so there was no "other branch of the wave function" to recohere with once the information was destroyed! This has some pretty fantastic implications. Since IN PRINCIPLE even the type of decoherence that occurs when your brain registers an observation is reversible (after all, the Schrödinger equation is reversible), you could IN PRINCIPLE recohere after an observation, allowing the branches of the wave function to interfere with each other again. These are big "in principle"s, which is why I wrote them big. But if you could somehow do this, then the "Consciousness Causes Collapse" theory would give different predictions from Many-Worlds! If your final observation shows evidence of interference, then "consciousness causes collapse" is wrong, since apparently conscious observation is not sufficient to cause the other branches of the wave function to vanish. Otherwise, if you observe the classical pattern, then Many Worlds is wrong, since the observation indicates that the other branches of the wave function were gone for good and couldn't come back to recohere.

This suggests a general way to IN PRINCIPLE test any theory of wave function collapse: Look at processes right beyond the threshold where the theory says wave functions collapse. Then implement whatever is required to reverse the physical process that you say causes collapse, thus recohering the branches of the wave function (if they still exist). Now look to see if any evidence of interference exists. If it does, then the theory is proven wrong. If it doesn't, then it might be correct, and any theory of wave function collapse that demands a more stringent standard for collapse (including Many-Worlds, the most stringent of them all) is proven wrong.

On decoherence

Consider the following simple model of the double-slit experiment: A particle starts out at |O⟩, then evolves via the Schrödinger equation into an equal superposition of being at position |A⟩ (the top slit) and being at position |B⟩ (the bottom slit):

|O⟩ → (1/√2)(|A⟩ + |B⟩)

To figure out what happens next, we need to define what would happen for a particle leaving from each individual slit. In general, we can describe each possibility as a particular superposition over the screen:

|A⟩ → Σj aj |j⟩    and    |B⟩ → Σj bj |j⟩

Since quantum mechanics is linear, the particle that started at |O⟩ will evolve as follows:

|O⟩ → (1/√2) Σj (aj + bj) |j⟩

If we now look at any given position |j⟩ on the screen, the probability of observing the particle at this position can be calculated using the Born rule:

P(j) = (1/2)|aj + bj|² = (1/2)|aj|² + (1/2)|bj|² + (1/2)(aj* bj + aj bj*)

Notice that the first term is what you'd expect to get for the probability of a particle leaving |A⟩ being observed at position |j⟩ and the second term is the probability of a particle from |B⟩ being observed at |j⟩. The final two terms are called interference terms, and they give us the non-classical wave-like behavior that's typical of these double-slit setups. Now, what we just imagined was a very idealized situation in which the only parts of the universe that are relevant to our calculation are the particle, the two slits and the detector. But in reality, as the particle is traveling to the detector, it's likely going to be interacting with the environment.
This interaction is probably going to be slightly different for a particle taking the path through |A⟩ than for a particle taking the path through |B⟩, and these differences end up being immensely important. To capture the effects of the environment in our experimental setup, let's add an "environment" term to all of our states. At time zero, when the particle is at the origin, we'll say that the environment is in some state |ε0⟩. Now, as the particle traverses the path to |A⟩ or to |B⟩, the environment might change slightly, so we need to give two new labels for the state of the environment in each case. |εA⟩ will be our description for the state of the environment that would result if the particle traversed the path from |O⟩ to |A⟩, and |εB⟩ will be the label for the state of the environment resulting from the particle traveling from |O⟩ to |B⟩. Now, to describe our system, we need to take the tensor product of the vector for our particle's state and the vector for the environment's state:

|O⟩|ε0⟩ → (1/√2)(|A⟩|εA⟩ + |B⟩|εB⟩) → (1/√2)(Σj aj |j⟩|εA⟩ + Σj bj |j⟩|εB⟩)

Now, what is the probability of the particle being observed at position j? Well, there are two possible worlds in which the particle is observed at position j; one in which the environment is in state |εA⟩ and the other in which it's in state |εB⟩. So the probability will just be the sum of the probabilities for each of these possibilities:

P(j) = (1/2)|aj|² + (1/2)|bj|² + Re(aj* bj ⟨εA|εB⟩)

This final equation gives us the general answer to the double slit experiment, no matter what the changes to the environment are. Notice that all that is relevant about the environment is the overlap term ⟨εA|εB⟩, which we'll give a special name to:

ε ≡ ⟨εA|εB⟩

This term tells us how different the two possible end states for the environment look. If the overlap is zero, then the two environment states are completely orthogonal (corresponding to perfect decoherence of the initial superposition). If the overlap is one, then the environment states are identical. And look what we get when we express the final probability in terms of this term!

P(j) = (1/2)|aj|² + (1/2)|bj|² + Re(ε · aj* bj)

Perfect decoherence gives us classical probabilities, and perfect coherence gives us the ideal equation we found in the first part of the post! Anything in between allows the two states to interfere with each other to some limited degree, not behaving like totally separate branches of the wavefunction, nor like one single branch.

The problem with the many worlds interpretation of quantum mechanics

The Schrödinger equation is the formula that describes the dynamics of quantum systems – how small stuff behaves. One fundamental feature of quantum mechanics that differentiates it from classical mechanics is the existence of something called superposition. In the same way that a particle can be in the state of "being at position A" and could also be in the state of "being at position B", there's a weird additional possibility that the particle is in the state of "being in a superposition of being at position A and being at position B". It's necessary to introduce a new word for this type of state, since it's not quite like anything we are used to thinking about. Now, people often talk about a particle in a superposition of states as being in both states at once, but this is not technically correct. The behavior of a particle in a superposition of positions is not the behavior you'd expect from a particle that was at both positions at once. Suppose you sent a stream of small particles towards each position and looked to see if either one was deflected by the presence of a particle at that location.
You would always find that exactly one of the streams was deflected. Never would you observe the particle having been in both positions, deflecting both streams. But it's also just as wrong to say that the particle is in either one state or the other. Again, particles simply do not behave this way. Throw a bunch of electrons, one at a time, through a pair of thin slits in a wall and see how they spread out when they hit a screen on the other side. What you'll get is a pattern that is totally inconsistent with the image of the electrons always being either at one location or the other. Instead, the pattern you'd get only makes sense under the assumption that the particle traveled through both slits and then interfered with itself. If a superposition of A and B is not the same as 'A and B' and it's not the same as 'A or B', then what is it? Well, it's just that: a superposition! A superposition is something fundamentally new, with some of the features of "and" and some of the features of "or". We can do no better than to describe the empirically observed features and then give that cluster of features a name. Now, quantum mechanics tells us that for any two possible states that a system can be in, there is another state that corresponds to the system being in a superposition of the two. In fact, there's an infinity of such superpositions, each corresponding to a different weighting of the two states.

Now, the Schrödinger equation is what tells us how quantum mechanical systems evolve over time. And since all of nature is just one really big quantum mechanical system, the Schrödinger equation should also tell us how we evolve over time. So what does the Schrödinger equation tell us happens when we take a particle in a superposition of A and B and make a measurement of it? The answer is clear and unambiguous: The Schrödinger equation tells us that we ourselves enter into a superposition of states, one in which we observe the particle in state A, the other in which we observe it in B. This is a pretty bizarre and radical answer! The first response you might have may be something like "When I observe things, it certainly doesn't seem like I'm entering into a superposition… I just look at the particle and see it in one state or the other. I never see it in this weird in-between state!" But this is not a good argument against the conclusion, as it's exactly what you'd expect by just applying the Schrödinger equation! When you enter into a superposition of "observing A" and "observing B", neither branch of the superposition observes both A and B. And naturally, since neither branch of the superposition "feels" the other branch, nobody freaks out about being superposed.

But there is a problem here, and it's a serious one. The problem is the following: Sure, it's compatible with our experience to say that we enter into superpositions when we make observations. But what predictions does it make? How do we take what the Schrödinger equation says happens to the state of the world and turn it into a falsifiable experimental setup? The answer appears to be that we can't. At least, not using just the Schrödinger equation on its own. To get predictions out, we need an additional postulate, known as the Born rule. This postulate says the following: For a system in a superposition, each branch of the superposition has an associated complex number called the amplitude. The probability of observing any particular branch of the superposition upon measurement is simply the squared magnitude of that branch's amplitude.
For example: A particle is in a superposition of positions A and B. The amplitude attached to A is 0.6. The amplitude attached to B is 0.8. If we now observe the position of the particle, we will find it to be at either A with probability (0.6)² (i.e. 36%), or B with probability (0.8)² (i.e. 64%). Simple enough, right? The problem is to figure out where the Born rule comes from and what it even means. The rule appears to be completely necessary to make quantum mechanics a testable theory at all, but it can't be derived from the Schrödinger equation. And it's not at all inevitable; it could easily have been that probabilities were associated with the amplitude rather than the amplitude squared. Or why not the fourth power of the amplitude? There's a substantive claim here, that probabilities associate with the square of the amplitudes that go into the Schrödinger equation, that needs to be made sense of. There are a lot of different ways that people have tried to do this, and I'll list a few of the more prominent ones here.

The Copenhagen Interpretation

(Prepare to be disappointed.) The Copenhagen interpretation, which has historically been the dominant position among working physicists, is that the Born rule is just an additional rule governing the dynamics of quantum mechanical systems. Sometimes systems evolve according to the Schrödinger equation, and sometimes according to the Born rule. When they evolve according to the Schrödinger equation, they split into superpositions endlessly. When they evolve according to the Born rule, they collapse into a single determinate state. What determines when the systems evolve one way or the other? Something measurement something something observation something. There's no real consensus here, nor even a clear set of well-defined candidate theories. If you're familiar with the way that physics works, this idea should send your head spinning. The claim here is that the universe operates according to two fundamentally different laws, and that the dividing line between the two hinges crucially on what we mean by the words "measurement" and "observation". Suffice it to say, if this was the right way to understand quantum mechanics, it would go entirely against the spirit of the goal of finding a fundamental theory of physics. In a fundamental theory of physics, macroscopic phenomena like measurements and observations need to be built out of the behavior of lots of tiny things like electrons and quarks, not the other way around. We shouldn't find ourselves in the position of trying to give a precise definition to these words, debating whether frogs have the capacity to collapse superpositions or if that requires a higher "measuring capacity", in order to make predictions about the world.

The Copenhagen interpretation is not an elegant theory, it's not a clearly defined theory, and it's fundamentally at tension with the project of theoretical physics. So why has it been, as I said, the dominant approach over the last century to understanding quantum mechanics? This really comes down to physicists not caring enough about the philosophy behind the physics to notice that the approach they are using is fundamentally flawed. In practice, the Copenhagen interpretation works. It allows somebody working in the lab to quickly assess the results of their experiments and to make predictions about how future experiments will turn out.
It gives the right empirical probabilities and is easy to implement, even if the fuzziness in the details can start to make your head hurt if you start to think about it too much. As Jean Bricmont said, "You can't blame most physicists for following this 'shut up and calculate' ethos because it has led to tremendous developments in nuclear physics, atomic physics, solid-state physics and particle physics." But the Copenhagen interpretation is not good enough for us. A serious attempt to make sense of quantum mechanics requires something more substantive. So let's move on.

Objective Collapse Theories

These approaches hinge on the notion that the Schrödinger equation really is the only law at work in the universe, it's just that we have that equation slightly wrong. Objective collapse theories add slight nonlinearities to the Schrödinger equation so that systems sometimes spread out in superpositions and other times collapse into definite states, all according to one single equation. The most famous of these is the spontaneous collapse theory, according to which quantum systems collapse with a probability that grows with the number of particles in the system. This approach is nice for several reasons. For one, it gives us the Born rule without requiring a new equation. It makes sense of the Born rule as a fundamental feature of physical reality, and makes precise and empirically testable predictions that can distinguish it from other interpretations. The drawback? It makes the Schrödinger equation ugly and complicated, and it adds extra parameters that determine how often collapse happens. And as we know, whenever you start adding parameters you run the risk of overfitting your data.

Hidden Variable Theories

These approaches claim that superpositions don't really exist; they're just a high-level consequence of the unusual behavior of the stuff at the smallest level of reality. They deny that the Schrödinger equation is truly fundamental, and say instead that it is a higher-level approximation of an underlying deterministic reality. "Deterministic?! But hasn't quantum mechanics been shown conclusively to be indeterministic??" Well, not entirely. For a while there was a common sentiment amongst physicists that John von Neumann and others had proved beyond a doubt that no deterministic theory could make the predictions that quantum mechanics makes. Later, subtle mistakes were found in these purported proofs that left a door open for determinism. Today there are well-known fleshed-out hidden variable theories that successfully reproduce the predictions of quantum mechanics, and do so fully deterministically. The most famous of these is certainly Bohmian mechanics, also called pilot wave theory. Here's a nice video on it if you'd like to know more, complete with pretty animations. Bohmian mechanics is interesting, appears to work, gives us the Born rule, and is probably empirically distinguishable from other theories (at least in principle). A serious issue with it is that it requires nonlocality, which is a challenge to any attempt to make it consistent with special relativity. Locality is such an important and well-understood feature of our reality that this constitutes a major challenge to the approach.

Many-Worlds / Everettian Interpretations

Ok, finally we talk about the approach that is most interesting in my opinion, and get to the title of this post. The Many-Worlds interpretation says, in essence, that we were wrong to ever want more than the Schrödinger equation.
This is the only law that governs reality, and it gives us everything we need. Many-Worlders deny that superpositions ever collapse. The result of us performing a measurement on a system in superposition is simply that we end up in superposition, and that’s the whole story! So superpositions never collapse, they just go deeper into superposition. There’s not just one you, there’s every you, spread across the different branches of the wave function of the universe. All these yous exist beside each other, living out all your possible life histories. But then where does Many-Worlds get the Born rule from? Well, uh, it’s kind of a mystery. The Born rule isn’t an additional law of physics, because the Schrödinger equation is supposed to be the whole story. It’s not an a priori rule of rationality, because as we said before probabilities could have easily gone as the fourth power of amplitudes, or something else entirely. But if it’s not an a posteriori fact about physics, and also not an a priori knowable principle of rationality, then what is it? This issue has seemed to me to be more and more important and challenging for Many-Worlds the more I have thought about it. It’s hard to see what exactly the rule is even saying in this interpretation. Say I’m about to make a measurement of a system in a superposition of states A and B. Suppose that I know the amplitude of A is much smaller than the amplitude of B. I need some way to say “I have a strong expectation that I will observe B, but there’s a small chance that I’ll see A.” But according to Many-Worlds, a moment from now both observations will be made. There will be a branch of the superposition in which I observe A, and another branch in which I observe B. So what I appear to need to say is something like “I am much more likely to be the me in the branch that observes B than the me that observes A.” But this is a really strange claim that leads us straight into the thorny philosophical issue of personal identity. In what sense are we allowed to say that one and only one of the two resulting humans is really going to be you? Don’t both of them have equal claim to being you? They each have your exact memories and life history so far, the only difference is that one observed A and the other B. Maybe we can use anthropic reasoning here? If I enter into a superposition of observing-A and observing-B, then there are now two “me”s, in some sense. But that gives the wrong prediction! Using the self-sampling assumption, we’d just say “Okay, two yous, so there’s a 50% chance of being each one” and be done with it. But obviously not all binary quantum measurements we make have a 50% chance of turning out either way! Maybe we can say that the world actually splits into some huge number of branches, maybe even infinite, and the fraction of the total branches in which we observe A is exactly the square of the amplitude of A? But this is not what the Schrödinger equation says! The Schrödinger equation tells exactly what happens after we make the observation: we enter a superposition of two states, no more, no less. We’re importing a whole lot into our interpretive apparatus by interpreting this result as claiming the literal existence of an infinity of separate worlds, most of which are identical, and the distribution of which is governed by the amplitudes. 
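To see the counting problem concretely, here's a toy calculation (my own illustration, reusing the 0.6/0.8 amplitudes from the earlier Born rule example):

```python
# Toy contrast between naive branch counting and the Born rule,
# reusing the 0.6 / 0.8 amplitudes from the earlier example.
amps = {"observe A": 0.6, "observe B": 0.8}

# Self-sampling over branches: the Schrödinger equation hands us exactly
# two successor observers, so counting gives 50% each (the wrong answer).
for outcome in amps:
    print(outcome, "naive branch count:", 1 / len(amps))

# Born rule: weight each branch by |amplitude|^2, giving 36% / 64%.
for outcome, a in amps.items():
    print(outcome, "Born weight:", abs(a) ** 2)
```

Nothing in the bare formalism picks out the second counting rule over the first; that's precisely the interpretive apparatus being smuggled in.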
What we're seeing here is that Many-Worlds, by being too insistent on the reality of the superposition, the sole sovereignty of the Schrödinger equation, and the unreality of collapse, ends up running into a lot of problems in actually doing what a good theory of physics is supposed to do: making empirical predictions. The Many-Worlders can of course use the Born rule freely to make predictions about the outcomes of experiments, but they have little to say in answer to what, in their eyes, this rule really amounts to. I don't know of any good way out of this mess. Basically where this leaves me is where I find myself with all of my favorite philosophical topics: totally puzzled and unsatisfied with all of the options that I can see.

Deriving the Lorentz transformation

My last few posts have been all about visualizing the Lorentz transformation, the coordinate transformation in special relativity. But where does this transformation come from? In this post, I'll derive it from basic principles. I saw this derivation first probably a year ago, and have since tried unsuccessfully to re-find the source. It isn't the algebraically simplest derivation I've seen, but it is the conceptually simplest. The principles we'll use to derive the transformation should all seem extremely obvious to you. So let's dive straight in!

The Lorentz transformation in full generality is a 4D matrix that tells you how to transform spacetime coordinates in one inertial reference frame to spacetime coordinates in another inertial reference frame. It turns out that once you've found the Lorentz transformation for one spatial dimension, it's quite simple to generalize it to three spatial dimensions, so for simplicity we'll just stick to the 1D case. The Lorentz transformation also allows you to transform to a coordinate system that is both translated some distance and rotated some angle. Both of these are pretty straightforward, and work the way we intuitively think rotation and translation should work. So I'll not consider them either. The interesting part of the Lorentz transformation is what happens when we translate to reference frames that are co-moving (moving with respect to one another). Strictly speaking, this is called a Lorentz boost. That's what I'll be deriving for you: the 1D Lorentz boost.

So, we start by imagining some reference frame, in which an event is labeled by its temporal and spatial coordinates: t and x. Then we look at a new reference frame moving at velocity v with respect to the starting reference frame. We describe the temporal and spatial coordinates of the same event in the new coordinate system: t' and x'. In general, these new coordinates can be any function whatsoever of the starting coordinates and the velocity v:

t' = f(t, x, v)
x' = g(t, x, v)

To narrow down what these functions f and g might be, we need to postulate some general relationship between the primed and unprimed coordinate system. So, our first postulate!

1. Straight lines stay straight.

Our first postulate is that all observers in inertial reference frames will agree about whether an object is moving at a constant velocity. Since objects moving at constant velocities are straight lines on diagrams of position vs time, this is equivalent to saying that a straight path through spacetime in one reference frame is a straight path through spacetime in all reference frames. More formally, if x is proportional to t, then x' is proportional to t' (though the constant of proportionality may differ).
This postulate turns out to be immensely powerful. There is a special name for the types of transformations that keep straight lines straight: they are linear transformations. (Note, by the way, that the linearity is only in the coordinates t and x, since those are the things that retain straightness. There is no guarantee that the dependence on v will be linear, and in fact it will turn out not to be.) These transformations are extremely simple, and can be represented by a matrix. Let's write out the matrix in full generality:

(t', x') = T(v) (t, x), where T(v) = [ A(v)  B(v) ; C(v)  D(v) ]

so that t' = A(v)·t + B(v)·x and x' = C(v)·t + D(v)·x. We've gone from two functions (f and g) to four (A, B, C, and D). But in exchange, each of these four functions is now only a function of one variable: the velocity v. For ease of future reference, I've chosen to name the matrix T(v). So, our first postulate gives us linearity. On to the second!

2. An object at rest in the starting reference frame is moving with velocity -v in the moving reference frame

This is more or less definitional. If somebody tells you that they had a function that transformed coordinates from one reference frame to a moving reference frame, then the most basic check you can do to see if they're telling the truth is verify that objects at rest in the starting reference frame end up moving in the final reference frame. And again, it seems to follow from what it means for the reference frame to be moving right at 1 m/s that the initially stationary objects should end up moving left at 1 m/s. Let's consider an object sitting at rest at x = 0 in the starting frame of reference. Then we have:

x = 0, and x' = -v·t'

We can plug this into our matrix to get a constraint on the functions A and C:

t' = A(v)·t and x' = C(v)·t, so C(v)·t = -v·A(v)·t, giving C(v) = -v·A(v)

Great! We've gone from four functions to three!

T(v) = [ A(v)  B(v) ; -v·A(v)  D(v) ]

3. Moving to the left at velocity v and to the right at the same velocity is the same as not moving at all

More specifically: Start with any reference frame. Now consider a new reference frame that is moving at velocity v with respect to the starting reference frame. Now, from this new reference frame, consider a third reference frame that is moving at velocity -v. This third reference frame should be identical to the one we started with. Got it? Formally, this is simply saying the following:

T(-v) T(v) = I

(I is the identity matrix.) To make this equation useful, we need to say more about T(-v). In particular, it would be best if we could express T(-v) in terms of our three functions A(v), B(v), and D(v). We do this with our next postulate:

4. Moving at velocity -v is the same as turning 180°, then moving at velocity v, then turning 180° again.

Again, this is quite self-explanatory. As a geometric fact, the reference frame you end up with by turning around, moving at velocity v, and then turning back has got to be the same as the reference frame you'd end up with by moving at velocity -v. All we need to formalize this postulate is the matrix corresponding to rotating 180°:

R = [ 1  0 ; 0  -1 ]

There we go! Rotating by 180° is the same as taking every position in the starting reference frame and flipping its sign. Now we can write our postulate more precisely:

T(-v) = R T(v) R = [ A(v)  -B(v) ; v·A(v)  D(v) ]

Now we can finally use Postulate 3!
T(-v) T(v) = [ A  -B ; vA  D ] [ A  B ; -vA  D ] = [ A² + vAB   AB - BD ; vA² - vAD   vAB + D² ] = I

Doing a little algebra, we get…

From the off-diagonal terms, B(A - D) = 0 and vA(A - D) = 0, so A = D; from the diagonal terms, A² + vAB = 1, so B = (1 - A²)/(vA).

(You might notice that we can only conclude that A = D if we reject the possibility that A = B = 0. We are allowed to do this because allowing A = B = 0 gives us a trivial result in which a moving reference frame experiences no time. Prove this for yourself!) Now we have managed to express all four of our starting functions in terms of just one!

T(v) = [ A   (1 - A²)/(vA) ; -vA   A ]

So far our assumptions have been grounded in almost entirely a priori considerations about what we mean by velocity. It's pretty amazing how far we got with so little! But to progress, we need to include one final a posteriori postulate, that which motivated Einstein to develop special relativity in the first place: the invariance of the speed of light.

5. Light's velocity is c in all reference frames.

The motivation for this postulate comes from mountains of empirical evidence, as well as good theoretical arguments from the nature of light as an electromagnetic phenomenon. We can write it quite simply as:

if x = ct, then x' = ct'

Plugging in our transformation, we get:

t' = (A + Bc)·t and x' = (-vA + Ac)·t

Multiplying the time coordinate by c must give us the space coordinate:

c(A + Bc) = Ac - vA, so B = -vA/c². Combining this with B = (1 - A²)/(vA) gives A²(1 - v²/c²) = 1, so A = 1/√(1 - v²/c²).

And we're done with the derivation! Summarizing our five postulates: straight lines stay straight; an object at rest in the starting frame moves at velocity -v in the new frame; T(-v) T(v) = I; T(-v) = R T(v) R; and light's velocity is c in all reference frames. And our final result:

t' = (t - vx/c²) / √(1 - v²/c²)
x' = (x - vt) / √(1 - v²/c²)

Swapping the past and future

There are a few more cool things you can visualize with the special relativity program from my last post. First of all, a big theme of the last post was the ambiguity of temporal orderings. It's easy to see the temporal ordering of events when there are only three, but gets harder when you have many many events. Let's actually display the temporal order on the visualization, so that we can see how it changes for different frames of reference.

[GIF: temporal order displayed for three events] [GIF: temporal order displayed for many events]

Looking at this second GIF, you can see the immense ambiguity that there is in the temporal order of events. Now, where things get even more interesting is when we consider the spacetime coordinates of events that are not in your future light cone. Check this out:

[GIF: events outside the light cone]

Here's a more detailed image of the paths traced out by events as you change your velocity. Instead of just looking at events in your future light cone, we're now also looking at events outside of your light cone! We chose to look at a bunch of events that are initially all in your future (in the frame of reference where v = 0). Notice now that as we vary the velocity, some of these events end up at earlier times than you! In other words, by changing your frame of reference, events that were in your future can end up in your past. And vice versa; events in the past of one frame of reference can be in the future in the other. We can see this very clearly by considering just two events.

[GIF: two events swapping between future and past]

In the v = 0 frame, Red and Green are simultaneous with you. But for v > 0, Green is before Red is before you, and for v < 0, Green is after Red is after you. The lesson is the following: when considering events outside of your light cone there is no fact of the matter about what events are in your future and which ones are in your past. Now, notice that in the above GIFs we never see events that are in causal contact leave causal contact, or vice versa.
This holds true in general. While things certainly do get weirder when considering events outside your light cone, it is still the case that all observers will agree on what events are in causal contact with one another. And just like before, the temporal ordering of events in causal contact does not depend on your frame of reference. In other words, basketballs are always tossed before they go through the net, even outside your light cone. The same holds when considering interactions between a pair of events that straddle either side of your light cone:

[GIF: events straddling the light cone, without causal contact] [GIF: events straddling the light cone, with causal contact]

If A is in B's light cone from one frame of reference, then A is in B's light cone from all frames of reference. And if A is out of B's light cone in one frame of reference, then it is out of B's light cone in all frames of reference. Once again, we see that special relativity preserves as absolute our bedrock intuitions about causality, even when many of our intuitions about time's objectivity fall away.

Now, all of the implications of special relativity that I've discussed so far have been related to time and causality. But there's also some strange stuff that happens with space. For instance, let's consider a series of events corresponding to an object sitting at rest some distance away from you. On our diagram this looks like the following:

[Image: worldline of an object at rest, at v = 0]

What does this look like when we are moving towards the object? Obviously the object should now be getting closer to us, so we expect the red line to tilt inwards towards the x = 0 point. Here's what we see at 80% of the speed of light:

[Image: the same worldline, viewed from a frame moving at 80% of the speed of light]

As we expected, the object now rushes towards us from our frame of reference, and quickly passes us by and moves off to the left. But notice the spatial distortion in the image! At the present moment (t = 0), the object looks significantly closer than it was previously. (You can see this by starting from the center point and looking to the right to see how much distance you cover before intersecting with the object. This is the distance to the object at t = 0.) This is extremely unusual! Remember, the moving frame of reference is at the exact same spatial position at t = 0 as the still frame of reference. So whether I am moving towards an object or standing still appears to change how far away the object presently is! This is the famous phenomenon of length contraction. If we imagine placing two objects at different distances from the origin, each at rest with respect to the v = 0 frame, then moving towards them would result in both of them getting closer to us as well as to each other, and thus shrinking! Evidently when we move, the universe shrinks!

One last effect we can see in the diagram appears to be a little at odds with what I've just said. This is that the observed distance between yourself and the object increases as you move towards it (and as the actual distance shrinks). Why? Well, what you observe is dictated by the beams of light that make it to your eye. So at the moment t = 0, what you are observing is everything along the two diagonals in the bottom half of the images. And in the second image, where you are moving towards the object, the place where the object and diagonal intersect is much further away than it is in the first image! Evidently, moving towards an object makes it appear further away, even though in reality it is getting closer to you! This holds as a general principle. The reason?
When you observe an object, you are really observing it as it was some time in the past (however much time it took for light to reach your eye). And when you move towards an object, that past moment you are observing falls further into the past. (This is sort of the flip side of time dilation.) Since you are moving towards the object, looking further into the past means looking at the object when it was further away from you. And so the object ends up appearing more distant from you than before! There are a bunch more weird and fascinating effects that you can spot in these kinds of visualizations, but I'll stop there for now.

Visualizing Special Relativity

I've been thinking a lot about special relativity recently, and wrote up a fun program for visualizing some of its stranger implications. Before going on to these visualizations, I want to recommend the YouTube channel MinutePhysics, which made a fantastic primer on the subject; the first few videos may help with understanding the rest of the post. I highly recommend the entire series, even if you're already pretty familiar with the subject. Now, on to the pretty images! I'm still trying to determine whether it's possible to embed applets in my posts, so that you can play with the program for yourself. Until I figure that out, GIFs will have to suffice. [animation: many particles under varying observer velocity] Let me explain what's going on in the image. First of all, the vertical direction is time (up is the future, down is the past), and the horizontal direction is space (which is 1D for simplicity). What we're looking at is the universe as described by an observer at a particular point in space and time. The point that this observer is at is right smack-dab in the center of the diagram, where the two black diagonal lines meet. These lines represent the observer's light cone: the paths through spacetime that would be taken by beams of light emitted in either direction. And finally, the multicolored dots scattered in the upper quadrant represent other spacetime events in the observer's future. Now, what is being varied is the velocity of the observer. Again, keep in mind that the observer is not actually moving through time in this visualization. What is being shown is the way that other events would be arranged spatially and temporally if the observer had different velocities. Take a second to reflect on how you would expect this diagram to look classically. Obviously the temporal positions of events would not depend on your velocity. What about the spatial positions of events? Well, if you move to the right, events in your future and to the right of you should be nearer to you than they would be had you not been in motion. And similarly, events in your future left should be further to the left. We can easily visualize this by plugging in the classical Galilean transformation: [animation: Galilean transformation] Just as we expected, temporal positions stay constant and spatial positions shift according to your velocity! Positive velocity (moving to the right) moves future events to the left, and negative velocity moves them to the right. Now, technically this image is wrong: I've kept the light paths constant, but even these would shift under the classical transformation. In reality we'd get something like this: [animation: Galilean transformation with shifted light paths] Of course, the empirical falsity of this prediction, that the speed of light should vary according to your own velocity, is what drove Einstein to formulate special relativity.
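Before moving on, here is a minimal side-by-side sketch of the two transformations (my own code, in c = 1 units, not the visualization program itself): applied to events lying along a light ray, the Galilean transformation changes the pulse's speed to c − v, while the Lorentz transformation leaves it at exactly c.

```python
import numpy as np

def galileo(t, x, v):
    return t, x - v * t                    # classical transformation

def lorentz(t, x, v):
    g = 1.0 / np.sqrt(1.0 - v**2)          # c = 1
    return g * (t - v * x), g * (x - v * t)

t = np.array([1.0, 2.0, 3.0])
x = t.copy()                               # events on a light ray, x = t

for v in (0.5, -0.5):
    tg, xg = galileo(t, x, v)
    tl, xl = lorentz(t, x, v)
    print(f"v={v:+.1f}  Galilean speed: {(xg[1] - xg[0]) / (tg[1] - tg[0]):.2f}"
          f"   Lorentz speed: {(xl[1] - xl[0]) / (tl[1] - tl[0]):.2f}")
```

Swapping galileo for lorentz is the only change the visualization needs.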
Here’s what happens with just a few particles when we vary the velocity: RGB Transform What I love about this is how you can see so many effects in one short gif. First of all, the speed of light stays constant. That’s a good sign! A constant speed of light is pretty much the whole point of special relativity. Secondly, and incredibly bizarrely, the temporal positions of objects depend on your velocity!! Objects to your future right don’t just get further away spatially when you move away from them, they also get further away temporally! Another thing that you can see in this visualization is the relativity of simultaneity. When the velocity is zero, Red and Blue are at the same moment of time. But if our velocity is greater than zero, Red falls behind Blue in temporal order. And if we travel at a negative velocity (to the left), then we would observe Red as occurring after Blue in time. In fact, you can find a velocity that makes any two of these three points simultaneous! This leads to the next observation we can make: The temporal order of events is relative! The orderings of events that you can observe include Red-Green-Blue, Green-Red-Blue, Green-Blue-Red, and Blue-Green-Red. See if you can spot them all! This is probably the most bonkers consequence of special relativity. In general, we cannot say without ambiguity that Event A occurred before or after Event B. The notion of an objective temporal ordering of events simply must be discarded if we are to hold onto the observation of a constant speed of light. Are there any constraints on the possible temporal orderings of events? Or does special relativity commit us to having to say that from some valid frames of reference, the basketball going through the net preceded the throwing of the ball? Well, notice that above we didn’t get all possible orders… in particular we didn’t have Red-Blue-Green or Blue-Red-Green. It turns out that in general, there are some constraints we can place on temporal orderings. Just for fun, we can add in the future light cones of each of the three events: RGB with Light Cones.gif Two things to notice: First, all three events are outside each others’ light cones. And second, no event ever crosses over into another event’s light cone. This makes some intuitive sense, and gives us a constant that will hold true in all reference frames: Events that are outside each others’ light cones from one perspective, are outside each others’ light cones from all perspectives. Same thing for events that are inside each others’ light cones. Conceptually, events being inside each others’ light cones corresponds to them being in causal contact. So another way we can say this is that all observers will agree on what the possible causal relationships in the universe are. (For the purposes of this post, I’m completely disregarding the craziness that comes up when we consider quantum entanglement and “spooky action at a distance.”)  Now, is it ever possible for events in causal contact to switch temporal order upon a change in reference frame? Or, in other words, could effects precede their causes? Let’s look at a diagram in which one event is contained inside the light cone of another: RGB Causal Looking at this visualization, it becomes quite obvious that this is just not possible! Blue is fully contained inside the future light cone of Red, and no matter what frame of reference we choose, it cannot escape this. 
Even though we haven’t formally proved it, I think that the visualization gives the beginnings of an intuition about why this is so. Let’s postulate this as another absolute truth: If Event A is contained within the light cone of Event B, all observers will agree on the temporal order of the two events. Or, in plainer language, there can be no controversy over whether a cause precedes its effects. I’ll leave you with some pretty visualizations of hundreds of colorful events transforming as you change reference frames: Pretty Transforms LQ And finally, let’s trace out the set of possible space-time locations of each event. Screen Shot 2018-12-06 at 3.22.43 PM.png Try to guess what geometric shape these paths are! (They’re not parabolas.) Hint. Fractals and Epicycles Norwood Russell Hanson, “The Mathematical Power of Epicyclical Astronomy” A friend recently showed me this image… …and thus I was drawn into the world of epicycles and fractals. Epicycles were first used by the Greeks to reconcile observational data of the motions of the planets with the theory that all bodies orbit the Earth in perfect circles. It was found that epicycles allowed astronomers to retain their belief in perfectly circular orbits, as well as the centrality of Earth. The cost of this, however, was a system with many adjustable parameters (as many parameters as there were epicycles). There’s a somewhat common trope about adding on endless epicycles to a theory, the idea being that by being overly flexible and accommodating of data you lose epistemic credibility. This happens to fit perfectly with my most recent posts on model selection and overfitting! The epicycle view of the solar system is one that is able to explain virtually any observational data. (There’s a pretty cool reason for this that has to do with the properties of Fourier series, but I won’t go into it.) The cost of this is a massive model with many parameters. The heliocentric model of the solar system, coupled with the Newtonian theory of gravity, turns out to be able to match all the same data with far fewer adjustable parameters. So by all of the model selection criteria we went over, it makes sense to switch over from one to the other. Of course, it is not the case that we should have been able to tell a priori that an epicycle model of the planets’ motions was a bad idea. “Every planet orbits Earth on at most one epicycle”, for instance, is a perfectly reasonable scientific hypothesis… it just so happened that it didn’t fit the data. And adding epicycles to improve the fit to data is also not bad scientific practice, so long as you aren’t ignoring other equally good models with fewer parameters.) Okay, enough blabbing. On to the pretty pictures! I was fascinated by the Hilbert curve drawn above, so I decided to write up a program of my own that generates custom fractal images from epicycles. Here are some gifs I created for your enjoyment: Negative doubling of angular velocity (Each arm rotates in the opposite direction of the previous arm, and at twice its angular velocity. The length of each arm is half that of the previous.) 
Trebling of angular velocity [animation]

Negative trebling [animation]

Here's a still frame of the final product for N = 20 epicycles: [image]

ωn ~ (n+1)·2^n (or, the Fractal Frog) [animation]

ωn ~ n, rn ~ 1/n [animation]

ωn ~ n, constant rn [animation]

ωn ~ 2^n, rn ~ 1/n² [animation]

And here's a still frame at N = 20: [image: high-res pincers]

(All animations were built using, which I highly recommend for quick and easy construction of visualizations.)
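If you'd like to reproduce these curves without the original program, the construction is just a finite sum of rotating arms added tip-to-tail. Below is a minimal matplotlib sketch of my own for the "negative doubling" case, in which arm n has length 2^−n and angular velocity (−2)^n; it is not the code used to make the GIFs above.

```python
import numpy as np
import matplotlib.pyplot as plt

N = 12                  # number of epicycle arms
n = np.arange(N)
r = 0.5 ** n            # each arm half the length of the previous one
w = (-2.0) ** n         # each arm twice as fast, in the opposite direction

t = np.linspace(0.0, 2.0 * np.pi, 100_000)
# The pen position is the complex sum of all rotating arms.
z = (r[:, None] * np.exp(1j * w[:, None] * t)).sum(axis=0)

plt.figure(figsize=(6, 6))
plt.plot(z.real, z.imag, linewidth=0.2)
plt.gca().set_aspect("equal")
plt.axis("off")
plt.show()
```

Changing the two lines defining r and w reproduces the other families (trebling, ωn ~ n with rn ~ 1/n, and so on).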
The Intersubband Approach to Si-based Lasers

By Greg Sun. Published April 1st, 2010. DOI: 10.5772/8672

1. Introduction

Silicon has been the miracle material for the electronics industry, and for the past twenty years, technology based on Si microelectronics has been the engine driving the digital revolution. For years, the rapid "Moore's Law" miniaturization of device sizes has yielded an ever-increasing density of fast components integrated on Si chips; but while the feature size was being pushed down towards its ultimate physical limits, there has also been a tremendous effort to broaden the reach of Si technology by expanding its functionalities well beyond electronics. Si is now being increasingly investigated as a platform for building photonic devices. The field of Si photonics has seen impressive growth since early visions in the 1980s and 1990s [1,2]. The huge infrastructure of the global Si electronics industry is expected to benefit the fabrication of highly sophisticated Si photonic devices at costs lower than those currently required for compound semiconductors. Furthermore, Si-based photonic devices make possible the monolithic integration of photonic devices with high-speed Si electronics, thereby enabling an oncoming Si-based "optoelectronic revolution". Among the many photonic devices that make up a complete set of necessary components in Si photonics, including light emitters, amplifiers, photodetectors, waveguides, modulators, couplers, and switches, the most difficult challenge is the lack of an efficient light source. The reason for this striking absence is that bulk Si has an indirect band gap: the minimum of the conduction band and the maximum of the valence band do not occur at the same value of crystal momentum in wave vector space (Fig. 1). Since photons have negligible momentum compared with that of electrons, the recombination of an electron-hole pair cannot emit a photon without the simultaneous emission or absorption of a phonon to conserve momentum. Such a radiative recombination is a second-order effect occurring with small probability, and it competes with nonradiative processes that take place at much faster rates. As a result, as marvelous as it has been for electronics, bulk Si has not been the material of choice for making light-emitting devices, including lasers. Nevertheless, driven by the enormous payoff in technology advancement and commercialization, many research groups around the world have been seeking novel approaches to overcome this intrinsic problem of Si and develop efficient Si-based light sources. One interesting method is to use small Si nanocrystals dispersed in a dielectric matrix, oftentimes SiO2. Such nano-scaled Si clusters are naturally formed by the thermal annealing of a Si-rich oxide thin film. Silicon nanocrystals situated in the much wider band gap SiO2 can effectively localize electrons with quantum confinement, which improves the radiative recombination probability, shifts the emission spectrum toward shorter wavelengths, and decreases the free carrier absorption.

Figure 1. Illustration of a photon emission process in (a) a direct and (b) an indirect band gap semiconductor.
Optical gain and stimulated emission have been observed from these Si nanocrystals under both optical pumping [3,4] and electrical injection [5], but the origin of the observed optical gain has not been fully understood, as the experiments were not always reproducible; results were sensitive to the methods by which the samples were prepared. In addition, before Si-nanocrystal-based lasers can be demonstrated, the active medium has to be immersed in a tightly confining optical waveguide or cavity. Another approach is motivated by the light amplification in Er-doped optical fibers that utilizes the radiative transitions in Er ions (Er³⁺) [6]. By incorporating Er³⁺ in Si, these ions can be excited by energy transfer from electrically injected electron-hole pairs in Si and will subsequently relax by emitting photons at the telecommunication wavelength of 1.55 µm. However, the concentration of Er³⁺ ions that can be doped into Si is relatively low, and there is significant energy back-transfer from the Er³⁺ ions to the Si host due to a resonance with a defect level in Si. As a result, both the efficiency and the maximum power output have been extremely low [7,8]. To reduce the back-transfer of energy, SiO2, with its enlarged band gap, has been proposed as a host to remove the resonance between the defect and the Er³⁺ energy levels [9]. Once again, Si-rich oxide is employed to form Si nanocrystals in close proximity to the Er³⁺ ions. The idea is to excite the Er³⁺ ions by energy transfer from the nearby Si nanocrystals. Light emitting diodes (LEDs) with efficiencies of about 10% have been demonstrated [10], on par with commercial devices made of GaAs, but with power output only in the tens of µW. While there have been proposals to develop lasers using Er doped into Si-based dielectrics, the goal remains elusive. The only approach so far that has led to the demonstration of lasing in Si exploits the effect of stimulated Raman scattering [11-13], analogous to that produced in fiber Raman amplifiers. With both the optical pumping and the Raman scattering below the band gap of Si, the indirectness of the Si band gap becomes irrelevant. Depending on whether it is a Stokes or anti-Stokes process, the Raman scattering either emits or absorbs an optical phonon. Such a nonlinear process requires optical pumping at very high intensities (~100 MW/cm²), and the device lengths (~cm) are too large to be integrated with other photonic and electronic devices in any type of Si VLSI-type circuit [14]. Meanwhile, the search for laser devices that can be integrated on Si chips has gone well beyond the monolithic approach to seek solutions using hybrid integration of III-V compounds with Si. A laser with an AlGaInAs quantum well (QW) active region bonded to a silicon waveguide cavity has been demonstrated [15]. This fabrication technique allows the optical waveguide to be defined by the CMOS-compatible Si process while the optical gain is provided by III-V materials. Rare-earth-doped visible-wavelength GaN lasers fabricated on Si substrates are also potentially compatible with the Si CMOS process [16]. Another effort produced InGaAs quantum dot lasers deposited directly on Si substrates with a thin GaAs buffer layer [17]. Although these hybrid approaches offer important alternatives, they do not represent the ultimate goal of Si-based lasers monolithically integrated with Si electronics.
While progress is being made along these lines and debates continue about which method offers the best promise, yet another approach has emerged that has received a great deal of attention in the past decade: an approach in which the lasing mechanism is based on intersubband transitions (ISTs) in semiconductor QWs. Such transitions take place between quantum confined states (subbands) of the conduction or valence bands and do not cross the semiconductor band gap. Since carriers remain in the same energy band (either conduction or valence), the optical transitions are always direct in momentum space, rendering the indirectness of the Si band gap irrelevant. Developing lasers using ISTs therefore provides a promising alternative that completely circumvents the issue of the indirect Si band gap. In addition, this type of laser can be conveniently designed for electrical pumping: the so-called quantum cascade laser (QCL). The pursuit of Si-based QCLs might turn out to be a viable path to achieving electrically pumped Si-based coherent emitters that are suitable for monolithic integration with Si photonic and electronic devices. In this chapter, lasing processes based on ISTs in QWs are explained by drawing a comparison to conventional band-to-band lasers. Approaches and results towards SiGe QCLs using ISTs in the valence band are reviewed, and the challenges and limitations of the SiGe valence-band QCLs are discussed with respect to materials and structures. In addition, ideas are proposed to develop conduction-band QCLs, among them a novel QCL structure that expands the material combination to SiGeSn. This is described in detail as a way to potentially overcome the difficulties encountered in the development of SiGe QCLs.

2. Lasers based on intersubband transitions

Research on quantum confined structures, including semiconductor QWs and superlattices (SLs), was pioneered by Esaki and Tsu in 1970 [18]. Since then, confined structures have been developed as the building blocks for the majority of modern-day semiconductor optoelectronic devices. QWs are formed by depositing a narrower band gap semiconductor, with a layer thickness thinner than the de Broglie wavelength of the electron (~10 nm), between two wider band gap semiconductors (Fig. 2(a)). The one-dimensional quantum confinement leads to quantized states (subbands) along the growth direction z within both the conduction and valence bands. The energy position of each subband depends on the band offset (ΔEc, ΔEv) and the effective mass of the carrier. In directions perpendicular to z (in-plane), the carriers are unconfined and can thus propagate with an in-plane wave vector k, which gives an energy dispersion for each subband (Fig. 2(b)).

Figure 2. Illustration of (a) conduction and valence subband formation in a semiconductor QW and (b) in-plane subband dispersions with optical transitions between conduction and valence subbands.

Obviously, if the band offset is large enough, there can be multiple subbands present within either the conduction or the valence band, as shown in Fig. 3, where two subbands are confined within the conduction band. The electron wavefunctions (Fig. 3(a)) and energy dispersions (Fig. 3(b)) are illustrated for the two subbands. The concept of ISTs refers to the physical process of a carrier transition between these subbands within either the conduction or the valence band, as illustrated in Fig. 3. Carriers originally occupying a higher energy subband can make a radiative transition to a lower subband by emitting a photon.
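To get a feel for the energy scales, the textbook particle-in-a-box formula En = n²π²ħ²/(2m*L²) gives a useful first estimate of where the subbands sit. The short sketch below is my own illustration, with an assumed 10 nm well and a GaAs-like mass of 0.067m0; real wells have finite barriers, so the actual subband energies come out somewhat lower.

```python
import numpy as np

hbar = 1.054571817e-34    # J*s
m0 = 9.1093837015e-31     # kg
meV = 1.602176634e-22     # J per meV

def infinite_well_levels(L, m_eff, n_max=3):
    """Subband minima (meV) of an idealized infinitely deep QW of width L."""
    n = np.arange(1, n_max + 1)
    return (n * np.pi * hbar / L) ** 2 / (2.0 * m_eff) / meV

# Illustrative numbers: a 10 nm well with a GaAs-like electron mass.
E = infinite_well_levels(10e-9, 0.067 * m0)
print(E)            # roughly [56, 224, 505] meV
print(E[1] - E[0])  # E2 - E1, roughly 168 meV
```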
Coherent sources utilizing this type of transition as the origin of light emission are called intersubband lasers. The original idea of creating light sources based on ISTs was proposed by Kazarinov and Suris [19] in 1971, but the first QCL was not demonstrated until 1994, by a group led by Capasso at Bell Laboratories [20]. In comparison with conventional band-to-band lasers, lasers based on ISTs require a much more complex design of the active region, which consists of carefully arranged multiple QWs (MQWs). The reason for the added complexity can be appreciated by comparing the very different band dispersions involved in these two types of lasers. In a conventional band-to-band laser, it appears that the laser states consist of two broad bands. But a closer look at the conduction and valence band dispersions (Fig. 2(b)) reveals a familiar four-level scheme: in addition to the upper laser states |u⟩ located near the bottom of the conduction band and the lower laser states |l⟩ near the top of the valence band, there are two other sets of participating states, the intermediate states |i⟩ and the ground states |g⟩. The pumping process (either injection or optical) places electrons into the intermediate states |i⟩, from which they quickly relax toward the upper laser states |u⟩ by inelastic intraband scattering processes. This process is very fast, occurring on a sub-picosecond scale. But once they reach the states |u⟩, they tend to stay there for a much longer time, determined by the band-to-band recombination rate, which is on the order of nanoseconds. Electrons that have gone through the lasing transition to the lower laser states |l⟩ quickly scatter into the lower energy states of the valence band, the ground states |g⟩, by the same fast inelastic intraband processes. (A more conventional way to look at this is as the relaxation of holes toward the top of the valence band.) The population inversion between |u⟩ and |l⟩ is therefore established mostly by the fundamental difference between the processes determining the lifetimes of the upper and lower laser states. As a result, the lasing threshold can be reached when the whole population of the upper conduction band is only a tiny fraction of that of the lower valence band.

Figure 3. (a) Two subbands formed within the conduction band confined in a QW and their electron envelope functions; (b) in-plane energy dispersions of the two subbands. The radiative intersubband transition between the two subbands is highlighted.

Let us now turn our attention to the intersubband transition shown in Fig. 3(b). The in-plane dispersions of the upper |u⟩ and lower |l⟩ conduction subbands are almost identical when the band nonparabolicity can be neglected. For all practical purposes they can be considered as two discrete levels. Then, in order to achieve population inversion, it is necessary for the whole population of the upper subband to exceed that of the lower subband. For this reason, a three- or four-subband scheme becomes necessary to reach the lasing threshold. Even then, since the relaxation rates between different subbands are determined by the same intraband processes, a complex multiple QW structure needs to be designed to engineer the lifetimes of the involved subbands. Still, intersubband lasers offer advantages in areas where conventional band-to-band lasers simply cannot compete. In band-to-band lasers, the lasing wavelengths are mostly determined by the intrinsic band gap of the semiconductors.
There is very little room for tuning, accomplished by varying structural parameters such as strain, alloy composition, and layer thickness. Especially for applications in the mid-IR to far-IR range, there are no suitable semiconductors with appropriate band gaps from which such lasers can be made. With intersubband transitions, we are no longer limited by the availability of semiconductor materials to produce lasers in this long wavelength region. In addition, since ISTs occur between conduction subbands with parallel band dispersions, intersubband lasers have a much narrower gain spectrum in comparison to band-to-band lasers, in which the conduction and valence bands have opposite band curvatures. A practical design featuring an optically pumped four-level intersubband laser was proposed by Sun and Khurgin [21,22] in the early 1990s. This work laid out a comprehensive analysis of the various intersubband processes that affect lasing operation, including the scattering mechanisms that determine subband lifetimes, the conditions for population inversion between two subbands, the band engineering needed to achieve it, and the optical gain required to compensate for losses under realistic pumping intensity. The QCLs developed soon thereafter significantly expanded the design in order to accommodate electrical pumping by implementing a rather elaborate scheme of current injection, with a chirped SL used as the injector region placed in between the active regions (Fig. 4). A QCL has a periodic structure, with each period consisting of an active and an injector region. Both active and injector regions are composed of MQWs. By choosing combinations of layer thicknesses and material compositions, three subband levels with the proper energy separations and wavefunction overlaps are obtained in the active region. The injector region, on the other hand, is designed with a sequence of QWs having decreasing well widths (a chirped SL) such that they form a miniband under an electric bias, which facilitates electron transport. The basic operating principle of a QCL is illustrated in Fig. 4. Electrons are first injected through a barrier into subband 3 (the upper laser state) of the active region; they then undergo lasing transitions to subband 2 (the lower laser state) by emitting photons, followed by fast depopulation into subband 1 via nonradiative processes. These electrons are subsequently transported through the injector region into the next active region, where they repeat the process in a cascading manner, typically 20 to 100 times.

Figure 4. Schematic band diagram of two periods of a QCL structure, with each period consisting of an active and an injector region. Lasing transitions are between states 3 and 2 in the active regions, with rapid depopulation of the lower state 2 into state 1, which couples strongly with the minibands formed in the injector regions that transport carriers to state 3 of the next period. The magnitude-squared wavefunctions of the three subbands in the active regions are illustrated.

Advances in QCLs since the first demonstration have resulted in dramatic performance improvements in spectral range, power, and temperature. They have become the dominant mid-IR semiconductor laser sources, covering the spectral range 3 µm ≤ λ ≤ 25 µm [23-25], with many of them operating in continuous-wave mode at room temperature with peak powers reaching a few watts [26,27].
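Since the emission wavelength of an intersubband laser is set directly by the subband separation rather than by a band gap, converting between the two is a one-liner. The sketch below is my own, with illustrative transition energies:

```python
h = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e8      # speed of light, m/s

def wavelength_um(E_meV):
    """Free-space wavelength (um) for a transition energy in meV."""
    return h * c / (E_meV * 1e-3) * 1e6

for E in (400.0, 50.0, 12.0):   # meV
    print(f"{E:6.1f} meV  ->  {wavelength_um(E):7.1f} um")
# 400 meV -> ~3.1 um (mid-IR); 50 meV -> ~25 um; 12 meV -> ~103 um (~2.9 THz)
```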
Meanwhile, QCLs have also penetrated deep into the THz regime, loosely defined as the spectral region 100 GHz ≤ f ≤ 10 THz, or 30 µm ≤ λ ≤ 3000 µm, bridging the gap between the far-IR and GHz microwaves. At present, spectral coverage from 0.84-5.0 THz has been demonstrated, with operation in either pulsed or continuous-wave mode at temperatures well above 100 K [28].

3. Intersubband theory

In order to better explain the design considerations of intersubband lasers, it is necessary to introduce some basic physics that underlies the formation of subbands in QWs and their associated intersubband processes. The calculation procedure described here follows the envelope function approach based on the effective-mass approximation [29]. The k·p method [30] is outlined to obtain the in-plane subband dispersions in the valence band. The optical gain for transitions between subbands in the conduction and valence bands is derived. Various scattering mechanisms that determine the subband lifetimes are discussed, with an emphasis on the carrier-phonon scattering processes.

3.1. Subbands and dispersions

Let us treat the conduction subbands first. It is well known that in bulk material near the band edge, the band dispersion with an isotropic effective mass follows a parabolic relationship. In a QW structure, along the in-plane directions (k = kx x̂ + ky ŷ) in which electrons are unconfined, this curvature is preserved for a given subband i, assuming that the nonparabolicity describing the energy-dependent effective mass me* can be neglected:

Ei(k) = Ei + ħ²k²/(2me*),   (1)

where ħ is the reduced Planck constant and Ei is the minimum energy of subband i in the QW structure. This minimum energy can be calculated as one of the eigenvalues of the Schrödinger equation along the growth direction z,

−(ħ²/2) d/dz [ (1/me*(z)) dφi(z)/dz ] + Vc(z) φi(z) = Ei φi(z),   (2)

where the z-dependence of me* allows for different effective masses in different layers, and Vc(z) represents the conduction band edge along the growth direction z. The envelope function of subband i, φi(z), together with the electron Bloch function ue(R) and the plane wave e^(jk·r), gives the electron wavefunction in the QW structure as

Ψi,k(R) = φi(z) ue(R) e^(jk·r),   (3)

where the position vector is decomposed into in-plane and growth components, R = r + z ẑ. Since we are treating electron subbands, the Bloch function is approximately the same for all subbands and all k-vectors. The electron envelope function can be written as a combination of forward and backward propagating waves in a given region l of the QW structure (either a QW or a barrier region), dl ≤ z ≤ dl+1,

φi(z) = Al e^(jkz z) + Bl e^(−jkz z),   (4)

where Al and Bl are constants that need to be fixed with the continuity conditions at each of the interfaces z = dl, in conjunction with the relationship between the subband minimum energy Ei and the quantized wave vector kz in the z-direction, kz = √(2me*(Ei − Vc))/ħ, where kz assumes either a real or an imaginary value depending on whether Ei lies above or below Vc(z). The continuity conditions in Eq.(5), the continuity of φi and of (1/me*) dφi/dz across each interface, ensure a continuous electron distribution and conservation of electron current across the interface. In the presence of an electric field E applied in the z-direction, the potential term Vc(z) in the Schrödinger equation Eq.(2) becomes tilted along the z-direction according to eEz.
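As a rough illustration of how Eq.(2) can be solved numerically, here is a minimal finite-difference sketch of my own. It simplifies aggressively: a uniform grid, a constant effective mass (so the position-dependent mass ordering in Eq.(2) is moot), and no electric field or Coulomb term; the well width, band offset, and mass are assumed values for illustration only.

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m0 = 9.1093837015e-31    # kg
eV = 1.602176634e-19     # J

def subbands(z, V, m_eff, n_states=2):
    """Lowest eigenvalues of the constant-mass version of Eq.(2),
    obtained by finite-difference diagonalization on a uniform grid."""
    dz = z[1] - z[0]
    kin = hbar**2 / (2.0 * m_eff * dz**2)
    H = (np.diag(V + 2.0 * kin)
         - kin * np.eye(len(z), k=1)
         - kin * np.eye(len(z), k=-1))
    return np.linalg.eigvalsh(H)[:n_states]

# Illustrative single QW: 10 nm wide, 0.3 eV barriers, m* = 0.067 m0.
z = np.linspace(-15e-9, 15e-9, 601)
V = np.where(np.abs(z) < 5e-9, 0.0, 0.3 * eV)
print(subbands(z, V, 0.067 * m0) / eV)   # bound subband minima in eV
```

The same diagonalization works for a tilted potential V(z) + eEz, which is the digitized-potential situation described below.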
If the Coulomb effect due to the distribution of electrons in the subbands needs to be taken into consideration, then the potential in region l of the QW structure with conduction band edge Vc,l should be modified by adding a term eϕ(z) that accounts for the potential due to the electron distributions in all subbands; ϕ(z) can be obtained by solving the Poisson equation self-consistently with Eq.(2). Here e is the charge of a free electron, ε0 is the permittivity of free space, ε(z) is the z-dependent dielectric constant of the QW structure, ni is the electron density of subband i, and Nd(z) is the n-type doping profile of the structure. In comparison with the conduction band, the situation in the valence band is far more complex, mostly because of the interactions between subbands of different effective masses, which produce strong nonparabolicity. The in-plane dispersions of the valence subbands and their associated envelope functions can be obtained in the framework of the effective-mass approximation by applying the k·p theory [30] to QWs [31], where, in the most general treatment, an 8×8 Hamiltonian matrix is employed to describe the interactions between the conduction, heavy-hole (HH), light-hole (LH), and spin-orbit split-off (SO) bands. Oftentimes, for semiconductors in which the conduction band is separated far in energy from the valence band, the coupling to the conduction band can be ignored. For the group-IV semiconductors Si and Ge with indirect band gaps, this approximation is particularly adequate. In structures where there is little strain, such as GaAs/AlGaAs, the SO band coupling can also be ignored, and the 8×8 Hamiltonian matrix can then be reduced to a 4×4 matrix. But for systems with appreciable lattice mismatch, strain induces strong coupling between the LH and SO bands. For the SiGe system, with its large lattice mismatch, the SO band should be included, and a 6×6 Hamiltonian matrix equation needs to be solved to obtain the dispersion relations and envelope functions. Such a 6×6 matrix Hamiltonian equation can be solved exactly in multiple QW structures under the bias of an electric field. A procedure based on the Luttinger-Kohn Hamiltonian [32,33] is outlined as follows. The 6×6 Luttinger-Kohn Hamiltonian matrix, including uniaxial stress along (001), is given in the space of the HH (|3/2, ±3/2⟩), LH (|3/2, ±1/2⟩), and SO (|1/2, ±1/2⟩) Bloch functions (Eq.(9)), where Vv(z) is the valence band edge profile (degenerate for the HH and LH bands) of the QW structure, m0 is the mass of a free electron, γ1, γ2, γ3 are the Luttinger parameters, and av and b are the deformation potentials [34], with different values in the QWs and the barriers. The lattice-mismatch strain is ε = (a0 − a)/a, where a0 and a are the lattice constants of the substrate (or buffer) and of the layer material, and C11 and C12 are the stiffness constants. The Hamiltonian in Eq.(9) operates on wavefunctions that are combinations of the six mutually orthogonal HH (|3/2, ±3/2⟩), LH (|3/2, ±1/2⟩), and SO (|1/2, ±1/2⟩) Bloch functions, where the coefficients χn(z), n = 1, 2, …, 6, form a six-component envelope-function vector χ(z). Each component in a given region l of the QW structure (either QW or barrier), dl ≤ z ≤ dl+1, is a superposition of forward and backward propagating waves identical to Eq.(4), with constants An,l and Bn,l, n = 1, 2, …, 6, that can be fixed by the continuity conditions, which require an undisrupted carrier distribution and current at each interface z = dl.
It is important to point out that when the algorithms described above are used in a situation where an electric field is applied along the growth direction z, it is necessary to digitize the potential terms Vc(z) and Vv(z); i.e., the regions used in Eq.(4) are no longer defined by the QW and barrier boundaries. Instead, there can be many regions within each QW or barrier, depending on the number of digitization steps needed to satisfy the accuracy requirement. This procedure, applied at each in-plane wave vector point (k = kx x̂ + ky ŷ), produces the in-plane dispersion of each subband. An example is illustrated in Fig. 5 for a 70 Å/50 Å GaAs/Al0.3Ga0.7As SL [35]. The in-plane dispersions of three subbands (two HH and one LH) are shown, demonstrating strong nonparabolicity. It can be seen from Fig. 5 that the band nonparabolicity can be so severe that the LH subband maximum is no longer at the Γ-point, which leads to useful valence QCL design applications in Section IV.

Figure 5. In-plane dispersions of subbands HH1, LH1, and HH2 for a 70 Å/50 Å GaAs/AlGaAs SL [35].

3.2. Optical gain

For lasing to occur between two subbands, it is necessary to induce stimulated emission between them, and to sustain such emission of photons there must be sufficient optical gain to compensate the various losses in the laser structure. The intersubband optical gain can be obtained by analyzing the transition rates between two subbands. According to the Fermi golden rule, the transition rate between two discrete states 1 and 2 that are coupled by a perturbing electromagnetic (EM) field of frequency ω is

W = (2π/ħ) |Hm|² δ(E2 − E1 − ħω),   (15)

where Hm = ⟨1|Hex|2⟩ is the transition matrix element under the influence of the perturbation Hamiltonian Hex between the two states, with an exact transition energy E2 − E1 in the absence of any broadening. In reality, the transition line E2 − E1 is not infinitely sharp and is always broadened. As a result, E2 − E1 is not known exactly; instead, one describes the probability for it to appear in the energy interval E to E + dE. In the case of homogeneous broadening, this probability is given as L(E)dE with the Lorentzian lineshape centered at some peak transition energy E0,

L(E) = (1/π) (Γ/2) / [(E − E0)² + (Γ/2)²],   (16)

where Γ is the full width at half maximum (FWHM) that characterizes the broadening due to various homogeneous processes, including collisions and transitions. The transition rate in Eq.(15) should thus be modified by an integral that takes this broadening into account (Eq.(17)), essentially replacing the δ-function in Eq.(15) with the Lorentzian lineshape of Eq.(16). In the presence of an EM field with an optical vector potential A in a medium with an isotropic effective mass, the perturbation Hamiltonian Hex describing the interaction between the field and an electron in isotropic subbands is proportional to A·P (Eq.(18)), where P is the momentum operator. From Eq.(18) it is not difficult to see that the selection rules for intersubband transitions in the conduction band are such that only EM fields polarized along the growth direction (z) can induce optical transitions. The transition matrix element can then be written (Eq.(19)) in terms of the momentum matrix element, evaluated as the envelope-function overlap between the two subbands, which is related to the dipole matrix element [36] and to the oscillator strength [37]. It is not difficult to see from Eq.(17) that the transition rate induced by an EM field between two eigenstates is the same for upward and downward transitions. Now let us apply Eq.(17) to intersubband transitions between the upper subband 2 and the lower subband 1 in the conduction band (Fig. 3).
Since the momenta associated with photons are negligible, all photon-induced transitions are vertical in k-space. It is therefore possible to obtain a net downward transition rate (in units of transitions per unit time per unit sample area) between the two subbands by evaluating an integral over the in-plane states (Eq.(23)), where f1(E1,k) and f2(E2,k) are the electron occupation probabilities of the states at the same k in subbands 1 and 2, respectively, and ρr(E2,k − E1,k) is the reduced density of states (DOS) between E1,k and E2,k, which is equal to the DOS of subbands 1 (ρ1) and 2 (ρ2) when they are parallel, ρr = ρ1 = ρ2 = me*/(πħ²). The Lorentzian lineshape in Eq.(23) should be much broader than the spread of the transition energies between the two parallel subbands, which can therefore be approximated as sharply centered at the subband separation at their energy minima, E12 = E2 − E1. The net rate then reduces to Eq.(24), where N1 and N2 are the total electron densities per unit area in subbands 1 and 2, respectively. The optical gain coefficient γ, which describes the increase of the EM field intensity I as γ = I⁻¹ dI/dz, can be defined as the power increase per unit volume divided by the intensity, which in turn can be expressed in terms of the net downward transition rate Eq.(24) using the momentum and dipole matrix element relation (Eq.(25)), where Lp is the length of the QW structure, equal to the length of one period in the case of a QCL. In order to relate the EM field intensity I propagating in-plane to the optical potential A polarized along z, a real expression for the potential A has to be used (Eq.(26)), where β is the in-plane propagation wave vector of the EM field. It is easy to see that only one of the two terms on the right side of Eq.(26) couples subbands 1 and 2 with E2 − E1 = ħω. Thus, the optical potential that participates in the transition matrix Eq.(19) is only half of its real amplitude, A = A0/2. Since the EM field intensity I is related to the optical potential amplitude A0 as I = ε0 c neff A0² ω²/2, Eq.(25) can be written as Eq.(27), where c is the speed of light in free space and neff is the effective refractive index of the QCL dielectric medium. The population inversion N2 − N1 > 0 is clearly necessary in order to achieve positive gain, which peaks at the frequency ω0 = E12/ħ with the value given by Eq.(28). For transitions between valence subbands with nonparallel dispersions and strong mixing between the HH, LH, and SO bands, we have to re-examine the intersubband transition rate. Consider the intersubband transition in Fig. 5 from the upper state |u⟩ in subband LH1 to the lower state |l⟩ in subband HH1. If the spread of the intersubband transition energies is wide compared to the homogeneous broadening, the Lorentzian lineshape in the net downward transition rate Eq.(23) can be approximated as a δ-function, yielding Eq.(29), where ρr(El − Eh) is the reduced DOS for the transition between subbands LH1 and HH1, fLH(El) and fHH(Eh) are the hole occupation probabilities of states with energies El and Eh in subbands LH1 and HH1, respectively, at the same in-plane wave vector k and separated by the photon energy ħω, and the optical transition matrix element between LH1 and HH1 takes the band mixing into account (Eq.(30)), where χn(l) and χn(h) are respectively the n-th components of the envelope-function vectors of subbands LH1 and HH1 as defined in Eq.(12), and mn* are the corresponding hole effective masses in the z-direction, with m1,4* = m0/(γ1 − 2γ2) for HH, m2,3* = m0/(γ1 + 2γ2) for LH, and m5,6* = m0/γ1 for SO.
The optical gain can then be expressed in terms of the momentum matrix elements Pn(lh) = ⟨χn(l)| jħ ∂/∂z |χn(h)⟩ as well as the dipole matrix elements zn(lh) = ⟨χn(l)| z |χn(h)⟩ between the same n-th components of the envelope-function vectors of the two valence subbands. In comparison with the optical gain Eq.(28) for the conduction subbands, we can see that it is not necessary to have total population inversion, Nl − Nh > 0, in order to have positive gain between the valence subbands. Instead, all we need is local population inversion, [fLH(El) − fHH(Eh)] > 0 evaluated at El − Eh = ħω0, in the region where the intersubband transition takes place (the states near |u⟩ and |l⟩ in Fig. 5).

3.3. Intersubband lifetimes

It has been established in Eqs.(27) and (28) that population inversion between the upper (2) and lower (1) subbands, N2 − N1 > 0, is necessary in order to obtain optical gain. But what determines the population inversion? This question is answered by analyzing the lifetimes of these subbands, which result from the various intersubband relaxation mechanisms, including carrier-phonon, carrier-carrier, impurity, and interface roughness scattering processes. Among them, phonon scattering is the dominant process, especially when the energy separation between the two subbands exceeds that of an optical phonon, in which case transitions from the upper to the lower subband are highly efficient through the emission of optical phonons. Unlike the optical transitions, these scattering processes do not necessarily occur as vertical transitions in k-space. In the case of phonon scattering, the conservation of in-plane momentum can be satisfied by a wide range of momenta of the involved phonons, as shown in Fig. 6(a), where both intersubband and intrasubband transitions due to phonon scattering are illustrated.

Figure 6. (a) Intersubband and intrasubband transitions due to electron-phonon scattering; (b) the 22→11 transition induced by electron-electron scattering.

Up to now, practically all approaches to developing Si-based QCLs are based on group-IV materials, mostly Si, Ge, SiGe alloys, and more recently, SiGeSn alloys. Unlike the polar III-V and II-VI semiconductors, group-IV materials are nonpolar. Carrier scattering by nonpolar optical phonons is much slower than that due to polar optical phonons [38]. Starting from the Fermi golden rule Eq.(15), the scattering rate for a carrier in subband 2 with in-plane wave vector k to subband 1 with k′ by a phonon with energy ħωQ and wave vector Q = q + qz ẑ can be expressed as an integral over all the participating phonon states (Eq.(32)), where Hep is the electron-phonon interaction matrix element; the carrier energies E1,k′ and E2,k are given by Eq.(1) for conduction subbands, but for valence subbands they need to be obtained by the k·p method described above. We will proceed with the following approximations: 1) all phonons are treated as bulk-like, neglecting the phonon confinement effect in QW structures; 2) the energies of acoustic phonons are negligible, ħωQ ≈ 0; and 3) the optical phonon energies are taken as a constant, ħωQ ≈ ħω0.
The matrix element of the carrier-phonon interaction for the different types of phonons can be written as [39,40]

|Hep|² = [Ξ² KB T / (2 cL Ω)] δq,±(k′−k) |G12(qz)|²   (acoustic phonons),
|Hep|² = [ħ D² / (2 ρ ω0 Ω)] δq,±(k′−k) [n(ω0) + 1/2 ∓ 1/2] |G12(qz)|²   (nonpolar optical phonons),   (33)

where the upper sign is for absorption and the lower for emission of one phonon, KB is the Boltzmann constant, Ω is the volume of the lattice mode cavity, cL is the elastic constant for the acoustic mode, ρ is the mass density, Ξ and D are the acoustic and optical deformation potentials, respectively, and n(ω0) is the number of optical phonons at temperature T. The wavefunction interference effect between conduction subbands is G12(qz) = ⟨φ1|e^(jqz z)|φ2⟩ (Eq.(34)), and between valence subbands it is the corresponding sum over the envelope-function components, G12(qz) = Σn ⟨χn(1)|e^(jqz z)|χn(2)⟩ (Eq.(35)). The Kronecker symbol δq,±(k′−k) in the matrix element Eq.(33) represents in-plane momentum conservation, k′ = k ± q. Since phonon modes have a density of states Ω/(2π)³, the participating phonon states in the integral Eq.(32) can be counted accordingly, where θ is the angle between k and q. For conduction subbands with the parabolic dispersion Eq.(1), the phonon scattering rate Eq.(32) can be evaluated analytically:

1/τ12 = [Ξ² KB T me* / (4π cL ħ³)] ∫|G12(qz)|² dqz   (acoustic),
1/τ12 = [D² me* (n(ω0) + 1/2 ∓ 1/2) / (4π ρ ħ² ω0)] ∫|G12(qz)|² dqz   (nonpolar optical).   (38)

But for valence subbands, where there is strong nonparabolicity, Eq.(32) can no longer be integrated analytically. However, if we take the wave vector of the initial state in subband 2 to be at the Γ-point, k = 0, then the phonon wave vector q = k′, and Eq.(38) can still be used to evaluate the phonon scattering rate between valence subbands by substituting the effective mass with some average effective mass of the final subband 1. The phonon scattering rate in Eq.(38) has been used to compare the lifetimes of two similar three-level systems, SiGe/Si and GaAs/AlGaAs, as shown in Fig. 7(a) [38]. The lifetime difference between the upper (3) and lower (2) subbands is calculated as a function of the transition energy E3 − E2, which is varied by changing the barrier width between the two QWs that host the two subbands. The main result in Fig. 7(b) is that the lifetimes in the SiGe system can be an order of magnitude longer than in the GaAs/AlGaAs system because of SiGe's lack of polar optical phonons. This property can potentially lead to a significantly reduced lasing threshold for the SiGe system. The sudden drops in the lifetimes are associated with the shifting of the subband energy separations E2 − E1 and E3 − E2 to either below or above the optical phonon energy.

Figure 7. (a) HH valence band diagram of one period of a SiGe/Si SL, with hole energy increasing in the upward direction; (b) comparison of the lifetime difference (τ3 − τ2) between the SiGe/Si and GaAs/AlGaAs SLs (in a similar three-level scheme) as a function of the transition energy (E3 − E2) [38].

Among the different phonon scattering processes (emission and absorption of acoustic and optical phonons), the emission of an optical phonon is by far the fastest. But in far-IR QCLs, where the subband energy separation is less than the optical phonon energy and the emission of an optical phonon is forbidden, phonon scattering may no longer be the dominant relaxation mechanism. Other scattering mechanisms then need to be taken into consideration, such as carrier-carrier [41], impurity [42], and interface roughness scattering [43], all of which are elastic processes. Carrier-carrier scattering is a two-carrier process that is particularly important when the carrier concentration is high, which increases the probability of two carriers interacting with each other.
There are many possible outcomes of this interaction, inducing both intersubband and intrasubband transitions. Among them, the 22→11 process, in which both carriers originally in subband 2 end up in subband 1, is the most efficient at inducing intersubband transitions (Fig. 6(b)). Intersubband transition times on the order of tens of ps have been observed experimentally for carrier densities of 10⁹~10¹¹/cm² in GaAs/AlGaAs QWs [44]. In QCLs, since doping is mostly introduced away from the active region where the optical transitions take place, impurity scattering does not seem to play a major role in determining the lifetimes of the laser subbands; however, its influence on carrier transport in the injection region can be rather important. Interface roughness depends strongly on the growth process; its impact on scattering is more significant in narrow QWs, particularly for transitions between two wavefunctions that are spread over MQWs spanning several interfaces.

4. Valence band SiGe QCLs

Up to now, all of the demonstrated QCLs are based on epitaxially grown III-V semiconductor heterostructures, such as GaInAs/AlInAs, GaAs/AlGaAs, and InAs/AlSb, using electron subbands in the conduction band. With the promise of circumventing the indirectness of the Si band gap, a SiGe/Si laser based on intersubband transitions was first proposed by Sun et al. in 1995 [38], where a comparative study was performed between the SiGe/Si and GaAs/AlGaAs systems. Since then there has been a series of theoretical and experimental investigations aimed at producing Si-based QCLs. A natural choice of material system is SiGe, because Si and Ge are both group-IV elements, and SiGe alloys have been routinely deposited on Si to produce heterojunction bipolar transistors or strain-inducing layers for CMOS transistors [45]. While QCLs based on SiGe alloys could be monolithically integrated on Si if successfully developed, there are significant challenges associated with this material system. First, there is a 4% lattice mismatch between Si and Ge. Layers of Si1-xGex alloys deposited on Si substrates are strained, which can be rather significant in QCLs because a working structure typically consists of at least hundreds of layers with a total thickness that easily exceeds the critical thickness, above which the built-in strain simply relaxes by forming defects in the structure. In dealing with the issue of strain in SiGe/Si QC structures, one popular approach is strain-balanced growth, where compressively strained Si1-xGex and tensile strained Si are alternately stacked on a relaxed Si1-yGey buffer (y < x) deposited on a Si substrate, the buffer composition being chosen so that the strains in the Si1-xGex and Si layers compensate each other and the entire structure maintains a neutral strain profile [46,47]. Strain-balanced growth has effectively eliminated the limitation of critical thickness and has produced high-quality SiGe/Si structures consisting of nearly 5000 layers (15 μm) by chemical vapor deposition [48]. Second, the band offsets between compressively strained SiGe and tensile strained Si, or between SiGe layers of different alloy compositions, are such that the conduction band QWs are shallow, and nearly all of the band offset is in the valence band. Practically all investigations of SiGe QCLs have therefore focused on intersubband transitions in the valence band.
But the valence subband structure is much more complex than the conduction subbands because of the mixing between the HH, LH, and SO bands. Their associated subbands are closely intertwined in energy, making the design of valence QCLs extremely challenging. Third, any valence QCL design in general has to involve HH subbands, since they occupy lower energies relative to LH subbands because of their large effective mass. In SiGe, the HH effective mass is high (~0.2m0), which leads to small IST oscillator strengths between the laser states and to the poor carrier transport behavior associated with low hole mobilities. The challenge presented by valence-band mixing also creates an opportunity to engineer desirable subband dispersions such that total population inversion between the subbands becomes unnecessary, in a way analogous to the situation in conventional band-to-band lasers discussed in Section II. It was reported in QCLs that population inversion was established only locally in k-space, in the large k-vector region of the conduction subbands, because the interactions between the subbands produced nonparallel in-plane dispersions [49]. In comparison with the conduction-band nonparabolicity, this effect is known to be much stronger in the valence band [31]. As a matter of fact, in the valence band of most diamond and zinc-blende semiconductors, the LH and HH subbands usually anti-cross, and near the point of anti-crossing the LH subband in-plane dispersion becomes electron-like. Thus, an earlier design effectively tailored the dispersions of two valence subbands in a GaAs/AlGaAs QW (Fig. 5) to resemble those of the conduction and valence bands, in which one of the subbands is electron-like and the other hole-like; i.e., one of the subbands has its effective mass inverted [35]. If we now designate states near the Γ-point of subband LH1 as the intermediate states |i⟩, states near the valley (the inverted-effective-mass region) of subband LH1 as the upper laser states |u⟩, states in subband HH1 vertically below the valley of subband LH1 as the lower laser states |l⟩, and states near the Γ-point of subband HH1 as the ground states |g⟩ (counting the hole energy downward in Fig. 5), we can see that the situation closely resembles that of a conventional band-to-band semiconductor laser. The upper and lower laser states can now be populated and depopulated through fast intrasubband processes, while the lifetime of the upper laser states is determined by a much slower intersubband process between subbands LH1 and HH1. Such a large lifetime difference between the upper and lower laser states is certainly favorable for achieving population inversion between them. The inverted mass approach was later applied to the SiGe system [50,51]. Two slightly different schemes were developed: one utilized an inverted LH effective mass [50], the other an inverted HH mass [51]. In both cases, the effective mass inversion is the result of strong interaction between the valence subbands. The inverted-effective-mass feature requires the coupled subbands to be closely spaced in energy, typically less than all the optical phonon energies in the SiGe material system (37 meV for the Ge-Ge mode, 64 meV for the Si-Si mode, and 51 meV for the Si-Ge mode [52]); this suppresses the nonradiative intersubband transitions due to optical phonon scattering, but it also limits the optical transitions to the THz regime.
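The role of this lifetime asymmetry can be made explicit with a minimal steady-state rate-equation sketch; the notation below is mine, not taken from [50,51]. Let R be the pumping rate into the upper laser states, τu the total upper-state lifetime, τul the upper-to-lower scattering time, and τl the lower-state lifetime:

```latex
\frac{dN_u}{dt} = R - \frac{N_u}{\tau_u}, \qquad
\frac{dN_l}{dt} = \frac{N_u}{\tau_{ul}} - \frac{N_l}{\tau_l}
\;\;\Longrightarrow\;\;
N_u - N_l = R\,\tau_u\left(1 - \frac{\tau_l}{\tau_{ul}}\right)
\quad \text{(steady state)}.
```

Inversion thus requires τl < τul: the lower laser states must empty faster than they are filled, which is exactly what the fast intrasubband depopulation and the slow LH1-to-HH1 intersubband relaxation provide.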
The structures under investigation were strain balanced, with compressively strained Si1-xGex QW layers and tensile strained Si barrier layers deposited on a relaxed Si1-yGey buffer layer (0 < y < x) on Si. The in-plane dispersions of the inverted LH scheme are shown in Fig. 8 for a 90 Å/50 Å Si0.7Ge0.3/Si superlattice (SL). The three lowest subbands are shown. The numbers 1, 2, 3, and 4 indicate how this inverted-mass intersubband laser mimics the operation of a conventional band-to-band laser.

Figure 8. Dispersions of subbands HH1, LH1, and HH2 in a 90 Å/50 Å Si0.7Ge0.3/Si SL strain balanced on a Si0.81Ge0.19 buffer, obtained with a 6×6 valence band matrix taking into account the HH, LH, and SO interactions and the strain effect [50].

The lifetime of the upper laser state 3 is long because the intersubband transition energy at 6 THz (~50 μm) is below that of the optical phonons, allowing only the much weaker acoustic phonon scattering between the two subbands. Calculations have shown that optical gain in excess of 150/cm can be achieved without total population inversion being established between the LH and HH subbands. The inverted LH effective mass approach utilizes optical transitions between the LH and HH subbands. It can be argued from the component overlap of the envelope functions in Eq.(30) that the optical transition matrix element between subbands of different types is always smaller than that between subbands of the same type. We therefore tried to engineer the same inverted effective mass feature between two HH subbands. The challenge is to lift the LH subband above the HH2 subband. Once again, a strain-balanced SL structure is considered, but with different SiGe alloy compositions and layer thicknesses [51]. The band structure of a 90 Å/35 Å Si0.8Ge0.2/Si SL under an electric bias of 30 kV/cm is shown in Fig. 9(a), where each QW has two active doublets formed by bringing the HH1 and HH2 subbands of neighboring QWs into resonance under the bias. There is a 3 meV energy split within each doublet. The resulting in-plane dispersions of the two doublets are shown in Fig. 9(b). Simulations showed that an optical gain of 450/cm at 7.3 THz can be achieved at a pumping current density of 1.5 kA/cm² at 77 K.

Figure 9. (a) Band diagram of the Si0.8Ge0.2/Si SL under an electric bias of 30 kV/cm. The labels (n−1, n, n+1, …) represent the QWs in which the wave functions are localized [51]. (b) Dispersions of the four levels (two doublets) in a QW.

Electroluminescence (EL) from a SiGe/Si quantum cascade emitter was first demonstrated using HH-to-HH transitions in the mid-IR range in 2000 [53]. Since then, several groups have observed EL from the same material system with different structures. The EL has been attributed to various optical transitions, including HH-to-HH [54], LH-to-HH [55], and HH-to-LH [56], with emission spectra ranging from the mid-IR to the THz (8~250 μm). But lasing has not been observed. Improvements to the QCL design had to be made. One of the most successful III-V QCL designs has been the bound-to-continuum approach, where the lower laser state, sitting at the top of a miniband, is delocalized over several QWs while the upper laser state is a bound state in the minigap, as illustrated in Fig. 10 [57,58]. Electrons that are injected into the bound upper state 2 are prevented from escaping the bound state by the minigap, and then undergo lasing transitions to the lower state 1.
One of the most successful III-V QCL designs has been the bound-to-continuum approach, in which the lower laser state, sitting at the top of a miniband, is delocalized over several QWs, while the upper laser state is a bound state in the minigap, as illustrated in Fig. 10 [57,58]. Electrons injected into the bound upper state 2 are prevented from escaping by the minigap and undergo lasing transitions to the lower state 1. The depopulation of the lower state 1 is accelerated by the efficient miniband carrier transport. This design has led to improved performance in terms of operating temperature as well as output power for III-V QCLs. A similar bound-to-continuum design has been implemented in SiGe with both the bound state and the continuum formed by HH states, once again showing only EL with no lasing [59]. It is believed that in this structure LH states are intermixed with the HH states. Although the impact of this intermixing has not been fully understood, these LH states can in principle provide additional channels for carriers to relax from the upper laser state, reducing its lifetime. An improved version was sought by using strain to lift the LH states above all the HH states involved in the bound state and the continuum; as a result, clear intersubband TM-polarized EL was observed, suggesting that the LH states had indeed been pushed away from the HH radiative transitions [60].

Figure 10. Illustration of two periods of a bound-to-continuum QCL. The lasing transition occurs between an isolated bound upper state 2 (formed in the minigap) and a delocalized lower state 1 (sitting at the top of a miniband).

Nearly a decade has passed since the first experimental demonstration of EL from a SiGe/Si quantum cascade emitter [53]. During this period, III-V QCLs have improved dramatically, allowing commercialization and system integration for various applications; there are, however, still no SiGe QCLs. The seemingly inherent difficulties of the valence QCL approach have propelled some researchers back into the conduction band to look for solutions.

5. Conduction band Si-based QCLs

Before QCLs can be designed using conduction subbands, there must be sufficient conduction band offset. Contrary to the situation in compressively strained Si1-xGex, tensile strained Si1-xGex can have a larger conduction band offset, but the conduction band minima occur at the two Δ2-valleys, whose longitudinal effective mass along the growth direction is very heavy (ml ~ 0.9m0), resulting in small oscillator strength and poor transport behavior, possibly even worse than that of the HHs. Any approach to developing Si-based QCLs based on transitions between conduction subbands must therefore go beyond the conventional choices of material system and growth technique. Prospects for such Si-based QCLs have been investigated theoretically. One approach stays with the Si-rich SiGe/Si material system but rotates the growth from the conventional (100) direction to the (111) orientation [61]. The conduction band offset was calculated to be 160 meV at the conduction band minima, which consist of six degenerate Δ-valleys, sufficient for designing far-IR QCLs. The effective mass along the (111) direction, obtained by averaging the longitudinal and transverse effective masses over the tilted Δ-valleys, is ~0.26m0, lower than the longitudinal ml ~ 0.9m0 in the (100) structure. Another design, relying on the Ge-rich Ge/SiGe material system, has been proposed to construct conduction band QCLs using compressively strained Ge QWs and tensile strained Si0.22Ge0.78 alloy barriers grown on a relaxed (100) Si1-yGey buffer [62]. The intersubband transitions in this design occur within the L-valleys, which are the conduction band minima of the Ge QWs; their effective mass along the (100) direction has been determined to be ~0.12m0.
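Both quoted masses follow from projecting the anisotropic valley mass onto the growth axis. A sketch of that projection (Python), using the standard inverse-mass average 1/mz = cos²θ/ml + sin²θ/mt with textbook Si valley masses as assumed inputs and the Ge values given later in the text, reproduces the ~0.26m0 and ~0.12m0 numbers:

```python
import numpy as np

def m_growth(m_l, m_t, valley_axis, growth_axis):
    """Effective mass along the growth axis for an ellipsoidal valley:
    1/m_z = cos^2(theta)/m_l + sin^2(theta)/m_t, where theta is the angle
    between the valley's longitudinal axis and the growth direction."""
    v = np.asarray(valley_axis, float)
    g = np.asarray(growth_axis, float)
    cos2 = (v @ g) ** 2 / ((v @ v) * (g @ g))
    return 1.0 / (cos2 / m_l + (1.0 - cos2) / m_t)

# Si Delta-valley (along <100>) seen from (111) growth.
# Bulk Si masses m_l ~ 0.92 m0, m_t ~ 0.19 m0 (standard textbook values).
print("Si Delta, [111] growth:", round(m_growth(0.92, 0.19, [1, 0, 0], [1, 1, 1]), 2), "m0")

# Ge L-valley (along <111>) seen from (001) growth, with the Ge values
# quoted in the text: m_t = 0.08 m0, m_l = 1.60 m0.
print("Ge L, [001] growth:    ", round(m_growth(1.60, 0.08, [1, 1, 1], [0, 0, 1]), 2), "m0")
```

For (111)-grown Si all six Δ-valleys make the same angle with the growth axis, so they share this projected mass, consistent with the ~0.26m0 figure above.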
Since Si1-xGex alloys with x < 0.85 are similar to Si in that the conduction band minima appear at the Δ-valleys, the conduction band lineup in the Ge/Si0.22Ge0.78 structure is rather complex, with conduction band minima in Ge at the L-valleys but in Si0.22Ge0.78 at the Δ2-valleys along the (100) growth direction. Although the band offset at the L-valleys is estimated to be as high as 138 meV, the overall band offset between the absolute conduction band minima of Ge and Si0.22Ge0.78 is only 41 meV. Although quantum confinement helps to lift the electron subbands at the Δ2-valleys, the two Δ2-valleys are inevitably entangled with the L-valleys in the conduction band, leading to design complexity and potentially creating additional nonradiative decay channels for the upper laser state.

Recently, a new group-IV material system that expands beyond the Si1-xGex alloys has been successfully demonstrated with the incorporation of Sn. These ternary Ge1-x-ySixSny alloys have been studied for the possibility of forming direct band gap semiconductors [63-66]. Since the first successful growth of this alloy [67], device-quality epilayers with a wide range of alloy contents have been achieved [68,69]. The incorporation of Sn provides the opportunity to engineer the strain and the band structure separately, since the Si (x) and Sn (y) compositions can be varied independently. Certain alloy compositions of this material system offer three advantages: (1) the possibility of a "cleaner" conduction band lineup in which the L-valleys of both well and barrier sit below all other valleys (Γ, Δ); (2) an electron effective mass along the (001) growth direction that is much lower than the HH mass; and (3) a strain-free structure that is lattice matched to Ge. In addition, recent advances in the direct growth of Ge layers on Si provide a relaxed matching buffer layer on a Si substrate upon which the strain-free Ge/Ge1-x-ySixSny structure is grown [70]. Based on this material system, a strain-free QCL operating in the conduction L-valleys was proposed [71].

Since band offsets between ternary Sn-containing alloys and Si or Ge are not known experimentally, we have calculated the conduction band minima for a lattice-matched heterostructure consisting of Ge and a ternary Ge1-x-ySixSny based on Jaros' band offset theory [72], which is in good agreement with experiment for many heterojunction systems. For example, this theory predicts an average valence band offset ΔEv,av = 0.48 eV for a Ge/Si heterostructure (higher energy on the Ge side), close to the accepted value of ΔEv,av = 0.5 eV. The basic ingredients of our calculation are the average (over the HH, LH, and SO bands) valence band offset between the two materials and the compositional dependence of the band structure of the ternary alloy. For the Ge/α-Sn interface, Jaros' theory predicts ΔEv,av = 0.69 eV (higher energy on the Sn side). For the Ge1-x-ySixSny/Ge interface we have used the customary approach for alloy semiconductors, interpolating the average valence band offsets of the elementary heterojunctions Ge/Si and Ge/α-Sn. Thus we used (in eV)

ΔEv,av(x, y) = 0.69y − 0.48x.

Once the average valence band offset is determined, the energies of the individual conduction band edges in the Ge1-x-ySixSny alloy can be calculated relative to those in Ge from the compositional dependence of the spin-orbit splitting of the top valence band states and the compositional dependence of the energy separations between those conduction band edges and the top of the valence band in the alloy [73].
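A minimal sketch of this bookkeeping (Python): the linear form of ΔEv,av follows directly from the interpolation just described, while the function name and the example composition are merely illustrative.

```python
def delta_Ev_av(x, y):
    """Average valence band offset (eV) of Ge(1-x-y)Si(x)Sn(y) relative
    to Ge, interpolated from the elementary offsets quoted in the text:
    Ge/Si: 0.48 eV higher on the Ge side (Si pulls the average VB down);
    Ge/alpha-Sn: 0.69 eV higher on the Sn side (Sn pushes it up)."""
    return 0.69 * y - 0.48 * x

# Composition used later in the chapter: Ge0.76 Si0.19 Sn0.05 barriers.
x, y = 0.19, 0.05
print(f"dEv,av(Ge0.76Si0.19Sn0.05 vs Ge) = {delta_Ev_av(x, y) * 1e3:.0f} meV")
```

The negative result (about −57 meV) simply means that the alloy's average valence band sits below that of Ge; the conduction-band lineup of Fig. 11 then follows once the alloy band-edge energies at L, Γ, and X are added on top.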
We have assumed that all required alloy energies can be interpolated between the known values for Si, Ge, and α-Sn as

E(Ge1-x-ySixSny) = (1 − x − y)EGe + xESi + yESn − bGeSi x(1 − x − y) − bGeSn y(1 − x − y) − bSiSn xy.

The bowing parameters bGeSi, bGeSn, and bSiSn have been discussed in Refs. [74] and [75]. Finally, for the indirect conduction band minimum near the X-point, Weber and Alonso find (in eV)

EX(x) = 0.8941 + 0.0421x + 0.1691x²

for Ge1-xSix alloys [76]. On the other hand, the empirical pseudopotential calculations of Chelikowsky and Cohen place this minimum at 0.90 eV in α-Sn, virtually the same as its value in pure Ge [77]. We thus assume that the position of this minimum in ternary Ge1-x-ySixSny alloys is independent of the Sn concentration y.

The resulting conduction band minima are shown in Fig. 11 for Sn concentrations 0 ≤ y ≤ 0.1. The Si concentration x was calculated using Vegard's law such that the ternary Ge1-x-ySixSny is exactly lattice matched to Ge. It can be seen from Fig. 11 that a conduction-band offset of 150 meV at the L-valleys can be obtained between lattice-matched Ge and the Ge0.76Si0.19Sn0.05 alloy, while all other conduction-band valleys (Γ, X, etc.) lie above the L-valley band edge of the Ge0.76Si0.19Sn0.05 barrier. This band alignment identifies a desirable alloy composition from which a QCL operating at the L-valleys can be designed, using Ge as QWs and Ge0.76Si0.19Sn0.05 as barriers, without the complexity arising from other energy valleys.

Figure 11. Conduction band minima at the L, Γ, and X points of Ge1-x-ySixSny lattice matched to Ge [71].

Figure 12 shows the QCL structure based upon Ge/Ge0.76Si0.19Sn0.05 QWs. Only the L-valley conduction-band lineup is shown in the potential diagram, under an applied electric field of 10 kV/cm. In order to solve the Schrödinger equation for the subbands and their associated envelope functions, it is necessary to determine the effective mass mz* along the (001) growth direction (z) within the constant-energy ellipsoids at the L-valleys, which lie along the (111) directions tilted with respect to the growth axis. Using the L-valley principal transverse effective mass mt* = 0.08m0 and longitudinal effective mass ml* = 1.60m0 for Ge, we obtain mz* = [2/(3mt*) + 1/(3ml*)]⁻¹ = 0.12m0. The squared magnitudes of all envelope functions are plotted at the energy positions of their associated subbands. As shown in Fig. 12, each period of the QCL has an active region for lasing emission and an injector region for carrier transport, separated by a 30 Å barrier. The active region is constructed with 3 coupled Ge QWs that give rise to three subbands marked 1, 2, and 3. The lasing transition, at a wavelength of 49 µm, is between the upper laser state 3 and the lower laser state 2. The injector region consists of 4 Ge QWs of decreasing well widths, all separated by 20 Å Ge0.76Si0.19Sn0.05 barriers. The depopulation of the lower state 2 proceeds through scattering to state 1 and to the miniband formed downstream in the injector region. These scattering processes are rather fast because of the strong overlap between the involved states. Another miniband, formed of quasi-bound states in the injector region, is situated 45 meV above the upper laser state 3, effectively preventing the escape of electrons from the upper laser state 3 into the injector region.

Figure 12. L-valley conduction band profile and squared envelope functions under an electric field of 10 kV/cm. Layer thicknesses in angstroms are marked with bold numbers for the Ge QWs and regular numbers for the GeSiSn barriers. The arrow marks the injection barrier [71].
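The lattice-matching condition itself is a one-line application of Vegard's law. In the sketch below (Python), the cubic lattice constants of Si, Ge, and α-Sn are standard room-temperature values assumed here rather than taken from the chapter; solving a(x, y) = a(Ge) for x reproduces, to within rounding, the Ge0.76Si0.19Sn0.05 composition quoted above for y = 0.05.

```python
# Vegard's law for the ternary: a(x, y) = (1-x-y) a_Ge + x a_Si + y a_Sn.
# Setting a(x, y) = a_Ge gives the Si content that lattice-matches Ge:
#   x = y * (a_Sn - a_Ge) / (a_Ge - a_Si)
A_SI, A_GE, A_SN = 5.431, 5.658, 6.489   # angstroms; standard values (assumed)

def si_content_matched_to_ge(y):
    """Si fraction x making Ge(1-x-y)Si(x)Sn(y) lattice-matched to Ge."""
    return y * (A_SN - A_GE) / (A_GE - A_SI)

for y in (0.02, 0.05, 0.10):
    x = si_content_matched_to_ge(y)
    print(f"y = {y:.2f} -> x = {x:.3f} (Ge{1 - x - y:.2f}Si{x:.2f}Sn{y:.2f})")
```

Because each Sn atom dilates the lattice about 3.7 times as strongly as a Si atom contracts it, roughly 3.7 Si atoms are needed per Sn atom to stay matched to Ge, which is why the usable compositions trace the single line plotted in Fig. 11.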
The nonradiative transition rates between different subbands in such a low-doped, nonpolar material system with low injection current should be dominated by deformation-potential scattering of nonpolar optical and acoustic phonons. For this Ge-rich structure, we have used bulk-Ge phonons to calculate the scattering rates, yielding the lifetime τ3 of the upper laser state and τ2 of the lower laser state, as well as the 3→2 scattering time τ32 [78]. The results obtained from Eq. (38) are shown in Fig. 13 as a function of operating temperature. These lifetimes are at least one order of magnitude longer than those of III-V QCLs owing to the nonpolar nature of GeSiSn alloys. The necessary condition for population inversion, τ32 > τ2, is satisfied throughout the temperature range.

Figure 13. Upper state lifetime τ3, lower state lifetime τ2, and the 3→2 scattering time τ32 as a function of temperature.

Using these predetermined lifetimes, the population rate equations under current injection are

∂N3/∂t = ηJ/e − (N3 − N̄3)/τ3,
∂N2/∂t = (N3 − N̄3)/τ32 − (N2 − N̄2)/τ2,

where Ni (i = 2, 3) is the area carrier density per period in subband i under the injected current density J with injection efficiency η, and N̄i is the area carrier density per period due to thermal population. Solving the rate equations at steady state yields the population inversion

N3 − N2 = τ3(1 − τ2/τ32)ηJ/e − (N̄2 − N̄3),

which can then be used to evaluate the optical gain of the TM-polarized mode following Eq. (28) at the lasing transition energy of ħω0 ≈ 25 meV as

γ(ω0) = [2e²m0²ω0|z32|² / (ε0 c neff mz*² Γ Lp)] [τ3(1 − τ2/τ32)ηJ/e − (N̄2 − N̄3)].

For the QCL structure in Fig. 12, the following parameters are used: index of refraction neff = 3.97, lasing transition FWHM Γ = 10 meV, length of one period of the QCL Lp = 532 Å, area doping density per period of 10¹⁰ cm⁻², and unit injection efficiency η = 1.
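The steady-state algebra above is easy to sanity-check numerically. In this sketch (Python), the lifetimes and thermal populations are illustrative placeholders (the Fig. 13 curves are not tabulated here), chosen only so that τ32 > τ2 holds; the closed-form inversion is compared against a brute-force time integration of the two rate equations.

```python
E = 1.602176634e-19      # elementary charge (C)

# Illustrative inputs (placeholders, not the Fig. 13 values):
tau3, tau2, tau32 = 30e-12, 2e-12, 40e-12    # lifetimes (s)
N3bar, N2bar = 1e12, 3e12                    # thermal populations (m^-2)
eta, J = 1.0, 1.0e3 * 1e4                    # unit efficiency; 1 kA/cm^2 in A/m^2

# Closed-form steady-state inversion from the text:
dN_closed = tau3 * (1 - tau2 / tau32) * eta * J / E - (N2bar - N3bar)

# Brute-force integration of dN3/dt and dN2/dt to steady state:
N3, N2, dt = N3bar, N2bar, 1e-14
for _ in range(200000):                      # 2 ns total, far beyond tau3
    dN3 = eta * J / E - (N3 - N3bar) / tau3
    dN2 = (N3 - N3bar) / tau32 - (N2 - N2bar) / tau2
    N3 += dN3 * dt
    N2 += dN2 * dt

print(f"closed form: {dN_closed:.3e} m^-2")
print(f"integrated : {N3 - N2:.3e} m^-2")
```

The two numbers agree, and the structure of the closed form makes the design logic explicit: the inversion grows linearly with the drive ηJ/e, scaled by the upper-state lifetime and reduced by the ratio τ2/τ32, which is why the long nonpolar-phonon lifetimes matter.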
Since the relatively small conduction band offset limits the lasing wavelength to the far-IR or THz regime (roughly 30 µm and beyond), the waveguide design can no longer rely on the conventional dielectric waveguides used in laser diodes and mid-IR QCLs. This is mainly because the thickness required for a dielectric waveguide would exceed what can be realized with the epitaxial techniques employed to grow the laser structures. One solution is to place the QCL active structure between two metal layers to form the so-called plasmon waveguide [79,80]. While the deposition of the top metal is trivial, placing the bottom metal requires many processing steps such as substrate removal, metal deposition, and subsequent wafer bonding. The QCL waveguides are typically patterned into ridges as shown in Fig. 14.

Figure 14. Schematic of a ridge plasmon waveguide with the Ge/GeSiSn QCL sandwiched between two metal layers.

This plasmon waveguide supports only the TM-polarized EM mode, which is highly confined within the QCL region. We assume a Drude model for the metal dielectric function, εM = 1 − ωp²/(ω² + jγmω), where ωp is the metal plasma frequency and γm is the metal loss (ħωp = 8.11 eV and ħγm = 65.8 meV for Au [81]); the dielectric constant of the Ge-rich Ge/Ge0.76Si0.19Sn0.05 QCL active region is εD = neff². Consider an EM wave propagating along the x-direction as shown in Fig. 14; its electric field can be written as

E = E0 ε cosh(kd/2) (jβẑ + qx̂) e^(−q(z−d/2)) e^(j(βx−ωt)) for z ≥ d/2,
E = E0 [jβ cosh(kz) ẑ − k sinh(kz) x̂] e^(j(βx−ωt)) for |z| ≤ d/2,
E = E0 ε cosh(kd/2) (jβẑ − qx̂) e^(q(z+d/2)) e^(j(βx−ωt)) for z ≤ −d/2,

where E0 is a constant, ε = εM/εD, and kD = √εD ω/c. The complex propagation constant β = β′ + jβ″ follows the relations β² − k² = kD² and β² − q² = εkD². It is easy to see that the continuity of the normal component of the electric displacement is satisfied at the boundaries z = ±d/2; the requirement of continuity of the tangential electric field leads to

k²[ε² tanh²(kd/2) − 1] = kD²(1 − ε),

which determines the TM modes that can propagate in this plasmon waveguide. The waveguide loss αw is dominated by the metal loss and can be determined from the imaginary part of the propagation constant as αw = 2β″. As a superposition of two surface plasmon modes bound to the two metal-dielectric interfaces at z = ±d/2, this TM mode decays exponentially into the metal, providing an excellent optical confinement factor, defined as Γw = ∫ from −d/2 to d/2 of |E|²dz divided by ∫ over all z of |E|²dz. We have simulated the TM-polarized mode in a QCL structure of 40 periods (d = 2.13 µm) confined by a double-Au-plasmon waveguide and obtained near-unity optical confinement Γw ≈ 1.0 and waveguide loss αw = 110/cm. Assuming a mirror loss αm = 10/cm for a typical cavity length of 1 mm, the threshold current density Jth can be calculated from the balancing relationship Γwγth = αw + αm, where γth is the optical gain of Eq. (44) evaluated at threshold. The result is shown in Fig. 15: Jth ranges from 22 A/cm² at 5 K to 550 A/cm² at 300 K. These threshold values are lower than those of III-V QCLs as a result of the longer scattering times due to nonpolar optical phonons. While GeSiSn epilayers with alloy compositions suitable for this QCL design have been grown with MOCVD [68,69], implementation of Ge/GeSiSn QCLs is currently challenged by the structural growth of the large number of hetero-layers with very fine control of layer thicknesses and alloy compositions. Nevertheless, progress is being made towards experimental demonstration.

Figure 15. Simulated threshold current density of the Ge/GeSiSn QCL as a function of temperature.
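The transcendental TM dispersion relation above can be solved with a few lines of complex root finding. The sketch below (Python) uses the Drude parameters and core thickness quoted in the text together with a small-argument initial guess; it is a bare-bones Newton iteration rather than the full mode solver behind Fig. 15, but it should land close to the quoted ~110/cm waveguide loss.

```python
import numpy as np

HBAR_EV = 6.582119569e-16        # hbar (eV*s)
C = 2.99792458e8                 # speed of light (m/s)

E_ph = 25e-3                                   # photon energy (eV), ~49 um
omega = E_ph / HBAR_EV
eps_D = 3.97**2                                # active region: eps_D = n_eff^2
wp, gm = 8.11 / HBAR_EV, 65.8e-3 / HBAR_EV     # Au Drude parameters from the text
eps_M = 1 - wp**2 / (omega**2 + 1j * gm * omega)

eps = eps_M / eps_D                            # ratio entering the dispersion relation
kD = np.sqrt(eps_D) * omega / C                # dielectric wavevector (1/m)
d = 2.13e-6                                    # 40-period core thickness (m)

def f(k):
    """Dispersion relation: k^2 [eps^2 tanh^2(kd/2) - 1] - kD^2 (1 - eps)."""
    return k**2 * (eps**2 * np.tanh(k * d / 2)**2 - 1) - kD**2 * (1 - eps)

# Small-argument guess: tanh(kd/2) ~ kd/2 with |eps| >> 1 gives a quartic in k.
k = (-4 * kD**2 / (eps * d**2))**0.25
for _ in range(60):                            # complex Newton iteration
    h = k * 1e-6
    k -= f(k) * 2 * h / (f(k + h) - f(k - h))

beta = np.sqrt(kD**2 + k**2)                   # complex propagation constant
alpha_w = 2 * beta.imag                        # waveguide loss (1/m)
print(f"waveguide loss ~ {alpha_w / 100:.0f} /cm")

# Threshold gain from the balance Gamma_w * gamma_th = alpha_w + alpha_m,
# with near-unity confinement and the 10/cm mirror loss quoted in the text.
gamma_th = (alpha_w / 100 + 10.0) / 1.0
print(f"threshold gain ~ {gamma_th:.0f} /cm")
```

With αw near 110/cm and αm = 10/cm, the gain expression of the previous section must deliver roughly 120/cm at the operating temperature; solving that balance for J at each temperature is what produces the Jth(T) curve of Fig. 15.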
6. Summary

Silicon-based lasers have long been sought after for the possibility of monolithic integration of photonics with high-speed Si electronics. Many parallel approaches are currently being pursued toward this goal. Among them, Si nanocrystals and Er-doped Si have been investigated rather extensively; while EL has been demonstrated, lasing has not been observed. The only reported lasing in Si so far has been achieved using stimulated Raman scattering, which requires optical pumping at very high intensity on a device of large scale, impractical for integration with Si electronics. The QCLs that have been successfully developed in III-V semiconductors offer an important alternative for the development of Si-based lasers. The salient feature of QCLs is that the lasing transitions take place between subbands within the conduction band without crossing the band gap; such a scheme makes the indirect nature of the Si band gap irrelevant. To put the QCL designs in context, some theoretical background underlying the basic operating principles has been introduced here. In particular, subband formation and energy dispersion in semiconductor QWs are described in the framework of envelope functions within the effective-mass approximation for both the conduction and valence bands, taking into account the mixing between the HH, LH, and SO bands. Optical gain based on ISTs is derived, and intersubband lifetimes are discussed with a more detailed treatment of carrier-phonon scattering.

The development of Si-based QCLs has been primarily focused on ISTs between valence subbands in the Si-rich SiGe/Si material system, which has been routinely used in CMOS-compatible processes. There are two reasons for using holes instead of electrons. One is that compressively strained Si1-xGex with tensile strained Si grown on a relaxed Si1-yGey buffer has a very small conduction band offset: the QWs are too shallow to allow for elaborate QCL designs. Tensile strained Si1-xGex, on the other hand, can have a larger conduction band offset, but the conduction band minima occur at the two Δ2-valleys, whose longitudinal effective mass along the growth direction is heavy (ml ~ 0.9m0), resulting in small oscillator strength and poor transport behavior such as reduced tunneling probabilities. It is therefore generally believed that SiGe QCLs have to be pursued within the valence band as p-type devices. But the valence band presents challenges from several perspectives. First, the strong mixing of the HH, LH, and SO bands makes QCL design exceedingly cumbersome, although the strong nonparabolicity of the valence subbands offers opportunities for schemes such as the inverted effective mass, in which total population inversion between subbands may not be necessary. Second, there is a great deal of uncertainty in various material parameters of the SiGe alloy; approximations often have to be made by linearly interpolating between the parameters of Si and Ge, so the accuracy of the designs carries a corresponding degree of ambiguity. Third, any valence QCL has no choice but to deal with HH subbands, whose large effective mass hinders carrier injection efficiency and leads to small IST oscillator strength between the laser states. Fourth, for any band offset significant enough to implement QCLs, the lattice-mismatch-induced strain in the SiGe QWs and Si barriers is substantial even in strain-balanced structures, which presents challenges in structural growth and device processing. While EL was demonstrated from a valence-band SiGe/Si quantum cascade emitter nearly a decade ago, lasing remains elusive.

Recently, several ideas for developing Si-based conduction-band QCLs have emerged to circumvent the hurdles of the SiGe/Si valence-band approach. The proposals offer ways to increase the conduction band offset and to reduce the effective mass along the growth direction. One scheme proposes to orient the structural growth along the (111) direction; another relies on ISTs in the L-valleys of the conduction band in the Ge-rich Ge/SiGe material system. The former has accomplished more in increasing the conduction-band offset, the latter in reducing the effective mass. A third approach, which expands the material system beyond SiGe to GeSiSn, has been discussed in detail. A Ge/Ge0.76Si0.19Sn0.05 QCL that operates at the L-valleys of the conduction band was designed. According to our estimate of the band lineup, this particular alloy composition gives a "clean" conduction band offset of 150 meV at the L-valleys, with all other energy valleys conveniently out of the way. All QCL layers are lattice matched to a Ge buffer layer on a Si substrate, and the entire structure is therefore strain free. The electron effective mass along the growth direction is much lighter than that of the heavy holes, bringing a significant improvement in tunneling rates and oscillator strengths. The lasing wavelength of this design is 49 µm; with different GeSiSn alloy compositions that are lattice matched to Ge, QCLs can be tuned to lase at other desired wavelengths.
Lifetimes determined from the deformation-potential scattering of nonpolar optical and acoustic phonons are at least an order of magnitude longer than those in III-V QCLs with polar optical phonons, leading to a reduction in threshold current density and the possibility of room-temperature operation. While there are considerable challenges in the material growth of this QCL design, advances in the fine control of structural parameters, including layer thicknesses and alloy compositions, are moving towards the implementation of conduction-band QCLs in the GeSiSn system.

When are we going to realize Si-based lasers that can be integrated with Si electronics? Clearly, breakthroughs in material science and device innovation are necessary before that happens, but with the variety of approaches being pursued, driven by the potential payoff in commercialization, the prospect is promising.